HandBooks Professional Java-C-Scrip-SQL part 126

Some Random Musings

Before we try to optimize our search, let us define some terms. There are two basic categories of storage devices, distinguished by the access they allow to individual records. The first type is sequential access; in order to read record 1000 from a sequential device, we must read records 1 through 999 first, or at least skip over them. The second type is direct access; on a direct access device, we can read record 1000 without going past all of the previous records. However, only some direct access devices allow nonsequential accesses without a significant time penalty; these are called random access devices. Unfortunately, disk drives are direct access devices, but not random access ones. The amount of time it takes to get to a particular data record depends on how close the read/write head is to the desired position; in fact, sequential reading of data may be more than ten times as fast as random access.

Is there a way to find a record in a large file with an average of about one nonsequential access? Yes; in fact, there are several such methods, varying in complexity. They are all variations on hash coding, or address calculation; as you will see, such methods actually can be implemented quite simply, although for some reason they have acquired a reputation for mystery.

Hashing It Out

Let's start by considering a linear, or sequential, search. That is, we start at the beginning of the file and read each record until we find the one we want (because its key is the same as the key we are looking for). If we get to the end of the file without finding a record with that key, the record isn't in the file.
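The linear search just described can be sketched as follows; this is a minimal illustration, assuming in-memory records keyed by an integer (the names `record` and `find_record` are ours, not from the original text):

```c
#include <assert.h>

/* A record with an integer key and some associated data. */
typedef struct {
    int key;
    const char *data;
} record;

/* Scan records 0..n-1 in order; return the index of the first record
   whose key matches, or -1 if we reach the end without finding it. */
static int find_record(const record *file, int n, int key)
{
    for (int i = 0; i < n; i++)
        if (file[i].key == key)
            return i;
    return -1; /* the record isn't in the file */
}
```

On average, a successful search examines about half the records, and an unsuccessful one examines all of them, which is the weakness discussed next.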
This is certainly a simple method, and indeed it is perfectly acceptable for a very small file, but it has one major drawback: the average time it takes to find a given record increases every time we add another record. If the file gets twice as big, it takes twice as long, on average, to find a record. So this seems useless.

Divide and Conquer
