THE FRACTAL STRUCTURE OF DATA REFERENCE- P16

For purposes of understanding its performance, a computer system is traditionally viewed as a processor coupled to one or more disk storage devices, and driven by externally generated requests (typically called transactions). Over the past several decades, very powerful techniques have become available to the performance analyst attempting to understand, at a high level, the operational behavior of such systems.

1. THE CASE FOR LRU

In this section, our objective is to determine the best scheme for managing memory, given that the underlying data conforms to the multiple-workload hierarchical reuse model. For the present, we focus on the special case θ_1 = θ_2 = … = θ_n = θ. In this special case we shall discover that the scheme we are looking for is, in fact, the LRU algorithm.

As in Chapter 4, we consider the optimal use of memory to be the one that minimizes the total delay due to cache misses. We shall assume that a fixed delay D_1 = D_2 = … = D_n = D > 0, measured in seconds, is associated with each cache miss. Also, we shall assume that all workloads share a common stage size z_1 = z_2 = … = z_n = z > 0. We continue to assume, as in the remainder of the book, that the parameter θ lies in the range 0 < θ < 1. Finally, we shall assume that all workloads are non-trivial; that is, a non-zero I/O rate is associated with every workload. The final assumption is made without loss of generality, since clearly there is no need to allocate any cache memory to a workload for which no requests must be serviced.

We begin by observing that, for any individual workload, data items have corresponding probabilities of being requested that are in descending order of the time since the previous request (due to the hierarchical reuse model). Therefore, for any individual workload, the effect of managing that workload's memory via the LRU mechanism is to place into cache memory exactly those data items which have the highest probabilities of being referenced next.
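The observation above is exactly what the LRU policy exploits: by evicting the least recently used item, the cache retains precisely the items referenced most recently, which under the hierarchical reuse model are the items most likely to be referenced next. A minimal sketch of an LRU cache (not from the book; the class and method names here are illustrative) can be written with an ordered dictionary:

```python
from collections import OrderedDict


class LRUCache:
    """Minimal LRU cache sketch: keeps the `capacity` most recently
    accessed keys, evicting the least recently used key on a miss."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # keys ordered oldest -> newest access

    def access(self, key):
        """Record an access; return True on a cache hit, False on a miss."""
        if key in self.items:
            self.items.move_to_end(key)  # mark as most recently used
            return True
        if len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict least recently used
        self.items[key] = True
        return False
```

For example, with a capacity of 2, the access sequence a, b, a, c leaves {a, c} resident: the hit on a refreshes its recency, so b becomes the least recently used item and is evicted when c arrives.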
This enormously simplifies our task, since we now know how to optimally manage any given amount of memory assigned for use by workload i. We must still, however, determine the best trade-off of memory among the n workloads. The optimal allocation of memory must be the one for which the marginal benefit (the reduction of delay per unit of added cache memory) is the same for all workloads; otherwise, we could improve performance by taking memory away from the workload with the smallest marginal benefit and giving it to the one with the largest.
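When marginal benefits are diminishing (each added unit of cache saves less delay than the last), this equal-marginal-benefit condition can be reached greedily: repeatedly give the next unit of memory to whichever workload currently offers the largest marginal benefit. A sketch of that allocation loop, with a hypothetical marginal-benefit function supplied by the caller (the function shape below is illustrative, not taken from the book):

```python
import heapq


def allocate(total_units, marginal_benefit, n):
    """Greedily hand out `total_units` of cache among `n` workloads.

    marginal_benefit(i, s) must return the delay reduction from giving
    workload i its (s+1)-th unit, and must be non-increasing in s for
    the greedy allocation to equalize marginal benefits optimally.
    """
    alloc = [0] * n
    # Max-heap (via negated values) keyed on each workload's next marginal benefit.
    heap = [(-marginal_benefit(i, 0), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(total_units):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        heapq.heappush(heap, (-marginal_benefit(i, alloc[i]), i))
    return alloc


# Hypothetical diminishing-returns benefit: proportional to the workload's
# I/O rate, falling off with the amount of memory already allocated.
rates = [4.0, 1.0]
mb = lambda i, s: rates[i] / (s + 1) ** 2
```

With three units to distribute, the busier workload (rate 4.0) receives two units and the lighter one receives one, at which point their next-unit marginal benefits are nearly equal, as the text requires.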
