
US6745295B2 - Designing a cache with adaptive reconfiguration - Google Patents

Designing a cache with adaptive reconfiguration Download PDF

Info

Publication number
US6745295B2
Authority
US
United States
Prior art keywords
stack
cache
stacks
recited
entry
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US09/838,433
Other versions
US20020156980A1 (en)
Inventor
Jorge R. Rodriguez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US09/838,433
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. Assignment of assignors interest (see document for details). Assignors: Rodriguez, Jorge R.
Priority to US10/005,426
Publication of US20020156980A1
Application granted
Publication of US6745295B2
Assigned to GOOGLE INC. Assignment of assignors interest (see document for details). Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION
Assigned to GOOGLE LLC. Change of name (see document for details). Assignors: GOOGLE INC.
Adjusted expiration
Expired - Lifetime

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12 Replacement control
    • G06F12/121 Replacement control using replacement algorithms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/60 Details of cache memory
    • G06F2212/6042 Allocation of cache space to multiple users or processors

Definitions

  • In step 501, a cache, e.g., a Least Recently Used (LRU)-Least Frequently Used (LFU) cache, may be created based on an analysis of the workload. The cache created may comprise one or more stacks where each stack comprises one or more cache entries as illustrated in FIG. 6.
  • FIG. 6 illustrates an embodiment of the present invention of a cache array 600 created based on an analysis of a workload. Cache array 600 may comprise one or more stacks 601A-D. Stacks 601A-D may collectively or individually be referred to as stacks 601 or stack 601, respectively. Each stack 601 may comprise one or more cache entries. In the illustrated example, cache array 600 comprises a total of 256 cache entries which are allocated across stacks 601A-D: stack 601A may comprise 128 cache entries, stack 601B may comprise 14 cache entries, stack 601C may comprise 36 cache entries, and stack 601D may comprise 78 cache entries. It is noted that cache array 600 may comprise any number of stacks 601, that each stack 601 may comprise any number of cache entries, and that FIG. 6 is illustrative.
  • Cache array 600 may comprise two logical portions, e.g., a data storage area and a cache directory, as illustrated in FIG. 7. FIG. 7 illustrates an embodiment of the present invention of cache array 600 comprising two logical portions; it is noted that cache array 600 may comprise a different number of logical portions and that FIG. 7 is illustrative. The first logical portion is a data storage area 701 comprising a set of cache entries where each cache entry stores particular instructions and data. The second logical portion is a cache directory 702 storing the logical base addresses associated with the cache entries in data storage area 701. Cache directory 702 may further be configured to store: a logical time stamp associated with each cache entry in data storage area 701 indicating the time the information, e.g., data, in the associated cache entry was requested; the frequency count associated with each cache entry in cache array 600, where the frequency count indicates the number of times the information in the associated cache entry was requested; and the hit count associated with each stack position in each stack 601 in cache array 600, where the hit count indicates the number of times the information in the associated stack position was requested.
  • The cache entries may be stored in particular stacks 601 based on the frequency counts of the cache entries. For example, stack 601A may comprise cache entries that have a frequency count less than or equal to C0. Stack 601B may comprise cache entries that have a frequency count less than or equal to C1 and greater than C0. Stack 601C may comprise cache entries that have a frequency count less than or equal to C2 and greater than C1. Stack 601D may comprise cache entries that have a frequency count less than or equal to C3 and greater than C2.
  • Stacks 601A-D may be ordered from most frequently used to least frequently used based on the frequency counts associated with each stack 601. Stack 601A is located on the lowest level of the array since the frequency count, e.g., C0, associated with stack 601A is lower than the frequency counts, e.g., C1, C2, C3, associated with the other stacks 601, e.g., stacks 601B-D. Stack 601D is located on the highest level of the array since the frequency count, e.g., C3, associated with stack 601D is higher than the frequency counts, e.g., C0, C1, C2, associated with the other stacks 601, e.g., stacks 601A-C, in cache array 600.
  • The cache entries in each particular stack 601 may be ordered within stack 601 from most recently used to least recently used based on the logical time stamps of the cache entries. That is, the cache entry whose logical time stamp indicates the most recent time entry of all the cache entries in stack 601 is placed in the first stack position, commonly referred to as the most recently used stack position. The cache entry whose logical time stamp indicates the last time entry of all the cache entries in stack 601 is placed in the last stack position, commonly referred to as the least recently used stack position.
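  • As a rough sketch of the structures just described, the following Python fragment models a cache array of frequency-banded stacks together with the per-entry directory metadata. The names (DirectoryEntry, FrequencyStack, CacheArray) and the example threshold values for C0-C3 are illustrative assumptions, not the patent's implementation.

```python
from collections import OrderedDict
from dataclasses import dataclass

@dataclass
class DirectoryEntry:
    # One record of the cache directory (cf. cache directory 702).
    base_address: int     # logical base address of the cached data
    time_stamp: int       # logical time the data was last requested
    frequency_count: int  # number of times the data was requested

class FrequencyStack:
    """One stack 601: entries whose frequency count has reached this stack's
    threshold, ordered most recently used (front) to least recently used (back)."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.entries = OrderedDict()  # base_address -> DirectoryEntry, MRU first

class CacheArray:
    """Cache array 600: stacks ordered from lowest frequency threshold
    (lowest level stack) to highest (highest level stack)."""
    def __init__(self, thresholds):
        self.stacks = [FrequencyStack(c) for c in sorted(thresholds)]

# Hypothetical thresholds C0..C3 for an array shaped like FIG. 6.
array = CacheArray(thresholds=[1, 4, 16, 64])
```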
  • In step 502, server 302 may be configured to receive a new request from a particular client 301. It is noted that the request may be a request to read from and/or write to disk 420 of server 302. It is further noted that the embodiment of the present invention is not limited to read and/or write requests but any requests that require service from server 302.
  • In step 503, the workload comprising a stream of requests including the new request may be tracked. A workload is not static but dynamic and changes over time. Consequently, it may be desirable for cache array 600 to adapt to changes in the request stream. In step 504, cache array 600 may be reconfigured based on tracking the workload. Step 504 may comprise sub-steps as illustrated in FIG. 8.
  • If an item requested in the stream of new requests, i.e., changes in the request stream, is present in a particular cache entry, a “cache hit” is said to occur. It may be desirable for cache array 600 to adapt to changes in the request stream such as when a request results in a cache hit. FIG. 9 illustrates an embodiment of the present invention of cache array 600 configured to adaptively reconfigure when a request in the stream of new requests results in a cache hit.
  • When a cache hit occurs in a particular stack 601, e.g., stack 601A, the frequency count associated with that cache entry is updated, i.e., increased by one, in the cache directory in step 802. A determination is then made in step 803 as to whether the updated frequency count associated with that particular cache entry has increased to the frequency count, e.g., C1, associated with the next higher level stack 601, e.g., stack 601B. If the updated frequency count has not increased to the frequency count associated with the next higher level stack 601, then that particular cache entry may be stored in the most recently used stack position in its own stack 601, e.g., stack 601A, in step 804. If the updated frequency count has increased to the frequency count associated with the next higher level stack 601, then that particular cache entry may be stored in the most recently used stack position in the next higher level stack 601, e.g., stack 601B, in step 805. Upon storing the particular cache entry in the most recently used stack position in the next higher level stack 601, the next higher level stack 601 expands in size by one entry. Upon moving the cache entry with an updated frequency count to the next higher level stack 601, the next lower level stack 601, e.g., stack 601A, reduces in size by one entry. If a cache hit occurs in the highest level stack 601, e.g., stack 601D, in the array, the cache entry associated with the cache hit is stored at the most recently used stack position in that same stack 601. A sketch of this promotion logic appears after this list item.
  • It is noted that a stack 601 may be reduced in size to zero and therefore the number of stacks 601 in cache array 600 may be reduced. For example, if stack 601B were reduced in size to zero, then cache array 600 would comprise stacks 601A, 601C and 601D only. It is further noted that cache array 600 may initially comprise only one stack 601 and expand into a plurality of stacks 601, or may initially comprise a plurality of stacks 601 and reduce to one stack 601.
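  • A minimal sketch of the hit-time promotion described above, assuming stacks are modeled as ordered dictionaries keyed by address and holding (frequency count, logical time) pairs; the function name and the threshold values are hypothetical.

```python
from collections import OrderedDict

def on_cache_hit(stacks, thresholds, level, address, clock):
    """Steps 802-805 in rough outline: bump the entry's frequency count, then
    either refresh it at the MRU position of its own stack or, once the count
    reaches the next stack's frequency count, promote it to the MRU position
    of the next higher level stack. The gaining stack implicitly grows by one
    entry and the losing stack shrinks by one."""
    freq, _ = stacks[level][address]
    freq += 1                                       # update frequency count
    del stacks[level][address]
    if level + 1 < len(stacks) and freq >= thresholds[level + 1]:
        level += 1                                  # move to next higher stack
    stacks[level][address] = (freq, clock)          # new logical time stamp
    stacks[level].move_to_end(address, last=False)  # MRU stack position
    return level

# Hypothetical use: with C1 = 4, a third hit on 'a' promotes it to stack 1.
stacks = [OrderedDict({'a': (3, 0)}), OrderedDict()]
assert on_cache_hit(stacks, [1, 4], 0, 'a', clock=7) == 1
```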
  • If an item requested in the stream of new requests is not present in a particular cache entry, a “cache miss” is said to occur. When a cache miss occurs, the requested item, e.g., information, data, may be retrieved from disk 420 and then stored in the most recently used stack position of the lowest level stack, e.g., stack 601A, as illustrated in FIG. 9. Since cache array 600 has a fixed number of cache entries, a cache entry in a least recently used stack position in one of the stacks 601 of cache array 600 may be evicted to make room for the new entry. The method of selecting which cache entry in one of the stacks 601 to evict is described in steps 806-809.
  • Cache array 600 may be reconfigured when a cache miss occurs by tracking the number of cache hits in one or more particular stack positions in each particular stack 601 of cache array 600 during a particular duration of time. In one embodiment, the number of cache hits is tracked in the one or more stack positions located towards the end of each stack 601, since the cache entries in these stack positions are least likely to incur a cache hit and hence are the most desirable to evict so as to provide an entry to store the requested information from disk 420. For example, the last four stack positions in each particular stack 601 of cache array 600 may be tracked for cache hits as illustrated in FIG. 10.
  • FIG. 10 illustrates an embodiment of the present invention of cache array 600 with additional units, e.g., adders 1001A-1001D and comparison unit 1002, configured to adaptively reconfigure cache array 600 when a cache miss occurs. Referring to FIG. 10, stack positions 125-128 in stack 601A, stack positions 11-14 in stack 601B, stack positions 33-36 in stack 601C, and stack positions 75-78 in stack 601D may be tracked. It is noted that any particular stack position in each particular stack may be tracked; however, the number of stack positions tracked in each particular stack 601 should be the same. A more detailed explanation of FIG. 10 is provided further below.
  • The cache hits may be tracked for each particular stack 601, e.g., stacks 601A-D, of cache array 600 in one or more windows of a particular duration of time, e.g., time t_n to t_(n-4), as illustrated in FIG. 11. FIG. 11 illustrates an embodiment of the present invention of tracking cache hits in one or more windows of a particular duration of time. It is noted that the windows may vary in duration of time and that FIG. 11 is illustrative. In the illustrated embodiment, the duration of time from time t_n to t_(n-4) may comprise four windows, e.g., window n, window n-1, window n-2, window n-3. Within each window, the number of cache hits in one or more particular stack positions in each particular stack 601, e.g., stacks 601A-D, may be tracked. For example, in the first window, e.g., window n, two cache hits occurred in the one or more stack positions tracked in stack 601A, as indicated by the two “A's” under window n. The other cache hits are similarly indicated in the other windows, e.g., window n-1, window n-2, window n-3.
  • The particular time a cache hit occurs may be based on a logical time stamp that marks the arrival of the particular request in the request stream. That is, a logical time stamp may mark the arrival of a request that results in a cache hit. The cache hits in each window may be assigned a particular weight based on the recency of the cache hit; that is, the more current requests in the request stream may be assigned a greater weight than the requests issued further back in time. For example, the cache hits may be assigned a weight of 0.4 for those occurring in window n, a weight of 0.3 for those occurring in window n-1, a weight of 0.2 for those occurring in window n-2 and a weight of 0.1 for those occurring in window n-3, as sketched below.
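  • One way to realize this recency weighting, sketched under the assumption that hits are recorded as logical time stamps and that windows are equal-width slices of logical time; the function name and window width are hypothetical.

```python
def weighted_hit_count(hit_times, now, window, weights=(0.4, 0.3, 0.2, 0.1)):
    """Weight the cache hits observed in a stack's tracked stack positions by
    recency (cf. FIG. 11): hits in window n weigh 0.4, window n-1 weighs 0.3,
    window n-2 weighs 0.2 and window n-3 weighs 0.1."""
    total = 0.0
    for t in hit_times:
        age = (now - t) // window      # 0 for window n, 1 for window n-1, ...
        if 0 <= age < len(weights):
            total += weights[age]      # hits further back in time count less
    return total

# Hypothetical example: seven hits spread over four windows of width 10.
hits = [39, 38, 31, 30, 24, 15, 5]     # logical time stamps of tracked hits
print(round(weighted_hit_count(hits, now=40, window=10), 2))  # -> 2.1
```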
  • In one embodiment, the number of cache hits in each of the one or more stack positions tracked in each particular stack 601 during a particular period of time may be counted by a particular counter associated with that particular stack position. Each counter associated with a particular stack position may be implemented in software. For example, disk unit 420 or application 450 may comprise software configured to generate a particular counter associated with a particular stack position.
  • Referring again to FIG. 10, the number of cache hits counted in the one or more stack positions, e.g., the last four positions, tracked during a particular period of time in each particular stack 601 may be added by adders 1001A-1001D. Adders 1001A-1001D may collectively or individually be referred to as adders 1001 or adder 1001, respectively. The output of adders 1001 is inputted to comparison unit 1002, which is configured to determine which stack 601 had the highest hit count and which stack 601 had the lowest hit count in the one or more stack positions tracked during a particular period of time, as explained in greater detail below. It is noted that the stacks 601 of cache array 600 may be coupled to a different number of adders 1001 corresponding to a different number of stacks 601 in cache array 600 and that FIG. 10 is illustrative.
  • Referring to FIG. 8, one or more stack positions in each particular stack 601 may be tracked for cache hits during a particular period of time in step 806. The number of cache hits occurring in the one or more stack positions tracked in step 806 during a particular period of time in each particular stack 601 may be counted in step 807. The number of cache hits counted in the one or more stack positions tracked in each particular stack 601 may be added in step 808 using adders 1001A-D. For example, referring to FIGS. 10 and 11, the number of cache hits occurring in stack positions 125-128 in stack 601A was seven from time t_n to t_(n-4), the number occurring in stack positions 11-14 in stack 601B was four, the number occurring in stack positions 33-36 in stack 601C was three, and the number occurring in stack positions 75-78 in stack 601D was five.
  • In one embodiment, the number of cache hits counted in step 807 and added in step 808 may be adjusted according to a weight assigned to the one or more windows of the period of time, e.g., t_n to t_(n-4), used to track the one or more stack positions in stacks 601. For example, if the cache hits are assigned a weight of 0.4 for those occurring in window n, a weight of 0.3 for those occurring in window n-1, a weight of 0.2 for those occurring in window n-2 and a weight of 0.1 for those occurring in window n-3, then the weighted number of cache hits occurring in stack positions 125-128 in stack 601A is 1.9 from time t_n to t_(n-4), the weighted number occurring in stack positions 11-14 in stack 601B is 1, the weighted number occurring in stack positions 33-36 in stack 601C is 0.8, and the weighted number occurring in stack positions 75-78 in stack 601D is 1.3.
  • In step 809, the total number of cache hits in the one or more stack positions, e.g., four stack positions, tracked in each stack 601 during a particular period of time may be compared with one another by comparison unit 1002. Upon comparing the totals, a cache entry may be evicted in one of the stacks 601 of cache array 600, thereby allowing cache array 600 to store the requested information on a cache miss, as described in greater detail below. In one embodiment, the stack 601 with the lowest number of hit counts in the one or more stack positions tracked may be reduced in size by one entry by comparison unit 1002 evicting the cache entry in the least recently used stack position in that stack 601.
  • As stated above, when an item requested in the stream of new requests is not present in cache array 600, a “cache miss” is said to occur. When a cache miss occurs, the requested item may be retrieved from disk 420 and then stored in the most recently used stack position in the lowest level stack 601, e.g., stack 601A. Since cache array 600 may have a fixed number of cache entries, e.g., 256, a cache entry must be evicted from one of the stacks 601 in cache array 600 in order to store a new entry. It may be desirable to evict the cache entry that is least important, thereby being able to insert a new entry to store the requested information. The cache entry that is least important may be indicated by a low number of hit counts. Hence, the cache entry in the least recently used stack position in the stack 601 with the lowest number of hit counts in the one or more stack positions tracked may be evicted, and the information, e.g., data, in that cache entry may be discarded. A new entry may then be inserted in the most recently used stack position in the lowest level stack 601, e.g., stack 601A, to store the requested information from disk 420.
  • For example, referring to FIGS. 10 and 11, stack 601C has the lowest hit count in the one or more stack positions tracked during a particular period of time. Hence, comparison unit 1002 may reduce the size of stack 601C by one entry by evicting the cache entry in its least recently used stack position. Stack 601C may then be reconfigured to have a length of 35 cache entries instead of 36 cache entries. A new entry may then be inserted in the most recently used stack position in the lowest level stack 601, e.g., stack 601A, to store the requested information from disk 420. Stack 601A would then be reconfigured to have a length of 129 cache entries instead of 128 cache entries.
  • It is noted that comparison unit 1002 may also be configured to evict the least recently used stack position in the stack 601 associated with the lowest frequency count. It is further noted that if cache array 600 has only one stack 601, then the cache entry at the least recently used stack position in that one stack 601 would be evicted to make room for the new entry inserted in the most recently used stack position to store the requested information from disk 420 on a cache miss. It is further noted that a stack 601 may be reduced in size to zero and therefore the number of stacks 601 in cache array 600 may be reduced. For example, if stack 601B were reduced in size to zero, then cache array 600 would comprise stacks 601A, 601C and 601D only.
  • In another embodiment, the stack 601, e.g., stack 601C, with the lowest number of hit counts in the one or more stack positions tracked may be reduced in size by one entry and the stack 601, e.g., stack 601A, with the highest number of hit counts in the one or more stack positions tracked may be increased in size by one entry by comparison unit 1002. The stack 601 with the lowest number of hit counts may be reduced in size by one entry by comparison unit 1002 evicting the cache entry in its least recently used stack position. An entry may then be added by comparison unit 1002 to the stack 601, e.g., stack 601A, with the highest number of hit counts, which may store the information, e.g., data, requested in a cache miss or the information in a cache entry evicted. A sketch of the miss path follows this list item.
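  • The miss path, steps 806-809, might then be sketched as below; the directory-only model (no data storage area) and the pre-computed per-stack totals standing in for adders 1001 are simplifying assumptions, and the function name is hypothetical.

```python
from collections import OrderedDict

def on_cache_miss(stacks, tail_hit_counts, address, clock):
    """Evict from the stack whose tracked tail positions saw the fewest hits
    (the comparison performed by comparison unit 1002), then insert the newly
    fetched item at the MRU position of the lowest level stack, which grows
    by one entry while the losing stack shrinks by one."""
    victim = min((i for i, s in enumerate(stacks) if s),
                 key=lambda i: tail_hit_counts[i])
    stacks[victim].popitem(last=True)           # evict least recently used entry
    stacks[0][address] = (1, clock)             # new entry, frequency count 1
    stacks[0].move_to_end(address, last=False)  # most recently used position
    return victim

# Hypothetical example mirroring FIGS. 10 and 11: stack 601C (index 2) has
# the lowest weighted tail hit count, so its LRU entry is evicted.
stacks = [OrderedDict(a=(1, 1)), OrderedDict(b=(5, 2)), OrderedDict(c=(9, 3)),
          OrderedDict(d=(17, 4))]
assert on_cache_miss(stacks, [1.9, 1.0, 0.8, 1.3], 'x', clock=9) == 2
```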
  • In step 505, a determination may be made as to whether there are more new requests, e.g., requests to read from and/or write to disk 420 of server 302, to be received by server 302. If there are more new requests, then server 302 receives the next new request in step 502. If there are no more new requests, then method 500 is terminated in step 506.
  • It is noted that method 500 may be executed in a different order than presented and that the order presented in the discussion of FIGS. 5 and 8 is illustrative. It is further noted that certain steps may be executed almost concurrently.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A system, computer program product and method for reconfiguring a cache. A cache array may be created with one or more stacks of cache entries based on a workload. The one or more stacks may be ordered from most frequently used to least frequently used. The cache entries in each particular stack may be ordered from most recently used to least recently used. When a cache hit occurs, the cache entry requested may be stored in the next higher level stack if the updated frequency count is associated with the next higher level stack. When a cache miss occurs, the cache entry in a least recently used stack position in the stack with the lowest number of cache hits in the one or more stack positions tracked during a particular period of time may be evicted thereby allowing the requested information to be stored in the lowest level stack.

Description

CROSS REFERENCE TO RELATED APPLICATION
The present invention is related to the following U.S. patent application which is incorporated herein by reference:
Ser. No. 09/838,607 entitled “Designing a Cache Using a Canonical LRU-LFU Array” filed Apr. 19, 2001.
TECHNICAL FIELD
The present invention relates to the field of cache design, and more particularly to designing a cache with adaptive reconfiguration thereby improving the performance of the cache.
BACKGROUND INFORMATION
A network server, e.g., file server, database server, web server, may be configured to receive a stream of requests from clients in a network system to read from or write to a disk, e.g., disk drive, in the network server. These requests may form what is commonly referred to as a “workload” for the network server. That is, a workload may refer to the requests that need to be serviced by the network server.
Typically, a server in a network system comprises a disk adapter that bridges the disk, e.g., disk drive, to the processing unit of the server unit. A server may further comprise a cache commonly referred to as a disk cache within the disk adapter to increase the speed of accessing data. A cache is faster than a disk and thereby allows data to be read at higher speeds. Thus, if data is stored in the cache it may be accessed at higher speeds than accessing the data on the disk.
There have been many methods in designing disk caches that seek to increase the cache hit rate thereby improving performance of the disk cache. A “cache hit” is said to occur if an item, e.g., data, requested by the processor in the server or a client in a network system, is present in the disk cache. When an item, e.g., data, requested by the processor in the server or a client in the network system, is not present in the cache, a “cache miss” is said to occur. A “cache hit rate” may refer to the rate at which cache hits occur. By improving the cache hit rate, the performance of the cache may be improved, i.e., less data needs to be serviced from the disk.
One method to improve the performance of a disk cache is commonly referred to as the Least Recently Used (LRU) replacement method as illustrated in FIG. 1. The LRU replacement method uses a single stack 101 comprising a set of cache entries where each cache entry stores particular data. As stated above, if an item, e.g., data, requested by the processor in the server or client in a network system is present in the cache, a “cache hit” is said to occur. When a cache hit occurs, the cache entry comprising the information, e.g., data, requested moves to the first stack position as illustrated in FIG. 1. As stated above, if an item, e.g., data, requested by the processor in the server or client in a network system is not present in the cache, a “cache miss” is said to occur. When a cache miss occurs, the requested item is retrieved from the disk and then stored in the first stack position as illustrated in FIG. 1. When a new entry is inserted in stack 101, the cache entry in the last stack position of stack 101 is evicted. The information, e.g., data, may subsequently be discarded.
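To make the mechanics concrete, here is a minimal Python sketch of the single-stack LRU scheme just described; the class name, the fixed capacity and the use of an ordered dictionary are illustrative assumptions rather than anything prescribed by the patent.

```python
from collections import OrderedDict

class LRUCache:
    """Single stack of cache entries (cf. stack 101), first stack position at the front."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = OrderedDict()  # key -> data, front = first stack position

    def request(self, key, fetch_from_disk):
        if key in self.stack:                        # cache hit: move the entry
            self.stack.move_to_end(key, last=False)  # to the first stack position
            return self.stack[key]
        data = fetch_from_disk(key)                  # cache miss: retrieve from disk
        self.stack[key] = data
        self.stack.move_to_end(key, last=False)      # store in first stack position
        if len(self.stack) > self.capacity:          # evict the entry in the last
            self.stack.popitem(last=True)            # stack position and discard it
        return data

# e.g.: LRUCache(256).request(block_id, read_block_from_disk)
```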
Another method to improve the performance of a disk cache is commonly referred to as the Segmented LRU (S-LRU) replacement method as illustrated in FIG. 2. The S-LRU replacement method may use two stacks 201A-B. Each stack, stack 201A-B, may comprise a set of cache entries where each cache entry stores particular instructions and data. When a cache hit occurs in the first stack, e.g., stack 201A, the cache entry comprising the information, e.g., data, requested moves up to the first stack position of the second stack, e.g., stack 201B, as illustrated in FIG. 2. When a new entry is added to stack 201B, the cache entry at the last stack position of stack 201B is evicted to the first stack position of stack 201A. When a new entry is inserted in stack 201A, the cache entry at the last stack position of stack 201A is evicted. The information, e.g., data, may subsequently be discarded. When a cache hit occurs in the second stack, e.g., stack 201B, the cache entry comprising the information, e.g., data, requested moves up to the first stack position of that stack, e.g., stack 201B, as illustrated in FIG. 2. When a new entry is inserted in stack 201B, the cache entry at the last stack position of stack 201B is evicted to the first stack position of stack 201A. When a new entry is inserted in stack 201A, the cache entry at the last stack position of stack 201A is evicted. When a cache miss occurs, the requested item is retrieved from the disk and then stored in the first stack position of the first stack, e.g., stack 201A, as illustrated in FIG. 2. When a new entry is inserted in stack 201A, the cache entry at the last stack position of stack 201A is evicted. The information, e.g., data, may subsequently be discarded.
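The two-stack S-LRU behavior can be sketched the same way; again the names and capacities are assumptions, and the promotion and demotion rules follow the description above.

```python
from collections import OrderedDict

class SegmentedLRUCache:
    """Two stacks (cf. stacks 201A-B): a hit in stack A promotes the entry to
    the first position of stack B; overflow from B falls back to the front of
    stack A, whose own overflow is evicted and discarded."""
    def __init__(self, cap_a, cap_b):
        self.cap_a, self.cap_b = cap_a, cap_b
        self.stack_a = OrderedDict()  # cf. stack 201A, MRU first
        self.stack_b = OrderedDict()  # cf. stack 201B, MRU first

    def _push_front(self, stack, key, data, capacity):
        stack[key] = data
        stack.move_to_end(key, last=False)       # first stack position
        if len(stack) > capacity:
            return stack.popitem(last=True)      # entry leaving the last position
        return None

    def request(self, key, fetch_from_disk):
        if key in self.stack_b:                  # hit in stack 201B: move entry
            self.stack_b.move_to_end(key, last=False)  # to its first position
            return self.stack_b[key]
        if key in self.stack_a:                  # hit in stack 201A: promote to
            data = self.stack_a.pop(key)         # first position of stack 201B
            demoted = self._push_front(self.stack_b, key, data, self.cap_b)
            if demoted is not None:              # 201B's last entry falls back
                d_key, d_data = demoted          # to the first position of 201A
                self._push_front(self.stack_a, d_key, d_data, self.cap_a)
            return data
        data = fetch_from_disk(key)              # miss: insert at front of 201A
        self._push_front(self.stack_a, key, data, self.cap_a)
        return data
```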
Unfortunately, these methods of cache design focus on static techniques instead of adaptive techniques. For example, the lengths of the stacks in these caches do not adapt, i.e., change in size, to changes in the request stream. Consequently, these methods do not use memory space efficiently since the cache is not designed based on adaptive techniques. If the memory space were used efficiently, then the cache hit rate could be improved.
It would therefore be desirable to develop a cache based on adaptive techniques thereby improving performance of the cache, i.e., improving the cache hit rate.
SUMMARY
The problems outlined above may at least in part be solved in some embodiments by designing a cache array reconfigurable based on tracking the changes in the request stream, i.e., workload.
In one embodiment of the present invention, a method for reconfiguring a cache may comprise the step of creating a cache array with one or more stacks of cache entries based on a workload. Each stack may be associated with a particular frequency count. That is, each cache entry in that particular stack has a frequency count of at least the frequency count associated with that particular stack. A frequency count may indicate the number of times the information, e.g., data, in the associated cache entry was requested. The one or more stacks in the cache array may then be ordered in an array from most frequently used to least frequently used based on the frequency counts associated with the one or more stacks. The cache entries in each particular stack may be ordered from most recently used to least recently used based on a logical time stamp indicating the time the information, e.g., data, associated with the cache entry was requested.
A workload is not static but dynamic and changes over time. As the workload changes, the cache may be reconfigured based on tracking the changes in the workload. If an item requested in the stream of new requests, i.e., changes in the request stream, is present in a particular cache entry, a “cache hit” is said to occur. When a cache hit occurs, the frequency count associated with the cache entry requested is updated, i.e., increased by one, in the cache directory associated with that cache entry. A determination may then be made as to whether the updated frequency count associated with that particular cache entry subsequently increases in number to the frequency count associated with the next higher level stack. If the updated frequency count associated with that particular cache entry subsequently increases in number to the frequency count associated with the next higher level stack, then that particular cache entry may be stored in a most recently used stack position in the next higher level stack. Upon storing the particular cache entry in the most recently used stack position in the next higher level stack, the next higher level stack subsequently expands in size by one entry. Upon moving the cache entry with an updated frequency count to the next higher level stack, the next lower level stack reduces in size by one entry.
If an item requested in the stream of new requests, i.e., changes in the request stream, is not present in a particular cache entry, a “cache miss” is said to occur. A cache array may be reconfigured when a cache miss occurs by tracking the number of cache hits in one or more particular stack positions in each particular stack of the cache array during a particular duration of time. The one or more stack positions tracked in each stack may be located towards the end of each stack since the cache entries in these stack positions are least likely to incur a cache hit and hence most desirable to evict so as to provide an entry to store the requested information from a disk. The number of cache hits in each of the one or more stack positions tracked in each stack during a particular period of time may be counted. The number of cache hits counted in each of the one or more stack positions tracked in each stack during a particular period of time may be added. The total number of cache hits in the one or more stack positions tracked in each stack during a particular period of time may be compared with one another. The cache entry in the least recently used stack position in the stack with the lowest number of cache hits in the one or more stack positions tracked may be evicted thereby allowing a new entry to be inserted in the most recently used stack position in the lowest level stack to store the requested information. Subsequently, the stack with the lowest number of cache hits in the one or more stack positions tracked may be reduced in size by one entry and the stack storing the requested information may be increased in size by one entry.
The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
A better understanding of the present invention can be obtained when the following detailed description is considered in conjunction with the following drawings, in which:
FIG. 1 illustrates an embodiment of the Least Recently Used replacement method for designing a cache;
FIG. 2 illustrates an embodiment of the Segmented Least Recently Used replacement method for designing a cache;
FIG. 3 illustrates an embodiment of a network system configured in accordance with the present invention;
FIG. 4 illustrates an embodiment of the present invention of a server;
FIG. 5 is a flowchart of a method for designing a cache configured to adaptively reconfigure;
FIG. 6 illustrates an embodiment of a cache array created based on an analysis of a workload configured in accordance with the present invention;
FIG. 7 illustrates an embodiment of a cache array comprising two logical portions configured in accordance with the present invention;
FIG. 8 is a flowchart of the sub-steps of the step of reconfiguring a cache array based on changes in the workload;
FIG. 9 illustrates an embodiment of the present invention of a cache array configured to adaptively reconfigure when a request in the request stream results in a cache hit;
FIG. 10 illustrates an embodiment of the present invention of a cache array configured to adaptively reconfigure when a request in the request stream results in a cache miss; and
FIG. 11 illustrates an embodiment of the present invention of tracking cache hits in one or more windows of a particular duration of time.
DETAILED DESCRIPTION
The present invention comprises a system, computer program product and method for reconfiguring a cache. In one embodiment of the present invention, a cache array may be created with one or more stacks of cache entries based on a workload. The one or more stacks in the cache array may be ordered in an array from most frequently used to least frequently used based on the frequency counts associated with the one or more stacks. The cache entries in each particular stack may be ordered from most recently used to least recently used based on a logical time stamp indicating the time the information, e.g., data, associated with the cache entry was requested. When a cache hit occurs, the frequency count associated with the cache entry requested is updated, i.e., increased by one, in the cache directory associated with that cache entry. If the updated frequency count associated with that particular cache entry subsequently increases in number to the frequency count associated with the next higher level stack, then that particular cache entry may be stored in a most recently used stack position in the next higher level stack. Upon storing the particular cache entry in the most recently used stack position in the next higher level stack, the next higher level stack subsequently expands in size by one entry. Upon moving the cache entry with an updated frequency count to the next higher level stack, the next lower level stack reduces in size by one entry. When a cache miss occurs, the cache entry in a least recently used stack position in the stack with the lowest number of cache hits in the one or more stack positions tracked during a particular period of time may be evicted thereby allowing a new entry to be inserted in the most recently used stack position in the lowest level stack to store the requested information. Subsequently, the stack with the lowest number of cache hits in the one or more stack positions tracked may be reduced in size by one entry and the stack storing the requested information may be increased in size by one entry. It is noted that even though the following discusses the present invention in connection with a disk cache, the present invention may be implemented in any type of cache including a memory cache and a filter cache.
FIG. 3—Network System
FIG. 3 illustrates one embodiment of the present invention of a network system 300. Network system 300 may comprise one or more clients 301A-D configured to send requests to a server 302, e.g., file server, database server, web server. Clients 301A-D may collectively or individually be referred to as clients 301 or client 301, respectively. It is noted that system 300 may comprise any number of clients 301 and that FIG. 3 is illustrative. It is further noted that network system 300 may be any type of system such as a file system or a database system and that FIG. 3 is not to be limited in scope to any one particular embodiment.
FIG. 4—Server
FIG. 4 illustrates an embodiment of the present invention of server 302. Referring to FIGS. 3 and 4, one or more clients 301 may issue requests to read from or write to a disk 420 in server 302. It is noted that the embodiment of the present invention is not limited to read and/or write requests but any requests that require service from server 302. As stated in the Background Information section, this stream of requests may form what is commonly referred to as a workload. That is, a workload may refer to the requests that need to be serviced by server 302. In one embodiment, the workload may be managed by a disk adapter 418. If the requests in the workload can be serviced by a disk cache (not shown) within disk adapter 418 instead of disk 420, then the instructions and data requested may be accessed faster. Therefore, it is desirable to optimize the disk cache (not shown) so that as many requests as possible may be serviced by the disk cache. It is noted that a disk cache may reside in locations other than disk adapter 418, e.g., disk unit 420, application 450. A method for designing a cache, e.g., disk cache, configured to adaptively reconfigure, e.g., the lengths of the stacks in the cache may adapt to changes in the request stream, is described in the description of FIG. 5.
Referring to FIG. 4, server 302 may further comprise a central processing unit (CPU) 410 coupled to various other components by system bus 412. An operating system 440 runs on CPU 410, provides control and coordinates the functions of the various components of FIG. 4. Application 450, e.g., a program for designing a cache, e.g., disk cache, configured to adaptively reconfigure, e.g., the lengths of the stacks in the cache may adapt to changes in the request stream, as described in FIG. 5, runs in conjunction with operating system 440, which implements the various functions to be performed by application 450. Read only memory (ROM) 416 is coupled to system bus 412 and includes a basic input/output system ("BIOS") that controls certain basic functions of server 302. Random access memory (RAM) 414, disk adapter 418 and communications adapter 434 are also coupled to system bus 412. It should be noted that software components including operating system 440 and application 450 are loaded into RAM 414, which is the computer system's main memory. Disk adapter 418 may be a small computer system interface ("SCSI") adapter that communicates with disk units 420, e.g., disk drive. It is noted that the program of the present invention that designs a cache, e.g., disk cache, configured to adaptively reconfigure, as described in FIG. 5, may reside in disk adapter 418, disk unit 420 or in application 450. Communications adapter 434 interconnects bus 412 with an outside network enabling server 302 to communicate with other such systems.
Implementations of the invention include implementations as a computer system programmed to execute the method or methods described herein, and as a computer program product. According to the computer system implementations, sets of instructions for executing the method or methods are resident in the random access memory 414 of one or more computer systems configured generally as described above. Until required by server 302, the set of instructions may be stored as a computer program product in another computer memory, for example, in disk drive 420 (which may include a removable memory such as an optical disk or floppy disk for eventual use in disk drive 420). Furthermore, the computer program product can also be stored at another computer and transmitted, when desired, to the user's workstation by a network or by an external network such as the Internet. One skilled in the art would appreciate that the physical storage of the sets of instructions physically changes the medium upon which they are stored so that the medium carries computer readable information. The change may be electrical, magnetic, chemical or some other physical change.
FIG. 5—Method for Designing a Cache Configured to Adaptively Reconfigure
FIG. 5 is a flowchart of one embodiment of the present invention of a method 500 for designing a cache configured to adaptively reconfigure. As stated in the Background Information section, prior art methods of designing caches focus on static techniques instead of adaptive techniques. For example, the lengths of the stacks in these caches do not adapt, i.e., change in size, in response to changes in the request stream. Consequently, these methods do not use memory space efficiently and hence do not improve the cache hit rate, since the cache is not designed based on adaptive techniques. It would therefore be desirable to develop a cache configured to adaptively reconfigure thereby improving the performance of the cache, i.e., improving the cache hit rate. Method 500 is a method for designing a cache configured to adaptively reconfigure.
In step 501, a cache, e.g., Least Recently Used (LRU)—Least Frequently Used (LFU) cache, may be created based on an analysis of a workload as described in U.S. patent application Ser. No. 09/838,607, entitled “Designing a Cache Using a Canonical LRU-LFU Array,” which is hereby incorporated herein in its entirety by reference. The cache created may comprise one or more stacks where each stack comprises one or more cache entries as illustrated in FIG. 6. FIG. 6 illustrates an embodiment of the present invention of a cache array 600 created based on an analysis of a workload. Cache array 600 may comprise one or more stacks 601A-D. Stacks 601A-D may collectively or individually be referred to as stacks 601 or stack 601, respectively. Each stack 601 may comprise one or more cache entries. In the exemplary embodiment, cache array 600 comprises a total of 256 cache entries which are allocated across stacks 601A-D. For example, stack 601A may comprise 128 cache entries. Stack 601B may comprise 14 cache entries. Stack 601C may comprise 36 cache entries. Stack 601D may comprise 78 cache entries. It is noted that cache array 600 may comprise any number of stacks 601 and that each stack 601 may comprise any number of cache entries and that FIG. 6 is illustrative.
Cache array 600 may comprise two logical portions, e.g., data storage area, cache directory, as illustrated in FIG. 7. FIG. 7 illustrates an embodiment of the present invention of cache array 600 comprising two logical portions. It is noted that cache array 600 may comprise a different number of logical portions and that FIG. 7 is illustrative. Referring to FIG. 7, a first logical portion is a data storage area 701 comprising a set of cache entries, where each cache entry stores particular instructions and data. A second logical portion is a cache directory 702 storing the logical base addresses associated with the cache entries in data storage area 701. Cache directory 702 may further be configured to store a logical time stamp associated with each cache entry in data storage area 701 indicating the time the information, e.g., data, in the associated cache entry was requested. Cache directory 702 may further be configured to store the frequency count associated with each cache entry in cache array 600, where the frequency count indicates the number of times the information, e.g., data, in the associated cache entry was requested. Cache directory 702 may further be configured to store the hit count associated with each stack position in each stack 601 in cache array 600, where the hit count indicates the number of times the information, e.g., data, in the associated stack position was requested.
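As a rough illustration of the two logical portions, a cache directory entry and the per-position hit counters might be modeled as follows. This is a sketch; the field names are assumptions, and the counter sizes simply mirror the example stack lengths of FIG. 6.

```python
from dataclasses import dataclass

@dataclass
class CacheDirectoryEntry:
    base_address: int     # logical base address of the data held in the cache entry
    time_stamp: int       # logical time the associated information was last requested
    frequency_count: int  # number of times the associated information was requested

# Hit counts are kept per stack position rather than per entry; a simple
# model is one counter list per stack, indexed by stack position.
hit_counts_per_stack = {"601A": [0] * 128, "601B": [0] * 14,
                        "601C": [0] * 36, "601D": [0] * 78}
```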
Referring to FIG. 6, the cache entries may be stored in particular stacks 601 based on the frequency counts of the cache entries. For example, stack 601A may comprise cache entries that have a frequency count less than or equal to C0. Stack 601B may comprise cache entries that have a frequency count less than or equal to C1 and greater than C0. Stack 601C may comprise cache entries that have a frequency count less than or equal to C2 and greater than C1. Stack 601D may comprise cache entries that have a frequency count less than or equal to C3 and greater than C2. In one embodiment, stacks 601A-D may be ordered from most frequently used to least frequently used based on the frequency counts associated with each stack 601. For example, stack 601A is located on the lowest level of the array since the frequency count, e.g., C0, associated with stack 601A is lower than the frequency counts, e.g., C1, C2, C3, associated with the other stacks 601, e.g., stack 601B-D. Stack 601D is located on the highest level of the array since the frequency count, e.g., C3, associated with stack 601D is higher than the frequency counts, e.g., C0, C1, C2, associated with the other stacks 601, e.g., stack 601A-C, in cache array 600.
Referring to FIG. 6, the cache entries in each particular stack 601, e.g., stacks 601A-D, may be ordered within stack 601 from most recently used to least recently used based on the logical time stamps of the cache entries. That is, the cache entry whose logical time stamp indicates the most recent time of all the cache entries in stack 601 is placed in the first stack position, commonly referred to as the most recently used stack position in stack 601. The cache entry whose logical time stamp indicates the least recent time of all the cache entries in stack 601 is placed in the last stack position, commonly referred to as the least recently used stack position in stack 601.
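The two ordering rules above can be summarized in a few lines of Python. The threshold values here are hypothetical; the real ceilings C0-C3 would come from the workload analysis.

```python
# Hypothetical frequency-count ceilings C0 < C1 < C2 < C3, one per stack.
THRESHOLDS = [2, 5, 12, 30]

def stack_index(frequency_count, thresholds=THRESHOLDS):
    """Stack 0 holds counts <= C0; stack i holds counts in (C(i-1), Ci];
    counts above the highest ceiling stay in the highest level stack."""
    for i, ceiling in enumerate(thresholds):
        if frequency_count <= ceiling:
            return i
    return len(thresholds) - 1

# Within a stack, entries are kept most recently used first, i.e., sorted
# by descending logical time stamp.
def order_stack(entries):
    return sorted(entries, key=lambda e: e.time_stamp, reverse=True)
```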
In step 502, server 302 may be configured to receive a new request from a particular client 301. The request may be a request to read from and/or write to disk 420 of server 302. It is further noted that the embodiment of the present invention is not limited to read and/or write requests but any requests that require service from server 302.
In step 503, the workload comprising a stream of requests including the new request may be tracked. As stated in the Background Information section, a workload is not static but dynamic and changes over time. Consequently, it may be desirable for cache array 600 to adapt to changes in the request stream. In step 504, cache array 600 may be reconfigured based on tracking the workload. Step 504 may comprise sub-steps as illustrated in FIG. 8.
Referring to FIG. 8, a determination is made in step 801 as to whether the new request in the workload received in step 502 results in a cache hit. When an item, e.g., data, requested in the stream of new requests is present in a particular cache entry, a "cache hit" is said to occur. It may be desirable for cache array 600 to adapt to changes in the request stream such as when a request results in a cache hit as illustrated in FIG. 9.
FIG. 9 illustrates an embodiment of the present invention of cache array 600 configured to adaptively reconfigure when a request in the stream of new requests, i.e., changes in the request stream, results in a cache hit. Referring to FIGS. 8 and 9, when a cache hit occurs in a particular stack 601, e.g., stack 601A, the frequency count associated with that cache entry is updated, i.e., increased by one, in the cache directory in step 802. A determination is then made in step 803 as to whether the updated frequency count associated with that particular cache entry subsequently increases in number to the frequency count, e.g., C1, associated with the next higher level stack 601, e.g., stack 601B. If the updated frequency count associated with that particular cache entry does not subsequently increase in number to the frequency count, e.g., C1, associated with the next higher level stack 601, e.g., stack 601B, then that particular cache entry may be stored in the most recently used stack position in its stack 601, e.g., stack 601A, in step 804. If the updated frequency count associated with that particular cache entry subsequently increases in number to the frequency count, e.g., C1, associated with the next higher level stack 601, e.g., stack 601B, then that particular cache entry may be stored in the most recently used stack position in the next higher level stack 601, e.g., stack 601B, in step 805. Upon storing the particular cache entry in the most recently used stack position in the next higher level stack 601, e.g., stack 601B, the next higher level stack 601, e.g., stack 601B, subsequently expands in size by one entry. Upon moving the cache entry with the updated frequency count to the next higher level stack 601, e.g., stack 601B, the next lower level stack 601, e.g., stack 601A, reduces in size by one entry. When a cache hit occurs in the highest level stack 601, e.g., stack 601D, in the array, the cache entry associated with the cache hit is stored at the most recently used stack position in that stack 601, e.g., stack 601D. It is noted that a stack 601 may be reduced in size to zero and therefore the number of stacks 601 in cache array 600 may be reduced. For example, if stack 601B were reduced in size to zero, then cache array 600 would comprise stacks 601A, 601C and 601D only. It is further noted that cache array 600 may initially comprise only one stack 601 and expand into a plurality of stacks 601. It is further noted that cache array 600 may initially comprise a plurality of stacks 601 and reduce to one stack 601.
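A sketch of the hit path (steps 802-805) might look as follows, assuming the stacks are deques ordered lowest level first with the most recently used position at the left; `stack_index` style thresholds are the hypothetical helper values sketched earlier.

```python
def on_cache_hit(stacks, directory, address, thresholds):
    """Step 802: bump the frequency count; steps 803-805: move the entry to
    the MRU position of either its own stack or the next higher level stack."""
    entry = directory[address]
    entry.frequency_count += 1                         # step 802
    current = next(i for i, s in enumerate(stacks) if address in s)
    stacks[current].remove(address)                    # detach from current position
    if (current + 1 < len(stacks)
            and entry.frequency_count > thresholds[current]):
        stacks[current + 1].appendleft(address)        # step 805: MRU of next higher stack
    else:
        stacks[current].appendleft(address)            # step 804: MRU of its own stack
```

Note that in this sketch the promoted-from stack shrinks by one entry and the receiving stack grows by one, matching the expand/reduce behavior described above; a hit in the highest level stack simply refreshes the entry to that stack's MRU position.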
Referring to step 801 in FIG. 8, if an item, e.g., data, requested in the stream of new requests is not present in a particular cache entry, a "cache miss" is said to occur. When a cache miss occurs, the requested item, e.g., information, data, may be retrieved from disk 420 and then stored in the most recently used stack position of the lowest level stack, e.g., stack 601A, as illustrated in FIG. 9. When a new entry is inserted in stack 601A, a cache entry in a least recently used stack position in one of the stacks 601 of cache array 600 may be evicted. The method of selecting which cache entry in one of the stacks 601 to evict is described in steps 806-809.
In step 806, cache array 600 may be reconfigured when a cache miss occurs by tracking the number of cache hits in one or more particular stack positions in each particular stack 601 of cache array 600 during a particular duration of time. In one embodiment, the number of cache hits is tracked in the one or more stack positions located towards the end of each stack 601, since the cache entries in these stack positions are least likely to incur a cache hit and hence most desirable to evict so as to provide an entry to store the requested information from disk 420. For example, the last four stack positions in each particular stack 601 of cache array 600 may be tracked for cache hits as illustrated in FIG. 10.
FIG. 10 illustrates an embodiment of the present invention of cache array 600 with additional units, e.g., adders 1001A-1001D, comparison unit 1002, configured to adaptively reconfigure cache array 600 when a cache miss occurs. Referring to FIG. 10, stack positions 125-128 in stack 601A may be tracked. Stack positions 11-14 in stack 601B may be tracked. Stack positions 33-36 in stack 601C may be tracked. Stack positions 75-78 in stack 601D may be tracked. It is noted that any particular stack positions in each particular stack may be tracked. However, the number of stack positions tracked in each particular stack 601 should be the same. A more detailed explanation of FIG. 10 is provided further below.
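Under the same assumptions, the per-stack adders 1001A-D can be modeled as sums over the counters of the tracked tail positions; the counter layout is the hypothetical one sketched after the discussion of FIG. 7.

```python
def tail_hit_totals(hit_counts_per_stack, tracked=4):
    """Sum the hit counters of the last `tracked` stack positions of each
    stack, mirroring adders 1001A-1001D feeding comparison unit 1002."""
    return {stack: sum(counters[-tracked:])
            for stack, counters in hit_counts_per_stack.items()}
```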
As stated above, the one or more stack positions in each particular stack 601, e.g., stacks 601A-D, of cache array 600 may be tracked for cache hits during a particular duration of time. In one embodiment, the cache hits may be tracked for each particular stack 601, e.g., stacks 601A-D, of cache array 600 in one or more windows of a particular duration of time, e.g., time tn to tn−4, as illustrated in FIG. 11. FIG. 11 illustrates an embodiment of the present invention of tracking cache hits in one or more windows of a particular duration of time. It is noted that the windows may vary in duration of time and that FIG. 11 is illustrative. FIG. 11 illustrates that the duration of time from time tn to tn−4 may comprise four windows, e.g., window n, window n−1, window n−2, window n−3. During each window, the number of cache hits in one or more particular stack positions in each particular stack 601, e.g., stacks 601A-D, may be tracked. For example, during the first window, e.g., window n, two cache hits occurred in the one or more stack positions tracked in stack 601A as indicated by the two "A's" under window n. One cache hit occurred in the one or more stack positions tracked in stack 601B during the first window, e.g., window n, as indicated by the "B" under window n. One cache hit occurred in the one or more stack positions tracked in stack 601C during the first window, e.g., window n, as indicated by the "C" under window n. Two cache hits occurred in the one or more stack positions tracked in stack 601D during the first window, e.g., window n, as indicated by the two "D's" under window n. The other cache hits are similarly indicated in the other windows, e.g., window n−1, window n−2, window n−3, of a particular duration of time. The particular time a cache hit occurs may be based on a logical time stamp that marks the arrival of the particular request in the request stream. That is, a logical time stamp may mark the arrival of a request that results in a cache hit.
In one embodiment, the cache hits in each window, e.g., window n, may be assigned a particular weight based on the recency of the cache hit. That is, the more recent requests in the request stream may be assigned a greater weight than the requests issued further back in time. For example, the cache hits may be assigned a weight of 0.4 for those occurring in window n, a weight of 0.3 for those occurring in window n−1, a weight of 0.2 for those occurring in window n−2 and a weight of 0.1 for those occurring in window n−3.
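A recency-weighted total could then be computed as below. The weights are the example values given above; the per-window split for stack 601A beyond window n is an assumption, one split consistent with the figures (the specification gives only the two hits in window n, a raw total of seven, and the weighted total of 1.9 quoted later).

```python
# Example weights from the text, most recent window first.
WINDOW_WEIGHTS = [0.4, 0.3, 0.2, 0.1]   # window n, n-1, n-2, n-3

def weighted_hits(hits_per_window, weights=WINDOW_WEIGHTS):
    """hits_per_window[k] holds the raw hits counted in window n-k."""
    return sum(w * h for w, h in zip(weights, hits_per_window))

# Stack 601A: 2 hits in window n and an assumed 2, 2 and 1 hits in the
# three earlier windows (raw total 7) reproduce the weighted total of 1.9:
assert abs(weighted_hits([2, 2, 2, 1]) - 1.9) < 1e-9
```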
Referring to FIG. 8, in step 807, the number of cache hits in each of the one or more stack positions tracked in each particular stack 601 during a particular period of time may be counted. In one embodiment, the number of cache hits in each stack position in each stack 601 may be counted by a particular counter associated with that particular stack position. Each counter associated with a particular stack position may be implemented in software. For example, disk unit 420 or application 450 may comprise software configured to generate a particular counter associated with a particular stack position.
Referring to FIGS. 8 and 10, in step 808, the number of cache hits counted in each of the one or more stack positions tracked in each particular stack 601 of cache array 600 may be added as illustrated in FIG. 10. As stated above, FIG. 10 illustrates an embodiment of the present invention of cache array 600 with additional units, e.g., adders 1001A-1001D, comparison unit 1002, configured to adaptively reconfigure cache array 600 when a cache miss occurs. Cache array 600 comprises stacks 601A-D where the number of cache hits counted in the one or more stack positions, e.g., last four positions, tracked during a particular period of time in each particular stack 601 may be added by adders 1001A-1001D. Adders 1001A-1001D may collectively or individually be referred to as adders 1001 or adder 1001, respectively. The output of adders 1001 is input to a comparison unit 1002 configured to determine which stack 601 had the highest hit count in the one or more stack positions tracked and which stack 601 had the lowest hit count in the one or more stack positions tracked during a particular period of time, as explained in greater detail below. It is noted that cache array 600 may comprise a different number of stacks 601 coupled to a corresponding number of adders 1001 and that FIG. 10 is illustrative.
As stated above, one or more stack positions, e.g., the last four stack positions, in each particular stack 601 may be tracked for cache hits during a particular period of time in step 806. The number of cache hits occurring in the one or more stack positions tracked in step 806 during a particular period of time in each particular stack 601 may be counted in step 807. The number of cache hits counted in the one or more stack positions tracked in each particular stack 601 may be added in step 808 using adders 1001A-D. For example, referring to FIGS. 10 and 11, the number of cache hits occurring in stack positions 125-128 in stack 601A was seven from time tn to tn−4. The number of cache hits occurring in stack positions 11-14 in stack 601B was four from time tn to tn−4. The number of cache hits occurring in stack positions 33-36 in stack 601C was three from time tn to tn−4. The number of cache hits occurring in stack positions 75-78 in stack 601D was five from time tn to tn−4.
In one embodiment, the number of cache hits counted in step 807 and added in step 808 may be adjusted according to a weight assigned to the one or more windows of the period of time, e.g., tn to tn−4, used to track the one or more stack positions in stacks 601. For example, referring to FIGS. 10 and 11, the cache hits may be assigned a weight of 0.4 for those occurring in window n, a weight of 0.3 for those occurring in window n−1, a weight of 0.2 for those occurring in window n−2 and a weight of 0.1 for those occurring in window n−3. Subsequently, the number of cache hits occurring in stack positions 125-128 in stack 601A is 1.9 from time tn to tn−4. The number of cache hits occurring in stack positions 11-14 in stack 601B is 1 from time tn to tn−4. The number of cache hits occurring in stack positions 33-36 in stack 601C is 0.8 from time tn to tn−4. The number of cache hits occurring in stack positions 75-78 in stack 601D is 1.3 from time tn to tn−4.
Referring to FIGS. 8 and 10, in step 809, the total number of cache hits in the one or more stack positions, e.g., four stack positions, tracked in each stack 601 during a particular period of time may be compared with one another by comparison unit 1002. Upon comparing the total number of cache hits in the one or more stack positions, e.g., four stack positions, tracked in each stack 601 with one another, a cache entry may be evicted in one of the stacks 601 of cache array 600, thereby allowing cache array 600 to store the information requested in a cache miss, as described in greater detail below. In one embodiment, the stack 601 with the lowest number of hit counts in the one or more stack positions tracked may be reduced in size by one entry by comparison unit 1002 evicting the cache entry in the least recently used stack position in that stack 601. As stated above, when a new request in the request stream requests an item, e.g., data, not found in cache array 600, a "cache miss" is said to occur. When a cache miss occurs, the requested item may be retrieved from disk 420 and then stored in the most recently used stack position in the lowest level stack 601, e.g., stack 601A. However, cache array 600 may have a fixed number of cache entries, e.g., 256. Subsequently, in order to store a new entry, a cache entry must be evicted from one of the stacks 601 in cache array 600. It may be desirable to evict the cache entry that is least important, thereby being able to insert a new entry to store the requested information. The cache entry that is least important may be indicated by a low number of hit counts. Subsequently, the cache entry in the least recently used stack position in the stack 601 with the lowest number of hit counts in the one or more stack positions tracked may be evicted. For example, the information, e.g., data, in the cache entry in the least recently used stack position may be discarded. A new entry may then be inserted in the most recently used stack position in the lowest level stack 601, e.g., stack 601A, to store the requested information from disk 420.
For example, referring to FIGS. 10 and 11, stack 601C has the lowest hit count number in the one or more stack positions tracked during a particular period of time. Subsequently, comparison unit 1002 may reduce the size of stack 601C by one entry by evicting the cache entry in the least recently used stack position. Stack 601C may then be reconfigured to have a length of 35 cache entries instead of 36 cache entries. A new entry may then be inserted in the most recently used stack position in the lowest level stack 601, e.g., stack 601A, to store the requested information from disk 420. Stack 601A would then be reconfigured to have a length of 129 cache entries instead of 128 cache entries.
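Putting steps 806-809 together, the eviction decision can be sketched as below, reusing the hypothetical conventions above (stacks as deques, lowest level first, MRU at the left) and assuming `tail_totals` is the list of per-stack adder outputs, e.g., the weighted totals computed earlier. A real implementation would live in the disk adapter, disk unit or application 450 rather than in Python.

```python
def on_cache_miss(stacks, tail_totals, new_address):
    """Evict the LRU entry of the stack whose tracked tail positions drew the
    fewest (weighted) hits, then insert the newly fetched block at the MRU
    position of the lowest level stack, which grows by one entry."""
    coldest = min(range(len(stacks)), key=lambda i: tail_totals[i])
    evicted = stacks[coldest].pop()        # least recently used position
    stacks[0].appendleft(new_address)      # MRU of the lowest level stack
    return evicted
```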
It is noted that it is possible that two or more stacks 601 may have the lowest number of hit counts. Comparison unit 1002 may be configured to evict the cache entry in the least recently used stack position in the stack 601 associated with the lowest frequency count. It is further noted that if cache array 600 has only one stack 601, then the cache entry at the least recently used stack position in the one stack 601 would be evicted to make room for the new entry inserted in the most recently used stack position in the one stack 601 to store the requested information from disk 420 in a cache miss. It is further noted that a stack 601 may be reduced in size to zero and therefore the number of stacks 601 in cache array 600 may be reduced. For example, if stack 601B were reduced in size to zero, then cache array 600 would comprise stacks 601A, 601C and 601D only.
In another embodiment, the stack 601, e.g., stack 601C, with the lowest number of hit counts in the one or more stack positions tracked may be reduced in size by one entry and the stack 601, e.g., stack 601A, with the highest number of hit counts in the one or more stack positions tracked may be increased in size by one entry by comparison unit 1002. The stack 601, e.g., stack 601C, with the lowest number of hit counts in the one or more stack positions tracked may be reduced in size by one entry by comparison unit 1002 evicting the cache entry in the least recently used stack position. An entry may then be added to the stack 601, e.g., stack 601A, with the highest number of hit counts in the one or more stack positions tracked by comparison unit 1002, which may store the information, e.g., data, requested in a cache miss or the information in a cache entry evicted.
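This alternative embodiment can be expressed as a small variation of the same sketch: the coldest stack still gives up its least recently used entry, but the freed capacity moves to the stack with the highest tracked hit total rather than to the lowest level stack. The helper and parameter names remain hypothetical.

```python
def rebalance_alternative(stacks, tail_totals, payload):
    """Shrink the stack with the lowest tracked hit total by one entry and
    grow the stack with the highest total by one, storing `payload` (the
    missed block or a displaced entry) at that stack's MRU position."""
    coldest = min(range(len(stacks)), key=lambda i: tail_totals[i])
    hottest = max(range(len(stacks)), key=lambda i: tail_totals[i])
    stacks[coldest].pop()                  # evict the LRU entry of the coldest stack
    stacks[hottest].appendleft(payload)    # hottest stack gains one entry
```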
Referring to FIG. 5, in step 505, a determination may be made as to whether there are more new requests, e.g., request to read from and/or write to disk 420 of server 302, to be received by server 302. If there are more new requests, then server 302 receives the new request in step 502. If there are no more new requests, then method 500 is terminated in step 506.
It is noted that method 500 may be executed in a different order than presented and that the order presented in the discussion of FIGS. 5 and 8 is illustrative. It is further noted that certain steps may be executed almost concurrently.
Although the system, computer program product and method are described in connection with several embodiments, it is not intended to be limited to the specific forms set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents, as can be reasonably included within the spirit and scope of the invention as defined by the appended claims. It is noted that the headings are used only for organizational purposes and not meant to limit the scope of the description or claims.

Claims (48)

What is claimed is:
1. A method for reconfiguring a cache comprising the steps of:
creating a cache with one or more stacks of cache entries;
receiving a new request;
tracking a workload comprising a stream of requests including said new request; and
reconfiguring said cache based on said tracking of said workload;
wherein said step of reconfiguring comprises a step of:
determining whether said new request in said workload resulted in a cache hit or a cache miss;
wherein if said cache hit occurred then the method further comprises the step of:
updating a frequency count associated with a cache entry requested in a first stack.
2. The method as recited in claim 1 further comprising the step of:
determining whether said updated frequency count increases in number to a frequency count associated with a next higher level stack.
3. The method as recited in claim 2, wherein if said updated frequency count increases in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said next higher level stack, wherein said first stack is reduced in size by one entry.
4. The method as recited in claim 2, wherein if said updated frequency count does not increase in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said first stack.
5. The method as recited in claim 1, wherein said stream of requests forming said workload are requests to access a disk.
6. A method for reconfiguring a cache comprising the steps of:
creating a cache with one or more stacks of cache entries;
receiving a new request;
tracking a workload comprising a stream of requests including said new request; and
reconfiguring said cache based on said tracking of said workload;
wherein said step of reconfiguring comprises a step of:
determining whether said new request in said workload resulted in a cache hit or a cache miss;
wherein if said cache miss occurred then the method further comprises the step of:
tracking one or more stack positions in each particular stack of said one or more stacks during a particular period of time for cache hits.
7. The method as recited in claim 6 further comprising the step of:
counting a number of said cache hits in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
8. The method as recited in claim 7 further comprising the step of:
adding said number of said cache hits counted in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
9. The method as recited in claim 8, wherein said particular period of time is comprised of one or more windows of time.
10. The method as recited in claim 9, wherein each of said one or more windows of said particular period of time is assigned a particular weight.
11. The method as recited in claim 10, wherein said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time is adjusted according to said weight assigned for each of said one or more windows of time.
12. The method as recited in claim 8 further comprising the step of:
comparing said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks with one another.
13. The method as recited in claim 12, wherein a first stack of said one or more stacks associated with a lowest number of hit counts in said one or more stack positions tracked is decreased in size by one entry.
14. The method as recited in claim 13, wherein said first stack of said one or more stacks associated with said lowest number of hit counts in said one or more stack positions tracked is decreased in size by evicting a cache entry in a least recently used stack position in said first stack.
15. The method as recited in claim 14, wherein one of said one or more stacks is increased in size by one entry to store information associated with a cache miss, wherein said information associated with said cache miss is stored in a most recently used stack position in said one of said one or more stacks.
16. The method as recited in claim 15, wherein said one of said one or more stacks is a lowest level stack of said one or more stacks.
17. A computer program product embodied in a machine readable medium for reconfiguring a cache comprising the programming steps of:
creating a cache with one or more stacks of cache entries;
receiving a new request;
tracking a workload comprising a stream of requests including said new request; and
reconfiguring said cache based on said tracking of said workload;
wherein said reconfiguring said cache comprises the programming step of:
determining whether said new request in said workload resulted in a cache hit or a cache miss;
wherein if said cache hit occurred then the computer program product further comprises the programming step of:
updating a frequency count associated with a cache entry requested in a first stack.
18. The computer program product as recited in claim 17 further comprises the programming step of:
determining whether said updated frequency count increases in number to a frequency count associated with a next higher level stack.
19. The computer program product as recited in claim 18, wherein if said updated frequency count increases in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said next higher level stack, wherein said first stack is reduced in size by one entry.
20. The computer program product as recited in claim 18, wherein if said updated frequency count does not increase in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said first stack.
21. The computer program product as recited in claim 17, wherein said stream of requests forming said workload are requests to access a disk.
22. A computer program product embodied in a machine readable medium for reconfiguring a cache comprising the programming steps of:
creating a cache with one or more stacks of cache entries;
receiving a new request;
tracking a workload comprising a stream of requests including said new request; and
reconfiguring said cache based on said tracking of said workload;
wherein said reconfiguring said cache comprises the programming step of:
determining whether said new request in said workload resulted in a cache hit or a cache miss;
wherein if said cache miss occurred then the computer program product further comprises the programming step of:
tracking one or more stack positions in each particular stack of said one or more stacks during a particular period of time for cache hits.
23. The computer program product as recited in claim 22 further comprises the programming step of:
counting a number of said cache hits in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
24. The computer program product as recited in claim 23 further comprises the programming step of:
adding said number of said cache hits counted in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
25. The computer program product as recited in claim 24, wherein said particular period of time is comprised of one or more windows of time.
26. The computer program product as recited in claim 25, wherein each of said one or more windows of said particular period of time is assigned a particular weight.
27. The computer program product as recited in claim 26, wherein said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time is adjusted according to said weight assigned for each of said one or more windows of time.
28. The computer program product as recited in claim 24 further comprises the programming step of:
comparing said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks with one another.
29. The computer program product as recited in claim 28, wherein a first stack of said one or more stacks associated with a lowest number of hit counts in said one or more stack positions tracked is decreased in size by one entry.
30. The computer program product as recited in claim 29, wherein said first stack of said one or more stacks associated with said lowest number of hit counts in said one or more stack positions tracked is decreased in size by evicting a cache entry in a least recently used stack position in said first stack.
31. The computer program product as recited in claim 30, wherein one of said one or more stacks is increased in size by one entry to store information associated with a cache miss, wherein said information associated with said cache miss is stored in a most recently used stack position in said one of said one or more stacks.
32. The computer program product as recited in claim 31, wherein said one of said one or more stacks is a lowest level stack of said one or more stacks.
33. A system comprising:
a processor;
a memory unit operable for storing a computer program for reconfiguring a cache; and
a bus system coupling the processor to the memory, wherein said processor, responsive to said computer program, comprises:
circuitry operable for creating a cache with one or more stacks of cache entries;
circuitry operable for receiving a new request;
circuitry operable for tracking a workload comprising a stream of requests including said new request; and
circuitry operable for reconfiguring said cache based on said tracking of said workload;
wherein said circuitry operable for reconfiguring comprises:
circuitry operable for determining whether said new request in said workload resulted in a cache hit or a cache miss;
wherein if said cache hit occurred then said processor further comprises:
circuitry operable for updating a frequency count associated with a cache entry requested in a first stack.
34. The system as recited in claim 33, wherein said processor further comprises:
circuitry operable for determining whether said updated frequency count increases in number to a frequency count associated with a next higher level stack.
35. The system as recited in claim 34, wherein if said updated frequency count increases in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said next higher level stack, wherein said first stack is reduced in size by one entry.
36. The system as recited in claim 34, wherein if said updated frequency count does not increase in number to said frequency count associated with said next higher level stack, then said cache entry associated with said updated frequency count is stored in a most recently used stack position in said first stack.
37. The system as recited in claim 33, wherein said stream of requests forming said workload are requests to access a disk.
38. A system comprising:
a processor;
a memory unit operable for storing a computer program for reconfiguring a cache; and
a bus system coupling the processor to the memory, wherein said processor, responsive to said computer program, comprises:
circuitry operable for creating a cache with one or more stacks of cache entries;
circuitry operable for receiving a new request;
circuitry operable for tracking a workload comprising a stream of requests including said new request; and
circuitry operable for reconfiguring said cache based on said tracking of said workload;
wherein said circuitry operable for reconfiguring comprises:
circuitry operable for determining whether said new request in said workload resulted in a cache hit or a cache miss;
wherein if said cache miss occurred then said processor further comprises:
circuitry operable for tracking one or more stack positions in each particular stack of said one or more stacks during a particular period of time for cache hits.
39. The system as recited in claim 38, wherein said processor further comprises:
circuitry operable for counting a number of said cache hits in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
40. The system as recited in claim 39, wherein said processor further comprises:
circuitry operable for adding said number of said cache hits counted in each of said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time.
41. The system as recited in claim 40, wherein said particular period of time is comprised of one or more windows of time.
42. The system as recited in claim 41, wherein each of said one or more windows of said particular period of time is assigned a particular weight.
43. The system as recited in claim 42, wherein said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks during said particular period of time is adjusted according to said weight assigned for each of said one or more windows of time.
44. The system as recited in claim 40, wherein said processor further comprises:
circuitry operable for comparing said number of cache hits in said one or more stack positions tracked in each particular stack of said one or more stacks with one another.
45. The system as recited in claim 39, wherein a first stack of said one or more stacks associated with a lowest number of hit counts is decreased in size by one entry.
46. The system as recited in claim 45, wherein said first stack of said one or more stacks associated with said lowest number of hit counts in said one or more stack positions tracked is decreased in size by evicting a cache entry in a least recently used stack position in said first stack.
47. The system as recited in claim 46, wherein one of said one or more stacks is increased in size by one entry to store information associated with a cache miss, wherein said information associated with said cache miss is stored in a most recently used stack position in said one of said one or more stacks.
48. The system as recited in claim 47, wherein said one of said one or more stacks is a lowest level stack of said one or more stacks.
US09/838,433 2001-04-19 2001-04-19 Designing a cache with adaptive reconfiguration Expired - Lifetime US6745295B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US09/838,433 US6745295B2 (en) 2001-04-19 2001-04-19 Designing a cache with adaptive reconfiguration
US10/005,426 US6792509B2 (en) 2001-04-19 2001-11-07 Partitioned cache of multiple logical levels with adaptive reconfiguration based on multiple criteria


Related Child Applications (2)

Application Number Title Priority Date Filing Date
US09/838,607 Continuation-In-Part US6748491B2 (en) 2001-04-19 2001-04-19 Designing a cache using an LRU-LFU array
US10/005,426 Continuation-In-Part US6792509B2 (en) 2001-04-19 2001-11-07 Partitioned cache of multiple logical levels with adaptive reconfiguration based on multiple criteria

Publications (2)

Publication Number Publication Date
US20020156980A1 US20020156980A1 (en) 2002-10-24
US6745295B2 true US6745295B2 (en) 2004-06-01


Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4186438A (en) 1976-03-17 1980-01-29 International Business Machines Corporation Interactive enquiry system
US4463424A (en) 1981-02-19 1984-07-31 International Business Machines Corporation Method for dynamically allocating LRU/MRU managed memory among concurrent sequential processes
US4458310A (en) 1981-10-02 1984-07-03 At&T Bell Laboratories Cache memory using a lowest priority replacement circuit
US4503501A (en) 1981-11-27 1985-03-05 Storage Technology Corporation Adaptive domain partitioning of cache memory space
US5394531A (en) * 1989-04-03 1995-02-28 International Business Machines Corporation Dynamic storage allocation system for a prioritized cache
US5043885A (en) 1989-08-08 1991-08-27 International Business Machines Corporation Data cache using dynamic frequency based replacement and boundary criteria
US5357623A (en) * 1990-10-15 1994-10-18 International Business Machines Corporation Dynamic cache partitioning by modified steepest descent
US6088767A (en) * 1993-04-30 2000-07-11 International Business Machines Corporation Fileserver buffer manager based on file access operation statistics
US5892937A (en) 1993-06-04 1999-04-06 Digital Equipment Corporation Real-time data cache flushing threshold adjustment in a server computer
US5537635A (en) * 1994-04-04 1996-07-16 International Business Machines Corporation Method and system for assignment of reclaim vectors in a partitioned cache with a virtual minimum partition size
US5822562A (en) 1994-09-12 1998-10-13 International Business Machines Corporation Method and apparatus for expansion, contraction, and reapportionment of structured external storage structures
US5751993A (en) 1995-09-05 1998-05-12 Emc Corporation Cache management system
US6072830A (en) 1996-08-09 2000-06-06 U.S. Robotics Access Corp. Method for generating a compressed video signal
US6012126A (en) 1996-10-29 2000-01-04 International Business Machines Corporation System and method for caching objects of non-uniform size using multiple LRU stacks partitions into a range of sizes
US6067608A (en) 1997-04-15 2000-05-23 Bull Hn Information Systems Inc. High performance mechanism for managing allocation of virtual memory buffers to virtual processes on a least recently used basis
US5966726A (en) * 1997-05-28 1999-10-12 Western Digital Corporation Disk drive with adaptively segmented cache
JPH1139120A (en) 1997-07-16 1999-02-12 Victor Co Of Japan Ltd Contents displaying/selecting device, method therefor and recording medium recorded with program for the method
US6105103A (en) 1997-12-19 2000-08-15 Lsi Logic Corporation Method for mapping in dynamically addressed storage subsystems
US6370619B1 (en) * 1998-06-22 2002-04-09 Oracle Corporation Managing partitioned cache
US6141731A (en) * 1998-08-19 2000-10-31 International Business Machines Corporation Method and system for managing data in cache using multiple data structures
US6470419B2 (en) * 1998-12-17 2002-10-22 Fujitsu Limited Cache controlling apparatus for dynamically managing data between cache modules and method thereof
US6378043B1 (en) * 1998-12-31 2002-04-23 Oracle Corporation Reward based cache management
US6330556B1 (en) * 1999-03-15 2001-12-11 Trishul M. Chilimbi Data structure partitioning to optimize cache utilization
US6493800B1 (en) * 1999-03-31 2002-12-10 International Business Machines Corporation Method and system for dynamically partitioning a shared cache
US6542967B1 (en) * 1999-04-12 2003-04-01 Novell, Inc. Cache object store
US6507893B2 (en) * 2001-01-26 2003-01-14 Dell Products, L.P. System and method for time window access frequency based caching for memory controllers

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
D. Thiebaut, H. S. Stone, and J. L. Wolf, "Improving Disk Cache Hit-Ratios Through Cache Partitioning," IEEE Transactions on Computers, vol. 41, no. 6, Jun. 1992, pp. 665-676.
E. J. O'Neil, P. E. O'Neil, and G. Weikum, "The LRU-K Page Replacement Algorithm for Database Disk Buffering," Proc. ACM SIGMOD Int'l Conf. on Management of Data, 1993, pp. 297-306.
R. L. Mattson, J. Gecsei, D. R. Slutz, and I. L. Traiger, "Evaluation Techniques for Storage Hierarchies," IBM Systems Journal, vol. 9, no. 2, 1970, pp. 78-117.
J. T. Robinson and M. V. Devarakonda, "Data Cache Management Using Frequency-Based Replacement," Proc. of the ACM Conf. on Measurement and Modeling of Computer Systems, 1990, pp. 134-142.
Peter Buneman et al., "Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data," SIGMOD Record, vol. 22, issue 2, Jun. 1993, pp. 297-306.
R. Karedla, J. S. Love, and B. G. Wherry, "Caching Strategies to Improve Disk System Performance," IEEE Computer, Mar. 1994, pp. 38-46.
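
For context on the O'Neil et al. citation above, the following is a minimal, illustrative sketch of the LRU-K idea: evict the page whose K-th most recent reference is oldest, treating pages with fewer than K recorded references as having infinite backward K-distance. This is a from-scratch illustration only; it omits the paper's correlated-reference period and retained-history refinements, and the class and method names are invented for the example.

```python
import itertools
import math

class LRUK:
    """Illustrative LRU-K replacement: evict the page whose K-th most
    recent reference time is oldest; pages referenced fewer than K
    times have infinite backward K-distance and are evicted first."""

    def __init__(self, capacity, k=2):
        self.capacity = capacity
        self.k = k
        self.hist = {}                  # page -> last K reference times, oldest first
        self.clock = itertools.count()  # logical time source

    def reference(self, page):
        now = next(self.clock)
        if page not in self.hist and len(self.hist) >= self.capacity:
            # Backward K-distance key: the K-th most recent reference
            # time, or -inf when fewer than K references are recorded.
            victim = min(
                self.hist,
                key=lambda p: self.hist[p][0]
                if len(self.hist[p]) == self.k else -math.inf,
            )
            del self.hist[victim]
        times = self.hist.setdefault(page, [])
        times.append(now)
        if len(times) > self.k:
            times.pop(0)                # keep only the last K times
```

A driver as simple as `cache = LRUK(capacity=3, k=2)` followed by `for p in "ABCABD": cache.reference(p)` exercises the policy; with k=2, pages that have been touched twice recently survive eviction ahead of pages seen only once.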

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080294753A1 (en) * 2001-10-30 2008-11-27 Chung Keicy K Read-only storage device having network interface, a system including the device, and a method of distributing files over a network
US8145729B2 (en) 2001-10-30 2012-03-27 Chung Keicy K Read-only storage device having network interface, a system including the device, and a method of distributing files over a network
US8886768B2 (en) 2001-10-30 2014-11-11 Keicy K. Chung Read-only storage device having network interface, a system including the device and a method of distributing files over a network
US7444393B2 (en) * 2001-10-30 2008-10-28 Keicy K. Chung Read-only storage device having network interface, a system including the device, and a method of distributing files over a network
US20030084152A1 (en) * 2001-10-30 2003-05-01 Chung Keicy K. Read-only storage device having network interface, a system including the device, and a method of distributing files over a network
US10122792B2 (en) 2001-10-30 2018-11-06 Keicy Chung Read-only storage device having network interface, a system including the device and a method of distributing files over a network
US7076544B2 (en) * 2002-04-08 2006-07-11 Microsoft Corporation Caching techniques for streaming media
US20030217113A1 (en) * 2002-04-08 2003-11-20 Microsoft Corporation Caching techniques for streaming media
US6823428B2 (en) * 2002-05-17 2004-11-23 International Business Machines Corporation Preventing cache floods from sequential streams
US20030217230A1 (en) * 2002-05-17 2003-11-20 International Business Machines Corporation Preventing cache floods from sequential streams
US6918020B2 (en) * 2002-08-30 2005-07-12 Intel Corporation Cache management
US20040044861A1 (en) * 2002-08-30 2004-03-04 Cavallo Joseph S. Cache management
US20060026344A1 (en) * 2002-10-31 2006-02-02 Sun Hsu Windsor W Storage system and method for reorganizing data to improve prefetch effectiveness and reduce seek distance
US7076619B2 (en) * 2002-10-31 2006-07-11 International Business Machines Corporation Storage system and method for reorganizing data to improve prefetch effectiveness and reduce seek distance
US20040088415A1 (en) * 2002-11-06 2004-05-06 Oracle International Corporation Techniques for scalably accessing data in an arbitrarily large document by a device with limited resources
US20040205296A1 (en) * 2003-04-14 2004-10-14 Bearden Brian S. Method of adaptive cache partitioning to increase host I/O performance
US7058764B2 (en) * 2003-04-14 2006-06-06 Hewlett-Packard Development Company, L.P. Method of adaptive cache partitioning to increase host I/O performance
US7069351B2 (en) 2003-06-02 2006-06-27 Chung Keicy K Computer storage device having network interface
US20040243727A1 (en) * 2003-06-02 2004-12-02 Chung Keicy K. Computer storage device having network interface
US7107403B2 (en) * 2003-09-30 2006-09-12 International Business Machines Corporation System and method for dynamically allocating cache space among different workload classes that can have different quality of service (QoS) requirements where the system and method may maintain a history of recently evicted pages for each class and may determine a future cache size for the class based on the history and the QoS requirements
US20050071599A1 (en) * 2003-09-30 2005-03-31 Modha Dharmendra Shantilal Storage system and method for dynamically allocating cache space among different workload classes
US20050086436A1 (en) * 2003-10-21 2005-04-21 Modha Dharmendra S. Method and system of adaptive replacement cache with temporal filtering
US7058766B2 (en) * 2003-10-21 2006-06-06 International Business Machines Corporation Method and system of adaptive replacement cache with temporal filtering
US9485140B2 (en) 2004-06-30 2016-11-01 Google Inc. Automatic proxy setting modification
US20120271852A1 (en) * 2004-06-30 2012-10-25 Eric Russell Fredricksen System and Method of Accessing a Document Efficiently Through Multi-Tier Web Caching
US8639742B2 (en) 2004-06-30 2014-01-28 Google Inc. Refreshing cached documents and storing differential document content
US8676922B1 (en) 2004-06-30 2014-03-18 Google Inc. Automatic proxy setting modification
US8825754B2 (en) 2004-06-30 2014-09-02 Google Inc. Prioritized preloading of documents to client
US8788475B2 (en) * 2004-06-30 2014-07-22 Google Inc. System and method of accessing a document efficiently through multi-tier web caching
US8996653B1 (en) 2007-02-15 2015-03-31 Google Inc. Systems and methods for client authentication
US8812651B1 (en) 2007-02-15 2014-08-19 Google Inc. Systems and methods for client cache awareness
US7853759B2 (en) * 2007-04-23 2010-12-14 Microsoft Corporation Hints model for optimization of storage devices connected to host and write optimization schema for storage devices
US20080263259A1 (en) * 2007-04-23 2008-10-23 Microsoft Corporation Hints model for optimization of storage devices connected to host and write optimization schema for storage devices
US8769185B2 (en) 2007-10-23 2014-07-01 Keicy Chung Computer storage device having separate read-only space and read-write space, removable media component, system management interface, and network interface
US9292222B2 (en) 2007-10-23 2016-03-22 Keicy Chung Computer storage device having separate read-only space and read-write space, removable media component, system management interface, and network interface
US20090106480A1 (en) * 2007-10-23 2009-04-23 Keicy Chung Computer storage device having separate read-only space and read-write space, removable media component, system management interface, and network interface
US11061566B2 (en) 2007-10-23 2021-07-13 Keicy Chung Computing device
US10120572B2 (en) 2007-10-23 2018-11-06 Keicy Chung Computing device with a separate processor provided with management functionality through a separate interface with the interface bus
US8112588B2 (en) * 2009-02-26 2012-02-07 Red Hat, Inc. Sorting cache objects based on access counters reset upon garbage collection
US20100217938A1 (en) * 2009-02-26 2010-08-26 Schneider James P Method and an apparatus to improve locality of references for objects
US10346067B2 (en) 2014-07-08 2019-07-09 International Business Machines Corporation Multi-tier file storage management using file access and cache profile information
US9612964B2 (en) 2014-07-08 2017-04-04 International Business Machines Corporation Multi-tier file storage management using file access and cache profile information
US11061826B2 (en) 2018-06-26 2021-07-13 International Business Machines Corporation Integration of application indicated minimum time to cache to least recently used track demoting schemes in a cache management system of a storage controller
US11068417B2 (en) 2018-06-26 2021-07-20 International Business Machines Corporation Allocation of cache storage among applications that indicate minimum retention time for tracks in least recently used demoting schemes
US11068413B2 (en) 2018-06-26 2021-07-20 International Business Machines Corporation Allocation of cache storage among applications based on application priority and minimum retention time for tracks in least recently used demoting schemes
US11074197B2 (en) 2018-06-26 2021-07-27 International Business Machines Corporation Integration of application indicated minimum time to cache and maximum time to cache to least recently used track demoting schemes in a cache management system of a storage controller
US11144474B2 (en) 2018-06-26 2021-10-12 International Business Machines Corporation Integration of application indicated maximum time to cache to least recently used track demoting schemes in a cache management system of a storage controller
US11422948B2 (en) 2018-06-26 2022-08-23 International Business Machines Corporation Allocation of cache storage among applications that indicate minimum retention time for tracks in least recently used demoting schemes
US11461242B2 (en) 2018-06-26 2022-10-04 International Business Machines Corporation Integration of application indicated minimum time to cache and maximum time to cache to least recently used track demoting schemes in a cache management system of a storage controller
US11561905B2 (en) 2018-06-26 2023-01-24 International Business Machines Corporation Integration of application indicated minimum time to cache to least recently used track demoting schemes in a cache management system of a storage controller
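
The long US7107403B2 entry cited above describes allocating cache space among workload classes with different QoS requirements by keeping a history of recently evicted pages per class. The sketch below illustrates that general idea with per-class LRU partitions, "ghost" eviction histories, and a periodic rebalance toward the class furthest below its hit-ratio target. It is not the patented method; every name (ClassPartition, rebalance, the step parameter) is invented for this example.

```python
from collections import OrderedDict

class ClassPartition:
    """One workload class: an LRU partition plus a 'ghost' history of
    recently evicted keys that estimates the benefit of a larger share."""

    def __init__(self, name, target_hit_ratio, size):
        self.name = name
        self.target = target_hit_ratio   # QoS goal for this class
        self.size = size                 # current share of the cache, in entries
        self.lru = OrderedDict()         # resident entries, MRU at the end
        self.ghost = OrderedDict()       # recently evicted keys (history only)
        self.hits = self.misses = self.ghost_hits = 0

    def access(self, key, value=None):
        if key in self.lru:              # hit: refresh recency
            self.lru.move_to_end(key)
            self.hits += 1
            return self.lru[key]
        self.misses += 1
        if key in self.ghost:            # a larger partition would have hit
            self.ghost_hits += 1
            del self.ghost[key]
        self.lru[key] = value
        while len(self.lru) > self.size: # evict LRU entries into the history
            old, _ = self.lru.popitem(last=False)
            self.ghost[old] = None
            while len(self.ghost) > self.size:
                self.ghost.popitem(last=False)
        return value


def rebalance(classes, step=8):
    """Move capacity toward the class furthest below its QoS target,
    using ghost-list hits as evidence that more space would help."""
    def deficit(c):
        seen = c.hits + c.misses
        return c.target - (c.hits / seen if seen else c.target)

    needy = max(classes, key=lambda c: (deficit(c), c.ghost_hits))
    donor = min(classes, key=lambda c: deficit(c))
    if needy is not donor and donor.size > step:
        donor.size -= step
        needy.size += step
```

Run periodically (e.g. after every few thousand accesses), rebalance shrinks the partition of a class comfortably above its target and grows the one missing it, which is the same feedback loop, history plus QoS goal yielding a future cache size, that the cited title describes.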

Also Published As

Publication number Publication date
US20020156980A1 (en) 2002-10-24

Similar Documents

Publication Publication Date Title
US6745295B2 (en) Designing a cache with adaptive reconfiguration
US6792509B2 (en) Partitioned cache of multiple logical levels with adaptive reconfiguration based on multiple criteria
US6748491B2 (en) Designing a cache using an LRU-LFU array
EP2478442B1 (en) Caching data between a database server and a storage system
US6487638B2 (en) System and method for time weighted access frequency based caching for memory controllers
US7096321B2 (en) Method and system for a cache replacement technique with adaptive skipping
US6507893B2 (en) System and method for time window access frequency based caching for memory controllers
US7047366B1 (en) QOS feature knobs
US5734861A (en) Log-structured disk array with garbage collection regrouping of tracks to preserve seek affinity
US7979664B2 (en) Method, system, and article of manufacture for returning empty physical volumes to a storage pool based on a threshold and an elapsed time period
US6954768B2 (en) Method, system, and article of manufacture for managing storage pools
US8972661B2 (en) Dynamically adjusted threshold for population of secondary cache
US6385699B1 (en) Managing an object store based on object replacement penalties and reference probabilities
US7558919B1 (en) Dynamic cache partitioning
KR100338224B1 (en) A very efficient technique for dynamically tracking locality of a reference
US20030105926A1 (en) Variable size prefetch cache
US20050262326A1 (en) Method, system, and article of manufacture for borrowing physical volumes
US20030149843A1 (en) Cache management system with multiple cache lists employing roving removal and priority-based addition of cache entries
US7437515B1 (en) Data structure for write pending
US7039765B1 (en) Techniques for cache memory management using read and write operations
US6715039B1 (en) Cache slot promotion in a replacement queue cache using determinations of probabilities and costs
WO2014142337A1 (en) Storage device and method, and program
US7529891B2 (en) Balanced prefetching exploiting structured data
US6978349B1 (en) Adaptive cache memory management
CN117290261A (en) Data processing method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RODRIGUEZ, JORGE R.;REEL/FRAME:011724/0220

Effective date: 20010419

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:026894/0001

Effective date: 20110817

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044127/0735

Effective date: 20170929