Although direct-mapped caches suffer from higher miss ratios than set-associative caches, they are attractive for today's high-speed pipelined processors that require very low access times. Victim caching was proposed by Jouppi (Jouppi-91) as an approach to improve the miss rate of direct-mapped caches without affecting their access time. This approach augments the direct-mapped main cache with a small fully-associative cache, called the victim cache, that stores cache blocks evicted from the main cache as a result of replacements. We propose and evaluate an improvement of this scheme, called "selective victim caching". In this scheme, incoming blocks into the first-level cache are placed selectively in the main cache or a small victim cache by the use of a prediction scheme based on their past history of use. In addition, interchanges of blocks between the main cache and the victim cache are also performed selectively. We show that the scheme results in significant improvements in miss rate as well as average memory access time, for both small and large caches (4 Kbytes -- 128 Kbytes). For example, simulations with 10 instruction traces from the SPEC '92 benchmark suite showed an average improvement of approximately 21 percent in miss rate over simple victim caching for a 16-Kbyte cache with a block size of 32 bytes; the number of blocks interchanged between the main and victim caches was reduced by approximately 70 percent. Implementation alternatives for the scheme in an on-chip processor cache are also described.
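The mechanism described above can be illustrated with a minimal sketch. Note the predictor here (a single per-block "reused" bit, consulted before placement and before an interchange) is a simplification introduced for illustration; the paper's actual prediction algorithm and state are not reproduced here, and all class and method names are hypothetical.

```python
class SelectiveVictimCache:
    """Illustrative sketch of selective victim caching.

    A direct-mapped main cache is backed by a small fully-associative
    victim cache. Unlike simple victim caching, placement and
    main/victim interchanges are gated by a prediction bit
    (an assumed stand-in for the paper's history-based predictor).
    """

    def __init__(self, main_sets, victim_size):
        self.main = [None] * main_sets   # direct-mapped: one block per set
        self.victim = []                 # small fully-associative, FIFO here
        self.victim_size = victim_size
        self.reused = {}                 # per-block prediction state (assumed)

    def access(self, block):
        idx = block % len(self.main)
        if self.main[idx] == block:          # main-cache hit
            self.reused[block] = True
            return "main hit"
        if block in self.victim:             # victim-cache hit
            was_reused = self.reused.get(block, False)
            self.reused[block] = True
            self.victim.remove(block)
            # Selective interchange: promote to the main cache only if
            # the block was already predicted useful.
            if was_reused:
                evicted = self.main[idx]
                self.main[idx] = block
                if evicted is not None:
                    self._put_victim(evicted)
            else:
                self._put_victim(block)
            return "victim hit"
        # Miss: selective placement of the incoming block. If the
        # conflicting resident block is predicted useful, the incoming
        # block goes to the victim cache instead of displacing it.
        self.reused.setdefault(block, False)
        if self.main[idx] is None or not self.reused.get(self.main[idx], False):
            evicted = self.main[idx]
            self.main[idx] = block
            if evicted is not None:
                self._put_victim(evicted)
        else:
            self._put_victim(block)
        return "miss"

    def _put_victim(self, block):
        self.victim.append(block)
        if len(self.victim) > self.victim_size:
            self.victim.pop(0)               # FIFO replacement
```

With this predictor, a conflicting block that shows no reuse is displaced on a miss, while a block with a reuse history keeps its main-cache slot; this is what reduces the number of main/victim interchanges relative to simple victim caching.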
Cited By
- Stiliadis D and Varma A (1997). Selective Victim Caching, IEEE Transactions on Computers, 46:5, (603-610), Online publication date: 1-May-1997.
- Yang L and Torrellas J, Optimizing primary data caches for parallel scientific applications, Proceedings of the 10th International Conference on Supercomputing, (141-148).