Sahoo et al., 2018 - Google Patents
CAMO: A novel cache management organization for GPGPUs (Sahoo et al., 2018)
- Document ID
- 4298201270730674365
- Authors
- Sahoo D
- Sha S
- Satpathy M
- Mutyam M
- Bhuyan L
- Publication year
- 2018
- Publication venue
- 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC)
Snippet
GPGPUs are now commonly used as co-processors of CPUs for the computation of data-parallel and throughput-intensive algorithms. However, memory available in GPGPUs is limited for many applications of interest; there is a continuous demand for increased memory …
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0893—Caches characterised by their organisation or structure
- G06F12/0897—Caches characterised by their organisation or structure with two or more cache hierarchy levels
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0862—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0844—Multiple simultaneous or quasi-simultaneous cache accessing
- G06F12/0846—Cache with multiple tag or data arrays being simultaneously accessible
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for programme control, e.g. control unit
- G06F9/06—Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
- G06F9/30—Arrangements for executing machine-instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline, look ahead
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for programme control, e.g. control unit
- G06F9/06—Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/25—Using a specific main memory architecture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1012—Design facilitation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1028—Power efficiency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/885—Monitoring specific for caches
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—INDEXING SCHEME RELATING TO CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. INCLUDING HOUSING AND APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B60/00—Information and communication technologies [ICT] aiming at the reduction of own energy use
- Y02B60/10—Energy efficient computing
- Y02B60/12—Reducing energy-consumption at the single machine level, e.g. processors, personal computers, peripherals, power supply
- Y02B60/1225—Access, addressing or allocation within memory systems or architectures, e.g. to reduce power consumption or heat production, or to increase battery life
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F1/00—Details of data-processing equipment not covered by groups G06F3/00 - G06F13/00, e.g. cooling, packaging or power supply specially adapted for computer application
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
Similar Documents
Publication | Title
---|---
Li et al. | A framework for memory oversubscription management in graphics processing units
Zhang et al. | Victim replication: Maximizing capacity while hiding wire delay in tiled chip multiprocessors
Jevdjic et al. | Unison cache: A scalable and effective die-stacked DRAM cache
Zhao et al. | Exploring DRAM cache architectures for CMP server platforms
Sudan et al. | Micro-pages: increasing DRAM efficiency with locality-aware data placement
Kim et al. | An adaptive, non-uniform cache structure for wire-delay dominated on-chip caches
Yedlapalli et al. | Meeting midway: Improving CMP performance with memory-side prefetching
Bock et al. | Concurrent page migration for mobile systems with OS-managed hybrid memory
Cantin et al. | Stealth prefetching
Zhang et al. | Flashgpu: Placing new flash next to gpu cores
Kaseridis et al. | A bandwidth-aware memory-subsystem resource management using non-invasive resource profilers for large CMP systems
Sankaranarayanan et al. | An energy efficient GPGPU memory hierarchy with tiny incoherent caches
Valls et al. | PS-Dir: A scalable two-level directory cache
Mandke et al. | Adaptive power optimization of on-chip SNUCA cache on tiled chip multicore architecture using remap policy
Sahoo et al. | CAMO: A novel cache management organization for GPGPUs
Belayneh et al. | Locality-aware optimizations for improving remote memory latency in multi-GPU systems
Hameed et al. | Rethinking on-chip DRAM cache for simultaneous performance and energy optimization
Hughes et al. | Performance and energy implications of many-core caches for throughput computing
Patil et al. | HAShCache: Heterogeneity-aware shared DRAMCache for integrated heterogeneous systems
Ryoo et al. | i-mirror: A software managed die-stacked DRAM-based memory subsystem
Bray et al. | Write caches as an alternative to write buffers
Wang et al. | Incorporating selective victim cache into GPGPU for high-performance computing
Olorode et al. | Improving performance in sub-block caches with optimized replacement policies
Zhang et al. | Efficient data communication between CPU and GPU through transparent partial-page migration
Shoushtari et al. | SAM: software-assisted memory hierarchy for scalable manycore embedded systems