GB2250114A - Multiple processor data processing system with cache memory
- Publication number
- GB2250114A GB2250114A GB9200747A GB9200747A GB2250114A GB 2250114 A GB2250114 A GB 2250114A GB 9200747 A GB9200747 A GB 9200747A GB 9200747 A GB9200747 A GB 9200747A GB 2250114 A GB2250114 A GB 2250114A
- Authority
- GB
- United Kingdom
- Prior art keywords
- data
- cache memory
- way
- memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0844—Multiple simultaneous or quasi-simultaneous cache accessing
- G06F12/0846—Cache with multiple tag or data arrays being simultaneously accessible
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0864—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using pseudo-associative means, e.g. set-associative or hashing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0842—Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/126—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
- G06F12/127—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning using additional replacement algorithms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/60—Details of cache memory
- G06F2212/601—Reconfiguration of cache memory
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Description
2250114 MULTIPLE DATA PROCESSING SYSTEM

The present invention relates to a set-associative cache memory apparatus which is connected between a data processor and a main memory and is accessible in accordance with an access property.
Fig. 1 is the simplified block diagram of a conventional 4-way set-associative cache memory apparatus. The reference numeral 1 designates address data used for accessing the cache memory apparatus. Address data 1 is composed of address tag 11, entry address 12, and word address 13. Using entry address 12, the cache memory apparatus accesses the 4-way tag memories 2, 2, 2, 2 and data memories 3, 3, 3, 3. Tag memories 2, 2, 2, 2 are respectively provided with corresponding comparators 4, 4, 4, 4. Data memories 3, 3, 3, 3 are respectively provided with corresponding word selectors 5, 5, 5, 5. Each data memory 3 stores information of a plurality of words, and each word is provided with a corresponding address. Word selectors 5, 5, 5, 5 are respectively provided with corresponding way selectors 6, 6, 6, 6. Each way selector 6 outputs the content of the selected way to the data processor, the way being selected in accordance with the signal outputted from the corresponding comparator 4 according to its comparison result.
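As a rough illustration of how address data 1 splits into these three fields, the sketch below assumes a 32-bit address, 256 entries per way and 4 words per block; the patent does not specify any field widths, so the constants and names are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed geometry (not given in the patent): 4 words per block and
 * 256 entries per way, with addresses counted in words. */
#define WORD_BITS  2   /* word address 13 selects 1 of 4 words in a block */
#define ENTRY_BITS 8   /* entry address 12 selects 1 of 256 cache entries */

typedef struct {
    uint32_t tag;      /* address tag 11: the remaining upper bits */
    uint32_t entry;    /* entry address 12                         */
    uint32_t word;     /* word address 13                          */
} addr_fields;

static addr_fields split_address(uint32_t addr)
{
    addr_fields f;
    f.word  =  addr               & ((1u << WORD_BITS)  - 1u);
    f.entry = (addr >> WORD_BITS) & ((1u << ENTRY_BITS) - 1u);
    f.tag   =  addr >> (WORD_BITS + ENTRY_BITS);
    return f;
}

int main(void)
{
    addr_fields f = split_address(0x1234ABCDu);
    printf("tag=0x%X entry=0x%X word=%u\n", f.tag, f.entry, f.word);
    return 0;
}
```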
Next, the functional operation of the above conventional cache memory apparatus is described below.
On receipt of address data 1 from the data processor during a reading cycle, tag memories 2, 2, 2, 2 and data memories 3, 3, 3, 3 are accessed on the basis of entry address 12. The tag memory 2 of each way delivers the content of the accessed address to the corresponding comparator 4. The data memory 3 of each way delivers its content to the corresponding word selector 5. Each word selector 5 selects, from the delivered data, the content of the word corresponding to word address 13 of address data 1 before delivering the selected data to the way selector 6.
Each comparator 4 compares address tag 11 of address data 1 with the address tag delivered from the corresponding tag memory 2. If the two address tags coincide, in other words during a "cache hit", the comparator 4 outputs a coincidence signal to the corresponding way selector 6. On receipt of the coincidence signal, the way selector 6 outputs the content to the data processor.
Conversely, if the compared address tags do not coincide, in other words during a "cache miss", the cache memory apparatus accesses the main memory to read the data from the address of the main memory corresponding to address data 1, and then delivers the read-out data to the data processor. Using a least-recently-used (LRU) algorithm, the cache memory apparatus clears the memory region storing the least recently used data so that the data read out of the main memory can be stored in the cleared memory region.
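A minimal software model of this conventional read cycle is sketched below. The geometry, the valid bits, the age-counter LRU and the dummy main-memory fetch are all assumptions made only to keep the sketch self-contained; in hardware the four tag comparisons run in parallel rather than in a loop.

```c
#include <stdbool.h>
#include <stdint.h>

#define WAYS    4
#define ENTRIES 256
#define WORDS   4

typedef struct {
    bool     valid[WAYS][ENTRIES];
    uint32_t tag  [WAYS][ENTRIES];          /* tag memories 2  */
    uint32_t data [WAYS][ENTRIES][WORDS];   /* data memories 3 */
    uint8_t  age  [WAYS][ENTRIES];          /* 0 = most recently used */
} cache4;

/* Stand-in for reading a block from main memory; a real system would
 * fetch it over the bus. */
static void fetch_block(uint32_t tag, uint32_t entry, uint32_t out[WORDS])
{
    for (uint32_t i = 0; i < WORDS; i++)
        out[i] = (tag << 10) ^ (entry << 2) ^ i;   /* dummy contents */
}

static void mark_used(cache4 *c, uint32_t entry, int way)
{
    for (int w = 0; w < WAYS; w++)
        if (c->age[w][entry] < c->age[way][entry])
            c->age[w][entry]++;
    c->age[way][entry] = 0;
}

uint32_t cache_read(cache4 *c, uint32_t tag, uint32_t entry, uint32_t word)
{
    /* Comparators 4: check the accessed entry of every way. */
    for (int w = 0; w < WAYS; w++) {
        if (c->valid[w][entry] && c->tag[w][entry] == tag) {  /* cache hit */
            mark_used(c, entry, w);
            return c->data[w][entry][word];  /* word selector 5, way selector 6 */
        }
    }

    /* Cache miss: pick a victim way, preferring an empty way, otherwise
     * the least recently used one, then refill it from main memory. */
    int victim = 0;
    for (int w = 1; w < WAYS; w++) {
        if (!c->valid[victim][entry])
            break;
        if (!c->valid[w][entry] || c->age[w][entry] > c->age[victim][entry])
            victim = w;
    }

    fetch_block(tag, entry, c->data[victim][entry]);
    c->tag[victim][entry]   = tag;
    c->valid[victim][entry] = true;
    mark_used(c, entry, victim);
    return c->data[victim][entry][word];
}
```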
Fig. 2 is the simplified block diagram of a conventional data processing system incorporating a data-storing cache memory apparatus and an instruction-storing cache memory apparatus. The reference numeral 8 designates the data processing system. Data processor 81 is connected to main memory 83 through bus line 82. Instruction-storing cache memory 84a and data-storing cache memory 84b are both connected to the middle of bus line 82. Data processor 81 is connected to chip selecting circuit 86 through access-property signal line 85. Chip selecting circuit 86 is connected to the cache memories 84a and 84b through chip-selecting signal lines 87a and 87b.
Next, the functional operation of this conventional data processing system is described below.
During a data-writing cycle, in accordance with the access property data outputted from data processor 81, chip selecting circuit 86 outputs a chip-selecting signal. In accordance with the chip-selecting signal, either the instruction-storing cache memory 84a or the data-storing cache memory 84b is selected. Data processor 81 accesses whichever of cache memory 84a and cache memory 84b is selected by the chip-selecting signal so that an instruction or data can be fetched.
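The conventional selection therefore amounts to a two-way choice driven by the access property; the fragment below is a minimal sketch of that choice, with the enum, struct and function names assumed rather than taken from the patent.

```c
/* Chip selecting circuit 86, modelled as a function: the access property
 * carried on signal line 85 decides which of the two separate cache
 * memory apparatuses receives the access. */
typedef enum { ACCESS_INSTRUCTION, ACCESS_DATA } access_property;

struct cache;   /* opaque stand-in for a cache memory apparatus */

static struct cache *chip_select(access_property p,
                                 struct cache *icache_84a,
                                 struct cache *dcache_84b)
{
    return (p == ACCESS_INSTRUCTION) ? icache_84a : dcache_84b;
}
```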
As a result, with a conventional cache memory apparatus in a conventional data processing system, if data is to be cached in accordance with an attribute such as instruction or data, as many cache memory apparatuses as there are attributes must be provided, resulting in a large-sized data processing system. Furthermore, the volume of data stored for each attribute is variable. This in turn causes data to be stored unevenly, resulting in poor utilization of the cache memory apparatus itself.
The primary object of the invention is to overcome the problems mentioned above by providing a novel cache memory apparatus having memory regions allocated in accordance with the attribute of the information and capable of accessing the needed memory region by means of the attribute of the information.
The second object of the invention is to provide a novel set-associative cache memory apparatus which assigns the information to be stored to each way of a memory region composed of n ways (where n is greater than one) in accordance with the attribute of the information, so that the needed way can be accessed by the cache memory apparatus by means of the attribute of the information.
The third object of the invention is to provide a novel multiple data processing system which sets, inside the cache memory apparatus, the memory regions accessible by each of a plurality of data processors, so that the needed memory region can be accessed by information specifying any one of these data processors.
The above and further objects and features of the invention will more fully be apparent from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is the simplified block diagram of a conventional cache memory apparatus;
Fig. 2 is the simplified block diagram of a data processing system using a conventional cache memory apparatus;
Fig. 3 is the schematic block diagram of a cache memory apparatus embodying the invention; and
Fig. 4 is the simplified block diagram of a data processing system incorporating a cache memory apparatus embodying the invention.
Referring now to the accompanying drawings, preferred embodiments of the invention are described below.
Fig. 3 is the schematic block diagram of the 4-way set-associative cache memory apparatus related to the invention. The reference numeral 1 designates address data used for accessing the cache memory. Address data 1 is composed of address tag 11, entry address 12, and word address 13. In accordance with the attribute of the information, designation register 7 designates either a certain way to be cached or a certain way to which data read out of the main memory during a "cache miss" should be written. In accordance with entry address 12 of address data 1, the cache memory apparatus accesses the tag memory 2 and data memory 3 of the designated way. Each tag memory 2 is provided with the corresponding comparator 4, while each data memory 3 is provided with the corresponding word selector 5. Each data memory 3 stores information of a plurality of words, and an address is provided for each word. Each word selector 5 is provided with the corresponding way selector 6. The comparator 4 outputs a signal according to its comparison result, and in accordance with this signal, the way selector 6 outputs the selected content to the data processor.
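The patent does not say how designation register 7 encodes its designation; one straightforward reading, sketched below, is a per-attribute bit mask over the four ways that gates both the lookup and the refill. The type and enum names are hypothetical.

```c
#include <stdint.h>

#define WAYS 4

/* Attributes of the information; only instructions and data are named in
 * the text, further attributes could be added. */
typedef enum { ATTR_INSTRUCTION = 0, ATTR_DATA = 1, ATTR_COUNT } attribute;

/* Designation register 7: one way mask per attribute (assumed encoding).
 * Bit w set means way w may be searched and refilled for that attribute. */
typedef struct {
    uint8_t way_mask[ATTR_COUNT];
} designation_register;

static uint8_t designated_ways(const designation_register *r, attribute a)
{
    return r->way_mask[a];
}
```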
Fig. 4 is the simplified block diagram of the data processing system incorporating the cache memory apparatus related to the invention. The reference numeral 8 designates the data processing system. Data processor 81 is connected to main memory 83 through bus line 82. Cache memory apparatus 84 is connected to the middle of bus line 82. Data processor 81 is connected to cache memory apparatus 84 through access-property signal line 85.
Next, the operation of the data processing system related to the invention is described below.
The four ways are called A, B, C and D, for example. Assume that way A is used for dealing with instructions and ways B to D are used for dealing with data, in accordance with the attribute of the information, which includes instructions and data. Designation register 7 stores the ways usable for each attribute.
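In terms of the assumed bit-mask encoding sketched earlier, this example assignment reads as follows (way A is bit 0, way D is bit 3; the constant names are hypothetical).

```c
#include <stdint.h>

enum { WAY_A = 1u << 0, WAY_B = 1u << 1, WAY_C = 1u << 2, WAY_D = 1u << 3 };

/* Way A caches instructions; ways B, C and D cache data. */
static const uint8_t instruction_ways = WAY_A;                  /* 0x1 */
static const uint8_t data_ways        = WAY_B | WAY_C | WAY_D;  /* 0xE */
```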
To fetch an instruction during a reading cycle, data processor 81 accesses cache memory apparatus 84 while outputting the access property data. By referring to the delivered access property, cache memory apparatus 84 identifies that an instruction should be fetched, and then, on the basis of the address data 1 outputted from data processor 81, cache memory apparatus 84 accesses way A designated by register 7.
In accordance with entry address 12, which is a part of address data 1, tag memory 2 and data memory 3 respectively output their contents to comparator 4 and word selector 5 of way A. Comparator 4 then compares address tag 11 of address data 1 with the content delivered from tag memory 2. After completing the comparison, comparator 4 outputs either a "cache-hit" or a "cache-miss" signal to way selector 6 of way A.
From the data delivered by data memory 3, word selector 5 selects the content corresponding to word address 13 of address data 1 for delivery to way selector 6 of way A. On receipt of the "cache-hit" signal from comparator 4, way selector 6 delivers this content to data processor 81. Conversely, on receipt of the "cache-miss" signal from comparator 4, way selector 6 then accesses main memory 83. Cache memory apparatus 84 reads the needed data from the corresponding address of main memory 83 and delivers it to data processor 81. Cache memory apparatus 84 then stores the read-out data in way A. If there is no room in way A for storing the data, cache memory apparatus 84, in accordance with the LRU algorithm, clears the memory region storing the least recently used data before storing the data from main memory 83 into the cleared memory region.
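Putting the read cycle together, the sketch below parallels the earlier conventional-cache model but restricts both the tag comparison and the miss refill to the ways designated for the current attribute. The sizes, the mask encoding and the memory stub are assumptions, and the hardware would of course perform the comparisons in parallel.

```c
#include <stdbool.h>
#include <stdint.h>

#define WAYS    4
#define ENTRIES 256
#define WORDS   4

typedef struct {
    bool     valid[WAYS][ENTRIES];
    uint32_t tag  [WAYS][ENTRIES];          /* tag memories 2  */
    uint32_t data [WAYS][ENTRIES][WORDS];   /* data memories 3 */
    uint8_t  age  [WAYS][ENTRIES];          /* 0 = most recently used */
} cache84;

/* Stand-in for a block read from main memory 83 over bus line 82. */
static void fetch_block(uint32_t tag, uint32_t entry, uint32_t out[WORDS])
{
    for (uint32_t i = 0; i < WORDS; i++)
        out[i] = (tag << 10) ^ (entry << 2) ^ i;   /* dummy contents */
}

static void mark_used(cache84 *c, uint32_t entry, int way)
{
    for (int w = 0; w < WAYS; w++)
        if (c->age[w][entry] < c->age[way][entry])
            c->age[w][entry]++;
    c->age[way][entry] = 0;
}

/* way_mask comes from designation register 7, e.g. 0x1 for an instruction
 * fetch that may only use way A.  It must designate at least one way. */
uint32_t cache_read(cache84 *c, uint8_t way_mask,
                    uint32_t tag, uint32_t entry, uint32_t word)
{
    /* Compare tags only in the designated ways ("cache hit" path). */
    for (int w = 0; w < WAYS; w++) {
        if (!(way_mask & (1u << w)))
            continue;
        if (c->valid[w][entry] && c->tag[w][entry] == tag) {
            mark_used(c, entry, w);
            return c->data[w][entry][word];
        }
    }

    /* "Cache miss": choose a victim among the designated ways only,
     * preferring an empty way, otherwise the least recently used one. */
    int victim = -1;
    for (int w = 0; w < WAYS; w++) {
        if (!(way_mask & (1u << w)))
            continue;
        if (victim < 0 ||
            (c->valid[victim][entry] &&
             (!c->valid[w][entry] || c->age[w][entry] > c->age[victim][entry])))
            victim = w;
    }

    fetch_block(tag, entry, c->data[victim][entry]);
    c->tag[victim][entry]   = tag;
    c->valid[victim][entry] = true;
    mark_used(c, entry, victim);
    return c->data[victim][entry][word];    /* delivered to data processor 81 */
}
```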
The above embodiment provides way A for storing instructions and ways B to D for storing data, according to the attribute of the information. Needless to say, the ways can instead be allocated in proportion to the volume of information of each attribute.
Other functional operations of the cache memory apparatus related to the invention are described below.
When data processor 81 accesses main memory 83 for reading data, cache memory apparatus 84 immediately starts the caching operation without referring to the access property.
Only when comparator 4 identifies that the comparison of the address data has resulted in the "cache-miss" condition and the data has been read out of main memory 83 does cache memory apparatus 84 store the data from main memory 83 in the way designated by register 7 in accordance with the attribute of the information.
Next, another preferred embodiment, a multiple data processing system incorporating a plurality of central processing units (CPUs), is described below. In this embodiment, the specific ways accessible by each CPU are set in advance in the designation register. Using information which specifies the CPU, the needed way is accessed. Consequently, this preferred embodiment makes it possible to decrease the number of cache memory apparatuses used in a multiple data processing system. Furthermore, since the monitoring of memory accesses executed by the other CPUs is confined within the cache memory apparatus, the overall speed of data processing in the multiple data processing system is significantly improved.

As to the attribute designating a way, not only instructions and data but also an attribute such as the number of data blocks stored in one way, i.e. the number of words, for example, yields an effect equivalent to that achieved by the embodiments described above. This embodiment provides a constitution in which the designation register designates a specific way to be cached. The same effect can also be obtained with a constitution in which a specific way is designated by pin information.
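In the multiprocessor embodiment the same masking idea can be keyed by CPU identifier instead of (or in addition to) attribute; a minimal sketch follows, in which the two-CPU split and all names are assumed examples rather than values from the text.

```c
#include <stdint.h>

#define WAYS 4
#define CPUS 2

/* Designation register holding one way mask per CPU (assumed encoding):
 * bit w set means that CPU may search and refill way w. */
typedef struct {
    uint8_t cpu_way_mask[CPUS];
} cpu_designation_register;

static const cpu_designation_register reg = {
    .cpu_way_mask = {
        [0] = 0x3,   /* CPU 0: ways A and B */
        [1] = 0xC,   /* CPU 1: ways C and D */
    },
};

/* The mask returned here would be fed to the same masked lookup/refill
 * routine sketched earlier. */
static uint8_t ways_for_cpu(unsigned cpu_id)
{
    return reg.cpu_way_mask[cpu_id];
}
```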
As this invention may be embodied in several forms without departing from the essential characteristics thereof, the present embodiment is therefore illustrative and not restrictive.
Claims (1)
- CLAIMS
1. A multiple data processing system comprising a plurality of data processors and a cache memory apparatus having a plurality of memory regions, comprising: means for allocating an accessible memory region for each data processor; and means for accessing the needed memory region on the basis of information specifying each data processor.
2. A multiple data processing system comprising a plurality of data processors and a cache memory apparatus incorporating n ways (where n is more than one) of memory regions, comprising: means for allocating an accessible way for each data processor; and means for accessing the needed way on the basis of information specifying each data processor.
3. A system as set forth in claim 1 or 2, wherein the accessible way is designated by a register.
4. A system as set forth in claim 1 or 2, wherein the accessible way is designated by data from pins.
5. A multiple data processing system substantially as herein described with reference to Figure 4 of the accompanying drawings.
6. A cache memory apparatus which is provided with a plurality of sets of tag memories storing a part or all of address data and a plurality of sets of data memories storing the information stored at said address, and which accesses the needed tag memory with said address data, comprising: means for setting the information to be stored in each data memory in accordance with the attribute thereof; and means for accessing the needed tag memory and data memory in accordance with the attribute of the information.
7. A set-associative cache memory apparatus which is provided with n ways (where n is more than one) of tag memories storing a part of address data and data memories storing the information stored at said address, comprising: means for setting the information to be stored in each way of data memory in accordance with the attribute thereof; and means for accessing the needed way of tag memory and data memory in accordance with the attribute of the information.
8. A cache memory apparatus as set forth in claim 6 or 7, wherein said attribute of the information is substantially instructions and data.
9. A cache memory apparatus as set forth in claim 6, 7 or 8, wherein said attribute of the information is substantially the number of data to be managed in each entry.
10. A cache memory apparatus as set forth in any preceding claim, wherein the accessible way is designated by a register.
11. A cache memory apparatus as set forth in any of claims 6 to 9, wherein the accessible way is designated by data from pins.
12. A cache memory apparatus substantially as herein described with reference to Figure 3, with or without reference to Figure 4 of the accompanying drawings.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP63011224A JPH0727492B2 (en) | 1988-01-21 | 1988-01-21 | Buffer storage |
Publications (3)
Publication Number | Publication Date |
---|---|
GB9200747D0 GB9200747D0 (en) | 1992-03-11 |
GB2250114A true GB2250114A (en) | 1992-05-27 |
GB2250114B GB2250114B (en) | 1992-09-23 |
Family
ID=11771987
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB8901247A Expired - Fee Related GB2214336B (en) | 1988-01-21 | 1989-01-20 | Cache memory apparatus |
GB9200747A Expired - Fee Related GB2250114B (en) | 1988-01-21 | 1992-01-14 | Multiple data processing system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB8901247A Expired - Fee Related GB2214336B (en) | 1988-01-21 | 1989-01-20 | Cache memory apparatus |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPH0727492B2 (en) |
GB (2) | GB2214336B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2311880A (en) * | 1996-04-03 | 1997-10-08 | Advanced Risc Mach Ltd | Partitioned cache memory |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5553262B1 (en) * | 1988-01-21 | 1999-07-06 | Mitsubishi Electric Corp | Memory apparatus and method capable of setting attribute of information to be cached |
JPH01233537A (en) * | 1988-03-15 | 1989-09-19 | Toshiba Corp | Information processor provided with cache memory |
JPH0951997A (en) * | 1995-08-11 | 1997-02-25 | Sookoo Kk | Tool for drying wash |
GB9727485D0 (en) | 1997-12-30 | 1998-02-25 | Sgs Thomson Microelectronics | Processing a data stream |
US6748492B1 (en) | 2000-08-07 | 2004-06-08 | Broadcom Corporation | Deterministic setting of replacement policy in a cache through way selection |
US6848024B1 (en) | 2000-08-07 | 2005-01-25 | Broadcom Corporation | Programmably disabling one or more cache entries |
US6732234B1 (en) | 2000-08-07 | 2004-05-04 | Broadcom Corporation | Direct access mode for a cache |
US6748495B2 (en) | 2001-05-15 | 2004-06-08 | Broadcom Corporation | Random generator |
US7266587B2 (en) | 2002-05-15 | 2007-09-04 | Broadcom Corporation | System having interfaces, switch, and memory bridge for CC-NUMA operation |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB1555300A (en) * | 1977-10-06 | 1979-11-07 | Ibm | Data processing apparatus |
US4228503A (en) * | 1978-10-02 | 1980-10-14 | Sperry Corporation | Multiplexed directory for dedicated cache memory system |
EP0284751A2 (en) * | 1987-04-03 | 1988-10-05 | International Business Machines Corporation | Cache memory |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4264953A (en) * | 1979-03-30 | 1981-04-28 | Honeywell Inc. | Virtual cache |
DE3380645D1 (en) * | 1982-12-28 | 1989-11-02 | Ibm | Method and apparatus for controlling a single physical cache memory to provide multiple virtual caches |
JPS59213084A (en) * | 1983-05-16 | 1984-12-01 | Fujitsu Ltd | Buffer store control system |
JPS61199137A (en) * | 1985-02-28 | 1986-09-03 | Yokogawa Electric Corp | Microprocessor unit |
US4853846A (en) * | 1986-07-29 | 1989-08-01 | Intel Corporation | Bus expander with logic for virtualizing single cache control into dual channels with separate directories and prefetch for different processors |
-
1988
- 1988-01-21 JP JP63011224A patent/JPH0727492B2/en not_active Expired - Lifetime
-
1989
- 1989-01-20 GB GB8901247A patent/GB2214336B/en not_active Expired - Fee Related
-
1992
- 1992-01-14 GB GB9200747A patent/GB2250114B/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB1555300A (en) * | 1977-10-06 | 1979-11-07 | Ibm | Data processing apparatus |
US4228503A (en) * | 1978-10-02 | 1980-10-14 | Sperry Corporation | Multiplexed directory for dedicated cache memory system |
EP0284751A2 (en) * | 1987-04-03 | 1988-10-05 | International Business Machines Corporation | Cache memory |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2311880A (en) * | 1996-04-03 | 1997-10-08 | Advanced Risc Mach Ltd | Partitioned cache memory |
US5875465A (en) * | 1996-04-03 | 1999-02-23 | Arm Limited | Cache control circuit having a pseudo random address generator |
Also Published As
Publication number | Publication date |
---|---|
GB2214336A (en) | 1989-08-31 |
JPH01187650A (en) | 1989-07-27 |
GB2214336B (en) | 1992-09-23 |
JPH0727492B2 (en) | 1995-03-29 |
GB2250114B (en) | 1992-09-23 |
GB9200747D0 (en) | 1992-03-11 |
GB8901247D0 (en) | 1989-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5091851A (en) | Fast multiple-word accesses from a multi-way set-associative cache memory | |
US5689679A (en) | Memory system and method for selective multi-level caching using a cache level code | |
US5133058A (en) | Page-tagging translation look-aside buffer for a computer memory system | |
US5371870A (en) | Stream buffer memory having a multiple-entry address history buffer for detecting sequential reads to initiate prefetching | |
EP0042000B1 (en) | Cache memory in which the data block size is variable | |
US4332010A (en) | Cache synonym detection and handling mechanism | |
CA1255395A (en) | Simplified cache with automatic update | |
US5155832A (en) | Method to increase performance in a multi-level cache system by the use of forced cache misses | |
US5461718A (en) | System for sequential read of memory stream buffer detecting page mode cycles availability fetching data into a selected FIFO, and sending data without aceessing memory | |
KR920005280B1 (en) | High speed cache system | |
CA2020275C (en) | Apparatus and method for reading, writing, and refreshing memory with direct virtual or physical access | |
US6047357A (en) | High speed method for maintaining cache coherency in a multi-level, set associative cache hierarchy | |
US4471429A (en) | Apparatus for cache clearing | |
US5070502A (en) | Defect tolerant set associative cache | |
US5018061A (en) | Microprocessor with on-chip cache memory with lower power consumption | |
GB2311880A (en) | Partitioned cache memory | |
EP0470739B1 (en) | Method for managing a cache memory system | |
GB2250114A (en) | Multiple processor data processing system with cache memory | |
US5452418A (en) | Method of using stream buffer to perform operation under normal operation mode and selectively switching to test mode to check data integrity during system operation | |
EP0180369B1 (en) | Cache memory addressable by both physical and virtual addresses | |
US5619673A (en) | Virtual access cache protection bits handling method and apparatus | |
EP0470736B1 (en) | Cache memory system | |
EP0474356A1 (en) | Cache memory and operating method | |
US5749092A (en) | Method and apparatus for using a direct memory access unit and a data cache unit in a microprocessor | |
US6826655B2 (en) | Apparatus for imprecisely tracking cache line inclusivity of a higher level cache |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
746 | Register noted 'licences of right' (sect. 46/1977) | Effective date: 19951107 |
PCNP | Patent ceased through non-payment of renewal fee | Effective date: 20000120 |