GB2214336A - Cache memory apparatus - Google Patents
Cache memory apparatus
- Publication number
- GB2214336A (application GB8901247A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- data
- cache memory
- memory
- way
- address
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0844—Multiple simultaneous or quasi-simultaneous cache accessing
- G06F12/0846—Cache with multiple tag or data arrays being simultaneously accessible
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0864—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using pseudo-associative means, e.g. set-associative or hashing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0842—Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/126—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
- G06F12/127—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning using additional replacement algorithms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/60—Details of cache memory
- G06F2212/601—Reconfiguration of cache memory
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
A cache memory apparatus allocates tag memory regions 2 and data memory regions 3 on the basis of an attribute of the information to be cached. The needed memory region according to the attribute (e.g. instruction, data, or quantity) is then accessed before caching is implemented. Accordingly, data processing systems incorporating the apparatus dispense with the need for a plurality of cache memory apparatuses corresponding to the number of attributes, thus saving chips and providing compact systems. A multiple processor system allocates cache memory regions to each data processor and accesses the needed memory region in accordance with information specifying a given data processor before caching is implemented. Consequently, the need to provide a plurality of cache memory apparatuses corresponding to the number of data processors is avoided.
Description
CACHE MEMORY APPARATUS
The present invention relates to a set-associative cache memory apparatus which is connected between a data processor and a main memory and is accessed in accordance with an access property.
Fig. 1 is the simplified block diagram of a conventional 4-way set-associative cache memory apparatus. The reference numeral 1 designates address data used for accessing the cache memory apparatus. Address data 1 is composed of address tag 11, entry address 12, and word address 13. Using entry address 12, the cache memory apparatus accesses 4-way tag memories 2, 2, 2, 2 and data memories 3, 3, 3, 3. Tag memories 2, 2, 2, 2 are respectively provided with corresponding comparators 4, 4, 4, 4. Data memories 3, 3, 3, 3 are respectively provided with corresponding word selectors 5, 5, 5, 5. Each data memory 3 stores information of a plurality of words, and each word is provided with a corresponding address. Word selectors 5, 5, 5, 5 are respectively provided with corresponding way selectors 6, 6, 6, 6. Each way selector 6 outputs the content of the selected way to the data processor, where the way is selected in accordance with signals output from the comparators 4 according to their comparison results.
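For illustration, the decomposition of address data 1 can be sketched as a simple bit-field split. The field widths below are assumptions chosen for the example, since the patent does not specify them:

```python
# Illustrative only: the patent gives no field widths, so the sizes below
# (4 words per block, 256 entries) are assumptions made for this sketch.
WORD_BITS = 2      # assumed: 4 words per cache block -> 2-bit word address 13
ENTRY_BITS = 8     # assumed: 256 entries -> 8-bit entry address 12

def split_address(addr: int):
    """Split an address into (address tag 11, entry address 12, word address 13)."""
    word = addr & ((1 << WORD_BITS) - 1)
    entry = (addr >> WORD_BITS) & ((1 << ENTRY_BITS) - 1)
    tag = addr >> (WORD_BITS + ENTRY_BITS)
    return tag, entry, word

print(split_address(0x12345))  # (tag, entry, word) for a sample address
```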
Next, functional operation of the above conventional cache memory apparatus is described below.
On receipt of address data 1 from the data processor during a read cycle, tag memories 2, 2, 2, 2 and data memories 3, 3, 3, 3 are accessed on the basis of entry address 12. The tag memory 2 of each way delivers the content of the accessed address to the corresponding comparator 4. The data memory 3 of each way delivers its content to the corresponding word selector 5.
Each word selector 5 selects, from the delivered data, the content of the word corresponding to word address 13 of address data 1, and delivers the selected data to way selector 6.
Each comparator 4 compares address tag 11 of address data 1 with the address tag delivered from the corresponding tag memory 2. If these address tags coincide with each other, in other words during a "cache hit", comparator 4 outputs a coincidence signal to the corresponding way selector 6. On receipt of the coincidence signal, way selector 6 outputs the content to the data processor.
Conversely, if the compared address tags do not coincide, in other words during a "cache miss", the cache memory apparatus accesses main memory to read the data from the address of main memory corresponding to address data 1, and then delivers the read-out data to the data processor. Using a least-recently-used (LRU) algorithm, the cache memory apparatus clears the memory region storing the least recently used data so that the data read out of main memory can be stored in the cleared memory region.
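The read cycle just described (parallel tag comparison across the four ways, word selection on a hit, main-memory fetch and LRU replacement on a miss) can be sketched behaviourally as follows. The sizes, the dictionary standing in for main memory, and the list-based LRU bookkeeping are assumptions made for the sketch, not details taken from the patent:

```python
# Minimal behavioural model of the conventional 4-way cache of Fig. 1.
WAYS = 4               # assumed number of ways
ENTRIES = 256          # assumed number of entries (sets)
WORDS_PER_BLOCK = 4    # assumed block size in words

class SetAssocCache:
    def __init__(self, main_memory):
        self.main_memory = main_memory            # word address -> word value
        # each set holds up to WAYS (tag, block) pairs, most recently used last
        self.sets = [[] for _ in range(ENTRIES)]

    def read(self, addr):
        word = addr % WORDS_PER_BLOCK
        entry = (addr // WORDS_PER_BLOCK) % ENTRIES
        tag = addr // (WORDS_PER_BLOCK * ENTRIES)
        ways = self.sets[entry]
        for i, (t, block) in enumerate(ways):
            if t == tag:                           # comparator signals a cache hit
                ways.append(ways.pop(i))           # mark this way as most recently used
                return block[word]
        # cache miss: fetch the whole block from main memory
        base = addr - word
        block = [self.main_memory[base + w] for w in range(WORDS_PER_BLOCK)]
        if len(ways) == WAYS:
            ways.pop(0)                            # evict the least recently used way
        ways.append((tag, block))
        return block[word]

memory = {a: a * 10 for a in range(4096)}          # toy main memory
cache = SetAssocCache(memory)
assert cache.read(0x123) == memory[0x123]           # miss: fetched from main memory
assert cache.read(0x123) == memory[0x123]           # hit: returned from the cached way
```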
Fig. 2 is the simplified block diagram of a conventional data processor incorporating data-storing cache memory apparatus and instruction-storing cache memory apparatus.
The reference numeral 8 designates data processing system.
Data processor 81 is connected to main memory 83 through bus line 82. Instruction-storing cache memory 84a and data-storing cache memory 84b are respectively connected to the middle of bus line 82. Data processor 81 is connected to chip-selecting circuit 86 through access-property signal line 85.
Chip selecting circuit 86 is connected to those cache memories 84a and 84b mentioned above through chip-selecting signal lines 87a and 87b.
Next, functional operation of this conventional data processor is described below.
During a data-writing cycle, in accordance with the access property data output from data processor 81, chip-selecting circuit 86 outputs a chip-selecting signal. In accordance with the chip-selecting signal, either instruction-storing cache memory 84a or data-storing cache memory 84b is selected. Data processor 81 accesses whichever of cache memory 84a or cache memory 84b is selected by the chip-selecting signal, so that an instruction or data can be fetched.
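As a rough sketch of this conventional arrangement (the labels and the dictionaries standing in for the two cache chips are assumptions, not part of the patent), chip-selecting circuit 86 can be modelled as a function that enables exactly one of two separate caches per access:

```python
# Assumed labels for the access property signal on line 85.
INSTRUCTION, DATA = "instruction", "data"

def chip_select(access_property, icache, dcache):
    """Model of chip-selecting circuit 86: enable exactly one cache chip."""
    return icache if access_property == INSTRUCTION else dcache

# toy usage with plain dictionaries standing in for cache chips 84a and 84b
icache, dcache = {"name": "84a"}, {"name": "84b"}
assert chip_select(INSTRUCTION, icache, dcache)["name"] == "84a"
assert chip_select(DATA, icache, dcache)["name"] == "84b"
```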
As a result, with any conventional cache memory apparatus of a conventional data processing system, if data is to be cached in accordance with an attribute such as instructions or data, a number of cache memory apparatuses corresponding to the number of attributes must be provided, resulting in a large data processing system. Furthermore, the volume of data stored for each attribute varies, which causes data to be stored unevenly and results in poor utilization of the cache memory apparatus itself.
The primary object of the invention is to overcome the problems mentioned above by providing a novel cache memory apparatus having memory regions allocated in accordance with the attribute of the information and capable of accessing the needed memory region according to the attribute of the information.
The second object of the invention is to provide a novel set-associative cache memory apparatus which assigns the information to be stored to each way of a memory region composed of n ways (where n is more than one) in accordance with the attribute of the information, so that the needed way can be accessed by the cache memory apparatus according to the attribute of the information.
The third object of the invention is to provide a novel multiple data processing system which sets, inside the cache memory apparatus, memory regions accessible by each of a plurality of data processors, so that the needed memory region can be accessed on the basis of information specifying any of these data processors.
The above and further objects and features of the invention will more fully be apparent from the following detailed description with accompanying drawings, in which:
Fig. 1 is the simplified block diagram of a conventional cache memory apparatus;
Fig. 2 is the simplified block diagram of a data processing system using a conventional cache memory apparatus;
Fig. 3 is the schematic block diagram of a cache memory apparatus embodying the invention; and
Fig. 4 is the simplified block diagram of a data processing system incorporating a cache memory apparatus embodying the invention.
Referring now to the accompanying drawings, preferred embodiments of the invention are described below.
Fig. 3 is the schematic block diagram of the 4-way set-associative cache memory apparatus related to the invention.
The reference numeral 1 designates address data used for accessing the cache memory. Address data 1 is composed of address tag 11, entry address 12, and word address 13. In accordance with the attribute of the information, designation register 7 designates either a certain way to be cached or a certain way to which data read out of main memory during a "cache miss" should be written. In accordance with entry address 12 of address data 1, the cache memory apparatus accesses tag memory 2 and data memory 3 of the designated way.
Each tag memory 2 is provided with the corresponding comparator 4, while each data memory 3 is provided with the corresponding word selector 5. Each data memory 3 stores information of a plurality of words, and an address is provided for each word. Each word selector 5 is provided with the corresponding way selector 6. Comparator 4 outputs a signal according to its comparison result, and in accordance with this signal, way selector 6 outputs the content of the selected way to the data processor.
Fig. 4 is the simplified block diagram of the data processing system incorporating cache memory apparatus related to the invention. The reference numeral 8 designates data processing system. Data processor 81 is connected to main memory 83 through bus line 82. Cache memory 84 is connected to the middle of bus line 82. Data processor 81 is connected to cache memory apparatus 84 through access-property signal line 85.
Next, operation of the data processing system related to the invention is described below.
The four ways are called A, B, C and D, for example. Assume that way A is used for dealing with instructions and ways B to D are used for dealing with data, in accordance with the attribute of the information, which includes instructions and data. Designation register 7 stores the ways usable for each attribute.
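Designation register 7 can be pictured as a small table from attribute to the set of usable ways. The dictionary below is only an illustrative sketch of that idea, using the A-to-D way names of this embodiment:

```python
# Sketch of designation register 7 (the dictionary form is an assumption
# of this sketch, not a description of the actual register hardware).
designation_register = {
    "instruction": ["A"],            # instructions may only use way A
    "data": ["B", "C", "D"],         # data may use ways B to D
}
```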
To fetch an instruction during a read cycle, data processor 81 accesses cache memory apparatus 84 while outputting the access property data. By referring to the delivered access property, cache memory apparatus 84 identifies that an instruction is to be fetched, and then, on the basis of address data 1 output from data processor 81, accesses way A designated by register 7.
In accordance with entry address 12, which is a part of address data 1, tag memory 2 and data memory 3 of way A respectively output their contents to comparator 4 and word selector 5 of way A. Comparator 4 then compares address tag 11 of address data 1 with the address tag delivered from tag memory 2. After completing the comparison, comparator 4 outputs either a "cache-hit" or a "cache-miss" signal to way selector 6 of way A.
By referring to the data from data memory 3, word selector 5 selects the content corresponding to word address 13 of address data 1 and delivers it to way selector 6 of way A. On receipt of a "cache-hit" signal from comparator 4, way selector 6 delivers this content to data processor 81. Conversely, on receipt of a "cache-miss" signal from comparator 4, way selector 6 then accesses main memory 83. Cache memory apparatus 84 then reads the needed data from the corresponding address of main memory 83 and delivers it to data processor 81.
Cache memory apparatus 84 then stores the read-out data in way A. If there is no room in way A for storing the data, cache memory apparatus 84, in accordance with the LRU algorithm, clears the memory region storing the least recently used data before storing the data from main memory 83 into the cleared memory region.
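A minimal behavioural sketch of this read sequence is given below. It models one word per entry (block and word-selector handling is omitted), uses assumed sizes, and restricts both lookup and LRU replacement to the ways designated for the access attribute; it illustrates the way-partitioning idea rather than the patented circuit itself:

```python
# Sketch of the embodiment of Fig. 3: a single 4-way cache whose lookups and
# replacements are confined to the ways designated for the access attribute.
ENTRIES = 256                       # assumed number of entries
WAY_NAMES = ["A", "B", "C", "D"]

class PartitionedCache:
    def __init__(self, main_memory, designation_register):
        self.main_memory = main_memory
        self.designation = designation_register          # attribute -> list of ways
        # tags[way][entry] and data[way][entry] model tag memory 2 and data memory 3
        self.tags = {w: [None] * ENTRIES for w in WAY_NAMES}
        self.data = {w: [None] * ENTRIES for w in WAY_NAMES}
        self.lru = {w: [0] * ENTRIES for w in WAY_NAMES}  # larger = more recently used
        self.clock = 0

    def read(self, addr, attribute):
        self.clock += 1
        entry, tag = addr % ENTRIES, addr // ENTRIES
        allowed = self.designation[attribute]            # ways designated by register 7
        for w in allowed:
            if self.tags[w][entry] == tag:               # comparator 4 reports a hit
                self.lru[w][entry] = self.clock
                return self.data[w][entry]
        # cache miss: read from main memory 83, then store into the LRU allowed way
        value = self.main_memory[addr]
        victim = min(allowed, key=lambda w: self.lru[w][entry])
        self.tags[victim][entry] = tag
        self.data[victim][entry] = value
        self.lru[victim][entry] = self.clock
        return value

memory = {a: a * 10 for a in range(4096)}
cache = PartitionedCache(memory, {"instruction": ["A"], "data": ["B", "C", "D"]})
assert cache.read(0x40, "instruction") == memory[0x40]   # instructions only touch way A
assert cache.read(0x40, "data") == memory[0x40]          # data only touches ways B to D
```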
The above embodiment provides way A for storing instructions and ways B to D for storing data, according to the attribute of the information. Needless to say, these ways can be allocated in proportion to the volume of information of each attribute.
Other functional operations of the cache memory apparatus related to the invention are described below.
When data processor 81 accesses main memory 83 to read data, cache memory apparatus 84 instantly starts the caching operation without referring to the access property.
Only when comparator 4 identifies that the comparison of the address data has resulted in a "cache-miss" condition, and the data has already been read out of main memory 83, does cache memory apparatus 84 store the data from main memory 83 in the way designated by register 7 in accordance with the attribute of the information.
Next, another preferred embodiment, a multiple data processing system incorporating a plurality of central processing units (CPUs), is described below. In this embodiment, the specific ways accessible by each CPU are set in advance in the designation register. Using information which specifies the CPU, the needed way is accessed. Consequently, this preferred embodiment decreases the number of cache memory apparatuses used in the multiple data processing system. Furthermore, since the range of the process of monitoring memory accesses executed by other CPUs is confined within the cache memory apparatus, the speed of the total data processing operation of the multiple data processing system is significantly improved. As for the attribute designating a way, not only instructions and data but also an attribute such as the number of data blocks stored in one way, i.e. the number of words, for example, produces an effect identical to that achieved in the embodiments cited above. This embodiment provides a constitution allowing the designation register to designate a specific way to be cached. An identical effect can also be obtained by a constitution which designates a specific way using pin information.
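A sketch of this multiprocessor variant, reusing the partitioned-cache idea above with an assumed two-CPU way split (the CPU names and the way assignment are illustrative assumptions):

```python
# The designation register now keys on an identifier of the requesting CPU
# instead of on an instruction/data attribute.
cpu_designation = {
    "CPU0": ["A", "B"],   # assumed: processor 0 is confined to ways A and B
    "CPU1": ["C", "D"],   # assumed: processor 1 is confined to ways C and D
}
# With PartitionedCache from the earlier sketch, one cache chip serves both
# processors while each is confined to its own ways, e.g.:
#   shared = PartitionedCache(memory, cpu_designation)
#   shared.read(addr, "CPU0")   # only ways A and B are searched or replaced
```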
As this invention may be embodied in several forms without departing from the essential characteristics thereof, the present embodiment is therefore illustrative and not restrictive.
Claims (12)
1. A cache memory apparatus which is provided with a plurality of sets of tag memories storing a part or all of address data and a plurality of sets of data memories storing information stored at said addresses, and which accesses the needed tag memory with said address data, comprising:
means for setting information to be stored in each data memory in accordance with the attribute thereof; and
means for accessing the needed tag memory and data memory in accordance with the attribute of the information.
2. A set-associative cache memory apparatus which is provided with n ways (where n is more than one) of tag memories storing a part of address data and data memories storing information stored at said addresses, comprising:
means for setting information to be stored in each way of data memory in accordance with the attribute thereof; and
means for accessing the needed way of tag memory and data memory in accordance with the attribute of the information.
3. A cache memory apparatus as set forth in Claim 1 or 2, wherein said attribute of the information is substantially instructions and data.
4. A cache memory apparatus as set forth in Claim 1, 2 or 3 wherein said attribute of the information is substantially the number of data to be managed in each entry.
5. A cache memory apparatus as set forth in any preceding Claim wherein the accessible way is designated by register.
6. A cache memory apparatus as set forth in any of
Claims 1 to 4, wherein the accessible way is designated by data from pins.
7. A cache memory apparatus substantially as herein described with reference to Figure 3 with or without reference to Figure 4 of the accompanying drawings.
8. A multiple data processing system comprising a plurality of data processors and a cache memory apparatus having a plurality of memory regions, comprising:
means for allocating an accessible memory region for each data processor; and
means for accessing the needed memory region on the basis of information specifying each data processor.
9. A multiple data processing system comprising a plurality of data processors and a cache memory apparatus incorporating n ways (where n is more than one) of memory regions, comprising:
means for allocating an accessible way for each data processor; and
means for accessing the needed way on the basis of information specifying each data processor.
10. A system as set forth in Claim 8 or 9, wherein the accessible way is designated by register.
11. A system as set forth in Claim 8 or 9 wherein the accessible way is designated by data from pins.
12. A multiple data processing system substantially as herein described with reference to Figure 3 with or without reference to Figure 4 of the accompanying drawings.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP63011224A JPH0727492B2 (en) | 1988-01-21 | 1988-01-21 | Buffer storage |
Publications (3)
Publication Number | Publication Date |
---|---|
GB8901247D0 GB8901247D0 (en) | 1989-03-15 |
GB2214336A true GB2214336A (en) | 1989-08-31 |
GB2214336B GB2214336B (en) | 1992-09-23 |
Family
ID=11771987
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB8901247A Expired - Fee Related GB2214336B (en) | 1988-01-21 | 1989-01-20 | Cache memory apparatus |
GB9200747A Expired - Fee Related GB2250114B (en) | 1988-01-21 | 1992-01-14 | Multiple data processing system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB9200747A Expired - Fee Related GB2250114B (en) | 1988-01-21 | 1992-01-14 | Multiple data processing system |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPH0727492B2 (en) |
GB (2) | GB2214336B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2311880A (en) * | 1996-04-03 | 1997-10-08 | Advanced Risc Mach Ltd | Partitioned cache memory |
EP0927937A1 (en) * | 1997-12-30 | 1999-07-07 | STMicroelectronics Limited | Method and computer system for processing a data stream |
EP1179781A2 (en) * | 2000-08-07 | 2002-02-13 | Broadcom Corporation | Programmably disabling one or more cache entries |
US6732234B1 (en) | 2000-08-07 | 2004-05-04 | Broadcom Corporation | Direct access mode for a cache |
US6748492B1 (en) | 2000-08-07 | 2004-06-08 | Broadcom Corporation | Deterministic setting of replacement policy in a cache through way selection |
US6748495B2 (en) | 2001-05-15 | 2004-06-08 | Broadcom Corporation | Random generator |
US6988168B2 (en) | 2002-05-15 | 2006-01-17 | Broadcom Corporation | Cache programmable to partition ways to agents and/or local/remote blocks |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5553262B1 (en) * | 1988-01-21 | 1999-07-06 | Mitsubishi Electric Corp | Memory apparatus and method capable of setting attribute of information to be cached |
JPH01233537A (en) * | 1988-03-15 | 1989-09-19 | Toshiba Corp | Information processor provided with cache memory |
JPH0951997A (en) * | 1995-08-11 | 1997-02-25 | Sookoo Kk | Tool for drying wash |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2045483A (en) * | 1979-03-30 | 1980-10-29 | Honeywell Inc | Control system with virtual cache |
EP0114944A2 (en) * | 1982-12-28 | 1984-08-08 | International Business Machines Corporation | Method and apparatus for controlling a single physical cache memory to provide multiple virtual caches |
EP0125855A2 (en) * | 1983-05-16 | 1984-11-21 | Fujitsu Limited | Buffer-storage control system |
GB2193356A (en) * | 1986-07-29 | 1988-02-03 | Intel Corp | Cache directory and control |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4136386A (en) * | 1977-10-06 | 1979-01-23 | International Business Machines Corporation | Backing store access coordination in a multi-processor system |
US4228503A (en) * | 1978-10-02 | 1980-10-14 | Sperry Corporation | Multiplexed directory for dedicated cache memory system |
JPS61199137A (en) * | 1985-02-28 | 1986-09-03 | Yokogawa Electric Corp | Microprocessor unit |
JPS63257853A (en) * | 1987-04-03 | 1988-10-25 | インターナシヨナル・ビジネス・マシーンズ・コーポレーシヨン | Cash memory system |
- 1988: 1988-01-21 JP JP63011224A patent/JPH0727492B2/en not_active Expired - Lifetime
- 1989: 1989-01-20 GB GB8901247A patent/GB2214336B/en not_active Expired - Fee Related
- 1992: 1992-01-14 GB GB9200747A patent/GB2250114B/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2045483A (en) * | 1979-03-30 | 1980-10-29 | Honeywell Inc | Control system with virtual cache |
EP0114944A2 (en) * | 1982-12-28 | 1984-08-08 | International Business Machines Corporation | Method and apparatus for controlling a single physical cache memory to provide multiple virtual caches |
EP0125855A2 (en) * | 1983-05-16 | 1984-11-21 | Fujitsu Limited | Buffer-storage control system |
GB2193356A (en) * | 1986-07-29 | 1988-02-03 | Intel Corp | Cache directory and control |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2311880A (en) * | 1996-04-03 | 1997-10-08 | Advanced Risc Mach Ltd | Partitioned cache memory |
US5875465A (en) * | 1996-04-03 | 1999-02-23 | Arm Limited | Cache control circuit having a pseudo random address generator |
EP0927937A1 (en) * | 1997-12-30 | 1999-07-07 | STMicroelectronics Limited | Method and computer system for processing a data stream |
US6324632B1 (en) | 1997-12-30 | 2001-11-27 | Stmicroelectronics Limited | Processing a data stream |
US6732234B1 (en) | 2000-08-07 | 2004-05-04 | Broadcom Corporation | Direct access mode for a cache |
EP1179781A3 (en) * | 2000-08-07 | 2003-04-23 | Broadcom Corporation | Programmably disabling one or more cache entries |
EP1179781A2 (en) * | 2000-08-07 | 2002-02-13 | Broadcom Corporation | Programmably disabling one or more cache entries |
US6748492B1 (en) | 2000-08-07 | 2004-06-08 | Broadcom Corporation | Deterministic setting of replacement policy in a cache through way selection |
US6848024B1 (en) | 2000-08-07 | 2005-01-25 | Broadcom Corporation | Programmably disabling one or more cache entries |
US6961824B2 (en) | 2000-08-07 | 2005-11-01 | Broadcom Corporation | Deterministic setting of replacement policy in a cache |
US7177986B2 (en) | 2000-08-07 | 2007-02-13 | Broadcom Corporation | Direct access mode for a cache |
US7228386B2 (en) | 2000-08-07 | 2007-06-05 | Broadcom Corporation | Programmably disabling one or more cache entries |
US6748495B2 (en) | 2001-05-15 | 2004-06-08 | Broadcom Corporation | Random generator |
US7000076B2 (en) | 2001-05-15 | 2006-02-14 | Broadcom Corporation | Random generator |
US6988168B2 (en) | 2002-05-15 | 2006-01-17 | Broadcom Corporation | Cache programmable to partition ways to agents and/or local/remote blocks |
Also Published As
Publication number | Publication date |
---|---|
GB2250114A (en) | 1992-05-27 |
JPH01187650A (en) | 1989-07-27 |
GB2214336B (en) | 1992-09-23 |
JPH0727492B2 (en) | 1995-03-29 |
GB2250114B (en) | 1992-09-23 |
GB9200747D0 (en) | 1992-03-11 |
GB8901247D0 (en) | 1989-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5813031A (en) | Caching tag for a large scale cache computer memory system | |
US5689679A (en) | Memory system and method for selective multi-level caching using a cache level code | |
US5091851A (en) | Fast multiple-word accesses from a multi-way set-associative cache memory | |
US4471429A (en) | Apparatus for cache clearing | |
US4493026A (en) | Set associative sector cache | |
KR920005280B1 (en) | High speed cache system | |
CA2020275C (en) | Apparatus and method for reading, writing, and refreshing memory with direct virtual or physical access | |
US5371870A (en) | Stream buffer memory having a multiple-entry address history buffer for detecting sequential reads to initiate prefetching | |
US5526509A (en) | Method and apparatus for controlling one or more hierarchical memories using a virtual storage scheme and physical to virtual address translation | |
US5133058A (en) | Page-tagging translation look-aside buffer for a computer memory system | |
US5146603A (en) | Copy-back cache system having a plurality of context tags and setting all the context tags to a predetermined value for flushing operation thereof | |
US6378047B1 (en) | System and method for invalidating set-associative cache memory with simultaneous set validity determination | |
US6745292B1 (en) | Apparatus and method for selectively allocating cache lines in a partitioned cache shared by multiprocessors | |
US6332179B1 (en) | Allocation for back-to-back misses in a directory based cache | |
US5179675A (en) | Data processing system with cache memory addressable by virtual and physical address | |
EP0470739B1 (en) | Method for managing a cache memory system | |
GB2214336A (en) | Cache memory apparatus | |
US5452418A (en) | Method of using stream buffer to perform operation under normal operation mode and selectively switching to test mode to check data integrity during system operation | |
EP0519685A1 (en) | Address translation | |
EP0535701A1 (en) | Architecture and method for combining static cache memory and dynamic main memory on the same chip (CDRAM) | |
US5619673A (en) | Virtual access cache protection bits handling method and apparatus | |
EP0470736B1 (en) | Cache memory system | |
US6813694B2 (en) | Local invalidation buses for a highly scalable shared cache memory hierarchy | |
EP0474356A1 (en) | Cache memory and operating method | |
US6480940B1 (en) | Method of controlling cache memory in multiprocessor system and the multiprocessor system based on detection of predetermined software module |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
746 | Register noted 'licences of right' (sect. 46/1977) |
Effective date: 19951108 |
PCNP | Patent ceased through non-payment of renewal fee |
Effective date: 20000120 |