
Virtual Memory

SUPERVISOR: Dr. HASAN BALIK

Presented By:
DOAA AL-MAFRACHI - 173103568
ZAINAB ALDULAIMI - 173103587
QUSAY MUAAD ALI - 173103736

10/4/2018
Improving Cache Miss Latency: Reducing DRAM Latency

• Improving cache miss latency is essentially the same as improving DRAM latency


• What is random access memory (RAM)? What are static RAM (SRAM) and dynamic RAM (DRAM)?
• What is the DRAM cell organization? How are the cells arranged internally? How is memory addressed? How are DRAMs refreshed? What is the difference between DRAM and SRAM?
• Access time of DRAM = row access time + column access time + refreshing
• What are page-mode and nibble-mode DRAMs? (see the sketch after this list)
• Synchronous SRAM or DRAM – ability to transfer a burst of data given a starting address and a burst length – suitable for transferring a block of data from main memory to cache
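A minimal Python sketch of the page-mode idea: consecutive accesses that hit the already-open row pay only the column access time. The cycle counts (ROW_ACCESS, COL_ACCESS) and the row size are assumed values for illustration, not figures from these slides.

```python
ROW_ACCESS = 10      # cycles to open (activate) a row -- assumed value
COL_ACCESS = 5       # cycles to read a column from the open row -- assumed value

def dram_access_cycles(addresses, row_size=1024):
    """Total cycles for a sequence of reads, reusing the open row (page mode)."""
    cycles, open_row = 0, None
    for addr in addresses:
        row = addr // row_size
        if row != open_row:      # row miss: must activate the new row first
            cycles += ROW_ACCESS
            open_row = row
        cycles += COL_ACCESS     # column access is always paid
    return cycles

# Four sequential words in one row: 10 + 4*5 = 30 cycles,
# versus 4 * (10 + 5) = 60 cycles without page mode.
print(dram_access_cycles([0, 4, 8, 12]))
```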
Main Memory Organizations

[Figure 7.13: three CPU–cache–bus–memory organizations – (a) one-word-wide memory, (b) wide memory with a multiplexor between the cache and the CPU, (c) interleaved memory with four banks (bank 0 – bank 3). In all cases, DRAM access time >> bus transfer time.]
Memory Access Time
Example
• Assume that it takes 1 cycle to send the address, 15 cycles for each DRAM access, and 1 cycle to send a word of data.
• With a cache block of 4 words and one-word-wide DRAM (fig. 7.13a), miss penalty = 1 + 4×15 + 4×1 = 65 cycles.
• With main memory and bus width of 2 words (fig. 7.13b), miss penalty = 1 + 2×15 + 2×1 = 33 cycles. For 4-word-wide memory, the miss penalty is 17 cycles. Expensive due to the wide bus and control circuits.
• With interleaved memory of 4 banks and the same one-word bus (fig. 7.13c), miss penalty = 1 + 1×15 + 4×1 = 20 cycles. The memory controller must supply consecutive addresses to different banks. Interleaving is universally adopted in high-performance computers. (These calculations are reproduced in the sketch below.)
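A minimal Python sketch reproducing the miss-penalty arithmetic above; the miss_penalty helper and its parameters are ours, but the cycle counts are the ones assumed on this slide.

```python
ADDR, DRAM, BUS = 1, 15, 1   # cycles: send address, DRAM access, bus transfer per word
BLOCK_WORDS = 4              # cache block size in words

def miss_penalty(width_words=1, banks=1):
    """Cycles to fetch a 4-word block for a given memory/bus width and bank count."""
    transfers = BLOCK_WORDS // width_words   # bus transfers needed for the block
    dram_accesses = transfers // banks       # interleaved banks overlap their accesses
    return ADDR + dram_accesses * DRAM + transfers * BUS

print(miss_penalty())                # one-word wide:     1 + 4*15 + 4*1 = 65
print(miss_penalty(width_words=2))   # two-word wide:     1 + 2*15 + 2*1 = 33
print(miss_penalty(width_words=4))   # four-word wide:    1 + 1*15 + 1*1 = 17
print(miss_penalty(banks=4))         # 4-way interleaved: 1 + 1*15 + 4*1 = 20
```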
Virtual Memory

• Idea 1: Many programs share DRAM memory so that context switches can occur
• Idea 2: Allow a program to be written without memory constraints – the program can exceed the size of main memory
• Idea 3: Relocation – parts of the program can be placed at different locations in memory instead of one big chunk
• Virtual Memory:
(1) DRAM memory holds many programs running at the same time (processes)
(2) DRAM memory is used as a kind of "cache" for disk
Virtual Memory has its own terminology

• Each process has its own private "virtual address space" (e.g., 2^32 bytes); the CPU actually generates "virtual addresses"
• Each computer has a "physical address space" (e.g., 128 megabytes of DRAM); also called "real memory"
• Address translation: mapping virtual addresses to physical addresses
• Allows multiple programs to use (different chunks of physical) memory at the same time
• Also allows some chunks of virtual memory to be kept on disk, not in main memory (to exploit the memory hierarchy)
Mapping Virtual Memory to Physical Memory
• Divide memory into equal-sized "chunks" (say, 4 KB each)
• Any chunk of virtual memory can be assigned to any chunk of physical memory (a "page")

[Figure: a single process's virtual address space (code, static data, heap, stack) mapped chunk by chunk into 64 MB of physical memory; both address spaces start at 0.]
Handling Page Faults

• A page fault is like a cache miss
• Must find the page in the lower level of the hierarchy
• If the valid bit is zero, the Physical Page Number points to a page on disk
• When the OS starts a new process, it creates space on disk for all the pages of the process, sets all valid bits in the page table to zero, and sets all Physical Page Numbers to point to disk
• This is called Demand Paging – pages of the process are loaded from disk only as needed (see the sketch below)
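A minimal Python sketch of demand paging as described above; PageTableEntry, access, and the free-frame list are hypothetical names, and eviction of resident pages is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    valid: bool = False   # OS starts every entry invalid...
    disk_block: int = 0   # ...pointing at the page's location on disk
    ppn: int = 0          # physical page number, filled in on first use

def access(page_table, vpn, free_frames, memory, disk):
    """Return the physical page for vpn, loading it from disk on a page fault."""
    entry = page_table[vpn]
    if not entry.valid:                             # page fault: like a cache miss
        entry.ppn = free_frames.pop()               # grab a free frame (no eviction here)
        memory[entry.ppn] = disk[entry.disk_block]  # load the page only when touched
        entry.valid = True
    return entry.ppn
```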
Comparing the 2 Levels of the Hierarchy

                 Cache                           Virtual Memory
                 Block or Line                   Page
                 Miss                            Page Fault
Size:            Block: 32-64 B                  Page: 4 KB-16 KB
Placement:       Direct Mapped,                  Fully Associative
                 N-way Set Associative
Replacement:     LRU or Random                   Least Recently Used (LRU) approximation
Write policy:    Write Through or Write Back     Write Back
How managed:     Hardware                        Hardware + Software (Operating System)
How to Perform Address Translation?
• VM divides memory into equal-sized pages
• Address translation relocates entire pages
• Offsets within the pages do not change
• If the page size is a power of two, the virtual address separates into two fields (like the cache index and offset fields; sketched below):

virtual address = | Virtual Page Number | Page Offset |
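A minimal Python sketch of the split, assuming a 4 KB page (12 offset bits); the split helper is our name for it.

```python
PAGE_SIZE = 4096
OFFSET_BITS = PAGE_SIZE.bit_length() - 1   # 12 bits for a 4 KB page

def split(vaddr):
    vpn = vaddr >> OFFSET_BITS             # virtual page number: gets translated
    offset = vaddr & (PAGE_SIZE - 1)       # page offset: unchanged by translation
    return vpn, offset

vpn, offset = split(0x12345678)
print(hex(vpn), hex(offset))               # 0x12345 0x678
```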


Mapping Virtual to Physical Address
[Figure: a 32-bit virtual address (bits 31..0) splits into a Virtual Page Number and a Page Offset; translation replaces the Virtual Page Number with a Physical Page Number while the Page Offset passes through unchanged, giving a smaller physical address (bits 29..0). The slide's example uses a 1 KB page size.]
Address Translation

• We want fully associative page placement
• How do we locate the physical page?
• Searching is impractical (there are too many pages)
• A page table is a data structure that contains the mapping of virtual pages to physical pages
• There are several different ways, all up to the operating system, to keep this data around
• Each process running in the system has its own page table
Address Translation: Page Table
[Figure: the Page Table Register points to the base of the page table, which lives in physical memory; the virtual page number of the virtual address (VA) indexes into the table; each entry holds a Valid bit, Access Rights (None, Read Only, Read/Write, Executable), and a Physical Page Number (P.P.N.); the Physical Page Number is combined with the page offset to form the physical memory address (PA); entries with the valid bit clear point to pages on disk.]
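A minimal Python sketch of a lookup through a flat page table as in the figure, again assuming 4 KB pages; the dict-based table, the rights encoding, and the translate helper are illustrative choices, not the OS's actual layout.

```python
OFFSET_BITS, PAGE_MASK = 12, 0xFFF

def translate(page_table, vaddr, op="read"):
    vpn, offset = vaddr >> OFFSET_BITS, vaddr & PAGE_MASK
    entry = page_table[vpn]                        # page table base + index
    if not entry["valid"]:
        raise RuntimeError("page fault: page is on disk")
    if op not in entry["rights"]:                  # e.g. rights = {"read", "write"}
        raise RuntimeError("protection violation")
    return (entry["ppn"] << OFFSET_BITS) | offset  # physical address

pt = {0x12345: {"valid": True, "rights": {"read"}, "ppn": 0x00ABC}}
print(hex(translate(pt, 0x12345678)))              # 0xabc678
```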
Optimizing for Space

• The page table is too big!
• 4 GB virtual address space ÷ 4 KB pages
  ⇒ 2^20 (~1 million) page table entries
  ⇒ 4 MB just for the page table of a single process! (sketched below)
• There is a variety of solutions that trade page table size for slower performance when a miss occurs in the TLB: use a limit register to restrict the page table size and let it grow as more pages are needed, multilevel page tables, paging the page tables, etc.
(Take an O/S class to learn more)
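The size arithmetic as a short sketch; the 4-bytes-per-entry figure is an assumption, since the slide does not state an entry size.

```python
VA_BITS, PAGE_BITS, ENTRY_BYTES = 32, 12, 4   # 4 GB space, 4 KB pages, assumed entry size

entries = 2 ** (VA_BITS - PAGE_BITS)          # 2^20 = ~1 million entries
size = entries * ENTRY_BYTES                  # 4 MB per process
print(entries, size // 2**20, "MB")           # 1048576 4 MB
```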
How to Translate Fast?

• Problem: Virtual memory requires two memory accesses!
• one to translate the virtual address into a physical address (page table lookup)
• one to transfer the actual data (cache hit)
• But the page table is in physical memory!
• Observation: since there is locality in the pages of data, there must be locality in the virtual addresses of those pages!
• Why not create a cache of virtual-to-physical address translations to make translation fast? (smaller is faster)
• For historical reasons, such a "page table cache" is called a Translation Lookaside Buffer, or TLB
Typical TLB Format
Valid | Ref | Dirty | Access Rights | Virtual Page Nbr | Physical Page Nbr
                                        ("tag")             ("data")

• The TLB is just a cache of the page table mappings (see the sketch below)
• Dirty: since we use write back, we need to know whether or not to write the page to disk when it is replaced
• Ref: used to calculate LRU on replacement
• TLB access time is comparable to cache access time (much less than main memory access time)
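A minimal Python sketch of a fully associative TLB with the fields above, modeled as a dict keyed by virtual page number; TLBEntry and tlb_lookup are hypothetical names.

```python
from dataclasses import dataclass, field

@dataclass
class TLBEntry:
    ppn: int                       # physical page number ("data")
    rights: set = field(default_factory=set)  # access rights
    dirty: bool = False            # set on write; decides write-back on eviction
    ref: bool = False              # set on use; drives the LRU approximation

tlb = {}                           # keyed by virtual page number ("tag")

def tlb_lookup(vpn, is_write):
    entry = tlb.get(vpn)
    if entry is None:
        return None                # TLB miss: fall back to the page table
    entry.ref = True               # record the reference for LRU
    entry.dirty |= is_write        # remember that the page was modified
    return entry.ppn
```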
Translation Look-Aside Buffers
• The TLB is usually small, typically 32-4,096 entries
• Like any other cache, the TLB can be fully associative, set associative, or direct mapped

[Figure: the processor sends a virtual address to the TLB; a TLB hit yields the physical address for the cache, which returns data on a hit and goes to main memory on a miss; a TLB miss goes to the page table in memory; a page fault or protection violation traps to the OS fault handler, which brings the page in from disk.]
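A minimal Python sketch of the datapath in this figure: TLB first, page table walk on a TLB miss, trap to the OS on a page fault. The cache access itself and the protection check are elided, and the dict-based structures are illustrative, not a real MMU's layout.

```python
def load(vaddr, tlb, page_table):
    """Follow the figure: TLB, then page table, trapping to the OS on a fault."""
    vpn, offset = vaddr >> 12, vaddr & 0xFFF
    entry = tlb.get(vpn)                      # TLB hit: translation is cached
    if entry is None:                         # TLB miss: walk the page table in memory
        entry = page_table.get(vpn)
        if entry is None or not entry["valid"]:
            raise RuntimeError("page fault: trap to OS handler, read page from disk")
        tlb[vpn] = entry                      # refill the TLB with the mapping
    return (entry["ppn"] << 12) | offset      # physical address, sent on to the cache

tlb, pt = {}, {0x5: {"valid": True, "ppn": 0x9}}
print(hex(load(0x5034, tlb, pt)))             # 0x9034 (second call would hit in the TLB)
```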
DECStation 3100 / MIPS R2000

[Figure: the 32-bit virtual address splits into a 20-bit virtual page number (bits 31..12) and a 12-bit page offset (bits 11..0). The TLB (64 entries, fully associative) holds Valid, Dirty, Tag, and Physical page number fields; a TLB hit supplies the 20-bit physical page number, which is concatenated with the page offset to form the physical address. The cache (16K entries, direct mapped, 32-bit data per entry) splits the physical address into a 16-bit physical address tag, a 14-bit cache index, and a 2-bit byte offset; a cache hit returns the data.]
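A minimal Python sketch of the physical-address split used by this cache (16-bit tag, 14-bit index, 2-bit byte offset); cache_fields is a hypothetical helper.

```python
def cache_fields(paddr):
    byte_offset = paddr & 0x3          # bits 1..0 select the byte in the word
    index = (paddr >> 2) & 0x3FFF      # bits 15..2 select one of 16K entries
    tag = paddr >> 16                  # bits 31..16 compared against the stored tag
    return tag, index, byte_offset

tag, index, off = cache_fields(0x00ABC678)
print(hex(tag), hex(index), off)       # 0xab 0x319e 0
```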
Real Stuff: Pentium Pro Memory Hierarchy
• Address size: 32 bits (VA, PA)
• VM page size: 4 KB or 4 MB
• TLB organization: separate instruction and data TLBs (i-TLB: 32 entries, d-TLB: 64 entries), 4-way set associative, LRU approximated, hardware handles misses
• L1 cache: 8 KB, separate instruction and data caches, 4-way set associative, LRU approximated, 32-byte blocks, write back
• L2 cache: 256 or 512 KB
