11 Virtualmem
Fall 2014
Virtual Memory, Page Faults,
Demand Paging, and Page Replacement
Myungjin Lee
myungjin.lee@ed.ac.uk
Reminder: Mechanics of address translation
[Figure: mechanics of address translation. A virtual address splits into (virtual page # | offset); the page table maps the virtual page # to a page frame #, giving the physical address (page frame # | offset), which selects a page frame (frame 0, frame 1, frame 2, …) in physical memory. Note: Each process has its own page table!]
Reminder: Page Table Entries (PTEs)
[Figure: PTE layout — V (1 bit) | R (1 bit) | M (1 bit) | prot (2 bits) | page frame number (20 bits)]
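The bit layout above can be made concrete as packing/unpacking code. The exact bit positions chosen below are an illustrative assumption, not a specific hardware format:

```python
# Sketch: pack and unpack a PTE with the field widths from the slide:
# V (1 bit), R (1 bit), M (1 bit), prot (2 bits), page frame number (20 bits).
# Bit positions here are an assumption for illustration.

def make_pte(valid, referenced, modified, prot, pfn):
    assert 0 <= prot < 4 and 0 <= pfn < 2**20
    return (valid << 24) | (referenced << 23) | (modified << 22) | (prot << 20) | pfn

def pte_fields(pte):
    return {
        "V": (pte >> 24) & 1,       # valid bit
        "R": (pte >> 23) & 1,       # referenced bit
        "M": (pte >> 22) & 1,       # modified (dirty) bit
        "prot": (pte >> 20) & 0x3,  # protection bits
        "pfn": pte & 0xFFFFF,       # 20-bit page frame number
    }

pte = make_pte(valid=1, referenced=0, modified=1, prot=0b10, pfn=0x1234A)
```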
Paged virtual memory
Page faults
Demand paging
• Pages are only brought into main memory when they are
referenced
– only the code/data that is needed (demanded!) by a process needs
to be loaded
• What’s needed changes over time, of course…
– Hence, it’s called demand paging
• Few systems try to anticipate future needs
– OS crystal ball module notoriously ineffective
• But it’s not uncommon to cluster pages
– OS keeps track of pages that should come and go together
– bring in all when one is referenced
– interface may allow programmer or compiler to identify clusters
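The core idea can be sketched in a few lines: a page is loaded only when first referenced, and that first touch is a page fault. This toy simulation assumes unlimited frames (so no replacement) and the function name is mine:

```python
# Sketch of demand paging: pages enter memory only when referenced;
# every first touch of a page is a page fault. Assumes unlimited frames.

def demand_page(refs):
    """Return (fault_count, resident_pages) for a reference string."""
    resident = set()
    faults = 0
    for page in refs:
        if page not in resident:
            faults += 1          # page fault: bring the page in on demand
            resident.add(page)
    return faults, resident

faults, resident = demand_page([0, 1, 0, 2, 1, 3, 0])
```

Only the four distinct pages fault; repeated touches are free once a page is resident.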
Page replacement
How do you “load” a program?
Oh, man, how can any of this possibly work?
• Locality!
– temporal locality
• locations referenced recently tend to be referenced again soon
– spatial locality
• locations near recently referenced locations are likely to be
referenced soon (think about why)
• Locality means paging can be infrequent
– once you’ve paged something in, it will be used many times
– on average, you use things that are paged in
– but, this depends on many things:
• degree of locality in the application
• page replacement policy and application reference pattern
• amount of physical memory vs. application “footprint” or “working
set”
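Temporal locality can be made concrete with a toy metric — the fraction of references whose page was also touched within the last few references. This measure and the variable names are mine, for illustration only:

```python
# Hypothetical locality metric: how often does a reference repeat a page
# seen in the previous `window` references?

def temporal_locality(refs, window):
    """Fraction of references whose page appears among the previous `window` references."""
    hits = sum(1 for i, p in enumerate(refs) if p in refs[max(0, i - window):i])
    return hits / len(refs)

tight_loop = [0, 0, 1, 1, 2, 2]   # reuses each page immediately: high locality
scattered = [0, 1, 2, 3, 4, 5]    # never reuses a page: no temporal locality
```

The higher this fraction, the less often a program touches something that has had time to be paged out — which is why paging can be infrequent.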
Evicting the best page
#1: Belady’s Algorithm
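Belady's algorithm — optimal but unrealizable, since it needs the future reference string — can still be simulated offline as a baseline. A sketch (function name mine):

```python
# Belady's optimal replacement: on a fault with memory full, evict the
# resident page whose next use lies furthest in the future (or never).

def belady_faults(refs, frames):
    """Count faults under optimal replacement for a known reference string."""
    mem = set()
    faults = 0
    for i, p in enumerate(refs):
        if p in mem:
            continue                      # hit: nothing to do
        faults += 1
        if len(mem) == frames:
            def next_use(q):
                try:
                    return refs.index(q, i + 1)   # next reference to q
                except ValueError:
                    return float("inf")           # never used again
            mem.remove(max(mem, key=next_use))    # evict furthest-future page
        mem.add(p)
    return faults
```

Since it looks ahead, it gives a lower bound to compare real policies against.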
#2: FIFO
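FIFO replaces the page loaded furthest in the past. A sketch (function name mine); the reference string below is the classic one exhibiting Belady's anomaly, where giving FIFO more frames produces *more* faults:

```python
from collections import deque

# FIFO replacement: evict the page that was loaded earliest,
# regardless of how recently it has been used.

def fifo_faults(refs, frames):
    """Count faults under FIFO with a fixed number of frames."""
    mem = deque()
    faults = 0
    for p in refs:
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            mem.popleft()    # oldest-loaded page goes
        mem.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

With this string FIFO takes 9 faults with 3 frames but 10 with 4 — a hint that "oldest loaded" is a poor proxy for "least useful".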
#3: Least Recently Used (LRU)
• Replace the page whose most recent reference is furthest in the past
– example bad case: looping through an array slightly larger than memory
• Implementation
– to be perfect, must grab a timestamp on every memory reference,
put it in the PTE, order or search based on the timestamps …
– way too expensive in memory bandwidth, algorithm execution time, etc.
– so, we need a cheap approximation …
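Exact LRU is cheap to simulate in software even though it is far too expensive to maintain per-reference in hardware. A sketch using an ordered dict as the recency list, including the looping bad case:

```python
from collections import OrderedDict

# Exact LRU: keep pages ordered by recency; evict the least recently used.

def lru_faults(refs, frames):
    """Count faults under exact LRU with a fixed number of frames."""
    mem = OrderedDict()
    faults = 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)            # now the most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)   # evict least recently used
            mem[p] = True
    return faults

# Bad case: looping over 4 pages with only 3 frames — LRU always evicts
# exactly the page that will be needed next, so every reference faults.
loop = [0, 1, 2, 3] * 5
```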
Approximating LRU
#4: LRU Clock
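A sketch of the clock approximation: one reference bit per frame, set on every access (by the hardware, in a real MMU — simulated here), while the sweeping hand clears bits and evicts the first page whose bit is already clear (i.e., a page that is "old enough"):

```python
# LRU Clock: frames arranged in a circle with a sweeping hand.
# A set reference bit buys a page a second chance; a clear bit means evict.

def clock_faults(refs, frames):
    """Count faults under the clock (second-chance) algorithm."""
    pages = [None] * frames
    refbit = [0] * frames
    hand = 0
    faults = 0
    for p in refs:
        if p in pages:
            refbit[pages.index(p)] = 1       # "hardware" sets R on a hit
            continue
        faults += 1
        while refbit[hand]:                  # recently used: clear and spare it
            refbit[hand] = 0
            hand = (hand + 1) % frames
        pages[hand] = p                      # evict and reuse this frame
        refbit[hand] = 1
        hand = (hand + 1) % frames
    return faults
```

When every resident page is "recently used," the hand sweeps a full circle and the policy degenerates toward FIFO.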
Allocation of frames among processes
• Hybrid algorithms
– local replacement
– an explicit mechanism for adding or removing page frames
[Figure: number of memory references between page faults vs. memory allocated — why does the curve look this way? Where would you like to operate?]
The working set model of program behavior
Example: Working set
Working set size
#5: Hypothetical Working Set algorithm
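One way to make the working-set idea concrete: define W(t, τ) as the set of pages touched in the previous τ references, and fault whenever the current page is outside it. The window-based formulation below is one common definition, chosen here for illustration:

```python
# Working-set sketch: a page is resident iff it was referenced within
# the last `tau` references; the allocation grows and shrinks by itself.

def ws_faults(refs, tau):
    """Count faults when exactly W(t, tau) is kept in memory."""
    faults = 0
    for t, p in enumerate(refs):
        working_set = set(refs[max(0, t - tau):t])   # pages in the window
        if p not in working_set:
            faults += 1
    return faults

loop3 = [0, 1, 2] * 3
```

With τ = 3 the window covers the loop's whole working set, so only the three cold-start faults remain; with τ = 2 the window is too small and every reference faults.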
#6: Page Fault Frequency (PFF)
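PFF grows or shrinks a process's frame allocation as a function of its observed fault rate. A minimal decision-rule sketch — the function name and threshold values are illustrative assumptions:

```python
# Page Fault Frequency sketch: adjust the per-process frame count based
# on its measured fault rate. Thresholds are made-up illustrative values.

def pff_adjust(frames, fault_rate, low=0.02, high=0.10):
    """Return the new frame allocation given the current fault rate."""
    if fault_rate > high:
        return frames + 1                 # faulting too often: add a frame
    if fault_rate < low and frames > 1:
        return frames - 1                 # barely faulting: reclaim a frame
    return frames                         # in the sweet spot: leave it alone
```

The OS would re-run this periodically per process, so allocations track each process's changing working-set size without computing the working set explicitly.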
Thrashing
[Figure: system throughput (requests/sec.) with zero overhead — why this shape?]
[Figure: system throughput (requests/sec.) with thrashing]
Where is life interesting?
Summary
• Virtual memory
• Page faults
• Demand paging
– don’t try to anticipate
• Page replacement
– local, global, hybrid
• Locality
– temporal, spatial
• Working set
• Thrashing
• Page replacement algorithms
– #1: Belady’s – optimal, but unrealizable
– #2: FIFO – replace page loaded furthest in the past
– #3: LRU – replace page referenced furthest in the past
• approximate using PTE reference bit
– #4: LRU Clock – replace page that is “old enough”
– #5: Working Set – keep the working set in memory
– #6: Page Fault Frequency – grow/shrink number of frames as a
function of fault rate