
OS R22 2-2 UNIT-4


UNIT - IV

Memory Management and Virtual Memory

Logical- Versus Physical-Address Space

An address generated by the CPU is commonly referred to as a logical address or a virtual address, whereas an address seen by the main memory unit is commonly referred to as a physical address.

The set of all logical addresses generated by a program is a logical-address space, whereas the set of all physical addresses corresponding to these logical addresses is a physical-address space.

Logical and physical addresses are the same in compile-time and load-time address
binding schemes; logical (virtual) and physical addresses differ in execution-time
address binding scheme.

 The Memory Management Unit (MMU) is a hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory, as follows:

(Dynamic relocation using a relocation register)
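The relocation scheme described above can be sketched in a few lines; the register values and the MemoryError trap below are illustrative assumptions, not part of the notes:

```python
# Sketch of dynamic relocation with a relocation (base) register and a
# limit register; the register values here are made-up example numbers.
RELOCATION_REGISTER = 14000   # base physical address of the process
LIMIT_REGISTER = 3000         # size of the process's logical address space

def translate(logical_address):
    """Map a CPU-generated logical address to a physical address."""
    if logical_address < 0 or logical_address >= LIMIT_REGISTER:
        # Hardware would trap to the OS on an out-of-range address.
        raise MemoryError("trap: addressing error (beyond limit register)")
    return RELOCATION_REGISTER + logical_address

print(translate(346))   # 14346 = 14000 + 346
```

Every address a user process generates is checked against the limit register and then relocated by the base register, so the process never sees, and can never reach, real physical addresses outside its own space.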

Dynamic Loading

 It loads the program and data dynamically into physical memory to obtain better memory-space utilization. With dynamic loading, a routine is not loaded until it is called.
 The advantage of dynamic loading is that an unused routine is never loaded.
 This method is useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines.
 Dynamic loading does not require special support from the operating system.

Dynamic Linking
Linking is postponed until execution time.
 A small piece of code (a stub) is used to locate the appropriate memory-resident library routine.
 The stub replaces itself with the address of the routine and executes the routine.
 The operating system is needed to check whether the routine is in the process's memory address space.
 Dynamic linking is particularly useful for libraries.
Overlays
 Keep in memory only those instructions and data that are needed at any given
time.
 Needed when process is larger than amount of memory allocated to it.
 Implemented by the user; no special support is needed from the operating system.
 Programming design of the overlay structure is complex.

Swapping in Operating System

Swapping is a memory management technique and is used to temporarily remove


the inactive programs from the main memory of the computer system. Any process
must be in the memory for its execution, but can be swapped temporarily out of
memory to a backing store and then again brought back into the memory to
complete its execution. Swapping is done so that other processes get memory for
their execution.

Due to the swapping technique performance usually gets affected, but it also helps in running multiple large processes in parallel. The swapping process is also known as a technique for memory compaction. Basically, low-priority processes may be swapped out so that higher-priority processes may be loaded and executed.

(Figure: swapping of two processes, with the disk used as a backing store.)

In this scenario, suppose there is a multiprogramming environment with a round-robin scheduling algorithm. Whenever the time quantum expires, the memory manager starts to swap out processes that have just finished their quantum and swaps another process into the memory that has been freed. In the meantime, the CPU scheduler allocates the time slice to some other process in memory.

Swap In and Swap Out in OS

The procedure by which a process is removed from the hard disk and placed in the main memory (RAM) is commonly known as Swap In.

On the other hand, Swap Out is the method of removing a process from the main
memory or RAM and then adding it to the Hard Disk.

Contiguous Memory Allocation


 One approach to memory management is to load each process into a contiguous space. The operating system is allocated space first, usually at either low or high memory locations, and then the remaining available memory is allocated to processes as needed. (The OS is usually loaded low, because that is where the interrupt vectors are located, but on older systems part of the OS was loaded high to make more room in low memory (within the 640K barrier) for user processes.)
Memory Protection

 The system shown in Figure 3.6 below allows protection against


user programs accessing areas that they should not, allows programs
to be relocated to different memory starting addresses as needed, and
allows the memory space devoted to the OS to grow or shrink
dynamically as needs change.

Figure 3.6 - Hardware support for relocation and limit registers

Memory Allocation

 One method of allocating contiguous memory is to divide all available memory into equal-sized partitions, and to assign each process to its own partition. This restricts both the number of simultaneous processes and the maximum size of each process, and is no longer used.
 An alternate approach is to keep a list of unused ( free ) memory
blocks ( holes ), and to find a hole of a suitable size whenever a
process needs to be loaded into memory. There are many different
strategies for finding the "best" allocation of memory to processes,
including the three most commonly discussed:

1. First fit - Search the list of holes until one is found that is big
enough to satisfy the request, and assign a portion of that hole to that
process. Whatever fraction of the hole not needed by the request is
left on the free list as a smaller hole. Subsequent requests may start
looking either from the beginning of the list or from the point at
which this search ended.
2. Best fit - Allocate the smallest hole that is big enough to satisfy the
request. This saves large holes for other process requests that may
need them later, but the resulting unused portions of holes may be too small
to be of any use, and will therefore be wasted. Keeping the free list sorted can
speed up the process of finding the right hole.
3. Worst fit - Allocate the largest hole available, thereby increasing the likelihood
that the remaining portion will be usable for satisfying future requests.
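The three strategies above can be sketched as follows; the free list is a simplified list of (start, size) holes, with hole splitting and coalescing omitted:

```python
# Illustrative sketch of the three hole-selection strategies over a free
# list of (start, size) holes; returns the chosen hole's start, or None.
def first_fit(holes, request):
    for start, size in holes:        # scan in list order
        if size >= request:
            return start
    return None

def best_fit(holes, request):
    candidates = [h for h in holes if h[1] >= request]
    # smallest hole that still satisfies the request
    return min(candidates, key=lambda h: h[1])[0] if candidates else None

def worst_fit(holes, request):
    candidates = [h for h in holes if h[1] >= request]
    # largest available hole
    return max(candidates, key=lambda h: h[1])[0] if candidates else None

holes = [(0, 100), (200, 500), (800, 212)]
print(first_fit(holes, 212))   # 200: first hole big enough
print(best_fit(holes, 212))    # 800: smallest hole that fits exactly
print(worst_fit(holes, 212))   # 200: largest hole
```

Note how the same request picks different holes under each policy, which is exactly the trade-off the three strategies make between search cost and leftover fragment size.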

Paging in Operating Systems

Paging permits the physical address space of a process to be non-contiguous. It


is a fixed-size partitioning scheme. In the Paging technique, the secondary
memory and main memory are divided into equal fixed-size partitions.

Paging solves the problem of fitting memory chunks of varying sizes onto the backing store, a problem suffered by many memory-management schemes.

Paging helps to avoid external fragmentation and the need for compaction.

Basic Method of Paging

The paging technique divides the physical memory (main memory) into fixed-size blocks known as Frames and divides the logical memory into blocks of the same size known as Pages.

This technique keeps track of all the free frames.

The Frame has the same size as that of a Page. A frame is basically a place where a
(logical) page can be (physically) placed.

Each process is mainly divided into parts where the size of each part is the same as
the page size.

There is a possibility that the size of the last part may be less than the page size.

 Pages of a process are brought into the main memory only when there is a
requirement otherwise they reside in the secondary storage.

 One page of a process is stored in one of the frames of the memory. The pages can be stored at non-contiguous locations in memory; there is no need to find contiguous frames.

Let us now cover the concept of translating a logical address into the physical
address:

Translation of Logical Address into Physical Address

Before moving on further there are some important points to note:

 The CPU always generates a logical address.

 A physical address is always needed in order to access the main memory.

The logical address generated by CPU always consists of two parts:

1. Page Number(p)

2. Page Offset (d)

where,

Page Number is used to specify the page of the process from which the CPU wants to read data; it is also used as an index into the page table.

Page Offset is used to specify the particular word on that page that the CPU wants to read.

Now let us understand what a page table is.

Page Table in OS

The Page table mainly contains the base address of each page in the Physical
memory. The base address is then combined with the page offset in order to define
the physical memory address which is then sent to the memory unit.

Thus page table mainly provides the corresponding frame number (base address of
the frame) where that page is stored in the main memory.

As noted above, the frame number is combined with the page offset to form the required physical address.

So, The physical address consists of two parts:

1. Page offset(d)

2. Frame Number(f)

where,

Frame Number is used to indicate the specific frame where the required page is stored, and Page Offset indicates the specific word that has to be read from that page.

The page size (like the frame size) is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page, depending on the architecture of the computer.

If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the high-order m - n bits of the logical address designate the page number and the n low-order bits designate the page offset.

The logical address is as follows:

where p indicates the index into the page table, and d indicates the displacement
within the page.
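The split of a logical address into p and d can be illustrated with bit operations; the 16-bit address size and 10-bit offset (1 KB pages) below are assumed values for the sketch:

```python
# Splitting a logical address into page number p and offset d.
# Assume m = 16-bit logical addresses and n = 10-bit offsets (1 KB pages).
M, N = 16, 10
PAGE_SIZE = 1 << N                 # 2^n = 1024 addressing units per page

def split(logical_address):
    p = logical_address >> N                # high-order m-n bits: page number
    d = logical_address & (PAGE_SIZE - 1)   # low-order n bits: page offset
    return p, d

p, d = split(0x2ABC)
print(p, d)   # page 10, offset 700 (10940 = 10*1024 + 700)
```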

The above diagram indicates the translation of the Logical address into the
Physical address. The PTBR in the above diagram means page table base register
and it basically holds the base address for the page table of the current process.

The PTBR is mainly a processor register and is managed by the operating system.
Commonly, each process running on a processor needs its own logical address
space.

But there is a problem with this approach: the time required to access a user memory location. If we want to access location i, we must first index into the page table, using the value in the PTBR offset by the page number for i. This requires one memory access. It gives us the frame number, which is combined with the page offset to produce the actual address, after which we can access the desired location in memory.

With this scheme, two memory accesses are needed to access a byte (one for the page-table entry and one for the byte). Memory access is thus slowed by a factor of two.

Translation Look-aside Buffer (TLB)

The standard solution to the above problem is to use a special, small, fast-lookup hardware cache commonly known as the Translation Look-aside Buffer (TLB).

 TLB is associative and high-speed memory.

 Each entry in the TLB mainly consists of two parts: a key(that is the tag) and
a value.

 When associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is found, the corresponding value is returned.

 The search with TLB is fast though the hardware is expensive.

 The number of entries in the TLB is small and generally lies between 64 and 1,024.

TLB is used with Page Tables in the following ways:

The TLB contains only a few of the page-table entries. Whenever the logical
address is generated by the CPU then its page number is presented to the TLB.

 If the page number is found, its frame number is immediately available and is used to access the memory. The whole task may take less than 10 percent longer than it would if an unmapped memory reference were used.

 If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made.

 When the frame number is obtained, it can be used to access the memory. Additionally, the page number and frame number are added to the TLB so that they will be found quickly on the next reference.

 If the TLB is already full of entries, the operating system must select one for replacement.

 TLB allows some entries to be wired down, which means they cannot be
removed from the TLB. Typically TLB entries for the kernel code are wired
down.
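The lookup sequence above can be sketched with a toy dictionary-based cache; the page-table contents, the two-entry capacity, and the FIFO-style eviction below are all illustrative assumptions (a real TLB is associative hardware, not a Python dict):

```python
# Toy TLB consulted before the page table; purely illustrative.
PAGE_TABLE = {0: 5, 1: 9, 2: 3, 3: 7}   # full page table in main memory
TLB_CAPACITY = 2                        # real TLBs hold 64-1,024 entries
tlb = {}

def lookup(page):
    if page in tlb:                     # TLB hit: frame found immediately
        return tlb[page], "hit"
    frame = PAGE_TABLE[page]            # TLB miss: extra memory reference
    if len(tlb) >= TLB_CAPACITY:        # TLB full: one entry must be evicted
        tlb.pop(next(iter(tlb)))        # FIFO-style eviction for the sketch
    tlb[page] = frame                   # cache for the next reference
    return frame, "miss"

print(lookup(1))   # (9, 'miss') - first reference goes to the page table
print(lookup(1))   # (9, 'hit')  - second reference is served by the TLB
```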

Paging Hardware With TLB

Advantages of Paging

Given below are some advantages of the Paging technique in the operating system:

 Paging allows the parts of a single process to be stored in a non-contiguous fashion.

 With the help of Paging, the problem of external fragmentation is solved.

 Paging is one of the simplest algorithms for memory management.

Disadvantages of Paging

Disadvantages of the Paging technique are as follows:

 In Paging, sometimes the page table consumes more memory.

 Internal fragmentation is caused by this technique.

 There is an increase in time taken to fetch the instruction since now two
memory accesses are required.

Paging Hardware

Every address generated by CPU mainly consists of two parts:

1. Page Number(p)

2. Page Offset (d)

where,

Page Number is used as an index into the page table that generally contains the
base address of each page in the physical memory.

Page offset is combined with base address in order to define the physical memory
address which is then sent to the memory unit.

If the size of the logical address space is 2^m and the page size is 2^n addressing units, then the high-order m - n bits of the logical address designate the page number and the n low-order bits designate the page offset.

The logical address is as follows:

where p indicates the index into the page table, and d indicates the displacement within the page. The page size is usually defined by the hardware and is typically a power of 2, varying between 512 bytes and 16 MB per page.

Now it is time to cover an example of Paging:

Paging Example

One of the simplest methods to implement paging is to implement the page table as a set of dedicated registers. However, because the number of registers is limited and the page table is usually large, the page table is instead kept in the main memory.

There is no external fragmentation with this scheme; any free frame can be allocated to any process that needs it. Internal fragmentation, however, is still present.

 If any process requires n pages then at least n frames are required.

 The first page of the process is loaded into the first frame that is listed on
the free-frame list, and then the frame number is put into the page table.

The frame table is a data structure that keeps track of which frames are allocated and which are free, among other information. This table has one entry for each physical page frame.

The Operating system maintains a copy of the page table for each process in the
same way as it maintains the copy of the instruction counter and register contents.
Also, this copy is used to translate logical addresses to physical addresses
whenever the operating system maps a logical address to a physical address
manually.

This copy is also used by the CPU dispatcher in order to define the hardware page
table whenever a process is to be allocated to the CPU.

Segmentation in Operating System


A process is divided into segments. The chunks into which a program is divided, which are not necessarily all of the same size, are called segments. Segmentation gives the user's view of the process, which paging does not provide. Here the user's view is mapped to physical memory.
Types of Segmentation in Operating System

 Virtual Memory Segmentation: Each process is divided into a number of


segments, but the segmentation is not done all at once. This segmentation may
or may not take place at the run time of the program.
 Simple Segmentation: Each process is divided into a number of segments, all
of which are loaded into memory at run time, though not necessarily
contiguously.

Segmented Paging

Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.

In Segmented Paging, the main memory is divided into variable size segments
which are further divided into fixed size pages.

1. Pages are smaller than segments.


2. Each Segment has a page table which means every program has multiple
page tables.
3. The logical address is represented as a Segment Number, a Page Number, and a Page Offset.

Segment Number → It points to the appropriate Segment Number.

Page Number → It Points to the exact page within the segment

Page Offset → Used as an offset within the page frame

Each page table contains information about every page of its segment, and the segment table contains information about every segment. Each segment table entry points to a page table, and every page table entry maps to one of the pages within the segment.

Translation of logical address to physical address

The CPU generates a logical address, which is divided into two parts: Segment Number and Segment Offset. The Segment Offset must be less than the segment limit. The offset is further divided into a Page Number and a Page Offset. To locate the entry in the page table, the page number is added to the page-table base.

The frame number combined with the page offset is then used to access the main memory and fetch the desired word in the page of the given segment of the process.
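The translation steps above can be sketched as follows; the segment table, per-segment page tables, and page size are made-up illustrative values:

```python
# Sketch of segmented-paging translation: segment table -> per-segment
# page table -> frame; all table contents here are illustrative.
PAGE_SIZE = 256
# segment number -> (limit in bytes, page table for that segment)
SEGMENT_TABLE = {
    0: (1024, {0: 12, 1: 4, 2: 9, 3: 30}),
    1: (512,  {0: 7, 1: 2}),
}

def translate(segment, offset):
    limit, page_table = SEGMENT_TABLE[segment]
    if offset >= limit:                        # offset must be below the limit
        raise MemoryError("trap: segment offset out of range")
    page, page_offset = divmod(offset, PAGE_SIZE)  # split offset into p, d
    frame = page_table[page]                   # per-segment page-table lookup
    return frame * PAGE_SIZE + page_offset     # physical address

print(translate(0, 300))   # page 1, offset 44 -> frame 4 -> 1068
```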

Advantages of Segmented Paging
1. It reduces memory usage.
2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.

Disadvantages of Segmented Paging


1. Internal Fragmentation will be there.
2. The complexity level will be much higher as compared to paging.
3. Page tables need to be contiguously stored in the memory.

What is Demand Paging in OS (Operating System)?

According to the concept of Virtual Memory, in order to execute a process, only a part of the process needs to be present in the main memory, which means that only a few of its pages will be present in the main memory at any time.

However, deciding which pages should be kept in the main memory and which in the secondary memory is difficult, because we cannot say in advance which page a process will require at a particular time.

Therefore, to overcome this problem, a concept called Demand Paging was introduced. It suggests keeping all pages in the secondary memory until they are required. In other words, do not load any page into the main memory until it is required.

Whenever any page is referred for the first time in the main memory, then that
page will be found in the secondary memory.

After that, it may or may not be present in the main memory depending upon the
page replacement algorithm which will be covered later in this tutorial.

What is a Page Fault?


If the referred page is not present in the main memory then there will be a miss and
the concept is called Page miss or page fault.

The CPU has to access the missed page from the secondary memory. If the number of page faults is very high, then the effective access time of the system will become very high.

What is Thrashing?
If the number of page faults is so high that the CPU remains busy just reading pages from the secondary memory, then the effective access time approaches the time taken by the CPU to read one word from the secondary memory, which is very high. This situation is called thrashing.

If the page-fault rate is PF (expressed as a fraction of references), the time taken to service a page fault and restart is S (service time), and the memory access time is ma, then the effective access time can be given as:

EAT = PF × S + (1 - PF) × ma
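The formula can be checked numerically with a small helper; the 200 ns memory access time and 8 ms fault service time below are assumed example values:

```python
# Effective access time from the formula above, with pf as a fraction.
def effective_access_time(pf, service_time_ns, ma_ns):
    return pf * service_time_ns + (1 - pf) * ma_ns

ma = 200          # memory access time in ns (assumed)
s = 8_000_000     # page-fault service time in ns, i.e. 8 ms (assumed)

# Even one fault per 1,000 accesses dominates the effective access time:
print(effective_access_time(0.001, s, ma))   # 8199.8 ns, ~40x slower than ma
```

This is why keeping the page-fault rate extremely low matters: the service time S is orders of magnitude larger than ma, so a tiny PF already inflates EAT dramatically.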

Page Replacement in OS

In Virtual Memory Management, Page Replacement Algorithms play an important role. The main objective of all page replacement policies is to minimize the number of page faults.

Page Fault – A page fault occurs when a running program accesses a memory page that is mapped into its virtual address space but is not currently loaded into physical memory.

Page Replacement Algorithms:

1. First In First Out (FIFO): This is the simplest page replacement algorithm. The operating system keeps all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.

Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory —> 0 Page Faults. Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault. 6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 —> 1 Page Fault. Finally, when 3 comes it is not in memory, so it replaces 0 —> 1 Page Fault. Total: 6 page faults.

Belady’s anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 frames we get 9 total page faults, but if we increase the number of frames to 4, we get 10 page faults.
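A minimal FIFO simulation reproduces both the example above (6 faults) and Belady's anomaly:

```python
from collections import deque

# FIFO page replacement: evict the page that has been in memory longest.
def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:          # memory full: evict oldest
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))          # 6

belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # 9 10 - more frames,
                                                       # more faults!
```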

2. Optimal Page Replacement: In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future.

Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Fault. When 3 comes it takes the place of 7, because 7 is not used for the longest duration of time in the future —> 1 Page Fault. 0 is already there —> 0 Page Fault. 4 takes the place of 1 —> 1 Page Fault.

For the remaining references —> 0 Page Faults, because the pages are already in memory. Total: 6 page faults.
Optimal page replacement is perfect but not possible in practice, as the operating system cannot know future requests. Optimal replacement is instead used as a benchmark against which other replacement algorithms can be measured.
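Because a simulation knows the full reference string in advance, optimal replacement can be computed offline; a minimal sketch:

```python
# Optimal (Belady) replacement: on a miss, evict the resident page whose
# next use lies farthest in the future (or that is never used again).
def optimal_faults(refs, frames):
    memory, faults = set(), 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                            # hit: nothing to do
        faults += 1
        if len(memory) == frames:
            future = refs[i + 1:]
            # a page never referenced again has distance "infinity"
            victim = max(memory, key=lambda q: future.index(q)
                         if q in future else float("inf"))
            memory.discard(victim)
        memory.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))   # 6, matching the worked example above
```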

3. Least Recently Used (LRU): In this algorithm, the page replaced is the one that was least recently used.

Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Fault. When 3 comes it takes the place of 7, because 7 is least recently used —> 1 Page Fault.
0 is already in memory —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the remaining references —> 0 Page Faults, because the pages are already in memory. Total: 6 page faults.
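A minimal LRU simulation, using an OrderedDict as the recency list, reproduces the 6 faults counted above:

```python
from collections import OrderedDict

# LRU replacement: an OrderedDict keeps pages ordered from least to most
# recently used; the front entry is always the eviction victim.
def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)           # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)     # evict least recently used
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))   # 6
```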

4. Most Recently Used (MRU): In this algorithm, the page replaced is the one that was used most recently. Belady’s anomaly can occur in this algorithm.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.

0 is already there —> 0 Page Fault.

When 3 comes it takes the place of 0, because 0 is most recently used —> 1 Page Fault.

When 0 comes it takes the place of 3 —> 1 Page Fault.

When 4 comes it takes the place of 0 —> 1 Page Fault.

2 is already in memory —> 0 Page Fault.

When 3 comes it takes the place of 2 —> 1 Page Fault.

When 0 comes it takes the place of 3 —> 1 Page Fault.

When 3 comes it takes the place of 0 —> 1 Page Fault.

When 2 comes it takes the place of 3 —> 1 Page Fault.

When 3 comes it takes the place of 2 —> 1 Page Fault. Total: 12 page faults.
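A minimal MRU simulation reproduces the trace above (4 cold-start faults plus 8 replacement faults, 12 in total):

```python
# MRU replacement: on a miss with full memory, evict the page that was
# referenced most recently.
def mru_faults(refs, frames):
    memory, mru, faults = set(), None, 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(mru)        # evict the most recently used page
            memory.add(page)
        mru = page                         # every reference updates recency
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(mru_faults(refs, 4))   # 12, matching the trace above
```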

Assignment Question

1. Explain Logical- versus Physical-Address Space.

2. Discuss Swapping in Operating Systems.

3. Define Paging and explain the basic method of paging.

4. What is Demand Paging in an Operating System?

5. Explain Page Replacement Algorithms.
