OS R22 2-2 UNIT-4
Logical and physical addresses are the same in compile-time and load-time address
binding schemes; logical (virtual) and physical addresses differ in execution-time
address binding scheme.
Dynamic Loading
It loads the program and data dynamically into physical memory to obtain better
memory-space utilization. With dynamic loading, a routine is not loaded until
it is called.
The advantage of dynamic loading is that an unused routine is never loaded.
This method is useful when large amounts of code are needed to handle
infrequently
occurring cases, such as error routines.
Dynamic loading does not require special support from the operating system.
Dynamic Linking
Linking postponed until execution time.
Small piece of code (stub) used to locate the appropriate memory-resident
library routine.
The stub replaces itself with the address of the routine and executes the routine.
Operating-system support is needed to check whether the routine is in the process's memory address space.
Dynamic linking is particularly useful for libraries.
Overlays
Keep in memory only those instructions and data that are needed at any given
time.
Needed when process is larger than amount of memory allocated to it.
Overlays are implemented by the user; no special support is needed from the
operating system, but the programming design of the overlay structure is complex.
Swapping usually affects performance, but it also helps in running multiple
large processes in parallel. Swapping is also known as a technique for memory
compaction. Typically, low-priority processes are swapped out so that
higher-priority processes may be loaded and executed.
Let us understand this technique with the help of the figure given below, which
shows the swapping of two processes with the disk used as a backing store.
The procedure by which a process is moved from the hard disk into main memory
(RAM) is commonly known as swap-in. Conversely, swap-out is the method of
removing a process from main memory and writing it back to the hard disk.
Memory Allocation
1. First fit - Search the list of holes until one is found that is big
enough to satisfy the request, and assign a portion of that hole to that
process. Whatever fraction of the hole not needed by the request is
left on the free list as a smaller hole. Subsequent requests may start
looking either from the beginning of the list or from the point at
which this search ended.
2. Best fit - Allocate the smallest hole that is big enough to satisfy the
request. This saves large holes for other process requests that may
need them later, but the resulting unused portions of holes may be too small
to be of any use, and will therefore be wasted. Keeping the free list sorted can
speed up the process of finding the right hole.
3. Worst fit - Allocate the largest hole available, thereby increasing the likelihood
that the remaining portion will be usable for satisfying future requests.
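The three strategies above can be sketched as one allocation routine. This is a minimal illustrative model (hole sizes and the request value are made-up; a real allocator would also track hole addresses):

```python
def allocate(holes, request, strategy="first"):
    """Pick a free hole for `request` units using first-, best-, or worst-fit.

    `holes` is a list of free-hole sizes. Returns (hole_index, leftover),
    where the leftover stays on the free list as a smaller hole,
    or None if no hole is big enough.
    """
    # All holes large enough to satisfy the request, in list order.
    candidates = [(i, size) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        i, size = candidates[0]                        # first big-enough hole
    elif strategy == "best":
        i, size = min(candidates, key=lambda c: c[1])  # smallest big-enough hole
    else:  # "worst"
        i, size = max(candidates, key=lambda c: c[1])  # largest hole overall
    return i, size - request

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))  # (1, 288): the 500-unit hole
print(allocate(holes, 212, "best"))   # (3, 88): the 300-unit hole
print(allocate(holes, 212, "worst"))  # (4, 388): the 600-unit hole
```

Note how best fit leaves the smallest leftover (88), which may be too small to be useful, while worst fit leaves the largest (388).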
Paging solves the problem of fitting memory chunks of varying sizes onto the
backing store, a problem suffered by many memory management schemes.
Paging helps to avoid external fragmentation and the need for compaction.
The paging technique divides physical memory (main memory) into fixed-size
blocks known as frames and divides logical memory (the process's address space)
into blocks of the same size known as pages.
The Frame has the same size as that of a Page. A frame is basically a place where a
(logical) page can be (physically) placed.
Each process is mainly divided into parts where the size of each part is the same as
the page size.
There is a possibility that the size of the last part may be less than the page size.
Pages of a process are brought into the main memory only when there is a
requirement otherwise they reside in the secondary storage.
One page of a process is stored in one of the frames of memory.
The pages of a process can be stored at different, non-contiguous locations in
memory; there is no need to find contiguous frames.
Let us now cover the concept of translating a logical address into the physical
address:
The CPU always generates a logical address, which is divided into two parts:
1. Page Number (p)
2. Page Offset (d)
where,
the Page Number specifies the particular page of the process from which the CPU
wants to read data, and is also used as an index into the page table,
and the Page Offset specifies the particular word on that page that the CPU
wants to read.
Page Table in OS
The Page table mainly contains the base address of each page in the Physical
memory. The base address is then combined with the page offset in order to define
the physical memory address which is then sent to the memory unit.
Thus the page table provides the corresponding frame number (the base address of
the frame) where that page is stored in main memory.
As noted above, the frame number is combined with the page offset to form the
required physical address.
The physical address consists of two parts:
1. Frame Number (f)
2. Page Offset (d)
where,
the Frame Number indicates the specific frame where the required page is stored,
and the Page Offset indicates the specific word to be read from that page.
The page size (like the frame size) is defined by the hardware. It is important
to note that the size of a page is typically a power of 2, varying between 512
bytes and 16 MB per page, depending on the architecture of the computer.
If the size of the logical address space is 2 raised to the power m and the page
size is 2 raised to the power n addressing units, then the high-order m - n bits
of the logical address designate the page number and the n low-order bits
designate the page offset.
where p indicates the index into the page table, and d indicates the displacement
within the page.
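The split-and-translate steps above can be sketched in a few lines. The page size and page-table contents here are assumed illustrative values (n = 10, so a 1 KB page):

```python
PAGE_SIZE = 1024  # 2**10 bytes, i.e. n = 10 offset bits (assumed for illustration)

def split(logical_addr):
    """Split a logical address into (page number p, page offset d)."""
    p = logical_addr // PAGE_SIZE   # high-order m - n bits
    d = logical_addr % PAGE_SIZE    # low-order n bits
    return p, d

def translate(logical_addr, page_table):
    """Look up frame number f in the page table and combine it with the offset."""
    p, d = split(logical_addr)
    f = page_table[p]               # frame number for page p
    return f * PAGE_SIZE + d        # physical address

page_table = {0: 5, 1: 2, 2: 7}     # hypothetical page -> frame mapping
print(split(2060))                  # (2, 12): page 2, word 12 within the page
print(translate(2060, page_table))  # page 2 -> frame 7 -> 7*1024 + 12 = 7180
```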
The above diagram indicates the translation of the Logical address into the
Physical address. The PTBR in the above diagram means page table base register
and it basically holds the base address for the page table of the current process.
The PTBR is mainly a processor register and is managed by the operating system.
Commonly, each process running on a processor needs its own logical address
space.
But there is a problem with this approach: the time required to access a user
memory location. To access location i, we must first index into the page table,
using the value in the PTBR offset by the page number for i. This task requires
a memory access. It yields the frame number, which is combined with the page
offset to produce the actual address. After that, we can access the desired
location in memory.
With this scheme, two memory accesses are needed to access a byte (one for the
page-table entry and one for the byte itself), so memory access is slowed by a
factor of 2.
The standard solution to this problem is to use a special, small, fast-lookup
hardware cache commonly known as a translation look-aside buffer (TLB).
Each entry in the TLB mainly consists of two parts: a key(that is the tag) and
a value.
The number of entries in the TLB is small and generally lies in between 64
and 1024.
The TLB contains only a few of the page-table entries. Whenever the logical
address is generated by the CPU then its page number is presented to the TLB.
If the page number is found, its frame number is immediately available
and is used to access memory. This whole task may take less than 10 percent
longer than an unmapped memory reference would.
If the page number is not in the TLB (known as a TLB miss),
then a memory reference to the page table must be made.
When the frame number is obtained, it can be used to access memory.
Additionally, the page number and frame number are added to the TLB so that
they will be found quickly on the next reference.
If the TLB is already full of entries, the operating system must
select one for replacement.
The TLB allows some entries to be wired down, which means they cannot be
removed from the TLB. Typically, TLB entries for kernel code are wired down.
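The hit/miss behavior described above can be sketched as a tiny software model. Everything here is illustrative (capacity, the page table, and the oldest-entry replacement policy are assumptions; a real TLB is hardware and may use other policies):

```python
from collections import OrderedDict

class TLB:
    """Toy TLB: a small cache of (page -> frame) entries.

    Each entry has a key (the page-number tag) and a value (the frame number).
    On a miss with a full TLB, the oldest entry is replaced.
    """
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()       # insertion order = age order

    def lookup(self, page, page_table):
        """Return (frame, hit?) for `page`, consulting `page_table` on a miss."""
        if page in self.entries:
            return self.entries[page], True    # TLB hit: frame available at once
        frame = page_table[page]               # TLB miss: extra memory reference
        if len(self.entries) == self.capacity:
            self.entries.popitem(last=False)   # replace the oldest entry
        self.entries[page] = frame             # cache for the next reference
        return frame, False

page_table = {p: p + 10 for p in range(8)}     # hypothetical page -> frame map
tlb = TLB()
print(tlb.lookup(3, page_table))  # (13, False) -- first access misses
print(tlb.lookup(3, page_table))  # (13, True)  -- second access hits
```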
Paging Hardware With TLB
Advantages of Paging
Given below are some advantages of the paging technique in the operating system:
1. Any free frame can be allocated to any page, so there is no external
fragmentation and no need for compaction.
2. Memory allocation is simple, since pages and frames are all of the same
fixed size.
Disadvantages of Paging
1. Internal fragmentation may occur, since the last part of a process may not
completely fill a frame.
2. There is an increase in the time taken to fetch an instruction, since two
memory accesses are now required.
Paging Hardware
In the paging hardware, the page number (p) is used as an index into the page
table, which contains the base address of each page in physical memory. The
page offset (d) is combined with this base address to define the physical
memory address that is sent to the memory unit.
Now it is time to cover an example of paging:
Paging Example
No external fragmentation is caused by this scheme: any frame that is free can
be allocated to any process that needs it. However, internal fragmentation is
still present.
The first page of the process is loaded into the first frame that is listed on
the free-frame list, and then the frame number is put into the page table.
The frame table is a data structure that records which frames are allocated and
which are free, among other information. It has one entry for each physical
page frame.
The Operating system maintains a copy of the page table for each process in the
same way as it maintains the copy of the instruction counter and register contents.
Also, this copy is used to translate logical addresses to physical addresses
whenever the operating system maps a logical address to a physical address
manually.
This copy is also used by the CPU dispatcher in order to define the hardware page
table whenever a process is to be allocated to the CPU.
Segmented Paging
Pure segmentation is not very popular and is not used in many operating
systems. However, segmentation can be combined with paging to get the best
features of both techniques.
In Segmented Paging, the main memory is divided into variable size segments
which are further divided into fixed size pages.
Each page table contains information about every page of its segment, while the
segment table contains information about every segment. Each segment-table
entry points to the page table for that segment, and each page-table entry maps
one page of the segment to a frame.
Translation of logical address to physical address
The CPU generates a logical address, which is divided into two parts: a segment
number and a segment offset. The segment offset must be less than the segment
limit. The offset is further divided into a page number and a page offset. To
locate the entry in the page table, the page number is added to the page-table
base. The resulting frame number, combined with the page offset, is mapped into
main memory to get the desired word in the page of the given segment of the
process.
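The translation steps above can be sketched as follows. The segment table, page tables, segment limits, and page size are all made-up illustrative values:

```python
PAGE_SIZE = 256  # assumed page/frame size for this sketch

segment_table = {
    # segment number: (segment limit in bytes, page table for that segment)
    0: (1024, {0: 3, 1: 7, 2: 1, 3: 9}),
    1: (512,  {0: 4, 1: 6}),
}

def translate(seg, offset):
    """Translate (segment number, segment offset) to a physical address."""
    limit, page_table = segment_table[seg]
    if offset >= limit:                  # segment offset must be < segment limit
        raise MemoryError("offset beyond segment limit")
    p, d = divmod(offset, PAGE_SIZE)     # offset splits into page no. + page offset
    f = page_table[p]                    # index the segment's own page table
    return f * PAGE_SIZE + d             # frame number + page offset

print(translate(0, 700))  # page 2, offset 188 -> frame 1 -> 1*256 + 188 = 444
```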
Advantages of Segmented Paging
1. It reduces memory usage.
2. Page table size is limited by the segment size.
3. Segment table has only one entry corresponding to one actual segment.
4. External Fragmentation is not there.
5. It simplifies memory allocation.
However, deciding which pages need to be kept in main memory and which in
secondary memory is difficult, because we cannot say in advance which pages a
process will require at a particular time.
Whenever a page is referred to for the first time, it will be found in
secondary memory.
After that, it may or may not be present in the main memory depending upon the
page replacement algorithm which will be covered later in this tutorial.
The CPU has to fetch the missed page from secondary memory. If the number of
page faults is very high, the effective access time of the system becomes very
high.
What is Thrashing?
If the number of page faults equals the number of referenced pages, or the
number of page faults is so high that the CPU remains busy just reading pages
from secondary memory, then the effective access time approaches the time taken
by the CPU to read one word from secondary memory, which is very high. This
situation is called thrashing.
If the page-fault rate is PF, the time taken to get a page from secondary
memory and restart the instruction is S (the service time), and the memory
access time is ma, then the effective access time can be given as:
Effective Access Time = PF x S + (1 - PF) x ma
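As a quick sketch, the standard effective-access-time formula EAT = PF x S + (1 - PF) x ma can be evaluated with assumed illustrative numbers (an 8 ms fault-service time and a 200 ns memory access):

```python
def effective_access_time(pf_rate, service_time_ns, mem_access_ns):
    """EAT = PF * S + (1 - PF) * ma, with PF as a fraction (e.g. 0.001 = 0.1%)."""
    return pf_rate * service_time_ns + (1 - pf_rate) * mem_access_ns

# Hypothetical values: even a 0.1% fault rate dominates the access time.
print(effective_access_time(0.001, 8_000_000, 200))  # 8199.8 ns
print(effective_access_time(0.0,   8_000_000, 200))  # 200.0 ns (no faults)
```

This shows why a high page-fault rate makes the system thrash: the service time S is orders of magnitude larger than ma, so even a small PF inflates the effective access time drastically.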
Page Replacement in OS
Page Fault – A page fault occurs when a running program attempts to access a
memory page that is mapped into its virtual address space but is not currently
loaded in physical memory.
1. First In First Out (FIFO): This is the simplest page replacement algorithm.
The operating system keeps all pages in memory in a queue, with the oldest page
at the front. When a page must be replaced, the page at the front of the queue
is selected for removal.
Consider the reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames.
Initially, all slots are empty, so when 1, 3, and 0 come they are allocated to
the empty slots —> 3 page faults.
When 3 comes, it is already in memory —> 0 page faults. Then 5 comes; it is not
in memory, so it replaces the oldest page, 1 —> 1 page fault. 6 comes; it is
also not in memory, so it replaces the oldest page, 3 —> 1 page fault. Finally,
when 3 comes, it is not in memory, so it replaces 0 —> 1 page fault.
Belady’s anomaly shows that it is possible to have more page faults when
increasing the number of page frames while using the First In First Out (FIFO)
page replacement algorithm. For example, with the reference string 3, 2, 1, 0,
3, 2, 4, 3, 2, 1, 0, 4 and 3 frames we get 9 page faults in total, but if we
increase the number of frames to 4, we get 10 page faults.
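A short simulation confirms both the FIFO fault counts and Belady’s anomaly for the reference string above:

```python
from collections import deque

def fifo_page_faults(refs, capacity):
    """Simulate FIFO page replacement; return the number of page faults."""
    frames = deque()                   # oldest page sits at the left end
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == capacity:
                frames.popleft()       # evict the page loaded first
            frames.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_page_faults(refs, 3))  # 9 page faults with 3 frames
print(fifo_page_faults(refs, 4))  # 10 page faults with 4 frames: Belady's anomaly
```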
2. Optimal Page Replacement: In this algorithm, the page that will not be used
for the longest period of time in the future is replaced.
Initially, all slots are empty, so when 7, 0, 1, and 2 come they are allocated
to the empty slots —> 4 page faults.
0 is already there, so —> 0 page faults. When 3 comes, it takes the place of 7,
because 7 is not used for the longest duration of time in the future —> 1 page
fault. 0 is already there, so —> 0 page faults. 4 takes the place of 1 —> 1
page fault.
For the remaining page references —> 0 page faults, because they are already
available in memory.
Optimal page replacement is perfect, but not possible in practice as the operating
system cannot know future requests. The use of Optimal Page replacement is to
set up a benchmark so that other replacement algorithms can be analyzed against
it.
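The benchmark can be sketched in software, where the "future" is simply the rest of the reference string. The string here is the one implied by the walkthrough above (7, 0, 1, 2, 0, 3, 0, 4, followed by references that all hit; the exact tail is an assumption):

```python
def optimal_page_faults(refs, capacity):
    """Optimal (Belady's) replacement: evict the resident page whose next use
    lies farthest in the future, or that is never used again."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                       # hit: nothing to do
        faults += 1
        if len(frames) < capacity:
            frames.append(page)            # free frame available
            continue
        def next_use(q):
            try:
                return refs.index(q, i + 1)    # position of q's next reference
            except ValueError:
                return float("inf")            # never used again: ideal victim
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

print(optimal_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```

This only works offline, because `next_use` peeks at future references, which is exactly why a real operating system cannot implement it.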
3. Least Recently Used (LRU): In this algorithm, the page that has been least
recently used is replaced.
Initially, all slots are empty, so when 7, 0, 1, and 2 come they are allocated
to the empty slots —> 4 page faults.
0 is already there, so —> 0 page faults. When 3 comes, it takes the place of 7,
because 7 is the least recently used —> 1 page fault.
0 is already in memory, so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the remaining page references —> 0 page faults, because they are already
available in memory.
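LRU can be sketched with an ordered map in which a hit moves the page to the back, so the least recently used page is always at the front. The reference string matches the walkthrough above (the exact tail of the string is an assumption):

```python
from collections import OrderedDict

def lru_page_faults(refs, capacity):
    """Simulate LRU replacement; return the number of page faults."""
    frames, faults = OrderedDict(), 0      # front = least recently used
    for page in refs:
        if page in frames:
            frames.move_to_end(page)       # hit: refresh recency
        else:
            faults += 1
            if len(frames) == capacity:
                frames.popitem(last=False) # evict least recently used page
            frames[page] = True
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```

Real systems approximate this with hardware reference bits rather than a full recency ordering, since updating an exact ordering on every memory access would be too expensive.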
4. Most Recently Used (MRU): In this algorithm, the page that has been most
recently used is replaced. Belady’s anomaly can occur in this algorithm.
Initially, all slots are empty, so when 7, 0, 1, and 2 come they are allocated
to the empty slots —> 4 page faults.
When 3 comes, it takes the place of 0, because 0 is the most recently used
page —> 1 page fault.
Assignment Question