OS_3141601_Unit6_MemoryManagement
Memory Management
Topics to be covered
Basic Memory Management: Definition
Logical and Physical address map
Memory allocation
Paging
Virtual Memory: Basics of Virtual Memory
Hardware and control structures – Locality of reference
Page fault
Working Set
Dirty page/Dirty bit – Demand paging (concepts only)
Page Replacement Algorithms
What is Memory?
Computer memory is any physical device capable of storing
information temporarily or permanently.
Types of memory
1. Random Access Memory (RAM) is a volatile memory: it loses its
contents when the computer or hardware device loses power.
2. Read Only Memory (ROM) is a non-volatile memory (NVRAM): it
keeps its contents even if the power is lost.
The computer uses a special ROM called the BIOS (Basic Input Output
System), which permanently stores the software needed to access
computer hardware such as the hard disk, load an operating system
into RAM, and start executing it.
What is Memory? (cont…)
3. Programmable Read-Only Memory (PROM) is a memory chip on
which you can store a program. But once the PROM has been used,
you cannot wipe it clean and use it to store something else. Like
ROMs, PROMs are non-volatile. E.g. CD-R
4. Erasable Programmable Read-Only Memory (EPROM) is a special
type of PROM that can be erased by exposing it to ultraviolet
light. E.g. CD-RW
5. Electrically Erasable Programmable Read-Only Memory (EEPROM)
is a special type of PROM that can be erased by exposing it to an
electrical charge. E.g. Pen drive
What is Memory Hierarchy?
The hierarchical arrangement of storage in current computer
architectures is called the memory hierarchy.
[Figure: the memory hierarchy — registers at the top, then L1 cache, L2 cache and main memory; levels nearer the top are faster, more expensive and smaller in capacity.]
[Figure: the simplest memory models — the operating system in low RAM (starting at address 0) with a single user program loaded above it.]
Multiprogramming with Fixed partitions
There are two ways to maintain queue
1. Using Multiple Input Queues.
2. Using Single Input Queue.
[Figure: four fixed partitions (Partition 1–4) above the operating system. On the left, multiple input queues — one per partition; on the right, a single input queue feeding all partitions.]
Multiprogramming with Dynamic partitions
• Here, memory is shared among operating system and
various simultaneously running processes.
• Initially, the entire available memory is treated as a
single free partition.
• Each process is allocated exactly as much memory as it requires.
• Whenever a process enters the system, a chunk of free memory
big enough to fit the process is found and allocated. The
remaining unoccupied space is treated as another free partition.
Multiprogramming with Dynamic partitions
• If enough free memory is not available to fit the process, the
process must wait until the required memory becomes available.
• Whenever a process terminates, it releases the space it
occupied. If the released space is contiguous to another free
partition, the two free partitions are merged into a single free
partition.
• Memory utilization is better than with fixed-size partitions.
Multiprogramming with Dynamic partitions
Memory compaction
When swapping creates multiple holes in memory, it is possible
to combine them all into one big hole by moving all the processes
downward as far as possible. This technique is known as memory
compaction.
It requires a lot of CPU time.
[Figure: before and after compaction — processes A and B are moved down next to the operating system, merging the scattered holes into one large hole at the top of memory.]
Memory Allocation with Extra Space
[Figure: a partition whose extent is described by a base register and a limit register, with extra space left for growth.]
Memory can also be tracked as a linked list of allocated and free segments, for example:
H 18 2 → P 20 6 → P 26 3 → H 29 3 → X
Each node records whether the segment is a hole (H) or a process (P), its starting address and its length; X marks the end of the list.
When process P terminates, its list node is replaced by a hole and merged with any neighboring holes:
• A P B → A H B (P is replaced by H)
• A P H → A H (P is replaced by H and the two holes are merged)
• H P B → H B (P is replaced by H and the two holes are merged)
• H P H → H (P is replaced by H and all three holes are merged)
Memory allocation algorithms
Four memory allocation algorithms are as follow
1. First fit
2. Next fit
3. Best fit
4. Worst fit
First fit
Search starts from the starting location of the memory.
First available hole that is large enough to hold the process is
selected for allocation.
The hole is then broken up into two pieces, one for process and
another for unused memory.
Example: Processes of 212K, 417K, 112K and 426K arrive in order; the free holes are 100K, 500K, 200K, 300K and 600K.
• First fit places 212K in the 500K hole (leaving 288K), 417K in the 600K hole (leaving 183K), and 112K in the 288K remainder.
• Here the process of size 426K will not get any partition for allocation.
First fit
It is the fastest algorithm because it searches as little as possible.
Memory loss is higher, as a very large hole may be selected for a
small process.
Here the process of size 426K will not get any partition for allocation.
Next fit
Search time is smaller.
The memory manager must keep track of the last allotted hole.
It gives slightly worse performance than first fit.
Here the process of size 426K will not get any partition for allocation.
Worst fit
Search time is high, as it searches the entire memory every time.
This algorithm can be used only with dynamic partitioning.
Here process of size 426k will not get any partition for allocation.
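The four strategies can be compared with a small Python simulation of the slides' example (hole sizes are from the slides; the function name and the non-circular next-fit variant are illustrative assumptions):

```python
# Sketch of first/next/best/worst fit over a list of hole sizes.
# Holes shrink in place; returns the hole index chosen for each request
# (None when no hole is big enough and the process must wait).

def allocate(hole_sizes, requests, strategy):
    holes = list(hole_sizes)
    placements = []
    last = 0                                   # used only by next fit
    for req in requests:
        candidates = [i for i, h in enumerate(holes) if h >= req]
        if not candidates:
            placements.append(None)
            continue
        if strategy == "first":
            pick = candidates[0]
        elif strategy == "best":               # smallest hole that fits
            pick = min(candidates, key=lambda i: holes[i])
        elif strategy == "worst":              # largest hole that fits
            pick = max(candidates, key=lambda i: holes[i])
        elif strategy == "next":               # resume after last choice
            later = [i for i in candidates if i >= last]
            pick = later[0] if later else candidates[0]
            last = pick
        else:
            raise ValueError(strategy)
        holes[pick] -= req
        placements.append(pick)
    return placements

holes = [100, 500, 200, 300, 600]              # in K, from the slides
procs = [212, 417, 112, 426]
print(allocate(holes, procs, "first"))   # → [1, 4, 1, None]: 426K waits
print(allocate(holes, procs, "best"))    # → [3, 1, 2, 4]: all four fit
```

Notably, in this example best fit is the only strategy that finds room for the 426K process, placing it in the 600K hole.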
Virtual Memory
• In a system using virtual memory, the physical memory (RAM) is
divided into page frames and the virtual address space is divided
into equally sized partitions called pages.
• Pages not currently needed in RAM are kept on the hard disk (HDD).
• Virtual memory works fine in a multiprogramming system, with
pieces of many processes in memory at once.
[Figure: each process's virtual address space maps partly onto RAM and partly onto the HDD.]
Paging
Paging is a storage mechanism used to retrieve processes from
the secondary storage (Hard disk) into the main memory (RAM)
in the form of pages.
The main idea behind paging is to divide each process into pages.
The main memory is likewise divided into frames.
One page of the process is stored in one of the frames of the
memory.
The pages can be stored at different locations of the memory, but
the priority is always to find contiguous frames or holes.
Paging
Pages of the process are brought into main memory only when they
are required; otherwise they reside in secondary storage.
The size of every frame must be equal. Since pages are mapped onto
frames, the page size must be the same as the frame size.
Different operating systems define different frame sizes.
Paging
[Figure: a virtual address space divided into 4K pages (0K–44K) mapped onto a physical memory of 4K page frames (0K–24K). Virtual pages 0, 1, 3, 4, 5 and 8 map to page frames 2, 1, 0, 4, 3 and 5 respectively.]
Paging
The size of the virtual address space is greater than that of main
memory, so instead of loading the entire address space into memory
to run the process, the MMU copies only the required pages into
main memory.
To keep track of pages and page frames, the OS maintains a data
structure called the page table.
Logical Address vs Physical Address
A logical address is generated by the CPU while a program is running.
Because it does not exist physically, it is also known as a virtual
address.
A physical address identifies a physical location of required data in
memory. The user never deals directly with physical addresses, but
can access data through the corresponding logical addresses.
A hardware device called the Memory-Management Unit (MMU) maps
(converts) each logical address to its corresponding physical
address.
Conversion of virtual address to physical address
When virtual memory is used, the virtual address is presented to
an MMU (Memory Management Unit) that maps the virtual
addresses onto the physical memory addresses.
[Figure: the same mapping with absent pages marked X — only virtual pages 0, 1, 3, 4, 5 and 8 (page frames 2, 1, 0, 4, 3 and 5) are present in physical memory; a reference to any page marked X causes a page fault.]
Conversion of virtual address to physical address
We have a computer that generates 16-bit addresses, from 0 up to
64K. These are the virtual addresses.
[Figure: internal operation of the MMU with sixteen 4K pages. The incoming 16-bit virtual address is 8196 (binary 0010 000000000100). The high-order 4 bits — virtual page 2 — are used as an index into the 16-entry page table; each entry holds a page frame number and a present/absent bit. Entry 2 is present and holds frame 110 (binary). The frame number replaces the virtual page number, while the 12-bit offset is copied directly from input to output, giving the outgoing physical address 24580.]
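The figure's lookup can be sketched in Python (a minimal sketch; the page table below holds only the one entry this example needs):

```python
# Sketch of the MMU translation from the figure: 16-bit virtual address,
# 4K pages, so the top 4 bits select one of 16 page table entries.
PAGE_SHIFT = 12                        # 4K page → 12-bit offset

# virtual page → (present bit, frame number); frame 110 binary = 6 decimal
page_table = {2: (1, 0b110)}

def translate(vaddr):
    vpn = vaddr >> PAGE_SHIFT          # high-order 4 bits: virtual page
    offset = vaddr & 0xFFF             # low 12 bits, copied to the output
    present, frame = page_table.get(vpn, (0, None))
    if not present:
        raise RuntimeError("page fault")   # trap to the operating system
    return (frame << PAGE_SHIFT) | offset

print(translate(8196))  # virtual page 2, offset 4 → 6*4096 + 4 = 24580
```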
Modified bit: says whether the page has been modified or not. If the
page in memory has been modified, it must be written back to disk.
This bit is also called the dirty bit, as it reflects the page's
state.
Referenced bit: set whenever a page is referenced, either for
reading or writing. Its value helps the operating system in page
replacement algorithms.
Caching disabled bit: important for pages that map onto device
registers rather than memory. With this bit, caching can be turned
off.
Demand Paging
In a demand-paged system, processes are started up with none of
their pages in memory.
When the CPU tries to fetch the first instruction, it gets a page
fault; other page faults, for global variables and the stack,
usually follow quickly.
After a while, the process has most of the pages it needs in main
memory and causes few page faults.
This strategy is called demand paging because pages are loaded
only on demand, not in advance.
Definitions
Working set: the set of pages that a process is currently using is
known as its working set.
Thrashing: a program causing page faults every few instructions is
said to be thrashing.
Pre-paging: many paging systems try to keep track of each
process's working set and make sure that it is in memory before
letting the process run.
Loading pages before allowing a process to run is called
pre-paging.
Issues in Paging
In any paging system, two major issues must be faced:
1. The mapping from virtual address to physical address must be
fast.
2. If the virtual address space is large, the page table will be large.
Mapping from virtual address to physical address must be fast
• For every instruction, two memory references occur: one to read
the page table entry and one to access the data itself, so
performance is reduced by half.
• The solution is to use a small piece of hardware called the TLB
(Translation Lookaside Buffer).
[Figure: the CPU issues logical address (P, D); the page table in memory (P1 → F2, P2 → F3, P3 → F1) supplies the frame number, producing physical address (F3, D) for page P2.]
Mapping from virtual address to physical address must be fast
[Figure: the TLB is a hardware table inside the MMU holding a few recently used (page, frame) pairs; the full page table (P1 → F2, P2 → F3, P3 → F1) is a data structure inside memory. A logical address (P, D) that hits in the TLB is translated to (F, D) without consulting the in-memory page table.]
Mapping from virtual address to physical address using TLB
Steps in TLB hit:
1. The CPU generates a virtual address.
2. The address is checked in the TLB (present).
3. The corresponding frame number is retrieved, which tells where
in main memory the page lies.
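The hit/miss behaviour can be sketched with a toy dictionary-based model (a real TLB is an associative hardware cache; names here are illustrative):

```python
# The MMU first consults the TLB; only on a miss does it walk the page
# table in memory (the slow path that costs an extra memory reference).
page_table = {"P1": "F2", "P2": "F3", "P3": "F1"}   # from the slide
tlb = {}                                            # starts empty

def lookup(page):
    if page in tlb:
        return tlb[page], "hit"        # no extra memory reference
    frame = page_table[page]           # miss: read the page table
    tlb[page] = frame                  # cache the translation
    return frame, "miss"

print(lookup("P2"))  # → ('F3', 'miss')
print(lookup("P2"))  # → ('F3', 'hit')
```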
Inverted Page Table
[Figure: an inverted page table keeps one entry per page frame rather than one per virtual page. The virtual address (page number 2, offset) is looked up; the entry's process ID does not match the current process ID (245), so the chaining pointer is followed to the next entry.]
Optimal Page Replacement Algorithm
Replace the page that will not be used for the longest period of time.
Page requests: 7 0 1 2 0 3 0 4 2 3 0 3 2 0 2 0 1 7 0 1
Frame 1: 7 7 7 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1
Frame 2: _ 0 0 0 0 0 0 4 4 4 0 0 0 0 0 0 0 0 0 0
Frame 3: _ _ 1 1 1 3 3 3 3 3 3 3 3 3 3 3 3 7 7 7
Page faults: 9
FIFO Page Replacement Algorithm
The first in first out page replacement algorithm is the simplest
page replacement algorithm.
The operating system maintains a list of all pages currently in
memory, with the most recently arrived page at the tail and least
recent at the head.
On a page fault, the page at head is removed and the new page is
added to the tail.
When a page replacement is required the oldest page in memory
needs to be replaced.
The performance of the FIFO algorithm is not always good, because
the oldest page may still be referenced frequently.
Hence removing the oldest page may cause another page fault soon
after.
FIFO Page Replacement Algorithm
Page Reference String:
• 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
• Three frames
Page requests: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame 1: 7 7 7 2 2 2 2 4 4 4 0 0 0 0 0 0 0 7 7 7
Frame 2: _ 0 0 0 0 3 3 3 2 2 2 2 2 1 1 1 1 1 0 0
Frame 3: _ _ 1 1 1 1 0 0 0 3 3 3 3 3 2 2 2 2 2 1
Page faults: 15
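The table can be checked with a short FIFO simulation (a sketch; `fifo_faults` is an illustrative name, not from the slides):

```python
from collections import deque

# FIFO page replacement: oldest page at the head, new pages at the tail.
def fifo_faults(refs, nframes):
    frames = deque()
    faults = 0
    for page in refs:
        if page in frames:
            continue                   # hit: FIFO order does not change
        faults += 1
        if len(frames) == nframes:
            frames.popleft()           # evict the oldest page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # → 15, matching the table
```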
Second Chance Page Replacement Algorithm
It is modified form of the FIFO page replacement algorithm.
It looks at the front of the queue as FIFO does, but instead of
immediately paging out that page, it checks to see if its
referenced bit is set.
• If it is not set (zero), the page is swapped out.
• Otherwise, the referenced bit is cleared, the page is inserted at
the back of the queue (as if it were a new page) and this process
is repeated.
Example (the R bit is shown with each page; A was loaded first):
• Queue: A(1) B(0) C(1) D(0) E(1). A is at the front with R = 1, so its bit is cleared and it is moved to the end: B(0) C(1) D(0) E(1) A(0).
• B is now at the front with R = 0, so B is removed and the new page F is added at the end: C(1) D(0) E(1) A(0) F(1).
Second Chance Page Replacement Algorithm
If all the pages have their referenced bit set, on the second
encounter of the first page in the list, that page will be swapped
out, as it now has its referenced bit cleared.
If all the pages have their reference bit cleared, then second
chance algorithm degenerates into pure FIFO.
Example: all pages have R = 1, so each gets a second chance (R cleared, moved to the back) until the first page comes around again with R = 0 and is replaced:
A(1) B(1) C(1) D(1) E(1) → A(0) B(0) C(0) D(0) E(0) → B(0) C(0) D(0) E(0) F(1)
Clock Page Replacement Algorithm
In the second chance algorithm, pages are constantly being moved
around on the list, which is inefficient.
A better approach is to keep all the page frames on a circular list
in the form of a clock.
[Figure: six pages A–F arranged in a circle like the face of a clock, each with its R bit (A=1, B=0, C=1, D=0, E=1, F=1); a hand points to the oldest page.]
Clock Page Replacement Algorithm
When a page fault occurs, the page being pointed to by the hand
is inspected.
[Figure: the same clock — the hand points to the page to be inspected.]
Clock Page Replacement Algorithm
If its R bit is 0, the page is evicted, the new page is inserted into
the clock in its place, and the hand is advanced one position.
[Figure: the page with R = 0 (B) is evicted, the new page G is inserted in its place, and the hand advances one position.]
Clock Page Replacement Algorithm
If R is 1, it is cleared and the hand is advanced to the next page.
[Figure: a page with R = 1 has its bit cleared to 0 and the hand advances to the next page.]
This process is repeated until
a page is found with R = 0.
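The hand's behaviour can be sketched with a list-based model using the R bits from the figure (a sketch only; a real implementation keeps the frames in a circular structure):

```python
# Clock page replacement: advance the hand, clearing R bits, until a
# page with R = 0 is found; that page is replaced in place.
def clock_replace(frames, rbits, hand, new_page):
    while True:
        if rbits[hand] == 0:
            frames[hand] = new_page    # evict and insert in place
            rbits[hand] = 1            # the new page was just referenced
            return (hand + 1) % len(frames)
        rbits[hand] = 0                # second chance: clear R
        hand = (hand + 1) % len(frames)

frames = ["A", "B", "C", "D", "E", "F"]
rbits  = [1, 0, 1, 0, 1, 1]            # R bits from the figure
hand = clock_replace(frames, rbits, 0, "G")
print(frames)  # A loses its R bit; B (R = 0) is replaced by G
# → ['A', 'G', 'C', 'D', 'E', 'F']
```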
LRU (Least Recently Used) Page Replacement Algorithm
A good approximation to the optimal algorithm is based on the
observation that pages that have been heavily used in the last few
instructions will probably be heavily used again in the next few
instructions.
When a page fault occurs, throw out the page that has gone unused
for the longest time. This strategy is called LRU (Least Recently
Used) paging.
To fully implement LRU, it is necessary to maintain a linked list of
all pages in memory, with the most recently used page at the
front and the least recently used page at the rear.
The list must be updated on every memory reference.
Finding a page in the list, deleting it, and then moving it to the
front are very time consuming operations.
LRU (Least Recently Used) Page Replacement Algorithm
Page Reference String:
• 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
• Three frames
Page requests: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
Frame 1: 7 7 7 2 2 2 2 4 4 4 0 0 0 1 1 1 1 1 1 1
Frame 2: _ 0 0 0 0 0 0 0 0 3 3 3 3 3 3 0 0 0 0 0
Frame 3: _ _ 1 1 1 3 3 3 2 2 2 2 2 2 2 2 2 7 7 7
Page faults: 12
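The LRU table can be checked with a short simulation (a sketch; the Python list plays the role of the linked list described above, with the least recently used page at the front):

```python
# LRU page replacement: on a hit, move the page to the most-recent end;
# on a fault with full frames, evict the page at the least-recent end.
def lru_faults(refs, nframes):
    frames = []                        # most recently used page is last
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)        # hit: refresh its position
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict least recently used
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # → 12, matching the table
```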
NRU (Not Recently Used) Page Replacement Algorithm
NRU makes an approximation: it replaces a page based on its R
(referenced) and M (modified) bits.
When a process is started up, both page bits for all pages are set
to 0 by operating system.
Periodically, the R bit is cleared, to distinguish pages that have not
been referenced recently from those that have been.
When a page fault occurs, the operating system inspects all the
pages and divides them into four classes based on the current
values of their R and M bits:
1. Class 0: not referenced, not modified
2. Class 1: not referenced, modified
3. Class 2: referenced, not modified
4. Class 3: referenced, modified
The NRU (Not Recently Used) algorithm removes a page at
random from the lowest numbered nonempty class.
NRU (Not Recently Used) Page Replacement Algorithm
For example if,
1. Page-0 is of class-2 (referenced, not modified)
2. Page-1 is of class-1 (not referenced, modified)
3. Page-2 is of class-0 (not referenced, not modified)
4. Page-3 is of class-3 (referenced, modified)
So lowest class page-2 needs to be replaced by NRU.
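The class of a page is simply 2·R + M, so the NRU choice for this example is one line of Python (a sketch; real NRU picks at random within the lowest nonempty class):

```python
# NRU: class = 2*R + M; evict from the lowest-numbered nonempty class.
pages = {           # page number → (R, M), from the example above
    0: (1, 0),      # class 2: referenced, not modified
    1: (0, 1),      # class 1: not referenced, modified
    2: (0, 0),      # class 0: not referenced, not modified
    3: (1, 1),      # class 3: referenced, modified
}

def nru_victim(pages):
    return min(pages, key=lambda p: 2 * pages[p][0] + pages[p][1])

print(nru_victim(pages))  # → 2, the class-0 page
```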
Belady’s Anomaly
Page reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Page faults with four frames (10) > page faults with three frames (9): Belady's anomaly.

Three frames:
Page requests: 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1: 1 1 1 4 4 4 5 5 5 5 5 5
Frame 2: _ 2 2 2 1 1 1 1 1 3 3 3
Frame 3: _ _ 3 3 3 2 2 2 2 2 4 4
Page faults: 9

Four frames:
Page requests: 1 2 3 4 1 2 5 1 2 3 4 5
Frame 1: 1 1 1 1 1 1 5 5 5 5 4 4
Frame 2: _ 2 2 2 2 2 2 1 1 1 1 5
Frame 3: _ _ 3 3 3 3 3 3 2 2 2 2
Frame 4: _ _ _ 4 4 4 4 4 4 3 3 3
Page faults: 10
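The anomaly can be reproduced with a FIFO simulation (a self-contained sketch; `fifo_faults` is an illustrative name):

```python
from collections import deque

# FIFO page replacement; counts faults for a given number of frames.
def fifo_faults(refs, nframes):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()       # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # → 9 10
```

Adding a fourth frame makes the fault count go up, exactly as the tables above show.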
Belady’s Anomaly
Belady’s anomaly is the phenomenon in which increasing the
number of page frames results in an increase in the number of
page faults for certain memory access patterns.
This phenomenon is commonly experienced when using the first-
in first-out (FIFO) page replacement algorithm.
In FIFO, page faults may or may not increase as the number of page
frames increases, but in Optimal and stack-based algorithms like
LRU, page faults decrease as the number of page frames increases.
Sum
A computer has four page frames. The time of loading, the time of
last access, and the R and M bits for each page are given below.
Which page will FIFO, LRU and NRU replace?
Page Loaded Last Ref. R M
0 126 280 1 0
1 230 265 0 1
2 140 270 0 0
3 110 285 1 1
FIFO:- Replace the page that was loaded earliest. Page-3 (loaded at
time 110) is the oldest of the four, so FIFO replaces page-3.
LRU:- When a page fault occurs, throw out the page that has gone
unused for the longest time. Page-1 (last referenced at time 265)
has gone unused the longest of the four, so LRU suggests replacing
page-1.
Sum
A computer has four page frames. The time of loading, the time of last access, and
the R and M bits for each page are given below. Which page will FIFO, LRU and NRU
replace?
Page Loaded Last Ref. R M
0 126 280 1 0
1 230 265 0 1
2 140 270 0 0
3 110 285 1 1
NRU:-
• Page-0 is of class-2 (referenced, not modified)
• Page-1 is of class-1 (not referenced, modified)
• Page-2 is of class-0 (not referenced, not modified)
• Page-3 is of class-3 (referenced, modified)
• So lowest class page-2 needs to be replaced by NRU
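All three answers can be checked mechanically (a sketch; the dict simply mirrors the table's columns):

```python
# page → (loaded, last_ref, R, M), copied from the table
pages = {
    0: (126, 280, 1, 0),
    1: (230, 265, 0, 1),
    2: (140, 270, 0, 0),
    3: (110, 285, 1, 1),
}

fifo = min(pages, key=lambda p: pages[p][0])                    # earliest load time
lru = min(pages, key=lambda p: pages[p][1])                     # oldest last reference
nru = min(pages, key=lambda p: 2 * pages[p][2] + pages[p][3])   # lowest NRU class

print(fifo, lru, nru)  # → 3 1 2
```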
Segmentation
Segmentation is a memory management technique in which each
job is divided into several segments of different sizes, one for
each module that contains pieces that perform related functions.
Each segment is actually a different logical address space of the
program.
When a process is to be executed, its corresponding segments are
loaded into non-contiguous memory, though every segment is loaded
into a contiguous block of available memory.
Segmentation works very much like paging, but segments are of
variable length, whereas pages are of fixed size.
Segmentation
[Figure: process P's segments (Segment 1 … Segment N) placed into physical memory (addresses 100–1100) via the segment map table.]
Segment map table:
Sr | Size | Memory Address
1 | 300 | 100
2 | 500 | 400
3 | 100 | 900
N | X | NM
Segmentation
A program segment contains the program's main function, utility
functions, data structures, and so on.
The operating system maintains a segment map table for every
process.
Segment map table contains list of free memory blocks along
with segment numbers, their size and corresponding memory
locations in main memory.
For each segment, the table stores the starting address of the
segment and the length of the segment.
A reference to a memory location includes a value that identifies a
segment and an offset.
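Address translation under segmentation can be sketched with the slide's segment map table (a minimal sketch; names are illustrative):

```python
# segment number → (base address, size), from the segment map table
segment_table = {1: (100, 300), 2: (400, 500), 3: (900, 100)}

def translate(segment, offset):
    base, size = segment_table[segment]
    if offset >= size:                 # limit check protects the segment
        raise RuntimeError("segmentation fault: offset beyond limit")
    return base + offset               # physical address

print(translate(2, 50))  # → 450 (segment 2 starts at 400)
```

The limit check is what makes per-segment protection possible, one of the advantages listed in the comparison below.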
Paging VS Segmentation
Paging:
• Invented to get a large address space without having to buy more physical memory.
• The programmer is not aware that paging is used.
• Procedures and data cannot be distinguished and protected separately.
• A change in data or a procedure requires recompiling the entire program.
• Sharing of different procedures is not available.
Segmentation:
• Invented to allow programs and data to be broken up into logically independent address spaces and to add sharing and protection.
• The programmer is aware that segmentation is used.
• Procedures and data can be distinguished and protected separately.
• A change in data or a procedure requires recompiling only the affected segment, not the entire program.
• Sharing of different procedures is available.
Question Bank
1. Explain multiprogramming with fixed partition.
2. Explain link list method for dynamic memory management.
3. How can free space be managed by the OS?
4. Explain Swapping and Fragmentation in detail.
5. What is Paging? Explain paging mechanism in MMU with
example.
6. Explain TLB and Virtual Memory.
7. Discuss demand paging.
8. Define the following term: thrashing.
9. List different page replacement algorithms and discuss them in
terms of page faults.
10. What is Belady’s anomaly? Explain with suitable example.
11. Given six partitions of 300KB, 600KB, 350KB, 200KB, 750KB and
125KB (in order), how would the first-fit, best-fit and worst-fit
algorithms place processes of size 115KB, 500KB, 358KB, 200KB
and 375KB (in order)? Which algorithm makes the most efficient
use of memory?
12. Calculate the page fault rates for below reference string in case of
FIFO and Optimal page replacement algorithm. Assume the
memory size is 4 page frames and all frames are initially empty.
0,2,1,6,4,0,1,0,3,1,2,1
13. Consider the page reference string (70120304230321201701). How
many page faults would occur for the following page replacement
algorithms, with 3 frames and with 4 frames?
1. FIFO 2. LRU 3. Optimal
14. Consider the following reference string. Calculate the page fault
rates for FIFO and OPTIMAL page replacement algorithm. Assume
the memory size is 4 page frame.
1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2
OR
Consider the following page reference string:
1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2 With four frames, how
many page faults would occur for the FIFO and Optimal page
replacement algorithms? Which algorithm is efficient? (Assume all
frames are initially empty.)