
Unit 3

Memory Management Background


Memory management is the process of controlling and coordinating computer memory,
assigning portions known as blocks to the various running programs in order to optimize the
overall performance of the system.
It is an important function of the operating system that manages primary memory. It moves
processes back and forth between main memory and the disk during execution, and it lets the
OS keep track of every memory location, irrespective of whether it is allocated to some
process or remains free.

Swapping
Swapping is a memory management technique in which data is moved between main memory
and secondary memory for better memory utilization; it can also be used to improve the
operating system’s performance.

Swapping moves data between physical memory (RAM) and secondary memory. In computing,
virtual memory is a related management technique that combines a computer’s hard-disk
space with its random access memory (RAM) to create a larger virtual address space. This is
useful when there are too many processes running on the system and not enough physical
memory to hold them all.
While performing swapping, the operating system needs to allocate a block of memory; it
finds the first vacant block of physical memory. As each new block of memory is allocated, it
replaces the oldest block in physical memory. When a program attempts to access a page that
has been swapped out, the operating system copies the page from disk back into physical
memory and updates the page table entry.
The purpose of swapping is to increase the degree of multiprogramming and to make better
use of main memory.

There are two steps in swapping:
 Swap-in: a swap-in moves a process from secondary storage (the hard disk) into main
memory (RAM).
 Swap-out: a swap-out takes a process out of main memory and places it in secondary
memory.
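
A minimal sketch of the two steps, assuming a toy model in which main memory holds a fixed
number of processes (the names MAX_RESIDENT, swap_in and swap_out, and the process names
are illustrative, not part of any real OS API):

from collections import deque

MAX_RESIDENT = 3     # assumed capacity of main memory, in processes

ram = deque()        # processes currently in main memory (oldest at the left)
disk = []            # processes swapped out to secondary storage

def swap_in(process):
    """Bring a process into main memory, swapping out the oldest resident if needed."""
    if len(ram) >= MAX_RESIDENT:
        swap_out(ram.popleft())          # oldest resident leaves RAM first
    if process in disk:
        disk.remove(process)
    ram.append(process)
    print(f"swap-in : {process} -> RAM {list(ram)}")

def swap_out(process):
    """Move a process from main memory to secondary storage."""
    disk.append(process)
    print(f"swap-out: {process} -> disk {disk}")

# Example: four processes compete for three memory slots.
for p in ["P1", "P2", "P3", "P4"]:
    swap_in(p)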

Contiguous Memory Allocation Schemes


Contiguous memory allocation refers to a memory management technique in which, whenever
a user process requests memory, one section of a contiguous memory block is given to that
process in accordance with its requirement.

Contiguous memory allocation divides memory into the following types of partitions.
 Fixed-Sized Partitions
Another name for this is static partitioning. In this case, memory gets divided into multiple
fixed-sized partitions, and every partition may hold exactly one process. This limits the
degree of multiprogramming, since the total number of partitions decides the maximum
number of processes.
 Variable-Sized Partitions
Dynamic partitioning is another name for this. Here, allocation is done dynamically: the size
of each partition is not declared in advance, and a partition’s size is known only once the
process size is known. Because the partition is made equal to the process size, internal
fragmentation is prevented.
In static partitioning, on the other hand, a process that is smaller than its partition wastes
some of the partition’s space (internal fragmentation); dynamic partitioning solves this issue.
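
The contrast between the two schemes can be shown with a small sketch, assuming
hypothetical partition and process sizes in KB; fixed partitions leave internal fragmentation,
while variable partitions are carved to fit each request exactly:

# Fixed-sized (static) partitions: each partition holds one process, and any
# unused space inside the partition is internal fragmentation.
fixed_partitions = [100, 100, 100]          # assumed partition sizes in KB
requests = [40, 90, 70]                     # hypothetical process sizes in KB
for part, req in zip(fixed_partitions, requests):
    print(f"process of {req} KB in {part} KB partition -> "
          f"{part - req} KB internal fragmentation")

# Variable-sized (dynamic) partitions: the partition is created to match the
# request exactly, so there is no internal fragmentation.
free_memory = 300                           # assumed total free space in KB
for req in requests:
    if req <= free_memory:
        free_memory -= req
        print(f"allocated {req} KB partition, {free_memory} KB left, "
              f"0 KB internal fragmentation")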

Paging
Paging is a storage mechanism that allows the OS to retrieve processes from secondary
storage into main memory in the form of pages. In the paging method, main memory is
divided into small fixed-size blocks of physical memory called frames. The size of a frame is
kept the same as that of a page to get maximum utilization of main memory and to avoid
external fragmentation. Paging is used for faster access to data, and it is a logical concept.
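
A minimal sketch of the paging idea, assuming a 1 KB page size and a hypothetical page table
that maps page numbers to frame numbers:

PAGE_SIZE = 1024                     # assumed page/frame size in bytes

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Translate a logical address to a physical address using the page table."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

print(translate(2100))   # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220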

Segmentation
The segmentation method works almost like paging. The only difference between the two is
that segments are of variable length, whereas in the paging method the pages are always of
fixed size.
A program segment includes the program's main function, data structures, utility functions,
etc. The OS maintains a segment map table for every process, which also includes a list of
free memory blocks along with their sizes, segment numbers, and their memory locations in
main memory or virtual memory.
Paging vs. Segmentation:
 A page is of fixed block size, while a segment is of variable size.
 In paging, the hardware decides the page size; in segmentation, the segment size is
specified by the user.
 The paging technique is faster for memory access; segmentation is slower than paging.
 The page table stores the page data (page-to-frame mapping); the segment table stores the
segmentation data (base and limit of each segment).
 The paging address space is one-dimensional; segmentation provides many independent
address spaces.
 In paging, a process address space is broken into fixed-sized blocks called pages; in
segmentation, it is broken into differently sized blocks called segments.
 Paging may lead to internal fragmentation; segmentation may lead to external
fragmentation.
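
A minimal sketch of segmented address translation, assuming a hypothetical segment table of
(base, limit) pairs; an offset beyond the limit is rejected:

# Hypothetical segment table: segment number -> (base address, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment, offset):
    """Translate a (segment, offset) pair to a physical address, checking the limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset outside the segment")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353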

Virtual Memory Management: Background
Virtual memory is a storage mechanism which gives the user the illusion of having a very big
main memory. It is done by treating a part of secondary memory as if it were main memory.
With virtual memory, the user can run processes whose size is bigger than the available main
memory.
Therefore, instead of loading one long process into main memory, the OS loads parts of more
than one process into main memory. Virtual memory is mostly implemented with demand
paging and demand segmentation.
How Does Virtual Memory Work?
Virtual memory is used whenever some pages need to be loaded into main memory for
execution and there is not enough memory available for all of those pages.
In that case, instead of preventing pages from entering main memory, the OS moves the pages
in RAM that have been used least recently, or that are not being referenced, out to secondary
memory to make space for the new pages in main memory.
Let’s assume that an OS requires 300 MB of memory to store all the running programs.
However, there is currently only 50 MB of physical memory (RAM) available.
 The OS will then set up 250 MB of virtual memory and use a program called the Virtual
Memory Manager (VMM) to manage that 250 MB.
 So, in this case, the VMM will create a file on the hard disk that is 250 MB in size to
store the extra memory that is required.
 The OS will now proceed to address memory as if there were 300 MB of real memory in
RAM, even though only 50 MB is available.
 It is the job of the VMM to manage 300 MB of memory even though just 50 MB of real
memory space is available.
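
The arithmetic behind this example is simply that the disk file backs the shortfall between
the required and the physical memory; a tiny illustrative sketch:

required_memory = 300   # MB needed by all running programs
physical_memory = 50    # MB of RAM actually available

# The VMM backs the shortfall with a file on disk (the swap/paging file).
swap_file_size = required_memory - physical_memory
print(f"swap file of {swap_file_size} MB backs the {required_memory} MB address space")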

Demand Paging scheme


Every process contains many pages, but it is not efficient to put all of a process's pages in
main memory, since RAM size is limited. It is better to load pages according to need while
the process is executing, because an application may not need all of its pages to run. For
example, consider a program that calculates the overall cost of painting a room, where
different pages contain different code. Let’s say this process has three pages P1, P2, and P3:
 P1: code for calculating the cost of labor.
 P2: code for calculating the cost of the paint.
 P3: code for calculating the surface area.
Suppose the user wants to calculate only the cost of labor. Then the CPU will demand only
page P1, and there is no point in loading pages P2 and P3. If there were no concept of
demand paging, the whole program would have to be loaded.
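
A minimal sketch of demand paging for this example, assuming hypothetical page names; a page
is loaded only when it is first referenced, so untouched pages never occupy a frame:

process_pages = {"P1": "cost of labor", "P2": "cost of paint", "P3": "surface area"}

memory = {}   # pages currently loaded into main memory

def access(page):
    """Load a page on first reference (page fault), then run its code."""
    if page not in memory:
        print(f"page fault: loading {page} ({process_pages[page]}) from disk")
        memory[page] = process_pages[page]
    print(f"executing {page}")

access("P1")   # only P1 is brought in; P2 and P3 stay on disk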

Process Creation
1) When a new process is created, the operating system assigns a unique Process Identifier
(PID) to it and inserts a new entry in the primary process table.
2) Then required memory space for all the elements of the process such as program, data,
and stack is allocated including space for its Process Control Block (PCB).
3) Next, the various values in the PCB are initialized:
 The process identification part is filled with the PID assigned in step (1) and with its
parent’s PID.
 The processor register values are mostly filled with zeroes, except for the stack
pointer and program counter. The stack pointer is filled with the address of the stack
allocated in step (2), and the program counter is filled with the address of the
program’s entry point.
 The process state information would be set to ‘New’.
 Priority would be lowest by default, but the user can specify any priority during
creation.
4) Then the operating system will link this process to the scheduling queue and the process
state would be changed from ‘New’ to ‘Ready’. Now the process is competing for the CPU.
5) Additionally, the operating system creates other data structures such as log files or
accounting files to keep track of process activity.
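
A minimal sketch of these steps, assuming a simplified PCB with only the fields mentioned
above (the stack address and program entry point used here are made-up values):

import itertools

pid_counter = itertools.count(1)    # source of unique PIDs
process_table = {}                  # primary process table: PID -> PCB
ready_queue = []                    # scheduling (ready) queue

def create_process(program_entry_point, parent_pid=0, priority=0):
    """Create a PCB, initialize it, and move the process from New to Ready."""
    pid = next(pid_counter)                         # step 1: assign a unique PID
    stack_base = 0x7FFF0000 - pid * 0x10000         # step 2: pretend stack allocation
    pcb = {                                         # step 3: initialize the PCB
        "pid": pid,
        "parent_pid": parent_pid,
        "registers": {"sp": stack_base, "pc": program_entry_point},
        "state": "New",
        "priority": priority,                       # lowest priority by default
    }
    process_table[pid] = pcb                        # entry in the primary process table
    pcb["state"] = "Ready"                          # step 4: link to the scheduling queue
    ready_queue.append(pid)
    return pid

pid = create_process(program_entry_point=0x400000)
print(process_table[pid])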

Page Replacement Policies


Page replacement is needed in operating systems that use virtual memory with demand paging.
As we know, in demand paging only a subset of a process's pages is loaded into memory. When
a process requests a page that is currently not in main memory, the operating system needs
to decide which resident page will be replaced by the requested page. This process is known
as page replacement and is a vital component of virtual memory management.
Page Replacement Algorithms
 First In First Out (FIFO)

FIFO algorithm is the simplest of all the page replacement algorithms. In this, we maintain a
queue of all the pages that are in the memory currently. The oldest page in the memory is at
the front-end of the queue and the most recent page is at the back or rear-end of the queue.
Whenever a page fault occurs, the operating system looks at the front-end of the queue to
know the page to be replaced by the newly requested page. It also adds this newly requested
page at the rear-end and removes the oldest page from the front-end of the queue.
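
A minimal sketch of FIFO replacement that counts page faults for a hypothetical reference
string and frame count:

from collections import deque

def fifo_page_faults(reference_string, frame_count):
    """Count page faults under FIFO replacement."""
    frames = deque()                 # oldest page at the left (front of the queue)
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()     # evict the oldest resident page
            frames.append(page)      # newest page goes to the rear
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], frame_count=3))   # 6 page faults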

 Optimal Page Replacement


In this algorithm, the pages are replaced with the ones that will not be used for the longest
duration of time in the future. In simple terms, the pages that will be referred farthest in the
future are replaced in this algorithm.
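
A minimal sketch of the optimal policy, which assumes the entire future reference string is
known in advance (the reference string here is hypothetical):

def optimal_page_faults(refs, frame_count):
    """Count page faults when the page used farthest in the future is evicted."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < frame_count:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest away (or never).
        def next_use(p):
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(optimal_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], frame_count=3))   # 6 page faults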
 Least Recently Used (LRU) Page Replacement
This algorithm works on the principle of locality of reference, which states that a program
tends to access the same set of memory locations repeatedly over a short period of time. So
pages that have been used heavily in the recent past are most likely to be used heavily in
the future as well, and the page that has gone unused for the longest time is the one
replaced.
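
A minimal sketch of LRU replacement, using an ordered dictionary as the recency list (the
reference string and frame count are hypothetical):

from collections import OrderedDict

def lru_page_faults(refs, frame_count):
    """Count page faults when the least recently used page is evicted."""
    recency = OrderedDict()          # least recently used page at the front
    faults = 0
    for page in refs:
        if page in recency:
            recency.move_to_end(page)        # mark as most recently used
            continue
        faults += 1
        if len(recency) == frame_count:
            recency.popitem(last=False)      # evict the least recently used page
        recency[page] = True
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], frame_count=3))   # 8 page faults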
 Last In First Out (LIFO) Page Replacement
In this algorithm, the newest page is replaced by the requested page. Usually, this is done
through a stack, where we maintain a stack of pages currently in the memory with the newest
page being at the top. Whenever a page fault occurs, the page at the top of the stack is
replaced.
 Random Page Replacement
As the name suggests, this algorithm chooses any random page in memory to be replaced by the
requested page. It can behave like any of the other algorithms, depending on which page
happens to be chosen for replacement.

Allocation of Frames
The main memory of the system is divided into frames. The OS has to allocate a sufficient
number of frames for each process using various algorithms. The five major ways to allocate
frames are as follows:
 Proportional frame allocation
The proportional frame allocation algorithm allocates frames to each process in proportion
to the size it needs for execution, relative to the total number of frames the memory has (a
short sketch of the formula follows this list).

The only disadvantage of this algorithm is that it does not allocate frames based on
priority. This situation is solved by priority frame allocation.
 Priority frame allocation
Priority frame allocation allocates frames based on the priority of the processes and the
number of frames they require.
If a process has high priority and needs more frames, it is allocated that many frames;
lower-priority processes are allocated frames after it.
 Global replacement allocation
When a page fault occurs in the operating system, global replacement allocation takes care
of it: a process with lower priority can give up frames to a process with higher priority so
that the higher-priority process avoids page faults.
 Local replacement allocation
In local replacement allocation, a faulting process selects a replacement frame only from
its own set of allocated frames.
It therefore doesn't influence the behavior of other processes in the way global replacement
allocation does.
 Equal frame allocation
In equal frame allocation, the available frames are divided equally among the processes in
the operating system.
The only disadvantage of equal frame allocation is that a process may require more frames
than its equal share for execution, while there are only a set number of frames.
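
A short sketch of the proportional formula mentioned above, allocation_i = (size_i / total
size) × total frames, with equal allocation shown for contrast (process sizes and frame
count are hypothetical):

def proportional_allocation(process_sizes, total_frames):
    """Allocate frames to each process in proportion to its size."""
    total_size = sum(process_sizes.values())
    return {name: size * total_frames // total_size
            for name, size in process_sizes.items()}

def equal_allocation(process_names, total_frames):
    """Divide the available frames equally among the processes."""
    share = total_frames // len(process_names)
    return {name: share for name in process_names}

sizes = {"P1": 20, "P2": 16, "P3": 12}       # hypothetical process sizes in pages
print(proportional_allocation(sizes, total_frames=12))   # {'P1': 5, 'P2': 4, 'P3': 3}
print(equal_allocation(sizes, total_frames=12))          # {'P1': 4, 'P2': 4, 'P3': 4}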

Thrashing
A process is said to be thrashing if the CPU spends more time serving page faults than
executing the pages. This leads to low CPU utilization and the Operating System in return
tries to increase the degree of multiprogramming.

The above definition might be hard to understand in one go so let’s try to understand it with
an example. We know each process requires a minimum number of pages to be present in
the main memory at any point in time for its execution.
For instance, consider 3 processes- P1, P2, and P3.
P1 has 20 pages and needs a minimum of 5 pages to execute properly with demand paging; P2
has 16 pages and requires 4 pages; P3 has 12 pages and requires 4 pages.
Let's say the main memory can hold only 12 frames and the number of frames allotted to each
process is based on its size. So, P1 gets 5 frames to store 5 pages, P2 gets 4, and P3 gets 3.
Main memory frame allocation:

P1 P1 P1 P1 P1 | P2 P2 P2 P2 | P3 P3 P3
Since P3 doesn't get the necessary number of frames to run, it raises a page fault for the
page it is missing.
Assuming we use the global replacement algorithm (discussed above), one of the frames held
by P1 gets replaced, and the following is the new configuration of main memory:

P1 P1 P1 P1 | P2 P2 P2 P2 | P3 P3 P3 P3
With the current state of memory, P1 now raises a page fault and takes frames from other
processes. These victim processes then also start to page fault and take frames from still
other processes, and this continues on and on. This high paging activity is thrashing.
