Unit 3 Memory Management OS Notes
SYLLABUS
UNIT-III: Memory Management
Logical versus Physical address space, Swapping, Contiguous allocation, Paging, Segmentation.
Virtual Memory: Demand paging, Performance of demand paging, Page replacement, Page replacement
algorithms, Thrashing.
• De-allocation technique.
Address Binding:
Programs are stored on secondary storage disks as binary executable files. When a program is to
be executed it is brought into main memory and placed within a process. The collection of
processes on the disk waiting to enter main memory forms the input queue. One of the processes
to be executed is fetched from the queue and placed in main memory. During execution the
process fetches instructions and data from main memory; after the process terminates, its
memory space is reclaimed. As the process goes through these steps, its addresses are
represented in different ways: in the source program the address is symbolic, the compiler
converts the symbolic address to a re-locatable address, and the loader converts the re-locatable
address to an absolute address. Binding of instructions and data can be done at any step along the way:
• Compile time: If it is known at compile time where the process will reside in memory, absolute
code can be generated. If the starting address later changes, the code must be re-compiled.
• Load time: If the compiler does not know where the process will reside in memory, it generates
re-locatable code, and binding is delayed until load time.
• Execution time: If the process can be moved during its execution from one memory segment to
another, binding must be delayed until run time. Special hardware is used for this. Most
general-purpose operating systems use this method.
Logical versus physical address:
A logical (virtual) address is the address generated by the CPU, while a physical address is the
address actually seen by the memory unit. The set of all logical addresses forms the logical
address space, the set of corresponding physical addresses forms the physical address space, and
the run-time mapping between the two is done by the Memory Management Unit (MMU).
Swapping:
Swapping is a method in which a process is swapped temporarily from the main
memory to the backing store. It is later brought back into memory to continue
execution.
Backing store is a hard disk or some other secondary storage device that should be big enough in
order to accommodate copies of all memory images for all users. It is also capable of offering
direct access to these memory images.
Benefits of Swapping:
Swapping allows a higher degree of multiprogramming and better utilization of main memory,
since processes that are not currently running can be moved out to the backing store.
Non-contiguous Allocation:
Non-contiguous allocation involves the use of pointers to link the non-contiguous memory
blocks allocated to a process. These pointers are used by the operating system to keep track of
the memory blocks allocated to the process and to locate them during the execution of the
process.
Paging
Paging is a storage mechanism used in OS to retrieve processes from secondary storage to the
main memory as pages. The primary concept behind paging is to break each process into
individual pages. Thus the primary memory would also be separated into frames.
One page of the process is stored in one of the memory frames. The pages can be
stored in different, non-contiguous memory locations, so there is no need to find a contiguous
block of frames. Process pages are usually brought into the main memory only when they are
needed; otherwise, they are stored in the secondary storage.
The frame size may vary from one OS to another, but within a system every frame must be of the
same size. Since the pages in paging are mapped onto the frames, the page size must be equal to
the frame size.
Example
Assuming that the main memory is 16 KB and the frame size is 1 KB, the main memory will be
partitioned into a collection of 16 1 KB frames. P1, P2, P3, and P4 are the four processes in the
system, each of which is 4 KB in size. Each process is separated into 1 KB pages, allowing one
page to be saved in a single frame.
Because all of the frames are initially empty, the pages of the processes will be stored in a
continuous manner. The graphic below depicts frames, pages, and the mapping between them.
Consider the case when P2 and P4 are shifted to the waiting state after a period of time. Eight
frames are now empty, allowing other pages to be loaded in their stead. Inside the ready queue is
the process P5, which is 8 KB (8 pages) in size.
Given that we have 8 noncontiguous frames available in memory, paging allows us to store the
process in many locations. As a result, we can load the pages of process P5 into the frames freed
by P2 and P4.
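The allocation in this example can be sketched in code. This is a toy model (the frame count, process names, and page counts simply mirror the example above; a real OS tracks frames very differently):

```python
# 16 frames of 1 KB; P1..P4 each take 4 pages; freeing P2 and P4 leaves
# 8 noncontiguous frames, which is enough for the 8 pages of P5.

NUM_FRAMES = 16
frames = [None] * NUM_FRAMES          # frames[i] = (process, page) or None
page_tables = {}                      # per-process page table: page -> frame

def allocate(process, num_pages):
    """Place each page of the process in any free frame (need not be contiguous)."""
    free = [i for i, f in enumerate(frames) if f is None]
    if len(free) < num_pages:
        raise MemoryError("not enough free frames")
    table = {}
    for page, frame in zip(range(num_pages), free):
        frames[frame] = (process, page)
        table[page] = frame
    page_tables[process] = table

def deallocate(process):
    for frame in page_tables.pop(process).values():
        frames[frame] = None

for p in ("P1", "P2", "P3", "P4"):
    allocate(p, 4)
deallocate("P2")                      # frames 4-7 become free
deallocate("P4")                      # frames 12-15 become free
allocate("P5", 8)                     # P5's pages land in two separated runs
print(sorted(page_tables["P5"].values()))   # [4, 5, 6, 7, 12, 13, 14, 15]
```

Note that P5's eight pages end up in two separated runs of frames, which is exactly what paging permits.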
Memory Management Unit
The Memory Management Unit (MMU) is responsible for converting logical addresses to
physical addresses. The physical address refers to the actual address of a frame in which each
page will be stored, whereas the logical address refers to the address that is generated by the
CPU for each page.
When the CPU accesses a page using its logical address, the OS must first collect the physical
address in order to access that page physically. There are two elements to the logical address:
• Page number
• Offset
The OS’s memory management unit must convert the page numbers to the frame numbers.
Examples
Let’s say the CPU requests the 10th word of the 4th page of process P3 in the image above.
Because page number 4 of process P3 is stored at frame number 9, the physical address will be
returned as the 10th word of the 9th frame.
• If the physical address is 12 bits, then the physical address space is 4 K words
• If the logical address is 13 bits, then the logical address space is 8 K words
• If the page size is equal to the frame size, which is equal to 1 K words (assumption),
then the page offset d is 10 bits (since 1 K = 2^10 words), the page number p is 13 − 10 = 3 bits,
and the frame number f is 12 − 10 = 2 bits.
The address generated by the CPU is divided into the following:
• Page offset(d): It refers to the number of bits necessary to represent a certain word on a
page, page size in Logical Address Space, or page word number or page offset.
• Page number(p): It is the number of bits needed to represent the pages in the Logical
Address Space or the page number.
The Physical Address is divided into the following:
• Frame offset(d): It refers to the number of bits necessary to represent a certain word in a
frame, or the Physical Address Space frame size, the word number of a frame, or the
frame offset.
• Frame number(f): It’s the number of bits needed to indicate a frame of the Physical
Address Space or a frame number.
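The split into page number and offset, and the recombination into frame number and offset, can be sketched as follows, assuming a 1 K-word page (10-bit offset) and a small hypothetical page table:

```python
# Logical-to-physical translation sketch: split the logical address into
# (page number p, offset d), look up the frame f, and recombine (f, d).

OFFSET_BITS = 10                       # 1 K words per page => d is 10 bits

def translate(logical_addr, page_table):
    p = logical_addr >> OFFSET_BITS                # page number: high-order bits
    d = logical_addr & ((1 << OFFSET_BITS) - 1)    # offset: low-order bits
    f = page_table[p]                              # frame number from the page table
    return (f << OFFSET_BITS) | d                  # physical address = frame || offset

page_table = {0: 5, 1: 2, 2: 7}        # hypothetical mapping: page -> frame
logical = (1 << OFFSET_BITS) + 37      # word 37 of page 1
print(translate(logical, page_table))  # frame 2, offset 37 -> 2*1024 + 37 = 2085
```

The offset passes through unchanged; only the page number is rewritten to a frame number.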
Dedicated registers can be used to implement the page table in hardware. However, using a
register for the page table is only useful if the page table is tiny. We can employ TLB (translation
look-aside buffer), a particular, tiny, fast look-up hardware cache if the page table has a
significant number of entries.
Without a TLB, each reference requires two memory accesses: if m is the memory access time,
the effective access time = m (to access the page table) + m (to access the word itself) = 2m.
With a TLB that has hit ratio h and lookup time t, the effective access time
= h(t + m) + (1 − h)(t + 2m).
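The effect of a TLB on access time can be sketched with the standard textbook effective-access-time formula. The 80% hit ratio, 20 ns TLB lookup, and 100 ns memory access below are assumed illustrative numbers, not figures from these notes:

```python
# Effective access time with a TLB: a hit costs one TLB lookup plus one
# memory access; a miss additionally costs the page-table access in memory.

def effective_access_time(hit_ratio, tlb_time, mem_time):
    hit = tlb_time + mem_time              # TLB hit: no page-table access needed
    miss = tlb_time + 2 * mem_time         # TLB miss: page table + the word itself
    return hit_ratio * hit + (1 - hit_ratio) * miss

# Assumed numbers: 80% hit ratio, 20 ns TLB lookup, 100 ns memory access.
print(effective_access_time(0.80, 20, 100))   # 0.8*120 + 0.2*220 ≈ 140 ns
```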
Segmentation
A process is divided into segments. The chunks that a program is divided into, which are not
necessarily all of the same size, are called segments. Segmentation gives the user’s view of the
process, which paging does not provide. Here the user’s view is mapped to physical memory.
The term “page miss” or “page fault” refers to a situation where a referenced page is not found
in the main memory.
When a program tries to access a page (a fixed-size block of memory) that is not currently loaded
in physical memory (RAM), an exception known as a page fault occurs. To handle the page fault,
the operating system must bring the required page into memory from secondary storage (such as
a hard drive) before allowing the program to access it.
In modern operating systems, page faults are a common component of virtual memory
management. By enabling programs to operate with more data than can fit in physical memory
at once, they enable the efficient use of physical memory. The operating system is responsible
for coordinating the transfer of data between physical memory and secondary storage as needed.
What is Thrashing?
Thrashing is the term used to describe a state in which excessive paging activity takes place in
computer systems, especially in operating systems that use virtual memory, severely impairing
system performance. Thrashing occurs when a system’s high memory demand and low physical
memory capacity cause it to spend a large amount of time swapping pages between main memory
(RAM) and secondary storage, typically a hard disk.
It is caused by insufficient physical memory, overloading, and poor memory management.
The operating system may use a variety of techniques to lessen thrashing, including lowering
the number of running processes, adjusting paging parameters, and improving memory
allocation algorithms. Increasing the system’s physical memory (RAM) capacity can also
lessen thrashing by lowering the frequency of page swaps between RAM and the disc.
Demand Paging
Demand Paging is a memory management scheme where pages of a process are not loaded into
memory until they are needed (i.e., a "page fault" occurs). This allows efficient use of memory and faster
program loading, but the performance depends on several key factors.
• Page fault service time includes:
o Trap to the OS.
o Saving the state of the process.
o Finding the page on disk.
o Reading the page into memory.
o Updating the page table.
o Restarting the instruction.
• Impact: More time-consuming than a regular memory access. High overhead can reduce system
performance.
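The overhead above is usually summarized by the standard effective-access-time formula for demand paging: EAT = (1 − p) × memory access time + p × page fault service time, where p is the page-fault rate. A sketch with assumed timings (200 ns memory access, 8 ms fault service — illustrative numbers only):

```python
# Performance of demand paging: even a tiny page-fault rate dominates the
# effective access time when servicing a fault takes milliseconds.

def demand_paging_eat(fault_rate, mem_ns, fault_service_ns):
    return (1 - fault_rate) * mem_ns + fault_rate * fault_service_ns

print(demand_paging_eat(0.0,      200, 8_000_000))   # no faults: 200.0 ns
print(demand_paging_eat(0.000001, 200, 8_000_000))   # 1-in-a-million: ≈ 208 ns
```

A fault rate of one in a million already slows every access by about 4% under these assumptions, which is why keeping the fault rate low matters so much.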
4. Thrashing
• Definition: When the system spends more time handling page faults than executing processes.
• Cause: Too many processes competing for limited memory.
• Effect: Severe degradation in performance.
• Solution: Reduce multiprogramming, use better page replacement algorithms.
5. Locality of Reference
• Definition: Programs tend to access a relatively small set of pages at any given time.
• Impact: Demand paging works well in practice because the pages a process needs next are
usually ones it has referenced recently.
Conclusion
The performance of demand paging is highly dependent on the page fault rate, handling time, and how
well the system manages memory. It provides benefits like reduced memory usage and faster load times
but can cause performance issues if not managed properly.
Page replacement algorithms
Page replacement algorithms are techniques used in operating systems to manage
memory efficiently when the physical memory is full. When a new page needs to be loaded
into physical memory, and there is no free space, these algorithms determine which existing
page to replace.
If no page frame is free, the virtual memory manager performs a page replacement operation to
replace one of the pages existing in memory with the page whose reference caused the page fault.
It is performed as follows: The virtual memory manager uses a page replacement algorithm to
select one of the pages currently in memory for replacement, accesses the page table entry of the
selected page to mark it as “not present” in memory, and initiates a page-out operation for it if
the modified bit of its page table entry indicates that it is a dirty page.
First In First Out (FIFO)
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps
track of all pages in memory in a queue, with the oldest page at the front. When a page needs to
be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3-page frames. Find the number
of page faults using FIFO Page Replacement Algorithm.
Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
Page Faults.
When 3 comes, it is already in memory so —> 0 Page Faults. Then 5 comes; it is not available
in memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault. 6 comes; it is also not
available in memory, so it replaces the oldest page, i.e. 3 —> 1 Page Fault. Finally, when 3
comes it is not available, so it replaces 0 —> 1 Page Fault. Total: 6 page faults.
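Example 1 can be checked with a short simulation (a sketch using a simple queue):

```python
from collections import deque

# FIFO page replacement: evict the page that has been in memory longest.
def fifo_faults(refs, num_frames):
    frames = deque()                   # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()       # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))   # 6 page faults
```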
Optimal Page Replacement
In this algorithm, the page that will not be used for the longest duration of time in the future is
replaced.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4-page
frames. Find the number of page faults using the Optimal Page Replacement Algorithm.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4
Page faults.
0 is already there so —> 0 Page fault. When 3 comes it takes the place of 7 because 7 is not
used for the longest duration of time in the future —> 1 Page fault. 0 is already there so —> 0
Page fault. 4 takes the place of 1 —> 1 Page Fault.
For the rest of the reference string —> 0 Page faults, because those pages are already in
memory. Total: 6 page faults. Optimal page replacement is perfect but not possible in practice,
as the operating system cannot know future requests. Its use is to set up a benchmark against
which other replacement algorithms can be analyzed.
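The walkthrough above can be checked with a sketch of the Optimal algorithm; the reference string below is the one from Example 4, which the walkthrough follows:

```python
# Optimal (Belady's) replacement: evict the page whose next use is farthest
# in the future (pages never referenced again are evicted first).

def optimal_faults(refs, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else len(future)
            frames.remove(max(frames, key=next_use))
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))   # 6 page faults
```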
Least Recently Used (LRU)
In this algorithm, the page that has not been used for the longest period of time is replaced.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4-page
frames. Find the number of page faults using the LRU Page Replacement Algorithm.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4
Page faults.
0 is already there so —> 0 Page fault. When 3 comes it takes the place of 7 because 7 is the
least recently used —> 1 Page fault.
0 is already in memory so —> 0 Page fault.
4 takes the place of 1 —> 1 Page Fault.
For the rest of the reference string —> 0 Page faults, because those pages are already in
memory. Total: 6 page faults.
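The LRU walkthrough can likewise be checked with a short simulation (a sketch using an ordered dict as the recency list; the reference string is the one from Example 4):

```python
from collections import OrderedDict

# LRU replacement: evict the page that has gone unused the longest.
def lru_faults(refs, num_frames):
    frames = OrderedDict()             # least recently used page first
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)   # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)   # evict the least recently used
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))   # 6 page faults
```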
Most Recently Used (MRU)
In this algorithm, the page that has been used most recently is replaced. Belady’s anomaly can
occur in this algorithm.
Example 4: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4-page
frames. Find number of page faults using MRU Page Replacement Algorithm.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4
Page faults.
0 is already there so —> 0 Page fault.
When 3 comes it takes the place of 0 because 0 is the most recently used —> 1 Page fault.
When 0 comes it takes the place of 3 —> 1 Page fault.
When 4 comes it takes the place of 0 —> 1 Page fault.
2 is already in memory so —> 0 Page fault.
When 3 comes it takes the place of 2 —> 1 Page fault.
When 0 comes it takes the place of 3 —> 1 Page fault.
When 3 comes it takes the place of 0 —> 1 Page fault.
When 2 comes it takes the place of 3 —> 1 Page fault.
When 3 comes it takes the place of 2 —> 1 Page fault.
Total: 12 page faults.
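Example 4 can be checked with a sketch of MRU replacement:

```python
# MRU replacement: evict the most recently used page in memory.
def mru_faults(refs, num_frames):
    frames = []
    mru = None                          # most recently used page in memory
    faults = 0
    for page in refs:
        if page in frames:
            mru = page
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(mru)      # evict the most recently used page
            frames.append(page)
            mru = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(mru_faults(refs, 4))   # 12 page faults
```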
Conclusion
In summary, page replacement algorithms are essential for managing a computer’s memory
efficiently. They help ensure that the system runs smoothly by reducing the number of times it
needs to fetch data from slower storage. Different algorithms, like FIFO and LRU, have their
own pros and cons, and choosing the right one depends on the specific needs of the system.
Understanding these algorithms can help improve system performance and make our computers
faster and more efficient.
END OF UNIT 3