Unit 3 Memory Management OS Notes

Unit 3 covers memory management, including concepts such as logical vs. physical address space, swapping, contiguous allocation, paging, and segmentation. It discusses memory management techniques, the role of the Memory Management Unit (MMU), and the implications of demand paging and thrashing in virtual memory systems. The unit also highlights the advantages and disadvantages of various memory allocation strategies, including fixed and variable partition schemes.


UNIT -3

SYLLABUS
UNIT-III: Memory Management

Logical versus Physical address space, Swapping, Contiguous allocation, Paging, Segmentation.

Virtual Memory: Demand paging, Performance of demand paging, Page replacement, Page replacement
algorithms, Thrashing.

MEMORY MANAGEMENT INTRODUCTION:


Memory management is concerned with managing primary memory. Memory consists of an array of bytes or words, each with its own address. The CPU fetches instructions from memory based on the value of the program counter.

Functions of memory management:


• Keeping track of the status of each memory location.
• Determining the allocation policy.
• Applying the memory allocation technique.
• Applying the de-allocation technique.

Address Binding:
Programs are stored on secondary storage disks as binary executable files. When a program is to be executed, it is brought into main memory and placed within a process. The collection of processes on the disk waiting to enter main memory forms the input queue. One of the processes to be executed is fetched from this queue and placed in main memory. During execution the process fetches instructions and data from main memory, and after it terminates its memory space is reclaimed. During execution the process goes through different steps, and in each step the address is represented in a different way: in the source program the address is symbolic; the compiler converts symbolic addresses to re-locatable addresses; the loader converts re-locatable addresses to absolute addresses. Binding of instructions and data can be done at any step along the way:
• Compile time: If we know at compile time where the process will reside in memory, absolute code can be generated. If the starting address changes, the code must be re-compiled.
• Load time: If the compiler does not know where the process will reside in memory, it generates re-locatable code; binding is delayed until load time.
• Execution time: If the process can be moved during its execution from one memory segment to another, binding is delayed until run time. Special hardware is used for this. Most general-purpose operating systems use this method.

Logical versus physical address:


The address generated by the CPU is called the logical address or virtual address. The address seen by the memory unit, i.e., the one loaded into the memory address register, is called the physical address. Compile-time and load-time address binding methods generate identical logical and physical addresses. Execution-time address binding generates differing logical and physical addresses. The set of all logical addresses generated by a program is its logical address space; the set of physical addresses corresponding to these logical addresses is the physical address space. The run-time mapping of virtual addresses to physical addresses is done by a hardware device called the memory management unit (MMU). The base register is also called the relocation register. The value of the relocation register is added to every address generated by the user process at the time it is sent to memory.
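The relocation-register scheme above can be sketched as a simple function. The base value of 14000 is an assumed example, not a figure from these notes:

```python
# Sketch of MMU dynamic relocation. The base value is an assumption
# chosen only for illustration.
RELOCATION_REGISTER = 14000  # base address where the process was loaded

def to_physical(logical_address):
    """Add the relocation (base) register to every CPU-generated address."""
    return RELOCATION_REGISTER + logical_address

print(to_physical(0))    # start of the process -> 14000
print(to_physical(346))  # logical address 346 -> physical 14346
```

The user process only ever deals in logical addresses (0, 346, ...); the hardware adds the base transparently on every memory reference.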
Comparison:
• Logical address: generated by the CPU; the user program deals only with logical addresses.
• Physical address: seen by the memory unit; the user program never sees the actual physical address.

Swapping:
Swapping is a method in which a process is swapped temporarily from main memory to a backing store, and later brought back into memory to continue execution.

The backing store is a hard disk or some other secondary storage device that must be big enough to accommodate copies of all memory images for all users, and must offer direct access to these memory images.
Benefits of Swapping:

• It offers a higher degree of multiprogramming.
• It allows dynamic relocation. For example, if address binding is done at execution time, processes can be swapped back into different locations; with compile-time or load-time binding, processes must be swapped back to the same location.
• It helps achieve better utilization of memory.
• It minimizes wastage of CPU time, so it can be applied to a priority-based scheduling method to improve its performance.

Memory Management Techniques

Memory management techniques are methods used by an operating system to efficiently allocate, utilize, and manage memory resources for processes. These techniques ensure smooth execution of programs and optimal use of system memory.
Different memory management techniques are:

What is Contiguous Memory Management?


Contiguous memory allocation is a memory allocation strategy. As the name implies, we utilize
this technique to assign contiguous blocks of memory to each task. Thus, whenever a process
asks to access the main memory, we allocate a continuous segment from the empty region to the
process based on its size. In this technique, memory is allotted in a continuous way to the
processes. Contiguous Memory Management has two types:
• Fixed (or Static) Partitioning
• Variable (or Dynamic) Partitioning
Contiguous Memory Management Techniques

Below are two Contiguous Memory Management Techniques.


1. Fixed Partition Scheme

In the fixed partition scheme, memory is divided into a fixed number of partitions, and each partition accommodates exactly one process. The degree of multiprogramming is restricted by the number of partitions in memory, and the maximum size of a process is restricted by the maximum size of a partition. Every partition is associated with limit registers.
• Limit Registers: each partition has two limits:
• Lower Limit: starting address of the partition.
• Upper Limit: ending address of the partition.
Internal fragmentation occurs in the fixed partition scheme. To overcome this problem, the variable partition scheme is used instead.
Disadvantages of Fixed Partition Scheme
• Maximum process size <= maximum partition size.
• The degree of multiprogramming is limited by the number of partitions.
• Internal fragmentation, discussed above, is present.
• If a process of, say, 19 KB arrives and the available free space is not contiguous, we are unable to allocate the space.

2. Variable Partition Scheme


In the variable partition scheme, memory is initially a single contiguous free block. Whenever a request from a process arrives, a partition of the requested size is carved out of memory. If smaller processes keep arriving, larger free blocks are split into smaller partitions.
• Initially, memory is one contiguous free block.
• Memory is divided into partitions according to process size, so partition sizes vary.
• One partition is allocated to each active process.
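A placement policy decides which free block (hole) receives an arriving process. First fit is one common policy; the notes do not name a particular one, so the sketch below, with assumed hole sizes, is purely illustrative. It also demonstrates the external fragmentation discussed next:

```python
# First-fit allocation over a variable-partition free list (an illustrative
# sketch; hole sizes are assumed values).
def first_fit(holes, request):
    """holes: list of (start, size) free blocks, mutated in place.
    Returns the start address of the allocation, or None on failure."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            if size == request:
                holes.pop(i)  # hole consumed exactly
            else:
                holes[i] = (start + request, size - request)
            return start
    return None  # no single hole is big enough

holes = [(0, 10), (50, 8)]   # two non-adjacent free blocks, 18 KB free in total
print(first_fit(holes, 8))   # carved from the front of the first hole -> 0
# External fragmentation: 10 KB remain free in total, but only in pieces of
# 2 KB and 8 KB, so a 9 KB request cannot be satisfied.
print(first_fit(holes, 9))   # -> None
```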

External fragmentation occurs in the variable partition scheme. To overcome this problem, the compaction technique or non-contiguous memory management techniques are used.
Solution of External Fragmentation
1. Compaction
Moving all processes toward the top or the bottom of memory so that the free memory forms a single contiguous block is called compaction. Compaction is expensive to implement because it interrupts all the running processes in memory.
Disadvantages of Compaction
• Page faults can occur.
• It consumes CPU time (overhead).
2. Non-contiguous memory allocation
1. Physical address space: Main memory (physical memory) is divided into blocks of the same size called frames.
2. Logical address space: Logical memory is divided into blocks of the same size called pages. The page size is defined by the hardware and equals the frame size; a process's pages are stored in main memory in non-contiguous frames during execution.
Advantages of Variable Partition Scheme
• Partition size = process size.
• There is no internal fragmentation (the drawback of the fixed partition scheme).
• The degree of multiprogramming varies and is directly proportional to the number of processes.
Disadvantages of Variable Partition Scheme
• External fragmentation is still there.
Advantages of Contiguous Memory Management
• It is simple to track how many memory blocks are still available for use, which determines how many more processes can be allocated memory.
• Since a complete file can be read from the disk in a single operation, contiguous allocation offers good read performance.
• Contiguous allocation is simple to set up and works well.
Disadvantages of Contiguous Memory Management
• Fragmentation becomes a problem as files are deleted and new files are written to the disk.
• To select an appropriately sized hole when creating a new file, the system needs to know the file's final size.
• The extra space left in the holes must be compacted or reused once the disk fills up.
Non-Contiguous Memory Allocation
Non-contiguous allocation, also known as dynamic or linked allocation, is a memory allocation
technique used in operating systems to allocate memory to processes that do not require a
contiguous block of memory. In this technique, each process is allocated a series of non-
contiguous blocks of memory that can be located anywhere in the physical memory.

Non-contiguous allocation involves the use of pointers to link the non-contiguous memory
blocks allocated to a process. These pointers are used by the operating system to keep track of
the memory blocks allocated to the process and to locate them during the execution of the
process.

Fundamental Approaches of Implementing Non-Contiguous Memory Allocation

There are two fundamental approaches to implementing non-contiguous memory allocation: paging and segmentation, the two ways that allow a process's physical address space to be non-contiguous. Non-contiguous allocation has the advantage of reducing memory wastage, but it increases overhead because time is consumed in address translation, which slows execution.
• Paging
• Segmentation

Paging

Paging is a storage mechanism used in OS to retrieve processes from secondary storage to the
main memory as pages. The primary concept behind paging is to break each process into
individual pages. Thus the primary memory would also be separated into frames.

One page of the process is stored in one of the memory frames. Pages can be stored in scattered memory locations; there is no need to find contiguous frames or holes. Process pages are usually brought into main memory only when they are needed; otherwise they reside in secondary storage.

The frame size may vary from one OS to another, but within a system every frame is the same size. Since pages are mapped onto frames, the page size must equal the frame size.
Example
Assuming that the main memory is 16 KB and the frame size is 1 KB, the main memory will be
partitioned into a collection of 16 1 KB frames. P1, P2, P3, and P4 are the four processes in the
system, each of which is 4 KB in size. Each process is separated into 1 KB pages, allowing one
page to be saved in a single frame.

Because all of the frames are initially empty, the pages of the processes are stored contiguously in them.

Consider the case when P2 and P4 are shifted to the waiting state after a period of time. Eight
frames are now empty, allowing other pages to be loaded in their stead. Inside the ready queue is
the process P5, which is 8 KB (8 pages) in size.

Given that we have 8 non-contiguous frames available in memory, paging allows us to store the process in many locations. As a result, we can load the pages of P5 into the frames freed by P2 and P4.
Memory Management Unit
The Memory Management Unit (MMU) is responsible for converting logical addresses to
physical addresses. The physical address refers to the actual address of a frame in which each
page will be stored, whereas the logical address refers to the address that is generated by the
CPU for each page.

When the CPU accesses a page using its logical address, the OS must first collect the physical
address in order to access that page physically. There are two elements to the logical address:

• Page number
• Offset
The OS’s memory management unit must convert the page numbers to the frame numbers.

Examples
Let’s say the CPU requests the 10th word of the 4th page of process P3 in the example above. Because page number 4 of process P3 is stored at frame number 9, the physical address is returned as the 10th word of the 9th frame.

Let’s consider another example:

• If the physical address is 12 bits, then the physical address space would be 4 K words
• If the logical address is 13 bits, then the logical address space would be 8 K words
• If the page size is equal to the frame size, which is equal to 1 K words (assumption),
Then:
The address generated by the CPU is divided into the following:

• Page offset (d): the number of bits needed to represent a particular word within a page, i.e., the page size of the logical address space.
• Page number (p): the number of bits needed to represent the pages of the logical address space.
The Physical Address is divided into the following:

• Frame offset (d): the number of bits needed to represent a particular word within a frame, i.e., the frame size of the physical address space.
• Frame number (f): the number of bits needed to represent a frame of the physical address space.
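The p/d split and the frame lookup above can be sketched in a few lines. The 1 K-word page size follows the example; the page-table contents are assumed values for illustration:

```python
# Splitting a logical address into page number and offset, then mapping the
# page to a frame via the page table (table contents are assumed values).
PAGE_SIZE = 1024  # 1 K words -> a 10-bit offset (d)

page_table = {0: 5, 1: 9, 2: 3}  # page number p -> frame number f

def translate(logical_address):
    p = logical_address // PAGE_SIZE  # page number: the high-order bits
    d = logical_address % PAGE_SIZE   # page offset: the low-order 10 bits
    f = page_table[p]                 # lookup performed by the MMU
    return f * PAGE_SIZE + d          # physical address = frame base + offset

print(translate(1034))  # page 1, offset 10 -> frame 9, offset 10 = 9226
```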
Dedicated registers can be used to implement the page table in hardware, but this is practical only if the page table is small. If the page table has a large number of entries, we can employ a TLB (translation look-aside buffer), a special, small, fast look-up hardware cache.

• The TLB is a high-speed, associative memory.
• TLB entries are made up of two parts: a tag and a value.
• When this memory is accessed, an item is compared with all tags simultaneously.
• If the item is found, its associated value is returned.
If m is the main memory access time and the page table is kept in main memory, then the effective access time = m (to access the page table) + m (to access the page itself) = 2m.
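Adding a TLB improves on the 2m figure: a hit needs only one memory access after the TLB lookup. This weighted-average formula is a standard refinement not spelled out in these notes, and the hit ratio and timings below are assumed example values:

```python
# Effective access time with a TLB (standard formula; the hit ratio and
# timings are assumed values chosen for illustration).
def eat_with_tlb(hit_ratio, tlb_time, m):
    hit = hit_ratio * (tlb_time + m)             # TLB hit: one memory access
    miss = (1 - hit_ratio) * (tlb_time + 2 * m)  # miss: page table, then page
    return hit + miss

# 80% hit ratio, 20 ns TLB lookup, 100 ns memory access:
print(eat_with_tlb(0.80, 20, 100))  # 0.8*120 + 0.2*220 ≈ 140.0 ns
```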

Segmentation
A process is divided into segments: the chunks that a program is divided into, which are not necessarily all of the same size. Segmentation gives the user’s view of the process, which paging does not provide; this user’s view is mapped onto physical memory.

Types of Segmentation in Operating Systems


• Virtual Memory Segmentation: Each process is divided into a number of segments,
but the segmentation is not done all at once. This segmentation may or may not take
place at the run time of the program.
• Simple Segmentation: Each process is divided into a number of segments, all of which
are loaded into memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in
segmentation. A table stores the information about all such segments and is called Segment
Table.

What is Segment Table?


It maps a two-dimensional logical address into a one-dimensional physical address. Each table entry has:
• Base Address: the starting physical address where the segment resides in memory.
• Segment Limit: it specifies the length of the segment.
Translation of a Two-dimensional Logical Address to a One-dimensional Physical Address

The address generated by the CPU is divided into:


• Segment number (s): Number of bits required to represent the segment.
• Segment offset (d): Number of bits required to represent the position of data within a
segment.
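The (s, d) translation with a base/limit check can be sketched as below; the segment-table contents are assumed values for illustration:

```python
# Translating a two-dimensional (segment, offset) address through a segment
# table (table contents are assumed values).
segment_table = {0: (1400, 1000), 1: (6300, 400)}  # s -> (base address, limit)

def seg_translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:
        # Offset beyond the segment's length: the hardware traps to the OS.
        raise MemoryError("segmentation violation: offset out of range")
    return base + d  # one-dimensional physical address

print(seg_translate(1, 53))  # 6300 + 53 = 6353
```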
Demand paging
Demand paging is a technique used in virtual memory systems where pages are brought into main memory only when requested or needed by the CPU. In demand paging, the operating system loads only the necessary pages of a program into memory at runtime, instead of loading the entire program into memory at the start. A page fault occurs when the program needs to access a page that is not currently in memory.
The operating system then loads the required pages from the disk into memory and updates the
page tables accordingly. This process is transparent to the running program and it continues to
run as if the page had always been in memory.

What is Page Fault?

The term “page miss” or “page fault” refers to a situation where a referenced page is not found
in the main memory.
When a program tries to access a page (a fixed-size block of memory) that isn’t currently loaded in physical memory (RAM), an exception known as a page fault happens. To handle the fault, the operating system must bring the required page into memory from secondary storage (such as a hard drive) before allowing the program to access it.
In modern operating systems, page faults are a common component of virtual memory
management. By enabling programs to operate with more data than can fit in physical memory
at once, they enable the efficient use of physical memory. The operating system is responsible
for coordinating the transfer of data between physical memory and secondary storage as needed.

What is Thrashing?

Thrashing describes a state in which excessive paging activity takes place, especially in operating systems that use virtual memory, severely impairing system performance. Thrashing occurs when high memory demand and low physical memory capacity cause the system to spend a large amount of time moving pages between main memory (RAM) and secondary storage, typically a hard disk.
It is caused by insufficient physical memory, overloading, and poor memory management.
The operating system may use a variety of techniques to lessen thrashing, including lowering the number of running processes, adjusting paging parameters, and improving memory allocation algorithms. Increasing the system’s physical memory (RAM) capacity can also lessen thrashing by lowering the frequency of page swaps between RAM and the disk.

Performance of Demand Paging in Operating Systems

Demand paging is a memory management scheme where pages of a process are not loaded into memory until they are needed (the first access to an absent page causes a "page fault"). This allows efficient use of memory and faster program loading, but performance depends on several key factors.

1. Page Fault Rate

• Definition: The frequency with which page faults occur.


• Range: 0 (no faults) to 1 (every reference causes a fault).
• Impact:
o Low page fault rate: Better performance.
o High page fault rate: System slows down due to frequent disk access.

2. Page Fault Overhead

• Includes:
o Trap to the OS.
o Saving the state of the process.
o Finding the page on disk.
o Reading the page into memory.
o Updating the page table.
o Restarting the instruction.
• Impact: More time-consuming than a regular memory access. High overhead can reduce system
performance.

3. Effective Access Time (EAT)

EAT = (1 − p) × ma + p × page_fault_time

Where:

• p = page fault rate
• ma = memory access time (in nanoseconds)
• page_fault_time = time to handle a page fault (can be in milliseconds)

Example: If ma = 200 ns, page_fault_time = 8 ms = 8,000,000 ns, and p = 0.001:

EAT = (1 − 0.001) × 200 + 0.001 × 8,000,000 = 199.8 + 8,000 = 8,199.8 ns

→ Even a small page fault rate can drastically increase EAT.
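The worked example above is easy to reproduce:

```python
# Effective access time under demand paging, reproducing the worked example.
def effective_access_time(p, ma, page_fault_time):
    return (1 - p) * ma + p * page_fault_time

eat = effective_access_time(p=0.001, ma=200, page_fault_time=8_000_000)
print(eat)  # ≈ 8199.8 ns, about 41x slower than the 200 ns memory access
```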

4. Thrashing

• Definition: When the system spends more time handling page faults than executing processes.
• Cause: Too many processes competing for limited memory.
• Effect: Severe degradation in performance.
• Solution: Reduce multiprogramming, use better page replacement algorithms.

5. Locality of Reference

• Temporal locality: Recently accessed pages are likely to be accessed again.


• Spatial locality: Pages near the currently accessed one are likely to be accessed soon.
• Demand paging performs better when processes exhibit strong locality.

Conclusion

The performance of demand paging is highly dependent on the page fault rate, handling time, and how
well the system manages memory. It provides benefits like reduced memory usage and faster load times
but can cause performance issues if not managed properly.
Page replacement algorithms
Page replacement algorithms are techniques used in operating systems to manage
memory efficiently when the physical memory is full. When a new page needs to be loaded
into physical memory, and there is no free space, these algorithms determine which existing
page to replace.
If no page frame is free, the virtual memory manager performs a page replacement operation to
replace one of the pages existing in memory with the page whose reference caused the page fault.
It is performed as follows: The virtual memory manager uses a page replacement algorithm to
select one of the pages currently in memory for replacement, accesses the page table entry of the
selected page to mark it as “not present” in memory, and initiates a page-out operation for it if
the modified bit of its page table entry indicates that it is a dirty page.

Common Page Replacement Techniques


• First In First Out (FIFO)
• Optimal Page replacement
• Least Recently Used (LRU)
• Most Recently Used (MRU)

First In First Out (FIFO)

This is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.

Example 1: Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3-page frames. Find the number
of page faults using FIFO Page Replacement Algorithm.

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory —> 0 Page Faults. Then 5 comes; it is not in memory, so it replaces the oldest page, i.e., 1 —> 1 Page Fault. 6 comes; it is also not in memory, so it replaces the oldest page, i.e., 3 —> 1 Page Fault. Finally, when 3 comes it is not in memory, so it replaces 0 —> 1 Page Fault. Total: 6 page faults.
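The FIFO trace above can be simulated directly by keeping the resident pages in a queue and evicting from the front:

```python
from collections import deque

# FIFO page replacement: on a fault with full frames, evict the page that
# entered memory earliest (front of the queue).
def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue  # hit: FIFO order is unaffected by accesses
        faults += 1
        if len(frames) == n_frames:
            frames.discard(queue.popleft())  # evict the oldest page
        frames.add(page)
        queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6 page faults
```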

Optimal Page Replacement

In this algorithm, pages are replaced which would not be used for the longest duration of time
in the future.

Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4-page frame.


Find number of page fault using Optimal Page Replacement Algorithm.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Fault. When 3 comes it takes the place of 7, because 7 is not used for the longest duration of time in the future —> 1 Page Fault. 0 is already there —> 0 Page Fault. 4 takes the place of 1 —> 1 Page Fault.
For the remaining page references —> 0 Page Faults, because those pages are already in memory. Total: 6 page faults. Optimal page replacement is perfect but not possible in practice, as the operating system cannot know future requests. Its use is to set a benchmark against which other replacement algorithms can be analyzed.
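The optimal policy can be simulated offline, since the whole reference string is known in advance:

```python
# Optimal (Belady's) page replacement: on a fault with full frames, evict
# the resident page whose next use lies farthest in the future (or that is
# never used again).
def optimal_faults(refs, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue  # hit
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
            continue
        def next_use(q):
            rest = refs[i + 1:]
            return rest.index(q) if q in rest else float("inf")
        frames.remove(max(frames, key=next_use))  # farthest next use
        frames.append(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```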
Least Recently Used

In this algorithm, the page that has been least recently used is replaced.


Example Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4-page
frames. Find number of page faults using LRU Page Replacement Algorithm.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Fault. When 3 comes it takes the place of 7, because 7 is the least recently used —> 1 Page Fault.
0 is already in memory —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the remaining page references —> 0 Page Faults, because those pages are already in memory. Total: 6 page faults.
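LRU can be simulated with an ordered dictionary that keeps pages in recency order:

```python
from collections import OrderedDict

# LRU page replacement: on a fault with full frames, evict the page that
# has gone the longest without being referenced.
def lru_faults(refs, n_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)  # hit: mark as most recently used
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popitem(last=False)  # evict the least recently used
        frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```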

Most Recently Used (MRU)

In this algorithm, the page that has been used most recently is replaced. Belady’s anomaly can occur in this algorithm.
Example 4: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4-page
frames. Find number of page faults using MRU Page Replacement Algorithm.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Fault.
When 3 comes it takes the place of 0, because 0 is the most recently used —> 1 Page Fault.
When 0 comes it takes the place of 3 —> 1 Page Fault.
When 4 comes it takes the place of 0 —> 1 Page Fault.
2 is already in memory —> 0 Page Fault.
When 3 comes it takes the place of 2 —> 1 Page Fault.
When 0 comes it takes the place of 3 —> 1 Page Fault.
When 3 comes it takes the place of 0 —> 1 Page Fault.
When 2 comes it takes the place of 3 —> 1 Page Fault.
When 3 comes it takes the place of 2 —> 1 Page Fault.
Total: 12 page faults.
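The MRU trace can be simulated by tracking the last reference time of each resident page and evicting the newest:

```python
# MRU page replacement: on a fault with full frames, evict the page that
# was referenced most recently.
def mru_faults(refs, n_frames):
    frames, last_used, faults = set(), {}, 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                mru = max(frames, key=lambda q: last_used[q])
                frames.discard(mru)  # evict the most recently used page
            frames.add(page)
        last_used[page] = i  # hit or miss, this page is now the MRU
    return faults

print(mru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 12
```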

Conclusion

In summary, page replacement algorithms are essential for managing a computer’s memory
efficiently. They help ensure that the system runs smoothly by reducing the number of times it
needs to fetch data from slower storage. Different algorithms, like FIFO and LRU, have their
own pros and cons, and choosing the right one depends on the specific needs of the system.
Understanding these algorithms can help improve system performance and make our computers
faster and more efficient.

END OF UNIT 3
