Unit 5
Surbhi Dongaonkar
Contents
• O.S. and hardware interaction
• Swapping
• Contiguous memory management
• Paging
• Segmentation
• Virtual Memory Management
• Demand Paging
• Page replacement algorithms
• Allocation of frames
• Kernel memory management
Memory Management
• Memory Management is the process of controlling and
coordinating computer memory, assigning portions known as
blocks to various running programs to optimize the overall
performance of the system.
• It is the most important function of an operating system that
manages primary memory.
• It allows processes to move back and forth between the main
memory and the disk during execution.
• It helps OS to keep track of every memory location, irrespective of
whether it is allocated to some process or it remains free.
OS and Hardware Interaction
• For hardware functions such as input and output and memory
allocation, the operating system acts as an intermediary between
programs and the computer hardware.
• As the OS coordinates the activities of the processor, it uses RAM as a
temporary storage area for instructions and data the processor needs.
• The OS is therefore responsible for coordinating the space allocations
in RAM to ensure that there is enough space for the waiting
instructions and data.
• If there isn’t sufficient space in RAM for all the data and instructions,
the OS moves the least-needed data and instructions to temporary
storage on the hard drive.
Swapping
• Swapping is a memory management scheme in which any process can
be temporarily swapped from main memory to secondary memory so
that the main memory can be made available for other processes.
• It is used to improve main memory utilization.
• In secondary memory, the place where the swapped-out process is
stored is called swap space.
• Swap-out is a method of removing a process from RAM and adding it
to the hard disk.
• Swap-in is a method of removing a program from a hard disk and
putting it back into the main memory or RAM.
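The swap-out/swap-in operations above can be sketched with a toy model, assuming RAM and swap space are just dictionaries mapping process IDs to their contents (the process names are invented for illustration):

```python
# Minimal sketch of swapping: RAM and swap space modeled as dicts
# mapping process IDs to their (pretend) code and data.
ram = {"P1": "code+data of P1", "P2": "code+data of P2"}
swap_space = {}

def swap_out(pid):
    """Remove a process from RAM and add it to swap space on disk."""
    swap_space[pid] = ram.pop(pid)

def swap_in(pid):
    """Remove a process from swap space and put it back into RAM."""
    ram[pid] = swap_space.pop(pid)

swap_out("P1")        # P1 now lives in swap space, freeing RAM
swap_in("P1")         # P1 is back in RAM
print(sorted(ram))    # ['P1', 'P2']
```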
Advantages of Swapping
• It helps the CPU to manage multiple processes within a single main
memory.
• It helps to create and use virtual memory.
• Swapping allows the CPU to work on multiple processes concurrently,
so processes do not have to wait very long before they are
executed.
• It improves the main memory utilization.
Disadvantages of Swapping
• If the computer system loses power, the user may lose all information
related to the program in case of substantial swapping activity.
• If the swapping algorithm is not good, swapping can increase the
number of page faults and decrease the overall processing
performance.
Contiguous Memory Management
• In the Contiguous Memory Allocation, each process is contained in a
single contiguous section of memory.
• Memory allocation is achieved simply by dividing the memory into
partitions.
• The two important types of memory management are:
1.Fixed partitioning
2.Dynamic partitioning
Fixed Partitioning
• Fixed size partitioning is also called static partitioning.
• In fixed partitioning, the number of partitions is fixed.
• In fixed partitioning, the size of each partition may or may not be the same.
• In fixed partitioning, spanning is not allowed: the entire process must
be allocated to a single partition block, which also means a process
cannot be allocated only partially.
Limitations to Fixed Partitioning
1.Internal fragmentation
2.Limits the process size
3.Limits the degree of multi-programming
4.External fragmentation
Dynamic/Variable size partitioning
• Dynamic partitioning is a variable size partitioning.
• In dynamic partitioning, the memory is allocated at run-time based
on the requirement of processes.
• There is no internal fragmentation in dynamic partitioning.
• In dynamic partitioning, there is no limitation on the number of
processes.
• The only limitation of dynamic partitioning is that it suffers from
external Fragmentation.
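Dynamic partitioning and its external-fragmentation problem can be sketched as follows, assuming a simplified model where memory is a list of (start, size, owner) blocks and allocation uses first-fit (the process names and sizes are invented for illustration):

```python
# Simplified sketch of dynamic (variable-size) partitioning with first-fit.
# Memory is a list of (start, size, owner) blocks; owner is None for a hole.
MEM_SIZE = 100
memory = [(0, MEM_SIZE, None)]  # initially one big free block

def allocate(pid, size):
    """First-fit: place the process in the first hole large enough."""
    for i, (start, blk_size, owner) in enumerate(memory):
        if owner is None and blk_size >= size:
            pieces = [(start, size, pid)]
            if blk_size > size:  # split off the remaining hole
                pieces.append((start + size, blk_size - size, None))
            memory[i:i + 1] = pieces
            return True
    return False  # no single hole is big enough

def free(pid):
    """Release a process's partition, turning it back into a hole."""
    global memory
    memory = [(s, sz, None if o == pid else o) for (s, sz, o) in memory]

allocate("P1", 30)
allocate("P2", 40)
allocate("P3", 20)
free("P2")               # leaves a 40-unit hole between P1 and P3
ok = allocate("P4", 45)  # fails: 50 units free in total, but no hole >= 45
```

Note how no partition wastes space internally (each process gets exactly its requested size), yet the final request fails even though enough total memory is free: that is external fragmentation.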
Fragmentation
• Fragmentation is a phenomenon in which memory space is used
inefficiently, reducing capacity or performance.
• Fragmentation leaves some storage capacity "unused".
• There are three distinct kinds of fragmentation: internal fragmentation,
external fragmentation, and data fragmentation.
(Diagram: memory holding processes P1, P2, …, Pn with unused space between them)
Internal fragmentation
• Memory can only be supplied to processes in fixed-size blocks (e.g.,
multiples of 4 bytes), so if a program demands, say, 29 bytes, it will
receive an allocation of 32 bytes.
• The surplus storage goes to waste when this occurs. This useless
space found inside an assigned area is known as internal
fragmentation.
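The rounding described above can be shown with a few lines of arithmetic, assuming a block size of 32 bytes as in the 29-byte example:

```python
# Internal fragmentation: memory is handed out only in fixed-size blocks,
# so a 29-byte request still receives a full 32-byte block.
BLOCK_SIZE = 32

def allocated_size(request):
    """Round a request up to the next multiple of BLOCK_SIZE."""
    blocks = -(-request // BLOCK_SIZE)  # ceiling division
    return blocks * BLOCK_SIZE

request = 29
granted = allocated_size(request)
wasted = granted - request   # bytes lost to internal fragmentation
print(granted, wasted)       # 32 3
```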
External Fragmentation
• External fragmentation occurs when unused storage is broken into
small pieces that are separated by allocated memory.
• Though unused storage is available, it is essentially unusable because
it is split into fragments that are individually too small to meet the
software's requirements.
• In file systems, external fragmentation often arises when several files
of various sizes are created, resized, and deleted.
Assignment:
• Difference between Internal and External Fragmentation
Logical and physical address
• Logical Address or Virtual Address (represented in bits): An address
generated by the CPU
• Logical Address Space or Virtual Address Space (represented in words or
bytes): The set of all logical addresses generated by a program
• Physical Address (represented in bits): An address actually available on
memory unit
• Physical Address Space (represented in words or bytes): The set of all
physical addresses corresponding to the logical addresses
• The mapping from virtual to physical address is done by the memory
management unit (MMU) which is a hardware device and this mapping is
known as paging technique.
Logical and physical address
• The Physical Address Space is conceptually divided into a number of
fixed-size blocks, called frames.
• The Logical Address Space is also split into fixed-size blocks,
called pages.
• Page Size = Frame Size
Paging
• Paging permits the physical address space of a process to be non-
contiguous. It is a fixed-size partitioning scheme.
• In the Paging technique, the secondary memory and main memory
are divided into equal fixed-size partitions.
• Paging solves the problem of fitting memory chunks of varying sizes
onto the backing store, a problem suffered by many other memory
management schemes.
• Paging tends to avoid external fragmentation
Paging Technique
• The paging technique divides the physical memory (main memory) into
fixed-size blocks that are known as frames, and also divides the logical
memory (secondary memory) into blocks of the same size that are
known as pages.
• A frame has the same size as a page. A frame is basically a place
where a (logical) page can be (physically) placed.
Paging Technique
• Pages of a process are brought into the main memory only when there is a
requirement otherwise they reside in the secondary storage.
• The CPU always generates a logical address. In order to access the main
memory always a physical address is needed.
• The logical address generated by CPU always consists of two parts:
1.Page Number(p)
2.Page Offset (d)
• where,
• Page Number (p) specifies the page of the process from which the CPU
wants to read data; it is also used as an index into the page table.
• Page Offset (d) specifies the word on that page that the CPU wants to
read.
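The split of a logical address into page number and offset, and the page-table lookup that follows, can be sketched as below. The 4 KB page size and the page-table entries are assumptions chosen for illustration, not values from the slides:

```python
# Sketch of paging address translation: a logical address is split into a
# page number p (high bits) and an offset d (low 12 bits for 4 KB pages);
# the MMU looks up p in the page table to find the frame number.
PAGE_SIZE = 4096          # bytes per page (and per frame)
OFFSET_BITS = 12          # log2(PAGE_SIZE)

page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (invented)

def translate(logical_addr):
    p = logical_addr >> OFFSET_BITS       # page number: index into page table
    d = logical_addr & (PAGE_SIZE - 1)    # page offset within the page
    frame = page_table[p]                 # page-table lookup
    return frame * PAGE_SIZE + d          # physical address

print(hex(translate(0x1ABC)))  # page 1 maps to frame 2, so 0x2abc
```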
Page Table
• The page table can be implemented in hardware using dedicated
registers, but registers are satisfactory only if the page table is small.
If the page table contains a large number of entries, a TLB (Translation
Look-aside Buffer) can be used: a special, small, fast hardware lookup
cache.
• The TLB is associative, high-speed memory.
• Each entry in the TLB consists of two parts: a tag and a value.
• When this memory is used, an item is compared with all tags
simultaneously. If the item is found, the corresponding value is
returned.
Page Table
• Main memory access time = m
• If the page table is kept in main memory:
Effective access time = m (to access the page table) + m (to access the
required word) = 2m
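With a TLB, the effective access time improves because a TLB hit skips the in-memory page-table access. A sketch using the standard weighted-average formula, with illustrative timing values that are not from the slides:

```python
# Effective access time (EAT) with a TLB: on a hit, only one memory access
# is needed; on a miss, an extra access reads the page table first.
def eat(m, tlb_time, hit_ratio):
    """m = memory access time; tlb_time = TLB lookup time; hit_ratio in [0, 1]."""
    hit = tlb_time + m          # TLB hit: one memory access
    miss = tlb_time + 2 * m     # TLB miss: page-table access + actual access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(eat(m=100, tlb_time=20, hit_ratio=0.9), 2))  # 130.0
print(round(eat(m=100, tlb_time=20, hit_ratio=0.0), 2))  # 220.0 (the 2m case plus TLB overhead)
```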
Segmentation
• Segmentation is a memory management technique in which the
memory is divided into the variable size parts.
• Each part is known as a segment which can be allocated to a process.
• The details about each segment are stored in a table called a segment
table. The segment table has:
1.Base: It is the base address of the segment
2.Limit: It is the length of the segment.
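Base/limit translation can be sketched as follows, assuming a logical address given as a (segment number, offset) pair; the segment-table entries below are invented for illustration:

```python
# Sketch of segment-table translation: the offset is checked against the
# segment's limit, then added to its base to form the physical address.
segment_table = {        # segment number -> (base, limit); values invented
    0: (1400, 1000),
    1: (6300, 400),
    2: (4300, 1100),
}

def translate_segment(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                      # protection check
        raise MemoryError("segment fault: offset beyond segment limit")
    return base + offset                     # physical address

print(translate_segment(2, 53))  # 4300 + 53 = 4353
```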
Segment table:
Translation of logical segment to physical
Advantages of Segmentation
1.No internal fragmentation
2.Average Segment Size is larger than the actual page size.
3.Less overhead
4.It is easier to relocate segments than entire address space.
5.The segment table is of lesser size as compared to the page table in
paging.
Disadvantages of Segmentation
1.It can have external fragmentation.
2.It is difficult to allocate contiguous memory to variable-sized
partitions.
3.Costly memory management algorithms.
• Difference between paging and segmentation?
Virtual Memory
• Virtual Memory is a storage scheme that provides user an illusion of
having a very big main memory. This is done by treating a part of
secondary memory as the main memory.
• Instead of loading one big process in the main memory, the Operating
System loads the different parts of more than one process in the main
memory.
• By doing this, the degree of multiprogramming will be increased and
therefore, the CPU utilization will also be increased.
Virtual Memory Management
• Whenever some pages need to be loaded into the main memory for
execution and memory is not available for all of them, then instead of
stopping those pages from entering the main memory, the OS searches
for the RAM areas that are least recently used or not referenced and
copies them into the secondary memory to make space for the new
pages in the main memory.
Demand Paging
• In demand paging, the pages of a process which are least used, get
stored in the secondary memory.
• A page is copied into the main memory when it is demanded, i.e.,
when a page fault occurs. There are various page replacement
algorithms which are used to determine which pages will be replaced.
Page fault and Thrashing
• If the referred page is not present in the main memory then there will
be a miss and the concept is called Page miss or page fault.
• Whenever any page is referenced for the first time, it will be found
only in the secondary memory, hence a page fault occurs.
• If the number of page faults equals the number of referenced pages,
or the number of page faults is so high that the CPU remains busy
just reading pages from the secondary memory, then the effective
access time approaches the time taken by the CPU to read one word
from the secondary memory, which is very high. This situation is
called thrashing.
Page replacement Algorithms
• First In First Out (FIFO): This is the simplest page replacement
algorithm. In this algorithm, the operating system keeps all pages in
memory in a queue, with the oldest page at the front. When a page
needs to be replaced, the page at the front of the queue is selected
for removal.
• No. of frames = 3
Example: FIFO
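The FIFO scheme above can be simulated with 3 frames; the reference string below is an illustrative example, not taken from the slides:

```python
# Sketch of FIFO page replacement: resident pages live in a queue,
# with the oldest page at the front; on a fault with full memory,
# the front page is evicted.
from collections import deque

def fifo_faults(reference_string, num_frames):
    frames = deque()            # front = oldest resident page
    faults = 0
    for page in reference_string:
        if page not in frames:              # page fault
            faults += 1
            if len(frames) == num_frames:   # memory full: evict oldest
                frames.popleft()
            frames.append(page)
    return faults

refs = [1, 3, 0, 3, 5, 6, 3]
print(fifo_faults(refs, num_frames=3))  # 6
```

Walking through the run: pages 1, 3, 0 fault and fill the frames; the second 3 is a hit; 5 evicts 1; 6 evicts 3; the final 3 faults again and evicts 0, for 6 faults in total.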
Page replacement Algorithms