Unit 5

This document provides an overview of memory management techniques used in operating systems. It discusses how the OS interacts with hardware to manage memory allocation. Key techniques covered include swapping, paging, segmentation, and virtual memory management. Swapping moves processes between main memory and disk storage. Paging and segmentation divide memory into fixed and variable sized blocks. Paging maps logical addresses to physical frames, while segmentation uses segment tables. Virtual memory gives the illusion of a larger memory space using disk as additional storage.

Memory Management

Surbhi Dongaonkar
Contents
• O.S. and hardware interaction
• Swapping
• Contiguous memory management
• Paging
• Segmentation
• Virtual Memory Management
• Demand Paging
• Page replacement algorithms
• Allocation of frames
• Kernel memory management
Memory Management
• Memory Management is the process of controlling and
coordinating computer memory, assigning portions known as
blocks to various running programs to optimize the overall
performance of the system.
• It is the most important function of an operating system that
manages primary memory.
• It allows processes to move back and forth between the main
memory and the disk during execution.
• It helps OS to keep track of every memory location, irrespective of
whether it is allocated to some process or it remains free.
OS and Hardware Interaction
• For hardware functions such as input and output and memory
allocation, the operating system acts as an intermediary between
programs and the computer hardware.
• As the OS coordinates the activities of the processor, it uses RAM as a
temporary storage area for instructions and data the processor needs.
• The OS is therefore responsible for coordinating the space allocations
in RAM to ensure that there is enough space for the waiting
instructions and data.
• If there isn’t sufficient space in RAM for all the data and instructions,
the OS moves the least-needed data and instructions to temporary
storage on the hard drive.
Swapping
• Swapping is a memory management scheme in which any process can
be temporarily swapped from main memory to secondary memory so
that the main memory can be made available for other processes.
• It is used to improve main memory utilization.
• In secondary memory, the place where the swapped-out process is
stored is called swap space.
• Swap-out is a method of removing a process from RAM and adding it
to the hard disk.
• Swap-in is a method of removing a program from a hard disk and
putting it back into the main memory or RAM.
Advantages of Swapping
• It helps the CPU to manage multiple processes within a single main
memory.
• It helps to create and use virtual memory.
• Swapping allows the CPU to perform multiple tasks simultaneously.
Therefore, processes do not have to wait very long before they are
executed.
• It improves the main memory utilization.
Disadvantages of Swapping
• If the computer system loses power, the user may lose all information
related to the program in case of substantial swapping activity.
• If the swapping algorithm is poor, excessive swapping can increase
the number of page faults and decrease the overall processing
performance.
Contiguous Memory Management
• In Contiguous Memory Allocation, each process is contained in a
single contiguous section of memory.
• Memory allocation is achieved by dividing the memory into
partitions.
• The two important types of memory management are:
1.Fixed partitioning
2.Dynamic partitioning
Fixed Partitioning
• Fixed size partitioning is also called static partitioning.
• In fixed partitioning, the number of partitions is fixed.
• In fixed partitioning, the size of each partition may or may not be the same.
• In fixed partitioning, spanning is not allowed, which means that the entire
process has to fit into a single partition block; a process cannot be split
across multiple partitions.
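The allocation rule above can be sketched in Python. This is a minimal first-fit simulation; the partition and process sizes are hypothetical, chosen to show both internal fragmentation and a process that is too large for any partition:

```python
def allocate_fixed(partitions, processes):
    """First-fit allocation into fixed partitions (spanning not allowed).

    Returns a list of (process, partition, internal_fragmentation) tuples;
    (process, None, None) means the process could not be allocated.
    """
    free = list(partitions)
    report = []
    for size in processes:
        for i, part in enumerate(free):
            if part is not None and part >= size:
                report.append((size, part, part - size))  # wasted space inside partition
                free[i] = None                            # partition now occupied
                break
        else:
            report.append((size, None, None))             # no partition large enough
    return report

# Hypothetical partition and process sizes (in KB)
print(allocate_fixed([100, 200, 300, 400], [90, 150, 280, 500]))
```

Note how the 500 KB process is rejected even though 400 KB + leftover space exists: spanning is not allowed, so the process size is limited by the largest partition.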
Limitations to Fixed Partitioning
1.Internal fragmentation
2.Limits the process size
3.Limits the degree of multi-programming
4.External fragmentation
Dynamic/Variable size partitioning
• Dynamic partitioning is a variable size partitioning.
• In dynamic partitioning, the memory is allocated at run-time based
on the requirement of processes.
• There is no internal fragmentation in dynamic partitioning.
• In dynamic partitioning, there is no limitation on the number of
processes.
• The only limitation of dynamic partitioning is that it suffers from
external fragmentation.
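A short Python sketch of dynamic partitioning with a first-fit policy illustrates the external fragmentation problem; the hole sizes are hypothetical:

```python
def first_fit(holes, request):
    """Carve `request` KB out of the first free hole large enough.

    In dynamic partitioning the leftover stays behind as a smaller hole.
    Returns True on success, False if no single hole can satisfy the request.
    """
    for i, hole in enumerate(holes):
        if hole >= request:
            holes[i] = hole - request
            return True
    return False

holes = [50, 120, 30]          # free holes left after earlier allocations (KB)
print(first_fit(holes, 100))   # True: taken from the 120 KB hole
print(holes)                   # [50, 20, 30]
print(first_fit(holes, 60))    # False: 100 KB free in total, but scattered
```

The final request fails even though 100 KB is free overall, because the free memory is split into holes that are each too small: that is external fragmentation.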
Fragmentation
• Fragmentation is a condition of data storage in which memory space is
used inefficiently, reducing capacity or performance.
• Fragmentation contributes to "unused" storage capacity.
• There are three distinct kinds of fragmentation: internal fragmentation,
external fragmentation, and data fragmentation.

[Figure: processes P1, P2, …, Pn in memory, with unused space between them]
Internal fragmentation
• Memory can only be supplied to programs in fixed-size blocks (e.g.,
multiples of 4 bytes); as a result, if a program demands, say, 29 bytes,
it will receive an allocation of 32 bytes.
• The surplus storage is wasted when this occurs. This unused space
found inside an assigned region is known as Internal
Fragmentation.
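The rounding behaviour described above can be expressed in a couple of lines (the block size of 4 bytes matches the example in the text):

```python
def allocated_size(request, block=4):
    """Round a request up to the next multiple of the block size."""
    return -(-request // block) * block   # ceiling division

req = 29
got = allocated_size(req, block=4)
print(got, got - req)   # 32 bytes allocated, 3 bytes wasted internally
```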
External Fragmentation
• External fragmentation occurs when free storage is divided into
small, separate lots punctuated by allocated memory regions.
• Though unused storage is available, it is essentially unusable, since
it is split into fragments that are individually too small to meet the
software's requirements.
• In file systems, external fragmentation often arises when several files of
various sizes are created, resized, and deleted.
Assignment:
• Difference between Internal and External Fragmentation
Logical and physical address
• Logical Address or Virtual Address (represented in bits): An address
generated by the CPU
• Logical Address Space or Virtual Address Space( represented in words or
bytes): The set of all logical addresses generated by a program
• Physical Address (represented in bits): An address actually available on
memory unit
• Physical Address Space (represented in words or bytes): The set of all
physical addresses corresponding to the logical addresses
• The mapping from virtual to physical address is done by the memory
management unit (MMU) which is a hardware device and this mapping is
known as paging technique.
Logical and physical address
• The Physical Address
Space is conceptually
divided into a number of
fixed-size blocks,
called frames.
• The Logical Address
Space is also split into
fixed-size blocks,
called pages.
• Page Size = Frame Size
Paging
• Paging permits the physical address space of a process to be non-
contiguous. It is a fixed-size partitioning scheme.
• In the Paging technique, the secondary memory and main memory
are divided into equal fixed-size partitions.
• Paging solves the problem of fitting memory chunks of varying sizes
onto the backing store and this problem is suffered by many memory
management schemes.
• Paging tends to avoid external fragmentation
Paging Technique
• The paging technique divides the
physical memory (main memory) into
fixed-size blocks known
as Frames, and divides the logical
memory (secondary memory) into
blocks of the same size, known
as Pages.
• The Frame has the same size as that of
a Page. A frame is basically a place
where a (logical) page can be
(physically) placed.
Paging Technique
• Pages of a process are brought into the main memory only when there is a
requirement otherwise they reside in the secondary storage.
• The CPU always generates a logical address. In order to access the main
memory always a physical address is needed.
• The logical address generated by CPU always consists of two parts:
1.Page Number(p)
2.Page Offset (d)
• where,
• Page Number is used to specify the particular page of the process from which the
CPU wants to read data; it is also used as an index into the page table.
• Page Offset specifies the particular word on the page that the
CPU wants to read.
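The address split and the page-table lookup can be sketched as follows. The page size (1 KB) and the page-to-frame mapping are hypothetical values chosen for illustration:

```python
PAGE_SIZE = 1024                               # assumed page size: 1 KB
page_table = {0: 5, 1: 2, 2: 7, 3: 0, 4: 3}    # hypothetical page -> frame map

def translate(logical):
    """Split a logical address into (p, d) and map it to a physical address."""
    p, d = logical // PAGE_SIZE, logical % PAGE_SIZE   # page number, page offset
    frame = page_table[p]                              # page-table lookup
    return frame * PAGE_SIZE + d                       # physical address

print(translate(5000))   # page 4, offset 904 -> frame 3 -> 3*1024 + 904 = 3976
```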
Page Table
• The hardware implementation of the page table can be done using
dedicated registers. But the use of registers for the page table is
satisfactory only if the page table is small. If the page table contains a
large number of entries, we can use a TLB (translation look-aside buffer),
a special, small, fast lookup hardware cache.
• The TLB is associative, high-speed memory.
• Each entry in the TLB consists of two parts: a tag and a value.
• When this memory is used, an item is compared with all tags
simultaneously. If the item is found, the corresponding value is
returned.
Page Table
Page Table

Main memory access time = m
If the page table is kept in main memory:
Effective access time = m (to access the page table) + m (to access the
required word) = 2m
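The 2m cost above, and the standard improvement from adding a TLB (hit: one TLB probe plus one memory access; miss: one probe plus two accesses), can be computed directly. The timing values in the example are hypothetical:

```python
def eat_no_tlb(m):
    """Page table in main memory: one access for the table, one for the word."""
    return 2 * m

def eat_with_tlb(m, tlb, hit_ratio):
    """Standard TLB formula: hit -> tlb + m, miss -> tlb + 2m."""
    return hit_ratio * (tlb + m) + (1 - hit_ratio) * (tlb + 2 * m)

print(eat_no_tlb(100))              # 200 ns for m = 100 ns
print(eat_with_tlb(100, 20, 0.9))   # 130.0 ns at a 90% TLB hit ratio
```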
Segmentation
• Segmentation is a memory management technique in which the
memory is divided into the variable size parts.
• Each part is known as a segment which can be allocated to a process.
• The details about each segment are stored in a table called a segment
table. Segment table have:
1.Base: It is the base address of the segment
2.Limit: It is the length of the segment.
Segment table:
Translation of logical segment to physical
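The base/limit translation can be sketched as below. The segment table entries are hypothetical example values; an offset at or beyond the limit traps, as segmentation hardware would:

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """Map a (segment, offset) logical address to a physical address."""
    base, limit = segment_table[segment]
    if offset >= limit:                 # offset beyond segment length -> trap
        raise MemoryError("segmentation fault: offset out of range")
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
```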
Advantages of Segmentation
1.No internal fragmentation
2.Average Segment Size is larger than the actual page size.
3.Less overhead
4.It is easier to relocate segments than entire address space.
5.The segment table is of lesser size as compared to the page table in
paging.
Disadvantages of Segmentation
1.It can have external fragmentation.
2.It is difficult to allocate contiguous memory to variable sized
partitions.
3.Costly memory management algorithms.
Assignment:
• Difference between paging and segmentation
Virtual Memory
• Virtual Memory is a storage scheme that provides user an illusion of
having a very big main memory. This is done by treating a part of
secondary memory as the main memory.
• Instead of loading one big process in the main memory, the Operating
System loads the different parts of more than one process in the main
memory.
• By doing this, the degree of multiprogramming will be increased and
therefore, the CPU utilization will also be increased.
Virtual Memory Management
• Whenever some pages need to be loaded into the main memory for
execution and memory is not available for all of them, then, instead
of stopping those pages from entering the main memory, the OS
searches the RAM for areas that were least recently used or not
referenced, and copies them to secondary memory to make space for
the new pages in the main memory.
Demand Paging
• In demand paging, the pages of a process which are least used, get
stored in the secondary memory.
• A page is copied to the main memory when its demand is made or
page fault occurs. There are various page replacement algorithms
which are used to determine the pages which will be replaced.
Page fault and Thrashing
• If the referred page is not present in the main memory then there will
be a miss and the concept is called Page miss or page fault.
• Whenever any page is referred for the first time in the main memory,
then that page will be found in the secondary memory, hence page
fault occurs.
• If the number of page faults equals the number of referenced pages,
or page faults are so frequent that the CPU stays busy just reading
pages from the secondary memory, then the effective access time
approaches the time taken by the CPU to read one word from the
secondary memory, which is very high. This condition is called
thrashing.
Page replacement Algorithms
• First In First Out (FIFO): This is the simplest page replacement
algorithm. In this algorithm, the operating system keeps track of all
pages in memory in a queue, with the oldest page at the front of the
queue. When a page needs to be replaced, the page at the front of the
queue is selected for removal.
• Number of frames = 3
Example: FIFO
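The FIFO policy above can be simulated in a few lines; the reference string below is the classic textbook example, run with 3 frames:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:            # memory full: evict oldest page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15 page faults
```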
Page replacement Algorithms

• Least Recently Used (LRU): In this algorithm, the page that was
least recently used is replaced.
• Needs to keep track of when each page was last used.
• Keeps recently used pages in main memory to reduce page faults.
Example-LRU
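A minimal LRU sketch, keeping a list ordered from least to most recently used, on the same textbook reference string with 3 frames:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    memory, faults = [], 0            # memory[0] is the least recently used page
    for page in refs:
        if page in memory:
            memory.remove(page)       # hit: refresh this page's recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)         # evict the least recently used page
        memory.append(page)           # most recently used goes to the end
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 page faults
```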
Page replacement Algorithms
• Optimal Page replacement: In this algorithm, pages are replaced
which would not be used for the longest duration of time in the
future.
• Needs to know the sequence of pages to use in future.
• This is tedious and can change dynamically, but will reduce the page
faults.
Example-Optimal
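The optimal policy can be simulated when the whole reference string is known in advance, by evicting the resident page whose next use lies farthest in the future (or that is never used again). On the same textbook string with 3 frames:

```python
def optimal_faults(refs, frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    memory, faults = set(), 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            future = refs[i + 1:]
            # Evict the page used farthest in the future, or never again.
            victim = max(memory, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            memory.discard(victim)
        memory.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))   # 9 page faults
```

Comparing the three runs (FIFO 15, LRU 12, optimal 9 on this string) shows why optimal is the benchmark the practical algorithms are measured against.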
Page replacement Algorithms
• Most Recently Used (MRU): In this algorithm, the most recently used
page is replaced.
• Last In First Out (LIFO): In this algorithm, the most recently added
(newest) page is replaced.
• Random Page Replacement: As the name indicates, this algorithm
replaces a page at random.
• Most Frequently Used (MFU): In this algorithm, the page that has been
used the largest number of times is replaced.
• Least Frequently Used (LFU): In this algorithm, the page that has been
used the least number of times is replaced.
Belady’s Anomaly
• Increasing the number of frames should lead to fewer page faults,
since there is a better chance of a page already being resident.
• But that is not always the case.
• Sometimes, increasing the number of frames actually increases the
number of page faults; this type of anomaly is called Belady's Anomaly.
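The anomaly can be demonstrated with a FIFO simulation on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, where moving from 3 to 4 frames increases the fault count:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:            # evict the oldest resident page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]      # classic Belady reference string
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more frames, yet more faults
```

LRU and optimal do not suffer from this anomaly; it is specific to algorithms, like FIFO, that lack the "stack property".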
Kernel Memory
• The idea of kernel memory comes from the computer structure, which has
a kernel layer that is responsible for the core processes to run the
operating system.
• When you run your computer during startup, the kernel layer identifies the
processes needed to load your OS.
• These processes are essential and thus kernel memory was developed as
part of memory management to ensure that there is always an available
and dedicated memory for core processes.
• Nonpaged Kernel Memory: The nonpaged kernel memory in task manager
refers to the kernel memory that uses your RAM
• Paged Kernel Memory: Virtual memory is used to take off some load from
the RAM, making the RAM available for other applications
Kernel Memory Management
• Kernel memory is often allocated from a free-memory pool different
from memory used for user-mode processes.
• The reasons for this are:
• The kernel requests memory for data structures that can be smaller than a
page.
• Some hardware devices interact directly with physical memory, without
going through virtual memory.
• There are two strategies for managing free memory in kernel space:
• Buddy System
• Slab allocation
Buddy System
• The Buddy System allocates memory in fixed-size segments consisting of
physically contiguous pages.
• Memory is allocated by a power-of-2 allocator, meaning it satisfies
requests in powers of 2 (4KB, 8KB, 16KB, ...).
• Example: a request of 11KB is satisfied with a 16KB segment.
• Advantage: easy splitting of segments, and merging of adjacent buddies
when larger requests come in.
• Disadvantage: Internal Fragmentation
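The power-of-2 rounding rule, and the internal fragmentation it causes, can be sketched as:

```python
def buddy_allocation(request_kb):
    """Round a request up to the next power of two (buddy system rule)."""
    size = 1
    while size < request_kb:
        size *= 2
    return size

req = 11
got = buddy_allocation(req)
print(got, got - req)   # a 16 KB segment, so 5 KB of internal fragmentation
```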
Slab Allocation
• A slab is made up of one or more contiguous physical pages.
• A cache consists of one or more slabs
• There can be separate cache for separate data structure.
• Slab allocation algorithm uses caches to store kernel objects.
• Number of objects in a cache depends on size of associated slab.
• In Linux, a slab may be in one of three states:
• Full: All objects in slab are marked as used
• Empty: All objects in slab are marked as free
• Partial: Slab consists of both used and free objects
Slab allocation
• Advantages:
• No fragmentation
• Quick service to request
• Recent Linux distributions include two other memory
allocators, SLOB and SLUB, along with the SLAB allocator.
