OS Chapter 2
Memory management
4.1 Introduction
• The main purpose of a computer system is to execute programs.
• These programs, together with the data they access, must reside in main memory at the time of execution.
• Managing main memory, one of the most precious resources in a multiprogramming system, is one of the core responsibilities of an operating system.
• Memory consists of a large array of words or bytes, each with its own address.
• Modern operating systems provide efficient memory management, and research is still being conducted to improve the way memory is allocated to applications, since allocation is the main problem a memory manager faces.
An Overview of Memory
• Memory hierarchy design is divided into three main levels:
1. Processor Registers
2. Cache Memory
3. Main Memory
Swapping
• Selection of processes to swap out
criteria: suspended/blocked state, low priority, time spent in memory
• Selection of processes to swap in
criteria: time spent on swapping device, priority
• Allocation and management of swap space on a swapping device.
Swap space can be:
• system wide
• dedicated (e.g., swap partition or disk)
Memory protection
• The second fundamental task of a memory management
system is to protect programs sharing the memory from each other.
• This protection also covers the operating system itself.
• Memory protection can be provided at either of two levels:
• Hardware: address translation (most common!)
• Software:
• language dependent: strong typing
• language independent: software fault isolation
Memory partitioning
• The process A5 has a size of eight pages (8 KB) and is waiting in the ready queue.
• Suppose processes A2 and A4 are removed from memory; their eight frames become vacant, and we need to load or put other pages in those vacant blocks.
• We can see in the following example that there are eight non-contiguous free frames in memory; with the help of paging, we can store a process's pages at different places.
• Due to this, we can load the pages of process A5 in place of processes A2 and A4.
Demand Paging
• A technique in which a page is brought into main memory only when it is needed, i.e., demanded by the CPU.
• Page faults occur when the CPU requests a page that is not currently marked present in the page table (in other words, not currently in RAM).
• Whenever a page fault occurs, the operating system brings the required page into memory.
The process includes the following steps:
1. If the CPU tries to refer to a page that is currently not available in main memory, the hardware generates an interrupt indicating a memory-access fault.
2. The OS puts the interrupted process in a blocked state. For execution to proceed, the OS must bring the required page into memory.
3. The OS locates the required page in secondary storage (the backing store).
4. The required page is brought from secondary storage into physical memory. If no frame is free, a page replacement algorithm decides which resident page to replace.
5. The page table is updated accordingly.
6. A signal is sent to the CPU to continue program execution, and the process is placed back into the ready state.
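The six steps above can be sketched in Python. This is a minimal illustration, not an OS implementation: the frame count, backing-store contents, and the FIFO victim choice are all assumptions made for the sketch.

```python
from collections import OrderedDict

# Assumed, illustrative values: every page has a copy on the backing store,
# and main memory holds only two frames so replacement is triggered quickly.
BACKING_STORE = {0: "page0-data", 1: "page1-data", 2: "page2-data"}
NUM_FRAMES = 2

page_table = {}          # page number -> frame number (resident pages only)
frames = OrderedDict()   # frame -> page; insertion order doubles as a FIFO list

def access(page):
    """Return the frame holding `page`, servicing a page fault if needed."""
    if page in page_table:                # no fault: page is resident
        return page_table[page]
    # Page fault (step 1-2): the process would be blocked while the OS works.
    if len(frames) == NUM_FRAMES:         # memory full: replacement needed
        victim_frame, victim_page = frames.popitem(last=False)  # FIFO victim
        del page_table[victim_page]
    else:
        victim_frame = len(frames)        # a frame is still free
    _data = BACKING_STORE[page]           # steps 3-4: fetch from backing store
    frames[victim_frame] = page
    page_table[page] = victim_frame       # step 5: update the page table
    return victim_frame                   # step 6: process resumes

access(0); access(1)
access(2)                 # fault: evicts page 0, the oldest resident page
print(0 in page_table)    # False
```

Running the sketch shows that after the third access, page 0 has been replaced even though it was referenced first, which is exactly the FIFO behaviour discussed later in this chapter.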
Advantages of demand paging:
• More processes may be maintained in the main memory:
• Because only some of the pages of any particular process are loaded, there is room for more processes.
• This leads to more efficient utilization of the processor, because it is more likely that at least one of the more numerous processes will be in the ready state at any particular time.
• A process may be larger than the entire main memory:
• One of the most fundamental restrictions in programming is lifted.
• A process larger than main memory can be executed because of demand paging; the OS itself loads pages of the process into main memory as required.
• It allows greater multiprogramming levels by using less of the available (primary) memory for each process.
Address Translation
The address generated by the CPU is divided into:
• Page number (p) -- used as an index into the page table, which contains the base address of each page's frame in physical memory.
• Page offset (d) -- combined with the frame base address to form the physical memory address.
• The page address is called the logical address and is represented by a page number and an offset:
Logical address = (page number, page offset)
• The frame address is called the physical address and is represented by a frame number and an offset:
Physical address = (frame number, page offset)
• A data structure called the page map table is used to keep track of which frame in physical memory holds each page of a process.
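The translation above can be sketched in a few lines. The page size and page-table contents here are assumed values chosen only for illustration:

```python
PAGE_SIZE = 1024  # bytes; assumed for the example

# Assumed page map table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    # Split the logical address into (page number, page offset)
    page_number, offset = divmod(logical_address, PAGE_SIZE)
    frame_number = page_table[page_number]   # index into the page table
    return frame_number * PAGE_SIZE + offset # frame base + offset

# Logical address 1034 = page 1, offset 10 -> frame 2, so 2*1024 + 10 = 2058
print(translate(1034))  # 2058
```

Note that the page number selects a table entry while the offset passes through unchanged, which is why the logical address is a (number, offset) pair rather than an arithmetic sum.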
Advantages of paging
• In paging, there is no external fragmentation.
• In paging, swapping between equal-size pages and page frames is easy.
• Paging is a simple technique to use for memory management.
Drawbacks of Paging
• In Paging, there may be a chance of Internal Fragmentation.
• In Paging, the Page table consumes extra memory.
• Due to Multi-level Paging, there may be a chance of memory reference
overhead.
Segmentation
• Segmentation is a technique to break memory into logical pieces where each piece
represents a group of related information.
• Segmentation is a memory management technique in which the memory is divided into
the variable size parts.
• Each part is known as a segment which can be allocated to a process.
• For example, a code segment and data segment for each process, a data segment for the operating system, and so on.
• Segmentation is another technique for non-contiguous storage allocation.
• It is different from paging as pages are physical in nature and hence are of fixed size,
whereas segments are logical divisions of a program and hence are of variable size.
• In segmentation, we divide the logical address space into different segments.
• The size of a segment varies according to the data stored in it or the nature of operation
performed on that segment.
• Segmentation can be implemented with or without paging.
• Unlike pages, segments have varying sizes, which eliminates internal fragmentation. External fragmentation still exists, but to a lesser extent.
• In segmentation, memory is split into variable-length parts; each part is known as a segment.
• The information related to each segment is stored in a table called the segment table.
• There are two types of information stored in the segment table:
• Limit: - The limit is the length or size of the segment
• Base: - The base is the base address of the segment.
• The operating system preserves a segment map table for mapping.
• The table consists of segment number, list of the memory blocks which are free
along with its size, and its memory location in the virtual memory or the main
memory.
Translation of Logical Address into Physical Address by Segment Table
• The CPU generates a logical address that comprises of two parts:
• Segment Number: - identifies which segment is being referenced; the number of bits allotted to it determines the maximum number of segments.
• Segment Offset: - gives the position of the referenced word within the segment; the number of bits allotted to it determines the maximum segment size.
• We map the segment number to the segment table, and after mapping, a
comparison is done between the limit of the segment to the offset.
• If the offset is less than the limit, then the address is valid. Otherwise, the address is invalid.
• If the address is valid then we add the base address of the segment to the offset
so that we can get the physical address of the actual word in the main memory.
Example
Calculate the physical address if no trap is produced.
• In a segmentation scheme, the generated logical address consists of two parts-
• Segment Number
• Segment Offset
• We know-
• Segment Offset must always lie in the range [0, limit-1].
• If segment offset becomes greater than or equal to the limit of segment, then trap
addressing error is produced.
Q1: logical address (0, 430)
• Segment Number = 0
• Segment Offset = 430
• We have,
• In the segment table, limit of segment-0 is 700.
• Thus, segment offset must always lie in the range = [0, 700-1] = [0, 699]
• Now,
• Since generated segment offset lies in the above range, so request generated is valid.
• Therefore, no trap will be produced.
• Physical Address= base + segment offset
• Physical Address = 1219 + 430 = 1649
Q2: logical address (1, 11)
• Segment Number = 1
• Segment Offset = 11
• We have,
• In the segment table, limit of segment-1 is 14.
• Thus, segment offset must always lie in the range = [0, 14-1] = [0, 13]
• Now,
• Since generated segment offset lies in the above range, so request generated is valid.
• Therefore, no trap will be produced.
• Physical Address = 2300 + 11 = 2311
Q3: logical address (2, 100)
• Segment Number = 2
• Segment Offset = 100
• We have,
• In the segment table, limit of segment-2 is 100.
• Thus, segment offset must always lie in the range = [0, 100-1] = [0, 99]
• Now,
• Since generated segment offset does not lie in the above range, so the request
generated is invalid.
• Therefore, trap will be produced.
• Do question number 4 and 5 as an exercise
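The translation procedure and the three worked questions above can be checked with a small sketch. The base address of segment 2 is not given in the slides, so an assumed value is used below purely to demonstrate the trap case:

```python
# Segment table from the worked example; segment 2's base is an assumption.
segment_table = {
    0: {"base": 1219, "limit": 700},
    1: {"base": 2300, "limit": 14},
    2: {"base": 90,   "limit": 100},   # base assumed; limit taken from Q3
}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:       # offset must lie in [0, limit - 1]
        raise Exception("trap: addressing error")
    return entry["base"] + offset      # base + offset = physical address

print(translate(0, 430))   # Q1 -> 1219 + 430 = 1649
print(translate(1, 11))    # Q2 -> 2300 + 11 = 2311
# translate(2, 100) raises the trap, as in Q3 (100 is outside [0, 99])
```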
Benefits of Segmentation
• Internal fragmentation is not present in the segmentation.
• Less overhead.
• In segmentation, the segment table size is less than the page table size.
• In segmentation, the relocation of the segment is easier than the whole
address space.
Drawbacks of Segmentation
• In segmentation, there may be a chance of external fragmentation.
• The segmentation technique is expensive.
• In Segmentation, it is tough to allocate memory in a contiguous manner to a
variable-sized partition.
Difference between Paging and Segmentation
• In paging, memory is partitioned into fixed-size pages; in segmentation, memory is partitioned into variable-size segments.
• In paging, the hardware determines the page size; in segmentation, the user gives the size of each segment.
• Paging may suffer from internal fragmentation; segmentation may suffer from external fragmentation.
• Paging is faster than segmentation.
• In paging, the logical address is partitioned into a page number and a page offset; in segmentation, it is partitioned into a segment number and a segment offset.
Dynamic Memory Allocation algorithms
• Memory allocation is a process by which computer programs are assigned
memory or space.
• There are three main strategies that the memory manager can use when
allocating free memory to processes:
Best Fit:
• Find the smallest free memory block (hole) that is big enough for the request, i.e., the block closest in size to the request.
• The idea is to minimize the wastage of free memory space.
Worst Fit:
• Find the largest free memory block that will fit the process's needs.
• The idea is to leave a usable fragment left over, increasing the possibility that another process can use the left-over space.
First Fit:
• Allocate the first hole, scanning from the top of main memory, that is big enough.
• The idea is to minimize the time spent searching the available memory space.
Example
Although best fit minimizes wasted space, it consumes a lot of processor time searching for the block closest to the required size, and it may perform worse than the other algorithms in some cases, as the following example shows.
Consider requests from processes in the given order 300K, 25K, 125K and 50K. Let there be two blocks of memory available, of size 150K followed by a block of size 350K. Which of the partition allocation schemes can satisfy the above requests:
First fit, Best fit or Worst fit?
Solution
First fit:
300k allocated into the 350k block, left with 50k free space
25k allocated into the 150k block, left with 125k free space
125k allocated into the 125k left-over, there is no free space
50k allocated into the 50k left-over, there is no free space
All processes are allocated and there is no wasted memory block.
Best fit:
300k allocated into the 350k block, left with 50k free space
25k allocated into the 50k left-over (the smallest sufficient hole), left with 25k
125k allocated into the 150k block, left with 25k
The 50k request cannot be satisfied: there is one unallocated process and there is also a 50k (25k + 25k) wasted memory block.
Worst fit:
300k allocated into the 350k block, left with 50k free space
25k allocated into the 150k block, left with 125k free space
125k allocated into the 125k left-over, there is no free space
50k allocated into the 50k left-over, there is no free space
All processes are allocated and there is no wasted memory block.
Which algorithm allocates the processes efficiently?
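The worked example above can be checked with a sketch of the three strategies; the block sizes and requests are taken directly from the example:

```python
def allocate(blocks, requests, choose):
    """Return the list of requests that could not be allocated."""
    free = list(blocks)
    failed = []
    for req in requests:
        # Holes large enough for this request
        candidates = [i for i, size in enumerate(free) if size >= req]
        if not candidates:
            failed.append(req)
            continue
        i = choose(candidates, free)
        free[i] -= req              # carve the request out of the chosen hole
    return failed

first_fit = lambda cand, free: cand[0]                          # first sufficient hole
best_fit  = lambda cand, free: min(cand, key=lambda i: free[i]) # smallest sufficient hole
worst_fit = lambda cand, free: max(cand, key=lambda i: free[i]) # largest hole

blocks, requests = [150, 350], [300, 25, 125, 50]
print(allocate(blocks, requests, first_fit))  # [] : all requests satisfied
print(allocate(blocks, requests, best_fit))   # [50] : the 50K request fails
print(allocate(blocks, requests, worst_fit))  # [] : all requests satisfied
```

Running it confirms the solution above: first fit and worst fit satisfy every request, while best fit leaves the 50K request unallocated.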
Exercise
Given free memory partitions of 100K, 500K, 200K, 300K, and 600K
(in order), how would each of the First-fit, Best-fit, and Worst-fit
algorithms place processes of 212K, 417K, 112K, and 426K (in order)?
Which algorithm makes the most efficient use of memory?
Page Replacement Algorithm
• Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out (write to disk) when a frame of memory needs to be allocated.
• Whenever a page fault occurs, the OS has to choose a page to remove from memory to make room for the page that has to be brought in.
• When a page that was selected for replacement and paged out is referenced again, it has to be read back in from disk, which requires waiting for I/O completion.
• This determines the quality of a page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
• There are many different page replacement algorithms.
Cont.…
• Whenever main memory (RAM) is full, or when a page is requested that is
located in virtual memory, the memory manager needs to swap old or
unused pages from RAM into virtual memory in order to make space for the
new pages being loaded.
• There are a number of strategies used by the memory manager to handle this
task.
• These strategies are often called replacement policies.
• There are 2 main replacement policies that are used by operating systems:
• First In First Out (FIFO)
• Least Recently used
• Hit Ratios: Determining Which Replacement Policy to Use
• The hit ratio is the number of times a page is actually found in main memory, as opposed to the number of page faults generated, which require the memory manager to retrieve the page from virtual memory.
• To calculate the hit ratio, divide the number of references that did not fault by the total number of page requests (the number of times data has been sent between the CPU and main memory):
• Hit ratio = hits / total references
Cont….
• Since there are extra steps involved in order to execute the instruction, and
since retrieving a page from a hard disk is slower than RAM, page faults
result in longer processing time.
• We use hit ratios to determine which replacement policy will result in the
fewest number of page faults, and the fastest overall processing time.
• The policy that produces the lowest number of page faults is usually the
policy we want to use.
• A HIT = the page is found in RAM.
• A PAGE FAULT = the page must be retrieved from virtual memory.
• A FRAME is a segment of RAM that can be allocated to hold a page, and is typically the same size as a page.
First In First Out (FIFO) policy
• Oldest page in main memory is the one which will be selected for
replacement.
• Easy to implement, keep a list, replace pages from the tail and add new
pages at the head.
• Uses a queue data structure to keep track of the pages in main memory.
• Oldest page at the front (head) and newest page at the back (tail).
• Always replace (get rid of) the oldest page.
• Does not always work, because the oldest page may still be used by the
process.
Least Recently Used (LRU) policy
• The page which has not been used for the longest time in main memory is the one selected for replacement.
• Easy to implement: keep a list and replace pages by looking back in time.
• This algorithm helps the operating system keep track of which pages were used recently, so that the least recently used page can be identified and replaced.
• Consider a main memory with five frames and the following sequence of
page references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. calculate hit and page
fault using First-In-First-out (FIFO) and Least Recently Used (LRU)
replacement policy?
Using FIFO
Number of Page Faults = 9
Number of hits = 6
Page fault ratio = 9/15, i.e., page faults (total misses) / total page requests
Request   3  8  2  3  9  1  6  3  8  9  3  6  2  1  3
Frame 1   3  3  3  3  3  3  6  6  6  6  6  6  6  6  6
Frame 2      8  8  8  8  8  8  3  3  3  3  3  3  3  3
Frame 3         2  2  2  2  2  2  8  8  8  8  8  8  8
Frame 4               9  9  9  9  9  9  9  9  2  2  2
Frame 5                  1  1  1  1  1  1  1  1  1  1
Miss/hit  M  M  M  H  M  M  M  M  M  H  H  H  M  H  H
(On each fault once memory is full, the page that entered earliest is evicted: first 3, then 8, then 2, then 9.)
Using LRU
Number of Page Faults = 9
Number of Hits = 6
Page fault ratio = 9/15, i.e., page faults (total misses) / total page requests
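Both policies can be simulated on the reference string above. This sketch keeps the resident pages in an ordered dictionary, always evicting the front entry; for LRU a page is moved to the back on every hit so the front is the least recently used page.

```python
from collections import OrderedDict

def simulate(refs, num_frames, policy):
    """Simulate 'FIFO' or 'LRU' replacement; return (faults, hits)."""
    frames = OrderedDict()   # page -> None; front entry is the next victim
    faults = 0
    for page in refs:
        if page in frames:
            if policy == "LRU":
                frames.move_to_end(page)   # refresh recency on a hit
            continue                       # hit: nothing else to do
        faults += 1                        # page fault
        if len(frames) == num_frames:
            frames.popitem(last=False)     # evict oldest (FIFO) / LRU page
        frames[page] = None
    return faults, len(refs) - faults

refs = [3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3]
print(simulate(refs, 5, "FIFO"))  # (9, 6)
print(simulate(refs, 5, "LRU"))   # (9, 6)
```

On this particular reference string both policies happen to produce 9 faults and 6 hits, matching the counts above, although the individual pages they evict differ.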
Thrashing
• Thrashing - a process is busy swapping pages in and out (spends more time
paging than executing).
• When our computer spends too much time swapping pages between RAM
and the page file, we call this condition thrashing.
• If a process does not have “enough” pages, the page-fault rate is very high.
This leads to:
• low CPU utilization (ready queue is empty)
• operating system (may) think that it needs to increase the degree of
multiprogramming
• another process added to the system, this process requires pages to be
brought in
Causes of Thrashing :
1. High degree of multiprogramming:
• If the number of processes keeps on increasing in the memory then the number of
frames allocated to each process will be decreased.
• So, fewer frames will be available for each process. Due to this, a page fault will
occur more frequently and more CPU time will be wasted in just swapping in and
out of pages and the utilization will keep on decreasing.
• For example:
Let free frames = 400
• Case 1: Number of process = 100
Then, each process will get 4 frames.
• Case 2: Number of processes = 400
Each process will get 1 frame.
• Case 2 is a condition of thrashing, as the number of processes is increased, frames
per process are decreased. Hence CPU time will be consumed in just swapping
pages.
2. Lack of frames:
• If a process has too few frames, then fewer of its pages can reside in memory, and hence more frequent swapping in and out is required.
• This may lead to thrashing. Hence a sufficient number of frames must be allocated to each process in order to prevent thrashing.
Recovery of Thrashing :
• Do not allow the system to go into thrashing by instructing the long-term scheduler not to bring more processes into memory once the degree of multiprogramming crosses a threshold.
• If the system is already thrashing then instruct the mid-term scheduler to
suspend some of the processes so that we can recover the system from
thrashing.
• Install more RAM