Os Chap2

The document discusses memory management in operating systems. It covers the following key points: 1. Memory management involves tracking used and unused memory, allocating memory to processes, and deallocating memory when processes are done. 2. Memory can be allocated contiguously in one block or non-contiguously across multiple blocks. Paging and segmentation allow for non-contiguous allocation. 3. The memory hierarchy includes CPU registers, cache, main memory, and secondary storage. Virtual memory allows programs to access more memory than physically available.


Chapter Four

Memory management
4.1 Introduction
• The main purpose of a computer system is to execute programs.

• These programs, together with the data they access, must reside in main memory at the time of execution.

• This chapter covers the principles of managing main memory, one of the most precious resources in a multiprogramming operating system.

• The difference between bit, byte and word (reading assignment).

• Memory consists of a large array of words or bytes, each with its own address.

• Modern operating systems provide efficient memory management, and research continues into improving how memory is allocated to applications, since allocation remains the central problem a memory manager must solve.
An Overview of Memory
• Memory hierarchy design is divided into 2 main types:

• External Memory or Secondary Memory:


• Comprises magnetic disk, optical disk and magnetic tape, i.e. peripheral storage
devices accessible by the processor via an I/O module.

• Internal Memory or Primary Memory:


• Comprises main memory, cache memory and CPU registers.
• This is directly accessible by the processor.

Main memory usually has two partitions:


• Low Memory − Operating system resides in this memory.
• High Memory − User processes are held in high memory.
• Memory organized in layers according to access time and capacity is called the
memory hierarchy:

1. Processor Registers

2. Cache Memory

3. Main Memory

4. Hard Disk Drives

5. Tape Drives and Optical Discs


Physical and virtual memory
Physical memory
• Physical memory refers to the RAM, or primary memory, in the computer.
• Physical memory is volatile, so it requires a continuous flow of power to retain
data; power failures and interruptions erase its contents.
• The CPU can directly access the physical memory. It holds the programs queued
for execution.
• Programs are first placed in physical memory so that the CPU can execute them
faster.
• It takes less time to access data in physical memory than to access data on the
hard disk.
• After completing execution, programs go back to the hard disk, and the freed
memory can be allocated to a new program. Programs in execution are called
processes.
Virtual memory
• Virtual memory is a logical memory.
• In other words, it is a memory management technique performed by the
operating system.
• Virtual memory creates the illusion of a memory larger than the physically
available one.
• It allows the programmer to use more memory for programs than the
available physical memory.
• If the physical memory is 4GB and the virtual memory is 16GB, the
programmer can use the 16GB of virtual memory to execute the program.
• Using virtual memory, the system can execute complex programs that require
more memory than the physical memory provides.
4.2 Memory management
• The part of the operating system that manages (part of) the memory hierarchy is
called the memory manager. Its job is to efficiently manage memory: keep track of
which parts of memory are in use, allocate memory to processes when they need it,
and deallocate it when they are done.
• Memory management is the functionality of an operating system which handles or
manages primary memory.
• Memory management keeps track of each memory location, whether it is
allocated to some process or free.
• It decides how much memory is to be allocated to each process and which
process will get memory at what time.
• It tracks whenever memory gets freed or unallocated and updates the status
accordingly.
• Just as processes share the CPU, they also share physical memory. Memory
management provides the mechanisms for that sharing.
• Memory management refers to management of Primary Memory or Main
Memory.
• Main memory provides a fast storage that can be accessed directly by the
CPU.
• For a program to be executed, it must be in the main memory.
• An Operating System does the following activities for memory management:
• Keeps track of primary memory, i.e., which parts are in use and by whom,
and which parts are free.
• In multiprogramming, the OS decides which process will get memory
when and how much.
• Allocates the memory when a process requests it to do so.
• De-allocates the memory when a process no longer needs it or has been
terminated.
Memory management Schemes are broadly divided into following categories:
A. Contiguous Memory Allocation:
• Contiguous allocation means that each logical object is placed in a set of
memory locations with strictly consecutive addresses.
• It is basically a method in which a single contiguous section/part of memory is
allocated to a process or file needing it.
• Free partitions are not scattered randomly across the whole memory space.
• It is simple to keep track of how many memory blocks are left, which
determines how many more processes can be granted memory space.
Operating system uses the following memory allocation mechanism.
Single-partition allocation
• In this type of allocation, relocation-register scheme is used to protect
user processes from each other, and from changing operating-system
code and data.
• Relocation register contains value of smallest physical address whereas
limit register contains range of logical addresses.
• Each logical address must be less than the limit register.
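The relocation-register check described above can be sketched as follows. This is an illustrative model only, with made-up register values, not a description of any specific hardware.

```python
# Sketch of the relocation-register scheme: each logical address is checked
# against the limit register, then relocated by adding the relocation register.
# Both register values below are hypothetical.

RELOCATION_REGISTER = 14000   # smallest physical address of the partition
LIMIT_REGISTER = 3000         # size of the logical address range

def translate(logical_address):
    """Map a logical address to a physical one, trapping on violations."""
    if not (0 <= logical_address < LIMIT_REGISTER):
        raise MemoryError(f"trap: logical address {logical_address} "
                          f"exceeds limit {LIMIT_REGISTER}")
    return RELOCATION_REGISTER + logical_address

print(translate(100))    # -> 14100
print(translate(2999))   # -> 16999
```

Any address at or beyond the limit (e.g. `translate(3000)`) raises a trap instead of touching another process's memory, which is exactly the protection property the scheme provides.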
Cont’d…
Multiple-partition allocation
• In this type of allocation, main memory is divided into a number of fixed-sized
partitions, and each partition contains exactly one process.
• When a partition is free, a process is selected from the input queue and loaded
into the free partition.
• When the process terminates, the partition becomes available for another process.
• Thus, in contiguous memory allocation, each process is contained in a single
contiguous block of memory.
• The free blocks of memory are known as holes.
• The set of holes is searched to determine which hole is best to allocate.
B. Non-contiguous Memory Allocation:
• Non-contiguous allocation implies that a single logical object may be placed in non-
consecutive sets of memory locations.
• Memory present in different locations is allocated to the process as per its
requirements.
• Because available memory is distributed, the free memory space ends up scattered
across the address space.
• Paging and segmentation are the two mechanisms used to manage non-contiguous
memory allocation.
• Making logical address spaces non-contiguous evolved as a better method to
overcome the fragmentation problem.
• It has the advantage of reducing memory waste, but it increases overhead because of
address translation.
• It slows down execution because time is consumed in address translation.
• The Operating Systems is a resource manager, which implies:
• The OS must keep the accounting of the resource memory (assigned and free
memory blocks)
• The OS must have a policy for memory allocation
• The OS must allocate memory to the processes when they need it
• The OS must release memory allocated to processes when they no longer need it
• The OS must keep the accounting of the system memory
• The OS has to know the amount of free memory: otherwise, this memory could
not be assigned to processes
• The OS also has to register the memory allocated to each individual process (via
zones in the process tables)
• Whenever a process is created or whenever a process requests memory, the OS
allocates memory to that process
• When a process terminates the OS releases its assigned memory
• The OS also manages the virtual memory system
• A fundamental task of the memory management component of an operating
system is to ensure safe execution of programs by providing:
• Sharing of memory
• Memory protection
Issues in sharing memory
Safety (or protection)
• Processes must not corrupt each other (nor the OS!)
Efficiency
• CPU utilization must be preserved and memory must be fairly allocated.
We want low overheads for memory management.
Transparency
• Several processes may coexist, unaware of each other, in the main memory and
run regardless of the number and location of processes.
Relocation
• Ability of a program to run in different memory locations.
• Before a program can be executed by the CPU, it must go through several
steps:
• Compiling (translating): generates the object code.
• Linking: combines the object code into a single self-sufficient executable.
• Loading: copies the executable code into memory. May include run-time
linking with libraries.
• Execution: dynamic memory allocation.
Address binding (relocation)
• The process of associating program instructions and data (addresses) with
physical memory addresses is called address binding, or relocation.
• Static: new locations are determined before execution.
• Compile time: the compiler or assembler translates symbolic addresses
(e.g., variables) to absolute addresses.
• Load time: the compiler translates symbolic addresses to relative
(relocatable) addresses; the loader translates these to absolute addresses.
• Dynamic: new locations are determined during execution.
• Run time: the program retains its relative addresses.
The absolute addresses are generated by hardware.
Swapping
• The basic idea of swapping is to treat main memory as a  ‘‘pre-emptible’’ resource. 
• A high-speed swapping device is  used as the backing storage of the preempted processes. 
• Swapping is a mechanism in which a process can be swapped temporarily out of main
memory to a backing store , and then brought back into memory for continued execution.
• Backing store is a usually a hard disk drive or any other secondary storage which fast in
access and large enough to accommodate copies of all memory images for all users.
• It must be capable of providing direct access to these memory images.
• Swapping is a medium-­term scheduling method.
• Major time consuming part of swapping is transfer time.
• Swapping brings flexibility even to systems with fixed  partitions, 
because: “if needed, the operating system can always make room for 
high-­priority jobs, no matter what!’’
The responsibilities of a swapper include:

• Selection of processes to swap out
criteria: suspended/blocked state, low priority, time spent in memory

• Selection of processes to swap in

criteria: time spent on swapping device, priority

• Allocation and management of swap space on a swapping device.

Swap space can be:
• system wide
• dedicated (e.g., swap partition or disk)
Memory protection

• The second fundamental task of a memory management system is to protect
programs sharing the memory from each other.

• This protection also covers the operating system itself.

• Memory protection is the mechanism by which memory access rights on a computer
are controlled.

• Memory protection can be provided at either of two levels:
• Hardware: address translation (most common!)
• Software:
• language dependent: strong typing
• language independent: software fault isolation
Memory partitioning

A. Static memory partitioning


• Divide memory at boot time into a selection of different sized partitions
• Any unused space in the partition is wasted – Called internal
fragmentation
• Processes smaller than main memory, but larger than a partition cannot
run
• It is simple
• It is easy to implement
• Can result in poor memory utilization – Due to internal fragmentation
B. Dynamic memory partitioning
• Partitions are of variable length – Allocated on-demand from ranges of
free memory
• Process is allocated exactly what it needs– Assumes a process knows
what it needs
• We end up with unusable holes, which is called external fragmentation
Fragmentation
• Refers to unused memory that the memory management system cannot allocate.
• As processes are loaded and removed from memory, the free memory space is broken into
little pieces.
• It often happens that processes cannot be allocated to the remaining memory blocks
because the blocks are too small, so those blocks stay unused. This problem is known as
fragmentation.
• Fragmentation is of two types: internal and external.
Internal Fragmentation
• Allocated memory may be slightly larger than requested memory; this size difference is
memory internal to a partition that is not being used.
• It is waste of memory within a partition, caused by the difference between the size of
the partition and the process loaded into it.
• Severe in static partitioning schemes.
• Internal fragmentation can be reduced by assigning the smallest partition that is still
large enough for the process.
External Fragmentation
• Total memory space exists to satisfy a request, but it is not contiguous.
• It is waste of memory between partitions, caused by scattered non-contiguous free space.
• Severe in dynamic partitioning schemes.
• External fragmentation can be reduced by compaction: shuffling memory contents to
place all free memory together in one large block.
Paging and segmentation
Paging
• Paging is a memory management scheme that removes the requirement of
contiguous allocation of physical memory.
• External fragmentation is avoided by using the paging technique.
• Paging is a technique in which logical memory is broken into blocks of the same
size called pages (the size is a power of 2, typically between 512 bytes and 8192 bytes).
• When a process is to be executed, its corresponding pages are loaded into any
available memory frames.
• The logical address space of a process can therefore be non-contiguous, and a
process is allocated physical memory wherever free frames are available.
• The operating system keeps track of all free frames; it needs n free frames to run
a program of size n pages.
• This scheme permits the physical address space of a process to be non-contiguous.
• The size of the process is measured in the number of pages.
• Similarly, main memory is divided into small fixed-sized blocks of (physical)
memory called frames. The size of a frame is kept the same as that of a page
to obtain optimum utilization of the main memory and to avoid external
fragmentation.
• Suppose the size of main memory is 16 KB and the size of a frame is 1 KB.
• The main memory is then split into 16 frames of 1 KB each.
• The system has four distinct processes, A1, A2, A3 and A4, each of size 4 KB.
• Each process is divided into pages of size 1 KB so that the OS can store one
page in one frame.
• Suppose that after some time processes A2 and A4 move into the waiting state.

• Their eight frames become vacant, and other pages can be loaded into those
vacant frames.

• Process A5, with a size of eight pages (8 KB), is waiting in the ready queue.

• Although the eight free frames are non-contiguous, with the help of paging we
can store the pages of a process at different places.

• Thus the pages of process A5 can be loaded in place of those of processes A2
and A4.
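The 16-frame scenario above can be sketched as a small simulation. The process names and the frame-table representation are illustrative assumptions, not OS data structures.

```python
# Minimal sketch of the scenario above: 16 frames of 1 KB, four 4-page
# processes loaded, two of them removed, then an 8-page process (A5)
# placed into the scattered free frames.

NUM_FRAMES = 16
frames = [None] * NUM_FRAMES          # frame table: frame -> (process, page)

def load(process, num_pages):
    """Place a process's pages into any free frames, contiguous or not."""
    free = [f for f, owner in enumerate(frames) if owner is None]
    if len(free) < num_pages:
        raise MemoryError(f"not enough free frames for {process}")
    page_table = {}
    for page, frame in zip(range(num_pages), free):
        frames[frame] = (process, page)
        page_table[page] = frame      # page-table entry: page -> frame
    return page_table

def unload(process):
    """Free every frame owned by the given process."""
    for f, owner in enumerate(frames):
        if owner and owner[0] == process:
            frames[f] = None

for p in ("A1", "A2", "A3", "A4"):    # four 4 KB processes fill all 16 frames
    load(p, 4)
unload("A2")                          # frames 4-7 become free
unload("A4")                          # frames 12-15 become free
page_table_a5 = load("A5", 8)         # A5 fits in the 8 scattered frames
print(sorted(page_table_a5.values())) # -> [4, 5, 6, 7, 12, 13, 14, 15]
```

The point of the sketch is that A5's pages land in two separate runs of frames, yet its logical address space remains contiguous through the page table.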
Demand Paging
• A technique in which a page is brought into main memory only when it is
needed, i.e., demanded by the CPU.
• A page fault occurs when the CPU requests a page that is not marked present
in the page table (in other words, not currently in RAM).
• Whenever a page fault occurs, the operating system performs the following
steps to bring the required page into memory:
1. If the CPU tries to refer to a page that is currently not available in main
memory, it generates an interrupt indicating a memory-access fault.
2. The OS puts the interrupted process in a blocked state. For execution to
proceed, the OS must bring the required page into memory.
3. The OS locates the required page on the backing store (disk).
4. The page is brought from the backing store into a free frame in physical
memory. If no frame is free, a page replacement algorithm decides which
page in physical memory to replace.
5. The page table is updated accordingly.
6. A signal is sent to the CPU to continue program execution, and the process
is placed back into the ready state.
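The fault-handling steps above can be sketched in miniature. The backing store, the two-frame memory, and the FIFO victim choice are all simplifying assumptions for illustration.

```python
# Illustrative sketch of demand paging: pages are loaded only on access,
# and a fault either takes a free frame or evicts a page (FIFO here).
from collections import deque

backing_store = {0: "page0", 1: "page1", 2: "page2", 3: "page3"}
page_table = {}                 # page -> frame, for resident pages only
free_frames = deque([0, 1])     # only two physical frames in this sketch
loaded_order = deque()          # arrival order, used for FIFO replacement
page_faults = 0

def access(page):
    global page_faults
    if page in page_table:              # page resident: no fault
        return page_table[page]
    page_faults += 1                    # memory-access fault raised
    if free_frames:
        frame = free_frames.popleft()   # a free frame is available
    else:
        victim = loaded_order.popleft() # otherwise evict the oldest page
        frame = page_table.pop(victim)
    _ = backing_store[page]             # fetch the page from backing store
    page_table[page] = frame            # update the page table
    loaded_order.append(page)
    return frame                        # execution can now resume

for p in (0, 1, 2, 0):
    access(p)
print(page_faults)   # -> 4 (0 and 1 fill the frames; 2 evicts 0; 0 faults again)
```

Notice that re-accessing page 0 faults a second time because it was the FIFO victim, which is exactly the cost the replacement-policy sections later try to minimize.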
Advantages of demand paging:  
• More processes may be maintained in the main memory:
• Because we are going to load only some of the pages of any particular
process, there is room for more processes.
• This leads to more efficient utilization of the processor because it is more
likely that at least one of the more numerous processes will be in the ready
state at any particular time.
• A process may be larger than all of the main memory:
• One of the most fundamental restrictions in programming is lifted.
• A process larger than the main memory can be executed because of demand
paging. The OS itself loads pages of a process into main memory as
required.
• It allows greater multiprogramming levels by using less of the available
(primary) memory for each process.
Address Translation
Address generated by CPU is divided into
• Page number (p) -- page number is used as an index into a page table which
contains base address of each page in physical memory.
• Page offset (d) -- page offset is combined with base address to define the
physical memory address.
• A page address is called a logical address and is represented by the page number
and the offset; the two fields are concatenated, not arithmetically added.
• Logical Address = (Page number, page offset)
• A frame address is called a physical address and is represented by a frame
number and the offset.
• Physical Address = Frame number × frame size + page offset
• A data structure called page map table is used to keep track of the relation
between a page of a process to a frame in physical memory.
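The translation just described can be sketched as follows. The page size and the page-table contents are made-up values, chosen only to show the page-number/offset split.

```python
# Sketch of paged address translation: the logical address is split into a
# page number (high-order bits) and an offset (low-order bits), the page
# table maps page -> frame, and the physical address is rebuilt from them.

PAGE_SIZE = 1024                       # 1 KB pages (a power of 2)
page_table = {0: 5, 1: 2, 2: 7}        # hypothetical page -> frame mapping

def translate(logical_address):
    page = logical_address // PAGE_SIZE     # high-order bits: page number
    offset = logical_address % PAGE_SIZE    # low-order bits: page offset
    frame = page_table[page]                # page-table lookup
    return frame * PAGE_SIZE + offset       # frame base + offset

# Logical address 1030 = page 1, offset 6 -> frame 2, so 2*1024 + 6:
print(translate(1030))   # -> 2054
```

Because the page size is a power of two, the division and modulo above correspond to simple bit shifting and masking in hardware.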
Advantages of paging
• In Paging, there is no external fragmentation.
• In Paging, the swapping between the equal-size pages and page frames is
easy.
• Paging is a simple technique that we use for memory management.
Drawbacks of Paging
• In Paging, there may be a chance of Internal Fragmentation.
• In Paging, the Page table consumes extra memory.
• Due to Multi-level Paging, there may be a chance of memory reference
overhead.
Segmentation
• Segmentation is a technique to break memory into logical pieces where each piece
represents a group of related information.
• Segmentation is a memory management technique in which the memory is divided into
the variable size parts.
• Each part is known as a segment which can be allocated to a process.
• For example, a code segment and a data segment for each process, a data segment
for the operating system, and so on.
• Segmentation is another technique for non-contiguous storage allocation.
• It is different from paging as pages are physical in nature and hence are of fixed size,
whereas segments are logical divisions of a program and hence are of variable size.
• In segmentation, we divide the logical address space into different segments.
• The size of a segment varies according to the data stored in it or the nature of operation
performed on that segment.
• Segmentation can be implemented using or without using paging.
• Unlike pages, segments have varying sizes, which eliminates internal
fragmentation. External fragmentation still exists, but to a lesser extent.
• In segmentation, the memory is split into variable-length parts. Each part is known
as segments.
• The information which is related to the segment is stored in a table which is called
a segment table.
• There are two types of information stored in the segment table:
• Limit: - The limit is the length or size of the segment
• Base: - The base is the base address of the segment.
• The operating system maintains a segment map table for this mapping.
• For each segment number, the table records the segment's size and its location
in virtual memory or main memory, along with a list of free memory blocks and
their sizes.
Translation of Logical Address into Physical Address by Segment Table
• The CPU generates a logical address that comprises two parts:
• Segment Number: the bits that identify the segment.
• Segment Offset: the bits that give the position of the desired word within
the segment.
• The segment number is used as an index into the segment table, and after the
lookup the offset is compared against the limit of that segment.
• If the offset is less than the limit, the address is valid; otherwise, the address
is invalid.
• If the address is valid, the base address of the segment is added to the offset
to obtain the physical address of the actual word in main memory.
Example
Calculate the physical address if no trap is produced.
• In a segmentation scheme, the generated logical address consists of two parts-
• Segment Number
• Segment Offset
• We know-
• Segment Offset must always lie in the range [0, limit-1].
• If segment offset becomes greater than or equal to the limit of segment, then trap
addressing error is produced.
Q1:0,430
• Segment Number = 0
• Segment Offset = 430
• We have,
• In the segment table, limit of segment-0 is 700.
• Thus, segment offset must always lie in the range = [0, 700-1] = [0, 699]
• Now,
• Since generated segment offset lies in the above range, so request generated is valid.
• Therefore, no trap will be produced.
• Physical Address= base + segment offset
• Physical Address = 1219 + 430 = 1649
Q2:1,11
• Segment Number = 1
• Segment Offset = 11
• We have,
• In the segment table, limit of segment-1 is 14.
• Thus, segment offset must always lie in the range = [0, 14-1] = [0, 13]
• Now,
• Since generated segment offset lies in the above range, so request generated is valid.
• Therefore, no trap will be produced.
• Physical Address = 2300 + 11 = 2311
Q3:2,100
• Segment Number = 2
• Segment Offset = 100
• We have,
• In the segment table, limit of segment-2 is 100.
• Thus, segment offset must always lie in the range = [0, 100-1] = [0, 99]
• Now,
• Since generated segment offset does not lie in the above range, so the request
generated is invalid.
• Therefore, trap will be produced.
• Do question number 4 and 5 as an exercise
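Q1–Q3 above can be re-checked with a short function. The bases and limits for segments 0 and 1 come from the worked answers; segment 2's base is an assumed placeholder, since only its limit (100) is needed to demonstrate the trap.

```python
# Re-checking Q1-Q3 above with the base/limit values given in the text.
# Segment 2's base (90) is an assumed placeholder: the trap fires before
# the base is ever used.

segment_table = {
    0: {"base": 1219, "limit": 700},
    1: {"base": 2300, "limit": 14},
    2: {"base": 90,   "limit": 100},   # base is hypothetical
}

def translate(segment, offset):
    """Return base + offset, or trap if the offset is outside [0, limit-1]."""
    entry = segment_table[segment]
    if not (0 <= offset < entry["limit"]):
        raise MemoryError(f"trap: offset {offset} not in "
                          f"[0, {entry['limit'] - 1}]")
    return entry["base"] + offset

print(translate(0, 430))   # Q1 -> 1219 + 430 = 1649
print(translate(1, 11))    # Q2 -> 2300 + 11 = 2311
try:
    translate(2, 100)      # Q3: offset equals the limit, so a trap occurs
except MemoryError as e:
    print(e)
```

Note that in Q3 the offset of 100 fails because valid offsets run from 0 through limit − 1, i.e. 99.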
Benefits of Segmentation
• Internal fragmentation is not present in the segmentation.
• Less overhead.
• In segmentation, the segment table size is less than the page table size.
• In segmentation, the relocation of the segment is easier than the whole
address space.
Drawbacks of Segmentation
• In segmentation, there may be a chance of external fragmentation.
• The segmentation technique is expensive.
• In Segmentation, it is tough to allocate memory in a contiguous manner to a
variable-sized partition.
Difference between Paging and Segmentation

• In paging, memory is partitioned into fixed-size pages; in segmentation,
memory is partitioned into variable-size segments.
• In paging, the page size is determined by the hardware; in segmentation,
the user gives the size of the segments.
• In paging, there may be a chance of internal fragmentation; in segmentation,
there may be a chance of external fragmentation.
• Paging is faster than segmentation; segmentation is slower than paging.
• In paging, the logical address is partitioned into a page number and a page
offset; in segmentation, into a segment number and a segment offset.
Dynamic Memory Allocation algorithms
• Memory allocation is a process by which computer programs are assigned
memory or space.
• There are three main strategies that the memory manager can use when
allocating free memory to processes:
Best Fit:
• Chooses the free block that is closest in size to the request.
• Finds the smallest free memory block that will fit the process's needs.
• The idea is to minimize the wastage of free memory space.
• The process is allocated to the smallest free partition that is still sufficient.
• The smallest hole that is big enough is allocated to the program.
Worst Fit:
• Finds the largest free memory block that will fit the process's needs.
• Chooses the block that is largest in size.
• The idea is to leave a usable fragment left over, increasing the possibility that
another process can use the leftover space.
• The largest hole that is big enough is allocated to the program.
First Fit:
• Finds the first free block that fits the memory needs.
• The first partition that is sufficient, searching from the top of main memory,
is allocated.
• It minimizes the time spent searching the available memory space.
• The first hole that is big enough is allocated to the program.
Example
Although best fit minimizes wasted space, it consumes a lot of processor
time searching for the block closest to the required size. Best fit may also
perform worse than the other algorithms in some cases, as the following
example shows.
Consider requests from processes in the given order 300K, 25K, 125K and
50K. Let there be two blocks of memory available, of size 150K followed by
a block of size 350K. Which of the following partition allocation schemes can
satisfy the above requests:
First fit, Best fit or Worst fit?
Solution

Let us try all options.


First Fit:
300K is allocated from the block of size 350K; 50K is left in the block.
25K is allocated from the 150K block; 125K is left in the block.
125K is allocated from the 125K remainder of the 150K block; it is now fully allocated.
50K is allocated from the 50K remainder of the 350K block.

Every process is allocated and no memory block is wasted.

Cont.…
Best fit:

300K is allocated from the block of size 350K; 50K is left in the block.

25K is allocated from the remaining 50K block; 25K is left in the block.
125K is allocated from the 150K block; 25K is left in this block as well.
50K cannot be allocated even though 25K + 25K of space is available,
because the space is not contiguous.

One process remains unallocated, and 50K of memory is wasted.
Worst fit:
300K is allocated from the 350K block; 50K of free space is left.
25K is allocated from the 150K block; 125K of free space is left.
125K is allocated from the 125K remainder; no free space is left.
50K is allocated from the 50K remainder; no free space is left.
With this algorithm all the processes are allocated and no memory
block is wasted.
Which algorithm allocates the processes most efficiently?
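The three traces above can be reproduced with a short simulation. The function below is a sketch of the three placement strategies; holes are modelled simply as shrinking block sizes.

```python
# Sketch of first/best/worst fit on the worked example above:
# free blocks of 150K and 350K, requests of 300K, 25K, 125K and 50K.

def allocate(blocks, requests, strategy):
    """Return the block index chosen for each request (None if unmet).

    A copy of blocks is shrunk as allocations are made, so each leftover
    hole stays inside the block it came from.
    """
    blocks = list(blocks)
    placements = []
    for req in requests:
        candidates = [i for i, b in enumerate(blocks) if b >= req]
        if not candidates:
            placements.append(None)          # request cannot be satisfied
            continue
        if strategy == "first":
            chosen = candidates[0]           # first sufficient hole
        elif strategy == "best":
            chosen = min(candidates, key=lambda i: blocks[i])  # smallest
        else:                                # "worst"
            chosen = max(candidates, key=lambda i: blocks[i])  # largest
        blocks[chosen] -= req
        placements.append(chosen)
    return placements

reqs = [300, 25, 125, 50]
print(allocate([150, 350], reqs, "first"))   # -> [1, 0, 0, 1]  all satisfied
print(allocate([150, 350], reqs, "best"))    # -> [1, 1, 0, None]  50K fails
print(allocate([150, 350], reqs, "worst"))   # -> [1, 0, 0, 1]  all satisfied
```

The output matches the hand traces: first fit and worst fit satisfy every request, while best fit strands 50K across two 25K holes.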
Exercise
Given free memory partitions of 100K, 500K, 200K, 300K, and 600K
(in order), how would each of the First-fit, Best-fit, and Worst-fit
algorithms place processes of 212K, 417K, 112K, and 426K (in order)?
Which algorithm makes the most efficient use of memory?
Page Replacement Algorithm
• Page replacement algorithms are the techniques by which an operating
system decides which memory pages to swap out (write to disk) when a page of
memory needs to be allocated.
• Whenever a page fault occurs, the OS has to choose a page to remove from
memory to make room for the page that has to be brought in.
• When the page that was selected for replacement and paged out is referenced
again, it has to be read back in from disk, which requires waiting for I/O
completion.
• This determines the quality of a page replacement algorithm: the less time
spent waiting for page-ins, the better the algorithm.
• There are many different page replacement algorithms.
Cont.…
• Whenever main memory (RAM) is full, or when a page is requested that is
located in virtual memory, the memory manager needs to swap old or
unused pages from RAM into virtual memory in order to make space for the
new pages being loaded.
• There are a number of strategies used by the memory manager to handle this
task.
• These strategies are often called replacement policies.
• There are 2 main replacement policies that are used by operating systems:
• First In First Out (FIFO)
• Least Recently used
• Hit Ratios: Determining Which Replacement Policy to Use
• A hit ratio is the number of times a page is actually found in main
memory, as opposed to the number of page faults, which require the memory
manager to retrieve the page from virtual memory.
• To calculate the hit ratio, we divide the number of hits (non-faulting
references) by the total number of page requests (the number of times data
has been requested from main memory).
• Hit ratio = hits / total references
Cont….
• Since there are extra steps involved in order to execute the instruction, and
since retrieving a page from a hard disk is slower than RAM, page faults
result in longer processing time.
• We use hit ratios to determine which replacement policy will result in the
fewest number of page faults, and the fastest overall processing time.
• The policy that produces the lowest number of page faults is usually the
policy we want to use.
• A HIT: the page is found in RAM.
• A PAGE FAULT: the page must be retrieved from virtual memory.
• A FRAME is a segment of RAM that can be allocated to hold a page, and
is typically the same size as a page.
First In First Out (FIFO) policy
• Oldest page in main memory is the one which will be selected for
replacement.
• Easy to implement, keep a list, replace pages from the tail and add new
pages at the head.
• Uses a queue data structure to keep track of the pages in main memory.
• Oldest page at the front (head) and newest page at the back (tail).
• Always replace (get rid of) the oldest page.
• Does not always work well, because the oldest page may still be in active
use by the process.
Least Recently Used (LRU) policy

• The page that has not been used for the longest time in main memory is the
one selected for replacement.
• Relatively easy to implement: keep a list and replace pages by looking back in time.
• The assumption is that pages used recently will be used again soon, so the
page that has gone unused longest is the safest to evict.
• Consider a main memory with five frames and the following sequence of
page references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Calculate the hits and
page faults using the First-In-First-Out (FIFO) and Least Recently Used (LRU)
replacement policies.

Using FIFO
Number of Page Faults = 9
Number of Hits = 6
Page fault ratio = 9/15, i.e. page faults (total misses) / total references

Using LRU
Number of Page Faults = 9
Number of Hits = 6
Page fault ratio = 9/15

The trace below is the LRU trace (under FIFO the oldest page, 3, would be the
first evicted at the seventh reference; here the hit on 3 at the fourth reference
refreshes its recency, so 8 is evicted instead):

Request  3  8  2  3  9  1  6  3  8  9  3  6  2  1  3
Frame1   3  3  3  3  3  3  3  3  3  3  3  3  3  3  3
Frame2      8  8  8  8  8  6  6  6  6  6  6  6  6  6
Frame3         2  2  2  2  2  2  8  8  8  8  8  1  1
Frame4               9  9  9  9  9  9  9  9  9  9  9
Frame5                  1  1  1  1  1  1  1  2  2  2
Hit/miss m  m  m  h  m  m  m  h  m  h  h  h  m  m  h
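Both counts can be confirmed with a short simulation of the two policies on the reference string from the exercise. This is a sketch, not an OS implementation: sets and queues stand in for the frame hardware.

```python
# Simulating FIFO and LRU on the reference string above with five frames.
from collections import OrderedDict, deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                          # hit: FIFO order is NOT updated
        faults += 1
        if len(frames) == n_frames:
            frames.discard(queue.popleft())   # evict the oldest-loaded page
        frames.add(page)
        queue.append(page)
    return faults

def lru_faults(refs, n_frames):
    recency, faults = OrderedDict(), 0        # oldest entry = least recent
    for page in refs:
        if page in recency:
            recency.move_to_end(page)         # a hit refreshes recency
            continue
        faults += 1
        if len(recency) == n_frames:
            recency.popitem(last=False)       # evict the LRU page
        recency[page] = True
    return faults

refs = [3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3]
print(fifo_faults(refs, 5), lru_faults(refs, 5))   # -> 9 9
```

On this particular string the two policies happen to fault equally often (9 faults, 6 hits each), though they evict different pages along the way, as the trace above shows.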
Thrashing
• Thrashing - a process is busy swapping pages in and out (spends more time
paging than executing).
• When our computer spends too much time swapping pages between RAM
and the page file, we call this condition thrashing.
• If a process does not have “enough” pages, the page-fault rate is very high.
This leads to:
• low CPU utilization (ready queue is empty)
• operating system (may) think that it needs to increase the degree of
multiprogramming
• another process added to the system, this process requires pages to be
brought in
Causes of Thrashing :  
1. High degree of multiprogramming:
• If the number of processes keeps on increasing in the memory then the number of
frames allocated to each process will be decreased.
• So, fewer frames will be available for each process. Due to this, a page fault will
occur more frequently and more CPU time will be wasted in just swapping in and
out of pages and the utilization will keep on decreasing. 
• For example: 
Let free frames = 400 
• Case 1: Number of process = 100 
Then, each process will get 4 frames. 
• Case 2: Number of processes = 400 
Each process will get 1 frame. 
• Case 2 is a condition of thrashing, as the number of processes is increased, frames
per process are decreased. Hence CPU time will be consumed in just swapping
pages. 
2. Lack of Frames:
• If a process has fewer frames then fewer pages of that process will be able
to reside in memory and hence more frequent swapping in and out will be
required.
• This may lead to thrashing. Hence sufficient amount of frames must be
allocated to each process in order to prevent thrashing.
Recovery of Thrashing :  
• Do not allow the system to go into thrashing by instructing the long-term
scheduler not to bring more processes into memory once the multiprogramming
level reaches a threshold.
• If the system is already thrashing then instruct the mid-term scheduler to
suspend some of the processes so that we can recover the system from
thrashing.
• Install more RAM
