
Chapter 3
Memory Management
3.1. Introduction
- Memory is one of the most important resources of the computer system that is used to store
data and programs temporarily.
- Part of the operating system that manages memory is called the Memory Manager (MM).
- The main functions of the Memory Manager (MM) are the following:
Keeping track of which parts of memory are in use and which parts are free
Allocating and deallocating memory to processes
 Managing swapping between memory and disk when memory is not big enough to
hold all the processes
- A good memory manager has the following attributes:
It allows all processes to run
Its memory overhead (space used for the management activity) is reasonable
Its time overhead (time required for the management activity) is reasonable

Memory Management Schemes


- There are a number of memory management schemes, ranging from very simple to highly
sophisticated ones. They can be broadly classified into two:
Mono-programming – Only one process is in memory at a time, in addition to the
operating system
Multiprogramming – It allows multiple programs to run in parallel; the memory is divided
among the concurrently running processes and the operating system.
3.2 Memory Partitioning
- In order to run multiple programs in parallel, the memory must be partitioned.
- There are two partitioning options – fixed partitioning and dynamic partitioning
Fixed partitioning – partitioning is done before processes come to memory
 Dynamic partitioning – partitioning is done when processes request memory space
3.2.1. Fixed partitioning
- It is the simplest and oldest partitioning technique
- It divides the memory into fixed number of equal or unequal sized partitions
- The partitioning can be done by the OS or the operator when the system is started
- Variations of fixed partitioning are:-
 Equal sized fixed partitions
 Unequal sized fixed partitions
A. Equal sized fixed partitions
- The memory is partitioned into equal-sized partitions
- Any process whose size is less than or equal to the partition size can be loaded into any
available partition
- If all partitions are occupied with processes that are not ready to run, then one of these
processes is swapped out and a new process is loaded, or a suspended process is swapped in
and loaded
- Problems:
A program may be too big to fit into a partition
Inefficient use of memory due to internal fragmentation. Internal fragmentation is
the phenomenon in which there is wasted space internal to a partition because the
process loaded is smaller than the partition.


B. Unequal sized fixed partitioning


- Memory is partitioned into unequal-sized fixed partitions
- In this case, there are two memory allocation schemes:
 Using multiple queues
 Using a single queue
i. Using multiple queues
Each partition has its own queue
A process is assigned to the smallest partition within which it will fit, and it can
only be loaded into the partition to which it belongs

Fig: Using multiple queues


Advantage: Minimize internal fragmentation
Disadvantage: Some partitions may remain unused even though processes are
waiting in some other queues
ii. Using single queue
The smallest available partition that can hold the process is selected
Whenever a partition becomes free, the process closest to the front of the queue
that fits in it could be loaded into the empty partition.
Since this can cause internal fragmentation, another technique is to search the queue
for the job that best fits the free partition.


Fig: Using a single queue


Advantage: Partitions are used all the time
Disadvantage: When best fit is searched, small jobs get poor service, even though they
are typically interactive. A possible remedy for this problem is not to allow a small
process to be skipped more than k times.
Advantages of fixed partitioning
- It is simple and easy to implement
- Operating system overhead (processing overhead) is low
Disadvantages of fixed partitioning
- Internal fragmentation (unused space within partitions)
- Limits the number of active processes and hence the degree of multiprogramming

3.2.2. Dynamic partitioning


- Partitions are created dynamically during the allocation of memory
- The size and number of partitions vary throughout the operation of the system
- Processes will be allocated exactly as much memory as they require
- Dynamic partitioning leads to a situation in which there are a lot of small useless holes in
memory. This phenomenon is called external fragmentation.
Solution to external fragmentation:
Compaction – combining the multiple useless holes into one big hole
The problem with memory compaction is that it requires a lot of CPU time. For
instance, on a 32MB machine that can copy 16 bytes per microsecond, it takes about 2 sec to compact all of the memory.
- A process has three major sections - stack, data and code
- With fixed partitions, the space given to a process is usually larger than it asked for, and hence
the process can expand if it needs to.
- With dynamic partitions, since a process is allocated exactly as much memory as it requires, it
creates an overhead of moving and swapping processes whenever the process grows dynamically.
- It is expected that most processes will grow as they run, so it is a good idea to allocate a little extra
memory whenever a process is loaded into the memory.
- As shown in the following diagram the extra space is located between the stack and data
segments because the code segment doesn’t grow.


If a process runs out of the extra space allocated to it, one of the following happens:


o It is moved to a hole with enough space, or
o It is swapped out of memory until a large enough hole can be created, or
o The process is killed
Memory usage management
- There are two ways to keep track of memory usage – bitmap and linked list
i. Bitmaps
- The memory is divided into fixed size allocation units
- Each allocation unit is represented with a bit in the bit map. If the bit is 0, it means the
allocation unit is free and if it is 1, it means it is occupied
Example: Look at the following portion of memory. Let the size of the allocation units be 5K as
shown in the diagram.

The corresponding bitmap for the above memory can be as follows:

The size of the allocation unit is an important design issue


o The smaller the allocation unit, the larger the bitmap, which results in memory and
CPU overhead; searching the bitmap will take time.
Example: Assume an allocation unit of 4 bytes and assume the computer has 800KB of memory
Number of bits required = 800KB/4B = 200Kbits
Size of bitmap = 200Kbits/8 = 25KB
Percentage of the memory used for the bitmap = (25/800)*100% ≈ 3%
- The larger the allocation unit, the larger the internal fragmentation. This is because a
partition must be a multiple of allocation units where the minimum partition size is 1 allocation unit.
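For illustration, a bitmap tracker along these lines can be sketched in C. This is a minimal sketch, not an OS implementation; the 5K unit size, the region size, and the helper names (mark_units, find_free_run) are assumptions made for this example.

#include <stdint.h>
#include <stddef.h>

#define UNIT_SIZE (5 * 1024)              /* one allocation unit = 5K, as in the example above */
#define NUM_UNITS 64                      /* size of the managed region, chosen arbitrarily    */

static uint8_t bitmap[(NUM_UNITS + 7) / 8];   /* one bit per allocation unit, 0 = free, 1 = used */

/* Mark 'count' units starting at 'first' as used (1) or free (0). */
static void mark_units(size_t first, size_t count, int used)
{
    for (size_t i = first; i < first + count; i++) {
        if (used)
            bitmap[i / 8] |= (uint8_t)(1u << (i % 8));
        else
            bitmap[i / 8] &= (uint8_t)~(1u << (i % 8));
    }
}

/* Linear scan for 'count' consecutive free units; returns the index of the
 * first unit of the run, or -1 if no run is long enough.  This scan is the
 * CPU overhead the text refers to: the smaller the allocation unit, the
 * longer the bitmap and the longer the search. */
static long find_free_run(size_t count)
{
    size_t run = 0;
    for (size_t i = 0; i < NUM_UNITS; i++) {
        int used = bitmap[i / 8] & (1u << (i % 8));
        run = used ? 0 : run + 1;
        if (run == count)
            return (long)(i - count + 1);
    }
    return -1;
}

Allocating space for a process of size s then amounts to computing the number of units needed (rounding up to a whole number of units, which is where internal fragmentation comes from), calling find_free_run, and marking the run as used; freeing a process marks its run back to 0.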
ii. Linked Lists
- A linked list of allocated and free memory segments is used
- Each segment is either a process or a hole between two processes and contains a number of
allocation units


- Each entry in the list consists of


o Segment type: P/H (1 bit)
o The address at which it starts
o The length of the segment
o A pointer to the next entry
Example: The memory in the bitmap example can be represented using linked list as follows:
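One possible C representation of such a list entry is sketched below; the field names are illustrative, not taken from a particular system.

/* One entry of the memory list: either a process segment (P) or a hole (H). */
struct mem_segment {
    char                type;    /* 'P' = process, 'H' = hole (1 bit in principle)  */
    unsigned long       start;   /* address at which the segment starts             */
    unsigned long       length;  /* length of the segment, in allocation units      */
    struct mem_segment *next;    /* pointer to the next entry in the list           */
};

When a process terminates, its entry simply becomes a hole; if a neighbouring entry is also a hole, the two can be merged into one larger hole by adding their lengths and unlinking one node.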

Memory Allocation algorithms (Placement algorithms)


- When a new process is created, or swapped in, the OS must decide which free block to allocate.
- The following are some of the placement algorithms
i. First-Fit
It scans the memory from the beginning and chooses the first available block that is large enough
It is the simplest and fastest algorithm
Disadvantage: Always searching from the beginning may result in slow performance
ii. Next Fit
It starts scanning the memory from the location of the last placement and chooses the next available
block that is large enough
Disadvantage: It may cause the last section of memory to be quickly fragmented
iii. Best Fit
It searches the entire list of available holes and chooses the block that is closest in size to the request
Disadvantage: External fragmentation and slower because it scans the whole list
iv. Worst Fit
It searches for the largest available hole, so that the hole broken off will be big enough to be useful
Disadvantage: It is as slow as best fit.
Example: allocation of a 16 MB process
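The four algorithms can be compared in a rough C sketch. The array-of-hole-sizes representation and the function names are simplifications made for brevity; a real allocator would walk the segment list described earlier. Each function returns the index of the chosen hole, or -1 if no hole is large enough.

#include <stddef.h>

static int first_fit(const size_t holes[], int n, size_t request)
{
    for (int i = 0; i < n; i++)                /* scan from the beginning            */
        if (holes[i] >= request)
            return i;                          /* first hole that is large enough    */
    return -1;
}

static int next_fit(const size_t holes[], int n, size_t request, int last)
{
    for (int k = 1; k <= n; k++) {             /* resume after the last placement    */
        int i = (last + k) % n;
        if (holes[i] >= request)
            return i;
    }
    return -1;
}

static int best_fit(const size_t holes[], int n, size_t request)
{
    int best = -1;
    for (int i = 0; i < n; i++)                /* scan the entire list               */
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;                          /* smallest hole that still fits      */
    return best;
}

static int worst_fit(const size_t holes[], int n, size_t request)
{
    int worst = -1;
    for (int i = 0; i < n; i++)                /* scan the entire list               */
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = i;                         /* largest hole, leaves a usable leftover */
    return worst;
}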


Summary
Advantage
- Dynamic memory allocation provides better memory utilization
Disadvantage
- Management is more complex
- Includes memory compaction overhead
3.2.3. Relocation and Protection
- Multiprogramming introduces two essential problems that must be solved – relocation and
protection
A. Protection
- Programs in other processes should not be able to reference memory locations in a process
for reading or writing purposes without permission.
- A user program shouldn’t be able to address a memory area which is not in its partition
o A user process must not access any portion of the OS
o Usually a program in one process cannot branch to an instruction in another process
o Without permission, a program in one process cannot access the data area of another process.
B. Relocation
- Location of a program in main memory is unpredictable due to loading, swapping and compaction
- There are three types of addresses:
o Logical address
It is a reference to a memory location independent of the current assignment of data to memory
Logical addresses are generated during compilation
o Relative address
It is a particular example of a logical address in which the address is expressed as a location relative
to some known point, usually the beginning of the program
A translation to a physical address must be made before the address can be used
o Physical/absolute address
An actual location in main memory
Relocation techniques
- There are two possible solutions for relocation problems – static relocation and dynamic relocation
i. Static Relocation
- When a process is first loaded, all relative memory references are replaced by absolute addresses
based on the base address of the loaded process
- It can be applied to processes which are always assigned to the same partition, for instance in the
case of the unequal-sized fixed partition scheme with multiple process queues.
- It doesn’t solve the protection problem. A malicious program can always construct a new
instruction and jump to it.
ii. Dynamic Relocation
- When a process is assigned for running, a special processor register (base register) is loaded
with the starting address of the program and a limit register is loaded with the ending address of
the partition

Limit and base registers are protected by the hardware so that they cannot be modified by user programs
- Hardware mechanisms are used for translating relative addresses to physical addresses at the
time of execution of the instructions that contain the reference.


The value in the base register is added to the relative address to produce an absolute address
- The absolute address is compared to the value in the limit/bound register
- If the address is within bounds, the instruction execution can proceed, otherwise an interrupt is
generated to the OS
- Dynamic relocation provides a solution to both the relocation and protection problems.
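A minimal C sketch of this translation and check is shown below. In a real system the addition and comparison are performed by the MMU hardware on every memory reference; the register values used here are made-up examples.

#include <stdio.h>
#include <stdlib.h>

static unsigned long base_reg;   /* starting physical address of the partition */
static unsigned long limit_reg;  /* ending physical address of the partition   */

static unsigned long translate(unsigned long relative_addr)
{
    unsigned long absolute = base_reg + relative_addr;  /* relocation  */
    if (absolute > limit_reg) {                         /* protection  */
        fprintf(stderr, "addressing error: interrupt to the OS\n");
        exit(EXIT_FAILURE);     /* stands in for the interrupt generated to the OS */
    }
    return absolute;
}

int main(void)
{
    base_reg  = 0x40000;        /* assumed partition boundaries */
    limit_reg = 0x4FFFF;
    printf("relative 0x100 -> physical 0x%lx\n", translate(0x100));
    return 0;
}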

3.3 Virtual Memory


- A process may be larger than the available main memory
- In the early day’s overlays were used to solve this problem
o The programmer will divide the program into modules and the main program is
responsible for switching the modules in and out of memory as needed.
o Drawback: The programmer must know how much memory is available and it wastes the
programmer time.
- In virtual memory, the OS keeps those parts of the program currently in use in main memory
and the rest on the disk. Piece of programs are swapped between disk and memory as needed.
-It allows a larger logical address space than physical memory.
- If a process encounters a logical address that is not in main memory, it generates an interrupt
indicating a memory access fault and the OS puts the process in blocking state until it brings
the piece that contains the logical address that caused the access fault.
- Program generated addresses are called virtual addresses and the set of such addresses form the
virtual address space.
- The virtual address will go to the Memory Management Unit (MMU) that maps the virtual
addresses into physical addresses.
- There are two virtual memory implementation techniques
Paging
Segmentation
3.4. Paging and Segmentation
3.4.1. Paging
- The virtual address space is divided into units called pages and the memory is divided into
units called page frames.
Page address is called logical address and is represented by a page number and an offset.
Logical Address = (Page Number, Page Offset)
Frame address is called physical address and is represented by a frame number and an offset.
Physical Address = (Frame Number, Page Offset)
If a page table entry is 4 bytes (= 32 bits), it can refer to one of 2^32 page frames.
Pages and page frames should have the same size
Transfers between memory and disk are always in units of pages
The operation of dividing a process into pages is done by the compiler or MM
Example: Assume a 16-bit virtual address and each process is divided into 4K pages and the
physical memory size is 32K


Virtual Address = (4-bit page number, 12-bit offset)


Virtual Address Space = 2^16 = 64K
Maximum Number of pages = 2^4 = 16 pages
Example:
1. When the program tries to access address 0, for example, using the instruction - MOVE REG, 0
The virtual address 0 is sent to MMU. The MMU sees that this virtual address falls in page 0 (0-
4095), which according to its mapping is page frame 2 (8192 to 12287).
It thus transforms the address to 8192 and outputs address 8192 onto the bus.
2. In the same way, an instruction MOVE REG, 8196 is effectively transformed into MOVE REG,
24580
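These two translations can be reproduced with the short C sketch below. Only the mappings page 0 -> frame 2 and page 2 -> frame 6 follow from the example; the remaining page table entries are filled in arbitrarily.

#include <stdio.h>

#define PAGE_SIZE 4096u                 /* 4K pages and page frames          */
#define NUM_PAGES 16u                   /* 16-bit virtual address, 4K pages  */

/* Hypothetical page table: entry i holds the frame number of virtual page i. */
static unsigned page_table[NUM_PAGES] = { 2, 1, 6, 0, 4, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

static unsigned translate(unsigned virtual_addr)
{
    unsigned page   = virtual_addr / PAGE_SIZE;    /* upper 4 bits of the address  */
    unsigned offset = virtual_addr % PAGE_SIZE;    /* lower 12 bits of the address */
    return page_table[page] * PAGE_SIZE + offset;  /* frame number combined with offset */
}

int main(void)
{
    printf("%u -> %u\n", 0u,    translate(0));     /* 0    -> 8192  (page 0, frame 2) */
    printf("%u -> %u\n", 8196u, translate(8196));  /* 8196 -> 24580 (page 2, frame 6) */
    return 0;
}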
Page Tables
- The OS maintains page table for mapping virtual addresses to physical addresses
- Page tables are stored in memory
- The OS also maintains a free-frame list of all frames in main memory that are not currently occupied
- Each page table entry contains: present-bit, modified-bit, and frame number
Present-bit: Whether the page is present in main memory or not
Modified-bit: whether the contents of the corresponding page have been altered since the page
was last loaded into main memory. If it has not been altered, it is not necessary to write the page
out to disk when the page in the frame it occupies is replaced
Frame number: Indicates the corresponding page frame number in memory if its present-bit is 1,
otherwise it is null.
Other Bits: Other control bits like protection and sharing bits
- The MMU uses page tables to translate logical/virtual addresses into physical addresses, consisting
of frame number and offset
- When a particular process is running, a register holds the starting address of the page table
for that process. The page number of a virtual address is used to index that table and look up
the corresponding frame number. This is combined with the offset portion of the virtual
address to produce the desired physical address.
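A page table entry of the kind described above could be laid out as in the following C sketch. The exact field widths are assumptions made for illustration; real MMUs define their own entry formats.

#include <stdint.h>

/* Illustrative 32-bit page table entry with the fields named in the text. */
struct pte {
    uint32_t present  : 1;   /* 1 = page is currently in main memory                 */
    uint32_t modified : 1;   /* 1 = page has been written since it was last loaded   */
    uint32_t protect  : 2;   /* protection / sharing control bits                    */
    uint32_t frame    : 20;  /* page frame number, meaningful only if present == 1   */
    uint32_t unused   : 8;
};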
Example: Address bits = 16, page size = 4K, and virtual address 8196. The following figure shows
the internal operation of the MMU with 16 4K pages.


If an instruction in a process tries to access an address in an unmapped page


The MMU causes the CPU to generate an interrupt, called a page fault
The OS puts the interrupted process in a blocked state and takes control
The OS chooses a page (page frame) to remove from memory and writes its contents back
to the disk if it has been modified since it was last loaded
The OS issues a disk I/O read request to fetch the page just referenced into the page
frame just freed, and the OS can dispatch another process to run while the disk I/O is performed.
Once the desired page has been loaded into main memory, an I/O interrupt is issued, giving
control back to the OS, which places the affected process back into a ready state.
 When the process is dispatched, the trapped instruction is restarted
Page Size and mapping Speed
- There are two issues
The smaller the page size, the less internal fragmentation, which optimizes the use
of main memory
The smaller the page size, the greater the number of pages and thus the larger the page table
Every virtual memory reference can cause two physical memory accesses: one to fetch the
appropriate page table entry and one to fetch the desired data.

Calculating internal fragmentation


– Page size = 2,048 bytes
– Process size = 72,766 bytes
– 35 pages + 1,086 bytes
– Internal fragmentation of 2,048 - 1,086 = 962 bytes
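The same calculation, generalized to any process size and page size, can be written as a small C function; the function name is illustrative.

#include <stdio.h>

/* Internal fragmentation = unused part of the last page occupied by the process. */
static unsigned long internal_fragmentation(unsigned long process_size, unsigned long page_size)
{
    unsigned long remainder = process_size % page_size;
    return remainder == 0 ? 0 : page_size - remainder;
}

int main(void)
{
    /* 72,766-byte process with 2,048-byte pages: 35 full pages + 1,086 bytes,
     * so the last page wastes 2,048 - 1,086 = 962 bytes. */
    printf("%lu bytes wasted\n", internal_fragmentation(72766, 2048));
    return 0;
}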


Page Replacement Algorithms (PRA)


Optimal PRA
- It says replace the page that will not be referenced for the longest time in the future
- Drawback – it is impossible to know in advance when each page will be referenced in the future
Not Recently Used PRA
- It says replace a page that has not been referenced in the near past
First In First Out PRA
- Remove the oldest page
Least Recently Used PRA
- Remove the page that has been used least in the recent past
- It assumes that pages that have been heavily used in the last few instructions will probably be
heavily used again in the future
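A minimal C sketch of FIFO and LRU victim selection is given below; it assumes the load time and the last-use time of each frame are recorded elsewhere, and it only picks the victim frame.

/* Pick the frame whose page was loaded first (FIFO). */
static int fifo_victim(const unsigned long load_time[], int nframes)
{
    int victim = 0;
    for (int i = 1; i < nframes; i++)
        if (load_time[i] < load_time[victim])
            victim = i;                  /* oldest page in memory */
    return victim;
}

/* Pick the frame whose page was referenced least recently (LRU). */
static int lru_victim(const unsigned long last_use[], int nframes)
{
    int victim = 0;
    for (int i = 1; i < nframes; i++)
        if (last_use[i] < last_use[victim])
            victim = i;                  /* least recently used page */
    return victim;
}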
3.4.2. Segmentation
- The virtual space is divided into units called segments by the programmer
- Segments can have unequal and dynamic size
- Each segment has its own virtual address space
- Each segment consists of a linear sequence of addresses, from 0 to some maximum
- A process is loaded in partially (Only some of its segments)
- The OS maintains a segment table for each process and also a free-segment list
- It simplifies the handling of growing data structures
- When segmentation is used, there is no internal fragmentation

Paging vs segmentation
Segments are variable size while pages are fixed size
Segmentation requires more complicated hardware for address translation than paging
Segmentation suffers from external fragmentation whereas paging only yields a small internal
fragmentation.
3.5 Working Set and Thrashing
The set of pages that a process is currently using is called its working set. If the entire working
set is in memory, the process will run without causing many faults until it moves into another
execution phase (e.g., the next pass of the compiler).
– WS(t,w) = {pages P such that P was referenced in the time interval (t-w, t]}
Example: Working set

If the available memory is too small to hold the entire working set, the process will cause many
page faults and run slowly. A program causing page faults every few instructions is said to be
thrashing.
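The definition of WS(t, w) can be made concrete with a short C sketch that scans the last w references of a page reference string; the reference string used here is made up.

#include <stdio.h>
#include <string.h>

#define MAX_PAGES 64

/* Collect the distinct pages referenced in the window (t-w, t] and return their count. */
static int working_set_size(const int refs[], int t, int w, int present[MAX_PAGES])
{
    int size = 0;
    memset(present, 0, MAX_PAGES * sizeof(int));
    for (int i = (t - w + 1 > 0 ? t - w + 1 : 0); i <= t; i++) {
        if (!present[refs[i]]) {
            present[refs[i]] = 1;       /* page refs[i] belongs to WS(t, w) */
            size++;
        }
    }
    return size;
}

int main(void)
{
    int refs[] = { 1, 2, 1, 3, 2, 4, 4, 2, 5, 1 };   /* made-up reference string */
    int present[MAX_PAGES];
    printf("|WS(9,4)| = %d\n", working_set_size(refs, 9, 4, present));  /* pages 4, 2, 5, 1 -> 4 */
    return 0;
}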


3.5.1 Thrashing
 If a process cannot maintain its minimum required number of frames, then it must be
swapped out, freeing up frames for other processes. This is an intermediate level of
CPU scheduling.
 But what about a process that can keep its minimum, but cannot keep all of the frames
that it is currently using on a regular basis? In this case it is forced to page out pages
that it will need again in the very near future, leading to large numbers of page faults.
 A process that is spending more time paging than executing is said to be thrashing.
When the system spends most of its time servicing page faults, little time is spent doing useful
work

Fig: Thrashing
