Unit 4 - Memory Management
4. MEMORY MANAGEMENT
INTRODUCTION
➢ Memory is one of the most important resources of the computer system that is used to
store data and programs temporarily.
➢ Part of the operating system that manages memory is called the Memory Manager (MM).
The main functions of the Memory Manager (MM) are the following:
• Keeping track of which part of memory is in use and which parts are free.
• Allocating and de-allocating memory to processes.
• Managing swapping between memory and disk when memory is not big enough to
hold all the processes.
There are a number of memory management schemes, ranging from very simple to highly
sophisticated. They can be broadly classified into two:
Mono-programming: Only one process is in memory at a time, in addition to the operating
system.
Multiprogramming: Allows multiple programs to run concurrently. Divides the memory
among the concurrently running processes and the operating system.
Address binding of instructions and data to memory addresses can happen at three different
stages.
• Compile time: If the memory location is known a priori, absolute code can be generated;
the code must be recompiled if the starting location changes.
• Load time: Must generate relocatable code if memory location is not known at
compile time.
• Execution time: Binding delayed until run time if the process can be moved during
its execution from one memory segment to another. Need hardware support for
address maps (e.g., base and limit registers).
• The concept of a logical address space that is bound to a separate physical address
space is central to proper memory management.
• Logical address – generated by the CPU; also referred to as virtual address.
• Physical address – address seen by the memory unit.
• Logical and physical addresses are the same in compile-time and load-time address-
binding schemes; logical (virtual) and physical addresses differ in execution-time
address-binding scheme.
SWAPPING:
• A process can be swapped temporarily out of memory to a backing store, and then brought
back into memory for continued execution.
• Backing store – fast disk large enough to accommodate copies of all memory images for all
users; must provide direct access to these memory images.
• Roll out, roll in – swapping variant used for priority-based scheduling algorithms; lower-
priority process is swapped out so higher-priority process can be loaded and executed.
• Major part of swap time is transfer time; total transfer time is directly proportional to the
amount of memory swapped.
• Modified versions of swapping are found on many systems, e.g., UNIX and Microsoft
Windows.
Two problems have to be solved to allow multiple applications to be in memory at the same
time without interfering with each other:
• Protection
• Relocation
A better solution to allow multiple programs to be in memory at the same time without
interfering with each other is to use address spaces.
An address space is the set of addresses that a process can use to address memory. Each
process has its own address space.
When loading multiple applications into memory at the same time, we must ensure correct
operation to protect the operating system from access by user processes and, in addition, to
protect user processes from one another.
The operating system maps each process’s address space onto a different part of physical
memory in a simple way. We first need to make sure that each process has a separate memory
space. To do this, we need the ability to determine the range of legal addresses that the
process may access and to ensure that the process can access only these legal addresses.
For example, if the base register holds 300040 and the limit register is 120900, then the
program can legally access all addresses from 300040 through 420939 (inclusive).
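The base/limit check above can be sketched as follows, using the numbers from the text's example (the function name is ours, chosen for illustration):

```python
# Sketch of the base/limit protection check performed by hardware on every
# CPU-generated address (illustrative function name, not from the text).

def is_legal(address, base=300040, limit=120900):
    """An address is legal if base <= address < base + limit."""
    return base <= address < base + limit

assert is_legal(300040)        # first legal address
assert is_legal(420939)        # last legal address (300040 + 120900 - 1)
assert not is_legal(420940)    # one past the end: the hardware traps to the OS
```

In real hardware this comparison happens on every memory reference, and a failed check raises a trap rather than returning False.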
MEMORY PARTITIONING
➢ There are two partitioning options – Fixed partitioning and Dynamic partitioning
Fixed partitioning – Partitioning is done before the process comes to memory.
Dynamic partitioning – Partitioning is done when a process requests memory space.
Initially, all memory is available for user processes and is considered one large block of
available memory, a hole.
To keep track of free and allocated memory areas, a suitable data structure is needed.
This procedure is a particular instance of the general dynamic storage allocation problem,
which concerns how to satisfy a request of size n from a list of free holes.
When memory is assigned dynamically, the operating system must manage it. In general
terms, there are two ways to keep track of memory usage:
• Bitmaps
• Linked lists
With bitmaps, the memory is divided into fixed-size allocation units. Each allocation unit is
represented by a bit in the bitmap: if the bit is 0, the allocation unit is free; if it is 1, the
unit is occupied.
The following figure shows part of memory and the corresponding bitmap.
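A minimal sketch of bitmap-based tracking is shown below; to allocate a process of n units, the memory manager scans the bitmap for a run of n consecutive 0 bits (the function and variable names are ours):

```python
# Sketch of bitmap memory tracking: one bit per allocation unit,
# 0 = free, 1 = occupied (illustrative names, not from the text).

def find_free_run(bitmap, n):
    """Return the start index of the first run of n consecutive free units, or -1."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n:
                return run_start
        else:
            run_len = 0
    return -1

bitmap = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]
start = find_free_run(bitmap, 3)     # first hole of >= 3 units starts at index 2
for i in range(start, start + 3):    # mark those units as occupied
    bitmap[i] = 1
```

Searching for a run of k consecutive 0 bits is slow for large bitmaps, which is the main argument for the linked-list alternative.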
There are many solutions to this problem. The first-fit, best-fit, and worst-fit strategies are
the ones most commonly used to select a free hole from the set of available holes.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
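The three strategies can be sketched over a free-hole list of (start, size) pairs (the helper names and sample holes are ours, for illustration):

```python
# Sketch of the three hole-selection strategies over a list of free holes,
# each represented as a (start, size) pair (illustrative names).

def first_fit(holes, n):
    """Pick the first hole that is big enough."""
    return next((h for h in holes if h[1] >= n), None)

def best_fit(holes, n):
    """Pick the smallest hole that is big enough (tightest fit)."""
    fits = [h for h in holes if h[1] >= n]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, n):
    """Pick the largest hole, leaving the biggest leftover."""
    fits = [h for h in holes if h[1] >= n]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (800, 300)]
print(first_fit(holes, 250))   # (200, 500): first hole big enough
print(best_fit(holes, 250))    # (800, 300): tightest fit
print(worst_fit(holes, 250))   # (200, 500): largest hole
```

First-fit wins on speed because it stops scanning as soon as any adequate hole is found; best-fit must examine the entire list.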
VIRTUAL MEMORY
Virtual memory hides the real memory from the user. A process may be larger than the
available main memory. Swapping is not an attractive option, because it takes seconds to
swap out a 1-GB program and the same again to swap one in.
With virtual memory, the OS keeps the parts of the program currently in use in main memory
and the rest on disk. If a process references a logical address that is not in main memory,
an interrupt is generated indicating a memory-access fault, and the OS puts the process in
the blocked state until it brings in the piece that contains the faulting logical address.
Program generated addresses are called virtual addresses and the set of such addresses form
the virtual address space. The virtual address will go to the Memory Management Unit
(MMU) that maps the virtual addresses into physical addresses.
There are two virtual memory implementation techniques:
• Paging
• Segmentation
PAGING
The basic method for implementing paging involves breaking physical memory into fixed-
sized blocks called frames and breaking logical memory into blocks of the same size called
pages.
Every address generated by the CPU is divided into two parts: a page number (p) and a
page offset (d).
The page number is used as an index into a page table. The page table contains the base
address of each page in physical memory. This base address is combined with the page
offset to define the physical memory address that is sent to the memory unit.
PAGING HARDWARE
Using a page size of 4 bytes and a physical memory of 32 bytes (8 pages), we show how the
programmer’s view of memory can be mapped into physical memory. Logical address 0 is
page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical
address 0 maps to physical address 20 [= (5 × 4) + 0]. Logical address 3 (page 0, offset 3)
maps to physical address 23 [= (5 × 4) + 3]. Logical address 4 is page 1, offset 0; according
to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical
address 24 [= (6 × 4) + 0]. Logical address 13 (page 3, offset 1) maps to physical address
9 [= (2 × 4) + 1], since the page table maps page 3 to frame 2.
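The example above can be reproduced with a small sketch; the frame for page 3 follows from the stated result (13 → 9), while the entry for page 2 is an assumption taken from the standard textbook figure:

```python
# Sketch of page-table translation for the text's 4-byte-page example:
# physical address = frame * page_size + offset.

PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # page 2 -> frame 1 is assumed, not in the text

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

assert translate(0) == 20    # page 0, offset 0 -> frame 5
assert translate(3) == 23    # page 0, offset 3
assert translate(4) == 24    # page 1, offset 0 -> frame 6
assert translate(13) == 9    # page 3, offset 1 -> frame 2
```

The `divmod` split mirrors what the MMU does in hardware: the high-order bits of the address select the page, the low-order bits are the offset.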
DEMAND PAGING
Demand paging can significantly affect the performance of a computer system. To see why,
let’s compute the effective access time for a demand-paged memory.
To compute the effective access time, we must know how much time is needed to service a
page fault (an access to a page marked invalid causes a page fault). Letting p be the
probability of a page fault, effective access time = (1 − p) × memory access time +
p × page-fault service time. Servicing a page fault involves the sequence of steps listed
in the next section.
PAGE REPLACEMENT
Page replacement takes the following approach. If no frame is free, we find one that is not
currently being used and free it. We can free a frame by writing its contents to swap space
and changing the page table (and all other tables) to indicate that the page is no longer in
memory.
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Continue the user process from where the page fault occurred.
Consider the following reference string:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
For our example reference string, our three frames are initially empty. The first three
references (7, 0, 1) cause page faults and are brought into these empty frames. The next
reference (2) replaces page 7, because page 7 was brought in first. Since 0 is the next
reference and 0 is already in memory, we have no fault for this reference. The first reference
to 3 results in replacement of page 0, since it is now first in line. Because of this
replacement, the next reference, to 0, will fault. Page 1 is then replaced by page 0. This
process continues, producing fifteen faults in all for FIFO replacement.
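The FIFO walkthrough above can be sketched as a short simulation (the function name is ours):

```python
# Sketch of FIFO page replacement on the text's reference string with
# three frames: the page brought in first is always the victim.

from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()   # evict the oldest resident page
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15 faults, matching the text's FIFO result
```

The deque makes the "oldest first" eviction explicit: pages enter at one end and leave at the other in arrival order.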
Notice that for the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, the number of faults
with four frames (ten) is greater than the number of faults with three frames (nine)! This
most unexpected result is known as Belady’s anomaly: for some page-replacement algorithms,
the page-fault rate may increase as the number of allocated frames increases.
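Belady's anomaly can be checked directly with a small FIFO simulation over the classic anomaly reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (our choice of string; the function name is ours):

```python
# Sketch demonstrating Belady's anomaly: under FIFO, adding a frame
# can INCREASE the fault count for some reference strings.

from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()   # evict the oldest resident page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more frames, yet more faults
```

LRU and OPT never exhibit this anomaly; they belong to the class of stack algorithms, for which the pages resident with n frames are always a subset of those resident with n + 1 frames.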
One result of the discovery of Belady’s anomaly was the search for an optimal page-
replacement algorithm—the algorithm that has the lowest page-fault rate of all algorithms
and will never suffer from Belady’s anomaly. Such an algorithm does exist and has been
called OPT or MIN. It is simply this:
Replace the page that will not be used for the longest period of time.
If the optimal algorithm is not feasible, perhaps an approximation of the optimal algorithm is
possible. The key distinction between the FIFO and OPT algorithms (other than looking
backward versus forward in time) is that the FIFO algorithm uses the time when a page was
brought into memory, whereas the OPT algorithm uses the time when a page is to be used. If
we use the recent past as an approximation of the near future, then we can replace the page
that has not been used for the longest period of time. This approach is the least recently
used (LRU) algorithm.
The result of applying LRU replacement to our example reference string is shown in the figure.
The LRU algorithm produces twelve faults. Notice that the first five faults are the same as
those for optimal replacement. When the reference to page 4 occurs, however, LRU
replacement sees that, of the three frames in memory, page 2 was used least recently. Thus,
the LRU algorithm replaces page 2, not knowing that page 2 is about to be used. When it
then faults for page 2, the LRU algorithm replaces page 3, since it is now the least recently
used of the three pages in memory. Despite these problems, LRU replacement with twelve
faults is much better than FIFO replacement with fifteen.
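The LRU behavior described above can be sketched with a list kept in recency order (the function name is ours):

```python
# Sketch of LRU page replacement on the text's reference string with
# three frames: the least recently referenced page is the victim.

def lru_faults(refs, n_frames):
    """Count page faults under LRU replacement."""
    frames, faults = [], 0          # ordered from least to most recently used
    for page in refs:
        if page in frames:
            frames.remove(page)     # hit: refresh to most-recent position
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)       # evict the least recently used page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 faults, as stated in the text
```

Real implementations avoid this O(n) list maintenance by approximating recency with hardware reference bits or counters.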
The LRU policy is often used as a page-replacement algorithm and is considered to be good.
SEGMENTATION
Segmentation is a memory-management scheme that supports the user’s view of memory. The
virtual space is divided into units called segments by the programmer.
When segmentation is used, there is no internal fragmentation. Users prefer to view memory
as a collection of variable-sized segments, such as arrays, functions, procedures, and the
main program. The logical address space is considered to be a collection of segments.
In the segmentation architecture, a logical address consists of a two-tuple:
⟨segment-number, offset⟩.
Segment table – maps two-dimensional logical addresses to one-dimensional physical
addresses; each table entry has:
• Base – contains the starting physical address where the segment resides in memory.
• Limit – specifies the length of the segment.
Segment-table base register (STBR) points to the segment table’s location in memory.
Segment-table length register (STLR) indicates the number of segments used by a program.
The segmentation architecture must also address relocation, sharing, and allocation of
segments.
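Segment-table translation can be sketched as follows; the (base, limit) pairs are illustrative values we chose, not from the text:

```python
# Sketch of segment-table address translation: a logical address is a
# (segment, offset) pair; the offset is checked against the segment limit.

segment_table = [(1400, 1000), (6300, 400)]   # illustrative (base, limit) pairs

def translate(segment, offset):
    """Return the physical address, or trap if the offset exceeds the limit."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit: trap to the OS")
    return base + offset

print(translate(0, 53))    # 1453: base 1400 + offset 53
print(translate(1, 399))   # 6699: last legal byte of segment 1
```

The limit check is what gives segmentation per-segment protection: a reference past the end of a segment traps instead of touching a neighbor's memory.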
SEGMENTATION VS. PAGING
| Segmentation | Paging |
| Program is divided into variable-size segments. | Program is divided into fixed-size pages. |
| User or compiler is responsible for dividing the program into segments. | Division into pages is performed by the operating system. |
| Segmentation is slower than paging. | Paging is faster than segmentation. |
| Segmentation is visible to the user. | Paging is invisible to the user. |
| Segmentation eliminates internal fragmentation. | Paging suffers from internal fragmentation. |
| Segmentation suffers from external fragmentation. | There is no external fragmentation. |
| Processor uses segment number and offset to calculate the absolute address. | Processor uses page number and offset to calculate the absolute address. |
| Operating system maintains a list of free holes in main memory. | Operating system maintains a free frame list. |