Unit 4 - Memory Management

The document discusses memory management techniques used by operating systems. It describes the main functions of a memory manager as keeping track of used and free memory, allocating and deallocating memory to processes, and managing swapping between memory and disk. It then covers different memory management schemes like mono-programming, multiprogramming, address binding, logical vs physical addresses, swapping, address spaces, base and limit registers, fixed and dynamic memory partitioning, and techniques for managing free memory using bitmaps and linked lists.


MTU

4. MEMORY MANAGEMENT

INTRODUCTION

➢ Memory is one of the most important resources of the computer system that is used to
store data and programs temporarily.
➢ Part of the operating system that manages memory is called the Memory Manager (MM).
The main functions of the Memory Manager (MM) are the following:
• Keeping track of which part of memory is in use and which parts are free.
• Allocating and de-allocating memory to processes.
• Managing swapping between memory and disk when memory is not big enough to
hold all the processes.

A good memory manager has the following attributes:

• It allows all processes to run.
• Its space overhead (memory used for the management activity) is reasonable.
• Its time overhead (time required for the management activity) is reasonable.

MEMORY MANAGEMENT SCHEMES

There are a number of memory management schemes, ranging from very simple to highly
sophisticated ones. They can be broadly classified into two:
Mono-programming: Only one process is in memory, in addition to the operating system.
Multiprogramming: Multiple programs run in parallel; the memory is divided among the
concurrently running processes and the operating system.

Binding of Instructions and Data to memory:

Address binding of instructions and data to memory addresses can happen at three different
stages.

• Compile time: If memory location known a priori, absolute code can be generated;
must recompile code if starting location changes.
• Load time: Must generate relocatable code if memory location is not known at
compile time.
• Execution time: Binding delayed until run time if the process can be moved during
its execution from one memory segment to another. Need hardware support for
address maps (e.g., base and limit registers).

Memory Management Page: 1


Logical address vs Physical address:

• The concept of a logical address space that is bound to a separate physical address
space is central to proper memory management.
• Logical address – generated by the CPU; also referred to as virtual address.
• Physical address – address seen by the memory unit.
• Logical and physical addresses are the same in compile-time and load-time address-
binding schemes; logical (virtual) and physical addresses differ in execution-time
address-binding scheme.

SWAPPING:

• A process can be swapped temporarily out of memory to a backing store, and then brought
back into memory for continued execution.
• Backing store – fast disk large enough to accommodate copies of all memory images for all
users; must provide direct access to these memory images.
• Roll out, roll in – swapping variant used for priority-based scheduling algorithms; lower-
priority process is swapped out so higher-priority process can be loaded and executed.
• Major part of swap time is transfer time; total transfer time is directly proportional to the
amount of memory swapped.
• Modified versions of swapping are found on many systems, e.g., UNIX and Microsoft
Windows.



Schematic View of Swapping
ADDRESS SPACES

Two problems have to be solved to allow multiple applications to be in memory at the same
time without interfering with each other:

• Protection

• Relocation

A better solution that allows multiple programs to be in memory at the same time without
interfering with each other is to use address spaces.

An address space is the set of addresses that a process can use to address memory. Each
process has its own address space.

Difficulty: How to give each program its own address space?

BASE AND LIMIT REGISTERS

When loading multiple applications into memory at the same time, we must ensure correct
operation to protect the operating system from access by user processes and, in addition, to
protect user processes from one another.

The base-and-limit approach maps each process's address space onto a different part of
physical memory in a simple way. We first need to make sure that each process has a separate
memory space. To do this, we need the ability to determine the range of legal addresses that
the process may access and to ensure that the process can access only these legal addresses.



We can provide this protection by using two registers:

• Base register: holds the smallest legal physical memory address;

• Limit register: specifies the size of the range.

For example, if the base register holds 300040 and the limit register is 120900, then the
program can legally access all addresses from 300040 through 420939 (inclusive).
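As a minimal sketch (the function name is hypothetical), the protection check performed by the hardware on every access can be modeled as:

```python
def is_legal(address, base=300040, limit=120900):
    """Base/limit protection check: an address is legal only if
    base <= address < base + limit."""
    return base <= address < base + limit

# The range from the example in the text: 300040 through 420939 inclusive.
assert is_legal(300040) and is_legal(420939)
assert not is_legal(300039) and not is_legal(420940)
```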

MEMORY PARTITIONING

In order to run multiple programs in parallel, memory should be partitioned.

➢ There are two partitioning options – fixed partitioning and dynamic partitioning.
Fixed partitioning – Partitioning is done before the process comes to memory.
Dynamic partitioning – Partitioning is done when a process requests memory space.

Initially, all memory is available for user processes and is considered one large block of
available memory, a hole.



Fixed partitioning

➢ It is the simplest and oldest partitioning technique.


➢ It divides the memory into a fixed number of equal- or unequal-sized partitions.
➢ The partitioning can be done by the OS or the operator when the system is started.
➢ Variations of fixed partitioning
• Equal sized fixed partitions.
• Unequal sized fixed partitions.

Equal sized fixed partitions

➢ The memory is partitioned into equal-sized partitions.


➢ Any process whose size is less than or equal to the partition size can be loaded into any
available partition.
➢ If all partitions are occupied with processes that are not ready to run, then one of these
processes is swapped-out and a new process is loaded or a suspended process is swapped
in and loaded.

• A program may be too big to fit into a partition.


• Inefficient use of memory due to internal fragmentation. Internal fragmentation is the
phenomenon in which there is wasted space internal to a partition due to the fact that
the process loaded is smaller than the partition.

Unequal sized fixed partitioning

➢ Memory is partitioned into unequal-sized fixed partitions.


➢ In this case there are two memory allocation schemes
• Using multiple queues.
• Using a single queue.

Using multiple queues


• Each partition has its own queue.
• A process is assigned to the smallest partition within which it will fit, and it can only be
loaded into the partition to which it belongs.
• It minimizes internal fragmentation.

Using single queue


• The smallest available partition that can hold the process is selected.
• Whenever a partition becomes free, the process closest to the front of the queue that fits
in it is loaded into the empty partition.
• Since this can cause internal fragmentation, another technique searches the queue for
the process that best fits the new empty space.
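The best-fit-from-queue variant can be sketched as follows (the function name and process sizes are hypothetical):

```python
def pick_from_queue(queue, partition):
    """Single-queue scheme: among waiting processes that fit the newly
    freed partition, pick the one that wastes the least space."""
    fits = [(partition - size, i) for i, size in enumerate(queue)
            if size <= partition]
    return min(fits)[1] if fits else None

queue = [120, 60, 90, 200]                # process sizes in KB (hypothetical)
assert pick_from_queue(queue, 100) == 2   # 90 KB wastes the least space
assert pick_from_queue(queue, 50) is None # nothing fits
```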

To keep track of free and allocated memory areas, it is possible to use an array data structure.



Dynamic partitioning

➢ Partitions are created dynamically during the allocation of memory.


➢ The size and number of partitions vary throughout the operation of the system.
➢ Processes are allocated exactly as much memory as they require.
➢ Dynamic partitioning leads to a situation in which there are a lot of small useless holes
in memory, this phenomenon is called external fragmentation.
➢ Solution to external fragmentation:
Compaction – Combining the multiple useless holes into one big hole.
• The problem of memory compaction is that it requires a lot of CPU time. For
instance, on a 32-MB machine that can copy 16 bytes per microsecond, it takes about
2 seconds to compact all of the memory.
➢ A process has three major sections - stack, data and code.
➢ With fixed partitions, the space given to a process is usually larger than it asked for, and
hence the process can expand if it needs to.
➢ With dynamic partitions, since a process is allocated exactly as much memory as it
requires, a process that grows dynamically creates an overhead of moving and swapping
processes.
➢ It is expected that most processes will grow as they run, so it is a good idea to allocate a
little extra memory whenever a process is loaded into the memory.
➢ As shown in the following diagram the extra space is located between the stack and data
segments because the code segment doesn’t grow.
Stack

Data

Code

This procedure is a particular instance of the general dynamic storage allocation problem,
which concerns how to satisfy a request of size n from a list of free holes.

MANAGING FREE MEMORY

When memory is assigned dynamically, the operating system must manage it. In general
terms, there are two ways to keep track of memory usage:
• Bitmaps
• Linked lists



MEMORY MANAGEMENT WITH BITMAPS

The memory is divided into fixed size allocation units. Each allocation unit is represented
with a bit in the bit map. If the bit is 0, it means the allocation unit is free and if it is 1, it
means it is occupied.

The following figure shows part of memory and the corresponding bitmap.
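A sketch of scanning the bitmap for a run of consecutive free allocation units, as the OS must do when a process needs k units (names are hypothetical):

```python
def find_free_run(bitmap, k):
    """Return the index of the first run of k consecutive free (0)
    allocation units in the bitmap, or -1 if none exists."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == k:
            return i - k + 1
    return -1

# 1 = occupied, 0 = free; units 2-4 form the first run of three free units.
bitmap = [1, 1, 0, 0, 0, 1, 0, 0, 1]
assert find_free_run(bitmap, 3) == 2
assert find_free_run(bitmap, 4) == -1
```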

HOW DO YOU ALLOCATE MEMORY TO NEW PROCESSES

There are many solutions to this problem. The first-fit, best-fit, and worst-fit strategies are
the ones most commonly used to select a free hole from the set of available holes.

➢ First-fit: Allocate the first hole that is big enough.


➢ Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless
ordered by size. Produces the smallest leftover hole.
➢ Worst-fit: Allocate the largest hole; must also search entire list. Produces the
largest leftover hole.

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
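The three strategies can be sketched as follows (the hole sizes are hypothetical; each function returns the index of the chosen hole):

```python
def first_fit(holes, n):
    """Allocate the first hole that is big enough."""
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    """Allocate the smallest hole that is big enough."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Allocate the largest hole."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
assert first_fit(holes, 212) == 1   # 500 is the first hole big enough
assert best_fit(holes, 212) == 3    # 300 leaves the smallest leftover hole
assert worst_fit(holes, 212) == 4   # 600 leaves the largest leftover hole
```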

VIRTUAL MEMORY
Virtual memory hides the real memory from the user. A process may be larger than the
available main memory. Swapping is not an attractive option, because it takes seconds to
swap out a 1-GB program and the same to swap in a 1-GB program.



A solution adopted in the 1960s was to split programs into little pieces, called overlays.
• The programmer will divide the program into modules and the main program is
responsible for switching the modules in and out of memory as needed.
• The programmer must know how much memory is available, and dividing the program
wastes the programmer's time.

In virtual memory, the OS keeps the parts of the program currently in use in main memory
and the rest on disk. If a process references a logical address that is not in main memory,
an interrupt is generated indicating a memory-access fault, and the OS puts the process in
the blocked state until it brings in the piece that contains the logical address that caused the
access fault.
Program generated addresses are called virtual addresses and the set of such addresses form
the virtual address space. The virtual address will go to the Memory Management Unit
(MMU) that maps the virtual addresses into physical addresses.
There are two virtual memory implementation techniques
• Paging
• Segmentation

PAGING

The basic method for implementing paging involves breaking physical memory into fixed-
sized blocks called frames and breaking logical memory into blocks of the same size called
pages.
Every address generated by the CPU is divided into two parts: a page number (p) and a
page offset (d).

The page number is used as an index into a page table. The page table contains the base
address of each page in physical memory. This base address is combined with the page
offset to define the physical memory address that is sent to the memory unit.
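The translation described above can be sketched as follows, assuming a hypothetical 4096-byte page size and a small page table:

```python
def translate(logical, page_table, page_size=4096):
    """Split a logical address into (page number, offset), then combine
    the frame base address from the page table with the offset."""
    p, d = divmod(logical, page_size)
    return page_table[p] * page_size + d

# Hypothetical table: page 0 -> frame 3, page 1 -> frame 7.
# Logical 5000 is page 1, offset 904, so it maps to 7*4096 + 904.
assert translate(5000, {0: 3, 1: 7}) == 29576
```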

PAGING HARDWARE



PAGING MODEL OF PHYSICAL AND LOGICAL MEMORY



PAGING EXAMPLE FOR 32-BYTE MEMORY WITH 4-BYTE PAGES

Using a page size of 4 bytes and a physical memory of 32 bytes (8 pages), we show how the
programmer’s view of memory can be mapped into physical memory. Logical address 0 is
page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical
address 0 maps to physical address 20 [= (5 × 4) + 0]. Logical address 3 (page 0, offset 3)
maps to physical address 23 [= (5 × 4) + 3]. Logical address 4 is page 1, offset 0; according
to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical
address 24 [= (6 × 4) + 0]. Logical address 13 maps to physical address 9.
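The worked example can be checked in code. The frames for pages 0, 1, and 3 follow from the text (frame 2 for page 3 is implied by "logical address 13 maps to 9"); frame 1 for page 2 is an assumption made to complete the table:

```python
def translate(logical, page_table, page_size=4):
    """Map a logical address to a physical address with 4-byte pages."""
    p, d = divmod(logical, page_size)
    return page_table[p] * page_size + d

page_table = [5, 6, 1, 2]   # frame 1 for page 2 is assumed
assert translate(0, page_table) == 20    # page 0, offset 0 -> frame 5
assert translate(3, page_table) == 23    # page 0, offset 3
assert translate(4, page_table) == 24    # page 1, offset 0 -> frame 6
assert translate(13, page_table) == 9    # page 3, offset 1 -> frame 2
```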

DEMAND PAGING



Demand paging is similar to a paging system with swapping. Rather than swapping the
entire process into memory, a page is never brought into memory unless it is needed.
A swapper swaps an entire process into memory.
A pager brings individual pages into memory.
In the beginning, the pager guesses the pages to be used by the process. Pages that are not
needed are not brought into memory.
Hardware support for demand paging: an extra bit attached to each entry in the page table
(the valid-invalid bit) indicates whether the page is in memory or not.

Demand paging can significantly affect the performance of a computer system. To see why,
let's compute the effective access time for a demand-paged memory.
To compute the effective access time, we must know how much time is needed to service a
page fault (access to a page marked invalid causes a page fault). A page fault causes the
following sequence to occur:

1. Trap to the operating system.


2. Save the user registers and process state.
3. Determine that the interrupt was a page fault.
4. Check that the page reference was legal and determine the location of the page on the
disk.
5. Issue a read from the disk to a free frame:
a. Wait in a queue for this device until the read request is serviced.
b. Wait for the device seek and/or latency time.
c. Begin the transfer of the page to a free frame.
6. While waiting, allocate the CPU to some other user (CPU scheduling, optional).
7. Receive an interrupt from the disk I/O subsystem (I/O completed).
8. Save the registers and process state for the other user (if step 6 is executed).
9. Determine that the interrupt was from the disk.
10. Correct the page table and other tables to show that the desired page is now in memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, process state, and new page table, and then resume the
interrupted instruction.
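The cost of the sequence above is commonly summarized by the effective access time formula EAT = (1 - p) * (memory access time) + p * (page-fault service time), where p is the page-fault rate. The timings below (200 ns memory access, 8 ms fault service) are illustrative assumptions:

```python
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """EAT = (1 - p) * memory access time + p * page-fault service time,
    all in nanoseconds; p is the page-fault probability."""
    return (1 - p) * mem_ns + p * fault_ns

# Even one fault per 1,000 accesses inflates the EAT from 200 ns to ~8.2 us.
assert round(effective_access_time(0.001)) == 8200
```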

PAGE REPLACEMENT

Page replacement takes the following approach. If no frame is free, we find one that is not
currently being used and free it. We can free a frame by writing its contents to swap space
and changing the page table (and all other tables) to indicate that the page is no longer in
memory
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Continue the user process from where the page fault occurred.



1. FIFO Page Replacement

The simplest page-replacement algorithm is a first-in, first-out (FIFO) algorithm.


A FIFO replacement algorithm associates with each page the time when that page was
brought into memory. When a page must be replaced, the oldest page is chosen.
Consider the following reference string for a memory with three frames:

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

For our example reference string, our three frames are initially empty. The first three
references (7, 0, 1) cause page faults and are brought into these empty frames. The next
reference (2) replaces page 7, because page 7 was brought in first. Since 0 is the next
reference and 0 is already in memory, we have no fault for this reference. The first reference
to 3 results in replacement of page 0, since it is now first in line. Because of this
replacement, the next reference, to 0, will fault. Page 1 is then replaced by page 0. This
process continues as shown in the figure.

Notice that for the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, the number of faults for
four frames (ten) is greater than the number of faults for three frames (nine)! This most
unexpected result is known as Belady's anomaly: for some page-replacement algorithms,
the page-fault rate may increase as the number of allocated frames increases.

2. Optimal Page Replacement

One result of the discovery of Belady’s anomaly was the search for an optimal page-
replacement algorithm—the algorithm that has the lowest page-fault rate of all algorithms
and will never suffer from Belady’s anomaly. Such an algorithm does exist and has been
called OPT or MIN. It is simply this:

Replace the page that will not be used for the longest period of time.



For example, on our sample reference string, the optimal page-replacement algorithm would
yield nine page faults, as shown in the figure. The first three references cause faults that fill the
three empty frames. The reference to page 2 replaces page 7, because page 7 will not be
used until reference 18, whereas page 0 will be used at 5, and page 1 at 14. The reference to
page 3 replaces page 1, as page 1 will be the last of the three pages in memory to be
referenced again. With only nine page faults, optimal replacement is much better than
a FIFO algorithm, which results in fifteen faults.
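A sketch of OPT, evicting the resident page whose next use is farthest in the future (or that is never used again); it reproduces the nine faults on the sample string:

```python
def opt_faults(refs, frames):
    """Count page faults under optimal (OPT/MIN) replacement."""
    mem, faults = set(), 0
    for i, page in enumerate(refs):
        if page in mem:
            continue
        faults += 1
        if len(mem) == frames:
            future = refs[i + 1:]
            # Victim: the page used farthest ahead, or never used again.
            victim = max(mem, key=lambda q: future.index(q)
                         if q in future else len(future) + 1)
            mem.discard(victim)
        mem.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
assert opt_faults(refs, 3) == 9
```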

3. LRU Page Replacement

If the optimal algorithm is not feasible, perhaps an approximation of the optimal algorithm is
possible. The key distinction between the FIFO and OPT algorithms (other than looking
backward versus forward in time) is that the FIFO algorithm uses the time when a page was
brought into memory, whereas the OPT algorithm uses the time when a page is to be used. If
we use the recent past as an approximation of the near future, then we can replace the page
that has not been used for the longest period of time. This approach is the least recently
used (LRU) algorithm.

The result of applying LRU replacement to our example reference string is shown in the figure.
The LRU algorithm produces twelve faults. Notice that the first five faults are the same as
those for optimal replacement. When the reference to page 4 occurs, however, LRU
replacement sees that, of the three frames in memory, page 2 was used least recently. Thus,
the LRU algorithm replaces page 2, not knowing that page 2 is about to be used. When it
then faults for page 2, the LRU algorithm replaces page 3, since it is now the least recently
used of the three pages in memory. Despite these problems, LRU replacement with twelve
faults is much better than FIFO replacement with fifteen.
The LRU policy is often used as a page-replacement algorithm and is considered to be good.
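A sketch of LRU, keeping resident pages ordered from least to most recently used; it reproduces the twelve faults on the sample string:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU: evict the least recently used page."""
    mem, faults = [], 0   # list ordered from LRU (front) to MRU (back)
    for page in refs:
        if page in mem:
            mem.remove(page)          # hit: refresh recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)            # evict the least recently used
        mem.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
assert lru_faults(refs, 3) == 12
```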



SEGMENTATION:

Memory-management scheme that supports user view of memory. The virtual space is
divided into units called segments by the programmer.

A program is a collection of segments. A segment is a logical unit such as Main program,


function, procedure, method, object, variable, common block, stack, arrays.
Segments can have unequal and dynamic size. Each segment has its own virtual address
space. Each segment consists of a linear sequence of addresses, from 0 to some maximum.
A process is loaded partially (only some of its segments). The OS maintains a segment
table for each process and also a free-segment list. Segmentation simplifies the handling of
growing data structures.

When segmentation is used there is no internal fragmentation. Users prefer to view memory
as a collection of variable-sized segments, like arrays, functions, procedures, and the main
program. The logical address space is considered to be a collection of segments.
In the segmentation architecture, a logical address consists of a two-tuple: (segment-number,
offset).
Segment table – maps two-dimensional logical addresses to one-dimensional physical
addresses; each table entry has
• Base – contains the starting physical address where the segment resides in memory.
• Limit – specifies the length of the segment.

Segment-table base register (STBR) points to the segment table's location in memory.
Segment-table length register (STLR) indicates the number of segments used by a program.
The segmentation architecture must address relocation, sharing, and allocation.
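A sketch of segment translation using (base, limit) entries; the table values below are hypothetical, and an offset beyond the limit traps:

```python
def seg_translate(seg, offset, seg_table):
    """Translate (segment-number, offset) through a segment table of
    (base, limit) entries; trap if the offset exceeds the limit."""
    base, limit = seg_table[seg]
    if offset >= limit:
        raise MemoryError("trap: offset exceeds segment limit")
    return base + offset

# Hypothetical table: segment 0 at base 1400 (limit 1000),
# segment 1 at base 6300 (limit 400).
seg_table = {0: (1400, 1000), 1: (6300, 400)}
assert seg_translate(0, 53, seg_table) == 1453
assert seg_translate(1, 399, seg_table) == 6699
```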



A segmentation example is shown in the following diagram

| Segmentation | Paging |
| --- | --- |
| Program is divided into variable-size segments. | Program is divided into fixed-size pages. |
| The user or compiler is responsible for dividing the program into segments. | Division into pages is performed by the operating system. |
| Segmentation is slower than paging. | Paging is faster than segmentation. |
| Segmentation is visible to the user. | Paging is invisible to the user. |
| Segmentation eliminates internal fragmentation. | Paging suffers from internal fragmentation. |
| Segmentation suffers from external fragmentation. | There is no external fragmentation. |
| The processor uses (segment number, offset) to calculate the absolute address. | The processor uses (page number, offset) to calculate the absolute address. |
| The operating system maintains a list of free holes in main memory. | The operating system maintains a free-frame list. |

