Chapter 3 Memory Management

Chapter Three discusses memory management, emphasizing its importance in operating systems for efficient memory utilization and process execution. It covers various techniques such as paging, segmentation, and swapping, along with issues like fragmentation and thrashing. The chapter also explains the distinction between logical and physical address spaces, and introduces memory management methods like bitmaps and linked lists.


CHAPTER THREE

Memory Management

Muluken E. (MSc)
Email: [email protected]

Contents

4.1. Overview of physical memory and memory management
  4.1.1. Hardware overlays
  4.1.2. Swapping
  4.1.3. Partitioning
4.2. Paging and Segmentation
  4.2.1. Page replacement and replacement policies
4.3. Working sets and thrashing
4.4. Caching
Memory Management
Memory is an important resource that needs to be managed properly.

Memory management is the functionality of an operating system that handles or manages primary memory.

Memory management is important for achieving a degree of multiprogramming and proper utilization of memory.

Memory management keeps track of each and every memory location, whether it is allocated to some process or free.

It determines how much memory is to be allocated to each process and decides which process will get memory at what time.
Memory Management
In a multiprogramming computer, the operating system resides in a part of memory and the rest is used by multiple processes.

The task of subdividing the memory among different processes is called memory management.

Memory management is a method in the operating system to manage operations between main memory and disk during process execution.

The main aim of memory management is to achieve efficient utilization of memory.
Memory Management
It tracks whenever some memory gets freed or unallocated and updates its status accordingly.

Memory management provides protection by using two registers, a base register and a limit register.

The base register holds the smallest legal physical memory address, and the limit register specifies the size of the range.

For example, if the base register holds 300000 and the limit register holds 120000, then the program can legally access all addresses from 300000 up to (but not including) 420000.
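As a quick illustration (not part of the original slides), the following Python sketch shows how a base/limit register pair could be checked and applied; the constants reuse the 300000/120000 example above.

```python
# Minimal sketch, reusing the base/limit example above (illustrative only).
BASE = 300_000    # smallest legal physical address for this process
LIMIT = 120_000   # size of the legal address range

def relocate(logical_addr: int) -> int:
    """Trap if the CPU-generated address is outside the limit,
    otherwise relocate it by adding the base register."""
    if not (0 <= logical_addr < LIMIT):
        raise MemoryError("trap: addressing error (outside base/limit range)")
    return BASE + logical_addr

print(relocate(0))         # 300000, the first legal address
print(relocate(119_999))   # 419999, the last legal address
try:
    relocate(120_000)      # one past the range -> protection trap
except MemoryError as err:
    print(err)
```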
Memory Management

[Figure: memory protection with base and limit registers]
Why Memory Management is required
To allocate and de-allocate memory before and after process execution.
To keep track of the memory space used by processes.
To minimize fragmentation issues.
To ensure proper utilization of main memory.
To maintain data integrity during process execution.
Logical versus Physical Address Space
Logical address space: An address generated by the CPU is known as a "logical address".
It is also known as a virtual address.
The logical address space can be defined as the size of the process.
A logical address can be changed.

Physical address space: An address seen by the memory unit is commonly known as a "physical address".
A physical address is also known as a real address.
Logical versus Physical Address Space

The set of all physical addresses corresponding to these logical addresses is known as the physical address space.

A physical address is computed by the MMU.

The run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

The physical address always remains constant.
Logical versus Physical Address Space
The MMU uses the following mechanism to convert a virtual address to a physical address.
The value in the base register is added to every address generated by a user process, which is treated as an offset, at the time it is sent to memory.
For example, if the base register value is 10000, then an attempt by the user to use address location 100 will be dynamically relocated to location 10100.
The user program deals with virtual addresses; it never sees the real physical addresses.
Logical versus Physical Address Space

[Figure: the MMU translates logical (virtual) addresses to physical addresses]
• The OS has a part called the memory manager that manipulates and performs operations on the memory hierarchy.
Memory Management
• Memory management systems are divided into two classes:

1. Those that move processes back and forth between main memory and disk (paging and swapping) because memory is insufficient.
2. Those that do not (simpler).

• The state of software technology also pushes for efficient use of memory.
Monoprogramming without Swapping or Paging
• The simplest possible memory management scheme is to run just one program at a time, sharing the memory between that program and the operating system.
[Figure: three simple ways of organizing memory with one user program (addresses run from 0 at the bottom to 0xFFF... at the top):
 (a) the OS in RAM at the bottom of memory with the user program above it - formerly used on mainframes and minicomputers;
 (b) the OS in ROM at the top of memory with the user program below it - used on palmtops, embedded systems, and smart cards;
 (c) device drivers in ROM at the top, the user program in the middle, and the OS in RAM at the bottom - early PCs, where the ROM portion of the system is called the BIOS.]
• In all cases only one process can run at a time.
  ◦ No swapping or paging (e.g., three-level scheduling in a batch system).
Multiprogramming with Fixed Partitions

To gain proper memory utilization, memory must be allocated in an efficient manner.

One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process.

Thus, the degree of multiprogramming is bounded by the number of partitions.
Fragmentation

Fragmentation occurs when processes are loaded into memory and removed after execution, leaving behind small free holes.

These holes cannot be assigned to new processes because they are not combined or do not fulfill the memory requirements of the process.
Fragmentation
Internal fragmentation:
Internal fragmentation occurs when a memory block allocated to a process is larger than its requested size.
The leftover unused space inside the block creates the internal fragmentation problem.

External fragmentation:
In external fragmentation, we have free memory blocks, but we cannot assign them to a process because the blocks are not contiguous.
To overcome the external fragmentation problem, compaction is used. In the compaction technique, all free memory space is combined into one large block, so this space can be used by other processes effectively.
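A minimal sketch of the compaction idea (illustrative only; the block names and sizes are made up): allocated blocks are slid together and the holes are merged into one large free block.

```python
# Minimal compaction sketch (hypothetical block sizes).
memory = [("A", 100), ("hole", 50), ("B", 200), ("hole", 30), ("C", 80)]

def compact(blocks):
    """Move all allocated blocks together and merge every hole
    into a single large free block at the end."""
    allocated = [b for b in blocks if b[0] != "hole"]
    free_total = sum(size for name, size in blocks if name == "hole")
    return allocated + [("hole", free_total)]

print(compact(memory))
# [('A', 100), ('B', 200), ('C', 80), ('hole', 80)]
```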
Swapping
• Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.

• The backing store is usually a hard disk drive or other secondary storage that is fast to access and large enough to accommodate copies of all memory images for all users. It must be capable of providing direct access to these memory images.

• Swapping allows more processes to be run than can fit into memory at one time.
Swapping
• Swapping is also known as roll out, roll in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process.

• After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues its execution.
Swapping
• The major time-consuming part of swapping is transfer time. Total transfer time is directly proportional to the amount of memory swapped.

• For example, assume a user process of size 100 KB and a backing store that is a standard hard disk with a transfer rate of 1 MB per second. The actual transfer of the 100 KB process to or from memory will take

  100 KB / 1000 KB per second = 1/10 second.
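A quick check of the arithmetic above (illustrative only):

```python
process_size_kb = 100
transfer_rate_kb_per_s = 1000      # 1 MB per second, expressed in KB/s

transfer_time_s = process_size_kb / transfer_rate_kb_per_s
print(transfer_time_s)             # 0.1, i.e. 1/10 of a second
```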
Swapping

[Figure: a process is swapped out of main memory to the backing store while another is swapped in]
Swapping (1)

• Memory allocation changes as
  • processes come into memory, and
  • processes leave memory.
• The shaded regions are unused memory.
Swapping (2)

• Allocating space for a growing data segment
• Allocating space for a growing stack and data segment
MM with Bitmaps and Linked List
• As memory is assigned dynamically, the OS must manage it.

• There are two ways of implementing a free-space management system:
  • Bitmaps
  • Linked lists (free lists)
MM using Bitmaps
1) Bitmap
 The free-space list is implemented as a bitmap or bit vector.
 Each block is represented by 1 bit:
  when the block is free, its bit is set to 1, and
  when the block is allocated, its bit is set to 0.

 Example: consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, and 27 are free, and the rest are allocated. The first 21 bits of the bitmap are:

 Blocks: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
 Bits:   0 0 1 1 1 1 0 0 1 1 1  1  1  1  0  0  0  1  1  0  0
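The following Python sketch (not part of the slides) implements this bitmap convention, where bit 1 means free and bit 0 means allocated, using the block numbers from the example above.

```python
# Free-space bitmap sketch: 1 = free, 0 = allocated (convention above).
NUM_BLOCKS = 32
free_blocks = {2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27}

bitmap = [1 if b in free_blocks else 0 for b in range(NUM_BLOCKS)]

def allocate_block(bitmap):
    """Find the first free block, mark it allocated, and return its number."""
    for block, bit in enumerate(bitmap):
        if bit == 1:
            bitmap[block] = 0
            return block
    return None     # no free block available

def release_block(bitmap, block):
    """Mark a block as free again."""
    bitmap[block] = 1

print("".join(map(str, bitmap[:21])))   # 001111001111110001100
print(allocate_block(bitmap))           # 2, the first free block
```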
MM using Bitmaps
 The size of the bitmap depends on the size of memory and the size of the allocation unit.

 [Figure:
 (a) A part of memory with five processes and three holes. The tick marks show the memory allocation units. The shaded regions (0 in the bitmap) are free.
 (b) The corresponding bitmap.
 (c) The same information as a list.]

 Note that this figure uses the opposite convention from the previous slide: here a 0 bit marks a free allocation unit.
MM using Linked List
A head pointer, kept in a special location on the disk, points to the first free block of the list.

Advantage
The operating system simply allocates the first block in the free-space list and moves the head pointer to the next free block in the list.

Disadvantage
Searching the free space is very time consuming, since each block has to be read from the disk.
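A minimal sketch of the linked (free-list) approach, assuming each free block simply records the number of the next free block; the block numbers below are illustrative.

```python
# Linked free-list sketch (illustrative only).
from typing import Optional

class FreeList:
    def __init__(self, free_blocks):
        blocks = sorted(free_blocks)
        self.head: Optional[int] = blocks[0] if blocks else None
        # next_free[b] is the free block that follows b on the list.
        self.next_free = dict(zip(blocks, blocks[1:] + [None]))

    def allocate(self) -> Optional[int]:
        """Hand out the first free block and advance the head pointer."""
        block = self.head
        if block is not None:
            self.head = self.next_free.pop(block)
        return block

fl = FreeList([2, 3, 4, 5, 8, 9])
print(fl.allocate())   # 2
print(fl.allocate())   # 3
print(fl.head)         # 4, the new head of the free list
```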
MM using Linked List

• [Figure: four neighbor combinations for the terminating process, X]
Paging

† Paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory.

† In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages.

† Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

† Paging is a non-contiguous memory allocation technique.
Paging

† The logical address space is divided into equal-size pages.

† The physical address space is divided into equal-size frames.

† Page size = frame size.

† When a process is to be executed, its pages are loaded into any available memory frames.

† The operating system keeps track of all free frames. It needs n free frames to run a program of size n pages.
Paging
† The address generated by the CPU is divided into:

⁘ Page number (p) -- the page number is used as an index into a page table, which contains the base address of each page in physical memory.
⁘ Page offset (d) -- the page offset is combined with that base address to define the physical memory address.
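To make the split concrete, here is a small sketch (not from the slides) that divides a logical address into a page number and an offset and translates it through a hypothetical page table, assuming 4 KB pages.

```python
# Hypothetical page table and 4 KB page size (assumptions for illustration).
PAGE_SIZE = 4096                    # so the offset is the low 12 bits

page_table = {0: 5, 1: 2, 2: 7}     # page number p -> frame number (made-up values)

def translate(logical_addr: int) -> int:
    p = logical_addr // PAGE_SIZE   # page number: index into the page table
    d = logical_addr % PAGE_SIZE    # page offset: kept unchanged
    frame = page_table[p]           # a missing entry would mean a page fault
    return frame * PAGE_SIZE + d    # physical address

print(translate(1000))   # page 0 -> frame 5: 5*4096 + 1000 = 21480
print(translate(5000))   # page 1 -> frame 2: 2*4096 + 904  = 9096
```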
Paging

• Virtual memory uses a technique called paging.

• For instance, the instruction MOV REG,1000 copies the content of memory address 1000 into REG; here 1000 is a virtual address that the MMU maps onto a physical address.
Paging…
• Pages and page frames are always equal in size; here both are 4 KB.
  ◦ How many pages and page frames do we get from a 64 KB virtual address space and 32 KB of physical memory?
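A quick check of the arithmetic behind the question (assuming the 4 KB page size given above):

```python
KB = 1024
page_size = 4 * KB

print((64 * KB) // page_size)   # 16 pages in a 64 KB virtual address space
print((32 * KB) // page_size)   # 8 page frames in 32 KB of physical memory
```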
Paging Model of Logical and Physical Memory

[Figure: pages of logical memory mapped through the page table to frames of physical memory]
Paging Hardware

[Figure: paging hardware - the CPU-generated address is split into page number and offset and translated through the page table]
Paging Example
† [Figure: paging example for a 32-byte memory with 4-byte pages]
Goals of Paging

◊ The major goals of paging are to make memory allocation and swapping easier and to reduce fragmentation.

◊ Paging also allows allocation of noncontiguous memory (i.e., pages need not be adjacent).
What happens if there is no free frame?
• Page replacement happens when a requested page is not present in main memory and the available space is not sufficient to allocate the requested page.

• Page replacement: find some page in memory that is not really in use and swap it out.
  • algorithm: which page should be replaced?
  • performance: we want an algorithm that results in the minimum number of page faults.

• The same page may be brought into memory several times.


Page Replacement

• Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.

• Use the modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk.

• Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on a smaller physical memory.
Need For Page Replacement
Basic Page Replacement

1. Find the location of the desired page on disk.

2. Find a free frame:
   - If there is a free frame, use it.
   - If there is no free frame, use a page replacement algorithm to select a victim frame.

3. Bring the desired page into the (newly) free frame; update the page and frame tables.

4. Restart the process.


Page Replacement
Page Replacement Algorithms

• We want the lowest page-fault rate.

• Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.

• In all our examples, the reference string is


A. 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

B. 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
First-In-First-Out (FIFO) Page Replacement
Replace the page that has been in memory for the longest time.
If the needed (demanded) page is not available in main memory, it is called a page fault.
If the needed (demanded) page is available in main memory, it is called a page hit.
What happens when we have 3 frames in example A above?
What happens when we have 4 frames? (See the sketch below.)
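The following sketch (not part of the slides) counts FIFO page faults on reference string A, so the two questions above can be checked directly.

```python
# FIFO page replacement sketch (illustrative only).
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames = deque()                       # oldest page sits at the left
    faults = 0
    for page in reference_string:
        if page not in frames:             # page fault
            faults += 1
            if len(frames) == num_frames:  # no free frame: evict the oldest page
                frames.popleft()
            frames.append(page)
        # on a page hit, FIFO does nothing
    return faults

A = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_page_faults(A, 3))   # 15 page faults with 3 frames
print(fifo_page_faults(A, 4))   # 10 page faults with 4 frames
```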
Optimal Algorithm

• Each page is labeled with the number of instructions that will be executed before that page is next needed (first referenced again); the page whose next use is farthest away is replaced.
• The only problem with this algorithm is that it is unrealizable… the only problem?
• How do you know this in advance? It requires future knowledge of the reference string!
• It is used as a yardstick to evaluate how well other algorithms perform.
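Although the optimal algorithm is unrealizable in practice, it is easy to simulate offline once the whole reference string is known. A minimal sketch (not from the slides):

```python
# Optimal (OPT) page replacement sketch: evict the resident page whose
# next use lies farthest in the future (or that is never used again).
def optimal_page_faults(reference_string, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                          # page hit
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else float("inf"))
        frames[frames.index(victim)] = page
    return faults

A = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_page_faults(A, 3))   # 9 page faults with 3 frames
```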
Optimal Page Replacement

[Figure: optimal page replacement traced on the example reference string]
Least Recently Used (LRU) Algorithm

† Replace the page that has not been used for the longest period of time.
† The optimal algorithm uses the time when a page is to be used next.
† FIFO uses the time when a page was brought into memory.
† LRU replaces the page that was least recently used.
† LRU uses the recent past as an approximation of the near future.
LRU Page Replacement
† Counter implementation
 Every page-table entry has a counter; every time the page is referenced through this entry, the current clock value is copied into the counter.
 When a page needs to be replaced, look at the counters to find the one with the smallest value and use that frame.
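A minimal sketch of the counter idea (not from the slides): each resident page keeps the clock value of its last reference, and the page with the smallest counter is the victim.

```python
# LRU page replacement via reference-time counters (illustrative only).
def lru_page_faults(reference_string, num_frames):
    last_used = {}                 # page -> clock value of its most recent reference
    faults = 0
    for clock, page in enumerate(reference_string):
        if page not in last_used:                  # page fault
            faults += 1
            if len(last_used) == num_frames:       # evict the least recently used page
                victim = min(last_used, key=last_used.get)
                del last_used[victim]
        last_used[page] = clock                    # copy the "clock" into the counter
    return faults

A = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_page_faults(A, 3))   # 12 page faults with 3 frames
```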
Global vs. Local Allocation
• Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another.
  ◦ A process is no longer in control of its own page-fault rate.
  ◦ Usually results in greater system throughput and is therefore more commonly used.
• Local replacement: each process selects only from its own set of allocated frames.
  ◦ Does not take advantage of less-used pages in other processes, which could improve overall system throughput.
Working sets and Thrashing
† If a single process is too large for memory, there is nothing the operating system (OS) can do; that process will simply thrash.
† If the problem arises because of the sum of several processes, figure out how much memory each process needs to prevent thrashing. This amount is called its working set.
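One common way to make the working-set idea concrete is to define it as the set of distinct pages referenced in the last Δ references. A minimal sketch under that assumption (not from the slides):

```python
# Working-set sketch: pages referenced in the window (t - delta, t].
def working_set(reference_string, t, delta):
    window = reference_string[max(0, t - delta + 1): t + 1]
    return set(window)

A = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(working_set(A, t=9, delta=5))   # {0, 2, 3, 4}
```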
Caching

† Caching is storing data in a separate, very fast storage area (the cache).
† Data that is used many times wastes time if it must always be fetched from the hard disk; storing that data in a cache reduces this wasted time.
† A cache is used in a system to speed up access to frequently used data.
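A minimal sketch of the caching idea (illustrative only; the "disk" and its delay are simulated):

```python
import time

disk = {block: f"data-{block}" for block in range(100)}   # pretend backing store
cache = {}                                                # fast in-memory copies

def read_block(block):
    if block in cache:            # cache hit: served from fast memory
        return cache[block]
    time.sleep(0.01)              # simulate the slow disk access
    data = disk[block]            # cache miss: go to the disk
    cache[block] = data           # keep a copy for the next access
    return data

read_block(7)    # slow the first time (miss)
read_block(7)    # fast the second time (hit)
```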
END OF CHAPTER THREE !!!
