
Operating System

Mattu University
Engineering and Technology College
Department of Computer Science

Target Dept.: Computer Science, 3rd Year Regular

By: Tekalign B.

Lecture Three

Memory Management

Outline
• Overview of physical memory and memory management
• Swapping
• Partitioning
• Paging and segmentation
• Working sets and thrashing
• Caching
Memory management in OS
What is Main Memory?
• Main memory, commonly referred to as RAM (Random Access Memory), is the computer's primary temporary storage for actively processed data.
• Unlike permanent storage such as hard drives, RAM is volatile: it loses its contents when the computer powers down.
• Efficient memory management, involving allocation and deallocation, is essential for optimal performance.
The need for memory management
• Memory is cheap today, and getting cheaper
  – But applications demand more and more memory; there is never enough!
• Ideally, programmers want memory that is large, fast, and non-volatile.
• The memory hierarchy:
  – a small amount of fast, expensive memory – cache
  – some medium-speed, medium-price main memory
  – gigabytes of slow, cheap disk storage
• The memory manager handles the memory hierarchy.
Memory Management
• Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time.
• The part of the operating system that manages (part of) the memory hierarchy is called the memory manager.
• Memory management is the functionality of an operating system that handles or manages primary memory.
• The main aim of memory management is to achieve efficient utilization of memory.
Why Memory Management is Required?
• Mechanisms
  – Keep track of memory in use and of unused ("free") memory
  – Protect memory spaces from one another
  – Minimize fragmentation
  – Allocate and deallocate memory before and after process execution
  – Swap processes between memory and disk
• Policies
  – Decide when to load each process into memory
  – Decide how much memory space to allocate to each process
  – Decide when a process should be removed from memory
Logical vs Physical Address Space
• An address points to the location where data is stored, just as your home address points to where you live.
• Logical address space:
  – A logical address is generated by the CPU; it is also known as a virtual address.
  – The logical address space can be defined as the size of the process, and it can change.
• Physical address space:
  – A physical address is an address seen by the memory unit; it is also known as a real address.
  – The set of all physical addresses corresponding to the logical addresses is known as the physical address space.
Cont’d…

• The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU).
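As a rough illustration of what the MMU does, here is a minimal sketch in Python assuming simple base-and-limit relocation; the register values and the `translate` helper are invented for the example, and a real MMU performs this check in hardware on every memory reference.

```python
# A minimal sketch of MMU-style relocation with base and limit registers.
# Register values are hypothetical; real MMUs do this in hardware.

BASE = 14000    # where the process was loaded in physical memory
LIMIT = 3000    # size of the process's logical address space

def translate(logical_addr: int) -> int:
    """Map a logical (virtual) address to a physical address."""
    if not 0 <= logical_addr < LIMIT:
        raise MemoryError(f"protection fault: address {logical_addr} out of range")
    return BASE + logical_addr

print(translate(42))    # 14042
print(translate(2999))  # 16999
# translate(3000) would raise: the process tried to leave its address space
```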
Swapping
• Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.
• Backing store – a fast disk large enough to accommodate copies of all memory images for all users.
• Roll out, roll in – a swapping variant used for priority-based scheduling algorithms: a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
• The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
• Modified versions of swapping are found on many systems, e.g., UNIX, Linux, and Windows.
Cont’d…

• Mono-programming without swapping or paging
  – This is the simplest memory management approach: memory is divided into two sections,
    • one part for the operating system, and
    • the other for the user program.
Cont’d…

• Multiprogramming with fixed partitions (without swapping)
  – Fixed memory partitions, with either
    • separate input queues for each partition, or
    • a single input queue shared by all partitions.
Schematic View of Swapping

Cont’d…

• Initially, only process A is in memory. Then processes B and C are created or swapped in from disk.
• In Fig. 3-4(d), A is swapped out to disk. Then D comes in and B goes out.
• Finally, A comes in again. Since A is now at a different location, the addresses contained in it must be relocated, either by software when it is swapped in or (more likely) by hardware during program execution. For example, base and limit registers would work fine here.
Memory Partitioning
• Partition main memory into a set of non-overlapping regions called partitions.
• An early method of managing memory
  – Pre-virtual memory
  – Not used much now
• But looking first at partitioning will clarify the later discussion of virtual memory
  – Virtual memory evolved from the partitioning methods.
Types of Partitioning
• Fixed Partitioning
• Dynamic Partitioning
• Simple Paging
• Simple Segmentation
• Virtual Memory Paging
• Virtual Memory Segmentation
Fixed Partitioning

– It is the simplest and oldest partitioning technique.
– It divides memory into a fixed number of equal- or unequal-sized partitions.
– The partitioning can be done by the OS or by the operator when the system is started.
– Variations of fixed partitioning:
  • Equal-sized fixed partitions
  • Unequal-sized fixed partitions
Fixed Partitioning …

• Equal-size partitions
  – Any process whose size is less than or equal to the partition size can be loaded into an available partition.
  – If all partitions are occupied, the operating system can swap a process out of a partition,
    • for instance, one that is not in a ready or running state.
Fixed Partitioning: Problems

• A program may not fit in a partition.
  – The programmer must then design the program with overlays.
• Main memory use is inefficient.
  – Any program, no matter how small, occupies an entire partition.
  – This results in internal fragmentation.
  – Internal fragmentation occurs when a process is allocated a memory block whose size is larger than the process itself, so part of the block is left unused.
Solution – Unequal Size Partitions

Dynamic (unequal-size) partitioning
• Lessens both problems, but does not solve them completely.
• Partitions are created dynamically during the allocation of memory.
• The size and number of partitions vary throughout the operation of the system.
• In the figure:
  – Programs up to 16 MB can be accommodated without overlays.
  – Smaller programs can be placed in smaller partitions, reducing internal fragmentation.
Unequal Size Partitions…

• Dynamic partitioning leads to a situation in which there are many small, useless holes in memory.
  – This phenomenon is called external fragmentation.
  – Solution to external fragmentation:
    • Compaction – combining the multiple useless holes into one big hole.
• In the figure: there is enough total space (9 MB) to run process 4 (which requires 7 MB), but the available memory (the fragments) is not contiguous.
Dynamic Partitioning example
• A hole of 64K is left after loading three processes: not enough room for another process.
• Eventually each process is blocked. The OS swaps out process 2 to bring in process 4 (128K).
Dynamic Partitioning example …
• Another hole of 96K is created after loading P4.
• The OS then swaps out process 1 to bring process 2 (224K) back in, and yet another hole of 96K is created.
• Compaction would produce a single hole of 256K. This scattering of small holes is the external fragmentation described above.
Dynamic Partitioning: Placement Algorithms

The operating system must decide which free block to allocate to a process.
• Best-fit algorithm
  – Chooses the block that is closest in size to the request (while still big enough).
  – Since the smallest adequate block is chosen, the leftover fragment is as small as possible, so memory utilization per allocation is high compared to the other techniques.
  – However, those tiny leftover fragments are usually too small to reuse, which makes best-fit the worst performer overall.
Dynamic Partitioning
• First-fit algorithm
  – Scans memory from the beginning and chooses the first available block that is large enough.
  – Fastest method.
  – May leave many allocated regions at the front end of memory that must be searched over when trying to find a free block.
Dynamic Partitioning
• Next-fit
  – Scans memory from the location of the last placement.
  – More often allocates a block at the end of memory, where the largest free block is usually found, breaking it into smaller pieces.
Dynamic Partitioning
• Worst-fit
  – Allocates the largest available hole to the process.
  – This method produces the largest leftover hole, but inefficient memory utilization is its major issue.
• A combined sketch of all four placement strategies follows below.
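As a rough sketch (not taken from any particular OS allocator), the following Python models the four placement strategies over a free-hole list; the hole list, the sizes, and the function names are invented for illustration.

```python
# A minimal sketch of the four placement strategies over a free-hole list.
# Holes are (start, size) tuples; the sizes are illustrative only.

def best_fit(holes, request):
    """Smallest hole that is still big enough."""
    fits = [h for h in holes if h[1] >= request]
    return min(fits, key=lambda h: h[1]) if fits else None

def worst_fit(holes, request):
    """Largest available hole."""
    fits = [h for h in holes if h[1] >= request]
    return max(fits, key=lambda h: h[1]) if fits else None

def first_fit(holes, request):
    """First adequate hole, scanning from the start of memory."""
    for hole in sorted(holes):               # sorted by start address
        if hole[1] >= request:
            return hole
    return None

def next_fit(holes, request, last_end):
    """Like first-fit, but resume scanning after the last placement."""
    ordered = sorted(holes)
    after = [h for h in ordered if h[0] >= last_end]
    for hole in after + ordered:             # wrap around if needed
        if hole[1] >= request:
            return hole
    return None

holes = [(0, 100), (250, 60), (400, 300), (900, 120)]
print(best_fit(holes, 110))      # (900, 120) – tightest fit
print(worst_fit(holes, 110))     # (400, 300) – biggest leftover hole
print(first_fit(holes, 50))      # (0, 100)   – first adequate hole
print(next_fit(holes, 50, 300))  # (400, 300) – continues past address 300
```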
Fragmentation
• As processes are loaded into memory and removed after execution, they leave behind small free holes.
• These holes often cannot be assigned to new processes, because they are not combined and individually do not satisfy a process's memory requirement.
• To sustain a good degree of multiprogramming, we must reduce this waste of memory, i.e., the fragmentation problem.
• Operating systems distinguish two types of fragmentation:
  – Internal fragmentation
  – External fragmentation
Fragmentation
• Internal fragmentation:
  – Occurs when a memory block allocated to a process is larger than the process's requested size.
  – The leftover space inside the block is unused, creating the internal fragmentation problem.
• Example: Suppose fixed partitioning is used, with blocks of 3 MB, 6 MB, and 7 MB in memory. A new process p4 of size 2 MB arrives and demands a block of memory. It gets the 3 MB block, but the remaining 1 MB of that block is wasted and cannot be allocated to any other process. This is internal fragmentation.
Fragmentation
• External fragmentation:
  – We have enough free memory in total, but we cannot assign it to a process because the free blocks are not contiguous.
• Example: Continuing the example above, suppose three processes p1, p2, and p3 arrive with sizes 2 MB, 4 MB, and 7 MB respectively, and are allocated the 3 MB, 6 MB, and 7 MB blocks. The allocations for p1 and p2 leave 1 MB and 2 MB unused. Now a new process p4 arrives and demands a 3 MB block of memory; that much free memory exists in total, but we cannot assign it because the free space is not contiguous. This is external fragmentation.
Fragmentation
• Both the first-fit and best-fit allocation schemes suffer from external fragmentation.
• To overcome the external fragmentation problem, compaction is used, as sketched below.
• In the compaction technique, all free memory space is combined into one large block, which can then be used by other processes effectively.
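A minimal sketch of what compaction does, assuming allocated blocks are simply slid toward address zero; the block layout and the `compact` helper are invented for illustration.

```python
# A minimal sketch of compaction: slide allocated blocks together so all
# free space coalesces into one hole. The block layout is illustrative.

def compact(blocks, memory_size):
    """blocks: list of (start, size, owner); returns relocated blocks + one hole."""
    relocated, cursor = [], 0
    for start, size, owner in sorted(blocks):    # keep address order
        relocated.append((cursor, size, owner))  # move block down to `cursor`
        cursor += size
    hole = (cursor, memory_size - cursor)        # single free region at the top
    return relocated, hole

blocks = [(0, 3, "p1"), (5, 6, "p2"), (14, 7, "p3")]  # holes at 3–5 and 11–14
print(compact(blocks, 24))
# ([(0, 3, 'p1'), (3, 6, 'p2'), (9, 7, 'p3')], (16, 8)) – one 8 MB hole
```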
Virtual Memory
• Virtual memory is a technique that allows the execution of processes that are not completely resident in memory.
• It separates user logical memory from physical memory.
• This separation allows an extremely large virtual memory to be provided for programmers even when only a smaller physical memory is available.
• Virtual memory can be implemented via:
  – Demand paging
  – Demand segmentation
Paging
• Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory.
  – This scheme permits the physical address space of a process to be non-contiguous.
• The physical address space is conceptually divided into fixed-size blocks called frames.
• The logical address space is split into blocks of the same size, called pages.
• A logical address is translated by splitting it into a page number and an offset, then looking the page up in the process's page table, as sketched below.
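As a sketch of that translation step, assuming 4 KB pages and an invented page table, the page number comes from the high-order bits of the address and the offset from the low-order bits:

```python
# A minimal sketch of paging address translation. The page size and the
# page table contents are assumptions made for this illustration.

PAGE_SIZE = 4096                           # 4 KB pages
page_table = {0: 5, 1: 2, 2: 7, 3: 0}      # page number -> frame number

def translate(logical_addr: int) -> int:
    page = logical_addr // PAGE_SIZE       # high-order bits: page number
    offset = logical_addr % PAGE_SIZE      # low-order bits: offset in page
    if page not in page_table:
        raise MemoryError("page fault: page not resident")
    return page_table[page] * PAGE_SIZE + offset

print(translate(8200))  # page 2, offset 8 -> frame 7 -> physical 28680
```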
Paging

[Figures: Example of paging – Processes and Frames – Page Table]

Cont’d…

• Advantages of paging
  – Paging reduces external fragmentation.
  – It is simple to implement and is considered an efficient memory management technique.
  – Paging allows fast access to data.
  – Because pages and frames are of equal size, swapping becomes very easy.
• Disadvantages of paging
  – Paging suffers from internal fragmentation (in the last page of a process).
  – The page table requires extra memory space, so paging may not suit systems with small RAM.
Page Replacement Algorithm
• Page replacement algorithms are the techniques by which the operating system decides which memory page to swap out (write to disk) when a new page of memory needs to be allocated.
• An algorithm is evaluated by running it on a particular string of memory references and computing the number of page faults on that string.
  – The string of memory references is called the reference string.
  – We want the lowest page-fault rate.
First in First out (FIFO) algorithm
• The oldest page in main memory is the one selected for replacement.
• The operating system maintains a queue of all pages currently in memory, with the most recent arrival at the tail and the oldest arrival at the head. On a page fault, the page at the head is removed and the new page is added at the tail.
• Easy to implement: keep a list, replace pages from the head, and add new pages at the tail. A minimal simulation is sketched below.
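A minimal sketch in Python (not from the lecture): the invented helper `fifo_faults` counts page faults for a reference string and a given number of frames.

```python
# A minimal sketch of FIFO page replacement: evict from the head of the
# queue (oldest arrival), append new pages at the tail.

from collections import deque

def fifo_faults(reference_string, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))  # 15 page faults with 3 frames
```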
Cont’d…

• Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
• 3 frames (3 pages can be in memory at a time per process)
• Results can vary with the reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5.
  – Adding more frames can cause more page faults!
  – This is Belady’s Anomaly, demonstrated below.
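Reusing the `fifo_faults` helper from the FIFO sketch above, the following run reproduces the anomaly on that reference string:

```python
# Reuses fifo_faults from the FIFO sketch above.
belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3))  # 9 page faults with 3 frames
print(fifo_faults(belady, 4))  # 10 page faults with 4 frames – Belady’s Anomaly
```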
Optimal Page algorithm
• An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms.
• The optimal algorithm labels each page with how far in the future it will next be used, and says the page with the highest label should be removed: if one page will not be used for 8 million instructions and another will not be used for 6 million, remove the former.
• In other words, replace the page that will not be used for the longest period of time. This requires knowing at fault time when each page will next be used, so the algorithm is unrealizable in practice and serves mainly as a benchmark. A sketch follows.
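A minimal sketch, assuming the whole reference string is known in advance (which is exactly why the algorithm is unrealizable in a real OS); `optimal_faults` is an invented helper name.

```python
# A minimal sketch of optimal (Belady's) replacement: evict the resident
# page whose next use lies furthest in the future.

def optimal_faults(refs, n_frames):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            def next_use(p):
                # distance to next use; pages never used again sort last
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else len(refs)
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(optimal_faults(refs, 3))  # 9 page faults – the lower bound for this string
```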
Least Recently Used (LRU) algorithm
• The page that has not been used for the longest time in main memory is the one selected for replacement.
• Easy to implement: keep a list ordered by recency and replace pages by looking back into time, as sketched below.
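A minimal sketch using Python's OrderedDict as the recency list; `lru_faults` is an invented helper, and real kernels only approximate LRU with cheaper hardware-assisted schemes.

```python
# A minimal sketch of LRU replacement using an OrderedDict as the
# recency list: the most recently used page moves to the end.

from collections import OrderedDict

def lru_faults(refs, n_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))  # 12 page faults with 3 frames
```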
Cont’d…

Other page replacement algorithms include:
4. Page Buffering algorithm
5. Least Frequently Used (LFU) algorithm
6. Most Frequently Used (MFU) algorithm
Segmentation
• Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module that contains pieces performing related functions.
  – Each segment is actually a different logical address space of the program.
  – When a process is to be executed, its segments are loaded into non-contiguous memory, though every individual segment is loaded into a contiguous block of available memory.
  – Segmentation is similar to dynamic partitioning.
• Segmentation works very much like paging, but segments are of variable length, whereas pages are of fixed size. Address translation is sketched below.
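A sketch of segmented address translation, with an invented segment table holding a (base, limit) pair per segment; the table contents are illustrative only.

```python
# A minimal sketch of segmentation address translation. The segment
# table contents (base, limit) are illustrative assumptions.

segment_table = {
    0: (1400, 1000),   # code
    1: (6300,  400),   # stack
    2: (4300,  400),   # data
}

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if not 0 <= offset < limit:
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate(2, 53))   # 4353: byte 53 of the data segment
# translate(1, 852) would trap: the stack segment is only 400 bytes long
```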
Caching

• Caching (often referred to as memory caching) is a technique in which computer applications temporarily store data in a computer's main memory (i.e., random access memory, or RAM) to enable fast retrieval of that data.
• The RAM used for this temporary storage is known as the cache.
• Since accessing RAM is significantly faster than accessing other media such as hard disk drives or networks, caching helps applications run faster through quicker access to data, as the small example below shows.
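As a small concrete example, Python's standard functools.lru_cache keeps recent results of a function in RAM so repeated calls are served from the cache instead of being recomputed; the fib function is just a stand-in for any expensive computation.

```python
# A small illustration of caching with Python's functools.lru_cache:
# results of expensive calls are kept in memory and reused.

from functools import lru_cache

@lru_cache(maxsize=128)          # keep up to 128 recent results in RAM
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))                   # fast: subproblems are served from the cache
print(fib.cache_info())          # hits/misses show the cache at work
```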
Caching (Reading Assignment)

 How does memory caching work?
 Strategies for managing cache space
   LRU, LFU, FIFO, LIFO, random replacement
 Types of caching
   CPU cache, memory cache, disk cache, browser cache, distributed cache
 Cache memory and cache memory types
   Types of cache memory (L1 cache, L2 cache, Level 3 (L3) cache)
 Cache vs. main memory
 Cache mapping (direct, associative, and set-associative)
 Applications of cache memory
 Advantages and disadvantages of cache memory
Working Set and Thrashing (Reading Assignment II)
 The primary reason for thrashing is that the total amount of memory demanded by the processes exceeds the actual available memory by a great amount, so page faults occur frequently.
 As a solution, the working-set model keeps only the working set of pages of each process in memory (see the sketch below).
 If a process does not have "enough" pages, its page-fault rate is very high. This leads to:
   low CPU utilization;
   the operating system thinking it needs to increase the degree of multiprogramming;
   another process being added to the system.
 Thrashing = a process is busy swapping pages in and out rather than doing useful work.
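A minimal sketch of the working-set idea, with an invented `working_set` helper: the working set at time t is the set of pages referenced in the last Δ (delta) references.

```python
# A minimal sketch of the working-set model: the working set at time t is
# the set of distinct pages referenced in the last `delta` references.

def working_set(refs, t, delta):
    """Pages referenced in the window of size `delta` ending at position t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 2, 1, 4, 4, 4, 5]
print(working_set(refs, 5, 4))   # {1, 2, 3} – window refs[2:6]
print(working_set(refs, 9, 4))   # {4, 5}    – the locality has shifted
```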


Sample FAQ?
Q.1: What is a memory leak, and how does it affect system performance?

Answer
 A memory leak occurs when a program fails to release memory that it no longer needs, resulting in wasted memory resources. Over time, as memory leaks accumulate, the system's available memory diminishes, leading to reduced performance and possibly system crashes. A contrived example follows.
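A contrived Python illustration of the pattern, with an invented `process` helper: a module-level cache that is only ever appended to keeps every result reachable, so the memory is never freed.

```python
# A contrived illustration of a leak-like pattern: a cache that is only
# ever appended to keeps every object reachable, so memory is never freed.

_cache = {}   # module-level, lives for the whole program run

def process(request_id: int) -> bytes:
    result = b"x" * 1_000_000          # ~1 MB of work product
    _cache[request_id] = result        # stored, but never evicted
    return result

for i in range(100):
    process(i)                         # ~100 MB is now pinned in _cache

# Fix: evict entries (e.g., bound the cache size) or use weak references.
```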
FAQ?
Q.2: Can memory fragmentation be prevented in an operating system?

Answer
 While it is challenging to completely eliminate memory fragmentation, certain techniques can help minimize its impact. One approach is to use memory allocation algorithms that reduce external fragmentation, such as buddy systems or slab allocation. Additionally, techniques like compaction can be employed to eliminate existing external fragmentation.
FAQ?
Q.3: What are the advantages and disadvantages of using virtual memory?

Answer
 Benefits:
   the ability to run programs larger than the available physical memory,
   increased security through memory isolation, and
   simplified memory management for applications.
 Disadvantages:
   additional overhead due to page-table lookups, and
   potential performance degradation if excessive swapping (thrashing) occurs.
End of Chapter Three
