OS Lecture-11 (Memory Management)

The document discusses memory management in computer systems, covering key concepts such as memory protection, address binding, and various memory allocation strategies. It explains the use of base and limit registers for memory protection, the processes of dynamic loading and linking, and the implications of fragmentation in memory allocation. Additionally, it outlines different memory management schemes including contiguous allocation, fixed and variable partitioning, and the effects of swapping on process execution.

OS Lecture-11

Memory Management
Background
• Memory is central to the operation of a modern computer system.
• Memory consists of a large array of words or bytes, each with its own address.
• Selection of a memory-management method for a specific system depends on many factors, especially on the hardware design of the system.
Basic Hardware
• Main memory, registers, and cache are the ONLY storage the CPU can access directly.
• A program must be brought (from disk) into memory and placed within a process for it to be run.
• Direct storage access time:
─ Register access takes one CPU clock cycle (or less).
─ Main memory access can take many cycles (i.e., it is slow).
─ Solution: a cache (fast memory) sits between main memory and the CPU registers.
• Protection of memory:
– Protection of memory is necessary to ensure correct operation.
– Protect the operating system from access by user processes.
– Protect user processes from one another.
– This protection must be provided by the hardware and can be implemented in several ways.
Base and Limit Registers
• One method to implement protection: use base and limit registers.
• We need to make sure that each process has a separate memory space. How?
• Main idea: determine the range of legal addresses that only the process itself may access.
─ We can provide this protection by using two registers (base and limit).
• The base register holds the physical address of the first byte in the legal range.
• The limit register holds the size of the range.


Base and Limit Registers (cont.)
• Example: if the base register holds 300040 and the limit register holds 120900, what is the range of legal addresses?
– The program can legally access all addresses from 300040 through 420939 (inclusive).
• NOTE: last legal physical address = base + limit - 1


Hardware Address Protection
• How do base and limit registers help provide memory protection?
• By applying two procedures:
– Procedure (1): the CPU hardware compares every address generated in user mode with the registers.
– Procedure (2): the ability to load the base and limit registers is restricted to the OS.
Hardware Address Protection (cont.)
• Procedure (1): the CPU hardware compares every address generated in user mode with the registers.
• If (CPU-generated address ≥ base) and (CPU-generated address < base + limit):
₋ then the address is legal and allowed to access memory;
₋ else the address is illegal and NOT allowed to access memory, causing a trap (error) to the OS.
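The comparison above can be made concrete with a small sketch (this check is performed in hardware, not software; `is_legal` is an illustrative name, and the register values are the earlier example's numbers):

```python
# Sketch of the hardware base/limit comparison, using the slide's example.
BASE = 300040    # base register: first byte of the legal range
LIMIT = 120900   # limit register: size of the legal range

def is_legal(addr: int) -> bool:
    # An address is legal iff base <= addr < base + limit.
    return BASE <= addr < BASE + LIMIT

print(is_legal(300040))  # first legal address -> True
print(is_legal(420939))  # last legal address (base + limit - 1) -> True
print(is_legal(420940))  # one past the end -> False (trap to the OS)
```

An illegal address would cause a trap to the operating system rather than simply returning False, but the legality condition is exactly this pair of comparisons.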
Hardware Address Protection (cont.)
• Procedure (2): the ability to load the base and limit registers is restricted to the OS.
• This restriction is applied by using special privileged instructions.
• Since privileged instructions can be executed only in kernel mode, and since only the operating system executes in kernel mode, ONLY the operating system can load the base and limit registers.
• This scheme allows the operating system to change the value of the registers but prevents user programs from changing the registers' contents.
Address Binding
(Figure: multistep processing of a user program)
• Address binding (or relocation): the process of associating program instructions and data with physical memory addresses.
• A user program goes through several steps, some of which may be optional, before being executed (compiling >>> linking >>> execution).
• Addresses may be represented in different ways during these steps:
─ Addresses in the source program are generally symbolic (such as count, age).
─ A compiler will typically bind these symbolic addresses to relocatable addresses (such as "14 bytes from the beginning of this module").
─ The linkage editor or loader will in turn bind the relocatable addresses to absolute (physical) addresses (such as 74014).
• Each binding is a mapping from one address space to another.

Binding of Instructions and Data to Memory
• The binding of instructions and data to memory addresses can be done at any step along the way:
– Compile time: the compiler translates symbolic addresses to absolute addresses. If it is known at compile time where the process will reside in memory, then absolute code can be generated (static).
– Load time: when it is not known at compile time where the process will reside in memory, the compiler translates symbolic addresses to relative (relocatable) addresses. The loader then translates these to absolute addresses (static).
– Execution time: if the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. The absolute addresses are generated by special hardware (e.g., the MMU). Most general-purpose OSs use this method (dynamic).
• Static: new locations are determined before execution.
• Dynamic: new locations are determined during execution.

Logical vs. Physical Address Space
• Logical (virtual) address: an address generated by the CPU.
• Physical address: the address seen by the memory hardware.
• Compile-time / load-time binding >>> logical address = physical address.
• Run-time binding >>> logical address ≠ physical address.
• Logical address space: the set of all logical addresses generated by a program.
• Physical address space: the set of all physical addresses corresponding to these logical addresses.
• MMU (memory-management unit): the hardware device that maps virtual addresses to physical addresses at run time.
Simple Address Mapping Method

• For now, we illustrate address mapping with a simple MMU scheme:
– The base register is now called a relocation register.
– Basic methodology: physical address = logical address + relocation register.
– For example, if the relocation register contains 14000:
• logical address 0 >>> physical address 0 + 14000 = 14000
• logical address 346 >>> physical address 346 + 14000 = 14346
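The mapping above can be sketched as a one-line function (an illustrative sketch; the MMU does this in hardware on every memory access, and `to_physical` is a hypothetical name):

```python
# Sketch of the simple MMU scheme: every logical address is shifted
# by the relocation (base) register before it reaches memory.
RELOCATION = 14000  # value of the relocation register in the example

def to_physical(logical: int) -> int:
    return logical + RELOCATION

print(to_physical(0))    # -> 14000
print(to_physical(346))  # -> 14346
```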
Dynamic Relocation Using a Relocation Register
• The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory.
• The user program deals only with logical addresses; it never sees the real physical addresses.
Dynamic Loading
• A routine is not loaded until it is called.
• All routines are kept on disk in a relocatable load format.
• The main program is loaded and executed.
• When a routine needs to call another routine, the relocatable linking loader is called to load the desired routine.
• Control is then passed to the newly loaded routine.
• Thus a routine is brought into memory only when it is needed.

Dynamic Loading - Example

In memory:
    main()
    {
        ---;
        ---;
        routine();   // routine is called dynamically
        ---;
        ---;
    }

On secondary storage (loaded only when routine() is first called):
    routine()
    {
        -----;
        -----;
        -----;
    }
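As a loose analogy in Python (this shows lazy module loading, not the OS-level mechanism; `call_dynamically` is a hypothetical helper): a routine can stay on disk and be brought in by the standard `importlib` machinery only when the call is actually reached:

```python
import importlib

def call_dynamically(module_name: str, func_name: str, *args):
    # The module stays on disk until this point; importlib plays the
    # role of the "relocatable linking loader" that brings it in.
    module = importlib.import_module(module_name)
    routine = getattr(module, func_name)
    return routine(*args)   # control passes to the newly loaded routine

# math.sqrt is loaded and called only here, not at program start.
print(call_dynamically("math", "sqrt", 16.0))  # -> 4.0
```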
Dynamic Loading Advantages
• Better memory utilization: an unused routine is never loaded.
• Useful when large amounts of code are needed to handle infrequently occurring cases.
– E.g., error routines: although the total program size may be large, the portion actually used is much smaller.
• No special support from the operating system is required; dynamic loading is implemented through program design.
• The operating system can provide library routines to implement dynamic loading.
Dynamic Linking
• The concept of dynamic linking is similar to dynamic loading.
• Instead of loading, linking is postponed until execution time.
• This feature is usually used with system libraries.
• A stub is included in the image for each library-routine reference.

Dynamic Linking (cont.)
• A stub is a small piece of code that indicates how to locate or load the library routine.
• When the stub is executed, it replaces itself with the address of the routine and then executes the routine.
• The next time that code segment is reached, the library routine is executed directly, with no dynamic-linking cost.
Dynamic Linking - Example

In memory:
    main()
    {
        ---;
        ---;
        printf();    // stub is linked dynamically
        ---;
        ---;
    }

On secondary storage (a system library, linked at execution time):
    printf()
    {
        -----;
        -----;
        -----;
    }
Swapping
• Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.
• The backing store is usually a hard disk drive or other secondary storage that is fast to access and large.
Swapping (cont.)
• How is swapping performed?
– The system maintains a ready queue consisting of all processes whose memory images are on the backing store or in memory and are ready to run.
– Whenever the CPU scheduler decides to execute a process, it calls the dispatcher.
– The dispatcher checks whether the next process in the queue is in memory.
– If it is not, and if there is no free memory region, the dispatcher swaps out a process currently in memory and swaps in the desired process.
– The dispatcher then reloads registers and transfers control to the selected process.
Swapping (cont.)
• The major time-consuming part of swapping is transfer time.
• Total transfer time ∝ amount of memory swapped.
• Context-switch time in such a swapping system is therefore high.
• Example: a 100 MB process swapping to a hard disk with a transfer rate of 50 MB/sec:
– Swap-out time of 2000 ms
– Plus swap-in of a same-sized process
– Total context-switch swapping component of 4000 ms (4 seconds)
• Swapping is normally disabled, but it starts if many processes are running and are using a threshold amount of memory.
• Swapping is halted again when the load on the system is reduced.
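The arithmetic in the example above can be written out directly (a small sketch; `swap_time_ms` is an illustrative name):

```python
def swap_time_ms(process_mb: float, rate_mb_per_s: float) -> float:
    # transfer time = amount of memory swapped / disk transfer rate
    return process_mb / rate_mb_per_s * 1000.0

swap_out = swap_time_ms(100, 50)           # -> 2000.0 ms
total = swap_out + swap_time_ms(100, 50)   # plus swap-in of a same-sized process
print(swap_out, total)                     # -> 2000.0 4000.0
```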
Basic Memory Management Schemes

Memory-management schemes (strategies):
1. Contiguous allocation
– Multiple-partition (fixed-partition allocation) method
– Variable-partition method
2. Paging
– Conventional paging (basic method)
– Hierarchical paging
– Hashed paging
– Inverted paging
3. Segmentation
Memory Management Strategy #1:
Contiguous Memory Allocation
Contiguous Memory Allocation
• Memory is usually divided into two partitions:
– One for the resident OS
– One for the user processes
• We can place the OS in either low memory or high memory.
• Basic methodology: each process is contained in a single contiguous section of memory.
• The contiguous-allocation strategy can be applied in two ways:
1. Multiple-partition (fixed-partition allocation) method
2. Variable-partition allocation method
Mono-programming without Swapping
• The simplest memory-management type.
• Just one process is in memory at a time.
• The process uses all of the memory.
• Operation:
─ The user types a command.
─ The OS loads the requested program from disk to memory and executes it.
─ When the process is finished, the OS prompts and waits for another command.
(Figure: three layouts, from address 0 up to 0xFFF...: (a) OS in RAM at the bottom, user program above; (b) user program at the bottom, OS in ROM at the top; (c) OS in RAM at the bottom, user program in the middle, device drivers in ROM at the top. The IBM PC uses model (c), with device drivers in the highest 8 KB block of the 1 MB memory space.)
Multiprogramming with Fixed Partitions
• Scenario: multiple programs in memory at a time.
– Problem: how do we allocate memory?
• Divide memory into n partitions:
– Equal partitions vs. unequal partitions.
– This can be done manually when the system is brought up.
• When a job arrives, put it into the input queue for the smallest partition large enough to hold it.
– Any space in a partition not used by its job is lost.
Example: Multiprogramming With Fixed Partitions

(Figure: multiple input queues, one per partition. Memory layout from the bottom: OS at 0, Partition 1 from 100K, Partition 2 from 200K, Partition 3 from 400K, Partition 4 from 700K to 800K.)
Single Input Queue
• Disadvantage of multiple input queues: small jobs may wait in their partition's queue while the queue of a larger partition is empty.
• Solution: a single input queue shared by all partitions.
(Figure: the same memory layout, OS at 0, Partition 1 from 100K, Partition 2 from 200K, Partition 3 from 400K, Partition 4 from 700K to 800K, served by one input queue.)
Variable-partition Allocation
• Problem with fixed multiple-partition allocation: the degree of multiprogramming is limited by the number of partitions.
• Solution: variable-partition sizes for efficiency (each partition sized to a given process's needs).
• Hole: a block of available memory; holes of various sizes are scattered throughout memory.
• When a process arrives, it is allocated memory from a hole large enough to accommodate it.
• When a process exits, it frees its partition, and adjacent free partitions are combined.
• The operating system maintains information about:
a) allocated partitions b) free partitions (holes)
Variable-partition Allocation (cont.)
• Example: total memory of 2560K, with the OS occupying 0-400K.
Process sizes: P1 = 600K, P2 = 1000K, P3 = 300K, P4 = 700K, P5 = 500K.
1. P1, P2, and P3 are loaded: P1 (400-1000), P2 (1000-2000), P3 (2000-2300), hole (2300-2560).
2. P2 terminates, leaving a hole at 1000-2000.
3. P4 (700K) is allocated from that hole: P4 (1000-1700), hole (1700-2000).
4. P1 terminates, leaving a hole at 400-1000.
5. P5 (500K) is allocated from that hole: P5 (400-900), leaving holes at 900-1000, 1700-2000, and 2300-2560.


Variable Storage-Allocation Problem
• How do we satisfy a request of size n from a list of free holes?
• There are several strategies:
– First-fit: allocate the first hole that is big enough.
– Best-fit: allocate the smallest hole that is big enough; the entire list must be searched, unless it is ordered by size. Produces the smallest leftover hole.
– Worst-fit: allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.
• First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
Example
• Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in order), how would each of the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which algorithm makes the most efficient use of memory?

First-fit:
  Start:          100, 500, 200, 300, 600
  P1 (212) → 500: 100, 288, 200, 300, 600
  P2 (417) → 600: 100, 288, 200, 300, 183
  P3 (112) → 288: 100, 176, 200, 300, 183  (final set of holes)
  P4 (426) must wait.

Best-fit:
  Start:          100, 500, 200, 300, 600
  P1 (212) → 300: 100, 500, 200, 88, 600
  P2 (417) → 500: 100, 83, 200, 88, 600
  P3 (112) → 200: 100, 83, 88, 88, 600
  P4 (426) → 600: 100, 83, 88, 88, 174  (final set of holes)

Worst-fit:
  Start:          100, 500, 200, 300, 600
  P1 (212) → 600: 100, 500, 200, 300, 388
  P2 (417) → 500: 100, 83, 200, 300, 388
  P3 (112) → 388: 100, 83, 200, 300, 276  (final set of holes)
  P4 (426) must wait.

>>> In this example, best-fit turns out to be the best because no process has to wait.
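The three strategies in the worked example can be simulated directly (a simplified sketch: leftover space stays in place as a smaller hole, and holes are never split or merged; all function names are illustrative):

```python
def first_fit(holes, size):
    # Index of the first hole big enough, or None.
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    # Index of the smallest hole big enough, or None.
    fits = [i for i, h in enumerate(holes) if h >= size]
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, size):
    # Index of the largest hole, if it is big enough.
    fits = [i for i, h in enumerate(holes) if h >= size]
    return max(fits, key=lambda i: holes[i]) if fits else None

def place_all(strategy, holes, requests):
    holes = list(holes)
    waiting = []
    for name, size in requests:
        i = strategy(holes, size)
        if i is None:
            waiting.append(name)   # no hole fits: process must wait
        else:
            holes[i] -= size       # shrink the chosen hole in place
    return holes, waiting

requests = [("P1", 212), ("P2", 417), ("P3", 112), ("P4", 426)]
for strategy in (first_fit, best_fit, worst_fit):
    print(strategy.__name__, place_all(strategy, [100, 500, 200, 300, 600], requests))
```

Running this reproduces the example's final hole sets and shows that only best-fit places all four processes.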
Fragmentation
• Fragmentation is a problem that appears in memory-allocation schemes when memory space becomes unusable.
• Fragmentation types:
─ External fragmentation: enough memory space exists to satisfy a request, but it is not contiguous (i.e., storage is fragmented into a large number of small holes).
• Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation.
• As much as one-third of memory may be unusable!
─ Internal fragmentation: allocated memory may be larger than requested memory (i.e., some memory within a partition may be left unused).
Example
(Figure: with fixed-sized partitions, each partition holds a used region plus an unused remainder inside the partition; this is internal fragmentation. With variable-sized partitions, each partition is fully used, but free blocks are scattered between the partitions; this is external fragmentation.)
External Fragmentation Solution
• Compaction: memory contents are shuffled to place all free memory together in one large block.
– Dynamic relocation (run-time binding) is required.
– Compaction is expensive (high overhead).
• Alternatively, permit the logical address space of a process to be non-contiguous, allowing the process to be allocated physical memory wherever it is available.
– Two techniques achieve this:
• Paging
• Segmentation
Compaction Example
• Original allocation (addresses in K): OS (0-300), P1 (300-500), P2 (500-600), hole (600-1000), P3 (1000-1200), hole (1200-1500), P4 (1500-1900), hole (1900-2100).
• Three ways to compact the 400K + 300K + 200K of free space into one 900K hole:
─ Move 600K: slide P3 down to 600-800 and P4 down to 800-1200, leaving one hole at 1200-2100.
─ Move 400K: move only P4 down into the 600-1000 hole; P3 stays put, leaving one hole at 1200-2100.
─ Move 200K: move only P3 up to 1900-2100; P4 stays put, leaving one hole at 600-1500.
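The first compaction option (slide every block toward low memory so the free space merges) can be sketched with the figure's numbers, in K (`compact` is a hypothetical name, and this ignores the cost of actually copying memory):

```python
def compact(os_size, memory_size, blocks):
    # blocks: list of (name, start, size) for allocated regions.
    # Slide every block down to sit directly above the OS; this is why
    # run-time binding is required: each block's addresses change.
    addr = os_size
    moved = []
    for name, _start, size in sorted(blocks, key=lambda b: b[1]):
        moved.append((name, addr, size))
        addr += size
    hole = (addr, memory_size - addr)   # one large hole at the top
    return moved, hole

blocks = [("P1", 300, 200), ("P2", 500, 100), ("P3", 1000, 200), ("P4", 1500, 400)]
moved, hole = compact(os_size=300, memory_size=2100, blocks=blocks)
print(moved)   # P1..P4 packed upward from 300K
print(hole)    # -> (1200, 900): a single 900K hole
```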
Internal vs. External Fragmentation
1. Definition:
– Internal fragmentation: the difference between the required memory space and the allotted memory space.
– External fragmentation: small, non-contiguous free memory blocks that cannot be assigned to any process.

2. Memory block size:
– Internal fragmentation occurs when allotted memory blocks are of fixed size.
– External fragmentation occurs when allotted memory blocks are of varying size.

3. Occurrence:
– Internal fragmentation occurs when a process is allotted a memory block larger than it needs, so part of the block is left unused.
– External fragmentation occurs when a process is removed from main memory, leaving a hole that may be too small for later requests.

4. Solution:
– Best-fit block search reduces internal fragmentation.
– Compaction is the solution for external fragmentation.

5. Example:
– Internal fragmentation occurs when paging is employed.
– External fragmentation occurs when segmentation is employed.
Virtual Memory
• Virtual memory was developed to address the issues identified with the simple schemes covered thus far.
• Two classic variants:
– Paging
– Segmentation
• Paging is now the dominant of the two.
• Some architectures support hybrids of the two schemes, e.g., Intel IA-32 (32-bit x86).
