
Operating Systems

Memory Management

Alok Kumar Jagadev


Background

Main Memory

[Figure: a program image on disk is loaded into main memory; the CPU (registers and cache) fetches instructions and data from the process image in memory; the operating system also resides in main memory.]
2
Memory Management
• Memory management is the process of controlling and coordinating
computer memory
– assigning portions, called blocks, to various running programs to
optimize overall system performance.
• Memory management requires
– Hardware: components that physically store data, such as RAM and
cache memory
– OS: allocation/reallocation of specific memory blocks to individual programs
as user demands change
– Programs & applications: ensuring the availability of adequate memory for the
objects and data structures of each running program at all times

3
Memory Management
• In most memory management schemes, the kernel occupies some fixed
portion of main memory and the rest is shared by multiple processes

• Memory management is the process of


– Allocating memory to user programs
– reclaiming that memory when it is no longer needed
– protecting each user’s memory area from other user programs
• ensuring that each program only references memory locations that have
been allocated to it.

4
Memory Management
• In order to manage memory effectively the OS must have
– Memory allocation policies
– Methods to track the status of memory locations (free or allocated)
– Policies for preempting memory from one process to allocate to
another

5
Memory Management Requirements
• Memory management is intended to satisfy the following requirements:
− Relocation
− Protection
− Sharing
− Logical organization
− Physical organization

6
Requirements: Relocation
• Relocation is the process of adjusting program addresses to match the
actual physical addresses where the program resides when it executes

• Why is relocation needed?


− programmer cannot know where the program will be placed in
memory when executed
− a process may (often) be relocated in main memory due to
swapping (to maximize CPU utilization)
− swapping enables the OS to have a larger pool of ready-to-execute
processes
− memory references in code (for both instructions and data) must be
translated to actual physical address

7
Requirements: Protection

§ Processes should not be able to reference memory locations in another


process without permission
§ it is impossible to check addresses at compile time, since the
program could be relocated
§ Memory references generated by a process must be checked at run time

8
Requirements: Sharing

§ must allow several processes to access a common portion of main


memory without compromising protection
§ cooperating processes may need to share access to the same data
structure
§ allow each process to access the same copy of the program rather
than have their own separate copy

9
Requirements: Logical Organization
• Main memory is organized as a linear (1-D) address space consisting of a
sequence of bytes or words.
• users write programs in modules with different characteristics
– instruction modules are execute-only
– data modules are either read-only or read/write
– some modules are private, others are public
• To effectively deal with user programs, the OS and hardware should
support a basic form of module to provide the required protection and
sharing

10
Requirements: Physical Organization

• secondary memory is the long-term store for programs and data, whereas
main memory holds the programs and data currently in use
• moving information between these two levels of memory is a major
concern of memory management
– it is highly inefficient to leave this responsibility to the application
programmer

11
Simple Memory Management

§ An executing process must be loaded entirely in main memory (if


overlays are not used)
§ Overlays: exploit the fact that a process will not use the complete program
at the same time while running; only the parts currently needed are kept in memory
§ Simple memory management techniques
§ fixed partitioning
§ dynamic partitioning
§ simple paging
§ simple segmentation

§ are not used in modern OS, but they lay the ground for a proper
discussion of virtual memory

12
Fixed Partitioning

• Fixed partitions: Partition main memory into a set of


non-overlapping regions called partitions
• Partitions can be of equal or unequal sizes
• Any process whose size is less than or equal to the
partition size can be loaded into an available partition
• If there is an available partition, a process can be
loaded into that partition
– because all partitions are of equal size, it does not
matter which partition is used
• If all partitions are occupied by blocked processes,
choose one process to swap out to make room for the
new process

13
Fixed Partitioning Problems

• A program may be too large to fit in a partition. The programmer must


then design the program with overlays
– when the module needed is not present, it is loaded into the partition

• Main memory utilization is inefficient


• any program, regardless of size, occupies an entire partition
– internal fragmentation : wasted space due to the block of data
loaded being smaller than the partition

14
Solution – Unequal Size Partitions

• Using unequal size partitions helps lessen the


problems
– programs up to 16M can be accommodated
without overlays
– partitions smaller than 8M allow smaller
programs to be accommodated with less
internal fragmentation

15
Unequal Size Partitions

• Unequal-size partitions: use of multiple queues


– assign each process to the smallest partition within which it will fit
– A queue for each partition size
– tries to minimize internal fragmentation
– Problem: some queues will be empty if no processes within a size
range are present

16
Dynamic Partitioning

• Partitions are of variable length and number


• Each process is allocated exactly as much memory as it requires
• Eventually holes are formed in main memory. This is called external
fragmentation
• Must use compaction to shift processes so they are contiguous and all
free memory is in one block

17
Dynamic Partitioning Example
• A hole of 64K is left after loading 3 processes: not enough room for
another process
• Eventually each process is blocked. The OS swaps out process 2 to bring
in process 4

18
Dynamic Partitioning: an example
• another hole of 96K is created
• Eventually each process is blocked. The OS swaps out process 1 to bring
in again process 2 and another hole of 96K is created...
• Compaction would produce a single hole of 256K

19
Placement Algorithm
• Used to decide which free block to
allocate to a process
• Goal: to reduce usage of compaction
(time consuming)
• Possible algorithms:
– Best-fit: Allocate the smallest
hole that is big enough; must
search entire list, unless ordered
by size
• Produces the smallest leftover
hole
– First-fit: Allocate the first hole
that is big enough
– Worst-fit: Allocate the largest
hole; must also search entire list
• Produces the largest leftover hole
20
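Below is a minimal sketch, in C, of the three placement policies just listed. It assumes free memory is tracked as a simple array of hole sizes (a hypothetical representation; real allocators keep linked lists of holes), and it uses the partition sizes from the exercise on the next slide as test data.

```c
/* Sketch of first-fit, best-fit and worst-fit placement over an array of
 * hole sizes. Returns the index of the chosen hole, or -1 if none fits. */
#include <stdio.h>

int place(int holes[], int n, int request, char policy) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i] < request) continue;                    /* hole too small */
        if (policy == 'f') return i;                         /* first-fit: first big-enough hole */
        if (chosen == -1 ||
            (policy == 'b' && holes[i] < holes[chosen]) ||   /* best-fit: smallest fitting hole  */
            (policy == 'w' && holes[i] > holes[chosen]))     /* worst-fit: largest fitting hole  */
            chosen = i;
    }
    return chosen;
}

int main(void) {
    int holes[] = {100, 500, 200, 300, 600};   /* partitions from the exercise, in KB */
    int requests[] = {212, 417, 112, 426};
    for (int r = 0; r < 4; r++) {
        int i = place(holes, 5, requests[r], 'b');   /* try 'f', 'b' or 'w' */
        if (i >= 0) {
            holes[i] -= requests[r];
            printf("%dK placed in hole %d, which shrinks to %dK\n", requests[r], i, holes[i]);
        } else {
            printf("%dK must wait\n", requests[r]);
        }
    }
    return 0;
}
```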
Problem – Placement Algorithm
• Given five memory partitions of 100Kb, 500Kb, 200Kb, 300Kb, 600Kb (in
order), how would the first-fit, best-fit, and worst-fit algorithms place
processes of 212 Kb, 417 Kb, 112 Kb, and 426 Kb (in order)? Which
algorithm makes the most efficient use of memory?

21
Solution – Placement Algorithm
First-fit:
212K is put in 500K partition
417K is put in 600K partition
112K is put in 288K partition (new partition 288K = 500K - 212K)
426K must wait

Best-fit:
212K is put in 300K partition
417K is put in 500K partition
112K is put in 200K partition
426K is put in 600K partition

22
Solution – Placement Algorithm
Worst-fit:
212K is put in 600K partition
417K is put in 500K partition
112K is put in 388K partition
426K must wait

In this example, best-fit turns out to be the best.

23
Address Types

• Physical address
• Logical address
• Relative address

24
Physical Address Space
Physical Address
• Identifies a physical location of required data in a memory
• User never directly deals with the physical address
– can access by its corresponding logical address
• The user program generates the logical address
– program needs physical memory for its execution
• a logical address must be mapped to the physical address by the MMU before
it is used
• The term Physical Address Space is used for all physical addresses
corresponding to the logical addresses in a Logical address space

25
Logical Address Space
Logical Address
• generated by CPU while a program is running
• also known as virtual address as it does not exist physically
• used as a reference to access the physical memory location by CPU
• The term Logical Address Space is used for the set of all logical addresses
generated from a program’s perspective.
• The hardware device called Memory-Management Unit (MMU) is used for
mapping logical address to its corresponding physical address.

26
Logical address space concept
• The logical address space concept is different from physical addresses.
• A program uses logical addresses.
• The set of logical addresses used by the program is its logical address space
– the logical address space can be, for example, [0, max_address]
• The logical address space has to be mapped somewhere in physical memory.

[Figure: the program's logical address space [0, logic_max] is mapped into RAM at a base address and bounded by a limit, somewhere within physical memory [0, phy_max].]

27
Memory-Management Unit (MMU)
• A memory management unit (MMU) is a computer hardware component
that handles all memory and caching operations associated with the
processor.
• MMU is responsible for all aspects of memory management.
• It is usually integrated into the processor, but in some systems it occupies
a separate chip.

28
Comparison

PARAMETER | LOGICAL ADDRESS | PHYSICAL ADDRESS
Basic | generated by the CPU | location in a memory unit
Address Space | set of all logical addresses generated by the CPU in reference to a program | set of all physical addresses mapped to the corresponding logical addresses
Visibility | the user can view the logical address of a program | the user can never view the physical address of a program
Generation | generated by the CPU | computed by the MMU
Access | the user can use the logical address to access the physical address | the user can access the physical address only indirectly
29
Address Translation

• A relative address is an example of logical address in which the


address is expressed as a location relative to some known point in the
program (Ex: the beginning)

• Relative address is the most frequent type of logical address used (in
executable files)
• Such modules are loaded in main memory with all memory
references in relative form
• Physical addresses are calculated
• For adequate performance, the translation from relative to physical
address must be done by hardware

30
Base and Limit Registers

• A pair of base and limit registers


define the logical address space
• Memory management provides
protection by using two registers, a
base register and a limit register.
• The base register holds the smallest
legal physical memory address and
the limit register specifies the size
of the range.
• CPU must check every memory
access generated in user mode to
be sure it is between base and limit
for that user

31
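A small sketch of the base/limit check described above: every address generated in user mode must satisfy base <= address < base + limit, otherwise the hardware traps to the OS. The register values below are made-up examples.

```c
/* Base/limit protection check; out-of-range accesses would cause a trap. */
#include <stdio.h>

static unsigned base_reg  = 300040;   /* smallest legal physical address  */
static unsigned limit_reg = 120900;   /* size of the process's legal range */

int access_ok(unsigned addr) {
    return addr >= base_reg && addr < base_reg + limit_reg;
}

int main(void) {
    unsigned probes[] = {300040, 420939, 420940};
    for (int i = 0; i < 3; i++)
        printf("address %u -> %s\n", probes[i],
               access_ok(probes[i]) ? "allowed" : "trap: addressing error");
    return 0;
}
```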
Hardware Address Protection with Base
and Limit Registers

32
Binding of Instructions and Data to
Memory
• Address binding of instructions and data to (physical) memory addresses can
happen at three different stages
– Compile time
• If it is known at compile time where a program will reside in physical
memory, then absolute code can be generated by the compiler, containing
actual physical addresses.
– Physical addresses are embedded in the code
• If the load address changes at some later time, then the program will have
to be recompiled.
• Advantage: minimal setup time (the location where the program will be
loaded is already known).
• Disadvantage: if the chosen location is already occupied when the program
is loaded (a collision), the current program will overwrite it.

33
Binding of Instructions and Data to
Memory
– Load time
• If the location at which a program will be loaded is not known at compile
time, then the compiler must generate relocatable code, which references
addresses relative to the start of the program.
• If that starting address changes, then the program must be reloaded but not
recompiled.

34
Binding of Instructions and Data to
Memory
– Execution time
• If a program can be moved from one memory segment to another
during execution, then binding must be delayed until execution time.
• Needs special hardware support for address maps (e.g., base and limit
registers)
• This method is implemented by most modern OS.

35
Multistep Processing of a User Program
• User programs go through several
steps before being run.
• Program components do not
necessarily know where in RAM
they will be loaded
• RAM deals with absolute
addresses
• Logical addresses need to be
bound to physical addresses at
some point.

36
Dynamic relocation using a relocation
register

Dynamic Loading
§ With dynamic loading, a routine is not loaded until it is called
§ All routines are kept on disk in a relocatable load format
§ The main program is loaded into memory and is executed
§ When a routine needs to call another one, the calling routine first checks to see
whether the other routine has been loaded
§ If not, the relocatable linking loader is called to load the desired routine into
memory
§ Then, control is passed to the newly loaded routine
§ Advantage of dynamic loading: An unused routine is never loaded
§ Dynamic loading doesn't require special support from the OS
§ The user must design programs to take advantage of this
§ However, OS may help the programmer by providing library routines to
implement dynamic loading
37
Swapping
• A process must be loaded into memory in order to execute.
• If there is not enough memory available to keep all running processes in
memory at the same time,
– some processes that are not currently using the CPU are swapped out to a fast local
disk called the backing store
– Then brought back into memory for continuing execution
• Backing store
– fast disk large enough to accommodate copies of all memory images for all
users;
– must provide direct access to these memory images
• swap out, swap in – swapping used for priority-based scheduling
algorithms; lower-priority process is swapped out so higher-priority
process can be loaded and executed
• Major part of swap time is transfer time;
– total transfer time is directly proportional to the amount of memory swapped
38
Schematic View of Swapping

39
Swapping
• If compile-time or load-time address binding is used
– processes must be swapped back into the same memory location from which
they were swapped out.
• If execution time binding is used,
– processes can be swapped back into any available location.
• Swapping is a very slow process compared to other operations.
• Example,
– if a user process occupied 10 MB and the transfer rate for the backing store
were 40 MB per second, then it would take 1/4 second ( 250 milliseconds ) just
to do the data transfer.
– Ignoring head seek time and rotational latency, and since swapping involves moving old data
out as well as new data in, the overall transfer time required for this swap is
500 milliseconds, or half a second.
– For efficient processor scheduling the CPU time slice should be significantly
longer than this lost transfer time.
40
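The swap-time arithmetic in the example above can be sketched in a few lines of C: the one-way transfer time is process size divided by transfer rate, and a full swap moves the old process out and the new one in, doubling the cost.

```c
/* Swap-time estimate for the 10 MB / 40 MB-per-second example above. */
#include <stdio.h>

int main(void) {
    double size_mb = 10.0;        /* process size                */
    double rate_mb_per_s = 40.0;  /* backing-store transfer rate */
    double one_way_ms = size_mb / rate_mb_per_s * 1000.0;  /* 250 ms */
    double swap_ms = 2.0 * one_way_ms;                      /* 500 ms */
    printf("one-way transfer: %.0f ms, full swap: %.0f ms\n", one_way_ms, swap_ms);
    return 0;
}
```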
Paging
• Paging is a memory management scheme that eliminates the need for
contiguous allocation of physical memory.
• This scheme permits the physical address space of a process to be non
–contiguous.

• Logical Address or Virtual Address (represented in bits) : An address


generated by the CPU
• Logical Address Space or Virtual Address Space (represented in words or
bytes) : The set of all logical addresses generated by a program
• Physical Address (represented in bits) : An address actually available on
memory unit
• Physical Address Space (represented in words or bytes) : The set of all
physical addresses corresponding to the logical addresses
41
Paging
Example:
• If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G
words (1 G = 2^30)
• If Logical Address Space = 128 M words = 2^7 × 2^20 words = 2^27 words, then
Logical Address = log2(2^27) = 27 bits
• If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4
M words (1 M = 2^20)
• If Physical Address Space = 16 M words = 2^4 × 2^20 words = 2^24 words, then
Physical Address = log2(2^24) = 24 bits

42
Paging
• The mapping from virtual to physical address is done by the memory
management unit (MMU) which is a hardware device and this mapping
is known as paging technique.

• The Physical Address Space is conceptually divided into a number of


fixed-size blocks, called frames.
• The Logical Address Space is also split into fixed-size blocks, called
pages.
• Page Size = Frame Size

43
Paging

[Figure: a program's logical address space is a set of fixed-size pages (0, 1, 2, ...); physical memory (RAM) is a set of fixed-size frames, each of size 2^x; page size = frame size.]
44
Paging
[Figure: a program with six pages is loaded into RAM; the page table maps
page 0 → frame 1, page 1 → frame 4, page 2 → frame 2,
page 3 → frame 7, page 4 → frame 9, page 5 → frame 6.]
45
Example

46
Paging
Example:
• Physical Address = 12 bits, then Physical Address Space = 4 K words
• Logical Address = 13 bits, then Logical Address Space = 8 K words
• Page size = frame size = 1 K words (assumption)

47
Paging
Address generated by CPU is divided into
• Page number(p): Number of bits required to represent the pages in
Logical Address Space or Page number
• Page offset(d): Number of bits required to represent particular word in a
page or page size of Logical Address Space or word number of a page
or page offset.

Physical Address is divided into


• Frame number(f): Number of bits required to represent the frame of
Physical Address Space or Frame number.
• Frame offset(d): Number of bits required to represent particular word in
a frame or frame size of Physical Address Space or word number of a
frame or frame offset.
48
Address Translation Scheme
§ Assume Logical Addresses are m bits. Then logical address space is 2m
bytes.
§ Assume page size is 2n bytes.

• Logical Address generated by CPU is divided into:


– Page number (p) – used as an index into a page table which contains
base address of each page in physical memory
– Page offset (d) – combined with base address to define the physical
memory address that is sent to the memory unit

page number page offset


p d
(m – n) bits n bits

m bits
49
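A minimal sketch, in C, of the split just described: for an m-bit logical address and a page size of 2^n bytes, the page number is the upper (m − n) bits and the offset is the lower n bits. The addresses used below are the ones from the exercise on the next slide, with a 1 KB page size.

```c
/* Page-number / offset split using shifts and masks. */
#include <stdio.h>

int main(void) {
    unsigned n = 10;                                       /* page size = 2^10 = 1 KB */
    unsigned addrs[] = {2375, 19366, 30000, 256, 16385};   /* from the next slide     */
    for (int i = 0; i < 5; i++) {
        unsigned page   = addrs[i] >> n;                   /* upper (m - n) bits */
        unsigned offset = addrs[i] & ((1u << n) - 1);      /* lower n bits       */
        printf("LA %5u -> page %2u, offset %4u\n", addrs[i], page, offset);
    }
    return 0;
}
```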
Problem: Paging
• Assuming a 1 KB page size, what are the page numbers and offsets for the
following address references (provided as decimal numbers):

a. 2375
b. 19366
c. 30000
d. 256
e. 16385

50
Solution: Paging
Answer:
– Page size = 2^n = 1024 B = 2^10 B
– # of bits in offset part (n) = 10

Solution steps :
1. Convert logical address: Decimal → Binary
2. Split binary address to 2 parts (page # , Offset), offset : n digits
3. Convert offset & page# : Binary → Decimal

51
Solution: Paging

Logical address (decimal) | Logical address (binary) | Page # (6 bits, binary) | Offset (10 bits, binary) | Page # (decimal) | Offset (decimal)
2375  | 0000 1001 0100 0111 | 0000 10 | 01 0100 0111 | 2  | 327
19366 | 0100 1011 1010 0110 | 0100 10 | 11 1010 0110 | 18 | 934
30000 | 0111 0101 0011 0000 | 0111 01 | 01 0011 0000 | 29 | 304
256   | 0000 0001 0000 0000 | 0000 00 | 01 0000 0000 | 0  | 256
16385 | 0100 0000 0000 0001 | 0100 00 | 00 0000 0001 | 16 | 1

52
Problem: Paging
Consider a logical address space of 64 pages of 1024 words each, mapped onto
a physical memory of 32 frames.

a) How many bits are there in the logical address?


b) How many bits are there in the physical address?

53
Solution: Paging
Method1:
a) m =???
Size of logical address space = 2^m = # of pages × page size
2^m = 64 × 1024
2^m = 2^6 × 2^10
2^m = 2^16
=> m = 16 bits

54
Solution: Paging
Method2:
m =???
# of pages = 2^(m-n)

n = ???
Page size = 2^n
1024 = 2^n
2^10 = 2^n => n = 10 bits

Again: # of pages = 2^(m-n)

64 = 2^(m-10)
2^6 = 2^(m-10)
6 = m - 10 => m = 16 bits
55
Solution: Paging
b)
Let (x) is number of bits in the physical address
x =???
Size of physical address space = 2^x
Size of physical address space = # of frames × frame size
(frame size = page size)
Size of physical address space = 32 × 1024
2^x = 2^5 × 2^10
2^x = 2^15
=> number of required bits in the physical address = x = 15 bits

56
Example: Address Translation
• 4-bit logical address, page size = 4 bytes (2-bit offset), 32-byte physical
memory (5-bit physical address); frame numbers come from the page table in
the figure.
• LA = 5 (0101): page 1, offset 01 → PA = 11001
• LA = 11 (1011): page 2, offset 11 → PA = 00111
• LA = 13 (1101): page 3, offset 01 → PA = 01001
57
Example: Address Translation
• m = 3, so there are 2^3 = 8 logical addresses; n = 2, so page size = 2^2 = 4:
1 bit for the page number and 2 bits for the offset.
• Logical memory: page 0 holds A, B, C, D (addresses 000–011); page 1 holds
E, F, G, H (addresses 100–111).
• Page table: page 0 → frame 11, page 1 → frame 10 (2 bits for the frame number).
• Physical memory: frame 10 (addresses 1000–1011) holds E, F, G, H; frame 11
(addresses 1100–1111) holds A, B, C, D.
• Each page-table entry maps 4 addresses (one page).
58
Example: Address Translation

• 16-bit logical address: 0010 0000 0000 0100; page size = 4096 bytes, so the
offset is 12 bits and the page number is the upper 4 bits.
• Page number = 0010 (page 2); the page table maps page 2 → frame 110 with the
valid bit set.
• Physical address = frame number followed by offset = 110 000000000100
(15-bit physical address).

[Figure: 16-entry page table; each entry holds a 3-bit frame number and a
valid/invalid bit.]
59
Free Frames

• The OS keeps information about the frames in its frame table.

[Figure: free-frame list before and after allocating frames to a new process.]
60
Paging:
Address Translation
• The hardware implementation of page table can be done by using
dedicated registers.
• But the use of registers for the page table is satisfactory only if the page table
is small.
• If the page table contains a large number of entries, a TLB
(translation look-aside buffer), a special, small, fast-lookup hardware
cache, can be used.

• The TLB is associative, high speed memory.


• Each entry in TLB consists of two parts: a tag and a value.
• When this memory is used, then an item is compared with all tags
simultaneously.
– If the item is found, then corresponding value is returned.
61
Paging Hardware:
Address Translation

62
Implementation of Page Table
• Page table is kept in main memory
– Page-table base register (PTBR) points to the page table
– Page-table length register (PTLR) indicates size of the page table
• In this scheme every data/instruction access requires two memory accesses.
– One for the page table and
– one for the data/instruction
• The two memory access problem can be solved by the use of a special fast-
lookup hardware cache called associative memory or translation look-
aside buffers (TLBs)
• Some TLBs store address-space identifiers (ASIDs) in each TLB entry –
uniquely identifies each process to provide address-space protection for that
process

63
Implementation of Page Table
[Figure: the CPU's PTBR and PTLR point to the page table of the currently
running process (P1); each process (P1, P2) has its own page table, kept in
kernel memory and referenced from its PCB.]
64
Associative Memory
• Associative memory: A type of computer memory from which items may
be retrieved by matching some part of their content, rather than by
specifying their address (hence also called associative storage or Content-
addressable memory (CAM).)
• Associative memory is much slower than RAM, and is rarely encountered
in mainstream computer designs.
Page # Frame #

Address translation (p, d)


– If p is in associative register, get frame # out
– Otherwise get frame # from page table in memory
65
Paging Hardware With TLB

66
Paging

67
Effective Access Time
• Associative Lookup =  time unit
• Assume memory cycle time is 1 microsecond
• Hit ratio – percentage of times that a page number is found in the
associative registers; ratio related to number of associative registers
• Hit ratio = 
• Effective Access Time (EAT)
EAT = (1 + )  + (2 + )(1 – )
=2+–

68
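A small sketch of the EAT formula above in C: with TLB (associative) lookup time ε and hit ratio α, and the memory cycle time taken as 1 time unit, EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α. The values in main are illustrative.

```c
/* Effective access time, in units of the memory cycle time. */
#include <stdio.h>

double eat(double eps, double alpha) {
    return (1.0 + eps) * alpha + (2.0 + eps) * (1.0 - alpha);
}

int main(void) {
    /* e.g. eps = 0.2 time units, alpha = 0.8 -> EAT = 2 + 0.2 - 0.8 = 1.4 */
    printf("EAT = %.2f time units\n", eat(0.2, 0.8));
    return 0;
}
```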
Effective Access Time: Problem
Consider a paging system with the page table stored in memory.
a) If a memory reference takes 200 nanoseconds, how long does a paged
memory reference take?
b) If we add associative registers, and 75 percent of all page-table
references are found in the associative registers, what is the effective
memory reference time? (Assume that finding a page-table entry in the
associative registers takes zero time, if the entry is there.)

69
Effective Access Time: Solution
Answer:
a)
memory reference time= 200+200= 400 ns
( 200 ns to access the page table in RAM and 200 ns to access the
word in memory)
b)
Case (1) : page entry found in associative registers (part1)
Memory access time = 0+200=200 ns
( 0 ns to access the page table in associative registers and 200 ns to
access the word in memory)

70
Effective Access Time: Solution
Answer:
b)
Case (2) : page entry NOT found in associative registers (part1) but
found in page table in RAM
Memory access time = 0+200+200=400 ns
( 0 ns to access the page table in associative registers (part1) , 200 ns
to access the page table (part2) in RAM and 200 ns to access the
word in memory)
>>> Effective access time =∑ [probability of the case × access time
of this case]
Effective access time = [0.75 × 200 ]+ [0.25 × 400] = 250 ns.

71
Effective Access Time: Problem
Problem:
• Consider a paging hardware with a TLB.
• Assume that the entire page table and all the pages are in the physical
memory.
• It takes 10 milliseconds to search the TLB and 80 milliseconds to access
the physical memory.
• If the TLB hit ratio is 0.6, Find the effective memory access time (in
milliseconds).

72
Effective Access Time: Solution
Answer:
§ In the case that the page is found in the TLB (TLB hit) the total time
would be the time of search in the TLB plus the time to access memory
§ TLB_hit_time = TLB_search_time + memory_access_time

§ In the case that the page is not found in the TLB (TLB miss) the total
time would be the time to search the TLB plus the time to access
memory to get the page table and frame, plus the time to access memory
to get the data.
§ TLB_miss_time := TLB_search_time + memory_access_time +
memory_access_time

73
Effective Access Time: Solution
Answer:
Effective Access Time
EAT := TLB_miss_time × (1- hit_ratio) + TLB_hit_time × hit_ratio.

EAT := (TLB_search_time + 2 × memory_access_time) × (1- hit_ratio)


+ (TLB_search_time + memory_access_time) × hit_ratio.

As both page table and page are in physical memory


T(eff) = hit ratio × (TLB access time + Main memory access time) +
(1 – hit ratio) × (TLB access time + 2 × main memory time)
= 0.6 × (10+80) + (1-0.6) × (10+2 × 80)
= 0.6 × (90) + 0.4 × (170)
= 122
74
Memory Protection
• Memory protection implemented by associating protection bit with
each frame

• Valid-invalid bit attached to each entry in the page table:


– “valid” indicates that the associated page is in the process’ logical
address space, and is thus a legal page
• signifies that the page is in memory
– “invalid” indicates that the page is not in the process’ logical
address space
• signifies that the page is either illegal or has not been brought into
memory yet.

75
Valid (v) or Invalid (i) Bit In A Page
Table

76
Shared Pages
• Shared code
– One copy of read-only (reentrant) code shared among processes
(i.e., text editors, compilers, window systems).
– Shared code must appear in same location in the logical address
space of all processes

• Private code and data


– Each process keeps a separate copy of the code and data
– The pages for the private code and data can appear anywhere in
the logical address space

77
Shared Pages Example

78
Structure of the Page Table

• Hierarchical Paging

• Hashed Page Tables

• Inverted Page Tables

79
Hierarchical Page Tables

Need:
The need for multilevel paging arises when
• The size of page table is greater than the frame size.
• As a result, the page table cannot be stored in a single frame in main
memory.

80
Hierarchical Page Tables

Working:
In multilevel paging
• The page table having size greater than the frame size is divided into
several parts.
• The size of each part is the same as the frame size, except possibly the last part.
• The pages of page table are then stored in different frames of the main
memory.
• To keep track of the frames storing the pages of the divided page table,
another page table is maintained.
• As a result, a hierarchy of page tables gets generated.
• Multilevel paging is done till the level is reached where the entire page
table can be stored in a single frame.

81
Two-Level Page-Table Scheme

82
Two-Level Paging Scheme
[Figure: a single-level page table indexed by the full page number, versus a
two-level scheme in which an outer page table points to second-level page
tables, each covering part of the logical address space; the offset is passed
through unchanged.]
83
Two-Level Paging Example
• A logical address (on 32-bit machine with 1K page size) is divided into:
– a page number consisting of 22 bits
– a page offset consisting of 10 bits (page size 1K)
• Since the page table is paged, the page number is further divided into:
– a 12-bit page number
– a 10-bit page offset

• Thus, a logical address is as follows:


page number      page offset
p1        p2        d
12        10        10

where p1 is an index into the outer page table, and p2 is the displacement
within the page of the outer page table
84
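A sketch, in C, of splitting the 32-bit logical address above into p1 (12 bits), p2 (10 bits) and d (10 bits) using shifts and masks. The example address is arbitrary.

```c
/* Two-level split of a 32-bit logical address: 12 | 10 | 10 bits. */
#include <stdio.h>

int main(void) {
    unsigned la = 0x12345678u;            /* an arbitrary example address */
    unsigned d  = la & 0x3FFu;            /* low 10 bits: offset          */
    unsigned p2 = (la >> 10) & 0x3FFu;    /* next 10 bits: inner index    */
    unsigned p1 = la >> 20;               /* top 12 bits: outer index     */
    printf("p1 = %u, p2 = %u, d = %u\n", p1, p2, d);
    return 0;
}
```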
Address-Translation Scheme

85
Illustration of Multilevel Paging

Consider a system using paging scheme where


• Logical Address Space = 4 GB
• Physical Address Space = 16 TB
• Page size = 4 KB

Find how many levels of page table will be required.

86
Illustration of Multilevel Paging

Number of bits in Physical Address


Size of main memory = Physical Address Space = 16 TB = 2^44 B
Thus, Number of bits in physical address = 44 bits

Number of Frames in Main Memory


Number of Frames in Main Memory =
Size of main memory / Frame size = 16 TB / 4 KB = 2^32 frames
Thus, Number of bits in frame number = 32 bits

Number of Bits in Page Offset


Page size = 4 KB = 2^12 B
Thus, Number of bits in page offset = 12 bits
87
Illustration of Multilevel Paging
Alternatively,
Number of bits in page offset
= Number of bits in physical address – Number of bits in frame number
= 44 bits – 32 bits
= 12 bits

So, the physical address is: frame number (32 bits) | page offset (12 bits)

88
Illustration of Multilevel Paging
Number of Pages of Process
Number of pages the process is divided
= Process size / Page size = 4 GB / 4 KB = 2^20 pages

Inner Page Table Size


Inner page table keeps track of the frames storing the pages of process.
Inner Page table size
= Number of entries in inner page table × Page table entry size
= Number of pages the process is divided × Number of bits in frame
number = 2^20 × 32 bits = 2^20 × 4 bytes = 4 MB
Now, observe
• The size of inner page table is greater than the frame size (4 KB).
• Thus, inner page table can not be stored in a single frame.
• So, inner page table has to be divided into pages.
89
Illustration of Multilevel Paging
Number of Pages of Inner Page Table
Number of pages the inner page table is divided
= Inner page table size / Page size = 4 MB / 4 KB = 2^10 pages
Now, these 2^10 pages of inner page table are stored in different frames of the
main memory.

Number of page table entries in one page of inner page table


= Page size / Page table entry size
= Page size / Number of bits in frame number
= 4 KB / 32 bits = 4 KB / 4 B = 2^10

Number of Bits Required to Search an Entry in One Page of Inner Page Table
• One page of inner page table contains 2^10 entries.
• Thus, Number of bits required to search a particular entry in one page of
inner page table = 10 bits
90
Illustration of Multilevel Paging
Outer Page Table Size
Outer page table is required to keep track of the frames storing the pages of
inner page table.
Outer Page table size
= Number of entries in outer page table × Page table entry size
= Number of pages the inner page table is divided × Number of bits in
frame number
= 2^10 × 32 bits = 2^10 × 4 bytes = 4 KB

Now, observe
• The size of outer page table is same as frame size (4 KB).
• Thus, outer page table can be stored in a single frame.
• So, for given system, we will have two levels of page table.
• Page Table Base Register (PTBR) will store the base address of the outer
page table.
91
Illustration of Multilevel Paging
Number of Bits Required to Search an Entry in Outer Page Table
• Outer page table contains 2^10 entries.
• Thus, Number of bits required to search a particular entry in outer page
table = 10 bits
The paging system will look like as shown below

92
Example: Two level page table need

• Logical address length = 32 bits; page size = 4096 bytes
• Logical address division: 10, 10, 12
• Logical address space = 2^32 bytes = 4 GB, but it is only partially used:
8 MB at one end and 12 MB at the other; the rest is unused
• What is the total size of the two-level page table if the entry size is 4 bytes?
93
Example: Two level page table need

• The logical address is divided 10, 10, 12.
• Each entry of a second-level page table translates a page # to a frame #,
i.e., each entry maps a page of 4096 bytes.
• There are 2^10 = 1024 entries in a second-level page table.
• Hence, a second-level page table can map 2^10 × 2^12 = 2^22 bytes = 4 MB of
the logical address space.
• The top-level page table also has 2^10 entries, each pointing to a
second-level page table.
94
Example: Two level page table need

• 8 MB / 4 MB = 2 second-level page tables are required to map 8 MB of
logical memory.
• 12 MB / 4 MB = 3 second-level page tables are required to map 12 MB of
logical memory.
• Total = 3 + 2 = 5 second-level page tables required (the logical address
space of 2^32 bytes = 4 GB is only partially used).
95
Example: two level page table need
• Space needed to hold the page tables of the process:
top-level page table (1K entries × 4 bytes) + 5 second-level page tables
(5 × 1K × 4 bytes) = 4 KB + 20 KB = 24 KB

[Figure: the top-level page table (2^10 entries) points to the five
second-level page tables (2^10 entries each) that map the used 8 MB and
12 MB regions of the 4 GB logical address space.]
96
Problem: Two level page table need

§ Consider a virtual memory system with physical memory of 8GB, a


page size of 8KB and 46 bit virtual address.
§ Assume every page table exactly fits into a single page.
§ If page table entry size is 4B then how many levels of page tables
would be required.

97
Solution: Two level page table need

§ Page size = 8 KB = 2^13 B

§ Virtual address space size = 2^46 B
§ PTE = 4 B = 2^2 B

Number of pages or number of entries in page table,

= (virtual address space size) / (page size)
= 2^46 B / 2^13 B
= 2^33

Size of page table,

= (number of entries in page table) × (size of PTE)
= 2^33 × 2^2 B
= 2^35 B
98
Solution: Two level page table need

To create one more level,

Size of page table > page size


Number of page tables in last level,
= 2^35 B / 2^13 B
= 2^22

Base address of these tables are stored in page table [second last level].
Size of page table [second last level]
= 2^22 × 2^2 B
= 2^24 B

99
Solution: Two level page table need

To create one more level,


Size of page table [second last level] > page size

Number of page tables in second last level

= 2^24 B / 2^13 B
= 2^11

Base address of these tables are stored in page table [third last level]
Size of page table [third last level]
= 2^11 × 2^2 B
= 2^13 B
= page size

Thus, 3 levels of page table are required.
100
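The level-counting argument above can be sketched in C: keep paging the page table until one piece fits in a single page. The inputs in main match this exercise (46-bit virtual address, 8 KB pages, 4-byte PTEs); the function itself is a generic sketch, not a standard library routine.

```c
/* Count how many page-table levels are needed so the topmost table fits in one page. */
#include <stdio.h>

int levels_needed(int va_bits, int page_bits, int pte_bytes) {
    long long entries = 1LL << (va_bits - page_bits);   /* pages to be mapped   */
    int pte_bits = 0;
    while ((1 << pte_bits) < pte_bytes) pte_bits++;      /* log2(PTE size)       */
    int levels = 0;
    long long table_bytes;
    do {
        levels++;
        table_bytes = entries << pte_bits;               /* size of this level's table */
        entries = table_bytes >> page_bits;              /* pages occupied by that table */
    } while (table_bytes > (1LL << page_bits));          /* stop once it fits in a page  */
    return levels;
}

int main(void) {
    printf("%d levels required\n", levels_needed(46, 13, 4));   /* prints 3 */
    return 0;
}
```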
Three-level Paging Scheme

101
Hashed Page Tables
• Common in address spaces > 32 bits
• The virtual page number is hashed into a page table
• A page table entry contains a chain of elements hashing to the same
location
– each element = <a virtual page number, frame number>

• Virtual page numbers are compared in this chain searching for a match
– If a match is found, the corresponding physical frame is extracted

102
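A toy sketch, in C, of the hashed page table just described: each bucket holds a chain of (virtual page number, frame number) elements, and a lookup hashes the VPN and walks the chain looking for a match. The table size, hash function, and values are illustrative.

```c
/* Hashed page table with chaining: lookup returns 1 on a hit, 0 on a miss. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define BUCKETS 256

typedef struct elem {
    uint64_t vpn;        /* virtual page number            */
    uint64_t frame;      /* physical frame number          */
    struct elem *next;   /* chain of entries in this bucket */
} elem;

static elem *table[BUCKETS];

static unsigned hash_vpn(uint64_t vpn) { return (unsigned)(vpn % BUCKETS); }

void insert(uint64_t vpn, uint64_t frame) {
    elem *e = malloc(sizeof *e);
    e->vpn = vpn; e->frame = frame;
    e->next = table[hash_vpn(vpn)];
    table[hash_vpn(vpn)] = e;
}

int lookup(uint64_t vpn, uint64_t *frame) {
    for (elem *e = table[hash_vpn(vpn)]; e; e = e->next)
        if (e->vpn == vpn) { *frame = e->frame; return 1; }   /* match found */
    return 0;                                                  /* miss        */
}

int main(void) {
    uint64_t f;
    insert(0x12345, 7);
    insert(0x12345 + BUCKETS, 9);   /* hashes to the same bucket: chained */
    if (lookup(0x12345, &f))
        printf("VPN 0x12345 -> frame %llu\n", (unsigned long long)f);
    return 0;
}
```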
Hashed Page Table

103
Inverted Page Table
• Most operating systems implement a separate page table for each
process.
– for ‘n’ processes running on a multiprocessing operating system, there are ‘n’
page tables stored in memory.
• Sometimes when a process is very large in size
– its page table size also increases substantially.
• An inverted page table
Has one entry for every frame of memory
Records which page is in that frame
Is indexed by frame number not page number!

§ Example: A process of size 2 GB with Page size = 512 bytes, Size of page table
entry = 4 bytes, then Number of pages in the process = 2 GB / 512 B = 2^22
Page table size = 2^22 × 2^2 = 2^24 bytes
104
Inverted Page Table
• An alternate approach is to use the Inverted Page Table structure that
consists of one-page table entry for every frame of the main memory.
• So the number of page table entries in the Inverted Page Table reduces
to the number of frames in physical memory and a single page table is
used to represent the paging information of all the processes.

• Each entry in the page table contains the following fields.


– Page number
– Process id
– Control bits
– Chained pointer

105
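A toy sketch, in C, of the inverted page table above: one entry per physical frame recording (process id, page number); translation searches the table for a matching pair, and the index of the matching entry is the frame number. The table size and contents are made-up examples, and the linear search stands in for the chained/hashed search used in practice.

```c
/* Inverted page table lookup: returns the frame number, or -1 on a miss. */
#include <stdio.h>

#define NFRAMES 8

struct ipt_entry { int pid; int page; int valid; };
static struct ipt_entry ipt[NFRAMES];    /* indexed by frame number */

int translate(int pid, int page) {
    for (int frame = 0; frame < NFRAMES; frame++)
        if (ipt[frame].valid && ipt[frame].pid == pid && ipt[frame].page == page)
            return frame;                 /* hit: the index is the frame number */
    return -1;                            /* miss: page not in memory           */
}

int main(void) {
    ipt[3] = (struct ipt_entry){ .pid = 1, .page = 12, .valid = 1 };
    printf("pid 1, page 12 -> frame %d\n", translate(1, 12));   /* prints 3 */
    return 0;
}
```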
Inverted Page Table Architecture

106
Example: Inverted Page Table
If the virtual address space supported is 2^64 bits,
• the page size is 1K,
• the size of the physical memory is 64K,
• the size of a PTE is two bytes, and
• the addressing is at the byte level,
Calculate the size of the page table required for both standard and inverted
page tables.
Standard page table:
• address space is 2^64 bits = 2^61 bytes with byte addressing
• number of pages = 2^61 / 1K = 2^51
• Page Table Size = 2^51 × (PTE size) = 2^52 bytes
Inverted page table:
• Total frames = 64K / 1K = 64
• Page Table Size = 64 × (PTE size) = 128 bytes
107
Segmentation
• Segmentation is a memory management technique in which, the
memory is divided into the variable size parts.
• Each part is known as segment which can be allocated to a process.
• There are two types of segmentation
– Virtual memory segmentation: Each process is divided into a number of
segments, not all of which are resident at any one point in time.
– Simple segmentation: Each process is divided into a number of segments,
all of which are loaded into memory at run time, though not necessarily
contiguous.

108
Simple Segmentation

• Paging is closer to the operating system than to the user.

• It divides every process into pages regardless of the fact that a
process may have related parts or functions that need to be loaded in
the same page.

• Operating system doesn't care about the User's view of the process.

• It may divide the same function into different pages and those pages may or
may not be loaded at the same time into the memory.

• It decreases the efficiency of the system.

• It is better to have segmentation, which divides the process into segments.

• Each segment contains the same type of functions; for example, the main function can be
included in one segment and the library functions can be included in another
segment.
109
Simple Segmentation

Translation of Logical address into physical address by segment table


• CPU generates a logical address which contains two parts
– Segment Number
– Offset

• The Segment number is mapped to the segment table.


• The limit of the respective segment is compared with the offset.
• If the offset is less than the limit then the address is valid otherwise it
throws an error as the address is invalid.

110
Segmentation
• Memory-management scheme that supports user view of memory
• A program is a collection of segments
– A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
111
User’s View of a Program

112
Logical View of Segmentation

[Figure: segments 1–4 of a program in user space are mapped to
non-contiguous regions of physical memory.]

113
Logical-to-Physical Address Translation in
segmentation

114
Segmentation Architecture
• Logical address consists of a two tuple:
<segment-number, offset>,
• Segment table – maps two-dimensional logical addresses to one-dimensional
physical addresses; each table entry has:
– base – contains the starting physical address where the
segments reside in memory
– limit – specifies the length of the segment
• Segment-table base register (STBR) points to the segment
table’s location in memory
• Segment-table length register (STLR) indicates number of
segments used by a program;
segment number s is legal if s < STLR
115
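A minimal sketch, in C, of the segment-table translation just described: check that the segment number is below STLR, check that the offset is below the segment's limit, then add the base. The segment-table values are the ones used in the illustration a few slides below; STLR and the helper names are assumptions of this sketch.

```c
/* Segment-table translation with limit checking. */
#include <stdio.h>

struct seg { int base; int limit; };
static struct seg seg_table[] = {
    {1219, 700}, {2300, 14}, {90, 100}, {1327, 580}, {1952, 96}
};
static int stlr = 5;    /* number of segments used by the program */

int translate(int s, int d, int *pa) {
    if (s >= stlr) return -1;                   /* illegal segment number */
    if (d >= seg_table[s].limit) return -1;     /* trap: addressing error */
    *pa = seg_table[s].base + d;
    return 0;
}

int main(void) {
    int pa;
    int refs[][2] = {{0, 430}, {1, 11}, {2, 100}, {3, 425}, {4, 95}};
    for (int i = 0; i < 5; i++) {
        if (translate(refs[i][0], refs[i][1], &pa) == 0)
            printf("(%d, %d) -> physical address %d\n", refs[i][0], refs[i][1], pa);
        else
            printf("(%d, %d) -> trap: addressing error\n", refs[i][0], refs[i][1]);
    }
    return 0;
}
```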
Segmentation Architecture (Cont.)
• Protection
– With each entry in segment table associate:
• validation bit = 0  illegal segment
• read/write/execute privileges
• Protection bits associated with segments; code sharing occurs at
segment level
• Since segments vary in length, memory allocation is a dynamic
storage-allocation problem
• A segmentation example is shown in the following diagram

116
Segmentation Hardware

117
Example of Segmentation

118
Illustration: Segmentation
Problem: Consider the following segment table
Segment No. Base Length
0 1219 700
1 2300 14
2 90 100
3 1327 580
4 1952 96

Which of the following logical address will produce trap addressing error?
A. 0, 430
B. 1, 11
C. 2, 100
D. 3, 425
E. 4, 95
Calculate the physical address if no trap is produced.
119
Illustration: Segmentation
In a segmentation scheme, the generated logical address consists of two parts
• Segment Number
• Segment Offset

We know
• Segment Offset must always lie in the range [0, limit-1].
• If segment offset becomes greater than or equal to the limit of segment,
then trap addressing error is produced.

120
Illustration: Segmentation
Option-A: 0, 430
Here,
Segment Number = 0
Segment Offset = 430

We have,
In the segment table, limit of segment 0 is 700.
Thus, segment offset must always lie in the range = [0, 700-1] = [0, 699]

Now, Since generated segment offset lies in the above range, so request
generated is valid.
Therefore, no trap will be produced.
Physical Address = 1219 + 430 = 1649
121
Illustration: Segmentation
Option-B: 1, 11
Here,
Segment Number = 1
Segment Offset = 11

We have,
In the segment table, limit of segment-1 is 14.
Thus, segment offset must always lie in the range = [0, 14-1] = [0, 13]

Now, Since generated segment offset lies in the above range, so request
generated is valid.
Therefore, no trap will be produced.
Physical Address = 2300 + 11 = 2311
122
Illustration: Segmentation
Option-C: 2, 100

Here,
Segment Number = 2
Segment Offset = 100

We have,
In the segment table, limit of segment-2 is 100.
Thus, segment offset must always lie in the range = [0, 100-1] = [0, 99]

Now,
Since generated segment offset does not lie in the above range, so request
generated is invalid.
Therefore, trap will be produced.
123
Illustration: Segmentation
Option-D: 3, 425

Here,
Segment Number = 3
Segment Offset = 425

We have,
In the segment table, limit of segment-3 is 580.
Thus, segment offset must always lie in the range = [0, 580-1] = [0, 579]

Now,
Since generated segment offset lies in the above range, so request generated is
valid.
Therefore, no trap will be produced.
Physical Address = 1327 + 425 = 1752
124
Illustration: Segmentation
Option-E: 4, 95
Here,
Segment Number = 4
Segment Offset = 95

We have,
In the segment table, limit of segment-4 is 96.
Thus, segment offset must always lie in the range = [0, 96-1] = [0, 95]
Now,
Since generated segment offset lies in the above range, so request
generated is valid.
Therefore, no trap will be produced.
Physical Address = 1952 + 95 = 2047
Thus, Option-(C) is correct.
125
Pros and Cons
Advantages of Segmentation
• No internal fragmentation
• Average Segment Size is larger than the actual page size.
• Less overhead
• It is easier to relocate segments than entire address space.
• The segment table is of smaller size as compared to the page table in
paging.

Disadvantages
• It can have external fragmentation.
• It is difficult to allocate contiguous memory to variable-sized partitions.
• Costly memory management algorithms.
126
