Slide-7-OS Memory Management (1)
Memory Management
[Figure: memory hierarchy – the CPU's registers and cache exchange instructions and data with main memory, which holds the operating system and the process's program image; the disk provides long-term storage.]
2
Memory Management
• Memory management is the process of controlling and coordinating
computer memory
– portions of memory, called blocks, are assigned to various running programs to
optimize overall system performance.
• Memory management requires
– Hardware: involves the components that physically store data, such as RAM and
cache memory
– OS: allocation/reallocation of specific memory blocks to individual programs
as user demands change
– Programs & applications: ensuring the availability of adequate memory for the
objects and data structures of each running program at all times
3
Memory Management
• In most memory management schemes, the kernel occupies some fixed
portion of main memory and the rest is shared by multiple processes
4
Memory Management
• In order to manage memory effectively the OS must have
– Memory allocation policies
– Methods to track the status of memory locations (free or allocated)
– Policies for preempting memory from one process to allocate to
another
5
Memory Management Requirements
• Memory management is intended to satisfy the following requirements:
− Relocation
− Protection
− Sharing
− Logical organization
− Physical organization
6
Requirements: Relocation
• Relocation is the process of adjusting program addresses to match the
actual physical addresses where the program resides when it executes
7
Requirements: Protection
8
Requirements: Sharing
9
Requirements: Logical Organization
• Main memory is organized as a linear (1-D) address space consisting of a
sequence of bytes or words.
• Users write programs in modules with different characteristics
– instruction modules are execute-only
– data modules are either read-only or read/write
– some modules are private, others are public
• To effectively deal with user programs, the OS and hardware should
support a basic form of module to provide the required protection and
sharing
10
Requirements: Physical Organization
• Secondary memory is the long-term store for programs and data, whereas
main memory holds the programs and data currently in use
• moving information between these two levels of memory is a major
concern of memory management
– it is highly inefficient to leave this responsibility to the application
programmer
11
Simple Memory Management
§ Simple memory management schemes are not used in modern OSs, but they lay
the groundwork for a proper discussion of virtual memory
12
Fixed Partitioning
13
Fixed Partitioning Problems
14
Solution – Unequal Size Partitions
15
Unequal Size Partitions
16
Dynamic Partitioning
17
Dynamic Partitioning Example
• A hole of 64K is left after loading 3 processes: not enough room for
another process
• Eventually each process is blocked. The OS swaps out process 2 to bring
in process 4
18
Dynamic Partitioning: an example
• another hole of 96K is created
• Eventually each process is blocked. The OS swaps out process 1 to bring
process 2 in again, and another hole of 96K is created...
• Compaction would produce a single hole of 256K
19
Placement Algorithm
• Used to decide which free block to
allocate to a process
• Goal: to reduce usage of compaction
(time consuming)
• Possible algorithms:
– Best-fit: Allocate the smallest
hole that is big enough; must
search entire list, unless ordered
by size
• Produces the smallest leftover
hole
– First-fit: Allocate the first hole
that is big enough
– Worst-fit: Allocate the largest
hole; must also search entire list
• Produces the largest leftover hole
20
Problem – Placement Algorithm
• Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, 600 KB (in
order), how would the first-fit, best-fit, and worst-fit algorithms place
processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which
algorithm makes the most efficient use of memory?
21
Solution – Placement Algorithm
First-fit:
212K is put in 500K partition
417K is put in 600K partition
112K is put in 288K partition (new partition 288K = 500K - 212K)
426K must wait
Best-fit:
212K is put in 300K partition
417K is put in 500K partition
112K is put in 200K partition
426K is put in 600K partition
22
Solution – Placement Algorithm
Worst-fit:
212K is put in 600K partition
417K is put in 500K partition
112K is put in 388K partition
426K must wait
23
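A minimal C sketch of the three placement policies applied to this problem; the partition and request sizes come from the problem, while the function and variable names are illustrative, not from the slides. Running it reproduces the placements listed in the two solution slides above.

/* Sketch: simulate first-fit, best-fit and worst-fit on the example partitions. */
#include <stdio.h>

#define NPART 5

/* Return the index of the chosen free partition, or -1 if the request must wait. */
static int place(const int part[], int n, int req, char policy) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (part[i] < req) continue;                     /* hole too small */
        if (policy == 'f') return i;                     /* first-fit: first hole that fits */
        if (chosen == -1 ||
            (policy == 'b' && part[i] < part[chosen]) || /* best-fit: smallest adequate hole */
            (policy == 'w' && part[i] > part[chosen]))   /* worst-fit: largest hole */
            chosen = i;
    }
    return chosen;
}

int main(void) {
    const char *names[] = { "First", "Best", "Worst" };
    const char policies[] = { 'f', 'b', 'w' };
    const int requests[] = { 212, 417, 112, 426 };       /* process sizes, in KB */

    for (int p = 0; p < 3; p++) {
        int part[NPART] = { 100, 500, 200, 300, 600 };   /* free partitions, in order */
        printf("%s-fit:\n", names[p]);
        for (int r = 0; r < 4; r++) {
            int i = place(part, NPART, requests[r], policies[p]);
            if (i < 0) { printf("  %dK must wait\n", requests[r]); continue; }
            printf("  %dK is put in the %dK hole\n", requests[r], part[i]);
            part[i] -= requests[r];                      /* leftover becomes a smaller hole */
        }
    }
    return 0;
}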
Address Types
• physical address
• logical address
• Relative address
24
Physical Address Space
Physical Address
• Identifies a physical location of the required data in memory
• The user never directly deals with the physical address
– it can be accessed via its corresponding logical address
• The user program generates the logical address
– but the program needs physical memory for its execution
• Logical addresses must be mapped to physical addresses by the MMU before
they are used
• The term Physical Address Space is used for all physical addresses
corresponding to the logical addresses in a logical address space
25
Logical Address Space
Logical Address
• generated by the CPU while a program is running
• also known as a virtual address, as it does not exist physically
• used by the CPU as a reference to access the physical memory location
• The term Logical Address Space is used for the set of all logical addresses
generated from a program's perspective.
• The hardware device called Memory-Management Unit (MMU) is used for
mapping logical address to its corresponding physical address.
26
Logical address space concept
• The logical address space concept is different from physical addresses.
• The logical address space has to be mapped somewhere in physical memory.
[Figure: a process's logical address space (starting at 0) mapped into a region of RAM (addresses 0 to phy_max).]
27
Memory-Management Unit (MMU)
• A memory management unit (MMU) is a computer hardware component
that handles all memory and caching operations associated with the
processor.
• MMU is responsible for all aspects of memory management.
• It is usually integrated into the processor, but in some systems it occupies
a separate chip.
28
Comparison
• Visibility: the user can view the logical address of a program, but can never
view the physical address of the program
• Generation: the logical address is generated by the CPU; the physical address
is computed by the MMU
• Access: the user can use the logical address to access memory; the physical
address can be accessed only indirectly, never directly
29
Address Translation
• Relative address is the most frequent type of logical address used (in
executable files)
• Such modules are loaded in main memory with all memory
references in relative form
• Physical addresses are calculated at run time, when the memory references are made
• For adequate performance, the translation from relative to physical
address must be done by hardware
30
Base and Limit Registers
31
Hardware Address Protection with Base
and Limit Registers
32
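The figures for these two slides are not reproduced here; the following C sketch shows the base/limit protection check they illustrate (the register values and names are assumptions used only for illustration).

/* Sketch: hardware-style base/limit protection check. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t base;   /* relocation register: start of the process in physical memory */
    uint32_t limit;  /* size of the process's logical address space */
} relocation_regs;

/* Translate a logical address; a false return models the hardware trap to the OS. */
static bool translate(relocation_regs r, uint32_t logical, uint32_t *physical) {
    if (logical >= r.limit)
        return false;                 /* addressing error: outside the process's space */
    *physical = r.base + logical;     /* relocation: add the base register */
    return true;
}

int main(void) {
    relocation_regs r = { .base = 300040, .limit = 120900 };  /* illustrative values */
    uint32_t pa;
    if (translate(r, 500, &pa))
        printf("logical 500 -> physical %u\n", pa);
    if (!translate(r, 130000, &pa))
        printf("logical 130000 -> trap to OS\n");
    return 0;
}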
Binding of Instructions and Data to
Memory
• Address binding of instructions and data to (physical) memory addresses can
happen at three different stages
– Compile time
• If it is known at compile time where a program will reside in physical
memory, then absolute code can be generated by the compiler, containing
actual physical addresses.
– Physical addresses are embedded in the generated code
• If the load address changes at some later time, then the program will have
to be recompiled.
• Advantages: minimum set-up time (the location where the program will be
loaded is already known).
• Disadvantages: if the chosen location is already occupied when the program is
loaded (a collision), the current program will overwrite whatever is there.
33
Binding of Instructions and Data to
Memory
– Load time
• If the location at which a program will be loaded is not known at compile
time, then the compiler must generate relocatable code, which references
addresses relative to the start of the program.
• If that starting address changes, then the program must be reloaded but not
recompiled.
34
Binding of Instructions and Data to
Memory
– Execution time
• If a program can be moved from one memory segment to another
during execution, then binding must be delayed until execution time.
• Needs special hardware support for address maps (e.g., base and limit
registers)
• This method is implemented by most modern OS.
35
Multistep Processing of a User
Program
• User programs go through several
steps before being run.
• Program components do not
necessarily know where in RAM
they will be loaded
• RAM deals with absolute
addresses
• Logical addresses need to be
bound to physical addresses at
some point.
36
Dynamic relocation using a relocation
register
Dynamic Loading
§ With dynamic loading, a routine is not loaded until it is called
§ All routines are kept on disk in a relocatable load format
§ The main program is loaded into memory and is executed
§ When a routine needs to call another one, the calling routine first checks to see
whether the other routine has been loaded
§ If not, the relocatable linking loader is called to load the desired routine into
memory
§ Then, control is passed to the newly loaded routine
§ Advantage of dynamic loading: An unused routine is never loaded
§ Dynamic loading doesn't require special support from the OS
§ The user must design programs to take advantage of this
§ However, the OS may help the programmer by providing library routines to
implement dynamic loading
37
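As a rough modern analogue (an assumption, not something the slide describes), POSIX dlopen/dlsym provide exactly this load-a-routine-only-when-called behaviour; the library name libplotter.so and the symbol draw below are hypothetical.

/* Sketch: load-on-first-call with POSIX dlopen/dlsym (compile with -ldl).
 * "libplotter.so" and "draw" are hypothetical names used only for illustration. */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* The routine is not loaded until this point, i.e., until it is needed. */
    void *handle = dlopen("libplotter.so", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    void (*draw)(void) = (void (*)(void))dlsym(handle, "draw");
    if (draw) draw();                 /* control passes to the newly loaded routine */

    dlclose(handle);
    return 0;
}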
Swapping
• A process must be loaded into memory in order to execute.
• If there is not enough memory available to keep all running processes in
memory at the same time,
– some processes that are not currently using the CPU are swapped out to a fast
local disk called the backing store
– Then brought back into memory for continuing execution
• Backing store
– fast disk large enough to accommodate copies of all memory images for all
users;
– must provide direct access to these memory images
• swap out, swap in – swapping used for priority-based scheduling
algorithms; lower-priority process is swapped out so higher-priority
process can be loaded and executed
• Major part of swap time is transfer time
– total transfer time is directly proportional to the amount of memory swapped
38
Schematic View of Swapping
39
Swapping
• If compile-time or load-time address binding is used
– processes must be swapped back into the same memory location from which
they were swapped out.
• If execution time binding is used,
– processes can be swapped back into any available location.
• Swapping is a very slow process compared to other operations.
• Example,
– if a user process occupied 10 MB and the transfer rate for the backing store
were 40 MB per second, then it would take 1/4 second ( 250 milliseconds ) just
to do the data transfer.
– Ignoring head seek time and latency, and since swapping involves moving the
old data out as well as the new data in, the overall transfer time required for
this swap is 500 milliseconds, or half a second.
– For efficient processor scheduling the CPU time slice should be significantly
longer than this lost transfer time.
40
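A small C sketch of the arithmetic in this example, using the values given on the slide:

/* Sketch: swap-time arithmetic from the example (10 MB image, 40 MB/s backing store). */
#include <stdio.h>

int main(void) {
    double size_mb = 10.0;            /* size of the user process image */
    double rate_mb_per_s = 40.0;      /* backing-store transfer rate */

    double one_way_ms = size_mb / rate_mb_per_s * 1000.0;  /* 250 ms out (or in) */
    double swap_ms = 2.0 * one_way_ms;                     /* old data out + new data in */

    printf("one-way transfer: %.0f ms, full swap: %.0f ms\n", one_way_ms, swap_ms);
    return 0;
}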
Paging
• Paging is a memory management scheme that eliminates the need for
contiguous allocation of physical memory.
• This scheme permits the physical address space of a process to be
non-contiguous.
42
Paging
• The mapping from virtual to physical addresses is done by the memory
management unit (MMU), which is a hardware device; this mapping
technique is known as paging.
43
Paging
46
Paging
Example:
• Physical Address = 12 bits, then Physical Address Space = 4 K words
• Logical Address = 13 bits, then Logical Address Space = 8 K words
• Page size = frame size = 1 K words (assumption)
47
Paging
Address generated by the CPU is divided into
• Page number (p): the number of bits required to represent the pages in the
logical address space, i.e., the page number
• Page offset (d): the number of bits required to represent a particular word in a
page, i.e., the page size of the logical address space, the word number within a
page, or the page offset
[Figure: an m-bit logical address split into a page number of m − n bits and a page offset of n bits.]
49
Problem: Paging
• Assuming a 1 KB page size, what are the page numbers and offsets for the
following address references (provided as decimal numbers):
a. 2375
b. 19366
c. 30000
d. 256
e. 16385
50
Solution: Paging
Answer:
– Page size = 2^n = 1024 B = 2^10 B
– # of bits in offset part (n) = 10
Solution steps :
1. Convert logical address: Decimal → Binary
2. Split binary address to 2 parts (page # , Offset), offset : n digits
3. Convert offset & page# : Binary → Decimal
51
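A minimal C sketch of these solution steps for the five addresses in the problem (the shift/mask approach and variable names are illustrative):

/* Sketch: page number and offset for a 1 KB page size (n = 10 offset bits). */
#include <stdio.h>

int main(void) {
    const unsigned addrs[] = { 2375, 19366, 30000, 256, 16385 };  /* from the problem */
    const unsigned n = 10;                      /* page size = 2^10 = 1024 bytes */

    for (int i = 0; i < 5; i++) {
        unsigned page   = addrs[i] >> n;                 /* high bits: page number */
        unsigned offset = addrs[i] & ((1u << n) - 1);    /* low n bits: offset */
        printf("address %5u -> page %2u, offset %4u\n", addrs[i], page, offset);
    }
    return 0;
}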
Solution: Paging
52
Problem: Paging
Consider a logical address space of 64 pages of 1024 words each, mapped onto
a physical memory of 32 frames. How many bits are needed for the logical
address (m) and for the page offset (n)?
53
Solution: Paging
Method1:
a) m = ???
Size of logical address space = 2^m = # of pages × page size
2^m = 64 × 1024
2^m = 2^6 × 2^10
2^m = 2^16
=> m = 16 bits
54
Solution: Paging
Method2:
m = ???
# of pages = 2^(m−n)
n = ???
Page size = 2^n
1024 = 2^n
2^10 = 2^n => n = 10 bits
# of pages = 2^(m−n) = 64 = 2^6 => m − 10 = 6 => m = 16 bits
56
Example: Address Translation
• 4-bit logical address, 32-byte physical memory, page size = 4 bytes = 2^2
– the low 2 bits of the logical address are the offset (displacement inside the
page); the remaining high bits are the page number
• LA = 5: 5 is 0101, PA = 11001
• LA = 11: 11 is 1011, PA = 00111
• LA = 13: 13 is 1101, PA = 01001
57
Example: Address Translation
• m = 3: 2^3 = 8 logical addresses; n = 2: page size = 2^2 = 4, so 1 bit selects the
page and 2 bits give the offset
• Logical memory holds bytes A–H: page 0 = {A, B, C, D}, page 1 = {E, F, G, H}
• Page table: page 0 -> frame 11, page 1 -> frame 10 (each entry maps 4
addresses, i.e., one page; 2 bits are needed for the frame number)
• Physical memory (16 bytes, 4 frames of 4 bytes): frame 10 holds E, F, G, H and
frame 11 holds A, B, C, D; frames 00 and 01 are shown empty
58
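A minimal C sketch of the translation in the example above, using its page table (page 0 -> frame 3, page 1 -> frame 2) and 4-byte pages; the code structure is illustrative, not from the slides.

/* Sketch: translate all 8 logical addresses of the example through its page table. */
#include <stdio.h>

int main(void) {
    const unsigned page_table[] = { 3, 2 };     /* page 0 -> frame 11, page 1 -> frame 10 */
    const unsigned n = 2;                       /* offset bits: page size = 2^2 = 4 */
    const char logical_memory[] = "ABCDEFGH";   /* contents of the 8 logical addresses */

    for (unsigned la = 0; la < 8; la++) {
        unsigned page   = la >> n;                       /* 1 page bit */
        unsigned offset = la & ((1u << n) - 1);          /* 2 offset bits */
        unsigned pa     = (page_table[page] << n) | offset;
        printf("LA %u ('%c') -> page %u, offset %u -> PA %2u\n",
               la, logical_memory[la], page, offset, pa);
    }
    return 0;
}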
Example: Address Translation
• 16-bit logical address 0010 0000 0000 0100, page size = 4096 bytes, so the
offset is 12 bits and the page number (p#) is the top 4 bits
• Page table (frame number, valid/invalid bit): 0 -> 010 v, 1 -> 001 v, 2 -> 110 v,
3 -> 000 v, 4 -> 100 v, 5 -> 011 v, 9 -> 101 v, 11 -> 111 v; entries 6, 7, 8, 10
and 12–15 are invalid
• Mapping: p# = 0010 (page 2) -> frame 110, giving the 15-bit physical address
110 000000000100
59
Free Frames
• The OS keeps information about the frames (free or allocated) in its frame table
62
Implementation of Page Table
• Page table is kept in main memory
– Page-table base register (PTBR) points to the page table
– Page-table length register (PTLR) indicates size of the page table
• In this scheme every data/instruction access requires two memory accesses.
– One for the page table and
– one for the data/instruction
• The two memory access problem can be solved by the use of a special fast-
lookup hardware cache called associative memory or translation look-
aside buffers (TLBs)
• Some TLBs store address-space identifiers (ASIDs) in each TLB entry –
uniquely identifies each process to provide address-space protection for that
process
63
Implementation of Page Table
[Figure: RAM holding two programs, P1 and P2, and their PCBs (PCB1, PCB2); the CPU's PC points into the currently running process, P1.]
64
Associative Memory
• Associative memory: A type of computer memory from which items may
be retrieved by matching some part of their content, rather than by
specifying their address (hence also called associative storage or Content-
addressable memory (CAM).)
• Associative memory is much more expensive than ordinary RAM, so it is used
only in small amounts (such as the TLB) in mainstream computer designs.
[Figure: TLB entries, each holding a (Page #, Frame #) pair.]
66
Paging
67
Effective Access Time
• Associative lookup = ε time units
• Assume memory cycle time is 1 microsecond
• Hit ratio – percentage of times that a page number is found in the
associative registers; ratio related to number of associative registers
• Hit ratio = α
• Effective Access Time (EAT)
EAT = (1 + ε)α + (2 + ε)(1 − α)
= 2 + ε − α
68
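A small C sketch of the EAT formula; the memory cycle time of 1 microsecond is from the slide, while epsilon = 0.2 and alpha = 0.8 are assumed values used only for illustration.

/* Sketch: EAT = (1 + eps)*alpha + (2 + eps)*(1 - alpha), with assumed eps and alpha. */
#include <stdio.h>

int main(void) {
    double epsilon = 0.2;   /* associative lookup time, in units of the 1 us memory cycle */
    double alpha   = 0.8;   /* hit ratio (assumed value) */

    double eat = (1.0 + epsilon) * alpha + (2.0 + epsilon) * (1.0 - alpha);
    printf("EAT = %.2f us (check: 2 + eps - alpha = %.2f us)\n",
           eat, 2.0 + epsilon - alpha);
    return 0;
}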
Effective Access Time: Problem
Consider a paging system with the page table stored in memory.
a) If a memory reference takes 200 nanoseconds, how long does a paged
memory reference take?
b) If we add associative registers, and 75 percent of all page-table
references are found in the associative registers, what is the effective
memory reference time? (Assume that finding a page-table entry in the
associative registers takes zero time, if the entry is there.)
69
Effective Access Time: Solution
Answer:
a)
memory reference time= 200+200= 400 ns
( 200 ns to access the page table in RAM and 200 ns to access the
word in memory)
b)
Case (1) : page entry found in associative registers (part1)
Memory access time = 0+200=200 ns
( 0 ns to access the page table in associative registers and 200 ns to
access the word in memory)
70
Effective Access Time: Solution
Answer:
b)
Case (2) : page entry NOT found in associative registers (part1) but
found in page table in RAM
Memory access time = 0+200+200=400 ns
( 0 ns to access the page table in associative registers (part1) , 200 ns
to access the page table (part2) in RAM and 200 ns to access the
word in memory)
>>> Effective access time =∑ [probability of the case × access time
of this case]
Effective access time = [0.75 × 200 ]+ [0.25 × 400] = 250 ns.
71
Effective Access Time: Problem
Problem:
• Consider a paging hardware with a TLB.
• Assume that the entire page table and all the pages are in the physical
memory.
• It takes 10 milliseconds to search the TLB and 80 milliseconds to access
the physical memory.
• If the TLB hit ratio is 0.6, Find the effective memory access time (in
milliseconds).
72
Effective Access Time: Solution
Answer:
§ In the case that the page is found in the TLB (TLB hit) the total time
would be the time of search in the TLB plus the time to access memory
§ TLB_hit_time = TLB_search_time + memory_access_time
§ In the case that the page is not found in the TLB (TLB miss) the total
time would be the time to search the TLB plus the time to access
memory to get the page table and frame, plus the time to access memory
to get the data.
§ TLB_miss_time := TLB_search_time + memory_access_time +
memory_access_time
73
Effective Access Time: Solution
Answer:
Effective Access Time
EAT := TLB_miss_time × (1- hit_ratio) + TLB_hit_time × hit_ratio.
75
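Plugging the numbers from the problem (10 ms TLB search, 80 ms memory access, hit ratio 0.6) into this formula, sketched in C:

/* Sketch: EAT for the TLB problem above (10 ms search, 80 ms memory, hit ratio 0.6). */
#include <stdio.h>

int main(void) {
    double tlb_search = 10.0, mem_access = 80.0, hit_ratio = 0.6;   /* milliseconds */

    double hit_time  = tlb_search + mem_access;                 /* 90 ms  */
    double miss_time = tlb_search + mem_access + mem_access;    /* 170 ms */
    double eat = miss_time * (1.0 - hit_ratio) + hit_time * hit_ratio;

    printf("EAT = %.0f ms\n", eat);    /* 0.4 * 170 + 0.6 * 90 = 122 ms */
    return 0;
}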
Valid (v) or Invalid (i) Bit In A Page
Table
76
Shared Pages
• Shared code
– One copy of read-only (reentrant) code shared among processes
(i.e., text editors, compilers, window systems).
– Shared code must appear in same location in the logical address
space of all processes
77
Shared Pages Example
78
Structure of the Page Table
• Hierarchical Paging
79
Hierarchical Page Tables
Need:
The need for multilevel paging arises when
• The size of the page table is greater than the frame size.
• As a result, the page table cannot be stored in a single frame in main
memory.
80
Hierarchical Page Tables
Working:
In multilevel paging
• The page table, whose size is greater than the frame size, is divided into
several parts.
• The size of each part is the same as the frame size, except possibly the last part.
• The pages of the page table are then stored in different frames of the main
memory.
• To keep track of the frames storing the pages of the divided page table,
another page table is maintained.
• As a result, a hierarchy of page tables is generated.
• Multilevel paging is continued until a level is reached at which the entire page
table can be stored in a single frame.
81
Two-Level Page-Table Scheme
82
Two-Level Paging Scheme
[Figure: the same logical address (page # + offset) handled two ways – a single-level page table with 16 entries (0000–1111) versus a two-level structure in which a 4-entry outer page table (00–11) points to four 4-entry second-level page tables.]
83
Two-Level Paging Example
• A logical address (on 32-bit machine with 1K page size) is divided into:
– a page number consisting of 22 bits
– a page offset consisting of 10 bits (page size 1K)
• Since the page table is paged, the page number is further divided into:
– a 12-bit page number (p1)
– a 10-bit page offset within the outer page table's page (p2)
| p1 (12 bits) | p2 (10 bits) | d (10 bits) |
where p1 is an index into the outer page table, and p2 is the displacement
within the page of the outer page table
84
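A small C sketch of splitting a 32-bit logical address from this example into p1, p2 and d; the address value used is arbitrary, chosen only for illustration.

/* Sketch: split a 32-bit logical address into p1 (12 bits), p2 (10 bits), d (10 bits). */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t la = 0x12345678;            /* arbitrary example logical address */

    uint32_t d  = la & 0x3FF;            /* low 10 bits: offset within the page */
    uint32_t p2 = (la >> 10) & 0x3FF;    /* next 10 bits: index within the outer table's page */
    uint32_t p1 = la >> 20;              /* top 12 bits: index into the outer page table */

    printf("p1 = %u, p2 = %u, d = %u\n", p1, p2, d);
    return 0;
}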
Address-Translation Scheme
85
Illustration of Multilevel Paging
86
Illustration of Multilevel Paging
88
Illustration of Multilevel Paging
Number of Pages of Process
Number of pages the process is divided into
= Process size / Page size = 4 GB / 4 KB = 2^20 pages
Number of Bits Required to Search an Entry in One Page of Inner Page Table
• One page of the inner page table contains 2^10 entries.
• Thus, the number of bits required to search a particular entry in one page of the
inner page table = 10 bits
90
Illustration of Multilevel Paging
Outer Page Table Size
Outer page table is required to keep track of the frames storing the pages of
inner page table.
Outer Page table size
= Number of entries in outer page table × Page table entry size
= Number of pages the inner page table is divided into × Number of bits in the
frame number
= 2^10 × 32 bits = 2^10 × 4 bytes = 4 KB
Now, observe
• The size of the outer page table is the same as the frame size (4 KB).
• Thus, the outer page table can be stored in a single frame.
• So, for the given system, we will have two levels of page table.
• The Page Table Base Register (PTBR) will store the base address of the outer
page table.
91
Illustration of Multilevel Paging
Number of Bits Required to Search an Entry in Outer Page Table
• The outer page table contains 2^10 entries.
• Thus, the number of bits required to search a particular entry in the outer page
table = 10 bits
The paging system will look like as shown below
92
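A C sketch of the size calculations in this illustration, using the values from the slides (4 GB process, 4 KB pages, 4-byte page table entries); the 4 MB inner page table size printed by the code is derived rather than quoted from the slides.

/* Sketch: page-table size arithmetic for a 4 GB process, 4 KB pages, 4-byte entries. */
#include <stdio.h>

int main(void) {
    unsigned long long process_size = 4ULL << 30;   /* 4 GB  */
    unsigned long long page_size    = 4ULL << 10;   /* 4 KB  */
    unsigned long long pte_size     = 4;            /* 32-bit page table entries */

    unsigned long long num_pages         = process_size / page_size;       /* 2^20 pages */
    unsigned long long inner_table_size  = num_pages * pte_size;           /* 4 MB (derived) */
    unsigned long long inner_table_pages = inner_table_size / page_size;   /* 2^10 pages */
    unsigned long long outer_table_size  = inner_table_pages * pte_size;   /* 4 KB */

    printf("pages of process        : %llu\n", num_pages);
    printf("inner page table size   : %llu KB\n", inner_table_size >> 10);
    printf("pages of inner table    : %llu\n", inner_table_pages);
    printf("outer page table size   : %llu KB\n", outer_table_size >> 10);
    return 0;
}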
Example: Two level page table need
93
Example: Two level page table need
94
Example: Two level page table need
95
Example: Two level page table need
[Figure: a 2^32-byte (4 GB) virtual address space of which only an 8 MB region and a 12 MB region are used; the rest is unused. The top-level page table has 2^10 entries, and only 5 second-level page tables (2^10 entries each) are actually needed, so the space needed to hold the page tables of the process is 1K × 4 Bytes + 5 × 1K × 4 Bytes = 24 KB.]
96
Problem: Two level page table need
97
Solution: Two level page table need
The base addresses of these tables are stored in the page table [second last level].
Size of page table [second last level]
= 2^22 × 2^2 B
= 2^24 B
99
Solution: Two level page table need
The base addresses of these tables are stored in the page table [third last level].
Size of page table [third last level]
= 2^11 × 2^2 B
= 2^13 B
= page size
Thus, 3 levels are required.
100
Three-level Paging Scheme
101
Hashed Page Tables
• Common in address spaces > 32 bits
• The virtual page number is hashed into a page table
• A page table entry contains a chain of elements hashing to the same
location
– each element = <a virtual page number, frame number>
• Virtual page numbers are compared in this chain searching for a match
– If a match is found, the corresponding physical frame is extracted
102
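A minimal C sketch of a hashed page table lookup as described above; the hash function, table size and element layout are illustrative assumptions, not a prescribed implementation.

/* Sketch: hashed page table with chaining on the virtual page number. */
#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE 1024

typedef struct entry {
    unsigned long vpn;      /* virtual page number */
    unsigned long frame;    /* physical frame number */
    struct entry *next;     /* next element hashing to the same slot */
} entry;

static entry *hash_table[TABLE_SIZE];

static unsigned slot(unsigned long vpn) { return (unsigned)(vpn % TABLE_SIZE); }

static void insert(unsigned long vpn, unsigned long frame) {
    entry *e = malloc(sizeof *e);
    e->vpn = vpn;
    e->frame = frame;
    e->next = hash_table[slot(vpn)];    /* prepend to the chain */
    hash_table[slot(vpn)] = e;
}

/* Walk the chain comparing virtual page numbers; return the frame or -1 if absent. */
static long lookup(unsigned long vpn) {
    for (entry *e = hash_table[slot(vpn)]; e != NULL; e = e->next)
        if (e->vpn == vpn)
            return (long)e->frame;
    return -1;
}

int main(void) {
    insert(0x12345, 7);
    printf("VPN 0x12345 -> frame %ld\n", lookup(0x12345));
    return 0;
}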
Hashed Page Table
103
Inverted Page Table
• Most operating systems implement a separate page table for each
process.
– for ‘n’ processes running on a multiprocessing operating system, there are ‘n’
page tables stored in the memory.
• Sometimes, when a process is very large in size,
– its page table size also increases substantially.
• An inverted page table
– has one entry for every frame of memory
– records which page is in that frame
– is indexed by frame number, not page number!
§ Example: A process of size 2 GB with page size = 512 bytes and page table
entry size = 4 bytes has Number of pages in the process = 2 GB / 512 B = 2^22, so
Page Table Size = 2^22 × 2^2 = 2^24 bytes
104
Inverted Page Table
• An alternate approach is to use the Inverted Page Table structure, which
consists of one page-table entry for every frame of the main memory.
• So the number of page table entries in the Inverted Page Table reduces
to the number of frames in physical memory and a single page table is
used to represent the paging information of all the processes.
105
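A minimal C sketch of an inverted page table lookup: one entry per physical frame, searched by (process id, page number). The structure and names are illustrative assumptions.

/* Sketch: inverted page table with one entry per frame, searched by (pid, page). */
#include <stdio.h>

#define NUM_FRAMES 8

typedef struct {
    int pid;    /* process that owns the page in this frame (-1 if the frame is free) */
    int page;   /* virtual page number stored in this frame */
} ipt_entry;

static ipt_entry ipt[NUM_FRAMES];

/* Return the frame holding (pid, page), or -1 if the page is not in memory. */
static int ipt_lookup(int pid, int page) {
    for (int frame = 0; frame < NUM_FRAMES; frame++)
        if (ipt[frame].pid == pid && ipt[frame].page == page)
            return frame;          /* the index of the matching entry IS the frame number */
    return -1;
}

int main(void) {
    for (int i = 0; i < NUM_FRAMES; i++) ipt[i].pid = -1;
    ipt[3] = (ipt_entry){ .pid = 1, .page = 42 };   /* page 42 of process 1 is in frame 3 */

    printf("frame = %d\n", ipt_lookup(1, 42));
    return 0;
}

In practice this linear search would be far too slow for every memory reference, which is why inverted page tables are usually combined with a hash table on (pid, page).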
Inverted Page Table Architecture
106
Example: Inverted Page Table
If the virtual address space supported is 2^64 bits,
• the page size is 1K,
• the size of the physical memory is 64K,
• the size of a PTE is two bytes, and
• the addressing is at the byte level,
Calculate the size of the page table required for both standard and inverted
page tables.
Standard page table:
• address space is 61 bits in byte addressing (2^64 bits = 2^61 bytes)
• number of pages = 2^61 / 1K = 2^51
• Page Table Size = 2^51 × (PTE size) = 2^52 bytes
Inverted page table:
• Total frames = 64K / 1K = 64
• Page Table Size = 64 × (PTE size) = 128 bytes
107
Segmentation
• Segmentation is a memory management technique in which the memory is
divided into variable-size parts.
• Each part is known as segment which can be allocated to a process.
• There are two types of segmentation
– Virtual memory segmentation: Each process is divided into a number of
segments, not all of which are resident at any one point in time.
– Simple segmentation: Each process is divided into a number of segments,
all of which are loaded into memory at run time, though not necessarily
contiguous.
108
Simple Segmentation
• Paging divides the whole process into pages regardless of the fact that a
process can have related parts (e.g., a function) that need to be loaded in
the same page.
• The operating system doesn't care about the user's view of the process.
• It may divide the same function into different pages, and those pages may or
may not be loaded into memory at the same time.
• It is better to have segmentation, which divides the process into segments.
• Each segment contains the same type of content; for example, the main
function can be included in one segment and the library functions can be
included in another segment.
109
Simple Segmentation
110
Segmentation
• Memory-management scheme that supports user view of memory
• A program is a collection of segments
– A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
111
User’s View of a Program
112
Logical View of Segmentation
[Figure: segments 1–4 in the user's logical view mapped to separate, non-contiguous regions of physical memory.]
Logical-to-Physical Address Translation in
segmentation
114
Segmentation Architecture
• Logical address consists of a two-tuple:
<segment-number, offset>
• Segment table – maps two-dimensional logical addresses to one-dimensional
physical addresses; each table entry has:
– base – contains the starting physical address where the
segment resides in memory
– limit – specifies the length of the segment
• Segment-table base register (STBR) points to the segment
table’s location in memory
• Segment-table length register (STLR) indicates the number of
segments used by a program;
segment number s is legal if s < STLR
115
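A minimal C sketch of this translation, using the segment table from the worked illustration later in this section; the limit check is the one described above, and the code structure is illustrative.

/* Sketch: segment-table translation with the limit check described above. */
#include <stdio.h>

typedef struct { unsigned base, limit; } seg_entry;

/* Segment table taken from the worked illustration below: (base, length) per segment. */
static const seg_entry seg_table[] = {
    { 1219, 700 }, { 2300, 14 }, { 90, 100 }, { 1327, 580 }, { 1952, 96 }
};
#define STLR (sizeof seg_table / sizeof seg_table[0])    /* number of segments */

/* Return the physical address, or -1 to model the trap (addressing error). */
static long translate(unsigned s, unsigned d) {
    if (s >= STLR) return -1;                  /* illegal segment number */
    if (d >= seg_table[s].limit) return -1;    /* offset beyond the segment length */
    return (long)seg_table[s].base + d;
}

int main(void) {
    printf("(0, 430) -> %ld\n", translate(0, 430));   /* 1649     */
    printf("(2, 100) -> %ld\n", translate(2, 100));   /* -1: trap */
    return 0;
}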
Segmentation Architecture (Cont.)
• Protection
– With each entry in segment table associate:
• validation bit = 0 => illegal segment
• read/write/execute privileges
• Protection bits associated with segments; code sharing occurs at
segment level
• Since segments vary in length, memory allocation is a dynamic
storage-allocation problem
• A segmentation example is shown in the following diagram
116
Segmentation Hardware
117
Example of Segmentation
118
Illustration: Segmentation
Problem: Consider the following segment table
Segment No. Base Length
0 1219 700
1 2300 14
2 90 100
3 1327 580
4 1952 96
Which of the following logical addresses will produce a trap (addressing error)?
A. 0, 430
B. 1, 11
C. 2, 100
D. 3, 425
E. 4, 95
Calculate the physical address if no trap is produced.
119
Illustration: Segmentation
In a segmentation scheme, the generated logical address consists of two parts
• Segment Number
• Segment Offset
We know
• Segment Offset must always lie in the range [0, limit-1].
• If the segment offset becomes greater than or equal to the limit of the segment,
then a trap (addressing error) is produced.
120
Illustration: Segmentation
Option-A: 0, 430
Here,
Segment Number = 0
Segment Offset = 430
We have,
In the segment table, limit of segment 0 is 700.
Thus, segment offset must always lie in the range = [0, 700-1] = [0, 699]
Now, since the generated segment offset lies in the above range, the request is
valid.
Therefore, no trap will be produced.
Physical Address = 1219 + 430 = 1649
121
Illustration: Segmentation
Option-B: 1, 11
Here,
Segment Number = 1
Segment Offset = 11
We have,
In the segment table, limit of segment-1 is 14.
Thus, segment offset must always lie in the range = [0, 14-1] = [0, 13]
Now, since the generated segment offset lies in the above range, the request is
valid.
Therefore, no trap will be produced.
Physical Address = 2300 + 11 = 2311
122
Illustration: Segmentation
Option-C: 2, 100
Here,
Segment Number = 2
Segment Offset = 100
We have,
In the segment table, limit of segment-2 is 100.
Thus, segment offset must always lie in the range = [0, 100-1] = [0, 99]
Now, since the generated segment offset does not lie in the above range, the
request is invalid.
Therefore, a trap will be produced.
123
Illustration: Segmentation
Option-D: 3, 425
Here,
Segment Number = 3
Segment Offset = 425
We have,
In the segment table, limit of segment-3 is 580.
Thus, segment offset must always lie in the range = [0, 580-1] = [0, 579]
Now, since the generated segment offset lies in the above range, the request is
valid.
Therefore, no trap will be produced.
Physical Address = 1327 + 425 = 1752
124
Illustration: Segmentation
Option-E: 4, 95
Here,
Segment Number = 4
Segment Offset = 95
We have,
In the segment table, limit of segment-4 is 96.
Thus, segment offset must always lie in the range = [0, 96-1] = [0, 95]
Now, since the generated segment offset lies in the above range, the request is
valid.
Therefore, no trap will be produced.
Physical Address = 1952 + 95 = 2047
Thus, Option-(C) is correct.
125
Pros and Cons
Advantages of Segmentation
• No internal fragmentation
• Average segment size is larger than the actual page size.
• Less overhead
• It is easier to relocate segments than the entire address space.
• The segment table is smaller than the page table in paging.
Disadvantages
• It can have external fragmentation.
• It is difficult to allocate contiguous memory to variable-sized partitions.
• Costly memory management algorithms.
126