Unit3 - Memory Management
Fixed Partitioning
• Equal-size partitions
– any process whose size is less than or equal to the partition size can be loaded into an available partition
• The operating system can swap out a process if all partitions are full and no process is in the Ready or Running state
• A program may be too big to fit in a partition
– the program needs to be designed with the use of overlays
• Main memory utilization is inefficient
– any program, regardless of size, occupies an entire partition
– internal fragmentation: wasted space due to the block of data loaded being smaller than the partition
• Using unequal-size partitions helps lessen these problems
– programs up to 16M can be accommodated without overlays
– partitions smaller than 8M allow smaller programs to be accommodated with less internal fragmentation
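As a rough illustration of internal fragmentation under equal-size partitioning, here is a minimal sketch; the function name and the 3M/8M sizes are invented for the example, not taken from the slides:

```python
def internal_fragmentation(process_size_mb, partition_size_mb):
    """Wasted space when one process occupies one fixed-size partition."""
    if process_size_mb > partition_size_mb:
        raise ValueError("process too big for the partition: overlays needed")
    return partition_size_mb - process_size_mb

# A 3M process loaded into an 8M partition wastes 5M inside that partition.
internal_fragmentation(3, 8)  # -> 5
```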
Memory Assignment
• The number of partitions specified at system generation time limits the number of active processes in the system
• Small jobs will not utilize partition space efficiently
Dynamic Partitioning
• Partitions are of variable length and number
• Process is allocated exactly as much memory as it requires
• This technique was used by IBM’s mainframe operating system, OS/MVT
Effect of Dynamic Partitioning
Dynamic Partitioning
Placement Algorithms
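The figure for this slide is not in the text. The classic placement algorithms for dynamic partitioning are first-fit, best-fit, and next-fit; a minimal sketch of the first two (hole sizes and names are invented for illustration):

```python
def first_fit(holes, size):
    """Index of the first free hole large enough, or None (holes = list of sizes)."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest free hole that still fits, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

holes = [10, 4, 20, 6]
first_fit(holes, 5)  # -> 0  (the 10-unit hole is scanned first)
best_fit(holes, 5)   # -> 3  (the 6-unit hole leaves the least waste)
```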
Memory Configuration Example
Buddy System
• A compromise between the fixed and dynamic partitioning schemes
• Space available for allocation is treated as a
single block
• Memory blocks are available of size 2^K words, L ≤ K ≤ U, where
– 2^L = smallest size block that is allocated
– 2^U = largest size block that is allocated; generally 2^U is the size of the entire memory available for allocation
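A runnable sketch of buddy allocation as just described: a request is rounded up to the nearest power of two between 2^L and 2^U, and larger free blocks are split in half until a block of that size exists. The values L = 4, U = 10 and the function name are assumptions for the example:

```python
L, U = 4, 10                      # blocks from 2**4 = 16 up to 2**10 = 1024 words
free_lists = {k: [] for k in range(L, U + 1)}
free_lists[U] = [0]               # initially one free block: all of memory, at address 0

def alloc(size):
    """Return (address, order k) of a free block of at least `size` words."""
    k = L
    while (1 << k) < size:        # round the request up to the next power of two
        k += 1
    j = k
    while j <= U and not free_lists[j]:
        j += 1                    # find the smallest free block that fits
    if j > U:
        raise MemoryError("no block large enough")
    addr = free_lists[j].pop()
    while j > k:                  # split repeatedly, leaving one buddy free each time
        j -= 1
        free_lists[j].append(addr + (1 << j))
    return addr, k

addr, order = alloc(100)          # 100 words -> rounded up to 2**7 = 128
```

After this call the 1024-word block has been split into free buddies of 512, 256, and 128 words, with the allocated 128-word block at address 0.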
Buddy System Example
Addresses
Paging
• Partition memory into equal fixed-size chunks (frames) that are relatively small
• Process is also divided into small fixed-size chunks (pages) of the same size
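With same-size pages and frames, a logical address splits into a page number and an offset, and translation just swaps the page number for a frame number. A minimal sketch; the 1 KB page size and the page-table contents are invented for the example:

```python
PAGE_SIZE = 1024                   # assumed page size in bytes (a power of two)
page_table = {0: 5, 1: 3, 2: 7}    # page number -> frame number (illustrative)

def split_address(logical_addr):
    """Split a logical address into (page number, offset within page)."""
    return logical_addr // PAGE_SIZE, logical_addr % PAGE_SIZE

def translate(logical_addr):
    """Logical -> physical address via the page table."""
    page, offset = split_address(logical_addr)
    frame = page_table[page]       # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

split_address(2500)  # -> (2, 452)
translate(2500)      # page 2 -> frame 7 -> 7*1024 + 452 = 7620
```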
Assignment of Process to Free Frames
Page Table
Translation Lookaside Buffer (TLB)
• The TLB only contains some of the page table entries, so we cannot simply index into the TLB based on page number
– each TLB entry must include the page number as well as the complete page table entry
• The processor is equipped with hardware that allows it to interrogate simultaneously a number of TLB entries to determine if there is a match on page number
Direct Versus Associative Lookup
TLB and Cache Operation
Page Size
• The smaller the page size, the less internal fragmentation
– however, more pages are required per process
– more pages per process means larger page tables
– for large programs in a heavily multiprogrammed environment
some portion of the page tables of active processes must be in
virtual memory instead of main memory
– the physical characteristics of most secondary-memory devices
favor a larger page size for more efficient block transfer of data
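The "smaller pages mean larger page tables" point is simple arithmetic; here is an illustration for an assumed 4 MB process with 4-byte page-table entries (both sizes invented for the example):

```python
PROCESS_SIZE = 4 * 2**20           # assumed process size: 4 MB
ENTRY_SIZE = 4                     # assumed bytes per page-table entry

def page_table_bytes(page_size):
    """Size of a one-level page table covering the whole process."""
    pages = PROCESS_SIZE // page_size
    return pages * ENTRY_SIZE

page_table_bytes(512)    # 8192 pages  -> 32768-byte table
page_table_bytes(4096)   # 1024 pages  -> 4096-byte table
```

Eight-fold larger pages shrink the table eight-fold, at the cost of more internal fragmentation per page.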
Paging Behavior of a Program
Example: Page Sizes
• Contemporary programming techniques used in large programs tend to decrease the locality of references within a process
Segmentation
• Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments
Segment Organization
• Demand Paging
– only brings pages into main memory when a reference is made
to a location on the page
– many page faults when process is first started
– principle of locality suggests that as more and more pages are
brought in, most future references will be to pages that have
recently been brought in, and page faults should drop to a very
low level
Prepaging
• Prepaging
– pages other than the one demanded by a page fault are brought
in
– exploits the characteristics of most secondary memory devices
– if pages of a process are stored contiguously in secondary
memory it is more efficient to bring in a number of pages at one
time
– ineffective if extra pages are not referenced
– should not be confused with “swapping”
Placement Policy
Least Recently Used (LRU)
• Replaces the page that has not been referenced for the longest time
• By the principle of locality, this should be the page least likely to
be referenced in the near future
• Difficult to implement
– one approach is to tag each page with the time of last
reference
• this requires a great deal of overhead
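A software simulation of LRU replacement makes the policy concrete (real implementations need hardware help to track every reference, which is the overhead noted above). The reference string below is an invented illustration:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults for reference string `refs` with `frames` page frames."""
    resident = OrderedDict()                  # resident pages, oldest reference first
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                       # miss: page fault
            if len(resident) == frames:
                resident.popitem(last=False)  # evict the least recently used page
            resident[page] = True
    return faults

lru_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3)  # -> 7 faults
```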
LRU Example
First-in-First-out (FIFO)
Fixed-allocation
• gives a process a fixed number of frames in main memory within which to execute
– when a page fault occurs, one of the pages of that process must be replaced

Variable-allocation
• allows the number of page frames allocated to a process to be varied over the lifetime of the process
• The scope of a replacement strategy can be categorized as global or local
– both types are activated by a page fault when there are no free page frames
VM Policies - Review
• Key issue: performance, i.e., minimizing the rate of page faults
Fixed Allocation, Local Scope
Working Set of Process as Defined by Window Size W(t, Δ)
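The working set W(t, Δ) is the set of pages referenced during the last Δ references, ending at virtual time t (both measured in references). A minimal sketch with an invented reference string:

```python
def working_set(refs, t, delta):
    """W(t, delta): pages touched by the last `delta` references ending at time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 2, 4, 4, 4]
working_set(refs, 7, 3)   # last 3 references -> {4}
working_set(refs, 5, 4)   # references at times 2..5 -> {1, 2, 3, 4}
```

A larger window Δ can only grow the working set, which is why Δ acts as a knob between resident-set size and fault rate.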
Page Fault Frequency
(PFF)
UNIX SVR4 Memory Management Parameters (page 1 of 2)
Table 8.6
UNIX SVR4 Memory Management Parameters (page 2 of 2)
• The page frame data table is used for page replacement
• Pointers are used to create lists within the table
– all available frames are linked together in a list of free frames available for bringing in pages
– when the number of available frames drops below a certain threshold, the kernel will steal a number of frames to compensate
“Two Handed” Clock Page Replacement
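In the two-handed clock, a front hand sweeps through the frames clearing each reference bit, and a back hand, a fixed "handspread" behind, reclaims any frame whose bit is still clear when it arrives. A toy snapshot of one pass (not SVR4's actual code; frame count and the set of re-referenced frames are invented):

```python
def sweep(n_frames, used_between):
    """Frames the back hand would reclaim in one pass.

    The front hand has already cleared every reference bit; `used_between`
    is the set of frames referenced again before the back hand reaches them.
    """
    ref = [0] * n_frames
    for f in used_between:
        ref[f] = 1                 # re-referenced between the two hands: keep
    return [f for f in range(n_frames) if ref[f] == 0]

sweep(5, {1, 3})   # frames 1 and 3 were used again -> reclaim [0, 2, 4]
```

The handspread controls how much time a page has to prove it is still in use before it is stolen.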
• The kernel generates and destroys small tables and buffers frequently during the course of execution, each of which requires dynamic memory allocation
• Most of these blocks are significantly smaller than typical pages (therefore paging would be inefficient)
• Allocations and free operations must be made as fast as possible
• The technique adopted for SVR4 is a lazy buddy system
• UNIX often exhibits steady-state behavior in kernel memory demand
– i.e. the amount of demand for blocks of a particular size varies slowly in time
• The lazy buddy system defers coalescing until it seems likely that it is needed, and then coalesces as many blocks as possible
Lazy Buddy System Algorithm
Linux
Memory Management
• A buddy algorithm is used so that memory for the kernel can be allocated and deallocated in units of one or more pages
• The page allocator alone would be inefficient because the kernel requires small short-term memory chunks in odd sizes
• Slab allocation
– used by Linux to accommodate small chunks
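A toy slab-style cache captures the idea: objects of one small size are carved out of page-sized slabs, and freed objects are recycled without going back to the page allocator. This is an invented sketch, not the real Linux slab API (`kmem_cache_*`):

```python
PAGE = 4096                            # assumed page size in bytes

class SlabCache:
    """Cache of fixed-size objects carved from page-sized slabs."""

    def __init__(self, obj_size):
        self.obj_size = obj_size
        self.objs_per_slab = PAGE // obj_size
        self.free = []                 # free slots, each a (slab number, index) pair
        self.slabs = 0                 # pages obtained from the page allocator

    def alloc(self):
        if not self.free:              # grow: carve one fresh page into slots
            s = self.slabs
            self.slabs += 1
            self.free = [(s, i) for i in range(self.objs_per_slab)]
        return self.free.pop()         # fast path: just a list pop

    def release(self, obj):
        self.free.append(obj)          # recycled without touching the page allocator

cache = SlabCache(128)                 # e.g. a cache of 128-byte kernel buffers
a = cache.alloc()
cache.release(a)
b = cache.alloc()                      # reuses the freed slot: b == a
```

Because alloc and release are list operations on pre-carved slots, they meet the speed requirement noted above for frequent small kernel allocations.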
Windows
Memory Management