
Operating Systems

Memory Management
Fall-2023
Memory Management
• Subdividing memory to accommodate
multiple processes
• Memory needs to be allocated efficiently
to pack as many processes into memory
as possible
Memory Management
Requirements
• Relocation
– Programmer does not know where the
program will be placed in memory when it
is executed
– While the program is executing, it may be
swapped to disk and returned to main
memory at a different location (relocated)
– Memory references in the code must be
translated to actual physical memory addresses
Memory Management
Requirements
• Protection
– Processes should not be able to reference
memory locations in another process
without permission
– Impossible to check absolute addresses in
programs ahead of time, since the program
could be relocated
– Must be checked during execution
• Operating system cannot anticipate all of the
memory references a program will make
Memory Management
Requirements
• Sharing
– Allow several processes to access the same
portion of memory
– Better to allow each process access to the
same copy of the program than to give each
its own separate copy
Memory Management
Requirements
• Logical Organization
– Programs are written in modules
– Modules can be written and compiled
independently
– Different degrees of protection given to
modules (read-only, execute-only)
– Share modules
Memory Management
Requirements
• Physical Organization
– Memory available for a program plus its
data may be insufficient
• Overlaying allows various modules to be
assigned the same region of memory
– Programmer does not know how much
space will be available
Fixed Partitioning
• Equal-size partitions
– any process whose size is less than or equal
to the partition size can be loaded into an
available partition
– if all partitions are full, the operating
system can swap a process out of a partition
– a program may not fit in a partition. The
programmer must design the program with
overlays
Fixed Partitioning
• Main memory use is inefficient. Any
program, no matter how small, occupies
an entire partition. This is called
internal fragmentation.
Placement Algorithm with
Partitions
• Equal-size partitions
– because all partitions are of equal size, it
does not matter which partition is used
• Unequal-size partitions
– can assign each process to the smallest
partition within which it will fit
– queue for each partition
– processes are assigned in such a way as to
minimize wasted memory within a partition
Dynamic Partitioning
• Partitions are of variable length and
number
• Process is allocated exactly as much
memory as required
• Eventually get holes in the memory.
This is called external fragmentation
• Must use compaction to shift processes
so they are contiguous and all free
memory is in one block
Dynamic Partitioning
Placement Algorithms
• The operating system must decide which
free block to allocate to a process
• For this, some policy must be implemented
that accommodates incoming processes while
managing external fragmentation as well
• The following policies are used in
placement decisions
Dynamic Partitioning
Placement Algorithms
• Best-fit algorithm
– Free storage list is kept in Ascending order
– Chooses the block that is closest in size to
the incoming process request
– Worst performer overall
– Since the smallest adequate block is chosen,
the fragment left over is as small as possible
– Thus, memory compaction must be done
more often, resulting in more overhead
Dynamic Partitioning
Placement Algorithm
• First-fit algorithm
– Free storage list is kept in address order or
sometimes in random order
– Fastest
– An incoming process is placed in the
memory in the first available hole large
enough to hold it.
– It also has intuitive appeal in that it allows
the placement decision to be made quickly.
Dynamic Partitioning
Placement Algorithm
• Next-fit
• A variation of First Fit
• It begins each search for an available hole at
the point where the previous search ended.
Dynamic Partitioning
Placement Algorithm
• Worst-fit
• Free storage list is kept in Descending order
• Although it seems to be an illogical policy, it
gives the best results in terms of fragmentation!
• It places a process in the memory in the hole in
which it fits worst, i.e., the largest possible hole.
• After placing the process in the largest hole, the
remaining hole is also large and is thus able to
hold a relatively large new process.
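
To make the four policies concrete, here is a minimal Python sketch (not from the slides; the hole list and sizes are made up) of first-fit, best-fit, and worst-fit over a list of free hole sizes. Next-fit would be first-fit with a cursor that remembers where the previous search ended.

```python
def first_fit(holes, size):
    """Index of the first hole large enough, scanning in address order."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that still fits (closest in size)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole, provided it fits."""
    if not holes:
        return None
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= size else None

holes = [20, 50, 10, 90, 35]   # free hole sizes, in address order
print(first_fit(holes, 30))    # 1 -> the 50-unit hole
print(best_fit(holes, 30))     # 4 -> the 35-unit hole (closest fit)
print(worst_fit(holes, 30))    # 3 -> the 90-unit hole (largest)
```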
Buddy System
• The entire space available is treated as a
single block of size 2^U
• If a request of size s is such that
2^(U-1) < s <= 2^U, the entire block is allocated
– Otherwise the block is split into two equal
buddies
– The process continues until the smallest block
greater than or equal to s is generated
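
A minimal sketch of the size calculation, assuming the whole space is 2^U units. A real buddy allocator would also track per-size free lists and coalesce buddy blocks on release.

```python
def buddy_block_size(s, U):
    """Smallest power-of-two block >= s obtained by repeatedly
    splitting the initial 2**U block into equal buddies."""
    if s > 2 ** U:
        return None                 # request cannot be satisfied
    block = 2 ** U
    while block // 2 >= s:          # keep splitting while a half still fits
        block //= 2
    return block

print(buddy_block_size(100, U=10))  # 128: 1024 -> 512 -> 256 -> 128
```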
Relocation
• When program loaded into memory the actual
(absolute) memory locations are determined
• A process may occupy different partitions
which means different absolute memory
locations during execution (from swapping)
• Compaction will also cause a program to
occupy a different partition which means
different absolute memory locations
Addresses
• Logical
– reference to a memory location independent of the
current assignment of data to memory
– translation must be made to the physical address
• Relative
– address expressed as a location relative to some
known point
• Physical
– the absolute address or actual location in main
memory
Registers Used during
Execution
• Base register
– starting address for the process
• Bounds register
– ending location of the process
• These values are set when
– the process is loaded, and
– when the process is swapped in
Registers Used during
Execution
• The value of the base register is added to
a relative address to produce an absolute
address
• The resulting address is compared with
the value in the bounds register
• If the address is not within bounds, an
interrupt is generated to the operating
system
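
A sketch of what the hardware does on every memory reference, assuming the bounds register holds the process's last valid absolute address:

```python
def translate(relative_addr, base, bounds):
    """Add the base register, then check against the bounds register."""
    absolute = base + relative_addr
    if absolute > bounds:                 # outside the process's partition
        raise MemoryError("bounds violation: interrupt to the OS")
    return absolute

print(hex(translate(0x40, base=0x1000, bounds=0x1FFF)))  # 0x1040
```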
Paging
• Partition memory into small equal-size chunks
and divide each process into the same size
chunks
• The chunks of a process are called pages and
chunks of memory are called frames
• Operating system maintains a Page Table for
each process
– contains the frame location for each page in the
process
– a memory address consists of a page number and
an offset within the page
[Figure: Page Tables for Example]
Address Translation Scheme
• Address generated by CPU is divided
into:
– Page number (p) – used as an index into a
page table which contains the base address of
each page in physical memory
– Page offset (d) – combined with the base
address to define the physical memory
address that is sent to the memory unit
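
For example, with hypothetical 4 KiB pages the split is a shift and a mask; the page-table contents below are made up:

```python
PAGE_SIZE = 4096                  # assumed page size
OFFSET_BITS = 12                  # log2(PAGE_SIZE)
page_table = {0: 5, 1: 2, 2: 7}   # hypothetical page # -> frame #

def translate(logical_addr):
    p = logical_addr >> OFFSET_BITS        # page number (high-order bits)
    d = logical_addr & (PAGE_SIZE - 1)     # offset (low-order bits)
    frame = page_table[p]                  # page-table lookup
    return (frame << OFFSET_BITS) | d      # physical address

print(hex(translate(0x1ABC)))   # page 1 -> frame 2, so 0x2abc
```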

[Figure: Address Translation Architecture]
[Figure: Paging Example]
[Figure: Free Frames – the free-frame list before and after allocation]
[Figure: Valid (v) or Invalid (i) Bit In A Page Table]
Implementation of Page Table
• Page table is kept in main memory.
• Page-table base register (PTBR) points to the page
table.
• Page-table length register (PTLR) indicates size of the
page table.
• In this scheme every data/instruction access requires
two memory accesses. One for the page table and one
for the data/instruction.
• The two memory access problem can be solved by the
use of a special fast-lookup hardware cache called
associative memory or translation look-aside buffers
(TLBs)
Associative Memory
• Associative memory – parallel search; each
entry holds a (Page #, Frame #) pair
• Address translation for page number p:
– If p is in an associative register, get the
frame # out
– Otherwise get the frame # from the page
table in memory
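
A sketch of the lookup order, with a plain dict standing in for the parallel-search hardware:

```python
page_table = {0: 5, 1: 2, 2: 7}   # hypothetical page # -> frame #
tlb = {}                          # fast cache of recent translations

def frame_of(p):
    if p in tlb:                  # TLB hit: no extra memory access
        return tlb[p]
    frame = page_table[p]         # TLB miss: read the page table in memory
    tlb[p] = frame                # remember the translation
    return frame

print(frame_of(1))   # miss, fills the TLB
print(frame_of(1))   # hit
```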
[Figure: Paging Hardware With TLB]
Demand Paging
 Demand paging: allows pages to be loaded into
memory only when they are actively referenced
 Remaining pages stay on disk
 Provides the illusion of infinite physical memory
Demand Paging Mechanism
 Page tables sometimes need to point to disk
locations (as opposed to memory locations)
 A table entry needs a present (valid) bit
 Present (1) means a page is in memory
 Not present (0) means that there is a page fault
Page Fault
 Hardware trap
 OS performs the following steps while running other
processes (analogy: firing and hiring someone)
 Choose a page
 If the page has been modified, write its contents to disk
 Change the corresponding page table entry and TLB entry
 Load new page into memory from disk
 Update page table entry
 Continue the thread
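
A toy sketch of those steps, with made-up data structures (a FIFO victim choice and a dict standing in for the disk):

```python
from dataclasses import dataclass

@dataclass
class PTE:
    frame: int = -1
    present: bool = False   # the present (valid) bit from the previous slide
    dirty: bool = False

def handle_page_fault(p, page_table, resident, disk):
    victim = resident.pop(0)                  # 1. choose a page (FIFO here)
    if page_table[victim].dirty:              # 2. write it back if modified
        disk[victim] = "victim contents"
    frame = page_table[victim].frame
    page_table[victim] = PTE()                # 3. invalidate its entry (and TLB)
    page_table[p] = PTE(frame, present=True)  # 4-5. load the page, update entry
    resident.append(p)                        # 6. the faulting thread continues

page_table = {0: PTE(0, True, True), 1: PTE(1, True), 2: PTE()}
resident, disk = [0, 1], {}
handle_page_fault(2, page_table, resident, disk)
print(page_table[2], disk)   # page 2 now resident in frame 0; page 0 written back
```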
Segmentation
 Memory-management scheme that supports user view of memory.
 A program is a collection of segments. A segment is a logical unit
such as:
main program,
procedure,
function,
method,
object,
local variables, global variables,
common block,
stack,
symbol table, arrays
[Figure: User’s View of a Program]
[Figure: Logical View of Segmentation – segments 1–4 of the user space mapped to scattered regions of physical memory]
Segmentation Architecture
 Logical address consists of a two tuple:
<segment-number, offset>
 Segment table – maps the two-dimensional logical
addresses to one-dimensional physical addresses;
each table entry has:
 base – contains the starting physical address where
the segment resides in memory
 limit – specifies the length of the segment
 Segment-table base register (STBR) points to the
segment table’s location in memory
 Segment-table length register (STLR) indicates the
number of segments used by a program;
segment number s is legal if s < STLR
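
A sketch of the check-and-translate step, with a made-up segment table:

```python
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # (base, limit) pairs
STLR = len(segment_table)       # number of segments used by the program

def translate(s, d):
    if s >= STLR:               # segment number must satisfy s < STLR
        raise MemoryError("illegal segment: trap to the OS")
    base, limit = segment_table[s]
    if d >= limit:              # offset must lie within the segment
        raise MemoryError("offset beyond segment limit: trap to the OS")
    return base + d             # one-dimensional physical address

print(translate(2, 53))   # 4300 + 53 = 4353
```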
Segmentation Architecture (Cont.)
 Relocation
 dynamic
 by segment table
 Sharing
 shared segments
 same segment number
 Allocation
 first fit/best fit
 external fragmentation
Segmentation Architecture (Cont.)
 Protection. With each entry in the segment table
associate:
 validation bit = 0 → illegal segment
 read/write/execute privileges
 Protection bits are associated with segments; code
sharing occurs at the segment level.
 Since segments vary in length, memory allocation is
a dynamic storage-allocation problem.
 A segmentation example is shown in the following
diagram

[Figure: Segmentation Hardware]
[Figure: Example of Segmentation]
[Figure: Sharing of Segments]
Page Replacement Algorithms
• These policies are concerned with
deciding which page or segment to
displace/remove to make room for an
incoming page (or segment) when the
main memory is already full.
• The strategies are as follows:
1. The Principle of Optimality
• In this strategy, the page replaced is the
one that will not be used again for the
longest time into the future.
• Although it is possible to demonstrate
the optimality of this strategy, it cannot
be fully applied since the future cannot
be predicted.
2. Random Page Replacement
• It’s a simple technique in which all pages in
the main storage have an equal probability
of being selected for replacement.
• This strategy could select any page for
replacement, including the next page to be
referenced (which is, of course the worst
page to replace).
• Since it’s a hit-or-miss approach, this
scheme is rarely used.
3. FIFO (First-In-First-Out) Page
Replacement
• Each page is time-stamped as it enters (arrives) the main
memory.
• The page which has stayed longest in the main memory
is chosen to be replaced (since it had arrived first).
• The intuition behind this strategy is that this
page has had its chance, and it is time to give
another page a chance.
• Negative: Under this scheme a heavily used page is
likely to be replaced, because it has been in the memory
for a long time (remember the time-stamp!). And it’s the
same page that has been in constant use and is likely to
be used again in the future.
• Implementation: This strategy can also be
implemented with a simple FIFO queue.
• As each page arrives, it is placed at the tail
of the queue and pages are replaced from
the head of the queue.
• -ve: The overhead of keeping the
time-stamp.
• Belady’s Anomaly (Reading Homework
Assignment).
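
A minimal simulator for this strategy (a sketch; it replays the reference stream used in the example below):

```python
from collections import deque

def fifo_faults(refs, nframes):
    queue, faults = deque(), 0     # arrival order: head = oldest page
    for page in refs:
        if page not in queue:      # page fault
            faults += 1
            if len(queue) == nframes:
                queue.popleft()    # replace the page at the head
            queue.append(page)     # newcomer goes to the tail
    return faults

print(fifo_faults("BEERBAREBEAR", 3))   # 7, matching the trace below
```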
Example
 A process makes references to 4 pages: A,
B, E, and R
 Reference stream: BEERBAREBEAR
 Physical memory size: 3 frames
FIFO

Memory page   B  E  E  R  B  A  R  E  B  E  A  R
      1       B  .  .  .  *  A  .  .  .  .  *  R
      2       .  E  *  .  .  .  .  *  B  .  .  .
      3       .  .  .  R  .  .  *  .  .  E  .  .

 7 page faults (* marks a hit on the page already resident)
4. LRU (Least-Recently-Used)
• This strategy selects that page for replacement
that has not been used for the longest time.
• The principle here is that “the recent past is a
good indicator of the near future”.
• LRU requires that each page be time-stamped
whenever it is referenced.
• -ve: This strategy requires substantial overhead,
and thus although it’s appealing, it is not often
implemented in current systems. However,
lower overhead strategies that approximate
LRU are used.
• Implementation: This strategy can be implemented
with a list structure containing one entry for each
occupied page frame.
– Each time a page frame is referenced, the entry
for that page is placed at the head of the list.
– Older entries migrate toward the tail of the list.
• When a page must be replaced to make room for an
incoming page, the entry at the tail of the list is
selected, the corresponding page is freed, and the
incoming page is placed at the head of the list
because that page is now the one that has been most
recently used.
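
A sketch of the same idea using an ordered dictionary as the list, with the most recently used entry kept at the end:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    lru, faults = OrderedDict(), 0
    for page in refs:
        if page in lru:
            lru.move_to_end(page)         # referenced: becomes most recent
        else:
            faults += 1
            if len(lru) == nframes:
                lru.popitem(last=False)   # evict the least recently used
            lru[page] = True
    return faults

print(lru_faults("BEERBAREBEAR", 3))   # 8, matching the trace below
```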
LRU

Memory page   B  E  E  R  B  A  R  E  B  E  A  R
      1       B  .  .  .  *  .  .  E  .  *  .  .
      2       .  E  *  .  .  A  .  .  B  .  .  R
      3       .  .  .  R  .  .  *  .  .  .  A  .

 8 page faults (* marks a hit on the page already resident)
5. LFU (Least-Frequently-Used)
• The consideration here is “how intensive
the use of each page has been”.
• The page to replace is that page that is least
frequently used or least intensively referenced.
• -ve: The wrong page could quite easily be selected
for replacement, e.g., the least frequently used
page could be the page brought into main memory
most recently. This page has been used once
whereas all other pages in the main storage could
have been used more than once. So, this technique
replaces this page when in fact it would be highly
likely to be used immediately.
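
A sketch of LFU over the same reference stream; ties between equally infrequent pages are broken arbitrarily here:

```python
def lfu_faults(refs, nframes):
    counts, faults = {}, 0                # resident page -> reference count
    for page in refs:
        if page in counts:
            counts[page] += 1             # a hit bumps the count
        else:
            faults += 1
            if len(counts) == nframes:
                victim = min(counts, key=counts.get)   # least frequently used
                del counts[victim]
            counts[page] = 1
    return faults

print(lfu_faults("BEERBAREBEAR", 3))   # 7, matching the trace below
```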
LFU

Memory page   B  E  E  R  B  A  R  E  B  E  A  R
      1       B  .  .  .  2  .  .  .  3  .  .  .
      2       .  E  2  .  .  .  .  3  .  4  .  .
      3       .  .  .  R  .  A  R  .  .  .  A  R

 7 page faults (a number marks a hit and shows the page’s updated reference count)
6. NUR (Not-Used-Recently)
• Pages not used recently are not likely to be used in the near
future and they may be replaced with incoming pages.
• This is a scheme for approximating LRU with little
overhead.
• Implementation: NUR is implemented with the
addition of two hardware bits per page:
– Reference bit = 0 if the page has not been referenced,
1 if the page has been referenced
– Dirty bit = 0 if the page has not been modified,
1 if the page has been modified
• Initially, the reference bits of all pages are set to 0.
• If a particular page is referenced, its reference bit is set to 1.
• Initially, the dirty bits of all pages are also set to 0.
• Whenever a page is modified, its dirty bit is set to 1.
• When a page is to be replaced, the OS first tries to find a
page which has not been referenced (otherwise, we have no
choice but to replace a referenced page).
• If a page has not been referenced, we check whether it has
been modified or not.
• If it has not been modified, we replace it on the basis that
there is less overhead involved than in replacing a modified
page that must be written back to secondary storage.
• Otherwise, a modified page is replaced.
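
A sketch of the victim choice: because Python compares tuples left to right, taking the minimum of (reference bit, dirty bit) pairs implements exactly this preference order (unreferenced before referenced, and among those, clean before dirty):

```python
def nur_victim(pages):
    """pages maps page -> (reference_bit, dirty_bit)."""
    return min(pages, key=lambda p: pages[p])

pages = {"A": (1, 1), "B": (0, 1), "C": (1, 0), "D": (0, 0)}
print(nur_victim(pages))   # D: neither referenced nor modified
```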
The Principle of Locality
• The Principle of Locality states that
“processes tend to reference storage in
non-uniform, highly localized patterns”.
• Locality is of two types as follows:
– Temporal Locality
– Spatial Locality
Temporal Locality
• Temporal Locality relates to Time.
Example: Weather w.r.t. time.
• It means that storage locations
referenced recently are likely to be
referenced in the near future.
• Supporting this observation are stacks,
subroutines, looping and variables used
for counting and totaling.
Spatial Locality
• Spatial Locality relates to Space.
Example: Weather w.r.t. space.
• It means that storage references tend to be
clustered, so that once a location is
referenced, it is highly likely that nearby
this observation are array traversals,
sequential code execution and the tendency
of programmers to place related variable
definitions near one another.
Working Set Theory
• Denning formulated the working set theory of program
behavior based upon observations of locality.
• A working set is a collection of pages a process is
actively referencing.
• The real working set of a process is the set of pages that
must be in main memory for a process to execute
efficiently.
• Denning maintained that for a program to run efficiently,
its working set of pages must be maintained in the main
memory.
• Otherwise, excessive paging activity (thrashing)
might occur as the program repeatedly requests
pages from secondary storage.
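
In Denning's formulation, the working set W(t, Δ) is the set of distinct pages referenced in the most recent Δ references up to time t. A sketch, where the window size Δ is a tuning parameter:

```python
def working_set(refs, t, delta):
    """Distinct pages among the last `delta` references up to time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = list("BEERBAREBEAR")
print(working_set(refs, t=7, delta=4))   # {'B', 'A', 'R', 'E'}
```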
