MEMORY MANAGEMENT

• Basics of memory management


• Swapping
• Virtual memory
• Page replacement
• Segmentation
Basics of memory management
• Program must be brought into memory and
placed within a process for it to be run
• Input queue – collection of processes on the
disk that are waiting to be brought into memory
to run the program
• User programs go through several steps before
being run
• Subdividing memory to accommodate multiple
processes
• Memory needs to be allocated to ensure a
reasonable supply of ready processes to
consume available processor time
Basics of memory management
• Memory management is the task carried out by
the OS and hardware to accommodate multiple
processes in main memory
• If only a few processes can be kept in main
memory, then much of the time all processes will
be waiting for I/O and the CPU will be idle
• Hence, memory needs to be allocated efficiently
in order to pack as many processes into memory
as possible
Basics of memory management
Memory Management Requirements
• Relocation
– Programmer does not know where the
program will be placed in memory when it is
executed
– While the program is executing, it may be
swapped to disk and returned to main
memory at a different location (relocated)
– Memory references must be translated in the
code to actual physical memory address
Basics of memory management
Memory Management Requirements
• Protection
– Processes should not be able to reference
memory locations in another process without
permission
– Impossible to check absolute addresses at
compile time → must be checked at run time
– Memory protection requirement must be
satisfied by the processor (hardware) rather
than the operating system (software)
• Operating system cannot anticipate all of the
memory references a program will make
Basics of memory management
Memory Management Requirements
• Sharing
– Allow several processes to access the same
portion of memory
– if a number of processes are executing the
same program, it is advantageous to allow
each process access to the same copy of the
program rather than have their own separate
copy
– Processes that are cooperating on some task
may need to share access to the same data
structure
Basics of memory management
Memory Management Requirements
• Logical Organization
– Main memory in a computer system is organized as a
linear, or one-dimensional, address space, consisting
of a sequence of bytes or words
– But programs are written in modules
– If HW and OS can deal with user programs and data
in the form of modules of some sort, then a number of
advantages can be realized
• Modules can be written and compiled independently
• Different degrees of protection given to modules (read-only,
execute-only)
• Share modules among processes
Basics of memory management
Memory Management Requirements
• Physical Organization
– Secondary memory of large capacity can be provided for
long-term storage of programs and data, while a smaller
main memory holds programs and data currently in use
– Memory available for a program plus its data may be
insufficient
• overlaying lets modules share the same region of
memory, switching them in and out as needed
– Programmer does not know how much space will be
available
– moving information between these two levels of memory is
a major concern of memory management (OS)
Basics of memory management
• Memory management systems can be
divided into two classes:
– Those that move processes back and forth
between main memory and disk during
execution (swapping and paging)
– Those that do not → assume main memory is
sufficient to hold all the programs at once
• Two simple memory management
schemes
– Mono-programming
– Multiprogramming
Basics of memory management
• Mono-programming
– to run just one program at a time, sharing the
memory between that program and the
operating system
– Three variations:
• OS in RAM at the bottom of memory, with the
user program above it
• OS in ROM at the top of memory, with the
user program in RAM below it
• Device drivers in ROM at the top, the user
program in RAM, and the OS in RAM at the
bottom
Basics of memory management
Multiprogramming
• Although the following simple memory
management techniques are not used in
modern OS, they lay the ground for a
proper discussion of virtual memory
– fixed partitioning
– dynamic partitioning
– simple paging
– simple segmentation
Basics of memory management
Fixed Partitioning
• Partition main
memory into a set of
non-overlapping
regions called
partitions
• Partitions can be of
equal or unequal
sizes
Basics of memory management
Equal-sized fixed partitions
• Any program, no matter how small, occupies an
entire partition.
• If there is an available partition, a process can
be loaded into that partition
• because all partitions are of equal size, it does not matter
which partition is used
• If all partitions are occupied by blocked
processes, choose one process to swap out to
make room for the new process
• Main memory use is inefficient.
– This is called internal fragmentation.
Basics of memory management
Unequal-sized fixed partitions
• Any process whose size is less than or equal to
a partition size can be loaded into the partition
• If all partitions are occupied, the operating
system can swap a process out of a partition
• Two types:
– Using multiple queue
– Using single queue
• Multiple queue
– When a job arrives, it can be put into the input queue
for the smallest partition large enough to hold it
– Problem
• the queue for a large partition is empty but the
queue for a small partition is full
Basics of memory management
Unequal-sized fixed partitions
• Assign each process to the smallest partition
within which it will fit
• A queue for each partition size tries to
minimize internal fragmentation
• Example layout (bottom to top): OS, Partition 1
(200K), Partition 2 (300K), Partition 3 (500K),
Partition 4 (900K)
Basics of memory management
Unequal-sized fixed partitions
• Single queue
– maintain a single queue and whenever a partition
becomes free, the job closest to the front of the queue
that fits in it could be loaded into the empty partition
and run
– Problem
• waste a large partition on a small job
– Solution
• search the whole input queue whenever a partition becomes
free and pick the largest job that fits
• But this algorithm discriminates against small jobs as being
unworthy of having a whole partition, whereas usually it is
desirable to give the smallest jobs (often interactive jobs) the
best service, not the worst.
Basics of memory management
Unequal-sized fixed partitions
• Single queue
– Ways to allow small jobs
• To have at least one small partition around.
– Such a partition will allow small jobs to run
without having to allocate a large partition for
them
• Another approach is to have a rule stating
that a job that is eligible to run may not be
skipped over more than k times.
– Each time it is skipped over, it gets one point.
When it has acquired k points, it may not be
skipped again.
Basics of memory management
Dynamic Partitioning
• Partitions are of variable length and number
• Each process is allocated exactly as much
memory as it requires
• Eventually holes are formed in main memory.
– This is called external fragmentation
• Must use compaction to shift processes so
they are contiguous and all free memory is in
one block
• Used in IBM’s OS/MVT (Multiprogramming with
a Variable number of Tasks)
Basics of memory management
Dynamic Partitioning
• A hole of 64K is left after loading 3 processes: not
enough room for another process
• Eventually each process is blocked.
• The OS swaps out process 2 to bring in process 4
Basics of memory management
Dynamic Partitioning
• Another hole of 96K is created
• Eventually each process is blocked.
• The OS swaps out process 1 to bring in process 2
again, and another hole of 96K is created...
• Compaction would produce a single hole of 256K
Basics of memory management
Dynamic Partitioning
• Placement algorithms are used to decide which
free block to allocate to a process
• Goal: to reduce the use of compaction
• Possible algorithms:
– Best-fit: choose smallest hole
– First-fit: choose first hole from beginning
– Next-fit: choose first hole from last placement
– Worst-fit: Choose the largest available hole
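The four placement policies can be sketched over a list of free holes. This is a minimal illustration, not a full allocator; the `(start, size)` hole representation and the function name are assumptions for the example.

```python
# Sketch of the four placement algorithms over a list of free holes.
# Holes are (start, size) tuples; the representation is illustrative.

def place(holes, size, policy, last=0):
    """Return the index of the chosen hole, or None if no hole fits."""
    fits = [(i, h) for i, h in enumerate(holes) if h[1] >= size]
    if not fits:
        return None
    if policy == "first":          # first hole from the beginning
        return fits[0][0]
    if policy == "best":           # smallest hole that fits
        return min(fits, key=lambda f: f[1][1])[0]
    if policy == "worst":          # largest available hole
        return max(fits, key=lambda f: f[1][1])[0]
    if policy == "next":           # first fit, starting from last placement
        after = [i for i, _ in fits if i >= last]
        return after[0] if after else fits[0][0]

holes = [(0, 100), (300, 50), (600, 200), (900, 120)]
print(place(holes, 110, "first"))  # 2: first hole of size >= 110
print(place(holes, 110, "best"))   # 3: 120 is the tightest fit
print(place(holes, 110, "worst"))  # 2: 200 is the largest hole
```

Best-fit leaves the smallest leftover fragment per allocation, while worst-fit tries to keep leftover fragments usable; first-fit and next-fit trade placement quality for speed.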
Swapping
• Two general approaches to memory management
can be used
– Swapping: consists of bringing in each process in its
entirety, running it for a while, then putting it back on
the disk.
– Virtual memory: allows programs to run even when
they are only partially in main memory
– Concerns:
• how much memory should be allocated for a process
when it is created or swapped in.
– If processes are created with a fixed size that never
changes, then the allocation is simple:
• the operating system allocates exactly what is needed, no
more and no less
– However, a problem occurs whenever a process tries to
grow
Swapping
• When a process grows:
– If a hole is adjacent to the process, it can be
allocated and the process allowed to grow into the
hole
– If the process is adjacent to another process, either
• move the process to a hole in memory large
enough for it, or
• swap out one or more processes to create a
large enough hole
– If it is expected that most processes will grow as
they run, it is probably a good idea to allocate a
little extra memory whenever a process is swapped
in or moved
– when swapping processes to disk, only the
memory actually in use should be swapped
Swapping
Reasons for swapping in and out
• Time quantum expired
– When a quantum expires, the memory
manager will start to swap out the process that
just finished and to swap another process into
the memory space that has been freed
• Higher priority process arrives
– If a higher-priority process arrives and wants
service, the memory manager can swap out the
lower-priority process and then load and
execute the higher-priority process
Swapping
Requirements for swapping
• Address binding
– If binding is done at assembly or load time, then the
process cannot be easily moved to a different location
– a process that is swapped out will be swapped back
into the same memory space it occupied previously
• backing store
– Secondary storage must be large enough to
accommodate copies of all memory images for all
users, and it must provide direct access to these
memory images
• Swapping time
– The memory block with smallest swapping time is
swapped out
• Idle
– A process should be completely idle, to be swapped out
Virtual Memory
• A computer can address more memory than the
amount physically installed on the system; this
extra memory is called virtual memory
• Programs are usually too big to fit in the available
memory
• virtual memory is a mechanism in which the
combined size of the program, data, and stack
may exceed the amount of physical memory
available for it.
• The operating system keeps those parts of the
program currently in use in main memory, and the
rest on the disk
• Most virtual memory systems use a technique
called paging
Paging
• Both fixed-size and variable-size partitions are
inefficient in the use of memory
– Fixed-sized partitions → internal fragmentation
– Variable-sized partitions → external fragmentation
• Main memory is partitioned into equal, relatively
small fixed-size chunks known as frames.
• Each process is also divided into fixed-size
chunks of the same size known as pages
• Pages could be assigned to available chunks of
memory
Paging
• At a given point in time, a list of free frames
is maintained by the operating system
• Does the unavailability of sufficient
contiguous memory space prevent the
operating system from loading a process?
– No – different pages can be stored on different
frames
• Memory addresses used by programs are mapped
onto real memory addresses, called physical addresses
• These program-generated addresses are called
virtual addresses and form the virtual address
space
Paging
• If the processor encounters a virtual address that
is not in main memory, it generates an interrupt
indicating a memory access fault, known as a
page fault.
• The operating system puts the interrupted
process in a blocking state and takes control
• The operating system maintains a page table
which shows a frame location for each page of the
process.
• Each process will have its own page table
• Consider a computer with 16-bit addressing (0–64K), 32
KB of physical memory and a 4 KB frame size
– 64 KB of virtual address space
– 16 virtual pages
– 8 page frames
Paging
• When the program tries to access address 0,
for example, using the instruction
MOV REG,0
• virtual address 0 falls in page 0 (0 to 4095),
which according to its mapping is in page frame 2
(physical addresses 8192 to 12287)
• Similarly,
MOV REG,8192 (page 2) → MOV REG,24576 (frame 6)
MOV REG,32780 → page fault → trap
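The translation in this example can be sketched as a table lookup. Only the mappings the example gives (page 0 → frame 2, page 2 → frame 6) are filled in; unmapped pages fault, which is how address 32780 (page 8) traps.

```python
# Sketch of the virtual-to-physical mapping from the example above:
# 16-bit virtual addresses, 4 KB pages, page 0 -> frame 2, page 2 -> frame 6.
# Only the mappings stated in the example are present; others fault.

PAGE_SIZE = 4096
page_table = {0: 2, 2: 6}

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        # in hardware this raises a trap; here we model it as an exception
        raise RuntimeError(f"page fault at virtual address {vaddr}")
    return frame * PAGE_SIZE + offset

print(translate(0))      # 8192:  page 0 -> frame 2
print(translate(8192))   # 24576: page 2 -> frame 6
try:
    translate(32780)     # page 8 is not mapped
except RuntimeError as e:
    print(e)             # page fault at virtual address 32780
```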
Virtual Memory
• A process executes only in main memory,
that memory is referred to as real
memory or physical memory
• A programmer or user perceives a
potentially much larger memory — that
which is allocated on disk — referred to
as virtual memory
• In the previous example a user assumes
64K memory, but actually we have only
32K memory
Page Replacement Algorithm
• When a page fault occurs, the operating system
has to choose a page to remove from memory to
make room for the page that has to be brought in
• Pick a random page to force to leave at each
page fault
– probability of removing heavily used page
• it will probably have to be brought back in quickly
• Other applications of page replacement
algorithms
– Cache memory
– Web server
Page Replacement Algorithm
Optimal (for the future)
• Each page can be labeled with the number
of instructions that will be executed before
that page is first referenced
• When a page fault occurs, the page with
the highest label should be removed
• Problem
– impossible to implement
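Although impossible online, the optimal policy is easy to simulate offline when the whole reference string is known, which is how it is used as a benchmark. A minimal sketch:

```python
# Sketch of the optimal (offline) policy: on a fault with full frames,
# evict the resident page whose next use lies furthest in the future.
# Only usable when the full reference string is known in advance.

def optimal_faults(refs, n_frames):
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(p)
        else:
            # distance to next use; pages never used again sort last
            def next_use(q):
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(p)
    return faults

print(optimal_faults([0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4], 3))  # 7
```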
Page Replacement Algorithm
Not Recently Used (NRU)
• Most computers with virtual memory have two
status bits associated with each page
– R is set whenever the page is referenced (read or
written).
– M is set when the page is written to (i.e., modified)
• The bits are contained in each page table entry
and these bits must be updated on every memory
reference
• When a process is started up, both bits for all of its
pages are set to 0 by the operating system.
• Periodically (e.g., on each clock interrupt), the R bit is
cleared, to distinguish pages that have not been
referenced recently from those that have been.
Page Replacement Algorithm
Not Recently Used (NRU)
• Operating system inspects all the pages and divides them
into four categories based on the current values of their R
and M bits:
– Class 0: not referenced, not modified
– Class 1: not referenced, modified
– Class 2: referenced, not modified
– Class 3: referenced, modified
• The NRU algorithm removes a page at random from the
lowest numbered nonempty class
• Implicit in this algorithm is that it is better to remove a
modified page that has not been referenced in at least
one clock tick than a clean page that is in heavy use
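The class selection above can be sketched directly: each page carries its (R, M) bits, and a victim is picked at random from the lowest-numbered nonempty class. The page names and dictionary layout are illustrative.

```python
# Sketch of NRU's victim selection: class = 2*R + M (0..3 as in the text),
# evict a random page from the lowest-numbered nonempty class.

import random

def nru_victim(pages):
    """pages: dict of name -> (R, M). Returns a page name to evict."""
    def cls(bits):
        r, m = bits
        return 2 * r + m
    lowest = min(cls(b) for b in pages.values())
    candidates = [p for p, b in pages.items() if cls(b) == lowest]
    return random.choice(candidates)

# B and D are class 1 (not referenced, modified): the lowest class present
pages = {"A": (1, 1), "B": (0, 1), "C": (1, 0), "D": (0, 1)}
print(nru_victim(pages))   # either "B" or "D"
```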
Page Replacement Algorithm
Not Recently Used (NRU)
• Advantages
– easy to understand
– moderately efficient to implement
– gives a performance that, while certainly not
optimal, may be adequate
Page Replacement Algorithm
The First-In, First-Out (FIFO)
• The operating system maintains a list of all pages
currently in memory,
– with the page at the head of the list the oldest one and
– the page at the tail the most recent arrival.
• On a page fault, the page at the head is removed
and the new page added to the tail of the list
• Problem
– Not efficient as it may remove frequently used pages
• Advantage
– Less overhead
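FIFO is a few lines with a queue, and the sketch below also demonstrates its known weakness in a different form: for some reference strings, adding frames *increases* the fault count (Belady's anomaly).

```python
# Sketch of FIFO replacement: evict the page resident longest,
# regardless of how heavily it is used.

from collections import deque

def fifo_faults(refs, n_frames):
    frames, faults = deque(), 0
    for p in refs:
        if p in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()       # oldest arrival leaves first
        frames.append(p)
    return faults

# Belady's anomaly: more frames, yet more faults for this string
refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10
```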
Page Replacement Algorithm
The Second Chance Algorithm
• A modification to FIFO that avoids the problem of
throwing out a heavily used page by inspecting the R
bit of the oldest page
• looking for an old page that has not been referenced in
the previous clock interval
• If all the pages have been referenced, second chance
degenerates into pure FIFO
• If it is 0, the page is both old and unused, so it is replaced
immediately
• If the R bit is 1, the bit is cleared, the page is put onto the
end of the list of pages, and its load time is updated as
though it had just arrived in memory
Page Replacement Algorithm
• Suppose that a page fault occurs at time 20.
• The oldest page is A, which arrived at time 0,
when the process started.
• If A has the R bit cleared, it is evicted from
memory, either by being written to the disk or
just abandoned
• On the other hand, if the R bit is set, A is put onto
the end of the list and its “load time” is reset to
the current time (20). The R bit is also cleared
• Problem
– Moving pages around the list
Page Replacement Algorithm
The Clock
• Keep all the page frames on a circular list in the form of a
clock, a hand points to the oldest page
• When a page fault occurs, the page being pointed to by
the hand is inspected.
• If its R bit is 0, the page is evicted, the new page is
inserted into the clock in its place, and the hand is
advanced one position.
• If R is 1, it is cleared and the hand is advanced to the next
page.
• This process is repeated until a page is found with R = 0.
• Not surprisingly, this algorithm is called clock.
• It differs from second chance only in the implementation
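The hand's sweep can be sketched as a loop over a fixed list of frames, which is exactly why the clock avoids second chance's list-shuffling: pages never move, only the hand does. The `[page, R]` frame representation is illustrative.

```python
# Sketch of the clock algorithm: frames form a circular list; the hand
# advances past pages with R=1 (clearing the bit) and evicts the first
# page it finds with R=0.

def clock_evict(frames, hand):
    """frames: list of [page, R] pairs. Returns (evicted_index, new_hand).
    Mutates R bits in place as the hand passes over them."""
    while True:
        page, r = frames[hand]
        if r == 0:
            return hand, (hand + 1) % len(frames)
        frames[hand][1] = 0        # second chance: clear R and move on
        hand = (hand + 1) % len(frames)

frames = [["A", 1], ["B", 1], ["C", 0], ["D", 1]]
idx, hand = clock_evict(frames, 0)
print(frames[idx][0])  # "C": first page with R=0; A and B had R cleared
print(hand)            # 3
```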
Page Replacement Algorithm
The Least Recently Used (LRU)
• An algorithm based on the observation that
pages that have been heavily used in the last few
instructions will probably be heavily used again in
the next few
• when a page fault occurs, throw out the page that
has been unused for the longest time
• Necessary to maintain a linked list of all pages in
memory, with the most recently used page at the
front and the least recently used page at the rear
• Implementations:
– Maintain a counter that is incremented after each
instruction; on every memory reference, store its value
in the referenced page's table entry, and evict the
page with the lowest stored value
– For a machine with n page frames maintain a matrix of
n × n bits
Page Replacement Algorithm
• Whenever page frame k is referenced, the
hardware first sets all the bits of row k to 1, then
sets all the bits of column k to 0.
• At any instant, the row whose binary value is
lowest is the least recently used, the row whose
value is next lowest is next least recently used,
and so forth
• The workings of this algorithm are given in the
following figure for four page frames and page
references in the order
0, 1, 2, 3, 2, 1, 0, 3, 2, 3
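The matrix mechanism in the figure can be reproduced in a few lines: set row k to ones, then column k to zeros, and read each row as a binary number; the smallest row value marks the LRU frame.

```python
# Sketch of the n x n matrix LRU hardware: on a reference to frame k,
# set row k to all 1s, then column k to all 0s. The row with the
# smallest binary value belongs to the least recently used frame.

def lru_matrix(n, refs):
    m = [[0] * n for _ in range(n)]
    for k in refs:
        for j in range(n):
            m[k][j] = 1            # row k := all ones
        for i in range(n):
            m[i][k] = 0            # column k := all zeros
    return m

def lru_frame(m):
    values = [int("".join(map(str, row)), 2) for row in m]
    return values.index(min(values))

# the reference string from the figure: 0, 1, 2, 3, 2, 1, 0, 3, 2, 3
m = lru_matrix(4, [0, 1, 2, 3, 2, 1, 0, 3, 2, 3])
print(lru_frame(m))   # 1: frame 1 was referenced least recently
```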
Page Replacement Algorithm
The Working Set
• Pages are loaded when requested by the
process, not in advance.
• This strategy is called demand paging
because pages are loaded only on demand
• All pages of a program may not be used
• The set of pages that a process is currently
using is called its working set
• A program causing page faults every few
instructions is said to be thrashing
Page Replacement Algorithm
The Working Set
• Many paging systems try to keep track of each process's
working set and make sure that it is in memory before letting
the process run
• This approach is called the working set model and reduces
the page fault rate.
• Loading the pages before letting processes run is also called
prepaging
• Pages which are not in the working set are removed during
page replacement
• Two attributes are considered:
– Reference bit, R
– Age
• The working set of a process is the set of pages it has
referenced during the past τ seconds of virtual time
– VT - amount of CPU time a process has actually used since it started
Page Replacement Algorithm
The Working Set
• Instead of defining the working set as those pages used
during the previous 10 million memory references, we can
define it as the set of pages used during the past 100 msec of
execution time
• Each entry contains (at least) two items of information: the
approximate time the page was last used and the R
(Referenced) bit
– age = current virtual time – time of last use
• R bit is examined, if it is 1, the current virtual time is written
into the Time of last use field in the page table
–  It’s clearly in the working set and is not a candidate for removal
• If R is 0, the page has not been referenced during the current
clock tick and may be a candidate for removal
– compare its age with τ, and remove the page if its age is
greater than τ
• if R is 0 but the age is less than or equal to τ, the page is still
in the working set → remove the oldest page
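The per-entry test just described can be sketched for a single scan over a page table. The entry layout and the value of τ (here `TAU`) are assumptions for the example.

```python
# Sketch of the working-set test above for one scan of the page table:
# R set -> page is in the working set, refresh time-of-last-use;
# R clear and age > TAU -> candidate for removal.
# The entry layout and TAU value are illustrative.

TAU = 100   # working-set window in virtual-time units (assumed)

def ws_scan(entries, current_vt):
    """entries: dicts with an 'r' bit and a 'last_use' virtual time.
    Mutates entries in place; returns the eviction candidates."""
    candidates = []
    for e in entries:
        if e["r"]:
            e["last_use"] = current_vt   # in the working set: refresh
            e["r"] = 0                   # cleared for the next interval
        elif current_vt - e["last_use"] > TAU:
            candidates.append(e)         # aged out of the working set
    return candidates

entries = [{"r": 1, "last_use": 10},
           {"r": 0, "last_use": 50},
           {"r": 0, "last_use": 300}]
out = ws_scan(entries, 400)
print(len(out))                # 1: only the page last used at vt 50
print(entries[0]["last_use"])  # 400: refreshed because R was set
```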
Page Replacement Algorithm
The WSClock Page Replacement Algorithm
• An improved algorithm, that is based on the clock
algorithm but also uses the working set information
• The data structure needed is a circular list of page
frames, as in the clock algorithm
• Each entry contains the Time of last use field from
the basic working set algorithm, as well as the R bit
and the M bit
• As with the clock algorithm, at each page fault the
page pointed to by the hand is examined first.
– If the R bit is set to 1, the page has been used during the
current tick so it is not an ideal candidate to remove.
– The R bit is then set to 0, the hand advanced to the next
page, and the algorithm repeated for that page.
Page Replacement Algorithm
A case when R=1
Segmentation
• The virtual memory is one-dimensional because the virtual
addresses go from 0 to some maximum address, one address
after another.
• For many problems, having two or more separate virtual
address spaces may be much better than having only one
• These completely independent address spaces are called
segments
• A user program can be subdivided using segmentation, in
which the program and its associated data are divided into a
number of segments
• Different segments may have different lengths
• A logical address using segmentation consists of two parts, in
this case a segment number and an offset
Segmentation
• Segmentation is similar to dynamic
partitioning in the use of unequal-size
segments
– with segmentation a program may occupy more
than one partition, and these partitions need not
be contiguous
– With dynamic partitioning, a program occupies
contiguous memory partitions
– Segmentation eliminates internal fragmentation
– Both segmentation and dynamic partitioning suffer
from external fragmentation
Segmentation
• Whereas paging is invisible to the
programmer, segmentation is usually visible
and is provided as a convenience for
organizing programs and data
– similar to paging, a simple segmentation scheme
would make use of a segment table for each
process and a list of free blocks of main memory
– Each segment table entry would have to give the
• starting address in main memory of the corresponding
segment
• length of the segment, to assure that invalid
addresses are not used
Segmentation
• Consider an address of n + m bits, where the
leftmost n bits are the segment number and the
rightmost m bits are the offset; in the previous
figure, n = 4 and m = 12.
• Thus the maximum segment size is 2^12 = 4096.
• The following steps are needed for address
translation:
– Extract the segment number as the leftmost n bits of
the logical address.
– Use the segment number as an index into the process
segment table to find the starting physical address of
the segment.
– Compare the offset, expressed in the rightmost m bits,
to the length of the segment. If the offset is greater than
or equal to the length, the address is invalid.
– The desired physical address is the sum of the starting
physical address of the segment plus the offset.
Segmentation
• In our example, we have the logical address
0001001011110000, which is segment number 1,
offset 752.
• Suppose that this segment is residing in main
memory starting at physical address
0010000000100000.
– Then the physical address is 0010000000100000
+ 001011110000 = 0010001100010000
• To summarize, with simple segmentation, a
process is divided into a number of segments that
need not be of equal size.
• When a process is brought in, all of its segments
are loaded into available regions of memory, and a
segment table is set up.
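The four translation steps, applied to the worked example (n = 4 segment bits, m = 12 offset bits), can be sketched as follows. The segment-table layout and the length chosen for segment 1 are assumptions for the example.

```python
# Sketch of the segmented address translation steps above, using the
# example's split: 16-bit logical address, n=4 segment bits, m=12 offset bits.
# The segment lengths in the table are illustrative.

SEG_BITS, OFF_BITS = 4, 12

def seg_translate(laddr, seg_table):
    """seg_table: list of (base, length). Raises on an invalid offset."""
    seg = laddr >> OFF_BITS                  # step 1: leftmost n bits
    offset = laddr & ((1 << OFF_BITS) - 1)   # rightmost m bits
    base, length = seg_table[seg]            # step 2: index segment table
    if offset >= length:                     # step 3: bounds check
        raise RuntimeError("invalid address: offset beyond segment length")
    return base + offset                     # step 4: base + offset

# segment 1 resides at base 0010000000100000 (8224); length 4096 assumed
table = [(0, 1024), (0b0010000000100000, 4096)]
paddr = seg_translate(0b0001001011110000, table)  # segment 1, offset 752
print(format(paddr, "016b"))   # 0010001100010000, i.e. 8224 + 752
```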
Segmentation
• Advantages:
– simplifying the handling of data structures that
are growing or shrinking
– linking up of procedures compiled separately on
different segments is simplified
– changing one procedure’s size doesn’t affect the
starting address of others
– facilitates sharing procedures or data between
several processes
• E.g. shared library
– different segments can have different kinds of
protection.