Chapter-8 (Memory Management)
Operating Systems
Memory Management
Reading: Chapter 8 from the textbook
Instructor:
Muhammad Mustafa Khattak
Assistant Professor,
Department of Computer Sciences,
CUI, Islamabad Campus
References / Acknowledgements
Chapter 8
Main Memory
Operating System Concepts – 9th Edition
Silberschatz, Galvin and Gagne ©2013
Chapter 7
Memory Management
Operating Systems: Internals and Design Principles
Seventh Edition William Stallings
Address binding
Address binding of instructions and
data to memory addresses can happen
at three different stages
1. Compile time: If memory location known
a priori, absolute code can be generated;
must recompile code if starting location
changes
2. Load time: Must generate relocatable
code if memory location is not known at
compile time
3. Execution time: Binding delayed until run time if the
process can be moved during its execution
from one memory segment to another
Need hardware support for address
maps (e.g., base and limit registers)
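The base-and-limit hardware support mentioned above can be sketched as follows, using the register values from Silberschatz's worked example (base 300040, limit 120900); the values are illustrative, not a real machine's.

```python
# Sketch of execution-time address mapping with base and limit
# registers (register values taken from the textbook's example).
BASE = 300040    # relocation (base) register
LIMIT = 120900   # limit register: size of the logical address space

def translate(logical):
    # Every address the process generates is first checked against LIMIT;
    # an out-of-range address traps to the OS as an addressing error.
    if logical < 0 or logical >= LIMIT:
        raise MemoryError("addressing error: trap to the OS")
    return BASE + logical

print(translate(0))       # 300040: first byte of the process
print(translate(120899))  # 420939: last legal byte
```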
Logical vs. Physical Address Space
• The concept of a logical address space that is bound to a
separate physical address space is central to proper
memory management
• Logical address – generated by the CPU; also referred to as
virtual address
• Physical address – address seen by the memory unit
• Logical and physical addresses are the same in compile-
time and load-time address-binding schemes; logical
(virtual) and physical addresses differ in execution-time
address-binding scheme
• Logical address space is the set of all logical addresses
generated by a program
• Physical address space is the set of all physical addresses
corresponding to these logical addresses
Memory-Management Unit (MMU)
• Hardware device that maps virtual addresses to
physical addresses at run time
• Many methods possible, covered in the rest
of this chapter
• To start, consider simple scheme where the
value in the relocation register is added to
every address generated by a user process
at the time it is sent to memory
• Base register now called relocation register
• The user program deals with logical
addresses; it never sees the real physical
addresses
• Execution-time binding occurs when a reference
is made to a location in memory
• Logical addresses are bound to physical addresses at that instant
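The simple relocation-register scheme can be summarized in a few lines, using the textbook's figures (relocation register 14000, so logical address 346 maps to physical address 14346):

```python
# Minimal sketch of the MMU's relocation-register scheme: the user
# program only ever sees logical addresses; the MMU adds the register
# value at the moment the address is sent to memory.
RELOCATION = 14000   # value from the textbook's example

def mmu(logical_addr):
    return logical_addr + RELOCATION

print(mmu(346))  # 14346
```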
Dynamic Loading
Routine is not loaded until it is called
Better memory-space utilization; unused routine is never
loaded
All routines kept on disk in relocatable load format
Useful when large amounts of code are needed to handle
infrequently occurring cases
Dynamic Linking
• Static linking – system libraries and program code
combined by the loader into the binary program image
• Dynamic linking – linking postponed until execution time
• Small piece of code, stub, used to locate the appropriate
memory-resident library routine
• Stub replaces itself with the address of the routine, and
executes the routine
• Operating system checks if routine is in the process's
memory address space
• If not in address space, add to address space
• Dynamic linking is particularly useful for libraries
Swapping
• A process can be swapped temporarily out of memory to
a backing store, and then brought back into memory for
continued execution
• Total physical memory space of processes can exceed physical
memory
• Backing store – fast disk large enough to accommodate
copies of all memory images for all users; must provide
direct access to these memory images
• Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped
out so higher-priority process can be loaded and executed
• Major part of swap time is transfer time; total transfer
time is directly proportional to the amount of memory
swapped
• System maintains a ready queue of ready-to-run
processes which have memory images on disk
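Since transfer time dominates swap time and is proportional to the memory moved, a back-of-envelope estimate is easy; the figures below (100 MB image, 50 MB/s backing store) are illustrative:

```python
# Back-of-envelope swap-time estimate: transfer time is directly
# proportional to the amount of memory swapped (numbers illustrative).
process_mb = 100          # size of the process memory image
transfer_rate_mb_s = 50   # backing-store transfer rate

one_way_ms = process_mb / transfer_rate_mb_s * 1000
# A full swap is one transfer out plus one transfer in.
print(one_way_ms)      # 2000.0 ms to swap out
print(2 * one_way_ms)  # 4000.0 ms for swap out + swap in
```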
Schematic View of Swapping
Memory Management Techniques
• Memory management brings processes into main memory for
execution by the processor
1. Fixed Partitioning (contiguous allocation)
used in several variations in some now-obsolete operating
systems
does not involve virtual memory
2. Simple paging and simple segmentation
not used by themselves.
However, it will clarify the discussion of virtual memory if we
look first at these two techniques in the absence of virtual
memory considerations.
3. Virtual memory
Used in almost every modern operating system
based on segmentation and paging
Fixed Partitioning
• In most schemes for memory management,
we can assume that the OS occupies some
fixed portion of main memory and that the
rest of main memory is available for use by
multiple processes. The simplest scheme
for managing this available memory is to
partition it into regions with fixed
boundaries.
Equal-size partitions
• Any process whose size is less than or equal to the partition
size can be loaded into an available partition
• The operating system can swap out a process if all partitions
are full and no process is in the Ready or Running state
• There are two difficulties with the use of equal-size fixed
partitions:
• A program may be too big to fit in a partition
• program needs to be designed with the use of overlays
• Main memory utilization is inefficient
• any program, regardless of size, occupies an entire partition
• internal fragmentation
• wasted space due to the block of data loaded being
smaller than the partition
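Internal fragmentation is simple to quantify: with equal-size partitions, every loaded program wastes the difference between the partition size and its own size. A small illustrative calculation (partition and process sizes assumed):

```python
# Illustrative internal-fragmentation count for equal-size fixed
# partitions: each process occupies a whole partition regardless of size.
PARTITION_SIZE = 8 * 1024                    # 8K partitions (assumed)
process_sizes = [5 * 1024, 2 * 1024, 7 * 1024]

wasted = sum(PARTITION_SIZE - size for size in process_sizes)
print(wasted)  # 10240 bytes lost to internal fragmentation
```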
Unequal Size Partitions
• Both of these problems can be
lessened, though not solved, by using
unequal-size partitions
• in Stallings' example (Figure 7.2b), programs up
to 16M can be accommodated without overlays
• partitions smaller than 8M allow
smaller programs to be
accommodated with less internal
fragmentation
Memory Assignment - PLACEMENT ALGORITHM
• With equal-size partitions, the placement of processes
in memory is trivial.
• As long as there is any available partition, a process
can be loaded into that partition.
• Because all partitions are of equal size, it does not
matter which partition is used.
• If all partitions are occupied with processes that are
not ready to run, then one of these processes must
be swapped out to make room for a new process.
• Which one to swap out is a scheduling decision.
• With unequal-size partitions, there are two possible
ways to assign processes to partitions.
• The simplest way is to assign each process to the
smallest partition within which it will fit. In this
case, a scheduling queue is needed for each partition,
to hold swapped-out processes destined for that
partition (Figure 7.3a).
Memory Assignment - PLACEMENT ALGORITHM
• The advantage of this approach is that processes are always
assigned in such a way as to minimize wasted memory within
a partition (internal fragmentation).
• Although this technique seems optimum from the point of
view of an individual partition, it is not optimum from the
point of view of the system as a whole.
• Consider a case in which there are no processes with a size
between 12 and 16M at a certain point in time.
• In that case, the 16M partition will remain unused, even
though some smaller process could have been assigned to it.
• Thus, a preferable approach would be to employ a single
queue for all processes ( Figure 7.3b ). When it is time to load
a process into main memory, the smallest available partition
that will hold the process is selected.
• If all partitions are occupied, then a swapping decision must
be made.
• Preference might be given to swapping out of the smallest
partition that will hold the incoming process. It is also possible
to consider other factors, such as priority, and a preference for
swapping out blocked processes versus ready processes.
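The single-queue rule described above can be sketched as: on arrival, place the process in the smallest *available* partition that can hold it. The partition sizes follow Stallings' unequal-size example (Figure 7.2b); the function is a sketch, not the book's code.

```python
# Single-queue placement for unequal-size fixed partitions: choose the
# smallest free partition that fits (sizes from Stallings' example).
partitions = [2, 4, 6, 8, 8, 12, 16]      # partition sizes in MB
in_use = [False] * len(partitions)

def place(process_mb):
    # Candidates: free partitions at least as large as the process
    candidates = [i for i, size in enumerate(partitions)
                  if not in_use[i] and size >= process_mb]
    if not candidates:
        return None  # all suitable partitions occupied: swapping decision
    best = min(candidates, key=lambda i: partitions[i])
    in_use[best] = True
    return best

print(place(5))   # 2: the 6M partition is the smallest that fits
print(place(5))   # 3: next smallest that fits is an 8M partition
```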
Disadvantages of unequal-size partitions
• The use of unequal-size partitions provides a
degree of flexibility to fixed partitioning. In
addition, it can be said that fixed-partitioning
schemes are relatively simple and require minimal
OS software and processing overhead.
• However, there are disadvantages:
1. The number of partitions specified at system
generation time limits the number of active (not
suspended) processes in the system.
2. Small jobs will not utilize partition space
efficiently
Dynamic Partitioning
• To overcome some of the difficulties with fixed
partitioning, an approach known as dynamic
partitioning was developed.
• With dynamic partitioning, the partitions are
of variable length and number.
• When a process is brought into main memory,
it is allocated exactly as much memory as it
requires and no more.
Effect of Dynamic Partitioning
• Example: using 64 Mbytes of main memory, Figure 7.4
• Initially, main memory is empty, except for the OS (a).
• The first three processes are loaded in, starting where the
operating system ends and occupying just enough space for each
process (b, c, d).
• This leaves a “hole” at the end of memory that is too small for a
fourth process.
• At some point, none of the processes in memory is ready. The
operating system swaps out process 2 (e), which leaves sufficient
room to load a new process, process 4 (f).
• Because process 4 is smaller than process 2, another small hole is
created.
• Later, a point is reached at which none of the processes in main
memory is ready, but process 2, in the Ready-Suspend state, is
available.
• Because there is insufficient room in memory for process 2, the
operating system swaps process 1 out (g) and swaps process 2
back in (h).
• As this example shows, this method starts out well, but
eventually it leads to a situation in which there are a lot of small
holes in memory.
• As time goes on, memory becomes more and more fragmented,
and memory utilization declines.
• This phenomenon is referred to as external fragmentation,
indicating that the memory that is external to all partitions
becomes increasingly fragmented.
Dynamic Partitioning
External Fragmentation
• memory becomes more and more fragmented
• memory utilization declines
Compaction
• technique for overcoming external fragmentation
• OS shifts processes so that they are contiguous
• free memory is together in one block
• time consuming and wastes CPU time
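A minimal sketch of what compaction does, modeling memory as (start, size, owner) blocks with None marking a hole; the layout values are illustrative:

```python
# Compaction sketch: shift every occupied block toward low memory so
# the free space coalesces into a single hole at the end.
blocks = [(0, 8, "OS"), (8, 6, None), (14, 14, "P1"),
          (28, 4, None), (32, 18, "P2")]

def compact(blocks):
    compacted, cursor = [], 0
    for start, size, owner in blocks:
        if owner is not None:              # slide occupied blocks down
            compacted.append((cursor, size, owner))
            cursor += size
    total = blocks[-1][0] + blocks[-1][1]  # total memory size
    compacted.append((cursor, total - cursor, None))  # one big hole
    return compacted

print(compact(blocks))
# Two small holes (6 and 4 units) become one 10-unit hole at the end.
```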
Placement Algorithms
• Because memory compaction is time consuming, the OS designer must be
clever in deciding how to assign processes to memory (how to plug the
holes).
• When it is time to load or swap a process into main memory, and if there is
more than one free block of memory of sufficient size, then the operating
system must decide which free block to allocate.
• Three placement algorithms are: best-fit, first-fit, and next-fit.
• All, of course, are limited to choosing among free blocks of main memory
that are equal to or larger than the process to be brought in.
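The three algorithms can be sketched over a list of free-block sizes; each returns the index of the chosen block, or None if nothing fits. The block sizes are illustrative.

```python
# Sketches of the three placement algorithms over a free-block list.
free_blocks = [10, 4, 20, 18, 7, 9, 12, 15]  # free-block sizes
last_index = 0   # next-fit remembers where the previous search ended

def first_fit(need):
    # Scan from the beginning; take the first block that is big enough.
    for i, size in enumerate(free_blocks):
        if size >= need:
            return i
    return None

def best_fit(need):
    # Choose the block closest in size to the request.
    fits = [i for i, size in enumerate(free_blocks) if size >= need]
    return min(fits, key=lambda i: free_blocks[i]) if fits else None

def next_fit(need):
    # Like first-fit, but resume scanning from the last placement.
    global last_index
    n = len(free_blocks)
    for k in range(n):
        i = (last_index + k) % n
        if free_blocks[i] >= need:
            last_index = i
            return i
    return None

print(first_fit(12))  # 2: first block large enough (size 20)
print(best_fit(12))   # 6: tightest block (size 12)
```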
Addresses
• Relative address – expressed as a location relative to some
known point, usually the beginning of the program
• Physical (or absolute) address – an actual location in main memory
Paging
• A page table contains one entry for each page of the process, so that
the table is easily indexed by the page number (starting at page 0 ).
• Each page table entry contains the number of the frame in main
memory, if any, that holds the corresponding page.
• In addition, the OS maintains a single free-frame list of all the frames
in main memory that are currently unoccupied/available for pages.
• Thus we see that simple paging, as described here, is similar to fixed
partitioning.
• The differences are that, with paging, the partitions are rather small;
a program may occupy more than one partition; and these partitions
need not be contiguous
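The bookkeeping above can be sketched as a free-frame list plus one page table per process; frames need not be contiguous, so the loader can simply take any free frames (all frame numbers below are illustrative):

```python
# Sketch of the OS's paging bookkeeping: a single free-frame list and
# a per-process page table mapping page number -> frame number.
free_frames = [3, 4, 7, 8, 9]   # frames currently unoccupied

def load_process(num_pages):
    # Frames need not be contiguous: take the first num_pages free ones.
    if num_pages > len(free_frames):
        return None              # not enough frames for the process
    return {page: free_frames.pop(0) for page in range(num_pages)}

print(load_process(3))  # {0: 3, 1: 4, 2: 7}
```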
Logical Addresses
• To make this paging scheme convenient, let us dictate that the page size, hence the
frame size, must be a power of 2. With the use of a page size that is a power of 2, it is
easy to demonstrate that the relative address, which is defined with reference to the
origin of the program, and the logical address, expressed as a page number and offset,
are the same.
• An example is shown in Figure 7.11. 16-bit addresses are used, and the page size is 1K =
1,024 bytes. The relative address 1502, in binary form, is 0000010111011110. With a
page size of 1K, an offset field of 10 bits is needed, leaving 6 bits for the page number.
Thus a program can consist of a maximum of 2^6 = 64 pages of 1K bytes each.
• As Figure 7.11b shows, relative address 1502 corresponds to an offset of 478
(0111011110) on page 1 (000001), which yields the same 16-bit number,
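The worked example is easy to verify with power-of-2 arithmetic: a 1K page size means a 10-bit offset, and splitting 1502 on that boundary gives page 1, offset 478.

```python
# Verifying the Figure 7.11 example: with a 1K (2**10) page size,
# relative address 1502 splits into page number 1 and offset 478.
PAGE_SIZE = 1024                 # 2**10, so a 10-bit offset field
addr = 1502

page = addr // PAGE_SIZE         # equivalently addr >> 10
offset = addr % PAGE_SIZE        # equivalently addr & 0x3FF
print(page, offset)              # 1 478
# Reassembling page and offset yields the same 16-bit address:
print(page * PAGE_SIZE + offset == addr)  # True
```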
Logical Addresses
• The consequences of using a page size that is a power of 2 are
twofold.
1. First, the logical addressing scheme is transparent to the programmer,
the assembler, and the linker. Each logical address (page number, offset)
of a program is identical to its relative address.
2. Second, it is a relatively easy matter to implement a function in
hardware to perform dynamic address translation at run time.
• Consider an address of n + m bits, where the leftmost n bits are the
page number and the rightmost m bits are the offset. In our
example ( Figure 7.11b ), n = 6 and m =10. The following steps are
needed for address translation:
1. Extract the page number as the leftmost n bits of the logical address.
2. Use the page number as an index into the process page table to find the
frame number, k .
3. The starting physical address of the frame is k × 2^m.
4. The physical address of the referenced byte is that number plus
the offset. This physical address need not be calculated; it is easily
constructed by appending the frame number to the offset.
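The translation steps above can be sketched as a small function; the page-table contents (which frames hold which pages) are assumed for illustration, with n = 6 page-number bits and m = 10 offset bits as in the example.

```python
# The translation steps as code: extract the page number, index the
# page table for the frame number, then append the frame to the offset.
M = 10                              # offset bits; page size is 2**10
page_table = {0: 5, 1: 6, 2: 11}    # page -> frame (assumed values)

def translate(logical):
    page = logical >> M                  # step 1: leftmost n bits
    frame = page_table[page]             # step 2: index the page table
    offset = logical & ((1 << M) - 1)    # rightmost m bits
    return (frame << M) | offset         # steps 3-4: frame * 2**m + offset

print(translate(1502))  # page 1 -> frame 6: 6*1024 + 478 = 6622
```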
Logical-to-Physical Address Translation - Paging
Table 7.2: Memory Management Techniques
Security Issues
If a process has not declared a portion
of its memory to be sharable, then no
other process should have access to
the contents of that portion of
memory