Chapter-8 (Memory Management)

The document discusses different memory management techniques used in operating systems including fixed partitioning, dynamic loading, dynamic linking, swapping, and dynamic partitioning. Fixed partitioning allocates fixed-size partitions of memory to processes. Dynamic loading loads routines only when called. Dynamic linking postpones linking until execution. Swapping temporarily moves processes out of memory to disk. Dynamic partitioning allocates memory to processes based on their exact size requirements.


CSC-322:

Operating Systems

Week 5: Slides only

Memory Management
Reading: Chapter 8 from the text book

Instructor:
Muhammad Mustafa Khattak
Assistant Professor,
Department of Computer Sciences,
CUI, Islamabad Campus
References/ Acknowledgement

Chapter 8
Main Memory
Operating System Concepts – 9th Edition
Silberschatz, Galvin and Gagne ©2013

Chapter 7
Memory Management
Operating Systems: Internals and Design Principles
Seventh Edition William Stallings
Address binding
Address binding of instructions and
data to memory addresses can happen
at three different stages
1. Compile time: If memory location known
a priori, absolute code can be generated;
must recompile code if starting location
changes
2. Load time: Must generate relocatable
code if memory location is not known at
compile time
3. Execution time: Binding delayed until run time if the
process can be moved during its execution
from one memory segment to another
Need hardware support for address
maps (e.g., base and limit registers)
Logical vs. Physical Address Space
• The concept of a logical address space that is bound to a
separate physical address space is central to proper
memory management
• Logical address – generated by the CPU; also referred to as
virtual address
• Physical address – address seen by the memory unit
• Logical and physical addresses are the same in compile-
time and load-time address-binding schemes; logical
(virtual) and physical addresses differ in execution-time
address-binding scheme
• Logical address space is the set of all logical addresses
generated by a program
• Physical address space is the set of all physical addresses
corresponding to these logical addresses
Memory-Management Unit (MMU)
• Hardware device that at run time maps
virtual to physical address
• Many methods possible, covered in the rest
of this chapter
• To start, consider simple scheme where the
value in the relocation register is added to
every address generated by a user process
at the time it is sent to memory
• Base register now called relocation register
• The user program deals with logical
addresses; it never sees the real physical
addresses
• Execution-time binding occurs when reference
is made to location in memory
• Logical address bound to physical addresses
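The relocation-register scheme above can be sketched in a few lines. The register values here are hypothetical (chosen to mirror the classic textbook illustration of relocation register 14000), and the limit-register check from the earlier base-and-limit discussion is included:

```python
RELOCATION = 14000  # hypothetical relocation (base) register: partition start
LIMIT = 12000       # hypothetical limit register: size of logical address space

def mmu_translate(logical_addr):
    """Map a logical address to a physical one, trapping on out-of-range."""
    if not (0 <= logical_addr < LIMIT):
        raise MemoryError("trap: logical address out of range")
    return logical_addr + RELOCATION   # add relocation register to every address

print(mmu_translate(346))  # logical 346 relocated to physical 14346
```

The user program only ever computes with values like 346; the addition to 14000 happens in the MMU, so the real physical address is never visible to the process.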
Dynamic Loading
 Routine is not loaded until it is called
 Better memory-space utilization; unused routine is never
loaded
 All routines kept on disk in relocatable load format
 Useful when large amounts of code are needed to handle
infrequently occurring cases
Dynamic Linking
• Static linking – system libraries and program code
combined by the loader into the binary program image
• Dynamic linking – linking postponed until execution time
• Small piece of code, stub, used to locate the appropriate
memory-resident library routine
• Stub replaces itself with the address of the routine, and
executes the routine
• Operating system checks if routine is in the process's
memory address space
• If not in address space, add to address space
• Dynamic linking is particularly useful for libraries
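The self-replacing stub can be modeled in miniature. Everything here is hypothetical (`install_stub`, `fake_loader`, and the routine table are stand-ins, not a real linker API); the point is only the mechanism: the first call resolves the routine and overwrites the table entry, so later calls bypass the stub entirely:

```python
routine_table = {}

def install_stub(name, loader):
    """Register a stub that resolves the real routine on first call."""
    def stub(*args):
        routine_table[name] = loader(name)   # "stub replaces itself"
        return routine_table[name](*args)    # then executes the routine
    routine_table[name] = stub

# Hypothetical loader standing in for locating a memory-resident library routine.
def fake_loader(name):
    return lambda x: x * 2

install_stub("double", fake_loader)
first = routine_table["double"](21)   # resolves through the stub
second = routine_table["double"](21)  # hits the resolved routine directly
```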
Swapping
• A process can be swapped temporarily out of memory to
a backing store, and then brought back into memory for
continued execution
• Total physical memory space of processes can exceed physical
memory
• Backing store – fast disk large enough to accommodate
copies of all memory images for all users; must provide
direct access to these memory images
• Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped
out so higher-priority process can be loaded and executed
• Major part of swap time is transfer time; total transfer
time is directly proportional to the amount of memory
swapped
• System maintains a ready queue of ready-to-run
processes which have memory images on disk
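Since transfer time dominates and is proportional to the memory swapped, the cost is easy to estimate. The numbers below are hypothetical (a 100 MB image and a 50 MB/s backing store), purely to illustrate the proportionality:

```python
def swap_transfer_time(image_bytes, rate_bytes_per_sec):
    """Transfer time is directly proportional to the amount swapped."""
    return image_bytes / rate_bytes_per_sec

MB = 1024 * 1024
one_way = swap_transfer_time(100 * MB, 50 * MB)   # 2.0 seconds each way
round_trip = 2 * one_way                          # swap out + swap in: 4.0 s
```

Doubling the image size doubles both figures, which is why systems swap only what is actually in use.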
Schematic View of Swapping
Memory management Techniques
• Memory management brings processes into main memory for
execution by the processor
1. Fixed Partitioning (contiguous allocation)
 used in several variations in some now-obsolete operating
systems
 does not involve virtual memory
2. Simple paging and simple segmentation
 not used by themselves.
 However, it will clarify the discussion of virtual memory if we
look first at these two techniques in the absence of virtual
memory considerations.
3. Virtual memory
 Used in almost every modern operating system
 based on segmentation and paging
Fixed Partitioning
• In most schemes for memory management,
we can assume that the OS occupies some
fixed portion of main memory and that the
rest of main memory is available for use by
multiple processes. The simplest scheme
for managing this available memory is to
partition it into regions with fixed
boundaries.
Equal-size partitions
• Any process whose size is less than or equal to the partition
size can be loaded into an available partition
• The operating system can swap out a process if all partitions
are full and no process is in the Ready or Running state
• There are two difficulties with the use of equal-size fixed
partitions:
• A program may be too big to fit in a partition
• program needs to be designed with the use of overlays
• Main memory utilization is inefficient
• any program, regardless of size, occupies an entire partition
• internal fragmentation
• wasted space due to the block of data loaded being
smaller than the partition
Unequal Size Partitions
• Both of these problems can be
lessened, though not solved, by using
unequal-size partitions
• programs up to 16M can be
accommodated without overlays
• partitions smaller than 8M allow
smaller programs to be
accommodated with less internal
fragmentation
Memory Assignment - PLACEMENT ALGORITHM
• With equal-size partitions, the placement of processes
in memory is trivial.
• As long as there is any available partition, a process
can be loaded into that partition.
• Because all partitions are of equal size, it does not
matter which partition is used.
• If all partitions are occupied with processes that are
not ready to run, then one of these processes must
be swapped out to make room for a new process.
• Which one to swap out is a scheduling decision.
• With unequal-size partitions, there are two possible
ways to assign processes to partitions.
• The simplest way is to assign each process to the
smallest partition within which it will fit. In this
case, a scheduling queue is needed for each partition,
to hold swapped-out processes destined for that
partition ( Figure 7.3a ).
Memory Assignment - PLACEMENT ALGORITHM
• The advantage of this approach is that processes are always
assigned in such a way as to minimize wasted memory within
a partition (internal fragmentation).
• Although this technique seems optimum from the point of
view of an individual partition, it is not optimum from the
point of view of the system as a whole.
• Consider a case in which there are no processes with a size
between 12 and 16M at a certain point in time.
• In that case, the 16M partition will remain unused, even
though some smaller process could have been assigned to it.
• Thus, a preferable approach would be to employ a single
queue for all processes ( Figure 7.3b ). When it is time to load
a process into main memory, the smallest available partition
that will hold the process is selected.
• If all partitions are occupied, then a swapping decision must
be made.
• Preference might be given to swapping out of the smallest
partition that will hold the incoming process. It is also possible
to consider other factors, such as priority, and a preference for
swapping out blocked processes versus ready processes.
Disadvantages of unequal-size partitions
• The use of unequal-size partitions provides a
degree of flexibility to fixed partitioning. In
addition, it can be said that fixed-partitioning
schemes are relatively simple and require minimal
OS software and processing overhead.
• However, there are disadvantages:
1. The number of partitions specified at system
generation time limits the number of active (not
suspended) processes in the system.
2. Small jobs will not utilize partition space
efficiently
Dynamic Partitioning
• To overcome some of the difficulties with fixed
partitioning, an approach known as dynamic
partitioning was developed.
• With dynamic partitioning, the partitions are
of variable length and number.
• When a process is brought into main memory,
it is allocated exactly as much memory as it
requires and no more.
Effect of Dynamic Partitioning
• Example: using 64 Mbytes of main memory, Figure 7.4
• Initially, main memory is empty, except for the OS (a).
• The first three processes are loaded in, starting where the
operating system ends and occupying just enough space for each
process (b, c, d).
• This leaves a “hole” at the end of memory that is too small for a
fourth process.
• At some point, none of the processes in memory is ready. The
operating system swaps out process 2 (e), which leaves sufficient
room to load a new process, process 4 (f).
• Because process 4 is smaller than process 2, another small hole is
created.
• Later, a point is reached at which none of the processes in main
memory is ready, but process 2, in the Ready-Suspend state, is
available.
• Because there is insufficient room in memory for process 2, the
operating system swaps process 1 out (g) and swaps process 2
back in (h).
• As this example shows, this method starts out well, but
eventually it leads to a situation in which there are a lot of small
holes in memory.
• As time goes on, memory becomes more and more fragmented,
and memory utilization declines.
• This phenomenon is referred to as external fragmentation,
indicating that the memory that is external to all partitions
becomes increasingly fragmented.
Dynamic Partitioning
External Fragmentation
• memory becomes more and more fragmented
• memory utilization declines

Compaction
• technique for overcoming external fragmentation
• OS shifts processes so that they are contiguous
• free memory is together in one block
• time consuming and wastes CPU time
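A minimal sketch of what compaction does to a memory layout, assuming a toy representation where memory is a list of (name, size) regions and holes carry the name `None`:

```python
def compact(memory):
    """Slide allocated regions together; holes coalesce into one free block."""
    allocated = [(name, size) for name, size in memory if name is not None]
    free = sum(size for name, size in memory if name is None)
    return allocated + [(None, free)]

# Hypothetical layout with two holes left by earlier swaps:
layout = [("P1", 20), (None, 6), ("P3", 18), (None, 4)]
packed = compact(layout)   # [("P1", 20), ("P3", 18), (None, 10)]
```

The cost the slide warns about is hidden in that list comprehension: in a real system, "sliding" a region means copying every byte of the process and fixing up its bookkeeping, which is why compaction is time consuming.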
Placement Algorithms
• Because memory compaction is time consuming, the OS designer must be
clever in deciding how to assign processes to memory (how to plug the
holes).
• When it is time to load or swap a process into main memory, and if there is
more than one free block of memory of sufficient size, then the operating
system must decide which free block to allocate.
• Three placement algorithms are: best-fit, first-fit, and next-fit.
• All, of course, are limited to choosing among free blocks of main memory
that are equal to or larger than the process to be brought in.

• Best-fit – chooses the block that is closest in size to the request
• First-fit – begins to scan memory from the beginning and chooses
the first available block that is large enough
• Next-fit – begins to scan memory from the location of the last
placement and chooses the next available block that is large enough
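The three algorithms can be sketched over a list of free-block sizes. The free list below is hypothetical (not the one in Figure 7.5); each function returns the index of the chosen block, or `None` if nothing fits:

```python
def first_fit(blocks, size, start=0):
    """Scan from `start` (wrapping around) for the first block big enough."""
    n = len(blocks)
    for i in range(n):
        j = (start + i) % n
        if blocks[j] >= size:
            return j
    return None

def next_fit(blocks, size, last):
    """Resume the scan just past the last placement."""
    return first_fit(blocks, size, start=last + 1)

def best_fit(blocks, size):
    """Choose the block closest in size to the request."""
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(fits)[1] if fits else None

free = [10, 4, 20, 18, 7]     # hypothetical free blocks, in memory order
first_fit(free, 16)           # -> 2 (the 20-unit block, leaving a 4-unit hole)
best_fit(free, 16)            # -> 3 (the 18-unit block, leaving a 2-unit hole)
next_fit(free, 16, last=3)    # -> 2 (scan wraps past the end of memory)
```

Note that best-fit must examine every free block, while first-fit can stop at the first success; this is one reason first-fit is usually the fastest of the three.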
Memory Configuration Example
 Figure 7.5a shows an example memory
configuration after a number of
placement and swapping-out
operations. The last block that was
used was a 22-Mbyte block from which
a 14-Mbyte partition was created.
 Figure 7.5b shows the difference
between the best-, first-, and next-fit
placement algorithms in satisfying a
16-Mbyte allocation request.
 Best-fit will search the entire list of
available blocks and make use of the
18-Mbyte block, leaving a 2-Mbyte
fragment.
 First-fit results in a 6-Mbyte fragment,
and next-fit results in a 20-Mbyte
fragment.
Comparison of Placement Algorithms
• Which approach is the best will depend on the exact
sequence of process swapping that occurs and the size of
those processes.
• However, some general comments can be made
• The first-fit algorithm is not only the simplest but usually the
best and fastest
• The next-fit algorithm tends to produce slightly worse
results than first-fit.
• The next-fit algorithm will more frequently lead to an
allocation from a free block at the end of memory. The
result is that the largest block of free memory, which usually
appears at the end of the memory space, is quickly broken
up into small fragments. Thus, compaction may be required
more frequently with next-fit.
Comparison of Placement Algorithms
• On the other hand, the first-fit algorithm may litter the front
end with small free partitions that need to be searched over
on each subsequent first-fit pass.
• The best-fit algorithm, despite its name, is usually the worst
performer.
• Because this algorithm looks for the smallest block that will
satisfy the requirement, it guarantees that the fragment left
behind is as small as possible.
• Although each memory request always wastes the smallest
amount of memory, the result is that main memory is
quickly littered by blocks too small to satisfy memory
allocation requests.
• Thus, memory compaction must be done more frequently
than with the other algorithms.
Addresses
Logical
• A logical address is a reference to a memory location
independent of the current assignment of data to memory; a
translation must be made to a physical address before the
memory access can be achieved.

Relative
• A relative address is a particular example of a logical address, in
which the address is expressed as a location relative to some
known point, usually a value in a processor register.

Physical or Absolute
• A physical address, or absolute address, is an actual location in
main memory.
Paging
• Both unequal fixed-size and variable-size partitions are inefficient in
the use of memory; the former results in internal fragmentation, the
latter in external fragmentation.
• In paging, main memory is partitioned into equal fixed-size chunks that
are relatively small, and each process is also divided into small fixed-
size chunks of the same size.
• Then the chunks of a process, known as pages , could be assigned to
available chunks of memory, known as frames , or page frames.
• Using this technique the wasted space in memory for each process is
due to internal fragmentation consisting of only a fraction of the last
page of a process. There is no external fragmentation.
• Pages – chunks of a process
• Frames – available chunks of memory
Assignment of Process to Free Frames
• Figure 7.9 illustrates the use of pages and
frames. At a given point in time, some of the
frames in memory are in use and some are free.
A list of free frames is maintained by the OS.
• Process A, stored on disk, consists of four pages.
When it is time to load this process, the OS
finds four free frames and loads the four pages
of process A into the four frames ( Figure 7.9b ).
• Process B, consisting of three pages, and
process C, consisting of four pages, are
subsequently loaded. Then process B is
suspended and is swapped out of main
memory. Later, all of the processes in main
memory are blocked, and the OS needs to bring
in a new process, process D, which consists of
five pages.
• Now suppose, as in this example, that there are
not sufficient unused contiguous frames to hold
the process. Does this prevent the operating
system from loading D? The answer is no,
because we can once again use the concept of
logical address.
Page Table
• A simple base address register will no longer suffice. Rather, the
operating system maintains a page table for each process.
• The page table shows the frame location for each page of the process.
• Within the program, each logical address consists of a page number and
an offset within the page.
• Recall that in the case of simple partition, a logical address is the location
of a word relative to the beginning of the program; the processor
translates that into a physical address.
• With paging, the logical-to-physical address translation is still done by
processor hardware. Now the processor must know how to access the
page table of the current process.
• Presented with a logical address (page number, offset), the processor
uses the page table to produce a physical address (frame number, offset).
• Continuing our example, the five pages of process D are loaded into
frames 4, 5, 6, 11, and 12. Figure 7.10 shows the various page tables at
this time.
Data Structures

• A page table contains one entry for each page of the process, so that
the table is easily indexed by the page number (starting at page 0 ).
• Each page table entry contains the number of the frame in main
memory, if any, that holds the corresponding page.
• In addition, the OS maintains a single free-frame list of all the frames
in main memory that are currently unoccupied/available for pages.
• Thus we see that simple paging, as described here, is similar to fixed
partitioning.
• The differences are that, with paging, the partitions are rather small;
a program may occupy more than one partition; and these partitions
need not be contiguous
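The free-frame list can be sketched directly. Frame numbers here are hypothetical; loading a process pops one free frame per page, and the resulting page table is simply the list of frames indexed by page number:

```python
def load_process(num_pages, free_frames):
    """Build a page table by taking frames off the free-frame list."""
    if num_pages > len(free_frames):
        return None                      # not enough frames: cannot load
    return [free_frames.pop(0) for _ in range(num_pages)]

free_frames = [13, 14, 15, 18, 20, 21]   # hypothetical unoccupied frames
page_table_A = load_process(4, free_frames)   # -> [13, 14, 15, 18]
# free_frames is now [20, 21]; note the assigned frames need not be contiguous.
```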
Logical Addresses

• To make this paging scheme convenient, let us dictate that the page size, hence the
frame size, must be a power of 2. With the use of a page size that is a power of 2, it is
easy to demonstrate that the relative address, which is defined with reference to the
origin of the program, and the logical address, expressed as a page number and offset,
are the same.
• An example is shown in Figure 7.11. 16-bit addresses are used, and the page size is
1K = 1,024 bytes. The relative address 1502, in binary form, is 0000010111011110. With a
page size of 1K, an offset field of 10 bits is needed, leaving 6 bits for the page number.
Thus a program can consist of a maximum of 2^6 = 64 pages of 1K bytes each.
• As Figure 7.11b shows, relative address 1502 corresponds to an offset of 478
(0111011110) on page 1 (000001), which yields the same 16-bit number,
0000010111011110.
Logical Addresses
• The consequences of using a page size that is a power of 2 are
twofold.
1. First, the logical addressing scheme is transparent to the programmer,
the assembler, and the linker. Each logical address (page number, offset)
of a program is identical to its relative address.
2. Second, it is a relatively easy matter to implement a function in
hardware to perform dynamic address translation at run time.
• Consider an address of n + m bits, where the leftmost n bits are the
page number and the rightmost m bits are the offset. In our
example ( Figure 7.11b ), n = 6 and m =10. The following steps are
needed for address translation:
1. Extract the page number as the leftmost n bits of the logical address.
2. Use the page number as an index into the process page table to find the
frame number, k .
3. The starting physical address of the frame is k × 2^m, and the physical
address of the referenced byte is that number plus the offset. This
physical address need not be calculated; it is easily constructed by
appending the frame number to the offset.
Logical-to-Physical Address Translation - Paging

In our example, we have the logical address
0000010111011110, which is page number 1, offset 478.
Suppose that this page is residing in main memory frame
6 (binary 000110). Then the physical address is frame
number 6, offset 478, i.e., 0001100111011110 (Figure 7.12a).
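The four translation steps reduce to a few bit operations. This sketch uses the split from the example above (16-bit addresses, 10-bit offset, 6-bit page number) and a one-entry page table matching it:

```python
OFFSET_BITS = 10   # m: page size is 2**10 = 1,024 bytes

def page_translate(logical, page_table):
    page = logical >> OFFSET_BITS                 # leftmost n bits
    offset = logical & ((1 << OFFSET_BITS) - 1)   # rightmost m bits
    frame = page_table[page]                      # index page table by page number
    return (frame << OFFSET_BITS) | offset        # append frame number to offset

page_table = {1: 6}   # page 1 resides in frame 6, as in the example
phys = page_translate(0b0000010111011110, page_table)
# phys == 0b0001100111011110 (frame 6, offset 478)
```

The final line is the "no calculation needed" observation from the slide: shifting and OR-ing is exactly concatenation of frame number and offset.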
Segmentation
• A program can be subdivided into segments
 may vary in length
 there is a maximum length
• Addressing consists of two parts:
 segment number
 an offset
• Similar to dynamic partitioning
• Eliminates internal fragmentation
Logical View of Segmentation
[Figure: segments 1–4 of a program in user space mapped to
noncontiguous regions of physical memory]
Segmentation Architecture
• Logical address consists of a two-tuple:
<segment-number, offset>
• Segment table – maps two-dimensional logical addresses to
one-dimensional physical addresses; each table entry has:
• base – contains the starting physical address where the segment resides in
memory
• limit – specifies the length of the segment
• Segment-table base register (STBR) points to the segment table's
location in memory
• Segment-table length register (STLR) indicates number of segments
used by a program;
segment number s is legal if s < STLR
Segmentation Architecture (Cont.)
Segmentation Hardware
Logical-to-Physical Address Translation -
Segmentation

In our example, we have the logical address 0001001011110000, which
is segment number 1, offset 752. Suppose that this segment is residing
in main memory starting at physical address 0010000000100000. Then
the physical address is 0010000000100000 + 001011110000 =
0010001100010000 (Figure 7.12b).
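Segmentation translation differs from paging in that the offset is added to the base, not appended. This sketch uses the split implied by the example above (4-bit segment number, 12-bit offset); the limit value in the table entry is a hypothetical choice:

```python
SEG_OFFSET_BITS = 12   # offset field width; segment number is the upper 4 bits

def seg_translate(logical, seg_table):
    s = logical >> SEG_OFFSET_BITS                 # segment number
    d = logical & ((1 << SEG_OFFSET_BITS) - 1)     # offset within the segment
    base, limit = seg_table[s]                     # segment-table entry
    if d >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + d   # an addition, not a concatenation as in paging

seg_table = {1: (0b0010000000100000, 0x800)}   # base 8224; limit hypothetical
phys = seg_translate(0b0001001011110000, seg_table)
# phys == 0b0010001100010000 (8224 + 752 = 8976)
```

Because segments vary in length, the limit check is essential here; with paging the fixed page size makes the offset bound implicit in the field width.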
Summary

Table 7.2
Memory Management
Techniques
Security Issues
• If a process has not declared a portion of its memory to be
sharable, then no other process should have access to the
contents of that portion of memory
• If a process declares that a portion of memory may be shared
by other designated processes, then the security service of
the OS must ensure that only the designated processes have
access