Unit 5 Memory Management
What is Memory?
Computer memory can be defined as a collection of data represented in binary format.
On the basis of its function, memory can be classified into several categories: cache memory, main memory, and secondary memory.
A device that can store information or data temporarily or permanently is called a storage device.
How is Data Stored in a Computer System?
In order to understand memory management, we first have to be clear about how data is stored in a computer system.
The machine understands only binary, that is, 0s and 1s. The computer converts every piece of data into binary first and then stores it in memory.
That means if we have a program line written as int a = 10, the computer converts it into binary and then stores it in memory blocks.
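As a minimal illustration (assuming C and a 32-bit int, as is typical; the exact width is machine-dependent), the sketch below prints the bit pattern the machine actually stores for int a = 10:

#include <stdio.h>

int main(void) {
    int a = 10;                            /* the value from the example above */
    for (int bit = 31; bit >= 0; bit--) {  /* walk from the most significant bit down */
        putchar(((unsigned)a >> bit) & 1u ? '1' : '0');
        if (bit % 8 == 0 && bit != 0)
            putchar(' ');                  /* group the output byte by byte */
    }
    putchar('\n');                         /* prints 00000000 00000000 00000000 00001010 */
    return 0;
}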
Need for Multiprogramming
Programs always execute in main memory.
The size of main memory largely determines the degree of multiprogramming.
If main memory is larger, the CPU can keep more processes in memory at the same time, which increases the degree of multiprogramming as well as CPU utilization.
As the degree of multiprogramming increases, the CPU is utilized better.
Let's consider:
Process size = 4 MB
Main memory size = 4 MB
If the fraction of time the process spends doing I/O is P, then
CPU utilization = 1 - P
Let's say P = 70%:
CPU utilization = 30%
Now increase the memory size to 8 MB, keeping the process size at 4 MB.
Two processes can now reside in main memory at the same time.
If the fraction of time each process spends doing I/O is P, then
CPU utilization = 1 - P^2
Let's say P = 70%:
CPU utilization = 1 - 0.49 = 0.51 = 51%
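A small sketch of the same calculation, generalizing to CPU utilization = 1 - P^n for n resident processes (link with -lm):

#include <stdio.h>
#include <math.h>

int main(void) {
    double p = 0.70;                               /* 70% I/O wait, as in the example */
    for (int n = 1; n <= 2; n++) {
        double utilization = 1.0 - pow(p, n);      /* 1 - P^n */
        printf("n = %d -> CPU utilization = %.0f%%\n", n, utilization * 100.0);
    }
    return 0;
}
/* Output:
   n = 1 -> CPU utilization = 30%
   n = 2 -> CPU utilization = 51%
*/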
Memory Management
Memory management is the functionality of an operating system that handles or manages primary memory and moves processes back and forth between main memory and disk during execution.
Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free.
It determines how much memory is to be allocated to each process.
It decides which process will get memory at what time.
It tracks whenever some memory gets freed or unallocated and updates the status accordingly.
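A minimal sketch of this bookkeeping, assuming a hypothetical bitmap of fixed-size frames (real operating systems use far more elaborate structures):

#include <stdio.h>
#include <stdbool.h>

#define FRAMES 16                    /* hypothetical number of memory frames */

static bool used[FRAMES];            /* true = allocated, false = free */

/* Allocate the first free frame; return its index or -1 if memory is full. */
int alloc_frame(void) {
    for (int i = 0; i < FRAMES; i++) {
        if (!used[i]) { used[i] = true; return i; }
    }
    return -1;
}

/* Mark a frame free again and update the status, as described above. */
void free_frame(int i) {
    if (i >= 0 && i < FRAMES) used[i] = false;
}

int main(void) {
    int f1 = alloc_frame();          /* process A gets a frame */
    int f2 = alloc_frame();          /* process B gets a frame */
    free_frame(f1);                  /* process A terminates   */
    printf("frame %d freed, frame %d still in use\n", f1, f2);
    return 0;
}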
Objectives and functions
An operating system does the following activities for memory management:
Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.
In multiprogramming, the OS decides which process will get memory, when, and how much.
Allocates memory when a process requests it.
De-allocates memory when a process no longer needs it or has terminated.
Every program we execute and every file we access must be copied from a storage device into main memory.
All programs are loaded into main memory for execution. Loading a complete program into memory enhances performance.
Also, sometimes one program depends on other programs. In such a case, rather than loading all the dependent programs up front, the system links the dependent programs to the main executing program when they are required. This mechanism is known as Dynamic Linking.
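On a POSIX system this can be sketched with the dlopen/dlsym interface. The library name "libm.so.6" below is an assumption for a Linux-style math library; names differ across platforms, and older glibc systems need -ldl when linking:

#include <stdio.h>
#include <dlfcn.h>                        /* POSIX dynamic-linking API */

int main(void) {
    /* Load the math library only when it is actually needed. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Look up the cos() symbol in the freshly linked library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));

    dlclose(handle);                      /* unlink it when no longer needed */
    return 0;
}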
Swapping
A process needs to be in memory for execution, but sometimes there is not enough main memory to hold all the currently active processes in a timesharing system.
Swapping manages the limited space in RAM and allows the system to run more programs.
It is used when the required data is not available in RAM.
Swapping is the process of bringing each process into main memory, running it for a while, and then putting it back on the disk (swap in and swap out).
Swapping is a mechanism in which a process can be swapped temporarily out of main memory to secondary storage (disk) to make that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.
Though performance is usually affected by the swapping process, it helps in running multiple and big processes in parallel, and that is why swapping is also known as a technique for memory compaction.
Schematic View of Swapping
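A toy model of the swap-out/swap-in cycle, purely illustrative and with made-up process names (a real swapper moves whole process images or pages between RAM and swap space):

#include <stdio.h>
#include <string.h>

static char in_memory[16] = "";          /* name of the resident process */
static char on_disk[16]   = "";          /* last process swapped out     */

/* Bring a process into main memory, swapping out the current resident if any. */
void swap_in(const char *proc) {
    if (in_memory[0] != '\0') {
        printf("swap out: %s -> disk\n", in_memory);
        strcpy(on_disk, in_memory);      /* process image written to swap space */
    }
    printf("swap in : %s -> main memory\n", proc);
    strcpy(in_memory, proc);
}

int main(void) {
    swap_in("P1");   /* P1 runs for a while                       */
    swap_in("P2");   /* P1 is swapped out so P2 can be swapped in */
    swap_in("P1");   /* later the system brings P1 back           */
    return 0;
}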
Fragmentation
As processes are loaded into and removed from memory, the free memory space gets broken into little pieces.
After some time, it can happen that processes cannot be allocated to these memory blocks because the blocks are too small, and the blocks remain unused.
This problem is known as Fragmentation.
Fragmentation is of two types:
External fragmentation
The total free memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
Internal fragmentation
The memory block assigned to a process is bigger than the memory requested. Some portion of the block is left unused, as it cannot be used by any other process.
The following diagram shows how fragmentation can waste memory, and how a compaction technique can be used to create more free memory out of fragmented memory.
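Internal fragmentation can be made concrete with a small sketch, assuming a hypothetical 4 KB fixed allocation unit: a request is rounded up to whole blocks, and the unused remainder of the last block is wasted.

#include <stdio.h>

#define BLOCK_SIZE 4096   /* hypothetical fixed allocation unit (4 KB) */

int main(void) {
    int request = 5000;                                    /* bytes requested   */
    int blocks  = (request + BLOCK_SIZE - 1) / BLOCK_SIZE; /* blocks allocated  */
    int wasted  = blocks * BLOCK_SIZE - request;           /* internal waste    */
    printf("request %d B -> %d blocks allocated, %d B lost to internal fragmentation\n",
           request, blocks, wasted);
    return 0;
}
/* Output: request 5000 B -> 2 blocks allocated, 3192 B lost to internal fragmentation */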
Logical & Physical Addresses
Logical Address:
A logical address, also known as a virtual address, is an address generated by the CPU during program execution. It is the address seen by the process and is relative to the program's address space.
The logical address is virtual, that is, it does not exist physically in memory.
Physical Address:
A physical address is the actual address in main memory where data is stored.
The Memory Management Unit (MMU) translates logical addresses into physical addresses.
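A sketch of the relocation an MMU performs with a simple base/limit scheme: the physical address is the base register plus the logical address, provided the logical address stays within the limit. The values below are made up for illustration.

#include <stdio.h>

int main(void) {
    unsigned base    = 14000;    /* where the process is loaded in RAM         */
    unsigned limit   = 3000;     /* size of the process's logical address space */
    unsigned logical = 346;      /* address generated by the CPU                */

    if (logical < limit)
        printf("logical %u -> physical %u\n", logical, base + logical);
    else
        printf("trap: address %u is outside the process's address space\n", logical);
    return 0;
}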
Difference Between Logical Address and Physical Address
Visibility: The user can view the logical address of a program, but can never view its physical address.
Access: The user can use the logical address to access the physical address; the physical address can be accessed only indirectly, never directly.