Chapter-05 Operating System

This chapter covers memory management in operating systems, detailing how it allocates and deallocates memory to processes, and the differences between logical and physical addresses. It discusses various allocation strategies, including contiguous and non-contiguous memory allocation, as well as paging techniques and their advantages and disadvantages. Additionally, it explains the role of the Memory Management Unit (MMU), different page table structures, and methods for optimizing memory access and protection.


Chapter – 05

Operating System

Memory Management:
 The function of the operating system that manages primary memory.
 Keeps track of each byte in memory.
 Allocates memory to processes when needed and deallocates it when they finish.

🧭 Logical vs Physical Address


Aspect | Logical Address | Physical Address
Also called | Virtual address | Actual address
Generated by | CPU (during program execution) | MMU (Memory Management Unit)
Visible to | User/program | Hardware/memory
Translation | Done by the MMU | Not translated; used directly

Contiguous memory allocation: -


 The contiguous allocation method partitions memory into various regions.
 A memory partition that is free to allocate is known as a hole. An appropriate hole is searched for when allocating memory to a process.

Contiguous allocation with fixed partitioning: -


 Internal fragmentation: the memory left unused inside a partition after a program is placed in it.
 Limit on process size: a program larger than the largest partition cannot be loaded into RAM.
 Limit on the degree of multiprogramming: the fixed number of partitions caps how many processes can be in RAM at once.

Contiguous allocation with dynamic/variable partitioning: -


 Instead of having static partitions in memory, partitions are allocated to processes dynamically.
 As processes dynamically occupy and release memory, small holes are eventually generated between the allocated partitions. These holes cause External Fragmentation: enough total free memory exists to satisfy a request, but it is not contiguous.
External fragmentation can be reduced by compaction: shuffle memory contents to place all free memory together in one large block.

Compaction: -
Compaction is the solution for the memory wastage that occurs in dynamic partitioning. The operating system observes the holes in memory and periodically merges them so that a contiguous block can be allocated to a new process.

Memory partition selection techniques: -


1. First Fit
 Allocates the first hole (partition) that is big enough.
 Searches memory from the beginning.
 Fast, but may leave small useless holes (external fragmentation).
2. Best Fit
 Finds the smallest hole that is big enough.
 Tries to reduce wasted space, but is slower because it must scan the whole list.
3. Worst Fit
 Allocates the largest available hole.
 Leaves the biggest leftover space, which may be reusable later.
 Can cause more fragmentation over time.
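The three selection techniques above can be sketched as follows. This is a minimal illustration, not from the chapter; the `holes` list of free-partition sizes and the 212 KB request are made-up example values.

```python
def first_fit(holes, request):
    """Return the index of the first hole big enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest hole big enough, or None."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    """Return the index of the largest hole, if it is big enough."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]   # free partition sizes in KB
print(first_fit(holes, 212))  # 1: the 500 KB hole is the first that fits
print(best_fit(holes, 212))   # 3: the 300 KB hole is the smallest that fits
print(worst_fit(holes, 212))  # 4: the 600 KB hole is the largest
```

Note how the same request picks three different holes; best fit leaves an 88 KB fragment while worst fit leaves 388 KB that may still be reusable.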

Buddy System in Memory Management: -


The Buddy System divides memory into blocks whose sizes are powers of 2 (2 KB, 4 KB, 8 KB, etc.) and allocates memory by splitting and merging these blocks as needed.
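A minimal sketch of the sizing and splitting step, assuming a 256 KB block and a 21 KB request (illustrative numbers, not from the chapter):

```python
def buddy_block_size(request_kb):
    """Smallest power-of-2 block (in KB) that can hold the request."""
    size = 1
    while size < request_kb:
        size *= 2
    return size

def split_path(total_kb, request_kb):
    """Block sizes produced while halving total_kb down to the fitting size."""
    target = buddy_block_size(request_kb)
    sizes = [total_kb]
    while sizes[-1] > target:
        sizes.append(sizes[-1] // 2)
    return sizes

print(buddy_block_size(21))   # 32: a 21 KB request gets a 32 KB block
print(split_path(256, 21))    # [256, 128, 64, 32]
```

Each split produces two "buddy" halves; when a block is freed and its buddy is also free, the two merge back into the larger block.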

📦 Non-Contiguous Memory Allocation


Unlike contiguous allocation (where a process must be stored in a single
block of memory), non-contiguous allocation allows a process to be split
and stored in different parts of memory.

This helps in better utilization of memory and reduces external


fragmentation.

🧠 Memory-Management Unit (MMU)


The MMU is a hardware component in the CPU responsible for
translating logical addresses (used by programs) into physical addresses
(used by RAM).

It acts as a bridge between the CPU and the main memory.

🔄 Why is MMU Needed?

 Programs use logical addresses (like "start from 0").


 The OS stores them in physical memory (anywhere).
 The MMU does the conversion automatically and quickly.
🧠 What is Paging?
Paging is a memory management technique where:

 The process is divided into small fixed-size blocks called pages.


 The main memory (RAM) is divided into fixed-size blocks called
frames.
 Pages are loaded into any available frames, not necessarily contiguous ones.

🔄 How Paging Works (Step-by-Step)


1. Divide Process and Memory:
o Example: Process size = 10KB, Page size = 1KB → 10 pages
o Memory: Frame size = 1KB → Memory divided into 1KB frames
2. Load Pages into Frames:
o OS puts each page of the process into any available frame in
memory.
3. Page Table:
o The OS maintains a page table to map each page number to its
corresponding frame number.
4. Address Translation:
o A logical address (from the CPU) is split into:
 Page number (p)
 Offset within the page (d)
o Physical address = (frame number from the page table × page size) + offset
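The steps above can be sketched as a lookup plus some arithmetic. The 1 KB page size matches the example earlier; the page-table contents are assumed for illustration.

```python
PAGE_SIZE = 1024                   # 1 KB pages, as in the example above
page_table = {0: 5, 1: 2, 2: 7}    # page number -> frame number (assumed)

def translate(logical_address):
    p = logical_address // PAGE_SIZE   # page number
    d = logical_address % PAGE_SIZE    # offset within the page
    frame = page_table[p]              # page-table lookup
    return frame * PAGE_SIZE + d       # physical address

print(translate(1500))  # page 1, offset 476 -> frame 2 -> 2*1024 + 476 = 2524
```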
✅ Advantages of Paging
 No external fragmentation
 Easy to implement
 Supports non-contiguous allocation
❌ Disadvantages
 Internal fragmentation (last page might not be fully used)
 Overhead of maintaining page tables

Address Translation Scheme


 Page number (p) – used as an index into the page table, which contains the base address of each page in physical memory
 Page offset (d) – combined with the base address to form the physical memory address that is sent to the memory unit

📘 Paging Implementation & Hardware (Simplified)


🧠 How Paging is Implemented
 The OS keeps track of all memory frames (whether they’re free or used).
 When a process starts, empty frames are assigned to its pages.
 The OS updates the page table with the frame number where each page
is placed.
 The page table is stored in memory, and its address is kept in the
process’s PCB (Process Control Block).

🧰 Hardware Requirement
 To support paging, a special register called the Page Table Base Register
(PTBR) stores the address of the page table.
 When a process starts, the PTBR is loaded with that process's page table
address.
 When the process ends or is switched, PTBR is updated for the new
process.

🕒 Problem: Slower Memory Access


 To access a data location:
1. First access the page table to find the frame number.
2. Then access the memory to get the actual data.
 So, 2 memory accesses are needed → slower performance.
🚀 Solution: TLB (Translation Look-aside Buffer)
 A TLB is a high-speed cache that stores recently used page table entries.
 When the CPU needs a page:
o It checks the TLB first.
o If found (TLB hit) → data is accessed quickly (only 1 memory
access).
o If not found (TLB miss) → go to page table, then memory (2
accesses), and update the TLB.
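The hit/miss logic can be sketched as a small cache in front of the page table. The table contents and the 1 KB page size are assumed example values; a real TLB is hardware, not a dictionary.

```python
PAGE_SIZE = 1024
page_table = {0: 5, 1: 2, 2: 7}   # full mapping, kept in main memory (assumed)
tlb = {1: 2}                      # small cache of recently used entries

def lookup(page):
    """Return (frame, number_of_memory_accesses_needed)."""
    if page in tlb:               # TLB hit: no page-table access needed
        return tlb[page], 1
    frame = page_table[page]      # TLB miss: extra access to the page table
    tlb[page] = frame             # update the TLB for next time
    return frame, 2

print(lookup(1))  # (2, 1)  TLB hit: one memory access
print(lookup(2))  # (7, 2)  TLB miss: two accesses, then cached
print(lookup(2))  # (7, 1)  now a hit
```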

🔒 Protection in Paging (Simplified): -


In a paging system, each page of memory can have its own access rights, like:
 Read-only
 Write-only
 Read-write
 Execute
These rights are stored in the page table as protection bits (also called access
bits).

⚠️How Protection Works


 If a process tries to access a page in a way that is not allowed (e.g., writing to a read-only page), the OS traps the access and blocks the action.
 This protects memory and keeps processes from crashing or interfering with each other.

❌ Invalid Pages
 Sometimes, a process doesn’t use all its pages.
 To prevent the use of these unused or illegal pages, a valid-invalid bit is
added in the page table:
o Valid → page is in use
o Invalid → page is not in use; access is not allowed
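Both checks can be sketched together; the page-table entries below (frame, rights, valid bit) are assumed example values.

```python
page_table = {
    # page: (frame, rights, valid_bit)
    0: (5, "rw", True),
    1: (2, "r",  True),     # read-only page
    2: (None, "", False),   # valid-invalid bit = 0: page not in use
}

def access(page, mode):
    """Check the valid-invalid bit and protection bits before translating."""
    frame, rights, valid = page_table[page]
    if not valid:
        raise MemoryError("invalid page: trap to OS")
    if mode not in rights:
        raise PermissionError("protection violation: trap to OS")
    return frame

print(access(0, "w"))   # 5: writing page 0 is allowed
# access(1, "w")  -> PermissionError (writing a read-only page)
# access(2, "r")  -> MemoryError    (valid-invalid bit is 0)
```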

📏 Page Table Length Register (PTLR)


 The PTLR stores the length of the page table, i.e. the number of valid entries.
 If a process tries to access a page beyond the allowed range, the OS will
stop it with an interrupt.

📘 Shared Pages (Simplified)


In a multi-user system, many users might run the same program (like a
compiler).
But instead of storing multiple copies of the program in memory, we can
share one copy using shared pages.

💡 Example:
 A compiler needs 1500 KB of memory.
 If two users run it separately, it would normally use 3000 KB (1500 × 2).
 With shared pages, both users use the same code pages, so only 1500
KB is used.
 The page tables of both users point to the same code in memory.

🧠 Page Table Structures: Hierarchical / Multi-level


(Simplified)
Modern systems support large address spaces (like 32-bit), which means page
tables become huge and can take up megabytes of memory.

❗ The Problem:
 If page size = 4 KB and each page-table entry = 4 bytes
 Then a 32-bit address space needs 2^32 / 2^12 = 2^20 ≈ 1 million entries
 Total memory for one page table = 2^20 × 4 B = 4 MB
 Keeping such a large page table in one contiguous block of memory is hard

✅ The Solution: Multi-level (Hierarchical) Page Table


 Instead of one large page table, break it into smaller tables
 First level: Outer page table (points to second-level page tables)
 Second level: Inner page tables (point to actual pages in memory)

🔁 How it works:
1. Logical address is divided into 3 parts:
o p1: Index for outer page table
o p2: Index for inner page table
o d: Offset inside the page
2. First, p1 is used to find the correct inner page table
3. Then p2 finds the correct page frame
4. Offset d gives the exact memory location inside the page
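The split in step 1 is just bit slicing. A sketch for a 32-bit address with 4 KB pages, assuming a 10 + 10 + 12 bit layout (a common choice, not stated in the chapter):

```python
def split_address(addr):
    """Split a 32-bit logical address into (p1, p2, d)."""
    d  = addr & 0xFFF            # low 12 bits: offset inside the 4 KB page
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: inner page-table index
    p1 = (addr >> 22) & 0x3FF    # top 10 bits: outer page-table index
    return p1, p2, d

print(split_address(0x00403ABC))  # (1, 3, 2748), i.e. p1=1, p2=3, d=0xABC
```

The outer table is indexed by p1 to find an inner table, p2 indexes that inner table to find the frame, and d is added to the frame's base address.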

📌 Why use it?


 Saves memory by not loading full page tables when not needed
 Allows non-contiguous allocation of page tables

🔄 10.7.2 Inverted Page Table (Simplified)


🔧 Problem:
Traditional page tables are large because they have one entry for every virtual
page. This wastes memory.
✅ Solution: Inverted Page Table
 One entry per physical frame, not per virtual page.
 Each entry stores:
o Which process the frame belongs to (PID)
o Which virtual page is stored there
 This means one global table for all processes.
🧠 How it works:
 Logical address format: (PID, Page Number, Offset)
 The system searches the inverted table to find a match for (PID, Page
Number)
 When found, it combines the matched frame address with the offset to
get the physical address
🚫 Limitation:
 Slow search, especially for large tables (a linear scan over all frame entries)
 Difficult to support shared pages, since each frame entry records only one (PID, page) mapping, while sharing needs one frame mapped by several processes
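The search can be sketched as a scan over a per-frame list; the table contents and 1 KB frame size are assumed example values.

```python
# One entry per physical frame, each storing (pid, virtual page).
inverted = [
    (1, 0),   # frame 0 holds page 0 of process 1
    (2, 3),   # frame 1 holds page 3 of process 2
    (1, 2),   # frame 2 holds page 2 of process 1
]
FRAME_SIZE = 1024

def translate(pid, page, offset):
    for frame, entry in enumerate(inverted):   # linear search: the slow part
        if entry == (pid, page):
            return frame * FRAME_SIZE + offset
    raise MemoryError("page fault")

print(translate(1, 2, 100))   # frame 2 -> 2*1024 + 100 = 2148
```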

🧮 10.7.3 Hashed Page Table (Simplified)


✅ Why hashing?
To speed up the slow search in the inverted page table.
🔧 How it works:
 A hash function is applied on the page number of the virtual address
 This gives a location (index) in the hashed page table
 The system checks that entry:
o If it matches the page → return frame address
o If not → follow the chaining pointer to check the next (like in
linked lists)
🧠 Summary:
 Faster lookup than inverted table
 Still saves memory
 Page fault occurs if nothing is found even after chaining
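The hash-and-chain lookup can be sketched as follows; the table size and the inserted (page, frame) pairs are assumed example values.

```python
TABLE_SIZE = 8

# Each bucket is a chain of (page_number, frame_number) pairs.
buckets = [[] for _ in range(TABLE_SIZE)]

def insert(page, frame):
    buckets[page % TABLE_SIZE].append((page, frame))

def lookup(page):
    for p, frame in buckets[page % TABLE_SIZE]:   # walk the chain
        if p == page:
            return frame
    raise MemoryError("page fault")               # nothing found after chaining

insert(4, 9)
insert(12, 5)      # pages 4 and 12 hash to the same bucket (4 % 8 == 12 % 8)
print(lookup(12))  # 5: found second in the chain
```

Here the hash function is a simple modulo; real systems use a stronger hash, but the bucket-then-chain structure is the same.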
