
IBADAN DISTANCE LEARNING SERIES

CSC 421
(Operating System II)

By

Elizabeth O. Ogunseye

And
Nancy C. Woods

Department of Computer Science

University of Ibadan
Ibadan, Nigeria.

GENERAL INTRODUCTION AND COURSE OBJECTIVES

The Course Contents

To understand an operating system is to understand the workings of an entire computer system,


because the operating system manages each and every piece of hardware and software. This text
explores what operating systems are, how they work, what they do, and why.

All operating systems have certain core items in common: each must manage memory,
processing capability, devices and peripherals, files, and networks.

TABLE OF CONTENTS

UNIT 1: OVERVIEW OF OPERATING SYSTEM ..... 6
1.1 DEFINITION OF OPERATING SYSTEM ..... 6
1.1.1 MEMORY MANAGEMENT ..... 7
1.1.2 PROCESSOR MANAGEMENT ..... 7
1.1.3 DEVICE MANAGEMENT ..... 7
1.1.4 FILE MANAGEMENT ..... 8
1.1.5 OTHER IMPORTANT ACTIVITIES ..... 8
1.2 TYPES OF OPERATING SYSTEM ..... 8
1.2.1 Batch operating system ..... 8
1.2.2 Time-sharing operating systems ..... 9
1.2.3 Distributed operating system ..... 9
1.2.4 Network operating system ..... 9
1.2.5 Real-time operating system ..... 9
1.3 MULTITASKING ..... 10
1.4 MULTIPROGRAMMING ..... 11
1.5 SERVICES OF THE OPERATING SYSTEM ..... 11
1.5.1 Program execution ..... 11
1.5.2 I/O operation ..... 11
1.5.3 File system manipulation ..... 11
1.5.4 Communication ..... 12
1.5.5 Error handling ..... 12
1.5.6 Resource management ..... 12
1.5.7 Protection ..... 12
UNIT 2: MEMORY MANAGEMENT: EARLY SYSTEMS ..... 14
2.1 MEMORY MANAGEMENT ..... 14
2.2 SINGLE CONTIGUOUS ALLOCATION ..... 15
2.3 FIXED PARTITION ..... 15
2.4 DYNAMIC PARTITIONS ..... 16
2.4.1 FIRST FIT ..... 17
2.4.2 BEST FIT ..... 18
2.5 RELOCATABLE DYNAMIC PARTITIONS ..... 18
UNIT 3: MEMORY MANAGEMENT (VIRTUAL MEMORY) ..... 21
3.1 VIRTUAL MEMORY ..... 21
3.2 PAGED MEMORY ALLOCATION ..... 22
3.3 DEMAND PAGED MEMORY MANAGEMENT (DPMM) ..... 24
3.3.1 SWAPPING ..... 24
3.3.2 Page interrupt handler ..... 25
3.3.3 EFFICIENCY OF DPMM ..... 26
3.3.4 PAGE REPLACEMENT POLICIES/STRATEGIES ..... 26
3.4 SEGMENTED MEMORY MANAGEMENT ..... 27
3.5 SEGMENTED AND DEMAND-PAGED MEMORY MANAGEMENT ..... 29
UNIT 4: CONCURRENT PROCESSING ..... 31
4.1 CONCURRENT PROCESSING ..... 31
4.2 MULTIPROCESSING CONFIGURATIONS ..... 33
4.2.1 Master/Slave Configuration ..... 33
4.2.2 Loosely Coupled Configuration ..... 34
4.2.3 Symmetric Configuration ..... 34
UNIT 5: PROCESS SYNCHRONIZATION ..... 36
5.1 INTRODUCTION ..... 36
5.2 HOW DO SEVERAL PROCESSORS COOPERATE ..... 36
5.3 SYNCHRONIZATION MECHANISMS ..... 37
5.3.1 Test-and-Set ..... 37
5.3.2 Wait and Signal ..... 38
5.3.3 Semaphores ..... 38
5.4 PROCESS COOPERATION ..... 39
5.4.2 READERS AND WRITERS ..... 40
UNIT 6: PROCESS MANAGEMENT (DEADLOCK) ..... 41
6.1 DEADLOCK ..... 41
6.1.1 Four Necessary Conditions for Deadlock ..... 41
6.2 SEVEN CASES OF DEADLOCKS ..... 42
6.2.1 Case 1: Deadlocks on File Requests ..... 42
6.2.2 Case 2: Deadlocks in Databases ..... 42
6.3 MODELING DEADLOCKS USING DIRECTED GRAPHS ..... 43
6.4 STRATEGIES FOR HANDLING DEADLOCKS ..... 44
6.4.1 Deadlock Prevention ..... 44
6.4.2 Deadlock Avoidance ..... 45
6.4.3 Deadlock Detection ..... 46
6.4.4 Deadlock Recovery ..... 47
6.5 STARVATION ..... 47
6.6 RACE CONDITION ..... 47
UNIT 7: DEVICE MANAGEMENT ..... 49
INTRODUCTION ..... 49
7.1 DEVICE MANAGEMENT BASIC FUNCTIONS ..... 49
7.2 SYSTEM DEVICES ..... 49
7.2.1 Dedicated Devices ..... 49
7.2.2 Shared Devices ..... 50
7.2.3 Virtual Devices ..... 50
7.3 STORAGE MEDIA ..... 50
7.3.1 Sequential Access Storage Media ..... 50
7.3.2 Direct Access Storage Devices (DASD) ..... 52
7.4 COMPONENTS OF THE I/O SUBSYSTEM ..... 55
7.4.1 Communication Among Devices ..... 56
7.4.2 Direct Memory Access (DMA) ..... 57
7.5 MANAGEMENT OF I/O REQUESTS ..... 57
7.5.1 Device Handler Seek Strategies for Hard Disks with Movable R/W Heads ..... 58
UNIT 8: FILE MANAGEMENT ..... 60
INTRODUCTION ..... 60
8.1 WHAT IS A FILE? ..... 60
8.1.1 Characteristics of Files ..... 61
8.2 DIRECTORY/VOLUME STRUCTURE ..... 61
8.3 FILE SYSTEM ..... 61
8.4 FILE ORGANIZATION ..... 63
8.4.1 Record Format ..... 63
8.4.2 Access Methods ..... 63
8.5 PHYSICAL STORAGE ALLOCATION ..... 63
8.5.1 Contiguous Storage ..... 64
8.5.2 NON-CONTIGUOUS STORAGE ..... 65
8.5.2.1 Chained (or Linked) Storage ..... 65
8.5.2.2 Indexed Storage ..... 65
8.6 ACCESS CONTROL MATRIX ..... 66
UNIT 9: PERFORMANCE ..... 68
9.1 SYSTEM PERFORMANCE MONITOR ..... 68
9.2 PERFORMANCE ..... 68
9.2.1 Performance Measures ..... 68
9.2.2 Performance Evaluation Techniques ..... 69
9.2.3 Performance Monitoring Techniques ..... 70
9.3 BOTTLENECK AND SATURATION ..... 70
9.4 FEEDBACK LOOPS ..... 70
UNIT 10: SECURITY ..... 72
10.1 WHAT IS SECURITY ..... 72
10.2 SYSTEM MANAGEMENT ..... 73
10.3 SYSTEM SURVIVABILITY ..... 74
10.4 LEVELS OF PROTECTION ..... 75
10.5 BACKUP AND RECOVERY ..... 76
10.6 SECURITY BREACHES ..... 76
10.6.1 Unintentional Intrusions ..... 77
10.6.2 Intentional Attacks ..... 77
10.7 GUESSING PASSWORDS ..... 79
UNIT 11: SYSTEM PROTECTION ..... 80
INTRODUCTION ..... 80
11.1 PROTECTION ..... 80
11.2 PROTECTION METHODS ..... 80
Firewalls ..... 80
Authentication ..... 81
Encryption ..... 81
Sniffers ..... 81
Password Management ..... 81
UNIT 12: CONCURRENT PROGRAMMING ..... 84
UNIT 13: STUDY OF VARIOUS EXISTING OPERATING SYSTEMS ..... 87

Unit 1: Overview of Operating System

Introduction

An operating system (OS) is a collection of software that manages computer hardware resources
and provides common services for computer programs. It is a vital component of the system
software in a computer system and serves as an interface between the computer user and the
computer hardware. An operating system performs all the basic tasks, such as file management,
memory management, process management, handling input and output, and controlling
peripheral devices such as disk drives and printers.
Learning Objectives
After completing this chapter, you should be able to describe:
• The basic role of an operating system
• The major operating system software subsystem managers and their functions
• The types of machine hardware on which operating systems run
• The differences among batch, interactive, real-time, hybrid, and embedded operating systems

1.1 Definition of Operating System


An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.

The following are some of the important functions of an operating system:

 Memory Management
 Processor Management
 Device Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users
1.1.1 Memory Management
Memory management refers to the management of primary memory, or main memory. Main memory
is a large array of words or bytes, where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to
be executed, it must be in main memory. An operating system does the following activities for
memory management −
 Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which
parts are not in use.
 In multiprogramming, the OS decides which process will get memory, when, and how
much.
 Allocates memory when a process requests it.
 De-allocates memory when a process no longer needs it or has been terminated.
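The bookkeeping described above can be sketched in a few lines of Python. This is an illustrative toy, not a real OS allocator; the `MemoryManager` class, the first-fit choice, and the process names are all invented for this example:

```python
class MemoryManager:
    """Toy bookkeeping table: a list of [start, size, owner] blocks,
    where owner is None for memory that is not in use."""

    def __init__(self, total):
        self.blocks = [[0, total, None]]  # one big free block to start

    def allocate(self, pid, size):
        # First-fit: take the first free block that is large enough.
        for i, (start, bsize, owner) in enumerate(self.blocks):
            if owner is None and bsize >= size:
                self.blocks[i] = [start, size, pid]
                if bsize > size:  # keep the leftover as a free block
                    self.blocks.insert(i + 1, [start + size, bsize - size, None])
                return start
        return None  # no free block is big enough

    def free(self, pid):
        # De-allocate every block owned by a terminated process.
        for block in self.blocks:
            if block[2] == pid:
                block[2] = None

    def in_use(self):
        # "Which parts of memory are in use, and by whom."
        return [(s, sz, o) for s, sz, o in self.blocks if o is not None]

mm = MemoryManager(100)
mm.allocate("P1", 30)
mm.allocate("P2", 20)
mm.free("P1")           # P1 terminates; its memory is reclaimed
print(mm.in_use())      # → [(30, 20, 'P2')]
```

The point is the bookkeeping itself: the OS always knows which addresses belong to which process, so it can grant requests and reclaim memory on termination.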
1.1.2 Processor Management
In a multiprogramming environment, the OS decides which process gets the processor, when, and
for how much time. This function is called process scheduling. An operating system does the
following activities for processor management −
 Keeps track of the processor and the status of each process. The program responsible for
this task is known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates the processor when the process no longer requires it.
1.1.3 Device Management
An operating system manages device communication via the devices' respective drivers. It does the
following activities for device management −
 Keeps track of all devices. The program responsible for this task is known as the I/O
controller.
 Decides which process gets the device, when, and for how much time.
 Allocates each device in an efficient way.

 De-allocates devices.
1.1.4 File Management
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories.
An operating system does the following activities for file management −
 Keeps track of information, location, uses, status, etc. These collective facilities are often
known as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.
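A toy picture of the hierarchical structure just described can help: each directory holds files and other directories. This sketch is purely illustrative (the directory and file names are invented), with each directory modelled as a Python dict:

```python
# Each directory is a dict that may hold files (here, the string "file")
# and other directories (nested dicts).
root = {
    "home": {
        "ada": {"notes.txt": "file", "code": {"os.c": "file"}},
    },
    "etc": {"hosts": "file"},
}

def list_files(d, path=""):
    """'Keeps track of information and location': walk the tree and
    return the full path of every file, sorted."""
    files = []
    for name, entry in d.items():
        full = path + "/" + name
        if isinstance(entry, dict):          # a subdirectory: recurse
            files.extend(list_files(entry, full))
        else:                                # a file: record its path
            files.append(full)
    return sorted(files)

print(list_files(root))
# → ['/etc/hosts', '/home/ada/code/os.c', '/home/ada/notes.txt']
```

A real file system records far more per entry (owner, size, timestamps, permissions), but the tree-of-directories shape is the same.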
1.1.5 Other Important Activities
The following are some of the important activities that an operating system performs −
 Security − By means of passwords and similar techniques, it prevents unauthorized
access to programs and data.
 Control over system performance − Recording delays between a request for a service and
the response from the system.
 Job accounting − Keeping track of the time and resources used by various jobs and users.
 Error detecting aids − Production of dumps, traces, error messages, and other debugging
and error-detecting aids.
 Coordination between other software and users − Coordination and assignment of
compilers, interpreters, assemblers, and other software to the various users of the computer
system.
1.2 Types of Operating System
Operating systems have existed since the very first computer generation, and they keep evolving
with time. In this chapter, we will discuss some of the most commonly used types of operating
systems.
1.2.1 Batch operating system
The users of a batch operating system do not interact with the computer directly. Each user
prepares his job on an off-line device like punch cards and submits it to the computer operator.
To speed up processing, jobs with similar needs are batched together and run as a group. The
programmers leave their programs with the operator and the operator then sorts the programs with
similar requirements into batches.
The problems with Batch Systems are as follows −

 Lack of interaction between the user and the job.


 The CPU is often idle, because mechanical I/O devices are much slower than the CPU.
 Difficult to provide the desired priority.

1.2.2 Time-sharing operating systems
Time-sharing is a technique that enables many people, located at various terminals, to use a
particular computer system at the same time. Time-sharing, or multitasking, is a logical extension
of multiprogramming: processor time shared among multiple users simultaneously is termed
time-sharing. The main difference between multiprogrammed batch systems and time-sharing
systems is that multiprogrammed batch systems aim to maximize processor use, whereas
time-sharing systems aim to minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so
frequently that each user receives an immediate response. For example, in transaction
processing, the processor executes each user program in a short burst, or quantum, of computation.
That is, if n users are present, each user gets a time quantum in turn. When a user submits a
command, the response time is at most a few seconds. The operating system uses CPU
scheduling and multiprogramming to provide each user with a small portion of time. Computer
systems that were designed primarily as batch systems have been modified into time-sharing
systems.
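The quantum mechanism above can be sketched as a simple round-robin simulation. This is a minimal illustration, not a real scheduler: the quantum of 2 time units and the process names and burst times are made up for the example:

```python
from collections import deque

def round_robin(bursts, quantum=2):
    """bursts: dict mapping process name -> CPU time it needs.
    Returns the order in which the CPU is handed out, one quantum at a time."""
    ready = deque(bursts.items())           # the ready queue
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        schedule.append(name)               # this process gets the CPU now
        if remaining > quantum:             # quantum expires before it finishes:
            ready.append((name, remaining - quantum))  # back of the queue
    return schedule

# Three users' jobs share one CPU; each gets a short turn in rotation,
# which is what gives every user the impression of an immediate response.
print(round_robin({"A": 4, "B": 2, "C": 6}))
# → ['A', 'B', 'C', 'A', 'C', 'C']
```

Note how no job waits longer than one full rotation of the queue before it runs again; that bounded wait is what keeps response time low for every user.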
1.2.3 Distributed operating System
Distributed systems use multiple central processors to serve multiple real-time applications and
multiple users. Data processing jobs are distributed among the processors accordingly. The
processors communicate with one another through various communication lines (such as high-
speed buses or telephone lines). These are referred to as loosely coupled systems or distributed
systems. Processors in a distributed system may vary in size and function, and are referred to as
sites, nodes, computers, and so on.
1.2.4 Network operating System
A network operating system runs on a server and provides the server with the capability to manage
data, users, groups, security, applications, and other networking functions. The primary purpose
of a network operating system is to allow shared file and printer access among multiple
computers in a network, typically a local area network (LAN) or another private network, and to
other networks. Examples of network operating systems include Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
1.2.5 Real Time operating System
A real-time system is defined as a data processing system in which the time interval required to
process and respond to inputs is so small that it can control its environment. The time taken by the
system to respond to an input and display the required updated information is termed
the response time, which in this method is much shorter than in online processing. Real-time
systems are used when there are rigid time requirements on the operation of a processor or the
flow of data, and they can be used as control devices in dedicated applications. A real-time
operating system must have well-defined, fixed time constraints, otherwise the system will fail.
Examples include scientific experiments, medical imaging systems, industrial control systems,
weapon systems, robots, and air traffic control systems.
There are two types of real-time operating systems: hard real-time systems, which guarantee that
critical tasks complete within their time constraints, and soft real-time systems, in which a
critical task gets priority over other tasks but its deadline is not strictly guaranteed.

1.3 Multitasking
Multitasking is when multiple jobs are executed by the CPU by switching between them so
rapidly that they appear to run simultaneously. Switches occur so frequently that users may
interact with each program while it is running. An OS does the following activities related to
multitasking −
 The user gives instructions to the operating system or to a program directly, and receives
an immediate response.
 The OS handles multitasking in the way that it can handle multiple operations/executes
multiple programs at a time.
 Multitasking Operating Systems are also known as Time-sharing systems.
 These Operating Systems were developed to provide interactive use of a computer system
at a reasonable cost.
 A time-shared operating system uses the concept of CPU scheduling and
multiprogramming to provide each user with a small portion of a time-shared CPU.
 Each user has at least one separate program in memory.
 A program that is loaded into memory and is executing is commonly referred to as a process.
 When a process executes, it typically executes for only a very short time before it either
finishes or needs to perform I/O.
 Since interactive I/O typically runs at slower speeds, it may take a long time to complete.
During this time, the CPU can be utilized by another process.
 The operating system allows the users to share the computer simultaneously. Since each
action or command in a time-shared system tends to be short, only a little CPU time is
needed for each user.
 As the system switches CPU rapidly from one user/program to the next, each user is given
the impression that he/she has his/her own CPU, whereas actually one CPU is being shared
among many users.
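The rapid switching described above can be sketched as a short round-robin simulation (a minimal illustration; the job names, service times and time quantum are hypothetical):

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate time-sharing: each job gets `quantum` units of CPU
    before the CPU switches to the next job in the queue."""
    queue = deque(jobs.items())          # (name, remaining_time) pairs
    schedule = []                        # order in which jobs receive the CPU
    while queue:
        name, remaining = queue.popleft()
        schedule.append(name)
        remaining -= quantum
        if remaining > 0:                # job not finished: rejoin the queue
            queue.append((name, remaining))
    return schedule

# Three interactive users' programs sharing one CPU (times are hypothetical).
order = round_robin({"A": 3, "B": 2, "C": 1}, quantum=1)
print(order)  # each user gets the CPU in turn: ['A', 'B', 'C', 'A', 'B', 'A']
```

Because the quantum is small relative to human reaction time, each user sees what looks like a dedicated CPU even though one processor serves all three.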
1.4 Multiprogramming
Sharing the processor when two or more programs reside in memory at
the same time is referred to as multiprogramming. Multiprogramming
assumes a single shared processor. Multiprogramming increases CPU
utilization by organizing jobs so that the CPU always has one to execute.
The figure shows the memory layout for a multiprogramming system.
Here you can see that there are several jobs loaded into the memory to be
executed by the CPU. The CPU will then use one of the scheduling
algorithms you studied in CSC 321 to select which job to be executed and
for how long.
1.5 Services of the Operating System
An Operating System provides services both to the users and to the programs.
 It provides programs an environment to execute.
 It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −
1.5.1 Program execution
Operating systems handle many kinds of activities from user programs to system programs like
printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate, registers,
OS resources in use). Following are the major activities of an operating system with respect to
program management −
 Loads a program into memory.
 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.
1.5.2 I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide
the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
 I/O operation means read or write operation with any file or any specific I/O device.
 Operating system provides the access to the required I/O device when required.
1.5.3 File system manipulation
A file represents a collection of related information. Computers store files on disk
(secondary storage) for long-term storage purposes. Examples of storage media include magnetic
tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its own
properties like speed, capacity, data transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. Following are the major activities of an
operating system with respect to file management −
 Program needs to read a file or write a file.
 The operating system gives the permission to the program for operation on file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.
1.5.4 Communication
In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in the
network.
The OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication −
 Two processes often require data to be transferred between them.
 Both the processes can be on one computer or on different computers, but are connected
through a computer network.
 Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.
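The message-passing method can be illustrated with a small sketch, using threads to stand in for two processes and a thread-safe queue to stand in for the OS-managed communication channel (the message contents and names are illustrative only):

```python
import queue
import threading

# Message passing: the two "processes" share no data directly; they
# exchange information only through a channel (modeled here by a queue).
channel = queue.Queue()
received = []

def producer():
    for msg in ["request", "data", "done"]:
        channel.put(msg)                 # send a message into the channel

def consumer():
    while True:
        msg = channel.get()              # receive (blocks until a message arrives)
        received.append(msg)
        if msg == "done":
            break

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['request', 'data', 'done']
```

With shared memory the two parties would instead read and write a common region directly; message passing trades some copying overhead for the isolation the OS can enforce between processes.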
1.5.5 Error handling
Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to error
handling −
 The OS constantly checks for possible errors.
 The OS takes an appropriate action to ensure correct and consistent computing.
1.5.6 Resource Management
In a multi-user or multi-tasking environment, resources such as main memory, CPU cycles
and file storage must be allocated to each user or job. Following are the major activities of an
operating system with respect to resource management −
 The OS manages all kinds of resources using schedulers.
 CPU scheduling algorithms are used for better utilization of CPU.
1.5.7 Protection
Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users
to the resources defined by a computer system. Following are the major activities of an operating
system with respect to protection −
 The OS ensures that all access to system resources is controlled.
 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.

Review Questions
1. List and explain the different types of O/S.
2. When is an O/S termed a multiprogramming O/S?
3. What is the basic role of an operating system?
4. List and describe the services provided by the O/S.
Unit 2: Memory Management: Early Systems
Expected Duration: 1 week or 2 contact hours

Introduction

The Memory Manager is in charge of main memory, also known as RAM, short for Random
Access Memory. The Memory Manager checks the validity of each request for memory space and,
if it is a legal request, it allocates a portion of memory that isn’t already in use. In a multiuser
environment, the Memory Manager sets up a table to keep track of who is using which section of
memory. Finally, when the time comes to reclaim the memory, the Memory Manager deallocates
memory. A primary responsibility of the Memory Manager is to protect the space in main memor y
occupied by the operating system itself—it can’t allow any part of it to be accidentally or
intentionally altered.

Learning Objectives
After completing this unit, you should be able to describe:
• The basic functionality of the three memory allocation schemes presented in this unit: fixed partitions,
dynamic partitions, relocatable dynamic partitions
• Best-fit memory allocation as well as first-fit memory allocation schemes
• How a memory list keeps track of available memory
• The importance of deallocation of memory in a dynamic partition system
• The importance of the bounds register in memory allocation schemes
• The role of compaction and how it improves memory allocation efficiency

2.1 Memory Management
Memory management is a critical part of the operating system and is concerned with the management
of primary memory. The performance of the entire system is directly dependent on two things:
 How much memory is available, and
 How it is optimized while jobs are being processed.
Memory management is concerned with:
1. Keeping track of the status of each memory location.
2. Determining allocation policy for memory i.e. who gets what and what quantity
3. Allocation techniques: i.e. once it is decided to allocate memory, the specific location must
be selected.
4. Deallocation technique and policy
Memory management has a great impact on system efficiency.
Storage management strategies are geared to obtaining the best possible use of the main memory.
The strategies are divided into three:
 Placement strategies: i.e. where in main memory to place an incoming job. These include
first fit, best fit and worst fit.
 Replacement strategies: concerned with determining which job or data to be replaced in
the main memory to make room for incoming programs.
 Fetch strategies: concerned with when to get the next piece of program or data transferred
from the secondary storage to the main memory.
Here we have two main options:
 Demand Fetch: programs and data are brought into the memory specifically when they are
requested for or referenced.
 Anticipatory Fetch: the system predicts a program’s needs and attempts to load the
appropriate program and data pieces into the main memory before they are actually needed.
There are many memory management policies which are as follows:

2.2 SINGLE CONTIGUOUS ALLOCATION
This is the earliest memory allocation scheme. In this scheme, the job to be
executed is loaded completely into the memory in a single contiguous
region. If the job is too large to fit, it cannot be executed. Only one
job is loaded into the memory at a time, so there is no multiprogramming.
If the job is too small, there is wasted/unused memory. The job runs
to completion and then releases all its memory for another job.

Advantages
 It is simple
 It does not require great expertise to understand or use such a
system
Disadvantages
 Memory is not fully utilized.
 Poor utilization of the processor when the running job is waiting for I/O, i.e. the CPU lies
idle.
 User job is limited to the size of the memory
 Only one job can be processed at a time i.e. no multiprogramming

2.3 FIXED PARTITION
Fixed partitions are also known as static partitions. Here the memory is divided into partitions
before any job is processed. One partition is used for one job. The size of each partition is set
when the system is powered on and cannot be changed unless the system is shut down and
reconfigured. When a job is to be loaded, a partition of sufficient size is found and assigned to it.
Jobs are not allowed to access partitions not allocated to them, so there is a boundary between job
memory allocations. The jobs are usually assigned memory in the best possible way. However, if
partitions are too small, large jobs cannot be run; if they are too large, a lot of memory will be
wasted by small jobs. If a job that requires a large memory comes after the large partition has been
allocated, then the job must wait until that partition becomes available. So larger jobs may end up
with longer turnaround times. For large partitions assigned to small jobs, there is a lot of
wasted space within the partition, and this wasted space cannot be given to another job, because
the partition size is fixed. This phenomenon is called Internal Fragmentation.

Fixed Partition example
Given a system that has partitioned its memory into five portions as shown in Figure 2.1 (a), jobs
are then assigned to partitions for execution. Assume the following jobs, with their respective
sizes, have been submitted to the O/S for processing in this order: Job 1 - 50k, Job 2 - 35k,
Job 3 - 80k, Job 4 - 40k.
 The O/S will assign these jobs to suitable partitions so as to minimize internal
fragmentation.
 Figure 2.1(b) shows this allocation.
 From the diagram, note that Job 4 must wait for other jobs to run to completion before it
can be allocated memory space, because its size is larger than any remaining available
partition.

Memory partitions            Memory partitions

O/S                          O/S
100k                         100k - Job 3
25k                          25k  - (free)
25k                          25k  - (free)
50k                          50k  - Job 1
50k                          50k  - Job 2

Figure 2.1 (a)               Figure 2.1 (b)
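The allocation in Figure 2.1 can be reproduced with a short sketch that assigns each job to the smallest free partition that fits, which minimizes internal fragmentation (the fitting rule and function names are assumptions for illustration):

```python
def assign_jobs(partitions, jobs):
    """Assign each job to the smallest free partition that fits
    (minimising internal fragmentation); jobs that do not fit must wait."""
    free = dict(enumerate(partitions))   # partition index -> size
    placement, waiting = {}, []
    for name, size in jobs:
        fits = [(sz, i) for i, sz in free.items() if sz >= size]
        if fits:
            sz, i = min(fits)            # smallest adequate partition
            placement[name] = (i, sz, sz - size)  # (partition, size, waste)
            del free[i]                  # one partition holds one job
        else:
            waiting.append(name)
    return placement, waiting

# Partition sizes and job sizes from the Figure 2.1 example.
placement, waiting = assign_jobs(
    [100, 25, 25, 50, 50],
    [("Job 1", 50), ("Job 2", 35), ("Job 3", 80), ("Job 4", 40)])
print(placement)  # Job 1 -> a 50k partition (waste 0), Job 2 -> 50k (waste 15),
                  # Job 3 -> 100k (waste 20)
print(waiting)    # ['Job 4']  (only the two 25k partitions remain)
```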

Advantages
 More jobs get executed
 Memory is better used than in the single contiguous allocation
Disadvantages
 Internal fragmentation
 If jobs that require large memory arrive after the large partition has been allocated, they
have to wait, i.e. longer turnaround times for larger jobs.
 All the jobs are still loaded into the memory and some information may never be used.

2.4 DYNAMIC PARTITIONS
With dynamic partitions, partitions are created when a job is to be processed and the job is given
only as much memory as it requests. There are no fixed partitions, so if a job requires 30k, then
30k will be allocated to it. The memory manager has to keep track of available spaces as well as
allocated spaces i.e. 2 lists. For each space, the memory manager also lists its size and location,
this will help it when trying to make allocations. When jobs are first loaded, the memory is
fully utilized, but as jobs finish and release their memory allocations and these spaces get
reallocated, there is a tendency to have memory wastage outside the partitions. This phenomenon
is known as External Fragmentation. Figure 2.2 shows what happens when dynamic partitions
are used. When the jobs initially arrive, the memory is well utilized as in (a). However, as jobs
complete execution and their memory spaces are de-allocated, several fragments of memory are
created, which lead to external fragmentation. In addition, when a job arrives and needs
space, even if the total size of the several fragments could accommodate that job, it cannot be
loaded, because those fragments are not contiguous (e).
Since the operating system keeps track of free and allocated spaces, it is easy for it to
allocate memory when a new job arrives: it must find a free memory space and then allocate it
to the job. There are two major ways of allocating memory spaces: FIRST FIT and BEST FIT.

2.4.1 FIRST FIT
In this partition algorithm, the first partition that meets the requirement of the job is allocated to
the job. In first fit, the list or table of free memory is sorted by location, from the lowest
to the highest memory address. This makes it easy for the memory manager to locate the first
memory area big enough to meet the request of a job.
Advantages
 Fast in allocation.
 Uses free areas at the lower memory addresses, which tends to allow a large free area to
form at the high memory addresses.
Disadvantages
 It is not always efficient, since a more suitable free space, one that would minimize
external fragmentation, may exist at a higher memory address.

2.4.2 BEST FIT
In this scheme, the best-suited free area is used to carve out a partition for the new job. This is
done in such a way as to minimize wasted memory. The best-suited free area in this case is the
one closest in size to the job in question.
Advantages
 If the free list is kept sorted by size, the best-fit area can be found, on average, by searching
half the table, so it is reasonably fast.
 If there is a free area of exactly the desired size, it will be selected.
 If there is no area of desired size, the partition is carved out of the smallest possible free
area and does not destroy a large area that may be needed later.
Disadvantages
 Since a free area of exactly the right size is not always available, one has to be carved out,
so we may end up with a large number of small free fragments (slivers) instead of a large
free area.
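The two policies can be contrasted on the same free list with a minimal sketch (the hole addresses and sizes are hypothetical):

```python
def first_fit(free_list, request):
    """Free list sorted by address: take the first hole big enough."""
    for addr, size in free_list:
        if size >= request:
            return addr
    return None                      # no hole can satisfy the request

def best_fit(free_list, request):
    """Scan every hole and take the one whose size is closest to the request."""
    candidates = [(size, addr) for addr, size in free_list if size >= request]
    return min(candidates)[1] if candidates else None

# Holes as (start_address, size), sorted by address (values hypothetical).
holes = [(0, 30), (100, 70), (250, 45), (400, 120)]
print(first_fit(holes, 40))  # 100 -> first adequate hole, wasting 30
print(best_fit(holes, 40))   # 250 -> closest-sized hole, wasting only 5
```

First fit stops at the first adequate hole, so it is fast; best fit examines every hole but leaves a smaller sliver behind.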

We could also have NEXT FIT, which, when a job arrives, starts searching for the next available
block from the last allocated block, and WORST FIT, which allocates the largest available free
block to the new job. There is also the Multiple Partition Algorithm, which can sometimes
decrease fragmentation. Under it, a job may consist of several separate memory segments that
need not be contiguous, while adequate protection is still maintained. A new job may, for
example, be allocated two 50k memory partitions, or four 25k memory partitions, and so on. This
also makes it easy for a job to request more memory during execution, since its memory does not
have to be contiguous.

Advantages of Partition Allocation
 It facilitates multiprogramming, hence more efficient utilization of the processor and I/O
devices.
 Requires no special hardware
 Algorithms used are simple and easy to implement

Disadvantages of Partitioned Memory Allocation
 Fragmentation, both internal and external.
 A single free area may not be large enough for a partition or a job's request.
 It requires more memory than the single contiguous allocation, and a more complex
allocation mechanism.
 Memory may contain information that is never used. Also, a job’s partition size is limited
to the size of physical memory.

2.5 RELOCATABLE DYNAMIC PARTITIONS
This scheme was developed to solve the problem of fragmentation and also to utilize the slivers.
In this memory allocation scheme, the memory manager relocates programs to gather together all
the empty blocks and compact them to make one block large enough to accommodate some more
jobs waiting to get in. This compaction is sometimes called garbage collection or defragmentation
and is performed by the operating system.
While compaction is taking place, everything else must wait. All the programs in the memory are
moved so that they are contiguous. In so doing, the addresses and references to memory
addresses within each program must be changed. This can be a complex task, because when
programs are compiled and interpreted, they are loaded into the memory as sets of zeros and
ones, and the operating system has to determine which sets are addresses and which are
instructions or data. Hardware techniques have been developed to cope with this relocation
problem. Most of them are based on the concept of MAPPED MEMORY, i.e. the address space
“seen” by a job is not necessarily the same as the physical memory address used. Since programs
are relocated, their load addresses change, and so do all the addresses within each program.

If an address is left unchanged, the program may not function correctly or may branch into/reference
another program. The hardware components of the computer that assist with these problems are:
Bound Register: This stores the highest (or lowest) location in memory accessible by each
program. It ensures that during execution, the program does not access memory locations that are
“out of bounds” to it.
Relocation Register: Contains the value that must be added or subtracted to each address
referenced in a program so that it will access the correct memory address after relocation.
If the program has not been relocated, the value is zero. Before any line of instruction is
executed, the true address of the location is computed by adding its original address value to the
value stored in the relocation register. This method does not alter the addresses specified in the
program. The relocation adjustment is done automatically as each instruction is executed; this is
called Dynamic Relocation.
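The interplay of the bounds and relocation registers can be sketched as follows (a simplified model; the register values and function names are hypothetical):

```python
def resolve(logical_addr, relocation_reg, bounds_reg):
    """Map a program (logical) address to a physical address after relocation.
    The bounds register guards against out-of-range references."""
    if logical_addr > bounds_reg:            # address "out of bounds" to this job
        raise MemoryError("out-of-bounds reference")
    return logical_addr + relocation_reg     # program addresses stay unchanged

# A job compiled to run from address 0 but relocated to start at physical
# address 12000; it occupies 1000 bytes (register values are hypothetical).
print(resolve(340, relocation_reg=12000, bounds_reg=999))  # 12340
```

Note that the program itself is never rewritten: only the relocation register changes when the job is moved during compaction.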

Compaction can be done when there are jobs waiting to get in and a large enough free area is not
available, when a certain percentage of memory becomes busy, after a certain amount of time, or
after a partition is deallocated.
Whichever trigger is chosen, compaction still incurs some overhead cost, and time is lost.
Advantages
 Eliminates fragmentation and thus makes it possible to allocate more partitions.
 It allows for a higher degree of multiprogramming, which results in increased memory and
processor utilization and thus increases throughput.
Disadvantages
 Relocation hardware increases the cost of the computer
 Compaction time may be substantial.
 Some memory may still be unused because, even after compaction, the amount of free
space may be less than the needed partition size.
All the memory allocation techniques discussed so far have three things in common.
The entire program must:
 Be loaded into the memory
 Be stored contiguously
 Remain in memory until the job runs to completion

The following techniques do not require the entire program to be loaded contiguously in memory.

Review Questions
1. Compare and contrast Internal and External fragmentation.
2. In which allocation scheme are the Bound Register and Relocation Register used? What are they
used for?
3. What is common to all the memory allocation schemes described in this unit?
4. Which memory allocation scheme first allowed multiprogramming? Justify your answer.
UNIT 3 MEMORY MANAGEMENT (VIRTUAL MEMORY)
Introduction
In the previous lecture, we learned about the early memory management schemes. These schemes
are no longer being used, because technology has changed. Most recent systems, make use of
virtual memory management schemes. These schemes will be studied in this lecture. There are
two common terms used in virtual memory: Segment and Page. A segment is a
logical unit of information that is visible to the user’s program, normally performs some related
function, and is of arbitrary size. A page, on the other hand, is a physical unit of information
strictly used for memory management, invisible to the user’s program, and of fixed size.

Learning Objectives
After completing this chapter, you should be able to describe:
• The basic functionality of the memory allocation methods covered in this unit: paged, demand
paging, segmented, and segmented/demand paged memory allocation
• The influence that these page allocation methods have had on virtual memory
• The difference between a first-in first-out page replacement policy, a least-recently used page
replacement policy, and a clock page replacement policy
• The mechanics of paging and how a memory allocation scheme determines which pages
should be swapped out of memory
• The impact that virtual memory had on multiprogramming

3.1 VIRTUAL MEMORY
This is a concept that gives the users of a system the impression that their programs are
completely loaded in the main memory during the entire processing time; it appears as if the
memory in the system is not limited. With virtual memory (VM), it is possible to move
pages/segments between the main memory and the secondary storage. Before VM, programmers
had to limit the size of their programs to the size of available memory.

Advantages
 Job size not limited or restricted to the size of main memory
 Memory is used more efficiently. Jobs are stored in memory when they are needed
 It allows an unlimited amount of multiprogramming
 Eliminates external fragmentation and minimize internal fragmentation
 It allows the sharing of code and data
 It facilitates dynamic linking of program segments

Disadvantages
 Increased hardware costs
 Increased overhead for handling paging interrupts
 Increased software complexity to prevent thrashing
The various VM schemes are discussed next.
3.2 PAGED MEMORY ALLOCATION
It is based on the concept of dividing each incoming job’s address space into pages of equal size.
The physical memory is likewise divided into pieces of the same size. Sections of a disk are
usually called sectors or blocks, while the sections of main memory are called page frames. Any
page can be placed in any page frame. This technique works well if the size of a block = size of
a page = size of a page frame. Figure 3.1 shows what this allocation scheme looks like. Before
executing a program/job, the operating system does the following:
1. Determines the number of pages in the program/job
2. Locates enough empty page frames in the main memory and by using a suitable hardware
facility, loads any page into any page frame.
3. The pages remain logically contiguous, but the frames are not contiguous.
To perform this mapping from address space to physical memory, there must be a separate register
for each page. The registers are called page maps or page map tables (PMT). The memory manager
uses 2 more tables to keep track of the jobs and the pages where they are located. These tables are
job tables and memory map tables. Job 1, as shown in Figure 3.1, is 350 bytes in size. The
system, as shown, has divided the main memory into frames of 100 bytes each. As can be seen
on the left, Job 1 was also divided into pages of 100 bytes each. Note that the last page of this
job is only 50 bytes. The arrows point to where each of the pages is loaded.

Figure 3.1: Page Memory Allocation Sample Diagram

The Page Map Table
As noted earlier, there is a Page Map Table (PMT) for each active job in the system. The PMT
contains the vital information for each page of the program: the page number and its
corresponding page frame memory address, so there is only one entry per page. Note that all the
pages of a job must be loaded into assigned frames and appropriate entries made in the PMT and
MMT. The PMT assists the operating system in the translation of address spaces into physical
memory locations (i.e. address resolution). To locate a particular line in a program, the
displacement or offset of that line is used. The offset is how far away a line is from the beginning
of its page. The PMT for Job 1 above is shown below:

Page    Page Frame
0       8
1       10
2       5
3       11

Note that there is internal fragmentation in page frame 11, because the section of the job loaded
into it is just 50 bytes in size.
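Address resolution with this PMT can be sketched as follows, using the 100-byte page size of Figure 3.1 (the function name is an assumption for illustration):

```python
PAGE_SIZE = 100                      # bytes per page/frame, as in Figure 3.1
pmt = {0: 8, 1: 10, 2: 5, 3: 11}     # Job 1's Page Map Table: page -> frame

def translate(job_addr):
    """Resolve a location in the job's address space to a physical address."""
    page = job_addr // PAGE_SIZE     # which page the line falls on
    offset = job_addr % PAGE_SIZE    # displacement from the start of the page
    frame = pmt[page]                # look up the page frame in the PMT
    return frame * PAGE_SIZE + offset

# Byte 214 of Job 1 lies on page 2 at offset 14; page 2 is in frame 5.
print(translate(214))  # 514
```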

Job Table
 The JT contains two entries for each active job: the size of the job and the location of its
PMT.
 The JT is a dynamic list that grows as jobs are loaded into the system and shrinks as they
are completed.

Memory Map Table
The MMT has an entry for each page frame, listing its location and free/busy status. It is also
called the memory block table. There is only one in the system.

Sample Memory Map Table for Figure 3.1
Page Frame No Status
0 Busy
1 Busy
2 Free
3 Free
4 Free
5 Busy
6 Free
7 Free
8 Busy
9 Free
10 Busy
11 Busy
12 Free

Advantages of Page Memory Allocation
 It allows jobs to be allocated non-contiguous memory locations.
 There is better utilization of memory, as more jobs can fit into the main memory at a time.
 There is a higher degree of multiprogramming due to the ability to load more jobs into the
main memory.
Disadvantages of Page Memory Allocation
 There is still internal fragmentation, although only on the last page of a job.
 The overhead (i.e. processor time) is increased to maintain and update the JT, PMT and
MMT, which are stored in memory.
 All the pages of a job are loaded into the memory before execution, so memory may
contain information that is seldom used.
 A job’s address space is limited to the size of physical memory.

3.3 DEMAND PAGED MEMORY MANAGEMENT (DPMM)
Demand Paged Memory Management is an improvement on paged memory allocation. DPMM
loads only part of a job into memory for execution, on the assumption that execution is largely
sequential, so instructions at the end of the program may not be needed at the beginning. To keep
track of, and decide, which pages to load into memory, the PMT is extended with a status
column. With DPMM, jobs are still divided
into equally sized pages that initially reside in the secondary memory. When a job begins to run,
its pages are brought into the memory only as needed. For this to be successful, the system uses
a high-speed direct-access storage device that can work directly with the CPU, because pages must
be passed quickly from storage to main memory and back.
3.3.1 SWAPPING
Swapping is the movement of pages from secondary storage into main memory and back to
secondary storage. Some predefined policies or algorithms are employed during swapping. The
operating system uses the JT, PMT and MMT to implement these algorithms. The tables are the
same as in paged memory allocation, except that the PMT is extended with three additional fields:
 A field to determine if the page is already in the memory
 A field to determine if the program contents have been modified
 A field to determine if the page has been referenced recently.

Sample PMT IN DPMM
Page no    Status    Modified?    Referenced?    Page frame no.
0          Y
1          Y
2          Y
Y in the Status field means that the page is already in main memory. The Modified field tells the
system whether the page should be written back to secondary storage: if modified, the system
will write it back; otherwise, the original copy is already in secondary storage. The Referenced
field tells the operating system how often a page is referenced; this is used to determine which
pages should remain in main memory and which should be swapped out.

If a job requests that one of its pages be brought into the main memory and there is no empty page
frame available then the system must
1. Check that the demanded page is not already in main memory
2. If not, decide which page should be swapped out of the main memory
3. This includes writing the content of the swapped page back to the secondary storage if it
was modified
4. Since the demanded page is in the secondary storage, the operating system calls the page
interrupt handler to resolve the problem.
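The steps above can be sketched as a minimal demand-paging handler (FIFO is assumed here as the predefined replacement policy, and the two-frame memory and page names are hypothetical):

```python
from collections import deque

frames = {}                  # pages currently in memory: page -> frame number
free_frames = deque([0, 1])  # hypothetical: only two page frames exist
fifo = deque()               # arrival order, used as the replacement policy
modified = set()             # pages whose contents have been changed
writes_back = []             # pages written back to secondary storage

def demand_page(page):
    """Bring `page` into memory on demand, swapping a page out if needed."""
    if page in frames:               # step 1: demanded page already in memory?
        return "hit"
    if not free_frames:              # step 2: choose a page to swap out (FIFO)
        victim = fifo.popleft()
        if victim in modified:       # step 3: write back only if modified
            writes_back.append(victim)
        free_frames.append(frames.pop(victim))
    frames[page] = free_frames.popleft()   # step 4: load the demanded page
    fifo.append(page)
    return "page fault"

results = [demand_page(p) for p in ["A", "B", "A", "C"]]
print(results)  # ['page fault', 'page fault', 'hit', 'page fault']
```

Loading page C forces page A (the oldest resident) out; since A was never modified, no write-back is needed.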

Figure 3.2: Sample DPMM

Demand paging requires that the Page Map Table for each job keep track of each page as it is
loaded or removed from main memory. Each PMT tracks the status of the pages of all active jobs
(as shown above), whether it has been modified, whether it has been recently referenced, and the
page frame number for each page currently in main memory. (Note: for this illustration, the Page
Map Tables have been simplified.)

3.3.2 Page Interrupt Handler
This determines whether there is an empty page frame in memory, so that the requested page can
be loaded immediately. If all page frames are busy, as shown above, the page interrupt handler
must decide which page will be swapped out, using the predefined policy; then the swap is made.
All the job tables, such as the PMT and the MMT, are then updated. If a page is removed from
memory but called back shortly after, we say that there is thrashing, caused by an excessive
amount of page swapping. Thrashing can also occur across jobs. A Page Fault is the failure to
find a needed page in main memory.
Advantages
 It has better memory utilization.
 The job size is not limited by memory size.
 The entire job does not need to be loaded before execution.
 It allows a higher degree of multiprogramming.
(A Page Fetch or Page Failure occurs whenever there is a page reference for which the needed
page is not in memory; a Page Success is when a required page is already in memory.)

Disadvantages
 It is susceptible to thrashing.
 More table information has to be maintained.
 Space is occupied by the tables.
 There is increased overhead due to the tables.

3.3.3 EFFICIENCY OF DPMM
The efficient operation of a VM system depends on the degree of locality of reference.
There are two classes:
 Temporal locality: once a location (data or instruction) is referenced in a program, it is
often referenced again very soon.
 Spatial locality: the probability that once a location is referenced, a nearby location will
be referenced soon.
Working Sets: the working set, used in DPMM, is an innovation that improved the performance
of DPMM. A job’s working set is the set of pages residing in memory that can be accessed
without incurring a page fault.
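A working set over a sliding window of recent references can be sketched as follows (the window size delta and the reference string are hypothetical):

```python
def working_set(reference_string, t, delta):
    """Pages referenced in the last `delta` references up to and including time t."""
    window = reference_string[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 1, 2, 4, 4, 4, 4]     # hypothetical page reference string
print(working_set(refs, t=4, delta=3))    # {1, 3} -- a tight loop over two pages
print(working_set(refs, t=9, delta=3))    # {4}    -- locality has since shifted
```

Keeping each job's working set resident is what prevents the excessive swapping (thrashing) described above.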

3.3.4 PAGE REPLACEMENT POLICIES/STRATEGIES


This is the policy that selects which page is to be removed from memory to create space for a demanded page, especially when all the page frames are occupied. The 2 best-known policies are FIFO and Least Recently Used (LRU).
First-In-First-Out policy (FIFO)
The First-In-First-Out policy removes the page that has been in memory the longest. Sometimes a page that has just been swapped out under FIFO is wanted again shortly afterwards; the page request order cannot be changed by the system, although the size of memory can be increased to accommodate more pages. However, there is no guarantee that buying more memory will always result in better performance; with FIFO, adding page frames can even increase the number of page faults. This is known as the FIFO anomaly or Belady’s anomaly.
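Belady's anomaly can be demonstrated with a short simulation. The sketch below is illustrative only; the reference string is the classic textbook example of the anomaly, and the function name is invented:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with `frames` page frames."""
    memory = deque()                  # page resident longest sits at the left
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()      # evict the page that arrived first
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # classic Belady reference string
# fifo_faults(refs, 3) gives 9 faults; fifo_faults(refs, 4) gives 10 faults:
# adding a page frame made FIFO perform worse.
```

Running this confirms the anomaly: more memory produced more page interrupts for this request order.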

Least Recently used policy (LRU)


The Least Recently Used policy selects for removal the page that has not been referenced for the longest time. The LRU page analysis puts the most recently used page at the top of the memory list, so it is easy to tell which page has been referenced most recently; the page at the bottom is then chosen for removal. Note that LRU belongs to the class of removal techniques called stack algorithms, because such algorithms function in such a way that increasing the memory size can never cause the number of page interrupts to increase.
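A minimal LRU simulation, sketched here with Python's `OrderedDict` standing in for the memory list (the reference string is the same illustrative one often used for FIFO), shows the stack-algorithm property:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU; most recently used page kept at the end."""
    memory = OrderedDict()            # least recently used page at the front
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)          # re-referenced: now most recent
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)    # evict the least recently used
            memory[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# lru_faults(refs, 3) gives 10 faults and lru_faults(refs, 4) gives 8: unlike
# FIFO, growing memory never increases the fault count under LRU.
```

This is exactly the guarantee the stack-algorithm class provides.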

Other replacement strategies include:

 Principle of Optimality: replaces the page that will not be used again until the furthest time in the future
 Random: pages are selected at random
 Least Frequently Used: the least frequently used page is selected
 NUR (not-used-recently): an inexpensive and efficient approximation of LRU that replaces a page not used recently
 Working Set: replaces a page if it is not in the favoured subset of a process’s pages
 Page Fault Frequency: adjusts the size of a process’s resident set of pages in response to changes in the process’s page fault rate

3.4 SEGMENTED MEMORY MANAGEMENT


Segmentation is based on the practice that programs are structured in modules (logical groupings of code), e.g. a subroutine. With segmented memory management (SMM), each job/program is divided into segments of different sizes; each segment contains a module, that is, a group of information that performs related functions. A segment can be defined as a logical group of information, such as a subroutine. The memory is no longer divided into page frames, since the sizes of the segments vary, so memory is allocated per segment in a dynamic way. When a program is compiled, the segments are set up according to the program’s structural modules. The segments are numbered and a segment map table is generated for each job. This means that when a segment is to be loaded into memory, the exact size needed by that segment is allocated to it. Note that since memory is allocated dynamically, the problem of external fragmentation may occur in this scheme.

To keep track of status of memory locations, the operating system makes use of the following
tables combining aspects of dynamic partition and DPMM.
 Job Table: This table lists every job in the system. There is only 1 per system.
 Segment Map Table (SMT): This table contains details of each segment; every job has an SMT. The SMT keeps track of whether a segment is in memory or not. The table contains segment numbers, their lengths, access rights, status, and (when each is loaded into memory) its location in memory.
 Memory Map Table: This table monitors the allocation of memory; there is 1 per system.

With SMM, the entire job is likewise not loaded into memory before execution; rather, the segments of the job are loaded when needed. Copies of all segments are kept in secondary storage. When a segment not in memory is referenced, a Segment Exception Interrupt occurs. The operating system must find space in memory for the needed segment, either by compacting memory or by removing a segment from memory. A segment already loaded in memory may trigger an “out of bound” interrupt if the programmer tries to store a new entry beyond the end of the segment. When this occurs, the operating system gains control and increases the length/size of the segment to accommodate the new entry.

Figure 3.3: Example of Segmentation

Dynamic Linking and Loading: In a DLL environment, only the segment with the main program is loaded initially. If it makes reference to any other subroutine, then the segment with that subroutine is loaded. This is known as deferred linking.

Shared Segments: If 2 or more jobs are using the same procedure, then only one copy of the segment with that procedure is loaded into memory. When a job needs that procedure, e.g. square root, the operating system checks the AST to see if the segment is already in memory. If so, it adds the new job to the list of jobs using that procedure; it does not load the segment afresh for each job that requires it. The AST (Active Segment Table) is a system-wide table which is used to coordinate segment sharing. See Figure 3.4.

Figure 3.4: Sample of Shared segments

Advantages of SMM
 It eliminates internal fragmentation
 It provides virtual memory
 This allows dynamic segment growth
 It assists dynamic linking
 It also facilitates shared segments
 It enforces controlled access

Disadvantages of SMM
 It increases hardware cost and processor overhead for address mapping
 Additional overhead is incurred in order to support dynamic segment growth and eliminate fragmentation
 It is difficult to manage variable size segments in secondary storage
 The maximum size of a segment is limited by the size of main memory
 Techniques need to be developed to prevent segment thrashing

3.5 SEGMENTED AND DEMAND-PAGED MEMORY MANAGEMENT


This is a combination of DPMM and SMM. In this memory management scheme, each segment is subdivided into pages of equal size. It offers the logical benefits of SMM and the physical benefits of DPMM. By physically manipulating the pages instead of the entire segment, the problems of compaction, secondary storage handling, and the limitation on segment size are removed. To access a location in memory, the system must compose the address from: the segment number, the page number within the segment, and the displacement within that page. Because this is time consuming, an associative register is used to store information about recently referenced pages.
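The three-part address lookup can be sketched as follows. All table contents, the page size, and the function name are invented for illustration; a real system performs these lookups in hardware, aided by the associative registers mentioned above:

```python
# Hypothetical sketch: each segment owns a page table mapping page number ->
# page frame number; page frames are a fixed PAGE_SIZE apart in main memory.
PAGE_SIZE = 100

# segment_tables[segment_no][page_no] = frame_no (invented numbers)
segment_tables = {
    0: {0: 5, 1: 2},
    1: {0: 7},
}

def translate(segment_no, page_no, displacement):
    """Map a (segment, page, displacement) address to a physical address."""
    if displacement >= PAGE_SIZE:
        raise ValueError("displacement outside page")
    frame = segment_tables[segment_no][page_no]   # two table lookups
    return frame * PAGE_SIZE + displacement       # physical address

# Segment 0, page 1, displacement 34 resides in frame 2 -> address 234.
```

The two dictionary lookups per reference are exactly the time cost the associative register is meant to hide for recently referenced pages.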

Advantages
 It has all the advantages of DPMM and SMM
Disadvantages
 It has increased overhead on processor, hardware cost and complexity, space occupied by
the tables and the danger of thrashing.

Review Questions

1. Differentiate between Segmented memory management and Paged memory management?


2. Define the terms
a. Page
b. Segment
c. Page fault
d. Internal Fragmentation
e. External Fragmentation
f. Swapping
g. Thrashing
3. Give a vivid example of how shared segments can be used in any software package that you know and use.
4. How can the size of the page frame affect Paged Memory Management?

UNIT 4 Concurrent Processing
Introduction
In the previous lectures, we learned that multiple jobs can be loaded into the memory for the
processor to execute. As technology improved, systems were built with more than one processor.
This means that more than one process will be in the running state at a time. Imagine how you
can be playing music on your phone and at the same time checking your WhatsApp messages!
This unit will let us understand how this works.

Learning Objectives
After completing this chapter, you should be able to describe:
• The critical difference between processes and processors, and their connection
• The differences among common configurations of multiprocessing systems
• The need for process cooperation when several processes work together
• How several processors cooperate

4.1 Concurrent Processing


Multiprogramming systems are systems that use only one CPU, that is, one processor, which is shared by several jobs or processes. Multiprocessing systems are single computers with multiple cores, as well as linked computing systems with only one processor each that share processing among them.

Parallel Processing
Parallel processing, one form of multiprocessing, is a situation in which two or more processors operate in unison: two or more CPUs are executing instructions simultaneously. The Processor Manager has to coordinate the activity of each processor as well as synchronize cooperative interaction among the CPUs.

The primary benefits to parallel processing systems are:


1. Increased reliability
The availability of more than one CPU; If one processor fails, then the others can continue
to operate and absorb the load. This is not simple to implement. The system must be
carefully designed so that:
 The failing processor can inform the other processors to take over
 The Operating System must restructure its resource allocation strategies so the remaining
processors don’t become overloaded.
2. Faster processing

The increased processing speed is often achieved because instructions can sometimes be processed in parallel, two or more at a time, in one of several ways: some systems allocate a CPU to each program or job; others allocate a CPU to each working set or parts of it; still others subdivide individual instructions so that each subdivision can be processed simultaneously. This is referred to as concurrent programming.

Two major challenges remain:


1. How to connect the processors into configurations;

2. How to orchestrate processor interaction, which applies to multiple interacting processes as
well. Note that increased flexibility brings increased complexity

The complexities of the processor manager’s task when dealing with multiple processors can be illustrated by the fast-food lunch stop example below, where a customer has come to buy shawarma:

Originator | Action | Receiver
Processor 1 (the order clerk) | Accepts the query, checks for errors, and passes the request on to | Processor 2 (the bagger)
Processor 2 (the bagger) | Searches the database for the required information (the shawarma) |
Processor 3 (the cook) | Retrieves the data from the database (the meat to cook for the shawarma) if it’s kept off-line in secondary storage |
Processor 3 (the cook) | Once the data is gathered (the shawarma is cooked), it’s placed where the receiver can get it (in the shawarma bin) | Processor 2 (the bagger)
Processor 2 (the bagger) | Passes it on to | Processor 4 (the cashier)
Processor 4 (the cashier) | Routes the response (your order) back to the originator of the request | You

From the above you can see that synchronization is the key to the system’s success because many
things can go wrong in a multiprocessing system. The system can’t work properly unless every
processor communicates and cooperates with every other processor.

Multiprocessing can take place at several different levels, each of which requires a different frequency of synchronization. Multiprocessing at the job level is fairly benign: it’s as if each job is running on its own workstation with shared system resources. When multiprocessing takes place at the thread level, a high degree of synchronization is required to:
 Disassemble the process;
 Perform the thread’s instructions;
 Correctly reassemble the process.

Parallelism level | Process assignment | Synchronization required
Job level | Each job has its own processor and all processes and threads are run by that same processor | No explicit synchronization required
Process level | Unrelated processes, regardless of job, are assigned to any available processor | Moderate amount of synchronization required to track processes
Thread level | Threads are assigned to available processors | High degree of synchronization required, often requiring explicit instructions from the programmer

4.2 Multiprocessing Configurations

4.2.1 Master/Slave Configuration


This is an asymmetric multiprocessing system: a single-processor system with additional slave processors, each of which is managed by the primary master processor. The master processor:
• Manages the entire system: all files, devices, memory and processors
• Maintains the status of all processes in the system;
• Performs storage management activities;
• Schedules the work for the other processors;
• Executes all control programs

The configuration is well suited for computing environments in which processing time is divided
between front-end and back-end processors. The front-end processor takes care of the interactive
users and quick jobs. The back-end processor takes care of those with long jobs using the batch
mode.
The primary advantage of this configuration is its simplicity.
It also has three serious disadvantages
1. Its reliability is no higher than for a single processor system because if the master processor
fails, the entire system fails.

2. It can lead to poor use of resources because if a slave processor should become free while
the master processor is busy, the slave must wait until the master processor becomes free
and can assign more work to it.
3. It increases the number of interrupts because all slave processors must interrupt the master
processor every time they need OS intervention such as for I/O requests. This creates long
queues at the master processor level when there are many processors and many interrupts.

4.2.2 Loosely Coupled Configuration


Loosely Coupled Configuration features several complete computer systems, each with its own
memory, I/O devices, CPU, and O/S. Each processor controls its own resources: files, access to
memory, I/O devices. Each processor maintains its own commands and I/O management tables.
The only difference between a loosely coupled multiprocessing system and a collection of independent single-processing systems is that each processor can communicate and cooperate with the other processors.

[Diagram: three complete computer systems, each with its own main memory, processor, and I/O devices.]

When a job arrives for the first time, it’s assigned to one processor. Once allocated, the job remains
with the same processor until it’s finished. Each processor must have global tables that indicate to
which processor each job has been allocated. To keep the system well balanced and to ensure the
best use of resources, job scheduling is based on several requirements and policies. New jobs might
be assigned to the processor with lightest load or the best combination of output available.
• This system isn’t prone to catastrophic system failures.
• Even when a single processor fails, the others can continue to work independently.
• However, it can be difficult to detect when a processor has failed.

4.2.3 Symmetric Configuration


The symmetric configuration is also known as the tightly coupled configuration. In this arrangement, we have several processors in a system; these processors execute jobs on the same main memory and also have access to the same set of I/O devices.
It has four advantages over loosely coupled configurations which are:
1. It is more reliable;

2. It uses resources effectively;
3. It can balance loads well;
4. It can degrade gracefully in the event of a failure.

In this configuration, processor scheduling is decentralized. A single copy of the Operating System
and a global table listing each process and its status is stored in a common area of memory so every
processor has access to it. Each processor uses the same scheduling algorithm to select which
process it will run next. Whenever a process is interrupted, whether because of an I/O request or
another type of interrupt, its processor updates the corresponding entry in the process list and finds
another process to run. This means that the processors are kept busy. It also means that any given
job or task may be executed by several different processors during its run time. This configuration is what is currently used by most digital devices. Think about it: your system has just one hard disk, 16GB of RAM, possibly multiple USB ports, and perhaps a Core i3 processor. This means that all the cores on your system use the same set of I/O devices.

Since each processor has access to all I/O devices and can reference any storage unit, there are
more conflicts as several processors try to access the same resource at the same time. This presents
the obvious need for algorithms to resolve conflicts between processors, thus there is need for
process synchronization. Symmetric Configuration is the most difficult configuration to
implement because the processes must be well synchronized to avoid the problems of races and
deadlocks. Process Synchronization will be studied in the next lecture.

Review Questions
1. Differentiate between multiprogramming and multiprocessing systems.
2. Assuming you have just been hired to assist HP design a multi-user interactive system applying the
concept of multi-processing. Which configuration would you implement and why would you choose
that configuration?

UNIT 5 PROCESS SYNCHRONIZATION
Introduction
As technology improved, systems were built with more than one processor, from Dual Core and Core i3 through Core i5 to Core i9! This means that more than one process will be in the running state at a time. We learned the different configurations in the previous lecture. In particular, we studied that the symmetric configuration is peculiar, and thus the activities of the processors must be synchronized for the system to be efficient. This unit will let us understand how to achieve process synchronization.

Learning Objectives
After completing this unit, you should be able to describe:
• The significance of a critical region in process synchronization
• The basic concepts of process synchronization: test-and-set, WAIT and SIGNAL, and
semaphores
• The need for process cooperation when several processes work together
• How several processors, executing a single job, cooperate

5.1 Introduction
The success of process synchronization hinges on the capability of the O/S to make a resource unavailable to other processes while it is being used by one of them. The resource being used must be locked away from other processes until it is released; only when the resource is released is a waiting process allowed to use it. Have you ever tried to open a file and got the message “this file is in use by another program”? That is the O/S locking the file so that it will only be available when not in use. If you have ever linked a Microsoft Excel file to a Word document using mail merge, then any time that Word document is running/open, the O/S will lock the Excel file from being edited. That is a simple practical example of process synchronization. A mistake in synchronization could leave a job waiting indefinitely (starvation) or, if it’s a key resource, cause a deadlock.

5.2 How do several processors cooperate


We have already learned that most systems these days have multiple cores, including your phone (you can check the specs). So when you, the user, ask your device to perform some function, how do the processors cooperate? Recall that processes can be in several states (hold, ready, running, blocked…). Recall also that when a process is accepted for execution and has been moved into memory, it is usually placed on the READY queue. So, to run the next job, each processor must:

1. Consult the list of jobs to see which one should be run next;
2. Retrieve the job for execution;
3. Increment the READY list to the next job;
4. Execute it.

Imagine a scenario where the next job to be executed is Job 74. Processor 1 and Processor 2 are
free and both go to the READY list to select a job. Processor 1 sees that Job 74 is the next job and
goes to retrieve it. A moment later processor 2 also selects job 74 from the READY list. Shortly
thereafter, processor 1, having retrieved job 74 returns to the READY list and increments it,

moving job 75 to the top. A moment later processor 2 returns; it has also retrieved job 74 and is
ready to process it, so it increments the READY list and now job 76 is moved to the top and
becomes the next job in line to be processed. Job 75 has become the missed waiting customer and
will never be processed. Job 74 is being processed twice. This problem can also occur in memory
and page allocation tables, I/O tables, application databases, and any shared resource. This
situation calls for synchronization.
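The missed-waiting-customer scenario above can be replayed deterministically in a few lines. This is a simulation only: the job numbers come from the example, while the variable names and the forced interleaving are invented to show why the consult-and-increment steps must be indivisible:

```python
# Deterministic simulation of the race described above: both processors read
# the READY list head before either increments it.
ready_list = [74, 75, 76, 77]
head = 0

# Step 1: both processors consult the list before either updates it.
p1_job = ready_list[head]        # processor 1 sees job 74
p2_job = ready_list[head]        # processor 2 sees job 74 too (same read)

# Step 3 for each: increment the READY list after retrieving the job.
head += 1                        # processor 1 moves 75 to the top...
head += 1                        # ...then processor 2 moves 76 to the top

# Result: job 74 is processed twice, and job 75 is skipped entirely;
# the next job the processors will see is 76.
```

If consult-and-increment were performed as one locked, indivisible operation per processor, each would retrieve a distinct job and no entry would be skipped.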

5.3 Synchronization Mechanisms


Several synchronization mechanisms are available to provide cooperation and communication among processes. This is applicable both to multiprocessors and to two or more processes in a single-processor (time-shared) processing system. The common element in all synchronization schemes is to allow a process to finish work on a critical part of the program before other processes have access to it. When a process has access to a shared, modifiable resource, we say that the process is in a CRITICAL REGION. It is a critical section and its execution must be handled as a unit. Processes within a critical region can’t be interleaved without threatening the integrity of the operation. The concept of a critical region is necessary because it ensures that parallel processes will modify shared data only while in the critical region.

Synchronization is sometimes implemented as a lock-and-key arrangement such as the following:


1. Before a process can work on a critical region, it must get the key;
2. Once it has the key, all other processes are locked out until it finishes;
3. It then unlocks the entry to the critical region;
4. It returns the key so that another process can get the key and begin work.
This sequence consists of two actions which are:
1. The process must first see if the key is available;
2. If it is available, the process must pick it up and put it in the lock to make it unavailable to
other processors.
For this scheme to work, both actions must be performed in a single machine cycle, otherwise, it
is conceivable that while the first process is ready to pick up the key, another one would find the
key available and prepare to pick up the key and each could block the other from proceeding any
further.

The aim of these mechanisms is to enforce mutual exclusion on the resource being used: some form of “locking” which ensures that only one process has access to that shared resource at a time. The synchronization mechanisms can be handled via hardware methods, known as test-and-set and WAIT and SIGNAL. There is also a software method called semaphores.

5.3.1 Test-and-Set
Test-and-set is a single, indivisible machine instruction, known simply as TS, introduced by IBM for its multiprocessing System 360/370 computers. In a single machine cycle, a process tests to see if the key is available; if it is, it is set to unavailable. The actual key is a single bit in a storage location that can contain a 0 (free) or a 1 (busy). We can consider TS to be a function subprogram that has one parameter (the storage location) and returns one value (the condition code: busy/free), with the exception that it takes only one machine cycle.
The mechanism works as below:
• Process 1 would test the condition code using the TS instruction before entering a critical
region.
• If no other process was in this critical region, then process 1 would be allowed to proceed
and the condition code would be changed from 0 to 1.
• Later, when process 1 exits the critical region, the condition code is reset to 0 so another
process can enter
• If process 1 finds a busy condition code, then it’s placed in a waiting loop where it
continues to test the condition code and waits until it’s free
Advantages
1. It’s a simple procedure to implement.
2. It works well for a small number of processes.
Disadvantages
1. First, when many processes are waiting to enter a critical region, starvation could occur because the processes gain access in an arbitrary fashion. Unless a first-come, first-served policy is set up, some processes could be favoured over others, leading to starvation.
2. A second drawback is that the waiting processes remain in unproductive, resource-consuming wait loops (known as busy waiting). The wait loop not only consumes valuable processor time but also relies on the competing processes to test the key.

5.3.2 Wait and Signal


This is a modification of test-and-set that is designed to remove the busy waiting condition. The
two new operations, which are mutually exclusive and become part of the process scheduler’s
operations are:
WAIT
This is activated when a process encounters a busy condition. WAIT sets the process’s process
control block (PCB) to the blocked state and links it to the queue of the processes waiting to enter
this particular critical region. The process scheduler then selects another process for execution.
SIGNAL
This is activated when a process exits the critical region and the condition code is set to “free”. It
checks the queue of processes waiting to enter this critical region and selects one, setting it to the
READY state. Eventually the Process Scheduler will choose this process for running.
The operations WAIT and SIGNAL free the processes from the busy waiting dilemma and return control to the O/S, which can run other jobs while the waiting processes are idle.

5.3.3 Semaphores
A semaphore is a non-negative integer variable that’s used as a binary signal (a flag). It signals if and when a resource is free and can be used by a process. Dijkstra (1965) introduced two operations to solve the process synchronization problem, which are:
- P (from proberen, meaning “to test”)
- V (from verhogen, meaning “to increment”)
This is how a Semaphore works: Let s be a semaphore variable.
• The V operation on s is simply to increment s by 1
V(s): s: = s+1
This necessitates a fetch, increment, and store sequence. The increment operation must be
performed as a single indivisible action to avoid deadlocks. s cannot be accessed by any other
process during operation.

• The operation P on s is to test the value of s and, if it is not 0, to decrement it by 1.

P(s): if s > 0, then s := s - 1
This involves a test, fetch, decrement and store sequence, which must be performed as an indivisible action in a single machine cycle, or be arranged so that the process cannot take action until the operation (test or increment) is finished.

The operations to test or increment are executed by the Operating System in response to calls
issued to any one process naming a semaphore as parameter. If s=0, it means that the critical
region is busy and the process calling on the test operation must wait until the operation can be
executed (s>0). Test and increment operations on semaphore s enforces the concept of mutual
exclusion, which is necessary to avoid having two operations attempt to execute at the same time.
The name traditionally given to this semaphore is mutex (MUTual EXclusion):
P(mutex): if mutex > 0 then mutex := mutex – 1
V(mutex): mutex: =mutex + 1
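The P and V operations map directly onto Python's `threading.Semaphore`: `acquire()` is P (wait while the count is 0, then decrement) and `release()` is V (increment). The sketch below uses a semaphore initialized to 1 as a mutex; the shared counter and thread counts are invented for illustration:

```python
import threading

mutex = threading.Semaphore(1)   # the traditional name for this semaphore
counter = 0                      # a shared, modifiable resource

def worker(times):
    global counter
    for _ in range(times):
        mutex.acquire()          # P(mutex): enter the critical region
        counter += 1             # only one thread modifies counter at a time
        mutex.release()          # V(mutex): leave the critical region

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With mutual exclusion enforced, no increments are lost: counter is 40_000.
```

Without the mutex, two threads could both read the same value of `counter` before either writes it back, losing updates, which is the same race as the READY-list example earlier.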

5.4 Process Cooperation


There are occasions when several processes work directly together to complete a common task.
Two famous examples are the problems of producers and consumers, and of readers and writers.
Each case requires mutual exclusion and synchronization. Each is implemented by using
semaphores.

5.4.1. Producers and Consumers


One process produces some data that another process consumes later. Example: the CPU can generate output data much faster than a printer can print it. Since this involves a producer and a consumer of two different speeds, we need a buffer where the producer can temporarily store data that can be retrieved by the consumer at a more appropriate speed. Since the buffer can hold only a finite amount of data, the synchronization process must delay the producer from generating more data when the buffer is full. It must also be prepared to delay the consumer from retrieving data when the buffer is empty. This task can be implemented by two counting semaphores:
 one to indicate the number of full positions in the buffer;
 The other to indicate the number of empty positions in the buffer
 A third semaphore, mutex, will ensure mutual exclusion between processes.

The concept of producers and consumers can be extended to buffers that hold records or other data,
as well as to other situations in which direct process-to-process communication of messages is
required.

5.4.2 Readers and Writers
Readers are processes that only read data from storage locations, while writers are processes that write data to a storage location. Consider a case where the two types of process need to access a shared resource, for example in an airline reservation system. The readers are those who want flight information; they only read the existing data, they don’t modify it. Because no one is changing the database, the system can allow many readers to be active at the same time; there’s no need to enforce mutual exclusion among them. The writers are those who are making reservations on a particular flight. Writers must be carefully accommodated because they are modifying existing data in the database. The system can’t allow someone to be writing while someone else is reading (or writing). It must enforce mutual exclusion if there are groups of readers and a writer, or if there are several writers in the system.

This can be implemented using two semaphores to ensure mutual exclusion between readers and
writers
- A resource can be given to all readers, provided that no writers are processing
- A resource given to a writer, provided that no readers are reading and no writers writing

Readers must always call two procedures:


- The first checks whether the resources can be immediately granted for reading
- When the resource is released, the second checks to see if there are any writers waiting

Writers must always call two procedures:


- The first determines if the resource can be immediately granted for writing
- When the resource is released, the second checks to see if there are any readers waiting
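One common way to implement these reader and writer procedures (a sketch of the classic "first readers-writers" pattern; the airline database contents and thread counts are invented) uses a reader count protected by one semaphore, plus a resource semaphore held by writers and by the first reader:

```python
import threading

resource = threading.Semaphore(1)    # held by writers and by the first reader
count_lock = threading.Semaphore(1)  # protects read_count itself
read_count = 0
database = {"seats": 100}            # invented flight data
snapshots = []                       # what the readers saw

def start_read():                    # first procedure called by a reader
    global read_count
    count_lock.acquire()
    read_count += 1
    if read_count == 1:              # first reader locks writers out
        resource.acquire()
    count_lock.release()

def end_read():                      # second procedure called by a reader
    global read_count
    count_lock.acquire()
    read_count -= 1
    if read_count == 0:              # last reader lets writers back in
        resource.release()
    count_lock.release()

def reader():
    start_read()
    snapshots.append(database["seats"])   # read-only access, may overlap
    end_read()

def writer():
    resource.acquire()               # writers require exclusive access
    database["seats"] -= 1           # make one reservation
    resource.release()

threads = [threading.Thread(target=writer) for _ in range(10)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Many readers can hold the resource at once (only the first one acquires `resource`), while every writer needs it exclusively, matching the rules stated above.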

Questions
1. When is a process said to be in a critical region?
2. What mechanisms are used to enforce mutual exclusion? Describe these mechanisms.
3. With suitable examples, describe the problem of Readers and Writers.

UNIT 6 Process Management (Deadlock)
Introduction

Learning Objectives

6.1 Deadlock
A lack of process synchronization causes deadlock or starvation.
Deadlock (“deadly embrace”) is a system-wide tangle of resource requests that begins when 2 or more jobs are put on hold.

Here, each job is waiting for a vital resource to become available. The needed resources are held by other jobs that are also waiting to run but can’t, because they’re waiting for other unavailable resources. The jobs come to a standstill, and the deadlock is complete if the remainder of the system comes to a standstill as well. This can be resolved only via external intervention.

Deadlock is more serious than indefinite postponement or starvation because it affects more than one job. Because resources are being tied up, the entire system (not just a few programs) is affected. It requires outside intervention (e.g., operators or users terminate a job).

6.1.1 Four Necessary Conditions for Deadlock


Deadlock is preceded by the simultaneous occurrence of four conditions that the operating system could have recognized, which are:
1. Mutual exclusion -- the act of allowing only one process to have access to a dedicated
resource.
2. Resource holding -- the act of holding a resource and not releasing it; waiting for the other
job to retreat.
3. No preemption -- the lack of temporary reallocation of resources; once a job gets a
resource it can hold on to it for as long as it needs.
4. Circular wait -- each process involved in impasse is waiting for another to voluntar ily
release the resource so that at least one will be able to continue.

6.2 Seven Cases of Deadlocks
6.2.1 Case 1 : Deadlocks on File Requests

If jobs can request and hold files for the duration of their execution, deadlock can occur. Any other programs that require F1 or F2 are put on hold as long as this situation continues. The deadlock remains until a program is withdrawn or forcibly removed and its file is released.
6.2.2 Case 2: Deadlocks in Databases
Deadlock can occur if two processes access and lock records in a database:
1. P1 accesses R1 and locks it.
2. P2 accesses R2 and locks it.
3. P1 requests R2, which is locked by P2.
4. P2 requests R1, which is locked by P1.

Locking can be done at three different levels:
 the entire database for the duration of the request;
 a subsection of the database;
 an individual record until the process is completed.
If locks are not used at all, a race condition can result.
6.2.3 Case 3: Deadlocks in Dedicated Device Allocation
Deadlock can occur when there is a limited number of dedicated devices, e.g. printers, plotters or tape drives:
1. P1 requests tape drive 1 and gets it.
2. P2 requests tape drive 2 and gets it.
3. P1 requests tape drive 2 but is blocked.
4. P2 requests tape drive 1 but is blocked.
6.2.4 Case 4: Deadlocks in Multiple Device Allocation

Deadlocks can happen when several processes request, and hold on to, dedicated devices while other processes act in a similar manner.
6.2.5 Case 5: Deadlocks in Spooling
Most systems have transformed dedicated devices such as printers into sharable devices by installing a high-speed device, a disk, between the device and the CPU. The disk accepts output from several users and acts as a temporary storage area for all output until the printer is ready to accept it (this is known as spooling). If the printer needs all of a job's output before it will begin printing, but the spooling system fills the available disk space with only partially completed output, then a deadlock can occur.
6.2.6 Case 6: Deadlocks in Disk Sharing

Disks are designed to be shared, so it is not uncommon for two processes to access different areas of the same disk. Without controls to regulate use of the disk drive, competing processes could send conflicting commands and deadlock the system.
6.2.7 Case 7: Deadlocks in a Network

A network that is congested (or has filled a large percentage of its I/O buffer space) can become deadlocked if it does not have protocols to control the flow of messages through the network.

6.3 Modeling Deadlocks Using Directed Graphs

In modeling deadlocks, processes are usually represented by circles, while resources are represented by squares. A solid line from a resource to a process means that the process is holding that resource (see Figure 6.6 (a)). A solid line from a process to a resource means that the process is waiting for that resource (see Figure 6.6 (b)). The direction of the arrows therefore indicates the flow of requests and allocations. If there is a cycle in the graph (see Figure 6.6 (c)), then there is a deadlock involving the processes and the resources in that cycle.

Figure 6.6: Directed Graph Examples

Figure 6.7: More Directed Graph Examples

From the figure above we can see that a cycle exists in the bottom-right graph (starting at any point, trace the arrows and you will see this). As long as no additional resources are added to the pool, the processes in that graph will remain in this state until some strategy is used to resolve the situation. These strategies are discussed next.
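The cycle test just described can be sketched in code. The following Python fragment is illustrative (the graph representation is an assumption, not from the text): it builds the directed graph from a list of arrows and reports whether a cycle, and hence a deadlock, exists.

```python
def has_cycle(edges):
    """Return True if the directed graph given as (from, to) pairs
    contains a cycle, i.e. the processes and resources are deadlocked."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)

    WHITE, GREY, BLACK = 0, 1, 2        # unvisited / being explored / done
    colour = {}

    def visit(node):
        colour[node] = GREY
        for nxt in graph.get(node, []):
            state = colour.get(nxt, WHITE)
            if state == GREY:           # back edge: we closed a cycle
                return True
            if state == WHITE and visit(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour.get(n, WHITE) == WHITE and visit(n) for n in list(graph))

# Case 2 above: P1 holds R1 and waits for R2; P2 holds R2 and waits for R1.
print(has_cycle([("R1", "P1"), ("P1", "R2"),
                 ("R2", "P2"), ("P2", "R1")]))   # True: deadlock
```

Removing either "waiting" arrow breaks the cycle, and the function returns False, which mirrors tracing the arrows in Figure 6.7 by hand.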

6.4 Strategies for Handling Deadlocks

1. Deadlock Prevention: prevent one of the four necessary conditions from occurring.
2. Deadlock Avoidance: avoid the deadlock if it becomes probable.
3. Deadlock Detection: detect the deadlock when it occurs.
4. Deadlock Recovery: recover from it gracefully.

6.4.1 Deadlock Prevention

To prevent deadlock, the O/S must eliminate one of the four necessary conditions. Mutual exclusion cannot be eliminated, because some resources (memory, CPU, dedicated devices) must be exclusively allocated to one user at a time, although spooling can relax this requirement for some devices. That leaves the remaining three conditions; note that eliminating any one of them is sufficient -- two need not be eliminated together.
Eliminating Resource Holding
Resource holding can be avoided by forcing each job to request, at creation time, every resource it will need to run to completion. The job cannot proceed until it has been allocated all the resources it requested; if they are not all available, it must return any resources that have already been allocated to it. In short, jobs are not allowed to hold on to any resource unless they hold every resource needed to run to completion. This method significantly decreases the degree of multiprogramming, and some peripheral devices would sit idle because, although allocated to a job, they would not be used all the time.

Eliminating No Preemption
No preemption can be bypassed by allowing the O/S to deallocate resources from jobs. This means that if a job has been assigned resources it needs, the O/S has the right to reclaim those resources when they are needed by another process. This approach works well only if the state of a job can be easily saved in its PCB and restored at a later time.
Eliminating Circular Wait
Circular wait can be bypassed if the O/S prevents the formation of a cycle. This is achieved by Havender's solution (1968), which is based on a numbering system for resources, for example printer = 1, hard disk = 2, plotter = 3. The O/S then forces each job to request its resources in ascending order: if a process needs the plotter and then the printer, it must still request the printer before the plotter. With this arrangement the O/S can tell whether those devices have been allocated or not. The drawback is that jobs must anticipate the order in which they will request resources, and this can be difficult to determine.
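Havender's ordering can be illustrated with a short sketch (the resource names and numbers are the ones from the example above; the function is a hypothetical helper, not part of any real scheduler):

```python
# Resources carry the numbers from the example above; every job must
# acquire its resources in ascending numeric order.
ORDER = {"printer": 1, "hard disk": 2, "plotter": 3}

def request_order(resources_needed):
    """The order in which a job must request its resources, whatever
    order it will actually use them in."""
    return sorted(resources_needed, key=ORDER.__getitem__)

# A job that uses the plotter first still requests the printer first:
print(request_order({"plotter", "printer"}))   # ['printer', 'plotter']
```

Because every job climbs the numbering in the same direction, no job can hold a high-numbered resource while waiting for a lower-numbered one, so a cycle of waits cannot form.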
6.4.2 Deadlock Avoidance
Another strategy for tackling deadlocks is to avoid them. The O/S can avoid deadlock if the system knows ahead of time the sequence of requests associated with each of the active processes. Dijkstra's Banker's Algorithm (1965) is used to regulate resource allocation to avoid deadlock. Under the algorithm, the computing system is always in one of the two following states:
Safe state -- there exists a safe sequence in which resources can be allocated to all active processes such that they can all run to completion.
Unsafe state -- this does not necessarily lead to deadlock, but it indicates that the system is an excellent candidate for one.

Banker’s Algorithm
The Banker's algorithm is based on a bank with a fixed amount of capital that operates on the following principles:
1. No customer will be granted a loan exceeding the bank's total capital.
2. All customers will be given a maximum credit limit when opening an account.
3. No customer will be allowed to borrow over the limit.
4. The sum of all loans will not exceed the bank's total capital.
The O/S (the bank) must be sure never to satisfy a request that moves it from a safe state to an unsafe one.

A Bank’s Safe and Unsafe States
Safe
Customer Loan amount Maximum credit Remaining credit
C1 0 4,000 4,000
C2 2,000 5,000 3,000
C3 4,000 8,000 4,000
Total loaned: $6,000
Total capital fund: $10,000

From the table above, you can see that the bank has loaned out a total of $6,000 of its $10,000, leaving a balance of $4,000, which is enough to cover the remaining credit of any one customer, so each can be served to completion in turn.

Unsafe
Customer Loan amount Maximum credit Remaining credit
C1 2,000 4,000 2,000
C2 3,000 5,000 2,000
C3 4,000 8,000 4,000
Total loaned: $9,000
Total capital fund: $10,000

From the table above, you can see that the bank has loaned out a total of $9,000 of its $10,000, leaving a balance of $1,000, which is not enough to cover the remaining credit of any customer.
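The bank's safety test can be sketched as a small function. This is an illustrative single-resource version of the Banker's algorithm (the function name and interface are assumptions): it repeatedly looks for a customer whose remaining credit can be covered by the cash on hand, lets that customer finish and repay, and reports whether everyone can eventually be served.

```python
def is_safe(capital, loans, max_credit):
    """Return True if the bank (the O/S) is in a safe state: some order
    exists in which every customer (process) can be granted the rest of
    its maximum credit and then repay (run to completion)."""
    available = capital - sum(loans)
    need = [m - l for m, l in zip(max_credit, loans)]
    finished = [False] * len(loans)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and need[i] <= available:
                available += loans[i]   # the customer repays its whole loan
                finished[i] = True
                progress = True
    return all(finished)

# The safe table: $6,000 out on loan, $4,000 on hand.
print(is_safe(10_000, [0, 2_000, 4_000], [4_000, 5_000, 8_000]))      # True
# The unsafe table: $9,000 out on loan, only $1,000 on hand.
print(is_safe(10_000, [2_000, 3_000, 4_000], [4_000, 5_000, 8_000]))  # False
```

In the safe case the function serves C1 first ($4,000 needed, $4,000 on hand), then C2, then C3; in the unsafe case no customer's remaining credit fits in the $1,000 on hand, so no safe sequence exists.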

Problems with Banker’s Algorithm

1. As they enter the system, jobs must state in advance the maximum number of resources they will need.
2. The total number of resources for each class must remain constant.
3. The number of jobs must remain fixed.
4. The overhead cost incurred by running the avoidance algorithm can be quite high.
5. Resources are not well utilized, because the algorithm assumes the worst case.
6. Scheduling suffers as a result of the poor utilization, and jobs are kept waiting for resource allocation.

6.4.3 Deadlock Detection

In deadlock detection, the O/S uses directed graphs to show that a circular wait exists in the system, which indicates a deadlock. The algorithm used to detect circularity can be executed whenever it is appropriate.

Reducing Directed Resource Graphs

1. Find a process that is currently using a resource and not waiting for one. Remove this process from the graph and return the resource to the “available list.”
2. Find a process that is waiting only for resource classes that are not fully allocated. This process is not contributing to deadlock, since it will eventually get the resource it is waiting for, finish its work, and return the resource to the “available list.”
3. Go back to Step 1 and continue the loop until all lines connecting resources to processes have been removed.
If steps 1-3 are applied repeatedly and a cycle still exists, then the O/S has no choice but to recover from the deadlock.

6.4.4 Deadlock Recovery
Once a deadlock has been detected, it must be untangled and the system returned to normal as quickly as possible. There are several recovery algorithms, all requiring at least one victim (an expendable job that must be terminated to resolve the deadlock and free the system). The recovery mechanisms are listed below.
1. Terminate every job that is active in the system and restart them from the beginning. (Recall that when your system “misbehaves”, you press ctrl+alt+delete and eventually restart the system.)
2. Terminate only the jobs involved in the deadlock and ask their users to resubmit them.
3. Terminate the jobs involved in the deadlock one at a time, checking to see if the deadlock is eliminated after each removal, until it has been resolved.
4. Have jobs keep a record (snapshot) of their progress so they can be interrupted and then continued without starting again from the beginning of execution.
5. Select a non-deadlocked job, preempt the resources it is holding, and allocate them to a deadlocked process so it can resume execution, thus breaking the deadlock.
6. Stop new jobs from entering the system, which allows non-deadlocked jobs to run to completion so they will release their resources (no victim).

To select a victim with the least-negative effect, the O/S considers:

 the priority of the job under consideration; high-priority jobs are usually untouched;
 the CPU time used by the job; jobs close to completion are usually left alone;
 the number of other jobs that would be affected if this job were selected as the victim.
In addition, programs working with databases deserve special treatment.

6.5 Starvation
Starvation results from conservative allocation of resources: a single job is prevented from execution because it is kept waiting for resources that never become available. An algorithm designed to detect starving jobs tracks how long each job has been waiting for resources. To let the job recover from starvation, the process is aged. Aging is the gradual increase of the priority of a starved process until it eventually becomes the process with the highest priority and is thus allowed to run.
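Aging can be sketched as follows. The queue representation, the boost value, and the scheduling loop are all illustrative assumptions, not a description of any particular scheduler:

```python
def pick_next(queue, boost=1):
    """Pop the highest-priority job; every job left waiting is aged."""
    queue.sort(key=lambda job: job["priority"], reverse=True)
    chosen = queue.pop(0)
    for waiting in queue:
        waiting["priority"] += boost      # aging: waiting jobs gain priority
    return chosen

# A starved low-priority job vs a steady stream of high-priority arrivals.
queue = [{"name": "starved", "priority": 1}]
picked = []
for _ in range(6):
    queue.append({"name": "new", "priority": 5})
    picked.append(pick_next(queue)["name"])
print(picked)   # "starved" is chosen once its aged priority catches up
```

Without the aging line, each fresh priority-5 arrival would win every pass and the priority-1 job would wait forever; with it, the waiting job's priority climbs by one per pass until it is scheduled.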

6.6 Race Condition

In simple terms, a race condition is a situation in which two processes compete for the same resource. In computer memory or storage, a race condition may occur if commands to read and write a large amount of data are received at almost the same instant, and the machine attempts to overwrite some or all of the old data while that old data is still being read. The result may be one or more of the following:
 the computer crashes or identifies an illegal operation of the program;
 errors reading the old data;
 errors writing the new data.

Race conditions occasionally occur in logic gates when inputs come into conflict. Because the gate output state takes a finite, nonzero amount of time to react to any change in input states, sensitive circuits or devices following the gate may be fooled by the state of the output and not operate properly. In a network, a race condition may occur if two users try to access a channel at the same instant and neither computer receives notification that the channel is occupied before the system grants access. Statistically, this kind of situation occurs mostly in networks with long lag times, such as those that use geostationary satellites.

The serialization of memory or storage access will prevent race conditions: if read and write commands are received close together, the read command is executed and completed first by default. To prevent a race condition in a network, a priority scheme must be devised to give one user exclusive access. For example, the subscriber whose username or number begins with the earlier letter of the alphabet or the lower numeral may get priority when two subscribers attempt to access the system within a prescribed increment of time.
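The read-while-writing race described above, and its fix by serializing access, can be demonstrated with a few threads. This is a Python sketch (the sleep is there only to force the unlucky interleaving; the function names are illustrative):

```python
import threading
import time

counter = 0
lock = threading.Lock()

def unsafe_increment():
    """Read-modify-write with no protection: updates can be lost."""
    global counter
    value = counter          # read the old data...
    time.sleep(0.001)        # ...while other threads read the same value
    counter = value + 1      # write back: earlier writes are overwritten

def safe_increment():
    """The same update, serialized by a lock: no race."""
    global counter
    with lock:
        value = counter
        time.sleep(0.001)
        counter = value + 1

def run(worker, n=5):
    """Run n threads of `worker` against a fresh counter."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(unsafe_increment))   # usually less than 5: lost updates
print(run(safe_increment))     # always 5: serialized access
```

The lock plays the same role as the "read completes first" rule above: it forces each read-modify-write to finish before the next one starts.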

What are the types of race conditions?

Two categories that define the impact of a race condition on a system are critical and noncritical:
 A critical race condition will cause the end state of the device, system or program to change. For example, if flipping two light switches connected to a common light at the same time blows the circuit, it is a critical race condition. In software, a critical race condition is one that results in a bug with unpredictable or undefined behaviour.
 A noncritical race condition does not directly affect the end state of the system, device or program. In the light example, if the light is off and flipping both switches simultaneously turns the light on, with the same effect as flipping one switch, then it is a noncritical race condition. In software, a noncritical race condition does not result in a bug.

Questions
1. State the four necessary conditions for deadlock to occur.
2. Differentiate between deadlock, starvation and race condition.
3. When is a system said to be in a safe state?
4. List and explain any four methods you can use to recover a system from a deadlock situation.

UNIT 7 Device Management

Introduction
Device management in an operating system means controlling the input/output devices, such as disks, microphones, cameras, keyboards, printers, magnetic tape, USB ports, camcorders and scanners, along with supporting units such as control channels. A process may require various resources, including main memory, file access, access to disk drives, and others. If the resources are available, they can be allocated and control returned to the CPU; otherwise, the process must be postponed until adequate resources become available. The system has multiple devices, and in order to handle these physical or virtual devices the operating system requires a separate program known as a device controller, which also determines whether the requested device is available.

Learning Objectives
After completing this unit, you should be able to describe:
• Features of dedicated, shared, and virtual devices
• Differences between sequential and direct access media
• Roles of seek time, search time, and transfer time in calculating access time
• Critical components of the input/output subsystem, and how they interact
• Strengths and weaknesses of common seek strategies, including FCFS, SSTF, SCAN/LOOK, and C-SCAN/C-LOOK, and how they compare

7.1 Device Management Basic Functions

The device manager of the operating system, like other managers, has four basic functions:
1. Tracking the status of each device
2. Using pre-set policy to determine which process gets which device and for how long
3. Allocating the devices
4. Deallocating them at two levels:
a. at the process (or task) level, when an I/O command has been executed and the device is temporarily released, and
b. at the job level, when the job is finished and the device is permanently released.

7.2. System Devices


The system’s devices fall into one of three categories: dedicated, shared, and virtual. The
differences are a function of the characteristics of the devices, as well as how they are managed by
the device manager.
7.2.1. Dedicated Devices
Dedicated devices are devices that are assigned to only one job at a time; they serve that job for the entire time it is active, or until the job releases them. Dedicated devices include tape drives, printers, and plotters. It would be awkward to let several users share them: a shared plotter might produce half of one user's graph and half of another's.
Disadvantages
1. They must be allocated to a single user/job for the duration of that job's execution, which can be quite inefficient because the device may not be used 100 percent of the time.
2. Resources are wasted.
7.2.2. Shared Devices
Shared devices can be assigned to several processes at a time. They include any direct access storage device (DASD). They can be shared by several processes at the same time by interleaving their requests, but this interleaving must be carefully controlled by the device manager. All conflicts, such as when Process A and Process B each need to read from the same disk, must be resolved based on predetermined policies that decide which request will be handled first.
7.2.3. Virtual Devices
Virtual devices are dedicated devices that have been transformed into shared devices. For example, printers (which are dedicated devices) are converted into sharable devices through a spooling program that reroutes all print requests to a disk. Only when all of a job's output is complete, and the printer is ready to print out the entire document, is the output sent to the printer for printing. (Have you ever sent a file to a printer that was offline and then forgotten that you did so? The day that printer is reconnected to your system, the file gets printed!) This procedure has to be managed carefully to prevent deadlock. Because disks are sharable devices, this technique can convert one printer into several virtual printers, improving both performance and use. Spooling is a technique that is often used to speed up slow dedicated I/O devices.
7.3. Storage Media
The storage media that any digital device uses can be classified into two groups: sequential access media, and direct access storage devices (DASD). There are vast differences in their speed and shareability.
7.3.1. Sequential Access Storage Media
These are storage media that store records sequentially, one after the other. In early systems they included printouts, punch cards and paper tape, before magnetic tape was developed for routine secondary storage. Magnetic tapes are now used for routine archiving and for storing backup data. On magnetic tape, records of any length are stored serially, and each record is identified by its position on the tape. So to access a record, the tape is mounted and “fast-forwarded” from the beginning until the desired position is reached, which can be time consuming. Tapes usually have nine tracks, with the ninth track used as a parity bit (for error checking); data is recorded on the other eight parallel tracks.
The number of characters that can be recorded per inch depends on the density of the tape, e.g. 1600 bytes per inch (bpi). Between each record on the tape is an inter-record gap (IRG), which is about ½ inch long. The IRG gives the tape space to stop while it waits for the next read; the tape only moves when there is a need to access a record, otherwise it stands still.
There are two ways to store records on a tape: individually, or as a block (i.e. grouped).
 If stored individually, then each record (irrespective of its length) is followed by an IRG. This can waste tape when the records are short, say 1/10 of an inch.
 If records are grouped into blocks, then each block is followed by an inter-block gap (IBG), which is also ½ inch long. Blocking is performed when the file is created, and the blocks need to be “deblocked” to access the individual records. The number of records in a block is usually determined by the application program, which normally takes advantage of the transfer rate of the tape.
The access time of a tape is how long it takes to access a record or block on the tape, and it varies. For a 2,400-foot tape, the average access time is 1.25 minutes.
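The quoted figure can be checked with a back-of-envelope calculation. The transport speed of 200 inches per second below is an assumed, illustrative value (the text does not give one); on average, the desired record sits halfway along the reel:

```python
# Back-of-envelope check of the quoted tape access time.
TAPE_FEET = 2_400
SPEED_IPS = 200                          # inches per second (assumption)

full_pass = TAPE_FEET * 12 / SPEED_IPS   # seconds to traverse the whole reel
average_access = full_pass / 2           # on average, the record is halfway in
print(average_access / 60, "minutes")    # 1.2 minutes, close to the 1.25 quoted
```

A full pass of the reel takes 144 seconds under this assumption, so the average seek of half the tape lands close to the 1.25-minute figure given above.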
Advantages of Blocking
1. Fewer I/O operations are needed, because a single read command can move an entire block.
2. Less tape is wasted.
Disadvantages of Blocking
1. Overhead and software routines are needed for blocking, deblocking and record keeping.
2. Buffer space may be wasted if you need only one record but must read the entire block to get it.
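A quick calculation shows how much tape blocking saves for the 1/10-inch records mentioned above. The blocking factor of ten records per block is an assumption for the sake of the example:

```python
# Tape consumed by 100 short records, unblocked vs blocked in tens.
RECORD_IN = 0.1     # record length in inches (from the text)
GAP_IN = 0.5        # IRG / IBG length in inches (from the text)
RECORDS = 100
PER_BLOCK = 10      # assumed blocking factor

unblocked = RECORDS * (RECORD_IN + GAP_IN)                       # a gap after every record
blocked = RECORDS * RECORD_IN + (RECORDS // PER_BLOCK) * GAP_IN  # one IBG per block

print(round(unblocked, 1), "vs", round(blocked, 1))   # 60.0 vs 15.0 inches
```

With a gap after every record, five-sixths of the tape is gaps; blocking in tens cuts total tape use from 60 inches to 15.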

Figure 7.1: Example of Storage on Tape



Figure 7.2: Example of Records with Inter Record Gaps

Figure 7.3: Block Records with Inter Block Gaps

7.3.2. Direct Access Storage Devices (DASD)

These are devices that can directly read or write to any specific place on a disk. They are also called random access storage devices. DASDs are grouped into those with fixed read/write heads and those with movable read/write heads.

7.3.2.1. Fixed-Head Drums and Disks

Fixed-head drums were the first to be developed, in the early 1950s. A drum resembles a can covered with magnetic film and formatted so that the tracks run around it, like a cylinder. Data is recorded serially on each track by that track's read/write head. Fixed-head disks are similar, except that they are flat disks covered with magnetic film, usually on both sides. As can be seen in Figure 7.4, each track has its own read/write head.

Figure 7.4: A disk with fixed read/write heads

Storage disks are structured with tracks that appear as concentric circles; each circle is a track. The tracks are numbered from the outermost to the innermost in ascending order, and are further divided into sectors. See Figure 7.5.

Figure 7.5: Structure of a magnetic disk

7.3.2.2. Movable-Head Disks

These have only a few read/write heads that move from track to track to cover the entire surface of the drum or disk. A movable-head disk has a read/write head that floats over each surface of the disk, so for the system to read or write data, the disk must spin. This contributes to the noise you sometimes hear when you double-click a file to open it. Disks can be a single unit, as in a 5¼-inch disk, or part of a disk pack, which is a stack of disks mounted on a central spindle ½ inch apart to allow space for the read/write heads (Figure 7.6). Disk packs are made of platters, each with two recording surfaces, except the top and bottom platters. For a disk pack, data can be recorded either track by track across all disks or by filling one disk before writing on another.

Figure 7.6: Cross section of a movable head disk

The read/write heads move between each pair of surfaces, and all the heads are moved in unison by the arm.

Figure 7.7: A Hard drive

Figure 7.7 shows a typical hard drive from a PC, showing the arm that floats over the surface of the disk (courtesy Seagate Technology).

On a magnetic disk, the sectors are of different sizes: bigger at the rim and smaller at the centre.
The disk spins at a constant angular velocity (CAV) to compensate for this difference.

Magnetic Disk Drive Access Times

Depending on whether a disk has fixed or movable heads, three factors contribute to the time required to access a file: seek time, search time, and transfer time.

Seek time: the time required to position the read/write head over the proper track. It does not apply to fixed-head devices.
Search time (rotational delay): the time it takes to rotate the drum/disk until the requested record moves under the read/write head.
Transfer time: the time it takes to transfer data from secondary storage to main memory.
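These three components simply add up, which a short worked example makes concrete. The seek and transfer figures below are illustrative assumptions, not values from the text; the rotational delay is derived from an assumed spindle speed:

```python
# Access time for a single request on a movable-head disk.
RPM = 7_200                       # assumed spindle speed
rotation_ms = 60_000 / RPM        # one full revolution: about 8.33 ms
search_ms = rotation_ms / 2       # rotational delay: half a revolution on average
seek_ms = 8.0                     # assumed average head movement (seek time)
transfer_ms = 0.5                 # assumed time to copy the sector to memory

total_ms = seek_ms + search_ms + transfer_ms
print(f"{total_ms:.2f} ms")       # a fixed-head device would drop seek_ms
```

The same arithmetic explains why fixed-head devices are faster: with no head movement, the seek term disappears and only the rotational delay and transfer remain.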

7.3.2.3. Optical Disc Storage

These include the CD-ROM, DVD and Blu-ray. They provide high-density storage and are reliable media. An optical disc drive functions like a magnetic disk drive; it has a head, which only reads in the case of a CD-ROM. The advent of optical disc storage was made possible by developments in laser technology.

Differences Between Magnetic Disk and Optical Disc

Among the many differences between an optical disc and a magnetic disk is the design of the disc track and sectors.
1. A magnetic disk, which consists of concentric tracks of sectors, spins at a constant speed; this is called constant angular velocity (CAV).
2. An optical disc, on the other hand, consists of a single spiralling track of same-sized sectors running from the centre to the rim of the disc. All sectors are the same size regardless of their location on the disc.

Figure 7.8: the structure of an optical disc

3. This design allows many more sectors, and much more data, to fit on an optical disc compared to a magnetic disk of the same size. The disc drive adjusts the speed of the disc's spin to compensate for the sector's location on the disc; this is called constant linear velocity (CLV).

Important Features of Optical Drives:

 Sustained data transfer rate. The data transfer rate is measured in megabytes per second and refers to the speed at which massive amounts of data can be read from the disc.
 Average access time, expressed in milliseconds (ms). Access time indicates the average time required to move the head to a specific place on the disc. The fastest units have the smallest average access time, which is the most important factor when searching for information randomly, such as in a database.
 Cache size, which has a substantial impact on perceived performance. A hardware cache acts as a buffer by transferring blocks of data from the disc, anticipating that the user may want to reread some recently retrieved information, which can be done quickly if the information remains in the cache. The cache can also act as a read-ahead buffer, looking for the next block of information on the disc.
 Latency time: the time it takes for data to rotate from its current position to a position adjacent to the read/write head.

7.3.2.4. Flash Memory Storage

Flash memory is a type of electrically erasable programmable read-only memory (EEPROM). It is a non-volatile removable medium that emulates random access memory but, unlike RAM, stores data securely even when removed from its power source. Historically, flash memory was primarily used to store start-up (boot) information for computers, but it is now used to store data for cell phones, mobile devices, music players, cameras, and more. If your system has an SSD (Solid State Drive), then its hard disk is made of flash memory rather than a magnetic disk, which explains why it is very silent. Flash memory is sold in a variety of configurations, including compact flash, smart cards, and memory sticks.
7.4. Components of the I/O Subsystem
The pieces of the I/O subsystem must all work harmoniously. The components of the I/O subsystem include the channels, the control units and the I/O devices. I/O channels are programmable units placed between the CPU and the control units to synchronize the fast speed of the CPU with the slow speed of the I/O devices. They make it possible to overlap I/O operations with processor operations so that the CPU and I/O can process concurrently. Each channel program specifies the action to be performed by the devices and controls the transmission of data between main memory and the control units. The channel sends one signal for each function, and the I/O control unit interprets the signal for the device. Examples of channel signals are “go to the top of the page” if the device is a printer, or “rewind” if the device is a tape drive.

Some systems also have a disk controller, or disk drive interface, which is a special-purpose device used to link the disk drives with the system bus. Disk drive interfaces control the transfer of information between the disk drives and the rest of the computer system. The operating system normally deals with the controller, not the device.

Figure 7.9 shows an I/O subsystem configuration with multiple paths, which increase both flexibility and reliability. With the two additional paths, shown with dashed lines, if Control Unit 2 malfunctions, then Tape 2 can still be accessed via Control Unit 3.

Figure 7.9: An alternative configuration

7.4.1. Communication Among Devices

The device manager relies on several features to keep running efficiently under the demanding conditions of the computer system. There are three problems that need to be resolved:
1. It needs to know which components are busy or free.
2. It must be able to accommodate the requests that come in.
3. It must accommodate the disparity in speed between the CPU and the I/O devices.
Problems 2 and 3 are handled by buffering records and queuing requests; problem 1 is handled by structuring the interaction between units.

Example
For example, after a device has begun writing a record, and before it has completed the task, the connection between the device and its control unit can be cut off so the control unit can initiate another I/O task with another device. Meanwhile, at the other end of the system, the CPU is free to process data while I/O is being performed, which allows for concurrent processing and I/O. For the system to know when a device has completed its job, it tests a hardware flag. This flag has three bits and resides in the channel status word (CSW), which sits at a predefined location in main memory and contains information indicating the status of each channel. Each bit represents one of the components of the I/O subsystem (one each for the channel, control unit, and device) and is changed from 0 to 1 to indicate that the unit has changed from free to busy, and vice versa. Each component has access to the flag, which can be tested before proceeding with the next I/O operation to ensure that the entire path is free.

Two common ways to perform the test are polling and interrupts. Polling uses a special instruction to test the flag. For example, the CPU periodically tests the channel status bit (in the CSW). If the channel is still busy, the CPU performs some other processing task until the test shows that the channel is free; then the channel performs the I/O operation. The disadvantage of this scheme is determining how often the flag should be polled:
 If polling is done too frequently, the CPU wastes time testing the flag just to find out that the channel is still busy.
 If polling is done too seldom, the channel could sit idle for long periods of time.
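The polling loop can be sketched with a flag and a timer. This is an illustrative simulation, not real device code: the sleep intervals are arbitrary, and shrinking the polling interval wastes more tests while growing it leaves the channel idle longer.

```python
import threading
import time

channel_busy = threading.Event()      # stands in for the busy bit in the CSW
channel_busy.set()                    # the channel starts out busy

def channel_work():
    time.sleep(0.05)                  # the I/O operation takes a while...
    channel_busy.clear()              # ...then the flag flips back to free

worker = threading.Thread(target=channel_work)
worker.start()

polls = 0
while channel_busy.is_set():          # the CPU repeatedly tests the flag,
    polls += 1
    time.sleep(0.01)                  # doing "other work" between tests
worker.join()
print("channel free after", polls, "polls")
```

Every iteration of the `while` loop is a wasted test whenever the flag is still set, which is exactly the trade-off the two bullets above describe.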
Interrupts
The use of interrupts is a more efficient way to test the flag. Instead of having the CPU test the flag, a hardware mechanism does the test as part of every machine instruction executed by the CPU. If the channel is busy, the flag is set so that execution of the current sequence of instructions is automatically interrupted and control is transferred to the interrupt handler, which is part of the operating system and resides in a predefined location in memory. The job of the interrupt handler is to determine the best course of action based on the current situation, because it is not unusual for more than one unit to have caused the I/O interrupt. The interrupt handler must find out which unit sent the signal, analyse its status, restart it when appropriate with the next operation, and finally return control to the interrupted process.
7.4.2. Direct memory access (DMA)
Direct memory access (DMA) is an I/O technique that allows a control unit to directly access main
memory. Here, once reading or writing has begun, the remainder of the data can be transferred to
and from memory without CPU intervention. This mode of data transfer is used for high-speed
devices such as disks. Without DMA, the CPU is responsible for the physical movement of data
between main memory and devices. This is a time-consuming task that results in significant
overhead and decreased CPU utilization.

Buffers
Buffers are temporary storage areas residing in convenient locations throughout the system.
They are used to better synchronize the movement of data between slow I/O devices and the very
fast CPU. They hold data from input devices before it is needed by the CPU, and vice versa.
Sometimes a double buffering system is used to minimize device idle time; here two buffers are
present. One is used to hold data while it is being processed by the CPU, and the other is used to
read/write data that had been previously stored by the CPU.
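The value of double buffering can be shown with simple arithmetic. The timing model below is a deliberate simplification (constant fill and process times, perfect overlap between device and CPU), not a measurement of any real system:

```python
def single_buffer_time(n_blocks, t_fill, t_process):
    # One buffer: the device fill and the CPU processing must alternate.
    return n_blocks * (t_fill + t_process)

def double_buffer_time(n_blocks, t_fill, t_process):
    # Two buffers: after the first fill, each new fill overlaps the
    # processing of the previous block, so the slower of the two
    # activities dominates each step.
    return t_fill + (n_blocks - 1) * max(t_fill, t_process) + t_process

# 4 blocks, 10 ms to fill a buffer, 5 ms to process it:
print(single_buffer_time(4, 10, 5))  # -> 60 (ms)
print(double_buffer_time(4, 10, 5))  # -> 45 (ms)
```

With these numbers, double buffering cuts the total time from 60 ms to 45 ms, because every fill after the first is hidden behind the processing of the previous block.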

7.5. Management of I/O Requests


The device manager handles requests using three of its components:
1. I/O traffic controller: monitors the status of every device, control unit (CU), and channel.
It has three main tasks:
 it determines if there is at least one path available
 if more than one path is available, it must decide which to use
 if all paths are busy, it must determine when one will become available

To do this, it maintains a database containing the status and connection for each unit in the
I/O subsystem.
2. I/O scheduler: allocates the devices, control units, and channels. When there are more
requests than available paths, the I/O scheduler must decide which request will be satisfied
first. I/O requests are not pre-empted, i.e. once a channel program has started, it is allowed
to continue to completion, even if there are I/O requests with higher priority.
3. I/O device handler: this processes the I/O interrupts, handles error conditions, and provides
detailed scheduling algorithms which are device dependent. Each type of I/O device has
its own device handler algorithm.

7.5.1. Device Handler Seek Strategies for hard disk with movable R/W head
During light loads (i.e. a small request queue length) the system uses FCFS scheduling to handle
requests (FCFS is fair, but arm movement can be excessive). However, this is not always effective,
because the system may need to read/write from tracks 1, 20, 2, 30, 5, in this order, so the
read/write head will be moving up and down. Under heavy loads, other scheduling algorithms are used.
A good disk scheduling policy should have the following characteristics
1. minimize arm movement thereby maximizing throughput
2. minimize mean response time
3. minimize the variance in response time
The main goal is to keep seek time to a minimum.
The various disk scheduling policies are:
1. FCFS (for light loads)
2. SSTF – Shortest Seek Time First (for moderate loads): the request with the track closest to
the one being served is the next to be satisfied. That is, the disk arm is positioned such that
it travels the shortest distance to the next track to be served. This may leave tracks further
away waiting for a long time.
3. SCAN (light to moderate loads): the disk arm serves requests along its path. When it gets
to the outermost or innermost track, it changes direction and services all requests along its
path that were already on the queue. New requests must wait. SCAN ensures that requests
at the outermost tracks, or farthest from the arm, do not wait too long.
4. A variation of SCAN is LOOK (the elevator algorithm), in which the arm does not go all
the way to the edges unless there are requests there.
5. N-STEP SCAN: moves as in SCAN; however, any new requests that arrive while the arm
is moving in one direction are added to the queue and reordered so that they are served on
the arm's return trip.
6. C-SCAN (moderate to heavy loads): circular SCAN moves in one direction only, i.e. it
starts from the outermost track, moves to the innermost, and serves requests on its inward
sweep. When it gets to the innermost track, it immediately returns to the outermost and
starts servicing requests that arrived during its last inward sweep.

7. C-LOOK: the LOOK version of C-SCAN. The arm does not go all the way to the innermost
track unless there are requests there.
In practice, the best scheduling algorithm may be a combination of two of the above.
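The seek strategies can be compared with a short sketch. The request queue and starting track below are a made-up example (not taken from the review questions), and only FCFS, SSTF, and LOOK are implemented:

```python
def fcfs(start, requests):
    """Serve requests strictly in arrival order (start is unused)."""
    return list(requests)

def sstf(start, requests):
    """Always serve the pending request closest to the current track."""
    pending, order, pos = list(requests), [], start
    while pending:
        nearest = min(pending, key=lambda track: abs(track - pos))
        pending.remove(nearest)
        order.append(nearest)
        pos = nearest
    return order

def look(start, requests, direction="down"):
    """Sweep in one direction serving requests on the way, then reverse.
    The arm goes only as far as the last request (elevator algorithm)."""
    lower = sorted((t for t in requests if t < start), reverse=True)
    upper = sorted(t for t in requests if t >= start)
    return lower + upper if direction == "down" else upper + lower

def total_seek(start, order):
    """Total arm movement (in tracks) for a given service order."""
    pos, distance = start, 0
    for track in order:
        distance += abs(track - pos)
        pos = track
    return distance

queue, head = [98, 183, 37, 122, 14, 124, 65, 67], 53
print(total_seek(head, fcfs(head, queue)))          # -> 640
print(total_seek(head, sstf(head, queue)))          # -> 236
print(total_seek(head, look(head, queue, "down")))  # -> 208
```

For this queue, SSTF (236 tracks) and LOOK (208 tracks) both move the arm far less than FCFS (640 tracks), which is why FCFS is reserved for light loads.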

Review Questions
1. What are the components of an I/O subsystem, and what are their functions?
2. With suitable diagrams, compare and contrast the structure of the hard disk with that of the CD-
ROM
3. System devices can be broadly classified into 3. List and describe these three.
4. In a bid to reduce the outbreak of malaria in the ground floor of Nando Hall, the hall warden
authorized that the rooms on that floor be sprayed by Mr Peace. There are 30 rooms on that floor
and the students are to make requests to Mr. Peace for their room to be sprayed. At a point in time,
Mr. Peace was heading to the beginning of the floor (i.e. lower room numbers) but stopped in room
no. 10 to service a request. The outstanding requests he has are for rooms 1, 11, 29, 14, 22, in that
order. It takes Mr. Peace 1 second to move from one room to the next. Ignoring how long it
will take to spray a room, state the sequence of Mr. Peace's movement along the floor and calculate
the total seek time only, if he is using:
i. Shortest Seek Time First
ii. FCFS
iii. LOOK

UNIT 8 FILE MANAGEMENT

Introduction
The File Manager controls every file in the system. The efficiency of the File Manager depends
on how the system’s files are organized; how they are stored; how each file’s records are
structured; and how access to these files is controlled. In this chapter, we will study all of these.
The File Manager (also called the file management system) is the software responsible for creating,
deleting, modifying, and controlling access to files—as well as for managing the resources used
by the files. The File Manager provides support for libraries of programs and data to online users,
for spooling operations, and for interactive computing. These functions are performed in
collaboration with the Device Manager that we studied in the previous lesson.

Learning Objectives
After completing this unit, you should be able to describe:
1. The fundamentals of file management and the structure of the file management system
2. File-naming conventions, including the role of extensions
3. The difference between fixed-length and variable-length record format
4. The advantages and disadvantages of contiguous, non-contiguous, and indexed file storage
techniques

8.1. What is a file?


A file is a named collection of data, usually stored on a secondary storage device. Every piece of
data or information stored on a digital device is a file. There are, however, three classifications of
files: data files (created with application programs, e.g. a Word document), program files (used to
give instructions to the computer to do certain things; they are further divided into application
files and system files), and directories (a directory is a file that lists all the files in a
directory/folder). For example, the file Holiday.docx is a data file that was created with the
Microsoft Word application, and its extension is "docx". This shows us that every file on the system
must be named. A file can be referred to by its absolute file name (complete file name), which is
a long name that includes all path information (e.g. C:\user\documents\Holiday.docx). Alternatively,
a file may also be referred to by its relative file name, which is the short name seen in directory
listings. This name is selected by the user when the file is created. Every file name must have an
extension: a 2–4 character name used to identify the type of file or its contents, separated from the
relative name by a period. Extensions such as BAS, BAT, COB, and EXE signal the system to use a
specific compiler or program to run these files; extensions such as TXT, DOC, PDF, and JPEG are
created by applications, or by users for their own identification.
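The parts of a file name can be demonstrated with Python's `pathlib`, using the example path from the text (this is a sketch of the naming concepts, not of how any operating system stores names internally):

```python
from pathlib import PureWindowsPath

absolute = PureWindowsPath(r"C:\user\documents\Holiday.docx")
print(absolute.name)    # relative file name: 'Holiday.docx'
print(absolute.stem)    # name without the extension: 'Holiday'
print(absolute.suffix)  # extension, after the period: '.docx'
print(absolute.parent)  # the path portion: 'C:\user\documents'
```

`PureWindowsPath` is used so the backslash-separated path parses the same way on any platform.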

Files are made up of records, and records consist of fields. File types include: executable, text,
object, library, source code, print (PDF), archive (zip), and multimedia.

There are several operations that can be carried out on files. These include: -

1. Open 2. Close 3. Create 4. Destroy 5. Copy 6. Rename 7. List (Print)


You have probably performed at least four of these operations on a file. Apart from the
operations on a file, there are further operations that can be performed on the individual records
that make up a file. These operations include: read, write, update, insert, delete.

8.1.1. Characteristics of Files
When working with certain files, especially data files, there are some characteristics that are
usually noted. They are:
1. File volatility: this is the frequency of additions and deletions made to a file
2. Activity: this is the percentage of file’s records accessed during a given period of time
3. Size: the amount of information stored in the file

8.2. Directory / Volume Structure


There are three main ways computing devices arrange the files stored on them. Note that directories
are synonymous with folders.
1. Single level: This was the first way of storing files on a storage medium, where all the files
are stored in one location, usually the root directory.
2. Two-level: The two-level structure enabled the operating system to create some directories
under the root directory where user files are stored.
3. Tree: The tree structure is the structure currently used to organise files on a storage
medium. This structure enables users to create as many directories (folders) as they need.
Note that unlike the regular tree you see around, this particular tree structure has its root
at the top. The root is usually labelled as the C drive.

Figure 8.1: File directory tree structure


The “root” is the MFD shown at the top, each node is a directory file, and each branch is a directory
entry pointing to either another directory or to a real file. All program and data files subsequently
added to the tree are the leaves, represented by circles.

8.3. File System


File systems are very important; examples include NTFS, CDFS, and FAT. They determine the
naming convention used for files. The file system for any storage medium generally stipulates:
1. Access methods: concerned with how the data stored in files are accessed.
2. File management: providing mechanism for files to be stored, shared, secured, referenced.

3. Auxiliary Storage management: allocating space for files on secondary storage device.
4. File Integrity Mechanisms: guaranteeing that the information stored on files is not
corrupted.

Functions of a file system


It gives the user the
1. Ability to create, modify and delete files
2. Ability to share files in a carefully controlled manner (read access, write and execute
access).
3. Ability to structure their files.
4. Ability to transfer information between files.
5. Ability to refer to files by symbolic names instead of using physical device name
It also
6. Provides backup and recovery capabilities.
7. Provides security to sensitive files
8. Provides a user-friendly interface that gives users a logical view of their data, instead of a
physical view. This is achieved with the help of folders.
Volume refers to a storage unit, whether the unit is removable or not, e.g. drum, hard disk (HD),
diskettes, memory cards, flash disks. Each volume is given a name, and the O/S writes this name,
along with other descriptive information, in an area on the volume called the volume descriptor.
The volume descriptor includes:
1. Creation date
2. Volume name
3. Pointer to directory area: i.e. the first sector where the directory is stored
4. Pointer to file area: i.e. the first sector where the files are stored
5. File system code: used to detect volumes with incorrect formats
There is also a Master File Directory (MFD), which is stored immediately after the volume
descriptor. The MFD lists the name and characteristics of every file contained in the volume.
Subdirectories are also listed there for systems that support subdirectories. Each file entry in every
directory contains information describing the file. This is known as the file descriptor or file control
block. The File Control Block (FCB) is a highly system dependent structure that users may not
access directly. It is controlled by the O/S and contains information the system needs to manage
the file. The FCB includes: -

1. File name
2. Size
3. Creation date and time
4. Owner access control data
5. Device type
6. Date and time last modified
7. File type
8. Location
9. Record size
10. Organization
11. Destroy date
12. Access activity counts
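The FCB can be pictured as a record structure. The sketch below is purely illustrative: real FCB layouts are system dependent and, as noted above, not directly accessible to users; every field name and value here is invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FileControlBlock:
    """Toy model of the kind of fields an FCB records for each file."""
    name: str
    file_type: str
    size: int                  # bytes
    location: int              # starting block on the volume
    record_size: int
    organization: str          # e.g. 'sequential', 'indexed'
    owner: str                 # owner / access control data
    device_type: str
    created: datetime
    modified: datetime
    access_count: int = 0      # access activity count

fcb = FileControlBlock("Holiday.docx", "data", 24_576, 42, 512,
                       "sequential", "student", "hard disk",
                       datetime(2024, 1, 5), datetime(2024, 2, 1))
print(fcb.name, fcb.location)  # -> Holiday.docx 42
```

The operating system would consult a structure like this, rather than the file's contents, whenever it needs to locate or manage the file.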

8.4. File Organization
File organization refers to the way the records of a file are arranged on secondary storage. The
popular ones include:
 Sequential record organization: used mainly on magnetic tapes. Here the records are stored
/accessed serially, one after the other. To access a record, you must search from the
beginning of the file.
 Direct record organization: used only on direct access storage devices. Records are directly
(randomly) accessed or stored by their physical addresses. A hashing technique/algorithm
is used to locate data in direct access files, i.e. to convert a record key (logical address)
into a physical address.
 Indexed sequential: Records are arranged in logical sequence according to the record key.
Then the system maintains an index of the physical addresses. Indexed sequential records
can be accessed sequentially using the key order or directly using the system generated
index.
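The hashing step used in direct record organization can be sketched as follows. The toy hash (key modulo the number of buckets) and the chaining scheme for collisions are illustrative choices, not the algorithm of any particular file system:

```python
NUM_BUCKETS = 7          # number of storage 'buckets' on the device
store = {}               # bucket number -> list of (key, record) pairs

def bucket_for(key):
    # Toy hashing: convert the record key into a physical address.
    return key % NUM_BUCKETS

def write_record(key, data):
    # Colliding keys are chained together in the same bucket.
    store.setdefault(bucket_for(key), []).append((key, data))

def read_record(key):
    # Go straight to the hashed bucket; no sequential search of the file.
    for k, data in store.get(bucket_for(key), []):
        if k == key:
            return data
    return None

write_record(1001, "Ada Obi")
write_record(1008, "Efe Musa")  # 1008 hashes to the same bucket: a collision
print(read_record(1008))        # -> 'Efe Musa'
```

The key point is that access cost does not depend on the record's position in the file: the hash jumps directly to (near) the right physical address.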

8.4.1. Record Format


Recall that files are made up of logically related records. When a user gives a command to modify
the contents of a file, it’s actually a command to access records within the file. Within each file,
the records are all presumed to have the same format. They can be of fixed length or of variable
length, and these records, regardless of their format, can be blocked or not blocked.

When data is stored in fixed-length fields, any data that extends beyond the fixed size is
truncated. However, when data is stored in a variable-length record format, the record expands
to fit the contents, but it takes longer to access.
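The difference can be made concrete with Python's `struct` module. The 10-byte name field and 2-byte age field are arbitrary sizes chosen for the illustration:

```python
import struct

# Fixed-length record: always exactly 12 bytes (10-byte name + 2-byte age).
FIXED = struct.Struct("<10sH")

def pack_fixed(name, age):
    # struct truncates or null-pads the name to exactly 10 bytes.
    return FIXED.pack(name.encode(), age)

def pack_variable(name, age):
    # Variable-length record: a 2-byte length prefix, then the name bytes,
    # then the age. The record grows with its contents.
    data = name.encode()
    return struct.pack("<H", len(data)) + data + struct.pack("<H", age)

long_name = "Oluwaseun-Adams"
print(len(pack_fixed(long_name, 21)))     # -> 12 (name truncated to fit)
print(len(pack_variable(long_name, 21)))  # -> 19 (record grew to fit)
```

Fixed-length records can be located by simple arithmetic (record *i* starts at byte *i* × 12), while variable-length records must be walked prefix by prefix, which is why they are slower to access.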

8.4.2. Access Methods


This is the method of accessing/retrieving records in a file. There are two access methods which
are:
1. Queued access method: used mainly for serially stored data and implemented when the
sequence in which the records are to be processed can be anticipated. It performs
anticipatory buffering, automatic blocking and deblocking.
2. Basic access method: used when the records processing sequence cannot be anticipated.

8.5. Physical Storage Allocation


This is how the records of a file are physically stored on the disk. There are three main physical
storage allocation methods: contiguous, chained (linked), and indexed. The last two are forms of
non-contiguous storage.

8.5.1. Contiguous storage
In this type of allocation, the records of a file are stored one after the other. A single set of blocks
is allocated to a file at the time of creation, so every part of the file is stored in the same compact
area. See figure 8.2. The file allocation table contains information stating the starting block and
length of each file. Any record can be found and read once the starting address and size are known.

Figure 8.2: Contiguous file allocation

Because of the way file blocks are allocated in this scheme, external fragmentation will occur.
External fragmentation is a phenomenon in which there are slivers of unused storage space (blocks
0, 1, 5, 29, etc. in figure 8.2 above) between sets of blocks belonging to different files. When this
occurs, the operating system will need to perform compaction. Compaction rearranges the blocks
of the different files on the disk so that they are closely packed, gathering together the unused
blocks that were scattered between the files. This is done so that there will be spaces large enough
to accommodate other files contiguously. Figure 8.3 shows the outcome after figure 8.2 has been
compacted.
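External fragmentation is easy to demonstrate with a tiny free-block map. The first-fit search below is one common (illustrative) way a contiguous allocator might look for space:

```python
def first_fit(free_map, n_blocks):
    """Return the start index of the first run of n_blocks consecutive
    free blocks, or None if no run is long enough."""
    run = 0
    for i, free in enumerate(free_map):
        run = run + 1 if free else 0
        if run == n_blocks:
            return i - n_blocks + 1
    return None

# Five blocks are free in total, but the longest contiguous run is two:
free_map = [True, True, False, True, False, True, True, False]
print(first_fit(free_map, 2))  # -> 0
print(first_fit(free_map, 3))  # -> None: external fragmentation; only
                               #    compaction would create a run of 3
```

Even though enough total space exists for a three-block file, no single run can hold it; compaction would slide the used blocks together so the five free blocks form one usable run.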

Disadvantages of contiguous storage


1. Files can’t be expanded unless there’s empty space available immediately following the file.
2. Room for expansion must be provided when a file is created and may not be used.
3. Fragmentation occurs (that is slivers of unused storage space).
4. Compaction may take time.
5. Files can’t be accessed while compaction is taking place.

Figure 8.3: Contiguous allocation after compaction

8.5.2 Non-contiguous Storage
Non-contiguous storage allows files to use any storage space available on the disk. A file's records
are stored contiguously if there is enough empty space; otherwise, any remaining records, and all
later additions to the file, are stored in other sections of the disk. The two methods of non-
contiguous storage are chained (linked) storage and indexed storage.

8.5.2.1 Chained (or linked) storage


In this method, allocation is done on the basis of individual blocks. Each block contains a pointer
to the next block (extent) in the chain. The file allocation table also stores the information
necessary for loading all the blocks of the file: the starting block and the length of the file. Since
the records of a file can be stored in any block and linked together, there is no external
fragmentation and no need for compaction. This type of storage is best for sequential files. It does
not support direct access, because there is no easy way to determine the exact location of a specific
record. Figure 8.4 shows what this type of storage looks like. From the file allocation table you
can see that the starting block for File B is block 1 and the file spans 5 blocks. So, to load the file,
the system starts from block 1. At the end of block 1 there will be a pointer specifying that the
next block is block 8; this can be seen from the arrow. The operating system goes through and
fetches all the blocks, one by one, to load the file completely. This is why, when you click on a
large document to open it, it sometimes takes a little while for the whole document to be loaded.

The linking can be at the storage level, where each extent points to the next one in sequence. The
directory entry here consists of the file name, the storage location of the first extent, the location
of the last extent, and the total number of extents, not counting the first. However, if linking is at
the directory level, each extent is listed with its physical address, size, and a pointer to the next
extent. A null pointer indicates that that block is the last one.

Figure 8.4: Chained storage
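The chain-following behaviour described above can be sketched with a toy disk. The block numbers mirror the File B example (start at block 1, then 8), with `None` playing the role of the null pointer that marks the last extent:

```python
# Each disk block holds its data plus a pointer to the next block.
disk = {
    1: ("B-part1", 8),      # block 1 points to block 8
    8: ("B-part2", 3),      # block 8 points to block 3
    3: ("B-part3", None),   # null pointer: last block of the file
}

def read_chained(start_block):
    """Follow the chain of pointers from the starting block (as the
    file allocation table directs), collecting the file's data."""
    parts, block = [], start_block
    while block is not None:
        data, next_block = disk[block]
        parts.append(data)
        block = next_block
    return parts

print(read_chained(1))  # -> ['B-part1', 'B-part2', 'B-part3']
```

Note that reaching the third record requires visiting the first two blocks; there is no way to jump straight to it, which is why chained storage does not support direct access.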

8.5.2.2 Indexed Storage


In indexed storage, every file has its own index block which contains the addresses of each disk
sector that make up the file. The index block lists each entry in the same order in which sectors
are linked. When a file is created, pointers in the index block are set to null. As each sector is
filled, the pointer is set to the appropriate sector address and this address is removed from the
empty space list.

In this storage method, the file allocation table contains a separate one-level index for each file
saved on the medium. This means that the file allocation table contains the block number of the
index of a file. The index has one entry for each portion/block allocated to the file. Figure 8.5
shows what this type of storage looks like.

Indexed storage supports both sequential and direct access to file records. However, it does not
necessarily improve the use of storage space, because each file must have an index block. For
larger files with more entries, several levels of indexes can be generated. To find a desired record,
the File Manager accesses the first index (highest level), which points to a second index (lower
level), which points to an even lower-level index and eventually to the data record.

Figure 8.5: Indexed allocation
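A single-level index can be sketched in a few lines; the block numbers are invented for the illustration:

```python
# One index block per file: the list of block numbers that hold the
# file's data, in order.
disk = {4: "rec0", 9: "rec1", 2: "rec2"}
index_block = [4, 9, 2]            # the file's index block

def read_sequential():
    # Walk the index in order for sequential access.
    return [disk[block] for block in index_block]

def read_direct(i):
    # Direct access: one index lookup, no chain of pointers to follow.
    return disk[index_block[i]]

print(read_direct(2))  # -> 'rec2'
```

Contrast this with chained storage: here record *i* is reached in one step via `index_block[i]`, at the cost of storing an index block for every file.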

8.6. Access Control Matrix


For the system to be able to control the access of files by different users, it creates a two-
dimensional matrix with all the files in the system on one side (the columns) and the users on the
other. For each file, the system specifies each user's access rights, which can be read, write,
execute, or delete.

R = Read Access, W = Write Access, E = Execute Access, D = Delete Access, and a dash (-) = Access Not Allowed

The table above shows the access control matrix showing access rights for each user for each file.
User 1 is allowed unlimited access to File 1 but is allowed only to read and execute File 4 and is
denied access to the three other files. Can you decipher the rights for the other users?
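An access control matrix is usually sparse, so it can be represented as a mapping from (user, file) pairs to sets of rights. The entries below mirror the User 1 example from the text; the names themselves are invented:

```python
ACCESS = {
    # (user, file) -> rights; a missing entry means access not allowed
    ("user1", "file1"): {"R", "W", "E", "D"},   # unlimited access
    ("user1", "file4"): {"R", "E"},             # read and execute only
    ("user2", "file1"): {"R"},
}

def allowed(user, file, right):
    """Check one cell of the access control matrix."""
    return right in ACCESS.get((user, file), set())

print(allowed("user1", "file4", "E"))  # -> True
print(allowed("user1", "file2", "R"))  # -> False: access denied
```

Storing only the non-empty cells is how a real system avoids keeping a full users-by-files table in memory.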

The above diagram is a typical module of a file management system showing how information is
passed from the File Manager to the Device Manager.

Questions
1. What are the two main ways that files are stored on any storage medium?
2. What is fragmentation?
3. In the chained storage, what does a null pointer signify?
4. Why is compaction performed by the operating system?
5. File A is stored using chained storage, and File B, which is the same size as File A, is stored
using indexed storage. Which of the files will occupy more space on the disk?
6. Describe linked storage with a suitable diagram.
7. Write short notes on any 5 types of File System.

UNIT 9 PERFORMANCE

Introduction
Performance is a measure of the effectiveness of the Operating System (O/S) in managing the
computer system's resources. In computing, computer performance is the amount of useful work
accomplished by a computer system. Outside of specific contexts, computer performance is
estimated in terms of the accuracy, efficiency, and speed of executing computer program instructions.
The next time you have your system on, press CTRL+ALT+DELETE and select the Task Manager;
you can view the performance of your system from there.

Learning Objectives
After completing this unit, you should be able to describe:
• The trade-offs to be considered when attempting to improve overall system performance.
• The roles of system measurement tools such as positive and negative feedback loops.
• Two system monitoring techniques.

9.1. System Performance Monitor


A system performance monitor (SPM) is a type of application that identifies, collects, monitors,
and reports on the overall operational health of a computer system. It is a performance monitoring
tool that enables end users, administrators, and organizations to gauge and evaluate the
performance of a given system. Why is there a need for performance monitoring and evaluation?
Because system design engineers need it, for purposes such as the following:

SELECTION EVALUATION:- this helps to decide whether getting a computer system from a
particular vendor is appropriate. Why would you prefer to buy
a Dell laptop instead of an HP laptop?
PERFORMANCE PROJECTION:- the goal is to estimate the performance of a system that
does not exist.
PERFORMANCE MONITORING:- here the evaluator accumulates performance data on an
existing system or component to be sure that it is meeting its
performance goals.
9.2 Performance
In performance, we focus on the overall speed of the system. The performance of all four areas
of the operating system goes hand in hand (do you recall the four areas of the O/S?). The
performance of one component affects that of the others. There are different types of performance
measures: some are user oriented (e.g. response time) while others are system oriented (e.g.
processor utilization).

9.2.1. Performance Measures


1. Turnaround Time:- (In batch systems) it is the time from when a job is submitted until
the job’s output is returned to the user.
2. Response Time:- (In online interactive system) it is the time from when a user presses an
enter key or clicks the mouse until the system begins to display a response.
3. System Reaction Time:- (In interactive system) This is defined as the time from which
the user presses enter key until the first time slice of service is given to that user’s request.
4. Throughput:- The number of jobs completed in a given amount of time by a computer
system.
5. Workload:- The measure of the amount of work that has been submitted to the system and
which the system must process in order to be seen to perform well.
6. Capacity:- i.e the measure of the maximum throughput a system may have.
7. Resource Utilization :- Usually given as a percentage of time that a resource is actually in
use. You can also see this from the task manager of your system.
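Several of these measures reduce to simple arithmetic. The job times below are invented for the illustration; the units (seconds) are an arbitrary choice:

```python
def turnaround_time(submitted, output_returned):
    # Batch systems: from job submission until its output is returned.
    return output_returned - submitted

def throughput(jobs_completed, interval):
    # Jobs completed per unit of time.
    return jobs_completed / interval

def resource_utilization(busy_time, total_time):
    # Percentage of time a resource is actually in use.
    return 100.0 * busy_time / total_time

print(turnaround_time(10, 45))       # -> 35 (time units)
print(throughput(120, 60))           # -> 2.0 (jobs per second)
print(resource_utilization(45, 60))  # -> 75.0 (%)
```

Note how the measures pull in different directions: a scheduler tuned purely for throughput may lengthen the turnaround time of individual jobs, which is the kind of trade-off this unit is about.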

9.2.2. Performance Evaluation Techniques


TIMING:- Provides a quick means of comparing computer hardware. It categorizes computers by
Billions of Instructions Per Second (BIPS), Millions of Instructions Per Second (MIPS),
or Trillions of Instructions Per Second (TIPS).

INSTRUCTION MIXES:- This is also used for quick comparisons. This technique uses a
weighted average of various instruction timings more suited to a particular application.

KERNEL PROGRAMS:- This is used at the installation stage. The program is run on different
systems and timed. By comparing the execution times on the different systems, you can
tell which performs better.

ANALYTICAL MODELS:- These are mathematical representations of computer systems or
components of computer systems.

BENCHMARKS :- These are real programs that are installed on the system being evaluated and
executed using real data. Since the performance of the benchmark is already known
before being used, the computer’s performance is then compared with what is expected
or known.

SYNTHETIC PROGRAMS:- These are also real programs that have been custom-designed to
exercise specific features of a machine. Two standard programs are Whetstone and
Dhrystone.

SIMULATION:- This is a technique in which the evaluator develops a computerized model of the
system being evaluated. This model is then run to reflect the behaviour of the system
being evaluated.

Simulators are of two types:

1. Event-Driven Simulators:- These are controlled by events that are made to occur in the
model according to probability.

2. Script-Driven Simulators:- These are controlled by empirically derived data, carefully
manipulated to reflect the anticipated behaviour of the simulated system.

9.2.3. Performance Monitoring Techniques
Performance monitoring is the collection and analysis of system performance information for
existing systems. It is useful in determining how a system performs in terms of throughp ut,
response time etc. It helps to locate bottlenecks quickly. Performance monitoring can be
accomplished by software or hardware techniques. Some of the techniques are:
Whetstone is a synthetic benchmark program designed to measure a processor's number-
crunching effectiveness on floating-point calculations. It is useful in comparing how effectively
processors run FORTRAN programs.

LINPACK programs (SLINPACK for single-precision calculation and DLINPACK for double-
precision) are genuine benchmark programs (not synthetic) that are also used for measuring
floating-point performance.

The Savage benchmark measures the accuracy of trigonometric floating-point operations rather
than their performance.

Dhrystone is a synthetic benchmark developed for measuring how effectively an architecture runs
systems programs. The Ada version has 100 statements. The execution speed of the Dhrystone
depends on the size of the cache: if the program fits in the cache, it will execute faster than
otherwise.

9.3. Bottleneck And Saturation


The operating system manages many resources, which interact and interface in complex ways to
affect the performance of the overall system. Sometimes one or more of these resources become
bottlenecks because they can no longer keep up with the work demanded of them. A bottleneck
tends to develop at a resource when the traffic of jobs or processes at that resource begins to
approach its capacity.

This means that the resource has become saturated, and processes competing for its attention
start to interfere with one another. An example of saturation is thrashing. To detect bottlenecks,
the system checks the resource's request queue. If the queue begins to grow quickly, the resource
has become saturated, because the arrival rate of requests is greater than the service rate. To
remove a bottleneck, we can increase the capacity of the resource, or add more resources of the
same type at the point of need.
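The queue-growth test for saturation can be written down directly. Treating arrival and service as fixed rates is of course a simplification; real workloads fluctuate:

```python
def is_saturated(arrival_rate, service_rate):
    # A resource saturates when requests arrive at least as fast as it
    # can serve them: from then on, its queue can only grow.
    return arrival_rate >= service_rate

def queue_growth(arrival_rate, service_rate, seconds):
    """How many requests pile up in `seconds` at the given fixed rates
    (per second); a non-saturated resource accumulates no backlog."""
    return max(0, (arrival_rate - service_rate) * seconds)

print(is_saturated(12, 10))      # -> True: a bottleneck is developing
print(queue_growth(12, 10, 30))  # -> 60 requests waiting after 30 s
```

This is the condition the system is checking when it watches a request queue: a steadily growing queue is the observable symptom of arrival rate exceeding service rate.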

9.4. Feedback Loops


This is a mechanism that prevents the CPU from spending more time doing overhead work than
executing jobs. Information about the current state of the system is fed back to the job scheduler,
which then either allows more jobs to enter the system or prevents more jobs from entering it.
Have you ever experienced a situation where you click on an application to load it, and the system
does not load the application? That may have been a result of the feedback received by the job
scheduler. There are two types of feedback loops:
1. Negative Feedback Loop:- The mechanism monitors the system and signals the appropriate
manager to slow down the arrival of requests because the system has become congested. For
example, in a distributed system, the job/output queue for a particular printer may be very long
while another printer's queue is much shorter; in such a situation, some of the jobs may be
re-routed to the printer with the shorter queue.
2. Positive Feedback Loop:- This, on the other hand, monitors the system to check whether it is
underutilized. If so, it signals the appropriate manager to increase the rate of job arrivals.
Positive feedback loops are used in paged virtual memory systems with caution, because if
too many jobs are loaded, this could result in thrashing, which leads to a decrease in processor
utilization. If positive feedback loops are not well designed, they can lead to an unstable
system state.

Questions:

1. Differentiate between Negative feedback loops and positive feedback loops


2. When does a resource become a bottleneck?
3. Write short notes on Turnaround Time, Response Time, System Reaction Time,
Throughput.

UNIT 10 SECURITY
Introduction
Security refers to providing protection for computer system resources such as the CPU, memory,
disk, software programs and, most importantly, the data/information stored in the computer
system. If a computer program is run by an unauthorized user, then he/she may cause severe
damage to the computer or the data stored in it. So, a computer system must be protected against
unauthorized access, malicious access to system memory, viruses, worms, etc. The various ways
this can be done are discussed in this lecture.

Learning Objectives
After completing this topic, you should be able to describe:
• The role of the operating system regarding system security
• The effects of system security practices on overall system performance
• The levels of system security that can be implemented and the threats posed by evolving
technologies

10.1 What is Security


Security is any measure taken to prevent unauthorized use. A total approach to security can be
implemented, where you consider both external and internal security. External security is
concerned with securing the computer facility from intruders and disasters. Internal security deals
with the various controls built into the hardware and the operating system to ensure the reliable
and uncorrupted operation of the computer system and the integrity of programs and data. There
are several approaches to securing the computer; these are mentioned in the following paragraphs.

User Interface Security:- Once a user is allowed physical access to a computer, the user's
access level should be granted by the operating system, once the operating system can
identify the user.

Operational Security:- This approach divides the jobs to be performed on the system among different personnel, e.g. super users, programmers, and operators. In this way, an operator cannot do what a programmer should do.

Surveillance:- This deals with monitoring and auditing the system as well as authenticating users.
Voice recognition or thumb print can also be used.

Threat Monitoring:- Here, users are not granted direct and full access to critical files on the system; an operating system routine called a surveillance program mediates such access. Accesses are requested through the operating system, which may either grant or deny them.

Amplification:- This is granting the surveillance program more access rights than the users it serves, so that it can perform its function on their behalf.

Password protection:- Here the user specifies a password, which the system stores; at every log on, the user must supply that password.

Auditing:- This is done to check whether fraudulent activities have occurred. An audit log is a permanent record of important events that occur in the computer system; it is stored in a protected area of the system.
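A minimal sketch of an append-only audit log; the event names and user are purely illustrative, and a real log would live in protected storage rather than a Python list:

```python
import time

# In a real system, the audit log lives in a protected area of storage.
audit_log = []

def audit(event: str, user: str) -> None:
    """Append a timestamped record of an important system event."""
    audit_log.append({"time": time.time(), "user": user, "event": event})

audit("failed_login", "bob")   # illustrative event and user name
```

Because records are only ever appended, an auditor can later reconstruct the order of events.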

Access control:-This is the key to internal security. It controls the access to stored data. Access
rights are granted to people or processes or programs. These rights are read, write, execute or
modify. An access control matrix is used for this.
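The access control matrix idea can be sketched in a few lines; the user names, file names, and rights below are purely illustrative:

```python
# Access-control matrix sketch: rows are subjects (users, processes,
# programs), columns are objects (files), and each cell holds the set
# of rights granted to that subject for that object.
ACM = {
    "alice": {"payroll.dat": {"read", "write"}, "report.txt": {"read"}},
    "bob":   {"report.txt": {"read", "execute"}},
}

def is_allowed(subject: str, obj: str, right: str) -> bool:
    """Grant access only if the right appears in the matrix cell."""
    return right in ACM.get(subject, {}).get(obj, set())
```

For example, `is_allowed("alice", "payroll.dat", "write")` is true, while any request from an unlisted subject is denied by default.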

Security Kernel:- The security of the operating system depends on securing the following functions: access control, logging, monitoring, real storage management, virtual storage management, and the file system. These functions occupy a large portion of the operating system code, which makes it difficult to keep the kernel small. The security-kernel approach makes sure that systems are secure by building security into the system at the design stage. It is imperative to make the kernel of the operating system secure.

10.2 System management


To evaluate an operating system, we look at the overall performance of the system and how the effectiveness of one piece of hardware can affect another. We also consider who will be using the operating system, on what hardware, and for which purpose. Operating system security plays a primary role in protecting memory and files, authenticating users, and controlling access to data. Consistent protection means that the system meets standard security requirements and has the required functionality to enforce security practices. A vulnerability at the operating system level opens the entire system to attack, and as the operating system's complexity and power increase, it becomes more vulnerable. Because of this, the system administrator must be on guard.

Security System administrator’s role


Security systems administrators are in charge of the daily operation of security systems, and can
handle things like systems monitoring and running regular backups; setting up, deleting and
maintaining individual user accounts; and developing organizational security procedures. A
security systems administrator's responsibilities may include the following:
1. Defending systems against unauthorized access
2. Performing vulnerability and penetration tests
3. Monitoring traffic for suspicious activity
4. Configuring and supporting security tools (firewalls, antivirus, and IDS/IPS software)
5. Implementing network security policies
6. Analyzing and establishing security requirements
7. Identifying threats and working on steps to defend against them
8. Training employees in security awareness and procedures
9. Developing and updating disaster recovery protocols
10. Conducting security audits
11. Making policy recommendations
12. Providing technical security advice
13. Consulting with staff, managers and executives on best security practices
14. Monitoring networks for security breaches and investigating violations as they occur
15. Developing and supporting organizational security standards, best practices, preventative measures, and disaster recovery plans
16. Conducting penetration tests (simulating cyberattacks to find vulnerabilities before attackers can find them)
17. Reporting security breaches to users, as necessary, and to upper management
18. Implementing and updating software to protect information
19. Staying up to date on IT security trends and information
20. Recommending security enhancements to management and C-suite executives

10.3 System Survivability


The survivability of a system is its capability to fulfil its mission, in a timely manner, in the presence of attacks, failures, or accidents. Survivability is the ability of a system to adapt to and recover from a serious failure or, more generally, to retain service functionality in the face of threats. The threat could be a small local event, such as an equipment failure, from which the system reconfigures itself essentially automatically, on a time scale of seconds to minutes; or it could be a major event, such as a large natural disaster or a capable attack, on a time scale of hours to days or even longer. Another important part of survivability is robustness: while survivability has to do with coping with the impact of events, robustness has to do with reducing that impact in the first place. The key properties of a survivable system are resistance to attacks, recognition of attacks and the resulting damage, recovery of essential services after an attack has occurred, and adaptation and evolution of the system's defense mechanisms to mitigate future attacks.

The four key properties of a survivable system, and example strategies for achieving each:

Key Property | Description | Example Strategies
1. Resistance to attacks | Strategies for repelling attacks | Authentication; access controls; encryption; message filtering; system diversification; functional isolation
2. Recognition of attacks and damage | Strategies for detecting attacks and evaluating damage | Intrusion detection; integrity checking
3. Recovery of essential and full services after attack | Strategies for limiting damage; restoring compromised information or functionality; maintaining or restoring essential services within mission time constraints; restoring full services | Redundant components; data replication; system backup and restoration; contingency planning
4. Adaptation and evolution to reduce effectiveness of future attacks | Strategies for improving system survivability based on knowledge gained from intrusions | Intrusion recognition patterns

10.4 Levels of Protection
The system administrator evaluates the intrusion risk of each computer configuration; this depends on the level of connectivity given to the system. There are four levels at which a system must be protected:
1. Physical - The easiest way to steal data is to pocket the backup tapes. Also, access to the root
   console will often give the user special privileges, such as rebooting the system from
   removable media. Even general access to terminals in a computer room offers some
   opportunities for an attacker, although today's high-speed networking environment
   provides more and more opportunities for remote attacks.
2. Human - There is some concern that the humans who are allowed access to a system should
be trustworthy, and that they cannot be coerced into breaching security. However, more and
more attacks today are made via social engineering, which basically means fooling trustworthy
people into accidentally breaching security. Some of the methods used in fooling people are:
 Phishing involves sending an innocent-looking e-mail or web site designed to fool people
into revealing confidential information. E.g. spam e-mails pretending to be from e-Bay,
PayPal, or any of a number of banks or credit-card companies.
 Dumpster Diving involves searching the trash or other locations for passwords that are
written down. (Note: Passwords that are too hard to remember, or which must be changed
frequently are more likely to be written down somewhere close to the user's station.)
 Password Cracking involves obtaining users' passwords, either by watching them type in
their passwords, knowing something about them (like a pet's name), or simply trying all
words in common dictionaries. (Note: "Good" passwords should contain a minimum
number of characters, include non-alphabetic characters, not appear in any dictionary
in any language, and be changed frequently. Note also that it is proper etiquette to
look away from the keyboard while someone else is entering their password.)
3. Operating System -The O/S must protect itself from security breaches, such as runaway
processes (denial of service), memory-access violations, stack overflow violations, the
launching of programs with excessive privileges, and many others.
4. Network -As network communications become ever more important and pervasive in modern
computing environments, it becomes ever more important to protect this area of the system.
(Both protecting the network itself from attack, and protecting the local system from attacks
coming in through the network.) This is a growing area of concern as wireless communicatio ns
and portable devices become more and more prevalent.
The table below shows a simplified comparison of the security protection required for three typical configurations. Notice that system vulnerabilities accumulate as the computer's level of connectivity increases.

Configuration | Ease of Protection | Relative Risk | Vulnerabilities
Single computer (without email or internet) | High | Low | Compromised passwords, viruses
LAN connected (without internet) | Medium | Medium | Sniffers, spoofing (+ passwords, viruses)
LAN connected (with internet) | Low | High | Emails, web servers, FTP, Telnet (+ sniffers, spoofing, passwords, viruses)

10.5 Backup and Recovery
Backup and recovery describe the process of creating and storing copies of data that can be used
to protect organizations against data loss. This is sometimes referred to as operational recovery.
Recovery from a backup typically involves restoring the data to the original location, or to an
alternate location where it can be used in place of the lost or damaged data. A proper backup copy
is stored in a separate system or medium, such as tape, from the primary data to protect against the
possibility of data loss due to primary hardware or software failure. Backups protect against
human errors, hardware failure, virus attacks, power failure, and natural disasters. Backups can
help save time and money if these failures occur.

The security system should be able to recover from successful or attempted breaches of security. When sufficient backup and recovery policies are in place, and archiving techniques are standard operating procedure, recovering from an unforeseen situation is not difficult. Many system managers use a layered backup schedule, backing up the entire system according to their policies. As an extra measure of safety, copies of complete system backups are stored in a safe off-site location. Backups become essential when the system becomes unavailable because of a natural disaster or when a computer virus infects the system. If the infection is discovered early, eradication software can be run and damaged files reloaded from the backup copies; any changes made since the files were backed up will have to be regenerated. Backups, with one set stored off-site, are also crucial to disaster recovery. The disaster could come from anywhere. Here are just a few of the threats:
• water from a fire upstairs
• fire from an electrical connection
• malfunctioning server
• corrupted archival media
• intrusion from unauthorized users
Policies, procedures and regular user training are essential elements of system management. Many
of the system failures that occur are caused by honest mistakes made by well-intentioned users and
not by malicious intruders. Any written security procedures would have to recommend frequent
password changes, reliable backup procedures, guidelines for loading new software, compliance
with software licenses, network safeguards, guidelines for monitoring network activity, and rules
for terminal access.

10.6 Security Breaches


A security breach is any incident that results in unauthorized access to computer data, applications, networks, or devices. It results in information being accessed without authorization and typically occurs when an intruder is able to bypass security mechanisms. Once a system is infiltrated, the intruders can steal data, install viruses, and compromise software. A security breach can be a complete disaster for a managed services provider (MSP) and its customers. A security breach can also be described as a gap in system security, and it may or may not be malicious. Intrusions can result from uneducated users or from unauthorized access to system resources; they can be a purposeful disruption of system operation or purely accidental. Examples of accidental breaches include hardware malfunctions, undetected errors in the operating system or applications, and natural disasters. Any security breach severely damages a system's credibility.

10.6.1 Unintentional Intrusions
An unintentional intrusion is a security breach or data modification that does not result from a planned intrusion. Examples include accidental, incomplete modification of data; access to data records by non-synchronized processes; modification of some record fields; and errors due to incorrect storage of data values, such as a field not large enough to hold the numeric value stored in it (Figure 10.1 shows this).

Figure 10.1: Error in storing data in small fields

From Figure 10.1, we see that the original data value is shown (a) in a field large enough to hold it. If the field is too small, FORTRAN (b) replaces the data with asterisks, while COBOL truncates the higher-order digits (the values to the left of the dotted line) and stores only the digits that remain.

10.6.2 Intentional Attacks


Intentional threats, or attacks, are purposeful actions resulting in the theft of, or damage to, computer resources, equipment, and data. Intentional threats include viruses, denial of service attacks, theft of data, sabotage, and destruction of computer resources. Attack types include intentional unauthorized access, denial of service attacks, browsing, wiretapping, repeated trials, trap doors, trash collection, viruses, worms, Trojans, bombs, and blended threats.

The forms of intentional unauthorized access and attack are as follows:


Denial of service (DoS) attacks:- These are synchronized attempts to deny service to authorized users by causing the computer to perform repeated unproductive tasks.
Browsing:-This occurs when unauthorized users gain access to search through secondary storage
directories or files for information they should not have the privilege to read.
Wiretapping occurs when unauthorized users monitor or modify transmissions on a communication line. Wiretapping can be passive or active.
Passive wiretapping: transmissions are merely monitored. Reasons for passive wiretapping include:
• Copying data while bypassing authorization procedures
• Collecting specific information (such as passwords)
Active wiretapping: the data are modified. Two methods of active wiretapping are "between-lines transmission" and "piggyback entry".

Repeated trials describe the way an intruder enters the system by guessing authentic passwords.

Trap doors:- These include backdoor passwords: unspecified and undocumented system entry points, usually installed for future use by a diagnostician or programmer. They leave the system vulnerable to future intrusion.

Trash collection, also known as dumpster diving, is the use of discarded materials (such as disks, CDs, and printouts) to enter a system illegally.

Malicious computer attacks usually violate state and federal laws. Those convicted typically pay significant fines, are sometimes given jail terms, and may have their computer equipment confiscated. It is a punishable offence.

Viruses are small programs that can alter computer operations; they run without the user's permission. A virus meets two criteria:
1. It must be self-executing; that is, it places its own code in the path of another program.
2. It must be self-replicating, usually by copying itself from infected files to clean files.
Viruses can infect desktop computers and network servers, spreading each time the host file is executed. A virus is usually operating-system specific and spreads using a wide variety of applications.

A macro virus attaches itself to a template (such as NORMAL.DOT), which in turn attaches to word-processing documents.

A worm is a memory-resident program that copies itself from one system to the next without needing any aid from an infected program file. Its effect is slower processing of real work, and it is especially destructive on networks.

A Trojan is a destructive program disguised as a legitimate or harmless program; it allows the program's creator secret access to the system.

Spoofing
This is a security threat that relies on cleartext transmission, whereby the assailant fakes the IP address of an internet server and changes the addresses recorded in packets sent over the internet. In this way, unauthorized users disguise themselves as friendly sites.

A logic bomb is a destructive program with a fuse (a triggering event); it can spread unnoticed throughout a network.
A time bomb is a destructive program that is triggered at a specific time.
A blended threat combines the characteristics of logic bombs and time bombs: it is a single program that includes a virus, worm, Trojan, spyware, and other malicious code.

Characteristics of a blended threat:

1. It harms the affected system
2. It spreads to other systems using multiple methods
3. It attacks other systems from multiple points
4. It propagates without human intervention
5. It exploits vulnerabilities of target systems
Protection against blended threats requires a combination of defences together with regular patch management.

10.7 Guessing Passwords


The average time required for a human and for a computer to discover passwords of up to 10 alphabetic characters (A-Z) using brute force is shown in the table below.

No. of Alphabetic Characters | Possible Combinations | Human: avg. time to discovery at 1 try/second | Computer: avg. time to discovery at 1 million tries/second
1 | 26 | 13 seconds | 0.000013 seconds
2 | 26^2 = 676 | 6 minutes | 0.000338 seconds
3 | 26^3 = 17,576 | 2.5 hours | 0.008788 seconds
8 | 26^8 = 208,827,064,576 | 6,640 years | 58 hours
10 | 26^10 ≈ 1.4 × 10^14 | 4.5 million years | 4.5 years
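The larger entries can be reproduced with a short worst-case calculation (exhausting every combination of a given charset and length):

```python
def seconds_to_crack(charset_size: int, length: int, tries_per_second: int) -> float:
    """Worst-case time to try every password of the given length."""
    return charset_size ** length / tries_per_second

# 8 alphabetic characters (A-Z) at one million tries per second:
hours = seconds_to_crack(26, 8, 1_000_000) / 3600        # roughly 58 hours
# The same password at one human try per second:
years = seconds_to_crack(26, 8, 1) / (3600 * 24 * 365)   # roughly 6,600 years
```

Each extra character multiplies the attacker's work by the size of the character set, which is why password length matters so much.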

Questions
1. What are the different levels of protecting a system?
2. If a password is made up of 5 characters, how many possible combinations of the characters are
there?
3. Can a virus harm your system? How?

UNIT 11 System Protection

Introduction
Threats can come from outsiders (those outside the organization) as well as from insiders (employees or others who have access to the system). They can include theft of intellectual property or other confidential or sensitive information, fraud, and acts of system sabotage. Hence the need for system protection.
Learning Objectives
After completing this topic, you should be able to describe:
• The importance of system protection
• The role of education and ethical practices in system security

11.1 Protection
Protection refers to a mechanism that controls the access of programs, processes, or users to the resources defined by a computer system. Protection can be seen as a helper to a multiprogramming operating system, allowing many users to safely share a common logical name space, such as a directory of files. There is no single guaranteed method of system protection.
System vulnerabilities include:
- File downloads and e-mail exchange
- Vulnerable firewalls
- Improperly configured internet connections
Security issues require continuous attention, and system protection is multifaceted.

11.2 Protection methods


These are the use of antivirus software, firewalls, restrictive access, and encryption.
The use of antivirus software:
• combats viruses only
• can be used as a preventive measure, a diagnostic measure, or both
o Preventive programs calculate a checksum for each production program
o Diagnostic software compares file sizes and looks for replicating instructions or unusual file activity
• also removes an infection and leaves the remainder of the file intact
• cannot repair worms, Trojans, or blended threats, since such programs are malicious code in their entirety
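The preventive checksum technique can be sketched with Python's standard hashlib module; the byte strings below stand in for real program files:

```python
import hashlib

def checksum(program_bytes: bytes) -> str:
    """Return a digest that changes if even one byte of the program changes."""
    return hashlib.sha256(program_bytes).hexdigest()

# Record a baseline while the program is known to be clean ...
baseline = checksum(b"known-clean program image")

# ... then, on each later check, recompute and compare.
def is_modified(current_bytes: bytes) -> bool:
    return checksum(current_bytes) != baseline
```

Any infection that alters the program file produces a different digest, so the mismatch flags the file for further inspection.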
A firewall is a set of hardware and/or software designed to protect a system; it disguises IP addresses from unauthorized users. The firewall sits between the internet and the network, blocking curious inquiries and potentially dangerous intrusions.

In a typical university system, for example, the firewall sits between the campus networks and the Internet, filtering requests for access.

80
Typical firewall tasks
- Log activities in accessing the internet
- It maintains access control based on senders’ or receivers’ IP addresses.
- It maintains access control based on services requested for.
- It hides internal network from unauthorized users.
- It verifies that virus protection is installed and enforced on the system.
- It performs authentication based on source of a request from the internet.
Packet filtering:- Here, the firewall reviews the header information of incoming and outgoing internet packets, verifying the source address, destination address, and protocol authenticity.
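A packet-filter rule check might look like the following sketch; the addresses, protocols, and dictionary-based header format are invented for illustration:

```python
# Hypothetical firewall rules: the allowed addresses and protocols are
# illustrative, not drawn from any real configuration.
ALLOWED_SOURCES = {"10.0.0.5", "10.0.0.6"}
ALLOWED_PROTOCOLS = {"tcp", "udp"}

def accept_packet(header: dict) -> bool:
    """Review header fields and accept only packets that pass every check."""
    return (header.get("src") in ALLOWED_SOURCES
            and header.get("proto") in ALLOWED_PROTOCOLS)
```

A packet from an unlisted source, or using an unlisted protocol, is simply dropped.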
Proxy server:- This hides important network information from outsiders by making the network server invisible. It determines the validity of network-access requests and is itself invisible to users. This is critical to firewall success.

Authentication
Authentication is the verification of the authorization of individuals accessing the system. A popular authentication tool is Kerberos, a network authentication protocol designed to provide strong authentication for client/server applications, answering the need for password encryption to improve network security. The Kerberos protocol uses strong cryptography (the science of coding messages) so that a client can prove its identity to a server, and vice versa, across an insecure network connection. Once authentication is completed, both client and server can encrypt all of their subsequent communications to assure privacy and data integrity.

Encryption
This is an extreme protection method for data: sensitive data are put into a secret code. For a system to communicate with another system, the data are encrypted, transmitted, decrypted, and processed. With public-key encryption, the sender encrypts the message using the receiver's public key, while the receiver uses the matching private key to decode it.
Disadvantages
- It increases system overhead.
- The system becomes dependent on the encryption process itself.
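The encrypt, transmit, decrypt cycle can be illustrated with a toy XOR stream cipher. This is a symmetric toy, not real cryptography and not the public/private-key scheme described above; it only demonstrates that decryption reverses encryption:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; applying the same key twice
    restores the original data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_cipher(b"transfer 500 naira", b"sharedkey")  # encrypt, transmit...
plaintext = xor_cipher(ciphertext, b"sharedkey")              # ...then decrypt
```

Real systems use vetted algorithms (such as AES or RSA) instead; an XOR cipher with a short repeating key is trivially breakable.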

Sniffers
These are programs residing on a computer attached to a network. They peruse data packets as they pass by, examining each packet for specific information. They are particularly problematic in wireless networks: anyone with a wireless device can detect a wireless network that is within range, and if the network is passing cleartext packets, it is quite easy to intercept, read, modify, and resend them.

Password Management
The basic techniques for protecting hardware and software are to invest in good passwords and to engage in careful user training, though this is not as simple as it sounds. Passwords are forgettable, unlikely to be changed often, commonly shared, and considered bothersome by many of the people who use them. A good password must be unusual, memorable, changed often, and stored in encrypted form. In addition, the password should be at least 8 characters long and should contain characters beyond the 26 English letters. These properties directly affect a password's ability to survive cracking attempts.
Number of combinations of passwords depending on their length and available character set

Character set \ Password length   | 2     | 4     | 6     | 8     | 10     | 12
ASCII characters (128)            | 128^2 | 128^4 | 128^6 | 128^8 | 128^10 | 128^12
Printable characters (95)         | 95^2  | 95^4  | 95^6  | 95^8  | 95^10  | 95^12
Alphanumeric characters (62)      | 62^2  | 62^4  | 62^6  | 62^8  | 62^10  | 62^12
Lowercase letters and numbers (36)| 36^2  | 36^4  | 36^6  | 36^8  | 36^10  | 36^12
Lowercase letters (26)            | 26^2  | 26^4  | 26^6  | 26^8  | 26^10  | 26^12

Good password techniques


- Use a minimum of eight characters, including numbers and non-alphanumeric characters
- Create a misspelled word, or join bits of phrases into a word that is easy to remember
- Follow a certain pattern on the keyboard
- Create acronyms from memorable sentences
- Use upper- and lowercase characters (if allowed)
- Never use a word that appears in any dictionary
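The rules above can be approximated in a short checking routine; the exact thresholds and checks are illustrative:

```python
import string

def is_good_password(pw: str, dictionary: frozenset = frozenset()) -> bool:
    """Apply the guidelines: minimum length, mixed character classes,
    and no plain dictionary words."""
    return (
        len(pw) >= 8
        and any(c.isupper() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
        and pw.lower() not in dictionary
    )
```

A routine like this is typically run when the user chooses a new password, rejecting weak candidates before they are ever stored.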

Password Alternatives
• Smart cards
- A credit-card-sized device; it requires something you have and something you know
- Displays a constantly changing multi-digit number, synchronized with an identical number generator in the system
- The user must enter the number appearing on the smart card; for added protection, the user also enters a secret code
- The user is admitted to the system if both the number and the code are validated

• Biometrics
- The science and technology of identifying individuals based on each person's unique biological characteristics
- Current research focuses on analysis of the human face, fingerprints, hand measurements, iris/retina, and voice prints
- Positively identifies the person being scanned
- The critical factor is reducing the margin of error
- Expensive

• Graphics and pattern clicks
- Establish a sequence of clicks on a photo or illustration, and repeat the sequence to gain access
- Eliminates keyboard entries

Summary
 We must emphasize the importance of a secure system
 A system is only as good as the integrity of the data stored on it
 A single security breach damages the system's integrity
 Damaged integrity threatens the viability of the best-designed systems, their managers, their designers, and their users
 Vigilant security precautions are essential

UNIT 12 Concurrent Programming

Until now we’ve looked at multiprocessing as several jobs executing at the same time on a single
processor or on multi-processors. Multiprocessing can also refer to one job using several
processors to execute sets of instructions in parallel.

Concurrent Processing Systems


Concurrent processing requires a programming language and a computer system that can support this type of construct. Most programming languages are serial in nature: instructions are executed one at a time. To resolve an arithmetic expression, every operation is done in sequence, following the order prescribed by the programmer and compiler.

Applications of Concurrent Programming

A= 3 * B * C + 4 / (D + E) ** (F – G)

STEP NO. | OPERATION   | RESULT
1        | (F - G)     | Store difference in T1
2        | (D + E)     | Store sum in T2
3        | (T2) ** (T1)| Store power in T1
4        | 4 / (T1)    | Store quotient in T2
5        | 3 * B       | Store product in T1
6        | (T1) * C    | Store product in T1
7        | (T1) + (T2) | Store sum in A
The sequential computation of the expression requires several steps. (In this example, there are
seven steps, but each step may involve more than one machine operation)

All equations follow a standard of order.


 You first perform all calculations in parentheses.
 Second, you calculate all exponents
 Third, you perform all multiplication and division
 Fourth, you perform the addition and subtraction
 For each step you go from left to right

If you performed the calculations in some other order, you could come up with the wrong answer.
For many computational purposes, serial processing is sufficient; it’s easy to implement and fast
enough for most users.

Arithmetic expressions can be processed differently if we use a language that allows for concurrent
processing.

Two constructs, COBEGIN and COEND, indicate to the compiler which instructions can be processed concurrently:
COBEGIN
T1= 3 * B
T2= D + E
T3= F – G
COEND

COBEGIN
T4= T1 * C
T5= T2 ** T3
COEND
A= T4 + 4 / T5

A= 3 * B * C + 4 / (D + E) ** (F – G)

STEP NO. | PROCESSOR | OPERATION    | RESULT
1        | 1         | 3 * B        | Store product in T1
1        | 2         | (D + E)      | Store sum in T2
1        | 3         | (F - G)      | Store difference in T3
2        | 1         | (T1) * C     | Store product in T4
2        | 2         | (T2) ** (T3) | Store power in T5
3        | 1         | 4 / (T5)     | Store quotient in T1
4        | 1         | (T4) + (T1)  | Store sum in A
With concurrent processing, the seven-step procedure can be processed in only four steps, which
reduces execution time

With this system, we’ve increased the computation speed, but we’ve also increased the complexity
of the programming language and the hardware. We’ve placed a large burden on the programmer
to explicitly state which instructions can be executed in parallel.
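In a language without COBEGIN/COEND, the same grouping can be approximated with threads. A Python sketch using a thread pool, with arbitrary sample values for the variables:

```python
from concurrent.futures import ThreadPoolExecutor

B, C, D, E, F, G = 2, 3, 5, 1, 4, 2   # arbitrary sample values

with ThreadPoolExecutor() as pool:
    # First COBEGIN/COEND block: three independent sub-expressions in parallel
    f1 = pool.submit(lambda: 3 * B)
    f2 = pool.submit(lambda: D + E)
    f3 = pool.submit(lambda: F - G)
    t1, t2, t3 = f1.result(), f2.result(), f3.result()

    # Second COBEGIN/COEND block
    f4 = pool.submit(lambda: t1 * C)
    f5 = pool.submit(lambda: t2 ** t3)
    t4, t5 = f4.result(), f5.result()

A = t4 + 4 / t5   # the final addition must be done serially
```

Each `with`-block boundary plays the role of COEND: execution does not proceed until every submitted task in the group has finished.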

With a true concurrent processing system, the instruction is coded as a single expression. It is the
compiler that translates the algebraic expression into separate instructions and decides which steps
can be performed in parallel and which in serial mode. The equation Y = A+ B * C + D could be
rearranged by the compiler as A + D + B * C so that two operations A + D and B * C would be
done in parallel, leaving the final addition to be calculated last. Concurrent processing can also
dramatically reduce the complexity of working with array operations within loops.

To perform an array operation within a loop in three steps, the instructions might say:

for (j = 1; j <= 3; j++)
    a(j) = b(j) + c(j);

If we use three processors, the instruction can be performed in a single step:

Processor 1 Processor 2 Processor 3


a(1)=b(1)+c(1) a(2)=b(2)+c(2) a(3)=b(3)+c(3)
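This single parallel step can be sketched with a thread pool, one worker per array element (the array values are arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor

b = [10, 20, 30]
c = [1, 2, 3]

# One worker per element: every a(j) = b(j) + c(j) is computed in the
# same parallel step instead of three sequential loop iterations.
with ThreadPoolExecutor(max_workers=3) as pool:
    a = list(pool.map(lambda j: b[j] + c[j], range(3)))
```

Because the three additions are independent, no synchronization between workers is needed beyond collecting the results.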

Consider performing a matrix multiplication. Using one processor, multiplying two 3 x 3 matrices requires 27 steps. By multiplying several elements of the first row of Matrix 1 by the corresponding elements in Matrix 2, three processors could cooperate to resolve the equation in fewer steps. With two processors it takes 18 steps to perform the calculations in parallel; with three, it would require even fewer.

Concurrent processing does not necessarily cut processing activity in direct proportion to the
number of processors available.

Consider conducting parallel searches in databases:


• Database searching is a common non-mathematical application for concurrent processing
systems.
• The entire file can be split into discrete sections with one processor allocated to each
section. This results in a significant reduction in search time.
• Once the item is found, all processors can be deallocated and set to work on the next task.
• A concurrent search is faster than if a single processor was allocated to search the database.
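The sectioned search can be sketched as follows, using threads in place of separate processors; the records and target value are invented:

```python
from concurrent.futures import ThreadPoolExecutor

records = list(range(1000))        # stand-in for a database file
target = 742

# Split the file into discrete sections, one worker per section.
sections = [records[i:i + 250] for i in range(0, len(records), 250)]

def search(section):
    """Each worker scans only its own section of the file."""
    return [r for r in section if r == target]

with ThreadPoolExecutor(max_workers=4) as pool:
    hits = [r for found in pool.map(search, sections) for r in found]
```

In the ideal case, four workers each scan a quarter of the file, so the elapsed search time approaches one quarter of the single-processor time.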

Consider sorting or merging files:


• By dividing a large file into sections, each with its own processor, every section can be
sorted at the same time.
• Then pairs of sections can be merged together until the entire file is whole again- and
sorted.
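The sort-then-merge approach can be sketched with two concurrent workers and the standard-library merge routine:

```python
from concurrent.futures import ThreadPoolExecutor
from heapq import merge

data = [9, 3, 7, 1, 8, 2, 6, 4]

# Divide the file into sections and sort each section concurrently ...
sections = [data[:4], data[4:]]
with ThreadPoolExecutor(max_workers=2) as pool:
    sorted_sections = list(pool.map(sorted, sections))

# ... then merge the sorted sections back into one sorted whole.
result = list(merge(*sorted_sections))
```

The final merge is a single sequential pass, but the expensive sorting work was done on both sections at the same time.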

UNIT 13 Study of Various Existing Operating systems

Throughout this book, we have studied the various components of a generic operating system. We looked at the various managers and how they interact, and we studied the way the managers handle users' requests and the allocation of resources. Above all, we learned that all applications running on our various digital computing devices are treated as processes.

In this concluding lecture, you are expected to study various existing operating systems and, in particular, how they handle processes. It will interest you to know that in mobile operating systems such as Android, a process can be in a state such as Foreground, Visible, Service, or Background.

Studying these systems will help you better understand all that you have learned in this course. There are various resources online that can help. Get online and study the following operating systems.

1. Windows
2. Unix
3. iOS
4. Android
5. Blackberry

