CSC 421 Operating System for DLC2
CSC 421
(Operating System II)
By
Elizabeth O. Ogunseye
And
Nancy C. Woods
University of Ibadan
Ibadan, Nigeria.
GENERAL INTRODUCTION AND COURSE OBJECTIVES
All operating systems have certain core items in common: each must manage memory,
processing capability, devices and peripherals, files, and networks.
TABLE OF CONTENTS
4.2.2 Loosely Coupled Configuration........................................................................................................................34
4.2.3 Symmetric Configuration ...................................................................................................................................34
UNIT 5 PROCESS SYNCHRONIZATION ...................................................................................................................36
5.1 INTRODUCTION ...............................................................................................................................................................36
5.2 HOW DO SEVERAL PROCESSORS COOPERATE..............................................................................................................36
5.3 SYNCHRONIZATION MECHANISMS...............................................................................................................................37
5.3.1 Test-and-Set..........................................................................................................................................................37
5.3.2 Wait and Signal ...................................................................................................................................................38
5.3.3 Semaphores ..........................................................................................................................................................38
5.4 PROCESS COOPERATION ................................................................................................................................................39
5.4.2 READERS AND WRITERS...........................................................................................................................40
UNIT 6 PROCESS MANAGEMENT (DEADLOCK) ................................................................................................41
6.1 DEADLOCK ......................................................................................................................................................................41
6.1.1 Four Necessary Conditions for Deadlock.......................................................................................................41
6.2 SEVEN CASES OF DEADLOCKS......................................................................................................................42
6.2.1 Case 1: Deadlocks on File Requests...............................................................................................................42
6.2.2 Case 2: Deadlocks in Databases.....................................................................................................................42
6.3 MODELING DEADLOCKS USING DIRECTED GRAPHS.................................................................................................43
6.4 STRATEGIES FOR HANDLING DEADLOCKS..................................................................................................................44
6.4.1 Deadlock Prevention ..........................................................................................................................................44
6.4.2 Deadlock Avoidance ...........................................................................................................................................45
6.4.3 Deadlock Detection.............................................................................................................................................46
6.4.4 Deadlock Recovery .............................................................................................................................................47
6.5 STARVATION ...................................................................................................................................................................47
6.6 RACE CONDITION ...........................................................................................................................................................47
UNIT 7 DEVICE MANAGEMENT .................................................................................................................................49
INTRODUCTION .......................................................................................................................................................................49
7.1 DEVICE MANAGEMENT BASIC FUNCTIONS .................................................................................................49
7.2. SYSTEM DEVICES ....................................................................................................................................................49
7.2.1. Dedicated Devices...............................................................................................................................................49
7.2.2. Shared Devices ....................................................................................................................................................50
7.2.3. Virtual Devices .........................................................................................................................................................50
7.3. STORAGE MEDIA.......................................................................................................................................50
7.3.1. Sequential Access Storage Media.....................................................................................................................50
7.3.2. Direct Access Storage Devices (DASD) ..........................................................................................................52
7.4. COMPONENTS OF THE I/O SUBSYSTEM ..................................................................................................55
7.4.1. Communication Among Devices.......................................................................................................................56
7.4.2. Direct memory access (DMA) ...........................................................................................................................57
7.5. MANAGEMENT OF I/O REQUESTS...........................................................................................................57
7.5.1. Device Handler Seek Strategies for hard disk with movable R/W head ....................................................58
UNIT 8 FILE MANAGEMENT ........................................................................................................................................60
INTRODUCTION .......................................................................................................................................................................60
8.1. WHAT IS A FILE?........................................................................................................................................60
8.1.1. Characteristics of Files ......................................................................................................................................61
8.2. DIRECTORY / VOLUME STRUCTURE .......................................................................................................61
8.3. FILE SYSTEM..............................................................................................................................................61
8.4. FILE ORGANIZATION.................................................................................................................................................63
8.4.1. Record Format.....................................................................................................................................................63
8.4.2. Access Methods ...................................................................................................................................................63
8.5. PHYSICAL STORAGE ALLOCATION .........................................................................................................63
8.5.1. Contiguous storage .............................................................................................................................................64
8.5.2 NON-CONTIGUOUS STORAGE...................................................................................................................65
8.5.2.1 Chained (or linked) storage .........................................................................................................................65
8.5.2.2 Indexed Storage..............................................................................................................................................65
8.6. ACCESS CONTROL MATRIX .....................................................................................................................66
UNIT 9 PERFORMANCE ..................................................................................................................................................68
9.1. SYSTEM PERFORMANCE MONITOR..................................................................................................................................68
9.2 PERFORMANCE....................................................................................................................................................................68
9.2.1. Performance Measures ......................................................................................................................................68
9.2.2. Performance Evaluation Techniques ................................................................................................................69
9.2.3. Performance Monitoring Techniques ..............................................................................................................70
9.3. BOTTLENECK AND SATURATION ......................................................................................................................................70
9.4. FEEDBACK LOOPS ...........................................................................................................................................................70
UNIT 10 SECURITY .........................................................................................................................................................72
10.1 WHAT IS SECURITY ................................................................................................................................................72
10.2 SYSTEM MANAGEMENT ..........................................................................................................................................................73
10.3 SYSTEM SURVIVABILITY ..........................................................................................................................................................74
10.4 LEVELS OF PROTECTION ..................................................................................................................................................75
10.5 BACKUP AND RECOVERY ........................................................................................................................................................76
10.6 SECURITY BREACHES.......................................................................................................................................................76
10.6.1 Unintentional Intrusions ........................................................................................................................................77
10.6.2 Intentional Attacks.........................................................................................................................................77
10.7 GUESSING PASSWORDS .........................................................................................................................................................79
UNIT 11 SYSTEM PROTECTION...............................................................................................................................80
INTRODUCTION .......................................................................................................................................................................80
11.1 PROTECTION ..................................................................................................................................................................80
11.2 PROTECTION METHODS ..................................................................................................................................................80
Firewalls ................................................................................................................................................................................80
Authentication .....................................................................................................................................................................81
Encryption .............................................................................................................................................................................81
Sniffers...................................................................................................................................................................................81
Password Management .....................................................................................................................................................81
UNIT 12 CONCURRENT PROGRAMMING ...........................................................................................................84
UNIT 13 STUDY OF VARIOUS EXISTING OPERATING SYSTEMS ...........................................................87
Unit 1: Overview of Operating System
Introduction
An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. It is a vital component of the system software in a computer system and serves as the interface between the computer user and the computer hardware. The operating system performs all the basic tasks, such as file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
Learning Objectives
After completing this chapter, you should be able to describe:
• The basic role of an operating system
• The major operating system software subsystem managers and their functions
• The types of machine hardware on which operating systems run
• The differences among batch, interactive, real-time, hybrid, and embedded operating systems
1.1 Functions of an Operating System
An operating system performs the following major functions:
• Memory Management
• Processor Management
• Device Management
• File Management
• Security
• Control over system performance
• Job accounting
• Error detecting aids
• Coordination between other software and users
1.1.1 Memory Management
Memory management refers to the management of primary memory, or main memory. Main memory is a large array of words or bytes, where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An operating system does the following activities for memory management:
• Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.
• In multiprogramming, decides which process will get memory, when, and how much.
• Allocates memory when a process requests it.
• De-allocates memory when a process no longer needs it or has been terminated.
1.1.2 Processor Management
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. An operating system does the following activities for processor management:
• Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when a process no longer requires it.
1.1.3 Device Management
An operating system manages device communication via the devices' respective drivers. It does the following activities for device management:
• Keeps track of all devices. The program responsible for this task is known as the I/O controller.
• Decides which process gets the device, when, and for how much time.
• Allocates devices in an efficient way.
• De-allocates devices.
1.1.4 File Management
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.
An operating system does the following activities for file management:
• Keeps track of information, location, uses, status, etc. These collective facilities are often known as the file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.
1.1.5 Other Important Activities
Following are some of the other important activities that an operating system performs:
• Security: by means of passwords and similar techniques, it prevents unauthorized access to programs and data.
• Control over system performance: recording delays between a request for a service and the response from the system.
• Job accounting: keeping track of the time and resources used by various jobs and users.
• Error detecting aids: production of dumps, traces, error messages, and other debugging and error-detecting aids.
• Coordination between other software and users: coordination and assignment of compilers, interpreters, assemblers, and other software to the various users of the computer system.
1.2 Types of Operating System
Operating systems have existed since the very first computer generation, and they keep evolving with time. In this unit, we will discuss some of the most commonly used types of operating systems.
1.2.1 Batch operating system
The users of a batch operating system do not interact with the computer directly. Each user
prepares his job on an off-line device like punch cards and submits it to the computer operator.
To speed up processing, jobs with similar needs are batched together and run as a group. The
programmers leave their programs with the operator and the operator then sorts the programs with
similar requirements into batches.
The problems with batch systems are as follows:
• There is a lack of interaction between the user and the job.
• The CPU is often idle, because the mechanical I/O devices are much slower than the CPU.
• It is difficult to give jobs the desired priority.
1.2.2 Time-sharing operating systems
Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing, or multitasking, is a logical extension of multiprogramming. The processor's time, shared among multiple users simultaneously, is termed time-sharing. The main difference between multiprogrammed batch systems and time-sharing systems is that in multiprogrammed batch systems the objective is to maximize processor use, whereas in time-sharing systems the objective is to minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user can receive an immediate response. For example, in transaction processing, the processor executes each user program in a short burst, or quantum, of computation. That is, if n users are present, then each user gets a time quantum. When the user submits a command, the response time is a few seconds at most. The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the processor's time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.
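The quantum-based switching described above can be sketched as a simple round-robin simulation. This is an illustrative Python sketch, not part of the course material; the job names, burst times, and quantum value are made up:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate time-sharing: each job runs for at most `quantum`
    time units before the CPU switches to the next job."""
    ready = deque(jobs)                   # (name, remaining_time) pairs
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        timeline.append((name, run))      # job occupies the CPU for `run` units
        if remaining > run:               # not finished: back of the queue
            ready.append((name, remaining - run))
    return timeline

# Three users' jobs sharing one processor with a quantum of 2 time units
print(round_robin([("A", 5), ("B", 2), ("C", 4)], quantum=2))
# [('A', 2), ('B', 2), ('C', 2), ('A', 2), ('C', 2), ('A', 1)]
```

Because the switches happen every few time units, each user sees their job make steady progress, which is the "immediate response" effect described above.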
1.2.3 Distributed operating System
Distributed systems use multiple central processors to serve multiple real-time applications and
multiple users. Data processing jobs are distributed among the processors accordingly. The
processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function; these processors are referred to as sites, nodes, computers, and so on.
1.2.4 Network operating System
A Network Operating System runs on a server and provides the server the capability to manage
data, users, groups, security, applications, and other networking functions. The primary purpose
of the network operating system is to allow shared file and printer access among multiple
computers in a network, typically a local area network (LAN), a private network, or other
networks. Examples of network operating systems include Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
1.2.5 Real Time operating System
A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it can control its environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is much lower than in online processing. Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data, and they can be used as control devices in dedicated applications. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems.
There are two types of real-time operating systems:
• Hard real-time systems, which guarantee that critical tasks complete within their time constraints.
• Soft real-time systems, which are less restrictive; a critical task gets priority over other tasks and retains that priority until it completes.
1.3 Multitasking
Multitasking is when multiple jobs are executed by the CPU simultaneously by switching between
them. Switches occur so frequently that the users may interact with each program while it is
running. An OS does the following activities related to multitasking:
• The user gives instructions to the operating system or to a program directly, and receives an immediate response.
• The OS handles multitasking by executing multiple operations/programs at a time.
• Multitasking operating systems are also known as time-sharing systems.
• These operating systems were developed to provide interactive use of a computer system at a reasonable cost.
• A time-shared operating system uses the concepts of CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared CPU.
• Each user has at least one separate program in memory.
1.4 Multiprogramming
Sharing the processor when two or more programs reside in memory at the same time is referred to as multiprogramming. Multiprogramming assumes a single shared processor, and it increases CPU utilization by organizing jobs so that the CPU always has one to execute. The figure shows the memory layout for a multiprogramming system: several jobs are loaded into memory to be executed by the CPU, and the CPU then uses one of the scheduling algorithms you studied in CSC 321 to select which job to execute and for how long.
1.5.2 I/O Operation
An I/O operation means a read or write operation on any file or any specific I/O device. The operating system provides access to the required I/O device when required.
1.5.3 File system manipulation
A file represents a collection of related information. Computers can store files on disk (secondary storage) for long-term storage purposes. Examples of storage media include magnetic tape, magnetic disk, and optical disks such as CD and DVD. Each of these media has its own properties, such as speed, capacity, data transfer rate, and data access method.
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. Following are the major activities of an operating system with respect to file management:
• It grants programs permission to perform operations on files; permissions vary from read-only to read-write to denied.
• It provides an interface for users to create, delete, and manipulate files and directories.
Protection refers to a mechanism or a way to control the access of programs, processes, or users to the resources defined by a computer system. Following are the major activities of an operating system with respect to protection:
• It ensures that all access to system resources is controlled.
• It ensures that external I/O devices are protected from invalid access attempts.
• It provides authentication features for each user by means of passwords.
Review Questions
1. List and explain the different types of O/S.
2. When is an O/S termed a multiprogramming O/S?
3. What is the basic role of an operating system?
4. List and describe the services provided by the O/S.
Unit 2: Memory Management: Early Systems
Expected Duration: 1 week or 2 contact hours
Introduction
The Memory Manager is in charge of main memory, also known as RAM, short for Random
Access Memory. The Memory Manager checks the validity of each request for memory space and,
if it is a legal request, it allocates a portion of memory that isn’t already in use. In a multiuser
environment, the Memory Manager sets up a table to keep track of who is using which section of
memory. Finally, when the time comes to reclaim the memory, the Memory Manager deallocates
memory. A primary responsibility of the Memory Manager is to protect the space in main memor y
occupied by the operating system itself—it can’t allow any part of it to be accidentally or
intentionally altered.
Learning Objectives
After completing this unit, you should be able to describe:
• The basic functionality of the three memory allocation schemes presented in this unit: fixed partitions,
dynamic partitions, relocatable dynamic partitions
• Best-fit memory allocation as well as first-fit memory allocation schemes
• How a memory list keeps track of available memory
• The importance of deallocation of memory in a dynamic partition system
• The importance of the bounds register in memory allocation schemes
• The role of compaction and how it improves memory allocation efficiency
Demand Fetch: programs and data are brought into memory only when they are requested or referenced.
Anticipating Fetch: the system predicts a program's needs and attempts to load the appropriate program and data pieces into main memory before they are actually needed.
There are many memory management policies, which are as follows:
Single-User Contiguous Allocation: the entire job is loaded into main memory as one contiguous block, and only one job is processed at a time.
Advantages
• It is simple.
• It does not require great expertise to understand or use such a system.
Disadvantages
• Memory is not fully utilized.
• There is poor utilization of the processor when the running job is waiting for an I/O operation, i.e., the CPU lies idle.
• The user job is limited to the size of memory.
• Only one job can be processed at a time, i.e., no multiprogramming.
The O/S will assign these jobs to suitable partitions so as to minimize internal fragmentation. Figure 2.1(b) shows this allocation. From the diagram, note that Job 4 must wait for other jobs to run to completion before it can be allocated memory space, because its size is larger than the remaining available space.
(Figure 2.1(b): memory layout showing 25k and 50k partitions, with Job 1 allocated to a 50k partition.)
Advantages
• More jobs get executed.
• Memory is better used than in single contiguous allocation.
Disadvantages
• Internal fragmentation.
• If jobs that require large memory arrive after the large partition has been allocated, they have to wait, i.e., there is a longer turnaround time for larger jobs.
• All the jobs are still loaded entirely into memory, including information that may never be used.
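The internal fragmentation mentioned above can be quantified with a short sketch. This is illustrative only; the partition and job sizes are hypothetical, chosen to echo the 25k/50k layout of Figure 2.1(b):

```python
def internal_fragmentation(assignments):
    """Sum the memory wasted inside each fixed partition.
    `assignments` is a list of (partition_size, job_size) pairs,
    where each job has been loaded into its partition."""
    return sum(part - job for part, job in assignments)

# An 18k job in a 25k partition and a 32k job in a 50k partition
wasted = internal_fragmentation([(25, 18), (50, 32)])
print(wasted)  # 25, i.e. 7k + 18k of memory wasted inside the partitions
```

The wasted space exists because a partition's leftover bytes cannot be given to any other job, no matter how small.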
Since the operating system keeps track of free and available spaces, it is easy for it to allocate memory space when a new job arrives: it must find a free memory space and then allocate it to the job. There are two major ways of allocating memory spaces, FIRST FIT and BEST FIT.
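The two strategies can be sketched as follows. This is an illustrative Python sketch, assuming the free list is a simple list of (start address, size) pairs; the addresses and sizes below are made up:

```python
def first_fit(free_list, size):
    """Return the start address of the FIRST free block large enough."""
    for start, block in free_list:
        if block >= size:
            return start
    return None  # no block fits

def best_fit(free_list, size):
    """Return the start address of the SMALLEST free block large enough."""
    candidates = [(block, start) for start, block in free_list if block >= size]
    return min(candidates)[1] if candidates else None

free = [(0, 30), (40, 15), (60, 10), (80, 50)]  # (start address, size in k)
print(first_fit(free, 12))  # 0  -> first block that fits (30k at address 0)
print(best_fit(free, 12))   # 40 -> tightest block that fits (15k at address 40)
```

First fit is faster because it stops at the first match; best fit scans the whole list but leaves smaller leftover fragments.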
Disadvantages
• It is not always efficient, since there may be a more suitable free memory space at a higher memory address, one that would minimize external fragmentation.
We could also have NEXT FIT, which starts searching from the last allocated block for the next available block when a job arrives, and WORST FIT, which allocates the largest available free block to the new job. There is also the Multiple Partition Algorithm, which can sometimes decrease fragmentation. Under it, a job usually consists of several separate memory segments that need not be contiguous, yet adequate protection is still provided. A new job may be allocated two 50k memory partitions, or four 25k memory partitions, and so on. This also makes it easy for the job to request more memory during execution, since the memory partition does not have to be contiguous.
While compaction is taking place, everything else must wait. All the programs in memory are moved so that they are contiguous. In doing so, the addresses and address references within each program must be changed. This can be a complex task, because once programs are compiled and loaded, they sit in memory as sets of zeros and ones, and the operating system has to determine which sets are addresses and which are instructions or data. Some hardware techniques have been developed to cope with this relocation problem. Most of them are based on the concept of MAPPED MEMORY, i.e., the address space "seen" by a job is not necessarily the same as the physical memory addresses used. When a program is relocated, its addresses change, and so do all the address references within the program.
If an address is left unchanged, then the program may not function well, or may branch to or reference another program. The hardware components of the computer that assist with these problems are:
Bound Register: This stores the highest (or lowest) location in memory accessible by each program. It ensures that during execution the program does not access memory locations that are "out of bounds" to it.
Relocation Register: This contains the value that must be added to (or subtracted from) each address referenced in a program so that the program accesses the correct memory address after relocation. If the program is not relocated, the value is zero. Before any instruction is executed, the true address of the location is computed by adding the instruction's original address value to the value stored in the relocation register. This method does not alter the addresses specified in the program: the relocation adjustment is done automatically as each instruction is executed. This is called Dynamic Relocation.
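A minimal sketch of how the two registers work together, assuming a hypothetical job compiled to addresses 0-99 that has been relocated to start at physical address 200:

```python
def map_address(logical, relocation, bound):
    """Translate a program (logical) address to a physical address
    using a relocation register, after a bounds check."""
    if logical > bound:                 # bound register check first
        raise MemoryError("address out of bounds")
    return logical + relocation        # program addresses stay unchanged

# Job compiled to addresses 0..99, later relocated to start at 200
print(map_address(40, relocation=200, bound=99))  # physical address 240
```

Note that the logical address 40 stored in the program is never rewritten; the hardware adds the relocation value on every reference, which is exactly the dynamic relocation idea above.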
Compaction can be done when there are jobs waiting to get in and a large enough free area is not available, when a certain percentage of memory becomes busy, after a certain amount of time, or after a partition is deallocated. Whichever trigger is chosen, compaction still incurs some overhead cost, and time is lost.
Advantages
• It eliminates fragmentation and thus makes it possible to allocate more partitions.
• It allows a higher degree of multiprogramming, which results in increased memory and processor utilization and thus increases throughput.
Disadvantages
• Relocation hardware increases the cost of the computer.
• Compaction time may be substantial.
• Some memory will still be unused because, even after compaction, the amount of free space may be less than the needed partition size.
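Compaction itself can be sketched as sliding the resident jobs toward address 0 and computing each job's new relocation value. This is an illustrative sketch; the job names and sizes are hypothetical:

```python
def compact(jobs):
    """Slide jobs so they are contiguous from address 0.
    `jobs` maps name -> (old_start, size); returns name -> (new_start, shift),
    where `shift` is the value the relocation register must now hold."""
    next_free = 0
    relocated = {}
    # move jobs in order of their current position in memory
    for name, (old_start, size) in sorted(jobs.items(), key=lambda j: j[1][0]):
        relocated[name] = (next_free, next_free - old_start)
        next_free += size
    return relocated

# Three jobs with free gaps between them before compaction
print(compact({"J1": (0, 30), "J2": (50, 20), "J3": (90, 40)}))
# {'J1': (0, 0), 'J2': (30, -20), 'J3': (50, -40)}
```

After the move, the free space (here everything above address 90) is one contiguous block, and only the relocation register values change, not the addresses inside the programs.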
All the memory allocation techniques discussed so far have three things in common. The entire program must:
• Be loaded into memory
• Be stored contiguously
• Remain in memory until the job runs to completion
The following techniques do not require the entire program to be loaded contiguously in memory.
Review Questions
1. Compare and contrast Internal and External fragmentation.
2. In which allocation scheme are the Bound Register and Relocation Register used? What are they
used for?
3. What is common to all the memory allocation schemes described in this unit?
4. Which memory allocation scheme first allowed multiprogramming? Justify your answer.
UNIT 3 MEMORY MANAGEMENT (VIRTUAL MEMORY)
Introduction
In the previous lecture, we learned about early memory management schemes. These schemes
are no longer used, because technology has changed. Most recent systems make use of
virtual memory management schemes, which will be studied in this lecture. Two common
terms arise in virtual memory: segment and page. A segment is a logical unit of information
that is visible to the user's program, normally performs some related function, and is of
arbitrary size. A page, on the other hand, is a physical unit of information strictly used for
memory management; it is invisible to the user's program and is of fixed size.
Learning Objectives
After completing this chapter, you should be able to describe:
• The basic functionality of the memory allocation methods covered in this unit: paged, demand
paging, segmented, and segmented/demand paged memory allocation
• The influence that these page allocation methods have had on virtual memory
• The difference between a first-in first-out page replacement policy, a least-recently used page
replacement policy, and a clock page replacement policy
• The mechanics of paging and how a memory allocation scheme determines which pages
should be swapped out of memory
• The impact that virtual memory had on multiprogramming
Advantages
Job size is not limited or restricted to the size of main memory
Memory is used more efficiently; portions of a job are kept in memory only when they are needed
It allows a much higher degree of multiprogramming
Eliminates external fragmentation and minimizes internal fragmentation
It allows the sharing of code and data
It facilitates dynamic linking of program segments
Disadvantages
Increased hardware costs
Increased overhead for handling paging interrupts
Increased software complexity to prevent thrashing
The various VM schemes are discussed next.
3.2 PAGED MEMORY ALLOCATION
It is based on the concept of dividing each incoming job's address space into pages of equal size.
Physical memory is divided into pieces of the same size. Usually, sections of a disk are called
sectors or blocks, while the sections of main memory are called page frames. Any page can be
placed in any page frame. This technique works well when the size of a block, the size of a page,
and the size of a page frame are all equal. Figure 3.1 shows what this allocation scheme looks
like. Before executing a program/job, the operating system does the following:
1. Determines the number of pages in the program/job
2. Locates enough empty page frames in the main memory and by using a suitable hardware
facility, loads any page into any page frame.
3. The pages remain logically contiguous, but the frames are not contiguous.
To perform this mapping from address space to physical memory, there must be a separate register
for each page; the set of registers is called a page map or page map table (PMT). The memory
manager uses two more tables to keep track of jobs and the pages where they are located: the job
table and the memory map table. Job 1, as shown in Figure 3.1, is 350 bytes. The system shown
has divided main memory into frames of 100 bytes. As can be seen on the left, Job 1 was also
divided into pages of 100 bytes each; note that the last page of this job is only 50 bytes. The
arrows point to where each of the pages is loaded.
Note that there is internal fragmentation in page frame 11, because the section of the job loaded
into it is just 50 bytes in size.
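The page/frame mapping just described can be shown as a short address-resolution sketch; a minimal illustration using the 100-byte pages of Figure 3.1, with a hypothetical page map table (the frame numbers are invented for the example):

```python
PAGE_SIZE = 100  # page size used in Figure 3.1

# Hypothetical page map table for Job 1: page number -> page frame.
pmt = {0: 8, 1: 10, 2: 5, 3: 11}

def resolve(logical_address):
    """Split a logical address into a page number and an offset,
    then map it to a physical address via the page map table."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = pmt[page]
    return frame * PAGE_SIZE + offset

# Byte 214 of the job lives on page 2 at offset 14, i.e. in frame 5:
print(resolve(214))
```

The point of the sketch is that the pages remain logically contiguous to the job even though the frames holding them are scattered through memory.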
Job Table
The JT contains two entries for each active job:
The size of the job and the location of its PMT.
The JT is a dynamic list that grows as jobs are loaded into the system and shrinks as they
are completed.
Disadvantages of Paged Memory Allocation
There is still internal fragmentation, though only on the last page of a job.
The overhead (i.e. processor time) increases, because the JT, PMT and MMT, which are
stored in memory, must be maintained and updated.
All the pages of a job are loaded into memory before execution, so memory holds
information that is seldom used.
A job's address space is limited to the size of physical memory.
If a job requests that one of its pages be brought into main memory and there is no empty page
frame available, then the system must:
1. Check whether the demanded page is already in main memory
2. If not, decide which page should be swapped out of main memory
3. Write the contents of the swapped-out page back to secondary storage if it was
modified
4. Since the demanded page is in secondary storage, call the page interrupt handler
to resolve the problem
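The steps above can be sketched as a small routine; a minimal illustration only, in which the resident pages, the dirty bits, and the first-in first-out victim choice are all invented for the example:

```python
# Sketch of the demand-paging steps listed above. The tables and the
# FIFO victim choice are illustrative, not a full implementation.
from collections import deque

frames = deque(["A", "B"], maxlen=2)   # pages currently in memory
modified = {"A": True, "B": False}     # "dirty" bits per resident page
writes_to_disk = []                    # pages written back on eviction

def demand_page(page):
    if page in frames:                 # 1. already in main memory?
        return "page success"
    victim = frames.popleft()          # 2. choose a page to swap out
    if modified.pop(victim):           # 3. write it back only if modified
        writes_to_disk.append(victim)
    frames.append(page)                # 4. bring in the demanded page
    modified[page] = False
    return "page fault handled"

print(demand_page("C"))  # evicts A; A is dirty, so it is written back
```

A page already in memory costs nothing extra; only a fault triggers the eviction and (possibly) the write-back.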
Demand paging requires that the Page Map Table for each job keep track of each page as it is
loaded into or removed from main memory. Each PMT tracks the status of its job's pages:
whether a page has been modified, whether it has been recently referenced, and the page frame
number for each page currently in main memory. (Note: For this illustration, the Page Map
Tables have been simplified.)
Advantages
It gives better memory utilization
Job size is not limited by memory size
The entire job does not need to be loaded before execution
It allows a higher degree of multiprogramming
A page fetch (page failure, or page fault) occurs whenever a page is referenced that is
not in memory.
A page success occurs when a required page is already in memory.
Disadvantages
It is susceptible to thrashing
More table information must be maintained and consulted
Space is occupied by the tables
Increased overhead due to the tables
Principle of Optimality: replaces the page that will not be used for the longest time in the future
Random: pages are selected at random
Least Frequently Used: the least frequently used (LFU) page is selected
NUR (not used recently): an inexpensive and efficient approximation of LRU that replaces a
page not used recently
Working Set: replaces a page if it is not in the favoured subset (working set) of a process's pages
Page Fault Frequency: adjusts the size of a process's resident set of pages in response to
changes in the process's page fault rate
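The learning objectives also mention first-in first-out and least-recently used replacement; the two can be compared with a small simulation. This is a sketch only: the reference string and the three-frame memory are invented for the illustration.

```python
# Comparing FIFO and LRU page replacement on the same reference string.

def count_faults(refs, n_frames, policy):
    frames, faults = [], 0
    for r in refs:
        if r in frames:
            if policy == "LRU":            # refresh recency on a hit
                frames.remove(r)
                frames.append(r)
            continue
        faults += 1
        if len(frames) == n_frames:        # memory full: evict the page
            frames.pop(0)                  # at the front (oldest arrival
        frames.append(r)                   # for FIFO, least recent for LRU)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults(refs, 3, "FIFO"), count_faults(refs, 3, "LRU"))
```

On this particular string, FIFO happens to cause fewer faults than LRU; which policy wins depends entirely on the reference pattern, which is why several policies coexist in practice.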
To keep track of the status of memory locations, the operating system makes use of the following
tables, combining aspects of dynamic partitioning and demand paged memory management (DPMM).
Job table: This table lists every job in the system. There is only one per system.
Segment Map Table (SMT): Every job has an SMT, which contains details of each of its
segments and keeps track of whether each segment is in memory. The table contains
segment numbers, their lengths, access rights, status, and (when a segment is loaded
into memory) its location in memory.
Memory map table: This table monitors the allocation of memory. There is one per system.
With SMM, the entire job is likewise not loaded into memory before execution; rather, the
segments of the job are loaded when needed. Copies of all segments are kept in secondary
storage. When a segment not in memory is referenced, a Segment Exception Interrupt occurs,
and the operating system must find space in memory for the needed segment, either by
compacting memory or by removing another segment from memory. A loaded segment may also
trigger an "out of bounds" interrupt if the program tries to store a new entry beyond the
segment's current length. When this occurs, the operating system gains control and increases the
length of the segment to accommodate the new entry.
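The SMT lookup just described can be sketched as a short address-mapping routine; a minimal illustration in which the segment table contents (bases and lengths) are hypothetical:

```python
# Segment address mapping sketched from the SMT description: a logical
# address is a (segment, offset) pair; the SMT supplies each segment's
# base address and length. Table contents are invented for the example.

smt = {  # segment number -> (base address in memory, length)
    0: (4000, 700),   # e.g. the main program
    1: (7000, 350),   # e.g. a subroutine
}

def map_address(segment, offset):
    base, length = smt[segment]
    if offset >= length:
        # In the scheme above, the O/S would gain control here and
        # could grow the segment to accommodate a new entry.
        raise MemoryError("out of bounds for this segment")
    return base + offset

print(map_address(1, 100))   # offset 100 into segment 1 -> 7100
```

Unlike pages, the segments here have arbitrary lengths, which is exactly why each SMT entry must record a length as well as a base.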
Figure 3.3: Example of Segmentation
Dynamic Linking and Loading: In a DLL environment, only the segment containing the main
program is loaded initially. If it references any other subroutine, the segment containing that
subroutine is then loaded. This is known as deferred linking.
Shared Segments: If two or more jobs use the same procedure, only one copy of the segment
containing that procedure is loaded into memory. When a job needs that procedure, e.g. square
root, the operating system checks the AST to see if the segment is already in memory. If so, it
adds the new job to the list of jobs using that procedure; it does not load a fresh copy for each
job that requires it. The AST (Active Segment Table) is a system-wide table used to coordinate
segment sharing. See Figure 3.4.
Figure 3.4: Sample of Shared segments
Advantages of SMM
It eliminates fragmentation
It provides virtual memory
This allows dynamic segment growth
It assists dynamic linking
It also facilitates shared segments
It enforces controlled access
Disadvantages of SMM
It increases hardware cost and processor overhead for address mapping
Additional overhead is incurred in order to support dynamic segment growth and eliminate
fragmentation
It is difficult to manage variable size segments in secondary storage
The maximum size of a segment is limited by the size of main memory
Techniques need to be developed to prevent segment thrashing
Advantages
It has all the advantages of DPMM and SMM
Disadvantages
It has increased processor overhead, hardware cost and complexity, space occupied by
the tables, and the danger of thrashing.
Review Questions
UNIT 4 Concurrent Processing
Introduction
In the previous lectures, we learned that multiple jobs can be loaded into the memory for the
processor to execute. As technology improved, systems were built with more than one processor.
This means that more than one process will be in the running state at a time. Imagine how you
can be playing music on your phone and at the same time checking your WhatsApp messages!
This unit will let us understand how this works.
Learning Objectives
After completing this chapter, you should be able to describe:
• The critical difference between processes and processors, and their connection
• The differences among common configurations of multiprocessing systems
• The need for process cooperation when several processes work together
• How several processors cooperate
Parallel Processing
Parallel Processing, one form of multiprocessing, is a situation in which two or more processors
operate in unison; that is, two or more CPUs execute instructions simultaneously. The Processor
Manager has to coordinate the activity of each processor as well as synchronize cooperative
interaction among the CPUs.
Increased processing speed is often achieved because instructions can sometimes be
processed in parallel, two or more at a time, in one of several ways: some systems allocate a
CPU to each program or job; others allocate a CPU to each working set or parts of it; still
others subdivide individual instructions so that each subdivision can be processed
simultaneously, which is referred to as concurrent programming.
2. How to orchestrate processor interaction, which applies to multiple interacting processes as
well. Note that increased flexibility brings increased complexity
The complexities of the processor manager’s task when dealing with multiple processors can be
illustrated by the Fast-Food Lunch Stop example below, where a customer has come to buy
shawarma:
From the above you can see that synchronization is the key to the system’s success because many
things can go wrong in a multiprocessing system. The system can’t work properly unless every
processor communicates and cooperates with every other processor.
Multiprocessing can take place at several different levels, each of which requires a different
frequency of synchronization. Multiprocessing at the job level is fairly benign; it's as if each job
is running on its own workstation with shared system resources. When multiprocessing takes place
at the thread level, a high degree of synchronization is required to:
Disassemble the process;
Perform each thread's instructions;
Correctly reassemble the process.
Job level: Each job has its own processor, and all processes and threads are run by that same
processor. No explicit synchronization is required.
Process level: Unrelated processes, regardless of job, are assigned to any available processor.
A moderate amount of synchronization is required to track processes.
The configuration is well suited to computing environments in which processing time is divided
between front-end and back-end processors: the front-end processor takes care of interactive
users and quick jobs, while the back-end processor takes care of long jobs submitted in batch
mode.
The primary advantage of this configuration is its simplicity.
It also has three serious disadvantages
1. Its reliability is no higher than for a single processor system because if the master processor
fails, the entire system fails.
2. It can lead to poor use of resources because if a slave processor should become free while
the master processor is busy, the slave must wait until the master processor becomes free
and can assign more work to it.
3. It increases the number of interrupts because all slave processors must interrupt the master
processor every time they need OS intervention such as for I/O requests. This creates long
queues at the master processor level when there are many processors and many interrupts.
Figure: Loosely coupled configuration — each of the three processors has its own main memory
and its own I/O devices.
When a job arrives for the first time, it is assigned to one processor. Once allocated, the job
remains with the same processor until it is finished. Each processor must have global tables that
indicate to which processor each job has been allocated. To keep the system well balanced and
to ensure the best use of resources, job scheduling is based on several requirements and policies;
for example, new jobs might be assigned to the processor with the lightest load or with the best
combination of output devices available.
• This system isn’t prone to catastrophic system failures.
• Even when a single processor fails, the others can continue to work independently.
• However, it can be difficult to detect when a processor has failed.
2. It uses resources effectively;
3. It can balance loads well;
4. It can degrade gracefully in the event of a failure.
In this configuration, processor scheduling is decentralized. A single copy of the Operating System
and a global table listing each process and its status is stored in a common area of memory so every
processor has access to it. Each processor uses the same scheduling algorithm to select which
process it will run next. Whenever a process is interrupted, whether because of an I/O request or
another type of interrupt, its processor updates the corresponding entry in the process list and finds
another process to run. This means that the processors are kept busy. It also means that any given
job or task may be executed by several different processors during its run time. This configuration
is what is currently used by most digital devices. Think about it: your system has just one
hard disk, 16GB of RAM, possibly multiple USB ports, and perhaps a Core i3 processor. This
means that all the cores on your system use the same set of I/O devices.
Since each processor has access to all I/O devices and can reference any storage unit, there are
more conflicts as several processors try to access the same resource at the same time. This
presents an obvious need for algorithms to resolve conflicts between processors; thus there is a
need for process synchronization. The symmetric configuration is the most difficult configuration
to implement because the processes must be well synchronized to avoid the problems of races
and deadlocks. Process synchronization will be studied in the next lecture.
Review Questions
1. Differentiate between multiprogramming and multiprocessing systems.
2. Assuming you have just been hired to assist HP design a multi-user interactive system applying the
concept of multi-processing. Which configuration would you implement and why would you choose
that configuration?
UNIT 5 PROCESS SYNCHRONIZATION
Introduction
As technology improved, systems were built with more than one processor, from dual core and
Core i3 through Core i5 to Core i9! This means that more than one process can be in the running
state at a time. We learned the different configurations in the previous lecture; in particular, we
studied that the symmetric configuration requires the activities of the processors to be
synchronized for the system to be efficient. This unit will help us understand how to achieve
process synchronization.
Learning Objectives
After completing this unit, you should be able to describe:
• The significance of a critical region in process synchronization
• The basic concepts of process synchronization: test-and-set, WAIT and SIGNAL, and
semaphores
• The need for process cooperation when several processes work together
• How several processors, executing a single job, cooperate
5.1 Introduction
The success of process synchronization hinges on the capability of the O/S to make a resource
unavailable to other processes while it is being used by one of them. The resource being used must
be locked away from other processes until it is released. Only when the resource is released is a
waiting process allowed to use the resource. Have you ever tried to open a file and got the message
"this file is in use by another program"? That is the O/S locking the file so that it will only be
available when not in use. If you have ever linked a Microsoft Excel file to a Word document
using mail merge, then any time that Word document is open, the O/S will lock the Excel file
from being edited. That is a simple practical example of process synchronization. A mistake in
synchronization could leave a job waiting indefinitely (starvation) or, if a key resource is
involved, cause a deadlock.
1. Consult the list of jobs to see which one should be run next;
2. Retrieve the job for execution;
3. Increment the READY list to the next job;
4. Execute it.
Imagine a scenario where the next job to be executed is Job 74. Processor 1 and Processor 2 are
free, and both go to the READY list to select a job. Processor 1 sees that Job 74 is next and goes
to retrieve it. A moment later, Processor 2 also selects Job 74 from the READY list. Shortly
thereafter, Processor 1, having retrieved Job 74, returns to the READY list and increments it,
moving Job 75 to the top. A moment later Processor 2 returns; it has also retrieved Job 74 and is
ready to process it, so it increments the READY list, and now Job 76 is moved to the top and
becomes the next job in line to be processed. Job 75 has become the missed waiting customer
and will never be processed, while Job 74 is processed twice. This problem can also occur in
memory and page allocation tables, I/O tables, application databases, and any shared resource.
This situation calls for synchronization.
The aim of these mechanisms is to enforce mutual exclusion on the resource being used; that is,
some form of 'locking' which ensures that only one process has access to the shared resource at
a time. Synchronization can be handled via hardware methods, known as test-and-set and WAIT
and SIGNAL. There is also a software method called semaphores.
5.3.1 Test-and-Set
Test-and-set is a single, indivisible machine instruction, known simply as TS, introduced by IBM
for its multiprocessing System 360/370 computers. In a single machine cycle a process tests to
see if the key is available; if it is, the key is set to unavailable. The actual key is a single bit in a
storage location that can contain a 0 (free) or a 1 (busy). We can consider TS to be a function
subprogram that has one parameter (the storage location) and returns one value (the condition
code: busy/free), with the distinction that it takes only one machine cycle.
The mechanism works as below:
• Process 1 would test the condition code using the TS instruction before entering a critical
region.
• If no other process was in this critical region, then process 1 would be allowed to proceed
and the condition code would be changed from 0 to 1.
• Later, when process 1 exits the critical region, the condition code is reset to 0 so another
process can enter.
• If process 1 finds a busy condition code, then it is placed in a waiting loop where it
continues to test the condition code and waits until it is free.
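The test-and-loop behaviour described above can be sketched in software; a minimal illustration only, with Python's `threading.Lock` standing in for the atomicity of the hardware TS instruction (the thread counts and shared counter are invented for the example):

```python
# Test-and-set sketched in software. A real TS is a single indivisible
# machine instruction; here a Lock provides the equivalent atomicity
# so the busy-waiting logic can be shown.
import threading

key = threading.Lock()   # the one-bit "key": acquired = busy (1)
counter = 0

def worker():
    global counter
    for _ in range(10_000):
        # Test the key and set it in one atomic step; if it was
        # busy, keep looping (busy waiting) until it becomes free.
        while not key.acquire(blocking=False):
            pass
        counter += 1                  # the critical region
        key.release()                 # reset the key to free (0)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # mutual exclusion held: no updates were lost
```

Note how the waiting threads spin unproductively in the `while` loop, which is exactly the busy-waiting drawback discussed below.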
Advantages
1. It’s a simple procedure to implement.
2. It works well for a small number of processes.
Disadvantages
1. First, when many processes are waiting to enter a critical region, starvation could occur
because the processes gain access in an arbitrary fashion. Unless a first-come, first-served
policy is set up, some processes could be favoured over others and thus starve.
2. A second drawback is that the waiting processes remain in unproductive, resource-
consuming wait loops (known as busy waiting). These wait loops not only consume
valuable processor time but also rely on the competing processes to test the key.
5.3.3 Semaphores
A semaphore is a non-negative integer variable that is used as a binary signal (a flag). It signals if
and when a resource is free and can be used by a process. Dijkstra (1965) introduced two
operations to handle process synchronization:
- P (proberen means “to test”)
- V (verhogen means “to increment”)
This is how a semaphore works: let s be a semaphore variable.
• The V operation on s simply increments s by 1:
V(s): s := s + 1
This necessitates a fetch, increment, and store sequence. The increment operation must be
performed as a single indivisible action to avoid a race condition; s cannot be accessed by any
other process during the operation.
The operations to test or increment are executed by the operating system in response to calls
issued by any one process naming a semaphore as a parameter. If s = 0, the critical region is
busy and the process calling the test operation must wait until the operation can be executed
(s > 0). The test and increment operations on semaphore s enforce the concept of mutual
exclusion, which is necessary to avoid having two operations attempt to execute at the same
time. The name traditionally given to this semaphore is mutex (MUTual EXclusion):
P(mutex): if mutex > 0 then mutex := mutex - 1
V(mutex): mutex := mutex + 1
The concept of producers and consumers can be extended to buffers that hold records or other data,
as well as to other situations in which direct process-to-process communication of messages is
required.
5.4.2 Readers and Writers
Readers are processes that only read data from storage locations, while writers are processes that
write data to a storage location. The two types of process may need to access a shared resource;
consider, for example, an airline reservation system. The readers are those who want flight
information; they only read the existing data and do not modify it. Because no reader changes
the database, the system can allow many readers to be active at the same time; there is no need
to enforce mutual exclusion among them. The writers are those who are making reservations on
a particular flight. Writers must be carefully accommodated because they modify existing data
in the database. The system cannot allow someone to be writing while someone else is reading
(or writing); it must enforce mutual exclusion if there are groups of readers and a writer, or if
there are several writers, in the system.
This can be implemented using two semaphores to ensure mutual exclusion between readers and
writers:
- A resource can be given to all readers, provided that no writers are writing
- A resource can be given to a writer, provided that no readers are reading and no other
writers are writing
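These two rules can be sketched with the classic two-semaphore solution; a minimal illustration in which Python locks stand in for binary semaphores, and the names (`wrt`, `reader_count`) follow the classic solution rather than the text above:

```python
# Readers-writers sketch: `wrt` excludes writers (held by the first
# reader in and released by the last reader out), while `mutex`
# protects the shared reader count.
import threading

mutex = threading.Lock()   # protects reader_count
wrt = threading.Lock()     # held while writing, or while readers are active
reader_count = 0

def start_read():
    global reader_count
    with mutex:
        reader_count += 1
        if reader_count == 1:
            wrt.acquire()          # first reader locks out writers

def end_read():
    global reader_count
    with mutex:
        reader_count -= 1
        if reader_count == 0:
            wrt.release()          # last reader lets writers in

def write(update):
    with wrt:                      # writers get exclusive access
        update()

start_read(); start_read()              # two readers active together
assert not wrt.acquire(blocking=False)  # a writer would have to wait now
end_read(); end_read()
assert wrt.acquire(blocking=False)      # readers gone: a writer may enter
wrt.release()
```

Note that this version favours readers: a steady stream of readers could starve a writer, which is one reason several variants of the solution exist.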
Questions
1. When is a process said to be in a critical region?
2. What mechanisms are used to enforce mutual exclusion? Describe these mechanisms.
3. With suitable examples, describe the problem of Readers and Writers.
UNIT 6 Process Management (Deadlock)
Introduction
Learning Objectives
6.1 Deadlock
A lack of process synchronization causes deadlock or starvation.
Deadlock ("deadly embrace") is a system-wide tangle of resource requests that begins when
two or more jobs are put on hold, each waiting for a resource that never becomes available.
6.2 Seven Cases of Deadlocks
6.2.1 Case 1 : Deadlocks on File Requests
If jobs can request and hold files for the duration of their execution, deadlock can occur. Any
other programs that require F1 or F2 are put on hold as long as this situation continues.
Deadlock remains until a program is withdrawn or forcibly removed and its file is released.
6.2.2 Case 2 : Deadlocks in Databases
1. P1 accesses R1 and locks it.
2. P2 accesses R2 and locks it.
3. P1 requests R2, which is locked by P2.
4. P2 requests R1, which is locked by P1.
Disks are designed to be shared, so it is not uncommon for two processes to access different
areas of the same disk. Without controls to regulate the use of the disk drive, competing
processes could send conflicting commands and deadlock the system.
6.2.7 Case 7: Deadlocks in a Network
Figure 6.7: More Directed Graph Examples
From the figure above, we can see that a cycle exists in the bottom-right graph (starting at any
point, trace the arrows and you will see this). As long as no additional resources are added to the
pool, the processes in that graph will remain in this state until some strategy is used to resolve
the situation. These strategies are discussed next.
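Tracing the arrows by hand, as suggested above, is exactly cycle detection in a directed graph, and it can be sketched in a few lines. The graph below is hypothetical; it encodes the Case 2 database deadlock (P1 holds R1 and wants R2, P2 holds R2 and wants R1):

```python
# Deadlock detection as cycle detection in a directed resource graph.
# Edges: process -> resource it requests, resource -> process holding it.

graph = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}

def has_cycle(g):
    visited, on_path = set(), set()
    def dfs(node):
        if node in on_path:
            return True                  # returned to a node on the path:
        if node in visited:              # the arrows form a circle
            return False
        visited.add(node)
        on_path.add(node)
        if any(dfs(nxt) for nxt in g.get(node, [])):
            return True
        on_path.discard(node)
        return False
    return any(dfs(n) for n in list(g))

print(has_cycle(graph))   # the two processes are deadlocked
```

If any request in the chain is removed (say P2 releases R2), the cycle disappears and the function reports no deadlock.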
multiprogramming. Some peripheral devices would be idle because, although they are allocated to a job,
they wouldn't be used all the time.
Eliminating No Preemption
No preemption could be bypassed by allowing the O/S to deallocate resources from jobs. This
means that if a job has been assigned resources it needs, the O/S has the right to "collect" those
resources if they are needed by another process. This approach works only if the state of a job
can easily be saved in its PCB and restored at a later time.
Eliminating Circular Wait
Circular wait can be bypassed if the O/S prevents the formation of a circle. This is achieved by
Havender's solution (1968), which is based on a numbering system for resources, for example
printer = 1, hard disk = 2, plotter = 3. The O/S then forces each job to request its resources in
ascending order: if a process needs the plotter and then the printer, it must request the printer
before the plotter. With this arrangement, the O/S can keep track of whether those devices have
been allocated or not. The drawback is that jobs must anticipate the order in which they will
request resources, and this can be difficult to determine.
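Havender's ascending-order rule can be sketched as a simple check; a minimal illustration using the resource numbers given in the text (the function name is invented):

```python
# Havender's solution sketched: resources carry fixed numbers, and a
# job's requests are only valid if they arrive in ascending order,
# which makes a circular wait impossible.

ORDER = {"printer": 1, "harddisk": 2, "plotter": 3}

def request_sequence_ok(requests):
    """True if a job's planned requests respect the global ordering."""
    numbers = [ORDER[r] for r in requests]
    return numbers == sorted(numbers)

print(request_sequence_ok(["printer", "plotter"]))   # allowed
print(request_sequence_ok(["plotter", "printer"]))   # must be refused
```

A job that needs the plotter and then the printer must therefore acquire the printer first and hold it, which is exactly the inefficiency the text warns about.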
6.4.2 Deadlock Avoidance
Another strategy for tackling deadlock is to avoid it. The O/S can avoid deadlock if the system
knows ahead of time the sequence of requests associated with each of the active processes.
Dijkstra's Banker's Algorithm (1965) is used to regulate resource allocation to avoid deadlock.
According to the algorithm, the computing system is always in one of the two following states:
Safe state – There exists a safe sequence in which resources can be allocated to all active
processes such that they can all run to completion.
Unsafe state – This state does not necessarily lead to deadlock, but it does indicate that the
system is an excellent candidate for one.
Banker’s Algorithm
Banker’s algorithm is based on a bank with a fixed amount of capital that operates on the following
principles:
1. No customer will be granted a loan exceeding bank’s total capital.
2. All customers will be given a maximum credit limit when opening an account.
3. No customer will be allowed to borrow over the limit.
4. The sum of all loans won’t exceed the bank’s total capital.
The O/S (bank) must be sure never to satisfy a request that moves it from a safe state to an unsafe one.
A Bank’s Safe and Unsafe States
Safe
Customer Loan amount Maximum credit Remaining credit
C1 0 4,000 4,000
C2 2,000 5,000 3,000
C3 4,000 8,000 4,000
Total loaned: $6,000
Total capital fund: $10,000
From the table above, you can see that the bank has loaned out a total of $6,000 out of the $10,000, leaving
a balance of $4,000 which can be used to satisfy any of the customers.
Unsafe
Customer Loan amount Maximum credit Remaining credit
C1 2,000 4,000 2,000
C2 3,000 5,000 2,000
C3 4,000 8,000 4,000
Total loaned: $9,000
Total capital fund: $10,000
From the table above, you can see that the bank has loaned out a total of $9,000 out of the $10,000, leaving
a balance of $1,000 which cannot be used to satisfy any of the customers.
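The two bank tables above can be checked mechanically; a minimal single-resource sketch of the safety test, using the exact figures from the tables (capital $10,000, the three customers' loans and credit limits):

```python
# Simplified (single-resource) Banker's safety check: the state is safe
# if some ordering lets every customer borrow up to their limit, finish,
# and repay, using only the bank's remaining funds.

def is_safe(capital, loans, limits):
    remaining = capital - sum(loans)
    needs = [(limit - loan, loan) for loan, limit in zip(loans, limits)]
    while needs:
        for i, (need, loan) in enumerate(needs):
            if need <= remaining:      # this customer can be satisfied now;
                remaining += loan      # they finish and repay their loan
                needs.pop(i)
                break
        else:
            return False               # nobody can be satisfied: unsafe
    return True

print(is_safe(10_000, [0, 2_000, 4_000], [4_000, 5_000, 8_000]))     # safe table
print(is_safe(10_000, [2_000, 3_000, 4_000], [4_000, 5_000, 8_000])) # unsafe table
```

In the unsafe table, every customer could still demand at least $2,000 more, but only $1,000 remains, so no safe sequence exists; the O/S must refuse any request that would produce such a state.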
6.4.4 Deadlock Recovery
Once a deadlock has been detected, it must be untangled and the system returned to normal as
quickly as possible. There are several recovery algorithms, all requiring at least one victim (an
expendable job that must be terminated to resolve the deadlock and free the system). The
recovery mechanisms are listed below.
1. Terminate every job that is active in the system and restart them all from the beginning.
(Recall when your system "misbehaves" and you press Ctrl+Alt+Delete and eventually restart
the system.)
2. Terminate only the jobs involved in the deadlock and ask their users to resubmit them.
3. Terminate the jobs involved in the deadlock one at a time, checking to see whether the
deadlock is eliminated after each removal, until it has been resolved.
4. Have each job keep a record (snapshot) of its progress so it can be interrupted and then
continued without starting again from the beginning of its execution.
5. Select a non-deadlocked job, preempt the resources it is holding, and allocate them to a
deadlocked process so it can resume execution, thus breaking the deadlock.
6. Stop new jobs from entering the system, which allows non-deadlocked jobs to run to
completion so they release their resources (no victim).
6.5 Starvation
Starvation is as a result of conservative allocation of resources where a single job is prevented from
execution because it’s kept waiting for resources that never become available. There is an algorithm
designed to detect starving jobs which tracks how long each job has been waiting for resources. To enable
the job recover from starvation, the process is aged. Aging is basically increasing the priority of a starved
process such that it eventually becomes the process with the highest priority and thus allowed to run.
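Aging can be sketched in a few lines; a minimal illustration in which the job names, base priorities, and increment are all invented for the example:

```python
# Aging sketched: each scheduling pass raises the priority of every
# waiting job, so a starved job eventually outranks newly arrived work.

def age(jobs, increment=1):
    """jobs: {name: priority}; a higher number means a higher priority."""
    return {name: prio + increment for name, prio in jobs.items()}

waiting = {"starved_job": 1, "newcomer": 5}
for _ in range(5):                  # five passes without being chosen...
    waiting = age(waiting)
    waiting["newcomer"] = 5         # fresh jobs re-enter at base priority
print(max(waiting, key=waiting.get))  # the starved job now ranks highest
```

However long a job has waited, its priority keeps climbing, so it cannot be postponed indefinitely.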
Race conditions occasionally occur in logic gates when inputs come into conflict. Because a
gate's output takes a finite, nonzero amount of time to react to any change in its inputs, sensitive
circuits or devices following the gate may be fooled by the state of the output and not operate
properly. In a network, a race condition may occur if two users try to access a channel at the
same instant and neither computer receives notification that the channel is occupied before the
system grants access. Statistically, this kind of situation occurs mostly in networks with long lag
times, such as those that use geostationary satellites.
The serialization of memory or storage access prevents race conditions: if read and write
commands are received close together, the read command is executed and completed first by
default. To prevent race conditions in a network, a priority scheme must be devised to give one
user exclusive access. For example, the subscriber whose username or number begins with the
earlier letter of the alphabet, or with the lower numeral, may get priority when two subscribers
attempt to access the system within a prescribed increment of time.
Questions
1. State the four necessary conditions for deadlock to occur.
2. Differentiate between deadlock, starvation and race conditions.
3. When is a system said to be in a safe state?
4. List and explain any 4 methods you can use to recover your system from a deadlock situation.
UNIT 7 Device Management
Introduction
Device management in an operating system means controlling input/output devices such as
disks, microphones, cameras, keyboards, printers, magnetic tapes, USB ports, camcorders,
scanners, other accessories, and supporting units such as control channels. A process may
require various resources, including main memory, file access, access to disk drives, and others.
If the resources are available, they can be allocated and control returned to the CPU; otherwise,
the process must be postponed until adequate resources become available. The system has
multiple devices, and in order to handle these physical or virtual devices, the operating system
requires a separate program known as a device controller. It also determines whether the
requested device is available.
Learning Objectives
After completing this unit, you should be able to describe:
• Features of dedicated, shared, and virtual devices
• Differences between sequential and direct access media
• Roles of seek time, search time, and transfer time in calculating access time
• Critical components of the input/output subsystem, and how they interact
• Strengths and weaknesses of common seek strategies, including FCFS, SSTF, SCAN/LOOK and
C-SCAN/C-LOOK, and how they compare
7.2.1. Dedicated Devices
Dedicated devices are assigned to only one job at a time; examples include tape drives, printers, and
plotters. It would be awkward to let several users share them: a shared plotter might produce half of one
user's graph and half of another's.
Disadvantages
1. They must be allocated to a single user or job for the duration of that job's execution, which can
be quite inefficient because the device may not be in use 100 percent of the time.
2. Resources are wasted whenever the device sits idle.
7.2.2. Shared Devices
Shared devices can be assigned to several processes at a time. They include any direct access storage
device (DASD), such as a disk. Several processes can share a DASD at the same time by interleaving
their requests, but this interleaving must be carefully controlled by the Device Manager. All conflicts,
such as when Process A and Process B each need to read from the same disk, must be resolved based on
predetermined policies that decide which request is handled first.
7.2.3. Virtual Devices
Virtual devices are dedicated devices that have been transformed into shared devices. For example,
printers (which are dedicated devices) are converted into sharable devices through a spooling program
that reroutes all print requests to a disk. Only when all of a job's output is complete, and the printer is
ready to print the entire document, is the output sent to the printer. Have you ever sent a file to a printer
that was offline and then forgotten that you did so? Whenever that printer is next connected to your
system, the file gets printed! (This procedure has to be managed carefully to prevent deadlock.) Because
disks are sharable devices, this technique can convert one printer into several virtual printers, thus
improving both its performance and utilization. Spooling is a technique that is often used to speed up
slow dedicated I/O devices.
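The spooling idea described above can be sketched in Python. This is a toy model under stated assumptions: the class name, the in-memory queue standing in for the spool area on disk, and the job names are all illustrative, not part of the course text.

```python
from collections import deque

class PrintSpooler:
    """Toy model of spooling: a dedicated printer becomes a virtual device
    by buffering complete jobs on disk (here, an in-memory queue)."""
    def __init__(self):
        self.spool = deque()   # stands in for the spool area on disk
        self.printed = []

    def submit(self, job_name, pages):
        # The whole job is written to the spool first; nothing goes to the
        # printer until the job's output is complete.
        self.spool.append((job_name, pages))

    def run_printer(self):
        # The real printer drains the spool one complete job at a time,
        # so output from different jobs is never interleaved.
        while self.spool:
            job_name, pages = self.spool.popleft()
            self.printed.append(job_name)

spooler = PrintSpooler()
spooler.submit("report.pdf", 12)
spooler.submit("graph.png", 1)
spooler.run_printer()
print(spooler.printed)  # ['report.pdf', 'graph.png'] - jobs emerge whole, in order
```

Because every job reaches the printer only as a complete unit, the half-of-one-graph, half-of-another problem of naively sharing a plotter cannot occur.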
7.3. Storage Media
The storage media that any digital device uses can be classified into two groups: sequential access
media and direct access storage devices (DASD). There are vast differences in their speed and
shareability.
7.3.1. Sequential Access Storage Media
These are storage media that store records sequentially, i.e. one after the other. In early systems they
included printouts, punch cards and paper tape; this was before magnetic tape was developed for routine
secondary storage. Magnetic tapes are now used for routine archiving and for storing backup data. On
magnetic tape, records of any length are stored serially, and each record is identified by its position on
the tape. To access a record, the tape is mounted and "fast-forwarded" from the beginning until the
desired position is reached, which can be time consuming.
Tapes usually have nine tracks, with the ninth track used as a parity bit (for error checking). Data is
recorded on the other eight parallel tracks.
The number of characters that can be recorded per inch depends on the density of the tape, e.g. 1600
bytes per inch (bpi). Between each record on the tape is an inter-record gap (IRG) which is about
½ inch long. The IRG gives the tape space to stop while it waits for the next read; the tape only moves
when a record needs to be accessed, otherwise it stands still.
There are two ways to store records on a tape: individually or grouped into blocks. If stored
individually, each record (irrespective of its length) is followed by an IRG. This can waste a lot of tape
when records are short, say 1/10 of an inch long.
If records are grouped into blocks, then each block is followed by an inter-block gap (IBG), which is
also ½ inch long. Blocking is performed when the file is created, and the blocks must be "deblocked"
before the records can be accessed. The number of records in a block is usually determined by the
application program, which normally takes advantage of the transfer rate of the tape.
The access time of a tape is how long it takes to access a record or block on the tape; it varies with the
record's position. For a 2400-foot tape, the average access time is 1.25 minutes.
Advantages of Blocking
1. Fewer i/o operations are needed because a single read command can move the entire block
2. Less tape is wasted
Disadvantages of blocking
1. Overhead and software routines are needed for blocking, deblocking and record keeping.
2. Buffer space may be wasted if you need only one record but must read the entire block to get it.
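The tape savings from blocking can be worked through with the figures used above (1/10-inch records and ½-inch gaps). The function below is a hypothetical sketch, not from the text; it simply charges one gap per block.

```python
import math

def tape_length_inches(num_records, record_len=0.1, gap=0.5, per_block=1):
    """Tape consumed when records are stored in blocks of `per_block`
    records, each block followed by one gap (an IRG when per_block is 1,
    an IBG otherwise)."""
    blocks = math.ceil(num_records / per_block)
    return num_records * record_len + blocks * gap

# 1000 records, each 1/10 inch, with 1/2-inch gaps:
print(round(tape_length_inches(1000, per_block=1), 1))   # 600.0 inches, one IRG per record
print(round(tape_length_inches(1000, per_block=10), 1))  # 150.0 inches, one IBG per 10 records
```

Blocking by tens cuts tape consumption to a quarter here, which illustrates the "less tape is wasted" advantage, at the cost of the deblocking overhead noted above.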
Figure 7.3: Block Records with Inter Block Gaps
Storage disks are structured as a set of tracks that appear as concentric circles; each circle is a track.
The tracks are numbered from the outermost to the innermost in ascending order, and each track is
further divided into sectors. See figure 7.5.
Movable-head devices have only a few read/write heads that move from track to track to cover the
entire surface of the drum or disk; a movable-head disk typically has one read/write head per surface
that floats over the disk. For the system to read or write data, the disk must spin; this contributes to the
noise you sometimes hear when you double-click a file to open it. Disks can be a single unit, as in a
5¼-inch disk, or part of a disk pack, which is a stack of disks mounted on a central spindle ½ inch apart
to allow space for the read/write heads (Figure 7.6). Disk packs are made of platters, each of which has
two recording surfaces, except the top and bottom platters. For a disk pack, data can be recorded either
track by track across all disks or by filling one disk before writing on another. The read/write heads
move between each pair of surfaces, and all the heads are moved in unison by the arm.
Figure 7.7 shows a typical hard drive from a PC showing the arm that floats over the surface of
the disk (Courtesy Seagate Technology)
On a magnetic disk, the sectors are of different sizes: bigger at the rim and smaller at the centre.
The disk spins at a constant angular velocity (CAV) to compensate for this difference.
Seek Time: time required to position the read/write head on the proper track. It does not apply to
fixed head devices.
Search time/rotational delay time: time it takes to rotate the drum/disk until the requested record
is moved under the read/write head.
Transfer time: time it takes to transfer data from secondary storage to main memory.
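The three quantities just defined add up to the total access time for a request. The sketch below is illustrative; the drive figures (8 ms seek, 4 ms rotational delay, a transfer rate of 1,024,000 bytes per second) are made-up example values, not from the text.

```python
def access_time_ms(seek_ms, rotational_delay_ms, bytes_to_read, transfer_rate_bps):
    """Access time = seek time + search (rotational delay) time + transfer time.
    seek_ms is 0 for fixed-head devices, which have no arm movement."""
    transfer_ms = bytes_to_read / transfer_rate_bps * 1000
    return seek_ms + rotational_delay_ms + transfer_ms

# Hypothetical drive: 8 ms average seek, 4 ms average rotational delay,
# reading one 512-byte sector at 1,024,000 bytes/second:
print(round(access_time_ms(8, 4, 512, 1_024_000), 3))  # 12.5 ms
```

Note how the transfer itself (0.5 ms) is dwarfed by the mechanical delays, which is why the seek strategies discussed later concentrate on minimizing arm movement.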
Cache size has a substantial impact on perceived performance.
A hardware cache acts as a buffer by transferring blocks of data from the disk, anticipating
that the user may want to reread some recently retrieved information, which can be done
quickly if the information remains in the cache.
The cache can also act as a read-ahead buffer, fetching the next block of information from
the disk.
Latency time is the time it takes for data to rotate from its current position to a position
adjacent to the read/write head.
Some systems also have a disk controller, or disk drive interface, which is a special-purpose device
used to link the disk drives with the system bus. Disk drive interfaces control the transfer of
information between the disk drives and the rest of the computer system. The operating system
normally deals with the controller, not the device.
Example
For example, after a device has begun writing a record, and before it has completed the task, the
connection between the device and its control unit can be cut off so the control unit can initiate
another I/O task with another device. Meanwhile, at the other end of the system, the CPU is free
to process data while I/O is being performed, which allows concurrent processing and I/O. For
the system to know when a device has completed its job, it tests a hardware flag. This flag has
three bits and resides in the channel status word (CSW), which sits in a predefined location in
main memory and contains information indicating the status of each channel. Each bit represents
one component of the I/O subsystem: one each for the channel, control unit, and device. A bit is
changed from 0 to 1 to indicate that the unit has changed from free to busy, and back again when
it becomes free. Each component has access to the flag, which can be tested before proceeding
with the next I/O operation to ensure that the entire path is free.
Two common ways to perform the test are polling and interrupts. Polling uses a special
instruction to test the flag. For example, the CPU periodically tests the channel status bit (in the
CSW); if the channel is still busy, the CPU performs some other processing task until the test
shows that the channel is free, and then the channel performs the I/O operation. The disadvantage
of this scheme is determining how often the flag should be polled:
If polling is done too frequently, the CPU wastes time testing the flag just to find
out that the channel is still busy.
If polling is done too seldom, the channel could sit idle for long periods of time.
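The polling trade-off can be sketched as a loop. This is an illustrative simulation, assuming a callable that reports the flag's state; the function names and the fake channel are made up for the example.

```python
import time

def poll_channel(is_busy, interval_s, timeout_s=1.0):
    """Busy-wait polling: repeatedly test the channel-status flag.
    A short interval wastes CPU time on fruitless tests; a long interval
    leaves the channel idle between freeing up and the next test."""
    waited = 0.0
    while is_busy():
        time.sleep(interval_s)   # the CPU could do other work here instead
        waited += interval_s
        if waited >= timeout_s:
            return False         # gave up: channel still busy
    return True                  # flag shows the channel is free

# Simulated channel that becomes free after a few tests:
state = {"tests_left": 3}
def fake_busy():
    state["tests_left"] -= 1
    return state["tests_left"] > 0

print(poll_channel(fake_busy, interval_s=0.001))  # True
```

Tuning `interval_s` is exactly the dilemma described above: shrink it and the CPU burns cycles testing; grow it and a freed channel sits unnoticed.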
Interrupts
The use of interrupts is a more efficient way to test the flag. Instead of having the CPU test the
flag, a hardware mechanism does the test as part of every machine instruction executed by the
CPU. If the channel is busy, the flag is set so that execution of the current sequence of instructions
is automatically interrupted and control is transferred to the interrupt handler, which is part of the
operating system and resides in a predefined location in memory. The job of the interrupt handler
is to determine the best course of action based on the current situation, because it is not unusual for
more than one unit to have caused the I/O interrupt. The interrupt handler must find out which unit
sent the signal, analyse its status, restart it when appropriate with the next operation, and finally
return control to the interrupted process.
7.4.2. Direct memory access (DMA)
Direct memory access (DMA) is an I/O technique that allows a control unit to access main
memory directly. Once reading or writing has begun, the remainder of the data can be transferred
to and from memory without CPU intervention. This mode of data transfer is used for high-speed
devices such as disks. Without DMA, the CPU is responsible for the physical movement of data
between main memory and the devices, a time-consuming task that results in significant overhead
and decreased CPU utilization.
Buffers
Buffers are temporary storage areas residing in convenient locations throughout the system.
They are used to better synchronize the movement of data between slow I/O devices and the very
fast CPU; they hold data from input devices before it is needed by the CPU, and vice versa.
Sometimes a double buffering system is used to minimize device idle time. Here two buffers are
present: one holds data while it is being processed by the CPU, and the other is used to read or
write data that has previously been stored by the CPU.
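The double-buffering idea can be sketched sequentially. This is a simplified illustration (in a real system the "device fills" step and the "CPU processes" step run concurrently; here they are interleaved in one loop to show the role swap):

```python
def double_buffered_read(blocks, process):
    """Double buffering: while the CPU processes the data in one buffer,
    the device fills the other; the two swap roles each cycle, so neither
    side waits on a single shared buffer."""
    results = []
    buffers = [None, None]
    filling, processing = 0, 1
    source = iter(blocks)
    buffers[filling] = next(source, None)            # device fills buffer 0
    while buffers[filling] is not None:
        filling, processing = processing, filling    # swap the buffers' roles
        buffers[filling] = next(source, None)        # device refills one buffer...
        results.append(process(buffers[processing])) # ...while the CPU uses the other
    return results

print(double_buffered_read([1, 2, 3], lambda b: b * 10))  # [10, 20, 30]
```

With a single buffer, the CPU would have to wait for each refill before processing; with two, the refill of one buffer overlaps the processing of the other.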
1. I/O traffic controller: monitors the status of every device, control unit, and channel. To do
this, it maintains a database containing the status and connections for each unit in the
I/O subsystem.
2. I/O scheduler: allocates the devices, control units and channels. When there are more requests
than available paths, the I/O scheduler must decide which request will be satisfied
first. I/O requests are not pre-empted, i.e. once a channel program has started, it is allowed
to continue to completion, even if there are I/O requests with higher priority.
3. I/O device handler: processes the I/O interrupts, handles error conditions, and provides
detailed scheduling algorithms, which are device dependent. Each type of I/O device has
its own device handler algorithm.
7.5.1. Device Handler Seek Strategies for hard disk with movable R/W head
During light loads (i.e. a small request queue) the system uses FCFS scheduling to handle
requests. FCFS is fair, but it causes a lot of arm movement: if the system must read or write
tracks 1, 20, 2, 30, 5 in that order, the read/write head keeps moving up and down the disk.
Under heavy loads, other scheduling algorithms are used.
A good disk scheduling policy should have the following characteristics:
1. minimize arm movement, thereby maximizing throughput
2. minimize mean response time
3. minimize the variance in response time
The main goal is to keep seek time to a minimum.
The various disk scheduling policies are:
1. FCFS (for light loads)
2. SSTF – Shortest Seek Time First (for moderate loads): the request on the track closest to
the one being served is the next to be satisfied; that is, the disk arm moves the shortest
distance to the next track to be served. This may leave requests for tracks farther away
waiting a long time.
3. SCAN (light to moderate loads): the disk arm services requests along its path. When it
gets to the outermost or innermost track, it reverses direction and services all requests
along its path that were already in the queue; new requests must wait. SCAN ensures
that requests farthest from the arm do not wait too long.
4. A variation of SCAN is LOOK (the elevator algorithm), in which the arm does not go all
the way to the edges unless there are requests there.
5. N-STEP SCAN: moves as in SCAN; however, any requests that arrive while the arm is
moving in one direction are incorporated and reordered in the queue so that they are
serviced on the arm's return trip.
6. C-SCAN (moderate to heavy loads): circular SCAN moves in one direction only, from the
outermost track to the innermost, servicing requests (and picking up new ones) on its
inward sweep. When it gets to the innermost track, it immediately returns to the
outermost track and starts servicing the requests that arrived during its last inward sweep.
7. C-LOOK: the LOOK version of C-SCAN; the arm does not go all the way to the innermost
track unless there are requests there.
8. In practice, the best scheduling policy may be a combination of two of these.
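Three of the policies above can be compared by simulating the arm and totalling its movement. The sketch below is illustrative (the function and its parameters are made up for this example); it is applied to the Nando Hall exercise from the review questions, where each room moved costs one second.

```python
def total_movement(start, requests, policy, direction="down"):
    """Total tracks (or rooms) travelled by the arm under FCFS, SSTF or LOOK."""
    pending = list(requests)
    if policy == "FCFS":
        order = pending                      # serve strictly in arrival order
    elif policy == "SSTF":
        order, pos = [], start
        while pending:                       # always pick the closest request next
            nxt = min(pending, key=lambda t: abs(t - pos))
            pending.remove(nxt)
            order.append(nxt)
            pos = nxt
    elif policy == "LOOK":
        # serve everything on the current path, then reverse (elevator style)
        lower = sorted((t for t in pending if t <= start), reverse=True)
        upper = sorted(t for t in pending if t > start)
        order = lower + upper if direction == "down" else upper + lower
    moved, pos = 0, start
    for track in order:
        moved += abs(track - pos)
        pos = track
    return moved

# Mr Peace is at room 10, heading toward room 1, with outstanding
# requests for rooms 1, 11, 29, 14, 22 (one second per room moved):
print(total_movement(10, [1, 11, 29, 14, 22], "FCFS"))  # 60
print(total_movement(10, [1, 11, 29, 14, 22], "SSTF"))  # 47
print(total_movement(10, [1, 11, 29, 14, 22], "LOOK"))  # 37
```

LOOK wins here because it finishes the current sweep (room 1) before reversing, while FCFS zig-zags and SSTF makes room 1 wait until the very end.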
Review Questions
1. What are the components of an I/O subsystem and what are their functions?
2. With suitable diagrams, compare and contrast the structure of the hard disk with that of the CD-
ROM
3. System devices can be broadly classified into 3. List and describe these three.
4. In a bid to reduce the outbreak of malaria on the ground floor of Nando Hall, the hall warden
authorized that the rooms on that floor be sprayed by Mr Peace. There are 30 rooms on that floor,
and the students make requests to Mr Peace for their rooms to be sprayed. At a point in time,
Mr Peace was heading toward the beginning of the floor (i.e. lower room numbers) but stopped in
room 10 to service a request. The outstanding requests he has are for rooms 1, 11, 29, 14 and 22,
in that order. It takes Mr Peace 1 second to move from one room to the next. Ignoring how long it
takes to spray a room, state the sequence of Mr Peace's movement along the floor and calculate
the total seek time if he is using:
i. Shortest Seek Time First
ii. FCFS
iii. LOOK
UNIT 8 FILE MANAGEMENT
Introduction
The File Manager controls every file in the system. The efficiency of the File Manager depends
on how the system’s files are organized; how they are stored; how each file’s records are
structured; and how access to these files is controlled. In this unit, we will study all of these.
The File Manager (also called the file management system) is the software responsible for creating,
deleting, modifying, and controlling access to files—as well as for managing the resources used
by the files. The File Manager provides support for libraries of programs and data to online users,
for spooling operations, and for interactive computing. These functions are performed in
collaboration with the Device Manager that we studied in the previous lesson.
Learning Objectives
After completing this unit, you should be able to describe:
1. The fundamentals of file management and the structure of the file management system
2. File-naming conventions, including the role of extensions
3. The difference between fixed-length and variable-length record format
4. The advantages and disadvantages of contiguous, non-contiguous, and indexed file storage
techniques
Files are made up of records, and records consist of fields. File types include: executable, text,
object, library, source code, print (PDF), archive (.zip), and multimedia files.
There are several operations that can be carried out on files. These include creating, opening,
reading, writing, appending, closing, deleting and renaming them.
8.1.1. Characteristics of Files
When working with certain files, especially data files, there are some characteristics that are
usually noted. They are:
1. File volatility: this is the frequency of additions and deletions made to a file
2. Activity: this is the percentage of a file's records accessed during a given period of time
3. Size: the amount of information stored in the file
3. Auxiliary Storage management: allocating space for files on secondary storage device.
4. File Integrity Mechanisms: guaranteeing that the information stored on files is not
corrupted.
8.4. File Organization
File organization refers to the way the records of a file are arranged on secondary storage. The
popular ones include:
Sequential record organization: used mainly on magnetic tape. The records are stored and
accessed serially, one after the other; to access a record you must search from the
beginning of the file.
Direct record organization: used only on direct access storage devices. Records are accessed
or stored directly (randomly) by their physical addresses. A hashing technique or
algorithm is used to locate data in direct access files, i.e. to convert a record's logical
address (its key) into a physical address.
Indexed sequential organization: records are arranged in logical sequence according to the
record key, and the system maintains an index of their physical addresses. Indexed
sequential records can be accessed sequentially in key order or directly using the
system-generated index.
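The hashing step in direct record organization can be sketched as follows. This is a toy illustration (the hash function, bucket count and record keys are invented for the example, and collisions are ignored; a real scheme needs a collision-resolution policy).

```python
def hash_address(record_key, num_buckets):
    """Toy hashing technique for a direct file: convert the record key
    (logical address) into a physical bucket number on the device."""
    return sum(record_key.encode()) % num_buckets

# Store and retrieve by recomputing the same hash - no sequential search:
disk = {}                                  # bucket number -> record
for key in ("MAT101", "CSC421"):
    disk[hash_address(key, 64)] = key

print(disk[hash_address("CSC421", 64)])    # 'CSC421'
```

Retrieval recomputes the same function on the key and goes straight to the bucket, which is what makes direct access faster than scanning from the beginning as sequential organization must.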
When data is stored in fixed-length fields as in (a) above, any data that extends beyond the fixed
size is truncated (can you decipher the field length, in characters, used in (a)?). However, when
data is stored in a variable-length record format as in (b) above, each field expands to fit its
contents, but it takes longer to access.
8.5.1. Contiguous storage
In this type of allocation, the records of a file are stored one after the other. A single set of blocks
is allocated to a file at the time of creation, so every part of the file is stored in the same compact
area (see figure 8.2). The file allocation table states the starting block and length of each file; any
record can be found and read once the starting address and size are known.
Because of the way file blocks are allocated, this scheme suffers from external fragmentation:
slivers of unused storage space (blocks 0, 1, 5, 29, etc. in figure 8.2) are left between sets of
blocks belonging to different files. When this occurs, the operating system must perform
compaction: the blocks of the different files are rearranged so that they are closely packed,
gathering the unused blocks that lay between the files into one region large enough to
accommodate other files contiguously. Figure 8.3 shows the outcome of compacting the disk of
figure 8.2.
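Compaction can be sketched on a toy disk. This is an illustrative model, not the figure from the text: each list cell is one disk block, `None` marks an unused sliver, and letters stand for the files' blocks.

```python
def compact(disk):
    """Compaction: slide every allocated block toward the start of the disk,
    preserving order, so the free slivers merge into one contiguous region."""
    used = [block for block in disk if block is not None]
    return used + [None] * (len(disk) - len(used))

# External fragmentation: unused slivers sit between the blocks of files A and B.
disk = [None, None, "A", "A", "A", None, "B", "B", None, None]
print(compact(disk))
# ['A', 'A', 'A', 'B', 'B', None, None, None, None, None]
```

Before compaction no contiguous run of five free blocks exists even though five blocks are free in total; afterwards a five-block file could be allocated contiguously.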
8.5.2 Non-contiguous Storage
Non-contiguous storage allows files to use any storage space available on the disk. A file's
records are stored contiguously if there is enough empty space; otherwise, any remaining records,
and all later additions to the file, are stored in other sections of the disk. The two methods of
non-contiguous storage are chained (linked) storage and indexed storage.
In indexed storage, the file allocation table contains a separate one-level index for each file saved
on the medium; that is, the file allocation table holds the block number of each file's index block.
The index has one entry for each portion or block allocated to the file. Figure 8.5 shows what this
type of storage looks like.
Indexed storage supports both sequential and direct access to file records. However, it does not
necessarily improve the use of storage space, because each file must have an index block. For
larger files with more entries, several levels of indexes can be generated: to find a desired record,
the File Manager accesses the first (highest-level) index, which points to a second, lower-level
index, which points to an even lower-level index, and eventually to the data record.
Figure 8.5: Indexed allocation
R = Read Access, W = Write Access, E = Execute Access, D = Delete Access, and a dash (-) = Access Not Allowed
The table above shows the access control matrix showing access rights for each user for each file.
User 1 is allowed unlimited access to File 1 but is allowed only to read and execute File 4 and is
denied access to the three other files. Can you decipher the rights for the other users?
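A rights check against such a matrix can be sketched in Python. The matrix below is a hypothetical fragment modelled on the description above (User 1's row only); the dictionary layout and function name are invented for the example, with a missing entry standing for the dash (access not allowed).

```python
# Rows are users, columns are files, entries are the rights granted
# (R = read, W = write, E = execute, D = delete; no entry = denied).
acm = {
    "User1": {"File1": "RWED", "File4": "RE"},
}

def allowed(user, filename, right):
    """Check one requested right against the access control matrix."""
    return right in acm.get(user, {}).get(filename, "")

print(allowed("User1", "File1", "W"))  # True  - unlimited access to File 1
print(allowed("User1", "File4", "W"))  # False - only read and execute
print(allowed("User1", "File2", "R"))  # False - a dash in the matrix
```

Every file request goes through a check like this before the File Manager touches the data, which is how the matrix enforces the per-user, per-file rights.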
The above diagram is a typical module of a file management system showing how information is
passed from the File Manager to the Device Manager.
Questions
1. What are the two main ways that files are stored on any storage medium?
2. What is fragmentation?
3. In the chained storage, what does a null pointer signify?
4. Why is compaction performed by the operating system?
5. File A is stored using chained storage and File B, which is the same size as File A, is stored
using indexed storage. Which file will occupy more space on the disk?
6. Describe linked storage with a suitable diagram.
7. Write short notes on any 5 types of file system.
UNIT 9 PERFORMANCE
Introduction
Performance is a measure of the effectiveness of the operating system (O/S) in managing the
computer system's resources. In computing, computer performance is the amount of useful work
accomplished by a computer system. Outside of specific contexts, computer performance is
estimated in terms of the accuracy, efficiency and speed with which computer program instructions
execute. The next time your system is on, press CTRL+ALT+DELETE and select the Task
Manager: you can view the performance of your system from there.
Learning Objectives
After completing this unit, you should be able to describe:
• The trade-offs to be considered when attempting to improve overall system performance.
• The roles of system measurement tools such as positive and negative feedback loops.
• Two system monitoring techniques.
INSTRUCTION MIXES:- This is also used for quick comparisons. The technique uses a
weighted average of various instruction timings, suited to a particular application.
KERNEL PROGRAMS:- These are used at the installation stage. The program is run on different
systems and timed; by comparing the execution times, you can tell which system
performs better.
BENCHMARKS:- These are real programs that are installed on the system being evaluated and
executed using real data. Since the performance of the benchmark is already known,
the computer's performance can be compared with what is expected.
SYNTHETIC PROGRAMS:- These are artificial programs custom-designed to exercise
specific features of a machine. Two standard synthetic programs are Whetstone
and Dhrystone.
1. Event-Driven Simulators:- These are controlled by events that are made to occur in the
model according to probability.
9.2.3. Performance Monitoring Techniques
Performance monitoring is the collection and analysis of system performance information for
existing systems. It is useful in determining how a system performs in terms of throughput,
response time, etc., and it helps to locate bottlenecks quickly. Performance monitoring can be
accomplished by software or hardware techniques. Some of the techniques are:
Whetstone is a synthetic benchmark program designed to measure a processor's number-
crunching effectiveness on floating-point calculations. It is useful in comparing how effectively
processors run FORTRAN programs.
LINPACK programs (SLINPACK for single-precision and DLINPACK for double-precision
calculations) are genuine benchmark programs (not synthetic) that are also used for measuring
floating-point performance.
The Savage benchmark measures the accuracy of trigonometric floating-point operations rather
than their performance.
Dhrystone is a synthetic benchmark developed to measure how effectively an architecture runs
systems programs; the Ada version has 100 statements. The performance of Dhrystone depends
on the size of the cache: if the program fits in the cache, it executes faster.
This means that the resource has become saturated: processes competing for the resource's
attention start to interfere with one another. An example of saturation is thrashing. To detect a
bottleneck, the system checks the resource's request queue; if the queue begins to grow quickly,
the resource has become saturated, because the arrival rate of requests is greater than the service
rate. To remove a bottleneck, we can increase the capacity of the resource or add more resources
of the same type at the point of need.
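The saturation test just described (arrival rate exceeding service rate) can be sketched as follows. The function names and the example rates are illustrative, not from the text.

```python
def is_saturated(arrival_rate, service_rate):
    """A resource saturates (becomes a bottleneck) when requests arrive
    faster than they are served: its queue then grows without bound."""
    return arrival_rate > service_rate

def queue_growth(arrival_rate, service_rate, seconds):
    """Expected growth of the request queue over an interval (never negative:
    an under-loaded resource simply keeps its queue empty)."""
    return max(0, (arrival_rate - service_rate) * seconds)

# A disk serving 80 requests/s while 100/s arrive is a bottleneck:
print(is_saturated(100, 80))      # True
print(queue_growth(100, 80, 10))  # 200 requests backed up after 10 s
```

A monitor watching the queue length over successive intervals can flag the resource as soon as the growth stays positive, which is the quick bottleneck detection mentioned above.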
printer has much less. In such a situation, some of the operations may be re-routed to the printer
with the lighter load.
2. Positive Feedback Loop:- This, on the other hand, monitors the system to check whether it is
underutilized; if so, it signals the appropriate manager to increase the rate of job arrivals.
Positive feedback loops are used with caution in paged virtual memory systems, because if too
many jobs are loaded the result can be thrashing, which decreases processor utilization. If a
positive feedback loop is not well designed, it can lead to an unstable system state.
Questions:
UNIT 10 SECURITY
Introduction
Security refers to providing protection for computer system resources such as the CPU, memory,
disks and software programs, and most importantly for the data and information stored in the
computer system. If a computer program is run by an unauthorized user, he or she may cause
severe damage to the computer or the data stored in it. A computer system must therefore be
protected against unauthorized access, malicious access to system memory, viruses, worms, and
so on. The various ways this can be done are discussed in this unit.
Learning Objectives
After completing this topic, you should be able to describe:
• The role of the operating system regarding system security
• The effects of system security practices on overall system performance
• The levels of system security that can be implemented and the threats posed by evolving
technologies
User Interface Security:- Once a user is allowed physical access to a computer, the operating
system should grant the appropriate access level only after it has identified the user.
Operational Security:- This approach divides the jobs to be performed on the system among
different personnel, e.g. superusers, programmers and operators. In this way, an operator cannot
do what a programmer is authorized to do.
Surveillance:- This deals with monitoring and auditing the system as well as authenticating users.
Voice recognition or thumbprints can also be used.
Threat Monitoring:- Here users are not granted direct and full access to critical files on the
system; an operating system routine called the surveillance program accesses them instead. Such
accesses are requested through the operating system, which may either grant or deny them.
Amplification:- This is granting the surveillance program more access rights than the requesting
user has, so that it can perform its function on the user's behalf.
Password protection:- Here the user registers a password with the system, which stores it; at
every log on, the user must supply the password.
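The register-then-verify cycle of password protection can be sketched in Python. This is an illustrative refinement of the scheme above: rather than storing the password itself, the sketch stores a salted hash (a common hardening, assumed here rather than stated in the text), so a stolen password file does not reveal the plaintext.

```python
import hashlib
import hmac
import os

def register(password):
    """Store a salted hash of the password, never the plaintext itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def login(password, salt, stored_digest):
    """At every log on the supplied password is re-hashed with the stored
    salt and compared in constant time against the stored digest."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, stored_digest)

salt, digest = register("correct horse")
print(login("correct horse", salt, digest))  # True
print(login("wrong guess", salt, digest))    # False
```

The per-user salt also means two users with the same password get different digests, which defeats precomputed-dictionary attacks on the stored file.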
Auditing:- This is done to check whether fraudulent activities have occurred. In a computer
system, an audit log is a permanent record of the important events that occur in the system; it is
stored in a protected area of the system.
Access control:- This is the key to internal security. It controls access to stored data. Access
rights are granted to people, processes or programs; these rights are read, write, execute or
modify. An access control matrix is used for this.
Security Kernel:- The security of the operating system depends on securing access control,
logging, monitoring, real storage management, virtual storage and the file system. These
functions occupy a large portion of the operating system's code, which makes it difficult to keep
the kernel small. The security kernel approach ensures that systems are secure by building
security into the system at the design stage; it is imperative to make the kernel of the operating
system secure.
14. Developing and supporting organizational security standards, best practices, preventative
measures, and disaster recovery plans
15. Conducting penetration tests (simulating cyberattacks to find vulnerabilities before others
can find them)
16. Reporting on security breaches to users, as necessary, and to upper management
17. Implementing and updating software to protect information
18. Staying up to date on IT security trends and information
19. Recommending security enhancements to management and C-suite executives
10.4 Levels of Protection
The system administrator evaluates the intrusion risk of each computer configuration, which
depends on the level of connectivity given to the system. There are four levels at which a system
must be protected:
1. Physical - The easiest way to steal data is to pocket the backup tapes. Also, access to the root
console will often give the user special privileges, such as rebooting the system from
removable media. Even general access to terminals in a computer room offers some
opportunities to an attacker, although today's high-speed networking environment
provides more and more opportunities for remote attacks.
2. Human - The humans who are allowed access to a system should be trustworthy and not
susceptible to being coerced into breaching security. However, more and more attacks today
are made via social engineering, which basically means fooling trustworthy people into
accidentally breaching security. Some of the methods used to fool people are:
Phishing involves sending an innocent-looking e-mail or web site designed to fool people
into revealing confidential information, e.g. spam e-mails pretending to be from eBay,
PayPal, or any of a number of banks or credit-card companies.
Dumpster diving involves searching the trash or other locations for passwords that are
written down. (Passwords that are too hard to remember, or which must be changed
frequently, are more likely to be written down somewhere close to the user's station.)
Password cracking involves getting users' passwords, either by watching them type their
passwords in, knowing something about them such as a pet's name, or simply trying all
the words in common dictionaries. ("Good" passwords should contain a minimum
number of characters, include non-alphabetical characters, not appear in any dictionary
in any language, and be changed frequently. Note also that it is proper etiquette to
look away from the keyboard while someone else is entering their password.)
3. Operating System -The O/S must protect itself from security breaches, such as runaway
processes (denial of service), memory-access violations, stack overflow violations, the
launching of programs with excessive privileges, and many others.
4. Network -As network communications become ever more important and pervasive in modern
computing environments, it becomes ever more important to protect this area of the system.
(Both protecting the network itself from attack, and protecting the local system from attacks
coming in through the network.) This is a growing area of concern as wireless communicatio ns
and portable devices become more and more prevalent.
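The dictionary-based password cracking described under the human level of protection can be sketched in a few lines. This is a minimal illustration, assuming a tiny hypothetical wordlist and a stored SHA-256 password hash; real attacks use far larger wordlists and target the specific hash scheme in use.

```python
import hashlib

# Hypothetical stored hash and tiny wordlist, for illustration only.
stored_hash = hashlib.sha256(b'sunshine').hexdigest()
wordlist = ['password', 'letmein', 'sunshine', 'dragon']

def dictionary_attack(target_hash, words):
    """Hash every word in the list until one matches the target hash."""
    for word in words:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word          # the password was a common dictionary word
    return None                  # no word in the list matched
```

A password that appears in any dictionary falls to this attack immediately, which is why the guidelines above insist on non-dictionary passwords.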
The table below shows a simplified comparison of the security protection required for three
typical configurations. Notice that system vulnerabilities accumulate as the computer's level of
connectivity increases.

Configuration                             Ease of Protection   Relative Risk   Vulnerabilities
Single computer (no email or internet)    High                 Low             Compromised passwords, viruses
LAN connected (no internet)               Medium               Medium          Sniffers, spoofing (+ passwords, viruses)
LAN connected (with internet)             Low                  High            Emails, web servers, FTP, Telnet
                                                                               (+ sniffers, spoofing, passwords, viruses)
10.5 Backup and Recovery
Backup and recovery describe the process of creating and storing copies of data that can be used
to protect organizations against data loss. This is sometimes referred to as operational recovery.
Recovery from a backup typically involves restoring the data to the original location, or to an
alternate location where it can be used in place of the lost or damaged data. A proper backup copy
is stored in a separate system or medium, such as tape, from the primary data to protect against the
possibility of data loss due to primary hardware or software failure. Backups protect against
human errors, hardware failure, virus attacks, power failure, and natural disasters. Backups can
help save time and money if these failures occur.
The security system should be able to recover from successful or attempted breaches of security.
When sufficient backup and recovery policies are in place, and archiving techniques are standard
operating procedure, recovering from an unforeseen situation is far less difficult. Many system
managers use a layered backup schedule, backing up the entire system at intervals set by their
policies. As an extra measure of safety, copies of complete system backups are stored in a safe
off-site location.
Backups become essential when the system becomes unavailable because of a natural disaster or
when a computer virus infects the system. If the infection is discovered early, eradication
software can be run and damaged files reloaded from the backup copies. Any changes made since
the files were backed up will have to be regenerated. Backups, with one set stored off-site, are
also crucial to disaster recovery. The disaster could come from anywhere. Here are just a few of
the threats:
• water from a fire upstairs
• fire from an electrical connection
• malfunctioning server
• corrupted archival media
• intrusion from unauthorized users
Policies, procedures, and regular user training are essential elements of system management.
Many system failures are caused by honest mistakes made by well-intentioned users, not by
malicious intruders. Any written security procedures should recommend frequent password
changes, reliable backup procedures, guidelines for loading new software, compliance with
software licenses, network safeguards, guidelines for monitoring network activity, and rules
for terminal access.
10.6.1 Unintentional Intrusions
An unintentional intrusion is a security breach or data modification that does not result from a
planned intrusion. Examples include accidental, incomplete modification of data; access to data
records by non-synchronized processes; modification of only some fields of a record; and errors
due to incorrect storage of data values, such as a field that is not large enough to hold the
numeric value stored in it (Figure 10.1 shows this).
In Figure 10.1, the original data value is shown (a) in a field large enough to hold it. If the
field is too small, FORTRAN (b) replaces the data with asterisks, while COBOL truncates the
higher-order digits (values to the left of the dotted line) and stores only the digits that remain.
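The two overflow behaviours from Figure 10.1 can be imitated in a short sketch. The helper names are hypothetical; the point is only to show how the same oversized value yields asterisks in one convention and silently loses its high-order digits in the other.

```python
def fortran_fill(value, width):
    """FORTRAN-style behaviour: replace the value with asterisks if it won't fit."""
    digits = str(value)
    return digits if len(digits) <= width else '*' * width

def cobol_truncate(value, width):
    """COBOL-style behaviour: keep only the low-order digits that fit the field."""
    digits = str(value)
    return digits[-width:]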
Repeated trials describes the way an intruder enters the system by guessing authentic passwords.
Trap doors, including backdoor passwords, are unspecified and undocumented system entry
points. They are usually installed for future use by a diagnostician or programmer, and they
leave the system vulnerable to future intrusion.
Trash collection, also known as dumpster diving, is the use of discarded materials (such as
disks, CDs, and printouts) to enter a system illegally.
Malicious computer attacks usually violate state and federal law. Those convicted typically pay
significant fines, are sometimes given jail terms, and may have their computer equipment
confiscated.
A virus is a small program that alters computer operations without the user's permission. A
program must meet two criteria to be classified as a virus:
1. It must be self-executing; that is, it places its own code in the path of another program.
2. It must be self-replicating, which is usually accomplished by copying itself from infected
files to clean files.
Viruses can infect desktop computers and network servers, and they spread each time the host
file is executed. A virus is usually specific to an operating system and spreads using a wide
variety of applications.
A macro virus attaches itself to a template (such as NORMAL.DOT), which in turn attaches to
word-processing documents.
A worm is a memory-resident program that copies itself from one system to the next without the
aid of an infected program file. The effect of a worm is slower processing time for real work.
Worms are especially destructive on networks.
Spoofing
Spoofing is a security threat that relies on cleartext transmission, whereby the assailant fakes
the IP address of an Internet server and changes the address recorded in packets sent over the
Internet. In this way, unauthorized users disguise themselves as friendly sites.
A logic bomb is a destructive program with a fuse (a triggering event). It can spread unnoticed
throughout a network.
A time bomb is a destructive program that is triggered by a specific time.
A blended threat combines the characteristics of logic bombs and time bombs into a single
program that may include viruses, worms, Trojans, spyware, and other malicious code.
A blended threat typically exploits the vulnerabilities of target systems. Protection against
blended threats requires a combination of defences together with regular patch management.
Questions
1. What are the different levels of protecting a system?
2. If a password is made up of 5 characters, how many possible combinations of the characters are
there?
3. Can a virus harm your system? How?
UNIT 11 System Protection
Introduction
Threats can come from outsiders (those outside the organization) as well as from insiders
(employees or others who have access to the system). They include theft of intellectual property
or other confidential or sensitive information, fraud, and acts of system sabotage. Hence the
need for system protection.
Learning objectives
• The importance of system protection
• The role of education and ethical practices in system security
11.1 Protection
Protection refers to a mechanism that controls the access of programs, processes, or users to the
resources defined by a computer system. Protection supports a multiprogramming operating
system by letting many users safely share a common logical name space, such as a directory or
files. There is no single guaranteed method of system protection.
System vulnerabilities include:
- File downloads, e-mail exchange
- Vulnerable firewalls
- Improperly configured internet connections
Security issues require continuous attention and system protection is multifaceted.
In the above diagram of a university system, the firewall sits between the campus networks and
the Internet, filtering requests for access.
Typical firewall tasks
- Log activities that access the internet
- Maintain access control based on senders' or receivers' IP addresses
- Maintain access control based on the services requested
- Hide the internal network from unauthorized users
- Verify that virus protection is installed and enforced on the system
- Perform authentication based on the source of a request from the internet
Packet filtering: the firewall reviews the header information of incoming and outgoing Internet
packets and verifies the source address, destination address, and protocol authenticity.
Proxy server: a proxy server hides important network information from outsiders by making the
network server invisible. It determines the validity of network access requests and is itself
invisible to users. This is critical to firewall success.
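The packet-filtering idea above can be sketched as a set of rules applied to each packet's header fields. This is a toy illustration, not a real firewall: the rule sets, example addresses, and the dict-based packet representation are all hypothetical, and real filters operate on raw packet bytes at the network layer.

```python
# Hypothetical filtering rules, for illustration only.
BLOCKED_SOURCES = {'203.0.113.9'}        # known-bad sender addresses
ALLOWED_PROTOCOLS = {'TCP', 'UDP'}       # protocols the site permits
PROTECTED_DESTINATIONS = {'10.0.0.5'}    # internal hosts closed to outside traffic

def filter_packet(packet):
    """Return True if the packet may pass, False if the firewall drops it.

    `packet` is a dict with 'src', 'dst' and 'proto' header fields."""
    if packet['src'] in BLOCKED_SOURCES:
        return False
    if packet['proto'] not in ALLOWED_PROTOCOLS:
        return False
    if packet['dst'] in PROTECTED_DESTINATIONS:
        return False
    return True
```

Each rule inspects only header information, which is exactly why packet filtering is fast but can be fooled by spoofed source addresses, as discussed earlier.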
Authentication
Authentication is the verification of an individual's authorization to access the system. A
popular authentication tool is Kerberos, which was designed to provide strong authentication for
client/server applications and to answer the need for password encryption to improve network
security. The Kerberos protocol uses strong cryptography (the science of coding messages) so
that a client can prove its identity to a server, and vice versa, across an insecure network
connection. Once authentication is completed, the client and server can encrypt all of their
subsequent communications to assure privacy and data integrity.
Encryption
Encryption is the most extreme protection method for data: sensitive data are put into a secret
code, which is also called system communication encryption. For one system to communicate with
another, data are encrypted, transmitted, decrypted, and processed. With public-key encryption,
the sender encrypts the message using the receiver's public key, and the receiver uses the
matching private key to decode it.
Disadvantages
- It increases system overhead.
- The system becomes dependent on the encryption process itself.
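The public-key/private-key exchange described above can be illustrated with a toy RSA example. The numbers below are tiny textbook values and are completely insecure; they only show that encrypting with the public exponent and decrypting with the private exponent recovers the original message.

```python
# Toy RSA with tiny textbook primes -- insecure, for illustration only.
p, q = 61, 53
n = p * q                  # 3233, the modulus, part of both keys
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = 2753                   # private exponent: (e * d) % phi == 1

def encrypt(m):
    """Sender uses the receiver's public key (e, n)."""
    return pow(m, e, n)

def decrypt(c):
    """Receiver uses the private key (d, n)."""
    return pow(c, d, n)
```

Note the asymmetry: anyone may hold the public key (e, n) and encrypt, but only the holder of the private exponent d can decrypt.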
Sniffers
Sniffers are programs on computers attached to a network that peruse data packets as they pass
by, examining each packet for specific information. They are particularly problematic in
wireless networks, because anyone with a wireless device can detect a wireless network that's
within range. If the network is passing cleartext packets, it's quite easy to intercept, read,
modify, and resend them.
Password Management
The basic techniques for protecting hardware and software are to invest in good passwords and
to engage in careful user training, though this is not as simple as it sounds. Passwords are
forgettable, unlikely to be changed often, commonly shared, and considered bothersome by many
of the people who use them. A good password must be unusual, memorable, changed often, and
stored in encrypted form. In addition, a password should be at least 8 characters long and
should contain characters other than the 26 English letters. These properties directly affect a
password's ability to survive cracking attempts.
Number of combinations of passwords, depending on their length and available character set:

                                      Password length
Character set                         2       4       6       8       10       12
ASCII characters (128)                128^2   128^4   128^6   128^8   128^10   128^12
Printable characters (95)             95^2    95^4    95^6    95^8    95^10    95^12
Alphanumeric characters (62)          62^2    62^4    62^6    62^8    62^10    62^12
Lowercase letters and numbers (36)    36^2    36^4    36^6    36^8    36^10    36^12
Lowercase letters (26)                26^2    26^4    26^6    26^8    26^10    26^12
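Each entry in the table above is simply the character-set size raised to the password length, which a one-line sketch makes concrete:

```python
def combinations(charset_size, length):
    """Number of possible passwords of the given length over a character set."""
    return charset_size ** length

# A larger character set matters as much as a longer password:
lowercase_8 = combinations(26, 8)     # lowercase letters only
printable_8 = combinations(95, 8)     # all printable characters
```

For example, two lowercase letters give 26^2 = 676 possibilities, while eight printable characters already give 95^8, which is why brute-force cracking of long mixed-character passwords is impractical.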
Password Alternatives
• Smart cards
- A credit-card-sized calculator that requires something you have and something you know.
- It displays a constantly changing multi-digit number, synchronized with an identical number
generator in the system.
- The user must enter the number appearing on the smart card; for added protection, the user
also enters a secret code.
- The user is admitted to the system only if both the number and the code are validated.
• Biometrics
- The science and technology of identifying individuals based on each person's unique
biological characteristics.
- Current research focuses on analysis of the human face, fingerprints, hand measurements,
iris/retina, and voice prints.
- Biometrics positively identifies the person being scanned.
- A critical factor is reducing the margin of error, and the equipment is expensive.
• Graphics and pattern clicks
- The user establishes a sequence of clicks on a photo or illustration and repeats that
sequence to gain access.
- This eliminates keyboard entries.
Summary
The importance of a secure system cannot be overemphasized. A system is only as good as the
integrity of the data stored on it, and a single security breach damages that integrity. Damaged
integrity threatens the viability of even the best-designed system, its managers, its designers,
and its users. Vigilant security precautions are therefore essential.
UNIT 12 Concurrent Programming
Until now we’ve looked at multiprocessing as several jobs executing at the same time on a single
processor or on multiple processors. Multiprocessing can also refer to one job using several
processors to execute sets of instructions in parallel. Consider the expression:
A = 3 * B * C + 4 / (D + E) ** (F – G)
If you performed the calculations in some other order, you could come up with the wrong answer.
For many computational purposes, serial processing is sufficient; it’s easy to implement and fast
enough for most users.
Arithmetic expressions can be processed differently if we use a language that allows for
concurrent processing, with two constructs marking the boundaries of concurrent sections:
- COBEGIN
- COEND
These constructs indicate to the compiler which instructions can be processed concurrently:
COBEGIN
T1= 3 * B
T2= D + E
T3= F – G
COEND
COBEGIN
T4= T1 * C
T5= T2 ** T3
COEND
A= T4 + 4 / T5
A= 3 * B * C + 4 / (D + E) ** (F – G)
With this system, we’ve increased the computation speed, but we’ve also increased the complexity
of the programming language and the hardware. We’ve placed a large burden on the programmer
to explicitly state which instructions can be executed in parallel.
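The COBEGIN/COEND example above can be sketched in Python using a thread pool, where each COBEGIN block becomes a group of tasks submitted together. This is an illustrative sketch, not a real concurrent language construct; the function name and argument values are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(B, C, D, E, F, G):
    """Evaluate A = 3*B*C + 4 / (D + E) ** (F - G) in COBEGIN/COEND style."""
    with ThreadPoolExecutor() as pool:
        # First COBEGIN/COEND block: three independent sub-expressions in parallel.
        t1 = pool.submit(lambda: 3 * B)
        t2 = pool.submit(lambda: D + E)
        t3 = pool.submit(lambda: F - G)
        T1, T2, T3 = t1.result(), t2.result(), t3.result()
        # Second block: two more independent operations in parallel.
        t4 = pool.submit(lambda: T1 * C)
        t5 = pool.submit(lambda: T2 ** T3)
        T4, T5 = t4.result(), t5.result()
    # Final step is inherently serial: it needs both T4 and T5.
    return T4 + 4 / T5
```

Calling `.result()` between the two blocks plays the role of COEND: it forces the program to wait until every task in the block has finished before the next block begins.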
With a true concurrent processing system, the instruction is coded as a single expression. It is the
compiler that translates the algebraic expression into separate instructions and decides which steps
can be performed in parallel and which in serial mode. The equation Y = A+ B * C + D could be
rearranged by the compiler as A + D + B * C so that two operations A + D and B * C would be
done in parallel, leaving the final addition to be calculated last. Concurrent processing can also
dramatically reduce the complexity of working with array operations within loops.
To perform an array operation within a loop in three steps, the instructions might say:
If we use three processors, the instruction can be performed in a single step:
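The original instruction listings are not reproduced here, so the contrast can be sketched in Python with a hypothetical element-wise addition: a serial loop that takes three steps, and a three-worker pool that handles all three elements at once.

```python
from concurrent.futures import ThreadPoolExecutor

a = [1, 2, 3]
b = [10, 20, 30]

# Serial version: one processor performs the array operation in three steps.
serial = []
for j in range(3):
    serial.append(a[j] + b[j])

# Parallel version: three workers each handle one element, conceptually one step.
with ThreadPoolExecutor(max_workers=3) as pool:
    parallel = list(pool.map(lambda j: a[j] + b[j], range(3)))
```

Both versions produce the same result; only the number of steps (and the coordination cost) differs.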
Performing matrix multiplication using one processor, the answer to the equation on page 189
can be computed in 27 steps. By multiplying several elements of the first row of Matrix 1 by the
corresponding elements in Matrix 2, three processors could cooperate to resolve the equation in
fewer steps. With two processors it takes 18 steps to perform the calculations in parallel; with
three, it would require even fewer. Note, however, that concurrent processing does not
necessarily cut processing time in direct proportion to the number of processors available.
UNIT 13 Study of Various Existing Operating systems
Throughout this book, we have studied the various components of a generic operating system. We
looked at the various managers and how they interact, and we studied the way the managers handle
users' requests and allocate resources. Above all, we learned that all applications running on
our digital computing devices are treated as processes.
In this concluding lecture, you are expected to study various existing operating systems and how
they handle processes. It will interest you to know that in mobile operating systems such as
Android, a process can be in one of several states: foreground, visible, service, or background.
Studying these systems will help you better understand all that you have learned in this course.
There are various resources online that can help. Get online and study the following operating
systems:
1. Windows
2. Unix
3. iOS
4. Android
5. Blackberry