OS Modules Summary

This document summarizes five modules of an Operating Systems course. Module 1 covers the basics of operating systems, including functions, structures, processes, memory management, and I/O device management. Module 2 discusses multithreaded programming, process synchronization, scheduling algorithms, and thread scheduling. Module 3 covers deadlocks (characterization, prevention, detection, and recovery) and memory-management strategies such as swapping, contiguous allocation, and paging. Module 4 covers virtual memory management and file-system implementation. Module 5 covers mass storage, protection, and a case study of the Linux operating system.


DON BOSCO INSTITUTE OF TECHNOLOGY

Department of Computer Science & Engineering

Semester: VI Section: ‘A’ ‘B’ ‘C’ Academic Year: 2017-2018


Course: Operating Systems Course Code: 15CS64

MODULE 1

Introduction to Operating Systems, System structures:

 Functions of Operating Systems.


 Storage device hierarchies and structures.
 Structure and Operation of Operating Systems.
 Operating System Services.
 System Calls.
 Process Management, Memory Management, I/O device management.
 Implementation Methods.
 Micro kernels, Virtual Machines.
 Process operations and scheduling.
 Interprocess communication methods.
Module 1 deals with the basics of operating systems and their functions. The structure of
secondary storage devices is covered in detail. The operations of an operating system
include process management, memory management, and I/O device management.

As microkernels and virtual machines are core parts of operating-system design, this
module details their structure and functionality, along with process states and
operations, process scheduling, and interprocess communication.
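Module 1's process operations (creation, execution, and termination) follow the POSIX fork/exec/wait cycle. As an illustrative sketch, and not part of the syllabus itself, Python's os module exposes these system calls directly on Unix-like systems; the function name run_in_child is my own.

```python
import os

# Illustrative sketch of the fork/exec/wait process-creation cycle
# (assumes a Unix-like system; Python's os module wraps these calls).
def run_in_child(argv):
    """Fork a child, replace its image with argv via exec, wait for it."""
    pid = os.fork()
    if pid == 0:                       # child: fork() returned 0 here
        try:
            os.execvp(argv[0], argv)   # on success, this never returns
        finally:
            os._exit(127)              # exec failed: exit without cleanup
    _, status = os.waitpid(pid, 0)     # parent: block until child exits
    return os.waitstatus_to_exitcode(status)
```

For example, run_in_child(["true"]) returns 0 on a typical Linux system, since the child successfully executes the true program.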

1. http://nptel.ac.in/courses/106108101/3
2. https://www.lifewire.com/operating-systems-2625912
3. https://www.merriam-webster.com/dictionary/operating%20system
4. https://en.wikipedia.org/wiki/Operating_system
5. https://www.techopedia.com/definition/3515/operating-system-os

MODULE 2

Multithreaded Programming and Process Synchronization.


 Overview and multithreading models.
 Thread Libraries, Threading issues.
 Process Scheduling: Basic concepts, Criteria.
 Scheduling Algorithms.
 Multiple-processor scheduling, Thread scheduling.
 Process synchronization.
 The critical section problem.
 Peterson’s solution, Synchronization hardware, semaphores.
 Classical problems of synchronization, Monitors.
Multithreaded Programming: A thread is a flow of control within a process. A
multithreaded process contains several different flows of control within the same address
space. The benefits of multithreading include increased responsiveness to the user, resource
sharing within the process, economy, and scalability benefits such as more efficient use of
multiple cores.
User-level threads are threads that are visible to the programmer and are unknown to the
kernel. The operating-system kernel supports and manages kernel-level threads. In general,
user-level threads are faster to create and manage than are kernel threads, as no
intervention from the kernel is required.
Three different types of models relate user and kernel threads: The many-to-one model
maps many user threads to a single kernel thread. The one-to-one model maps each user
thread to a corresponding kernel thread. The many-to-many model multiplexes many user
threads to a smaller or equal number of kernel threads.
Most modern operating systems provide kernel support for threads; among these are
Windows 98, NT, 2000, and XP, as well as Solaris and Linux. Thread libraries provide the
application programmer with an API for creating and managing threads. Three primary
thread libraries are in common use: POSIX Pthreads, Win32 threads for Windows systems,
and Java threads. Multithreaded programs introduce many challenges for the programmer,
including the semantics of the fork() and exec() system calls. Other issues include thread
cancellation, signal handling, and thread-specific data.
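The create/join pattern shared by all three thread libraries can be sketched with Python's threading module standing in for Pthreads; parallel_sum is an illustrative name, not an API from any of the libraries above.

```python
import threading

# Sketch of the create/join pattern common to thread libraries such as
# Pthreads, using Python's threading module as a stand-in API.
def parallel_sum(chunks):
    """Sum each chunk in its own thread; threads share 'results'."""
    results = [0] * len(chunks)          # shared, one slot per thread

    def worker(i, chunk):
        results[i] = sum(chunk)          # each thread writes its own slot

    threads = [threading.Thread(target=worker, args=(i, c))
               for i, c in enumerate(chunks)]
    for t in threads:
        t.start()                        # analogous to pthread_create()
    for t in threads:
        t.join()                         # analogous to pthread_join()
    return sum(results)
```

Because each thread writes a distinct slot of the shared list, no further synchronization is needed here; overlapping writes would require a mutex, as discussed under process synchronization below.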
Process Synchronization: Given a collection of cooperating sequential processes that share
data, mutual exclusion must be provided to ensure that a critical section of code is used by
only one process or thread at a time. Typically, computer hardware provides several
operations that ensure mutual exclusion. However, such hardware-based solutions are too
complicated for most developers to use. Semaphores overcome this obstacle. Semaphores
can be used to solve various synchronization problems and can be implemented efficiently,
especially if hardware support for atomic operations is available. Various synchronization
problems (such as the bounded-buffer problem, the readers-writers problem, and the dining-
philosophers problem) are important mainly because they are examples of a large class of
concurrency-control problems. These problems are used to test nearly every newly
proposed synchronization scheme.
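As a concrete instance of the classical problems above, here is a minimal bounded-buffer sketch using counting semaphores (Python's threading.Semaphore; the class and field names are illustrative, not from any standard API):

```python
import threading
from collections import deque

# Bounded-buffer (producer-consumer) problem solved with counting
# semaphores, following the classical scheme summarized above.
class BoundedBuffer:
    def __init__(self, capacity):
        self.buf = deque()
        self.mutex = threading.Semaphore(1)         # mutual exclusion
        self.empty = threading.Semaphore(capacity)  # counts free slots
        self.full = threading.Semaphore(0)          # counts filled slots

    def put(self, item):
        self.empty.acquire()          # wait for a free slot
        with self.mutex:
            self.buf.append(item)
        self.full.release()           # signal one filled slot

    def get(self):
        self.full.acquire()           # wait for a filled slot
        with self.mutex:
            item = self.buf.popleft()
        self.empty.release()          # signal one freed slot
        return item
```

With a single producer and a single consumer, items come out in FIFO order; the empty/full semaphores block the producer when the buffer is full and the consumer when it is empty.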
The operating system must provide the means to guard against timing errors. Several
language constructs have been proposed to deal with these problems. Monitors provide the
synchronization mechanism for sharing abstract data types. A condition variable provides a
method by which a monitor procedure can block its execution until it is signaled to continue.
Operating systems also provide support for synchronization. For example, Solaris, Windows
XP, and Linux provide mechanisms such as semaphores, mutexes, spinlocks, and condition
variables to control access to shared data. The Pthreads API provides support for mutexes
and condition variables. A transaction is a program unit that must be executed atomically;
that is, either all the operations associated with it are executed to completion, or none are
performed. To ensure atomicity despite system failure, we can use a write-ahead log. All
updates are recorded on the log, which is kept in stable storage.
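The monitor-with-condition-variable pattern described above can be approximated in Python, which lacks a native monitor construct, with a lock plus threading.Condition; the Gate class here is an illustrative example, not a standard API:

```python
import threading

# Approximation of a monitor with one condition variable: a gate that
# threads wait on until it is opened. The Condition's internal lock
# plays the role of the monitor's mutual-exclusion lock.
class Gate:
    def __init__(self):
        self.cond = threading.Condition()
        self.is_open = False

    def wait_until_open(self):
        with self.cond:                 # enter the "monitor"
            while not self.is_open:     # re-check after every wakeup
                self.cond.wait()        # atomically release lock, block

    def open_gate(self):
        with self.cond:
            self.is_open = True
            self.cond.notify_all()      # wake every waiting thread
```

The while loop around cond.wait() is the standard monitor discipline: a woken thread must re-check its condition, since other threads may run between the signal and the wakeup.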
1. http://nptel.ac.in/courses/106108101/3
2. https://www.lifewire.com/operating-systems-2625912
3. https://www.merriam-webster.com/dictionary/operating%20system
4. https://en.wikipedia.org/wiki/Operating_system
5. https://www.techopedia.com/definition/3515/operating-system-os

MODULE 3

Deadlocks and Memory Management


 Deadlocks; System model.
 Deadlock characterization.
 Methods for handling deadlocks.
 Deadlock prevention and deadlock avoidance.
 Deadlock detection and recovery from deadlock.
 Memory management strategies.
 Background; Swapping.
 Contiguous memory allocation
 Paging.
 Structure of page table; Segmentation
Deadlocks: A deadlocked state occurs when two or more processes are waiting indefinitely
for an event that can be caused only by one of the waiting processes. There are three
principal methods for dealing with deadlocks:
 Use some protocol to prevent or avoid deadlocks, ensuring that the system will never enter
a deadlocked state.
 Allow the system to enter a deadlocked state, detect it, and then recover.
 Ignore the problem altogether and pretend that deadlocks never occur in the system.
The third solution is the one used by most operating systems, including UNIX and Windows.
A deadlock can occur only if four necessary conditions hold simultaneously in the system:
mutual exclusion, hold and wait, no preemption, and circular wait. To prevent deadlocks, we
can ensure that at least one of the necessary conditions never holds.
A method for avoiding deadlocks, rather than preventing them, requires that the
operating system have a priori information about how each process will utilize system
resources. The banker's algorithm, for example, requires a priori information about the
maximum number of each resource class that each process may request. Using this
information, we can define a deadlock avoidance algorithm.
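The safety check at the heart of the banker's algorithm can be sketched as follows (matrix names follow the usual textbook presentation; this is an illustration, not a production implementation):

```python
# Banker's algorithm safety check: the system is in a safe state if
# every process can finish in some order, each using the currently
# available resources plus those released by earlier finishers.
def is_safe(available, max_demand, allocation):
    n = len(allocation)                  # number of processes
    need = [[m - a for m, a in zip(max_demand[i], allocation[i])]
            for i in range(n)]
    work = list(available)               # resources free right now
    finished = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(nd <= w
                                       for nd, w in zip(need[i], work)):
                # process i can run to completion, then release everything
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)
```

A process may request resources only while the resulting state would remain safe; a request that would make is_safe return False is delayed.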
If a system does not employ a protocol to ensure that deadlocks will never occur,
then a detection-and-recovery scheme may be employed. A deadlock detection algorithm
must be invoked to determine whether a deadlock has occurred. If a deadlock is detected,
the system must recover either by terminating some of the deadlocked processes or by
preempting resources from some of the deadlocked processes.
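For resource classes with a single instance, deadlock detection reduces to finding a cycle in the wait-for graph, which a depth-first search can do; a sketch (the function name and graph encoding are illustrative):

```python
# Single-instance deadlock detection: build a wait-for graph mapping
# each process to the set of processes it waits on, then DFS for a cycle.
def has_deadlock(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)
```

A cycle in the wait-for graph is both necessary and sufficient for deadlock in the single-instance case; with multiple instances per class, the more general detection algorithm (a variant of the banker's safety check) is needed.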
Where preemption is used to deal with deadlocks, three issues must be addressed:
selecting a victim, rollback, and starvation. In a system that selects victims for rollback
primarily on the basis of cost factors, starvation may occur, with the same process
repeatedly picked as a victim and never completing its designated task.
Researchers have argued that none of the basic approaches alone is appropriate for
the entire spectrum of resource-allocation problems in operating systems. The basic
approaches can be combined, however, allowing us to select an optimal approach for each
class of resources in a system.
Memory Management: Memory-management algorithms for multi-programmed operating
systems range from the simple single-user system approach to paged segmentation. The
most important determinant of the method used in a particular system is the hardware
provided. Every memory address generated by the CPU must be checked for legality and
possibly mapped to a physical address. The checking cannot be implemented (efficiently) in
software. Hence, we are constrained by the hardware available.
The various memory-management algorithms (contiguous allocation, paging,
segmentation, and combinations of paging and segmentation) differ in many aspects. In
comparing different memory-management strategies, we use the following considerations:

Hardware support: A simple base register or a base-limit register pair is sufficient for the
single- and multiple-partition schemes, whereas paging and segmentation need mapping
tables to define the address map.
Performance: As the memory-management algorithm becomes more complex, the time
required to map a logical address to a physical address increases. For the simple systems, we
need only compare or add to the logical address, and these operations are fast. Paging and
segmentation can be as fast if the mapping table is implemented in fast registers. If the table
is in memory, however, user memory accesses can be degraded substantially. A TLB can reduce
the performance degradation to an acceptable level.
Fragmentation: A multi-programmed system will generally perform more efficiently if it has a
higher level of multiprogramming. For a given set of processes, we can increase the
multiprogramming level only by packing more processes into memory. To accomplish this task,
we must reduce memory waste, or fragmentation. Systems with fixed-sized allocation units,
such as the single-partition scheme and paging, suffer from internal fragmentation. Systems
with variable-sized allocation units, such as the multiple-partition scheme and segmentation,
suffer from external fragmentation.
Relocation: One solution to the external-fragmentation problem is compaction. Compaction
involves shifting a program in memory in such a way that the program does not notice the
change. This consideration requires that logical addresses be relocated dynamically, at
execution time. If addresses are relocated only at load time, we cannot compact storage.
Swapping: Swapping can be added to any algorithm, at intervals determined by the operating
system, usually dictated by CPU-scheduling policies;processes are copied from main memory
to a backing store and later are copied back to main memory. This scheme allows more
processes to be run than can be fit into memory at one time.
Sharing: Another means of increasing the multiprogramming level is to share code and data
among different users. Sharing generally requires that either paging or segmentation be used
to provide small packets of information (pages or segments) that can be shared. Sharing is a
means of running many processes with a limited amount of memory, but shared programs and
data must be designed carefully.
Protection: If paging or segmentation is provided, different sections of a user program can be
declared execute-only, read-only, or read-write. This restriction is necessary with shared code
or data and is generally useful in any case to provide simple run-time checks for common
programming errors.
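The hardware-support and fragmentation considerations above can be made concrete with small sketches: base-limit translation (one compare, one add), page-table translation, and the internal-fragmentation arithmetic for fixed-size pages. The 4096-byte page size is an assumption for illustration only.

```python
PAGE_SIZE = 4096  # assumed page size, for illustration only

def base_limit_translate(logical, base, limit):
    """Single-partition scheme: one compare and one add."""
    if logical >= limit:
        raise MemoryError("address beyond limit register")
    return base + logical

def page_translate(logical, page_table):
    """Paging: split the address into (page, offset), map the page."""
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

def internal_fragmentation(process_bytes, page_size=PAGE_SIZE):
    """Unused tail of the last fixed-size page allocated to a process."""
    pages = -(-process_bytes // page_size)   # ceiling division
    return pages * page_size - process_bytes
```

For example, a 10,000-byte process needs three 4096-byte pages and wastes 2288 bytes internally; a process whose size is an exact multiple of the page size wastes none.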
1. http://nptel.ac.in/courses/106108101/3
2. https://www.lifewire.com/operating-systems-2625912
3. https://www.merriam-webster.com/dictionary/operating%20system
4. https://en.wikipedia.org/wiki/Operating_system
5. https://www.techopedia.com/definition/3515/operating-system-os

MODULE 4

Virtual Memory Management

 Demand paging, Copy-on-write, Page replacement.


 Allocation of frames. Thrashing.
 Implementation of File System: File concept.
 Access methods.
 File system mounting, File sharing, protection.
 Implementing File system, File system structure.
 File system implementation.
 Directory implementation.
 Allocation methods.
 Free space management.
Module 4 deals with the basics of virtual memory, its design characteristics, and various
page-replacement algorithms. Virtual memory is commonly implemented by demand paging,
which can reduce the number of physical frames a process needs. In addition to a
page-replacement algorithm, a frame-allocation policy is needed. Allocation can be fixed,
suggesting local page replacement, or dynamic, suggesting global replacement. Most operating systems
provide features for memory mapping files, thus allowing file I/O to be treated as routine
memory access. The Win32 API implements shared memory through memory mapping of files.
The proper design of a paging system also requires that we consider prepaging, page size,
TLB reach, inverted page tables, program structure, I/O interlock and page locking, and
other issues. Disk-scheduling algorithms can improve the effective bandwidth, the
average response time, and the variance in response time.
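The page-replacement discussion above can be made concrete by counting faults for two policies over a reference string (a Python sketch; frame handling is simplified to a list of resident pages):

```python
from collections import OrderedDict

# Count page faults for FIFO and LRU replacement over a reference
# string, given a fixed number of physical frames.
def fifo_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)               # evict the oldest arrival
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0       # insertion order = recency order
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict least recently used
            mem[p] = True
    return faults
```

On the textbook reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with three frames, FIFO incurs 15 faults and LRU 12.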

1. http://nptel.ac.in/courses/106108101/3
2. http://csapp.cs.cmu.edu/2e/ch9-preview.pdf
MODULE 5

MASS STORAGE

 Secondary storage structure, Protection, Disk structure.


 Disk attachment, Disk scheduling.
 Disk management, Swap space management.
 Goals of protection.
 Principles of protection.
 Domain of protection, Access matrix,
 Implementation of the Access Matrix.
 Access control, Revocation of access rights,
 Capability-based systems. Case Study: The Linux Operating System
 Design principles, Kernel modules, Process management, Scheduling, Memory management, Inter-process communication
Module 5 covers the mass-storage structure. In this module students learn how an
operating system may directly support various record types or may leave that
support to the application program. The major task for the operating system is to map the
logical file concept onto physical storage devices such as magnetic disk or tape. Files may have
multiple readers, multiple writers, or limits on sharing. Distributed file systems allow client
hosts to mount volumes or directories from servers. Remote file systems present challenges in
reliability, performance, and security. Since files are the main information-storage mechanism
in most computer systems, file protection is needed. Access to files can be controlled
separately for each type of access: read, write, execute, append, delete, list directory, and so
on. File protection can be provided by access lists, passwords, or other techniques. Linux is a
modern, free operating system based on UNIX standards. Linux is a multiuser system,
providing protection between processes and running multiple processes according to a time-
sharing scheduler. Internally, Linux uses an abstraction layer to manage multiple file systems.
Device-oriented, networked, and virtual file systems are supported.
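The disk-scheduling topic listed above compares policies by total head movement in cylinders; a sketch of FCFS versus shortest-seek-time-first (SSTF), illustrated with the common textbook request queue in the note below:

```python
# Compare two disk-scheduling policies by total head movement
# (measured in cylinders traversed).
def fcfs_movement(queue, head):
    total = 0
    for cyl in queue:                 # serve requests in arrival order
        total += abs(cyl - head)
        head = cyl
    return total

def sstf_movement(queue, head):
    pending, total = list(queue), 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)  # always serve the closest request
        head = nearest
        pending.remove(nearest)
    return total
```

For the queue 98, 183, 37, 122, 14, 124, 65, 67 with the head at cylinder 53, FCFS moves 640 cylinders while SSTF moves only 236; SSTF's weakness is that requests far from the head can starve.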
1. https://www.studytonight.com/operating-system/secondary-storage
2. https://www.geeksforgeeks.org/disk-scheduling-algorithms/

HOD-CSE PRINCIPAL
