
Operating System: Important Questions for Semester
Components of Operating System:
• Process Management
• I/O Device Management
• File Management
• Main Memory Management
• Network Management

Types of Operating System:
• Multiprogramming Operating System
• Multiprocessing Operating System
• Distributed Operating System
• Cluster Operating System
• Embedded Operating System
• Real Time Operating System
• Batch Operating System

Execute Process Commands:
• Kill: Terminate a process forcefully, ending its execution abruptly.
• Sleep: Pause process execution for a specified time interval.
• Wait: Suspend process execution until a specific event or condition occurs.
• Exit: Terminate a process voluntarily, signaling its completion.
• Ps: Command to list currently running processes in an operating system.
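Below is a minimal sketch in C of how these process commands map onto the POSIX process API: fork() creates a child, the child pauses with sleep() and terminates voluntarily with exit(), and the parent suspends itself with wait(); a forced kill() call is shown only in a comment. The two-second delay is an arbitrary illustrative value.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                /* create a child process */
    if (pid == 0) {
        sleep(2);                      /* Sleep: pause for a specified interval */
        exit(0);                       /* Exit: terminate voluntarily */
    } else if (pid > 0) {
        /* Kill could be forced with: kill(pid, SIGTERM); (needs <signal.h>) */
        int status;
        wait(&status);                 /* Wait: suspend until the child terminates */
        printf("child %d finished with status %d\n", (int)pid, WEXITSTATUS(status));
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}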
Define the Following:

Multithreading: A technique by which a single set of code can be shared by several threads of execution, each at a different stage of execution.

Shared Memory: It is a faster inter-process communication mechanism. It allows cooperating processes to access the same pieces of data concurrently, which speeds up computation.

Process: A process is a program in execution. Process is also called as job, task or unit of work.

Context Switching: When the CPU switches from one process to another, it is called Context Switching. The state (context) of the currently running process is saved in its Process Control Block and the saved state of the next process is loaded, so that the first process can later resume from where it left off.

Throughput: The number of tasks executed per unit time.


Operations Of Operating System
Program Management: The operating system handles many kinds of activities, from user programs to system programs such as the printer spooler, name server, and file server.
• It handles program’s execution
• Loads a program into memory
• Executes the program, etc.

Resource Management: When there are multiple users or multiple jobs running at the same time, resources must be
allocated to each of them. Many different types of resources like file storages, CPU cycles, etc. are managed by the
Operating System.
• The OS manages all kinds of resources using schedulers.
• CPU scheduling algorithms are used for better utilization of CPU.

Security & Protection: We use anti-virus for security so unauthorised users cannot access or attack the computer
system.
• The OS ensures that all access to system resources is controlled.
• The OS ensures that eternal i/O devices are protected from invalid access attempts.
Views Of Operating System:
• User View: This view focuses on the interaction and experience of the end-users. It includes
elements like the user interface (CLI or GUI), ease of use, and how users access and utilize
applications.

• System View: The system view is concerned with the internal workings of the operating system. It
encompasses aspects such as process management, memory allocation, file system management, and
resource allocation.
Real Time Operating System: In a Real Time Operating System, the task to be performed must be completed within a given time limit. It has fixed time constraints, and the response must be fast.

Types of Real Time Operating System:


• Hard Real Time System: It guarantees that a critical task will be completed within a fixed range of time (e.g., missile and military systems).
• Soft Real Time System: It provides some relaxation in the time limit (e.g., live streaming).

Applications:
• Flight Control System
• Simulations
• Industrial control
• Military applications
List Any Four File Operations.

• Creating a file
• Writing a file
• Reading a file
• Deleting a file
• Renaming an existing file.
• Creating a copy of a file, or copying a file to another I/O device such as a printer or display.
Command Line vs GUI Based OS: DOS, UNIX, Windows, LINUX

DOS (Command Line):
• Character-based interface.
• Commands executed through text input.
• Limited multitasking.
• Common commands: DIR, CD, COPY.

GUI-based OS (e.g., Windows):
• Graphical interface.
• Point-and-click interaction.
• Multitasking with windowed apps.
• Desktop, icons, and file explorer.

Unix (Command Line):
• Text-based interface.
• Powerful shell (e.g., Bash).
• Supports scripting.
• Extensive command set.

Linux (Command Line & GUI):
• Offers both CLI and GUI.
• Diverse distributions (e.g., Ubuntu, CentOS).
• Robust command-line tools.
• Variety of desktop environments (e.g., GNOME, KDE).
Scheduling Queue: The operating system schedules programs one after another in a queue, so that the next scheduled task can be performed as soon as the previous task is completed.

Priority Scheduling: Priority scheduling is the scheduling algorithm in which processes are scheduled according to the priority assigned to each of them.

Round Robin Scheduling: Round-robin (RR) is one of the algorithms employed by process and network schedulers in computing, in which each process is given a fixed time slice (quantum) in turn.

Avg. Waiting Time = Total Waiting Time of All Processes / Total Number of Processes
Multiprogramming Vs Multiprocessing / Multitasking

Multiprogramming:
• It allows multiple programs to utilize the CPU simultaneously.
• Based on the context-switching mechanism.
• In a multiuser environment, the CPU switches between programs/processes quickly.
• It takes maximum time to execute a process.

Multiprocessing / Multitasking:
• A supplement to the multiprogramming system that also allows for user interaction.
• Based on the time-sharing mechanism.
• In a single-user environment, the CPU switches between the processes of various programs.
• It takes minimum time to execute a process.
Real Time System Vs Time Sharing System

Real Time System:
• A job has to be done within a fixed deadline.
• It has well-defined and fixed time constraints.
• Response time is important.
• It focuses on accomplishing a computational task before its specified deadline.

Time Sharing System:
• A fixed time slice is given to each process, and all the processes are arranged in a queue.
• It requires more complicated CPU scheduling algorithms.
• Response time is not important.
• The emphasis is on providing a quick response to a request.
Services Provided By Operating System.

• Program Execution: The purpose of computer system is to allow the users to execute programs in an efficient
manner.

• User Interface: All operating systems have a user interface that allows users to communicate with the system.
Three types of user interfaces are available: a. Command line interface (CLI) b. Batch interface c. Graphical user
interface (GUI)

• I/O Operations: When a program is running, it may require input/output resources such as a file or devices such
as printer. So the operating system provides a service to do I/O.

• Error Detection: The operating system detects hardware and other errors, such as a memory error, a power failure, a connection failure on a network, or lack of paper in the printer.

• Accounting: Operating system keeps track of usages of various computer resources allocated to users.

• Resource Allocation: Operating system manages resource allocation to the processes. These resources are CPU,
main memory, file storage and I/O devices.
Components/Functions Of Operating System

Process Management: A process is a program in execution. The operating system assigns the processor to the different tasks that must be performed by the computer system. The OS is responsible for creating and deleting both user and system processes, suspending and resuming processes, etc.

Main Memory Management: An operating system deals with the allocation of main memory and other
storage areas to the system programs as well as user programs and data. The OS is responsible for Keeping
track of which parts of memory are currently being used and by whom, Allocating and De-Allocating
memory space as needed.

File Management: OS deals with the management and organisation of various files in the system. The OS
is responsible for creating and deleting files, directories, Backing up files on storage media.

I/O Device Management: The operating system performs the task of allocation and de-allocation of the devices. The I/O subsystem consists of drivers for specific hardware devices, a general device-driver interface, etc.
Write Two Use of the following OS tools
1) Device Management: Allows interaction with hardware devices through device drivers. Keeps track of every device's data and location.

2) Task scheduler: Assign processor to task ready for execution. Executing predefined actions
automatically whenever a certain set of condition is met.

3) Performance monitor: Monitor various activities on a computer such as CPU or memory usage.
Used to examine how programs running on their computer affect computer’s performance.

4) User Management: User management includes everything from creating a user to deleting a user on
your system. User management can be done in three ways on a Linux system.

5) Security Policy: Access Control: Security policies define who can access specific resources, enhancing
data protection by restricting unauthorized entry. Compliance: They ensure adherence to regulatory
requirements and industry standards, helping organizations maintain legal and security standards.
Write any four systems call related to file management.

System calls related to file management are:


1. create new file
2. delete existing file
3. open file
4. close file
5. create directories
6. delete directories
7. read, write, reposition in file
8. get file attributes
9. set file attributes
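As an illustration, here is a minimal sketch in C using the POSIX file-management system calls named above (create/open, write, close, read, reposition, delete); the file name example.txt is only an example.

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    /* create a new file and open it for writing */
    int fd = open("example.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);                 /* write to the file */
    close(fd);                               /* close the file */

    fd = open("example.txt", O_RDONLY);      /* open an existing file */
    char buf[16];
    ssize_t n = read(fd, buf, sizeof buf);   /* read from the file */
    lseek(fd, 0, SEEK_SET);                  /* reposition within the file */
    close(fd);

    unlink("example.txt");                   /* delete the file */
    printf("read %zd bytes\n", n);
    return 0;
}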
Context switching refers to the process of changing focus or transitioning from one task or activity to another. In
computing, it often refers to the operating system's ability to switch between multiple tasks or processes running
on a computer's CPU. Context switching involves saving the current state of a task, loading the saved state of
another task, and resuming its execution. It can introduce overhead and impact system performance if done
frequently. In a broader sense, context switching can also apply to human activities when switching between
different tasks or topics.

System calls are an essential part of a computer's operating system that allow applications and user-level processes to request services and functionality from the kernel (the core of the operating system). These calls serve as a bridge between user-level software and the hardware resources of a computer.

Medium-Term Scheduler: Some operating systems have a medium-term scheduler, which can swap processes out
of memory and into secondary storage (e.g., hard disk) to free up memory for other processes. This helps in
managing the available physical memory more efficiently.
Threads And Its Types

A thread, sometimes called a Lightweight Process (LWP), is a basic unit of CPU utilization. It comprises a thread ID, a program counter, a register set and a stack.

There are Two Types of Threads namely:

A) User Level Thread: Threads implemented at the user level are known as User Level Threads. In user-level threading, thread management is done by the application, while the kernel is not aware of the existence of threads. Advantages of user-level threads: 1) they can run on any operating system, 2) the user thread library is easily portable.

B) Kernel Level Thread: Threads implemented at the kernel level are known as Kernel Level Threads. Kernel threads are supported directly by the operating system. Advantages of kernel-level threads: 1) kernel routines themselves can be multithreaded, 2) if one thread in a process is blocked, the kernel can schedule another thread of the same process.
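A minimal sketch using the POSIX threads (pthreads) library, showing two threads within one process, each with its own stack and program counter; the worker function and its messages are illustrative (compile with -pthread).

#include <stdio.h>
#include <pthread.h>

/* each thread runs this function with its own stack and program counter */
static void *worker(void *arg) {
    printf("hello from thread %s\n", (const char *)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "A");   /* create two threads */
    pthread_create(&t2, NULL, worker, "B");
    pthread_join(t1, NULL);                   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}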
Multithreading Models

One to One:

Each user-level thread corresponds to a kernel-level thread.


Provides true parallelism, but may create overhead for thread management.

Many to One:

Many user-level threads share a single kernel-level thread.


Efficient, but lacks true parallelism due to kernel-level bottleneck.

Many to Many:

Many user-level threads are multiplexed onto multiple kernel-level threads.


Balances efficiency and parallelism, allowing flexibility in thread management.
Schedulers And Its Types

Schedulers are special system software which handles process scheduling in various ways. Their main task is to select
the jobs to be submitted into the system and to decide which process to run.

There are Three Types of Schedulers:

1) Short Term Scheduler: The scheduler which selects, from the ready queue, the jobs or processes that are ready to execute and allocates the CPU to one of them is called the Short Term Scheduler.

2) Medium Term Scheduler: The scheduler which removes a process from main memory and reloads it later when required is called the Medium Term Scheduler.

3) Long Term Scheduler: The scheduler which picks up jobs from the job pool and loads them into main memory for execution is called the Long Term Scheduler.
Process State Diagram

New State: A process that has just been created but has not yet been admitted is said to be in the New State.

Ready State: When a process is ready to execute but is waiting for the CPU, it is said to be in the Ready State.

Running State: A process that is currently being executed is said to be in the Running State.

Waiting or Blocked State: A process that cannot execute until some event occurs or an I/O operation completes is said to be in the Waiting or Blocked State.

Terminated State: After the process completes its execution, it is terminated; this is called the Terminated State of the process.
Process Control Block

PCB stands for Process Control Block. A Process Control Block is a data structure that contains information about the process it represents.

Each process is represented in the operating system by a Process Control Block (PCB), also called a Task Control Block (TCB).

A PCB stores descriptive information about a process, such as its state, program counter, memory-management information, allocated resources, scheduling information, etc., that is required to control and manage that particular process.

The basic purpose of the PCB is to indicate the progress a process has made so far.

Fields shown in the PCB diagram: Pointer, Process State, Process Number, Program Counter, CPU Registers, Memory Allocation, Event Information, List of Open Files.
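A minimal sketch of what a PCB might look like as a C structure, mirroring the fields listed above; the field types and sizes are illustrative assumptions, not the layout used by any real kernel.

#include <stddef.h>
#include <stdint.h>

/* process states from the state diagram */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* illustrative Process Control Block; real kernels store far more */
struct pcb {
    struct pcb     *next;             /* pointer, e.g. to the next PCB in a queue */
    enum proc_state state;            /* process state */
    int             pid;              /* process number */
    uintptr_t       program_counter;  /* saved program counter */
    uintptr_t       registers[16];    /* saved CPU registers */
    void           *mem_base;         /* memory allocation: base address */
    size_t          mem_limit;        /* memory allocation: size limit */
    int             event_id;         /* event information (what it is waiting on) */
    int             open_files[16];   /* list of open file descriptors */
};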


Explain types of Inter-process Communication
Inter-process Communication (IPC) is the mechanism by which cooperating processes exchange data and information with each other.

Types of Inter-Process Communication:

A) Shared Memory B) Message Passing

Shared Memory: Two processes exchange data or information through a shared region of memory. Both processes can read and write data in this region.

Message Passing: In the message-passing model, data or information is exchanged in the form of messages that pass through the kernel.

(Diagram: in the shared-memory model, Process A and Process B both access a shared memory region; in the message-passing model, messages between Process A and Process B travel through the kernel.)
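A minimal sketch of shared-memory IPC between a parent and a child process, using an anonymous mmap region that both share after fork(); the message string is illustrative. (Message passing could be sketched similarly with pipe() or a message queue.)

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void) {
    /* create a small memory region shared by parent and child */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {
        strcpy(shared, "hello from the child");   /* child writes into the region */
        return 0;
    }
    wait(NULL);                                   /* wait until the child is done */
    printf("parent read: %s\n", shared);          /* parent reads the same memory */
    munmap(shared, 4096);
    return 0;
}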
Long Term Scheduler Vs Short Term Scheduler

Long Term Scheduler:
• It is the job scheduler.
• Accesses the job pool and ready queue.
• Speed is less than the short term scheduler.
• It controls the degree of multiprogramming.

Short Term Scheduler:
• It is the CPU scheduler.
• Accesses the ready queue and CPU.
• Speed is fast.
• It provides lesser control over the degree of multiprogramming.
User Level Threads Vs Kernel Level Threads

User Level Threads:
• Faster to create and manage.
• Implemented by a thread library at the user level.
• Can run on any operating system.
• A multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads:
• Slower to create and manage.
• Supported directly by the operating system.
• Specific to the operating system.
• Kernel routines themselves can be multithreaded.
Types of Scheduling

There are two main types of Schedules A) Pre-Emptive B) Non- Pre-Emptive

A) Pre-Emptive Scheduling: It allows a higher priority process to replace a currently running process, even if its time
slot is not completed or it has not requested for any I/O. If a higher priority process enters the system, the currently
running process is stopped and the CPU transfers the control to the higher priority process.

 Advantage Of Pre-Emptive Scheduling: They allow real multiprogramming.

 Disadvantage of Pre-Emptive Scheduling: They are complex and they lead the system to race condition.

B) Non Pre-Emptive Scheduling: In non-pre-emptive scheduling, once the CPU is assigned to a process, the processor is not released until that process completes. A running process keeps control of the CPU and its other allocated resources until its normal termination.

 Advantage of Non Pre-Emptive Scheduling: They are simple and they cannot lead the system to race condition.

 Disadvantage of Non Pre-Emptive Scheduling: They do not allow real multiprogramming.


Scheduling Objectives

A system user / designer must consider a variety of factors when developing a scheduling discipline, such as the type
of system and the user’s needs. Following are the scheduling Objectives that the user must expect.

1) Fairness: It is defined as the degree to which each process gets an equal chance to execute.

2) Maximize the Resource Utilization: The scheduling techniques should keep the resources of the system busy.

3) Response Time: A scheduler should minimize the response time for interactive user.

4) Efficiency: The scheduler must make sure that the system is utilized every second and not kept idle.

5) Turnaround: A scheduler should minimize the time batch users must wait for an output.

6) Maximize Throughput: A scheduler should maximize the number of jobs processed per unit time.

7) Enforce Priorities: If the system assigns priorities to processes, the scheduling mechanism should favour the higher-priority processes first.
CPU I/O Burst Cycles

• CPU Burst Cycle: It is a time period when the process is busy with the CPU.

• I/O Burst Cycle: It is a time period when the process is busy working with an I/O resource.

• CPU scheduling is greatly affected by how a process behaves during its Execution.
Almost all the processes continue to switch between CPU (for processing) and I/O
devices (for performing I/O) during their execution

• CPU I/O burst cycles refer to the alternating phases in a computer's operation. The
CPU processes data (CPU burst), then waits for input/output tasks to finish (I/O
burst), before resuming processing. This pattern repeats to ensure efficient data
handling and smooth functioning of the system.
Explain Scheduling Criteria.

1) CPU Utilization: The main objective is to keep CPU as busy as possible. CPU utilization can range from 0 to 100
percent.

2) Throughput: It is the number of processes that are completed per unit time. It is a measure of work done in the
system. Throughput depends on the execution time required for any process.

3) Turnaround Time: The time interval from the time of submission of a process to the time of completion of
that process is called as Turnaround Time.

4) Response Time: The time period from the submission of a request until the first response is produced is called
as response time.

5) Waiting time: It is the sum of time periods spent in the ready queue by a process.

6) Balanced Utilization: Its main aim is to get more work done by the system.
Scheduling Algorithms

CPU Scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the
CPU. The algorithm used by the scheduler to carry out the selection of a process for execution is known as scheduling
algorithm.

Various Types of Scheduling Algorithms are:

A) First Come First Serve Scheduling Algorithm

B) Shortest Job First Scheduling Algorithm

C) Shortest Remaining Time Next Scheduling Algorithm

D) Priority Scheduling Algorithm

E) Round Robin Scheduling Algorithm


First Come First Served Scheduling Algorithm
FCFS stands for First Come First Served. It is the simplest type of algorithm, in which the process that requests the
CPU first is allocated the CPU first.

 Advantage Of FCFS:
• FCFS are well suited for Batch Systems where the longer time periods for each process are often acceptable.
• Ensures fairness by serving processes in the order they arrive.

 Disadvantage of FCFS:
• Average Waiting Time is very large
• Not suitable for time-sensitive or real-time systems.

Average Waiting Time = Total Waiting Time / Number of Processes

Average Turnaround Time = Total Turnaround Time / Number of Processes

(Diagram: Concept of FCFS: jobs 4, 3, 2, 1 wait in a queue for the CPU and complete in the order they arrived.)
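A minimal sketch in C that computes the average waiting and turnaround times under FCFS for a few hypothetical burst times, assuming all processes arrive at time 0.

#include <stdio.h>

int main(void) {
    /* hypothetical CPU burst times, listed in arrival order (all arrive at time 0) */
    int burst[] = {24, 3, 3};
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;             /* waiting time = time spent before starting */
        total_tat  += wait + burst[i];  /* turnaround time = waiting + burst */
        wait       += burst[i];         /* the next process starts after this one finishes */
    }
    printf("Average waiting time   : %.2f\n", (double)total_wait / n);
    printf("Average turnaround time: %.2f\n", (double)total_tat / n);
    return 0;
}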
Shortest Job First Scheduling Algorithm

SJF stands for Shortest Job First, it is also known as Shortest Process Next (SPN) or Shortest Request Next (SRN).
This algorithm schedules the processes according to the length of the CPU burst they require.

 Advantage Of SJF:
• Minimizes Average Waiting Time
• Reduces the Average Turnaround Time.
• Provides optimal efficiency for processes with short burst times.

 Disadvantage of SJF:
• It may cause long waiting times for longer processes
• It is very difficult to know the length of the next CPU burst

Average Waiting Time = Total Waiting Time / Number of Processes

Average Turnaround Time = Total Turnaround Time / Number of Processes
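A minimal sketch of non-preemptive SJF in C, assuming all processes arrive at time 0: the hypothetical burst times are sorted in ascending order and the averages are then computed as in FCFS.

#include <stdio.h>
#include <stdlib.h>

/* comparison callback for qsort: ascending burst time */
static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {6, 8, 7, 3};                  /* hypothetical burst times, all arrive at 0 */
    int n = sizeof burst / sizeof burst[0];

    qsort(burst, n, sizeof burst[0], cmp);       /* shortest job runs first */

    int wait = 0, total_wait = 0, total_tat = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;
        total_tat  += wait + burst[i];
        wait       += burst[i];
    }
    printf("Average waiting time   : %.2f\n", (double)total_wait / n);
    printf("Average turnaround time: %.2f\n", (double)total_tat / n);
    return 0;
}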
Shortest Remaining Time Next Scheduling Algorithm

SRTN stands for Shortest Remaining Time Next. It is also known as Shortest Time To Go. It is a scheduling
discipline in which the next scheduling entity, job or a process is selected on the basis of the shortest remaining
execution time.

 Advantage Of SRTN:
• Minimizes Average Waiting Time.
• Reduces Turnaround Time

 Disadvantage of SRTN:
• Can lead to starvation for processes with longer burst times.
• Requires frequent preemption, causing high overhead.

Average Waiting Time = Total Waiting Time / Number of Processes

Average Turnaround Time = Total Turnaround Time / Number of Processes
Priority Scheduling Algorithm
In priority scheduling algorithm, a priority is associated with each process and the scheduler always picks up the highest
priority first for execution. Equal priority processes are scheduled in FCFS Order.

There are Two Types of Priority Scheduling Algorithm

• Internal Priority: They are based on time, memory requirements, number of open files, etc.
• External Priority: They are human created examples seniority, influence of any person, etc.

 Advantage of Priority:
• It is simple in use
• Prioritizes important tasks to be completed first

 Disadvantage of Priority:
• Possibility of starvation for low priority tasks.
• High priority tasks might monopolize the CPU, leading to delayed execution of lower priority tasks.

Average Waiting Time = Total Waiting Time / Number of Processes

Average Turnaround Time = Total Turnaround Time / Number of Processes
Round Robin Scheduling Algorithm

The Round Robin (RR) scheduling algorithm is designed especially for Timesharing systems. RR is the pre-emptive
Version of FCFS. It is similar to FCFS but pre-emption is added to switch between processes.

 Advantage of RR:
• Ensures fairness by providing equal opportunities for all processes to execute.
• Reduces response time as each process gets a small time slice, ensuring quick execution.

 Disadvantage of RR:
• Care must be taken in choosing quantum value.
• Throughput in RR is low if time quantum is too small

Average Waiting Time = Total Waiting Time / Number of Processes

Average Turnaround Time = Total Turnaround Time / Number of Processes
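A minimal sketch of Round Robin in C with a hypothetical time quantum of 4, assuming all processes arrive at time 0; waiting time is computed as turnaround time minus burst time.

#include <stdio.h>

int main(void) {
    int burst[]     = {24, 3, 3};          /* hypothetical burst times, all arrive at 0 */
    int remaining[] = {24, 3, 3};          /* remaining time for each process */
    int n = sizeof burst / sizeof burst[0];
    int quantum = 4;                       /* assumed time quantum */
    int done = 0, time = 0, total_wait = 0, total_tat = 0;

    while (done < n) {                     /* cycle through the ready queue */
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time         += slice;         /* run this process for one time slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {       /* the process completes at 'time' */
                total_tat  += time;                /* turnaround = completion - arrival (0) */
                total_wait += time - burst[i];     /* waiting = turnaround - burst */
                done++;
            }
        }
    }
    printf("Average waiting time   : %.2f\n", (double)total_wait / n);
    printf("Average turnaround time: %.2f\n", (double)total_tat / n);
    return 0;
}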
Deadlock

Deadlock is a situation in a resource-allocation system in which two or more processes are in a simultaneous wait state, each one waiting for one of the others to release a resource before it can proceed. It can also be defined as the permanent blocking of a set of processes that either compete for system resources or communicate with each other.

Deadlock Handing
A deadlock in operating system can be handled in following four different ways

1. Adopt methods for avoiding the deadlock.


2. Prevent the deadlock from occurring (use protocol)
3. Ignore the deadlock.
4. Allow the deadlock to occur, detect it and recover from it.

To prevent the Deadlock, the System can use either deadlock prevention or deadlock avoidance techniques
Conditions for Deadlock Preventions

By ensuring that at least one of below conditions cannot hold, we can prevent the occurrence of a Deadlock

1) Mutual Exclusion: The mutual exclusion condition must hold for non shareable resources, Shareable resources
do not require mutually exclusive access thus cannot be involved in a deadlock.

2) Hold And Wait: There are two possible ways to handle this situation:
The first protocol requires each process to request all the resources it needs before it begins execution.
The second protocol requires that, before a process requests any additional resources, it must release all the resources currently allocated to it.

3) No pre-emption: If a process that is holding some resource requests another resource that cannot be
immediately allocated to it then all resources currently being held are pre-empted that is these resources are
implicitly released.

4) Circular Wait: There may exist a set (P0, P1, ..., Pn) of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and so on, with Pn waiting for a resource held by P0. Thus there must be a circular chain of two or more processes, each of them waiting for a resource held by the next member of the chain.
Deadlock Preventions
Following are the ways to be considered in order to prevent deadlock

• Eliminating mutual exclusion condition


• Eliminating hold and wait condition:
• Eliminating no pre-emption condition
• Eliminating circular wait condition
Deadlock Avoidance
• Safe State: A state is safe if the system can allocate all resources requested by all processes without entering a
deadlock state.

• Banker's Algorithm: It ensures the system can avoid deadlock by dynamically analyzing resource allocation
requests. It grants requests only if it determines that the system will remain in a safe state.

• Safety Algorithm: It is used to check if a system is in a safe state or not.

• Resource Request Algorithm: It is used to check if a resource request by a process can be granted without
causing the system to enter an unsafe state or result in a deadlock.
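A minimal sketch in C of the safety check used by the Banker's Algorithm: compute Need = Max - Allocation, then repeatedly look for a process whose needs can be met from Available, let it finish, and reclaim its resources. The Allocation, Max and Available values here are hypothetical example data.

#include <stdio.h>
#include <stdbool.h>

#define P 3   /* number of processes (hypothetical) */
#define R 3   /* number of resource types (hypothetical) */

int main(void) {
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2}};   /* currently allocated */
    int max[P][R]   = {{2,2,2},{3,2,2},{4,1,3}};   /* maximum demand */
    int avail[R]    = {3,3,2};                     /* currently available */

    int need[P][R];
    bool finished[P] = {false};
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];  /* Need = Max - Allocation */

    int safe_seq[P], count = 0;
    while (count < P) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;                   /* can Need_i be satisfied by Available? */
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j];       /* the process finishes and releases its resources */
                finished[i] = true;
                safe_seq[count++] = i;
                progressed = true;
            }
        }
        if (!progressed) { printf("System is NOT in a safe state\n"); return 0; }
    }
    printf("Safe sequence: ");
    for (int i = 0; i < P; i++) printf("P%d ", safe_seq[i]);
    printf("\n");
    return 0;
}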
Memory Partitioning
In memory partitioning, memory is divided into a number of regions or partitions. Each region may hold one program to be executed. When a region is free, a program is selected from the job queue and loaded into the free region; when it terminates, the region becomes available for another program. There are two types of memory partitioning:

1) Static (Fixed Size) Memory Partitioning: In static memory partitioning, the memory is divided into a number of fixed-size partitions that do not change as the system runs. There are two alternatives for fixed-size memory partitioning:
A) Equal sized partitioning: every partition has the same size.
B) Unequal sized partitioning: the partitions have different sizes.
(Diagram: above the operating system at address 0, A) equal-sized partitions with boundaries at 8, 12, 16, 20 and 24 MB; B) unequal-sized partitions with boundaries at 8, 14, 18, 22 and 24 MB.)

2) Dynamic (Variable) Memory Partitioning: In variable memory partitioning, the partitions can vary in number and size; the amount of memory allocated is exactly the amount of memory a process requires.
Free Space Management Techniques

1) Bitmap / Bit Vector: A bitmap or bit vector is a series or collection of bits where each bit corresponds to a disk block. The bit can take two values, 0 and 1, where 0 indicates that the block is allocated and 1 indicates a free block.

2) Linked List: In this approach, the free disk blocks are linked together, i.e. a free block contains a pointer to the next free block.

3) Grouping: This approach stores the addresses of the free blocks in the first free block.

4) Counting: This approach stores the address of the first free disk block and a number n of free contiguous disk blocks that follow the first block. Every entry in the list contains the address of the first free disk block and the number n.
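A minimal sketch of the bitmap technique in C, following the convention above (1 = free, 0 = allocated); the 32-block disk size is an arbitrary assumption.

#include <stdio.h>
#include <stdint.h>

#define NBLOCKS 32                      /* hypothetical number of disk blocks */
static uint32_t bitmap = 0xFFFFFFFFu;   /* one bit per block: 1 = free, 0 = allocated */

/* find the first free block, mark it allocated, and return its index (or -1) */
int alloc_block(void) {
    for (int i = 0; i < NBLOCKS; i++)
        if (bitmap & (1u << i)) {
            bitmap &= ~(1u << i);       /* clear the bit: block is now allocated */
            return i;
        }
    return -1;                          /* no free block */
}

void free_block(int i) {
    bitmap |= (1u << i);                /* set the bit: block is free again */
}

int main(void) {
    int a = alloc_block(), b = alloc_block();
    printf("allocated blocks %d and %d\n", a, b);
    free_block(a);
    printf("block %d freed; next allocation returns %d\n", a, alloc_block());
    return 0;
}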
Virtual memory
Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely
large virtual memory to be provided for programmers when only a smaller physical memory is available.

Paging, Segmentation, Page Fault


1) Paging: It is a Memory Management Technique by which a computer stores and retrieves data from secondary
storage for use in main memory. In paging the operating system retrieves data from secondary storage in same size
blocks called pages.

2) Segmentation: It is a Memory Management Technique scheme that implements the user’s view of a program. In
segmentation, the entire logical address space is considered as a collection of segments with each segment having a
number and a length

3) Page Fault: A page fault in an operating system happens when a program needs data that isn't currently in the
main memory (RAM). The system then has to fetch that data from the slower storage, like the hard drive, which slows
down the program's execution temporarily.
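A minimal sketch of the address translation that paging performs: a logical address is split into a page number and an offset, and the page table maps the page number to a frame number. The 4 KB page size and the tiny page table here are illustrative assumptions.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                        /* assumed page size: 4 KB */

int main(void) {
    /* hypothetical page table: page_table[page number] = frame number */
    uint32_t page_table[] = {5, 2, 7, 1};

    uint32_t logical  = 0x2ABC;                /* example logical address */
    uint32_t page     = logical / PAGE_SIZE;   /* page number */
    uint32_t offset   = logical % PAGE_SIZE;   /* offset within the page */
    uint32_t frame    = page_table[page];      /* look up the frame (a missing entry would mean a page fault) */
    uint32_t physical = frame * PAGE_SIZE + offset;

    printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           logical, page, offset, physical);
    return 0;
}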
Fragmentation

Fragmentation: As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to these memory blocks because the blocks are too small, and the memory blocks remain unused; this problem is known as Fragmentation.

Types of Fragmentation are: A) Internal Fragmentation And B) External Fragmentation

A) Internal Fragmentation: Wasting of memory within a partition, due to a difference in size of a Partition and of
the object resident within it, is called Internal Fragmentation.

B) External Fragmentation: Wasting of memory between partitions due to scattering of the free space into a
number of discontinuous areas is called External Fragmentation.
File & Attributes of File

A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes and optical disks.

Attributes of Files are:

• Name: It is the easy-to-read label given to a file, allowing humans to identify and access it within a system.

• Type: this type of information is needed for those systems that support different types.

• Location: It refers to a location / position where the file is stored on that device.

• Size: It refers to the size of the file and the maximum allowed size is mentioned in this attribute.

• Protection: Access-control information determines who can read, write or execute the file, and so on.
File operations & File types
There are various types of file operations some of them are classified below

• Creating a File: Generating a new file within the system

• Writing a File: Adding data to an existing file

• Reading a File: Accessing and viewing content from a file.

• Deleting a File: Removing a file from the system permanently.

There are various Types of Files some of them are classified below:

• Executable – exe • PowerPoint Presentation – ppt • Document – doc

• Object – obj • Portable Document format – pdf • Zipped – zip

• Text – txt • MP3 Audio – mp3 • Library – lib


Access Methods

An access method describes the manner and mechanisms by which a process accesses the
Data / Information in a file.

There are three types of access methods:

A) Sequential File Access: It is the simplest access method. Information in the file is processed in order, one
record after the other. This mode of access is by far the most common.

B) Indexed Sequential File Access: It is the other method of accessing a file that is built on the top of the
sequential access method. Here the data is stored sequentially with an index for efficient access, allowing both
sequential and direct access.

C) Direct File Access: It is also known as relative access. A file is made up of fixed length, logical records that allow
programs to read and write records rapidly in no particular order.
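A minimal sketch in C contrasting sequential and direct (relative) access using fixed-length records and fseek(); the record size and file name are illustrative.

#include <stdio.h>

#define RECORD_SIZE 32                            /* assumed fixed-length record size */

int main(void) {
    FILE *fp = fopen("records.dat", "w+b");       /* hypothetical record file */
    if (!fp) { perror("fopen"); return 1; }

    char rec[RECORD_SIZE] = {0};
    for (int i = 0; i < 5; i++) {                 /* sequential access: write records in order */
        snprintf(rec, sizeof rec, "record %d", i);
        fwrite(rec, RECORD_SIZE, 1, fp);
    }

    /* direct (relative) access: jump straight to record 3 by computing its offset */
    fseek(fp, 3L * RECORD_SIZE, SEEK_SET);
    fread(rec, RECORD_SIZE, 1, fp);
    printf("direct read of record 3: %s\n", rec);

    fclose(fp);
    return 0;
}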
Directory & Its Types
To organise files in the computer system in a systematic manner the operating system provides the concept of
directories. A directory can be defined as a way of grouping files together
There are three Types of Directories

A) Single Level directory Structure: It is the simplest form of cat bo a test data mail cont data records
directory. In single level directory, Entire files are contained in
the same directory so unique name must be assigned to each
file of the directory. Root Directory

User A User B User C


B) Two Level directory Structure: In this structure a separate
directory is provided to each user and all these directories are File 1 File 2 File 1 File 2 File 3 File 4
contained and indexed in the master directory.
Root Directory

C) Tree Structured Directory: In this structure it allows the users User A User B User C
to create their own subdirectory and to organise their files
tmp File 1 File 3 Proj 1
accordingly a subdirectory contains a set of files or subdirectories. Proj
File 2

File 1 File 2 Test c Test 1
