GANDHI INSTITUTE FOR EDUCATION & TECHNOLOGY
Baniatangi, Bhubaneswar
FOR 6TH SEM (CSE, ECE, EEE)
Prepared By:
Santosh Kumar Rath
Operating System
Introduction to Operating System:
An operating system is a program that acts as an intermediary between a
user of a computer and the computer hardware. The purpose of an operating
system is to provide an environment in which a user can execute programs.
System Components
An operating system is an important part of almost every computer system.
A computer system can be divided roughly into four components:
(i) Hardware
(ii) Operating system
(iii) Applications programs
(iv) The users
1. Hardware
 The hardware- the central processing unit (CPU), the memory, and the
input /output (I/O) devices - provides the basic computing resources
for the system.
2. Operating System
 An operating system is a control program.
 A control program controls the execution of user programs to prevent
errors and improper use of the computer.
 It provides a basis for the application program and acts as an
intermediary between the user and computer.
3. Application Program
 It helps the user to perform a single task or multiple related tasks.
 It helps the user to solve problems in the real world.
 The applications programs- such as compilers, database systems,
games, and business programs- define the ways in which these
resources are used to solve the computing problems of the users.
Simple Batch Systems
To speed up processing, jobs with similar needs were batched together and
were run through the computer as a group. Thus, the programmers would
leave their programs with the operator. The operator would sort programs
into batches with similar requirements and, as the computer became
available, would run each batch. The output from each job would be sent
back to the appropriate programmer.
Demerits
 The lack of interaction between the user and the job while that job is
executing.
 In this execution environment, the CPU is often idle.
SPOOLING
Spooling stands for simultaneous peripheral operation on-line. For example, if two or more users issue the print command, the printer can accept the requests even while it is printing some other job. While the printer prints one job, the "spool disk" can load the other jobs.
The spool disk is a temporary buffer; it can read data from the secondary storage devices directly. While the printer prints one job, the CPU can be executing another job and the spool disk can be loading a third, so three jobs make progress simultaneously.
Multi-programming Systems:
It is a technique in which a single processor executes a number of programs concurrently. Several processes reside in main memory at the same time, and the operating system picks and begins to execute one of the jobs in memory.
In a non-multiprogramming system, the CPU can execute only one program at a time; if the running program waits for an I/O device, the CPU becomes idle, which hurts CPU performance. In multiprogramming, the CPU switches from that job to another job in the job pool, so the CPU is never left idle as long as some job can run.
This idea is common in other life situations. A lawyer does not have only one client at a time; rather, several clients may be in the process of being served at the same time. Multiprogramming is the first instance where the operating system must make decisions for the users.
Advantages:-
 Multiprogramming increases CPU utilization.
 The CPU is never idle, so its performance increases.
 Throughput of the CPU may also increase.
Time-Sharing Systems
Time sharing, or multitasking, is a logical extension of multiprogramming.
Multiple jobs are executed by the CPU switching between them, but the
switches occur so frequently that the users may interact with each program
while it is running.
Time-sharing systems were developed to provide interactive use of a
computer system at a reasonable cost. A time-shared operating system uses
CPU scheduling and multiprogramming to provide each user with a small
portion of a time-shared computer.
Examples of time-sharing operating systems are CTSS, Multics, Cal, and Unix.
Distributed Systems
A distributed system is a collection of physically separate, possibly
heterogeneous computer systems that are networked to provide the users
with access to the various resources that the system maintains. The processors do not share memory or a clock; each processor has its own local memory.
Advantages:
 Increases computation speed
 Increases functionality
 Increases data availability and reliability.
Network Operating System:
A network operating system is an operating system that provides features
such as file sharing across the network and that includes a communication
scheme that allows different processes on different computers to exchange
messages. A computer running a network operating system acts
autonomously from all other computers on the network, although it is aware
of the network and is able to communicate with other networked computers.
Parallel System:
A system consisting of more than one processor that are tightly coupled is known as a parallel system. In a parallel system a number of processors execute their jobs in parallel.
Tightly Coupled:- A system having more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices, is referred to as a "tightly coupled" system.
Advantages:-
Increases throughput.
Increases reliability.
Special-Purpose Systems:
The discussion thus far has focused on general-purpose computer systems
that we are all familiar with. There are, however, different classes of
computer systems whose functions are more limited and whose objective is
to deal with limited computation domains.
Real-Time Embedded Systems
A real-time operating system (RTOS) is an operating system (OS) intended to serve real-
time application requests. It must be able to process data as it comes in, typically without buffering
delays. A real-time system is used when rigid time requirements have been
placed on the operation of a processor or the flow of data; thus, it is often
used as a control device in a dedicated application. Sensors bring data to the
computer. The computer must analyze the data and possibly adjust controls
to modify the sensor inputs. Systems that control scientific experiments,
medical imaging systems, industrial control systems, and certain display
systems are real-time systems. Some automobile-engine fuel-injection
systems, home-appliance controllers, and weapon systems are also real-
time systems.
A real-time system has well-defined, fixed time constraints. Processing must
be done within the defined constraints, or the system will fail. For instance, it
would not do for a robot arm to be instructed to halt after it had smashed
into the car it was building. A real-time system functions correctly only if it
returns the correct result within its time constraints. Contrast this system
with a time-sharing system, where it is desirable (but not mandatory) to
respond
quickly, or a batch system, which may have no time constraints at all.
SYSTEM STRUCTURES
Operating-System Services
OS provides an environment for execution of programs. It provides certain
services to programs and to the users of those programs. OS services
are provided for the convenience of the programmer, to make the
programming task easier.
One set of OS services provides functions that are helpful to the user –
 User interface: Almost all operating systems have a user interface (UI). Interfaces come in three forms. A command-line interface uses text commands and a method for entering them. A batch interface takes commands, and directives to control those commands, from files, and those files are then executed. A graphical user interface is a window system with a pointing device to direct I/O, choose from menus and make selections, and a keyboard to enter text.
 Program execution: The system must be able to load a program into memory and run that program. The program must be able to end its execution either normally or abnormally.
 I/O operations: A running program may require I/O which may
involve a file or an I/O device. For efficiency and protection, users
cannot control I/O devices directly.
 File system manipulation: Programs need to read and write files
and directories. They also need to create and delete them by
name, search for a given file, and list file information.
 Communications: One process might need to exchange information
with another process.Such communication may occur between
processes that are executing on the same computer or between
processes that are executing on different computer systems tied
together by a computer network. Communications may be
implemented via shared memory or through message passing.
 Error detection: OS needs to be constantly aware of possible errors.
Errors may occur in the CPU and memory hardware, in I/O devices and
in the user program. For each type of error, OS takes appropriate
action to ensure correct and consistent computing.
Another set of OS functions exist for ensuring efficient operation of the
system. They are-
a. Resource allocation: When there are multiple users or multiple jobs
running at the same time, resources must be allocated to each of them.
Different types of resources such as CPU cycles, main memory and file
storage are managed by the operating system.
b. Accounting: Keeping track of which users use how much and what
kinds of computer resources.
c. Protection and security: Controlling the use of information stored in
a multiuser or
networked computer system. Protection involves ensuring that all
access to system
resources is controlled. Security starts with requiring each user to authenticate himself or herself to the system, usually by means of a password, in order to gain access to system resources.
System Calls
System calls provide an interface to the services made available by an
operating system.
An example to illustrate how system calls are used:
Writing a simple program to read data from one file and copy them to
another file-
a) The first input required is the names of the two files: an input file and an output file. These names can be specified in many ways. One approach is for the
program to ask the user for the names of two files. In an interactive
system, this approach will require a sequence of system calls, to write
a prompting message on screen and then read from the keyboard the
characters that define the two files. On mouse based and icon based
systems, a menu of file names is displayed in a window where the user
can use the mouse to select the source names and a window can be
opened for the destination name to be specified.
b) Once the two file names are obtained, program must open the
input file and create the output file. Each of these operations requires
another system call. Possible error conditions must be handled. When the program tries to open the input file, no file of that name may exist, or the file may be protected against access; in that case the program prints a message on the console and terminates
abnormally. If the input file exists, we must create a new output file. If an output file with the same name already exists, the program may abort, or it may delete the existing file and create a new one. Another option is to ask the user (via a sequence of system calls) whether to replace the existing file or to abort the program.
When both files are set up, a loop reads from the input file and writes to the output file (using the read and write system calls respectively). Each read and write must return status information regarding various possible error conditions. After the entire file is copied, the program closes both files, writes a message to the console or window, and finally terminates normally.
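As a rough sketch of the program just described (assuming a POSIX-like system where open, read, write and close are the available system calls, and with the two file names hard-coded rather than asked for), the copy loop might look like this:

/* copy.c - minimal file copy using POSIX system calls (illustrative sketch) */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* "in.txt" and "out.txt" are placeholder names for this sketch */
    int in = open("in.txt", O_RDONLY);                 /* system call: open input file */
    if (in < 0) { perror("open input"); exit(1); }     /* abnormal termination on error */

    int out = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);  /* create output file */
    if (out < 0) { perror("create output"); exit(1); }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)        /* loop: read, then write */
        if (write(out, buf, n) != n) { perror("write"); exit(1); }

    close(in);                                         /* close both files */
    close(out);
    return 0;                                          /* normal termination */
}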
Application developers design programs according to an application programming interface (API). The API specifies the set of functions that are available to an application programmer.
The functions that make up the API typically invoke the actual system calls on behalf of the application programmer. Benefits of programming to an API rather than invoking actual system calls directly:
 Program portability – An application programmer designing a program using an API can expect the program to compile and run on any system that supports the same API.
 Actual system calls can be more detailed and difficult to work
with than the API available to an application programmer.
The run time support system ( a set of functions built into libraries included
with a compiler) for most programming languages provides a system call
interface that serves as a link to system calls made available by OS.
The system call interface intercepts function calls in the API and
invokes the necessary system call within the operating system. A number is
associated with each system call and the system call interface maintains
a table indexed according to these numbers. The system call interface then invokes the intended system call in the OS kernel and returns the status of the system call and any return values.
System calls occur in different ways, depending on the computer in
use – more information is required than simply the identity of the
desired system call. The exact type and amount of information vary
according to the particular OS and call.
Three general methods are used to pass parameters to the OS:
I. Pass the parameters in registers.
II. Store the parameters in a block or table in memory, and pass the address of the block as a parameter in a register.
III. Push the parameters onto the stack by the program and pop them off the stack by the OS.
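On Linux, for instance, the C library's generic syscall() wrapper makes the first of these methods visible: the first argument is the system call number (the index into the table mentioned above) and the remaining arguments are placed in registers for the kernel. A minimal sketch:

/* syscall_demo.c - invoking a system call by number (Linux-specific sketch) */
#define _GNU_SOURCE          /* for the syscall() declaration on some systems */
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello via system call\n";
    /* SYS_write is the table index; the fd, buffer and length go in registers */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}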
Types of system calls
Five major categories:
1) Process control
end, abort
load, execute
create process, terminate process
get process attributes, set process attributes
wait for time
wait event, signal event
allocate and free memory
2) File Management
create file, delete file
open, close
read, write, reposition
get file attributes, set file attributes
3) Device management
request device, release device
read, write, reposition
get device attributes, set device attributes
logically attach or detach devices
4) Information maintenance
get time or date, set time or date
get system data, set system data
get process, file or device attributes
set process, file or device attributes
5) Communications
create, delete communication connection
send, receive messages
transfer status information
attach or detach remote devices
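To make the process-control category concrete, here is a small POSIX-style sketch that creates a process, loads a new program into it, and waits for it to terminate (the command "ls" is only an illustrative choice):

/* proc_control.c - create process, execute, wait, terminate (POSIX sketch) */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    /* create process */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                        /* child: load and execute a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("exec");                    /* reached only if exec fails */
        _exit(1);                          /* abort */
    }

    int status;
    waitpid(pid, &status, 0);              /* wait event: child termination */
    printf("child finished with status %d\n", WEXITSTATUS(status));
    return 0;                              /* end */
}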
System Programs
System programs provide a convenient environment for program
development and execution. They can be divided into these categories-
File management: These programs create, delete, copy, rename, print,
dump, list and manipulate files and directories.
Status information: Some programs ask the system for the date, time, amount of available memory or disk space, and the number of users.
File modification: Text editors may be available to create and modify the
content of files stored on disk or other storage devices.
Programming language support: Compilers, assemblers, debuggers and
interpreters for common programming languages are often provided to the
user with the OS.
Program loading and execution: Once a program is assembled or
compiled, it must be loaded into memory to be executed. System
provides absolute loaders, relocatable loaders, linkage editors and overlay
loaders.
Communications: These programs provide the mechanism for creating
virtual connections among processes, users and computer systems.
In addition to system programs, most operating systems are supplied with programs that are useful in solving common problems or performing common operations. Such programs include web browsers, word processors and text formatters, spreadsheets, database systems, etc. These programs are known as system utilities or application programs.
PROCESS MANAGEMENT
Current day computer systems allow multiple programs to be loaded into
memory and executed concurrently. Process is nothing but a program in
execution. A process is the unit of work in a modern time sharing
system. A system is a collection of processes: operating system
processes executing operating system code and user processes
executing user code. By switching CPU between processes, the operating
system can make the computer more productive.
Overview
A batch system executes jobs whereas a time shared system has user
programs or tasks. On a single-user system, a user may be able to run several programs at one time: a word processor, a web browser and an e-mail package. All of these are called processes.
The Process
A process is a program in execution. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers. A process generally also includes the process stack, which contains temporary data, and a data section, which contains global variables.
A process may also include a heap, which is memory that is dynamically allocated during process run time. A program by itself is not a process; a program is a passive entity, such as a file containing a list of instructions stored on disk (called an executable file), whereas a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources. A program becomes a process when an executable file is loaded into memory. Although two processes may be associated with the same program, they are considered two separate execution sequences.
Process State
As a process executes, it changes state. The state of a process is
defined in part by the current activity of that process. Each process may
be in one of the following states-
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur.
Ready: The process is waiting to be assigned to the processor.
Terminated: The process has finished execution.
Process Control Block
Each process is represented in the operating system by a process
control block (PCB) also called task control block. It contains many
pieces of information associated with a specific process including:
Process State: The state may be new, ready, running, waiting, halted etc.
Process Number: A unique number assigned to each process when it is created.
Program Counter: The counter indicates the address of the next instruction
to be executed for this process.
CPU registers: The registers vary in number and type depending on the
computer architecture.
CPU scheduling information: This information includes a process priority,
pointers to scheduling queues, and other scheduling parameters.
Memory management information: This may include such information as the values of the base and limit registers, etc.
Accounting information: This information includes the amount of CPU and
real time used, time limits etc.
I/O status information: This information includes the list of I/O devices
allocated to the process, etc.
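A PCB is essentially a per-process record kept by the kernel. A hedged C sketch of what such a record might contain (the field names and sizes are illustrative and do not correspond to any particular kernel):

/* Illustrative process control block - not any real kernel's definition */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process number */
    enum proc_state state;            /* process state */
    unsigned long   program_counter;  /* address of next instruction */
    unsigned long   registers[16];    /* saved CPU registers */
    int             priority;         /* CPU-scheduling information */
    unsigned long   base, limit;      /* memory-management information */
    unsigned long   cpu_time_used;    /* accounting information */
    int             open_files[16];   /* I/O status information */
    struct pcb     *next;             /* link to the next PCB in a queue */
};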
Threads
A traditional process is a program that performs a single thread of execution. A single thread of control allows the process to perform only one task at a time.
Process Scheduling
The objective of multi programming is to have some process running
at all times to maximize CPU utilization. The objective of time sharing
is to switch the CPU among processes so frequently that users can
interact with each program while it is running. To meet these
objectives, the process scheduler selects an available process for
program execution on the CPU. On a single-processor system, there will never be more than one running process.
Scheduling Queues
Job queue: As processes enter the system, they are put inside the job queue,
which consists of all processes in the system.
Ready queue: The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue.
Device queue: When a process is allocated the CPU, it executes for a while
and eventually quits, is interrupted or waits for the occurrence of a particular
event such as the completion of an I/O request. The list of processes waiting
for a particular I/O device is called a device queue. Each device has its own
device queue.
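Reusing the illustrative struct pcb sketched earlier (with its next pointer), the ready queue and each device queue can be kept as simple FIFO linked lists; the enqueue and dequeue operations might look roughly like this:

/* Illustrative FIFO queue of PCBs, usable as a ready queue or a device queue */
struct pcb_queue {
    struct pcb *head;   /* first PCB in the list */
    struct pcb *tail;   /* final PCB in the list */
};

static void enqueue(struct pcb_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p;   /* append at the tail */
    else         q->head = p;         /* queue was empty    */
    q->tail = p;
}

static struct pcb *dequeue(struct pcb_queue *q)
{
    struct pcb *p = q->head;          /* remove from the head */
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;                         /* NULL if the queue was empty */
}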
Queuing diagram
A common representation for process scheduling is queuing diagram. Each
rectangular box represents a queue. Two types of queues are present:
ready queue and a set of device queues. Circles represent the
resources that serve the queues and the arrows indicate the flow of
processes in the system.
A new process is initially put in the ready queue. It waits there until it is
selected for execution or is dispatched. Once the process is allocated the
CPU and is executing, one of the following events might occur –
a) The process could issue an I/O request and then be placed in an I/O queue.
b) The process could create a new subprocess and wait for the subprocess's termination.
c) The process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue.
Schedulers
A process migrates among the various scheduling queues throughout its lifetime. The OS must select, for scheduling purposes, processes from these queues; this selection is carried out by a scheduler.
In a batch system, more processes are submitted than can be
executed immediately. These processes are spooled to a mass storage
device (disk) where they are kept for later execution. The long term
scheduler or job scheduler selects processes from this pool and loads
them into memory for execution. The short term scheduler or CPU
scheduler selects from among the processes that are ready to execute and
allocates the CPU to one of them. The distinction between these two lies
in the frequency of execution. The long term scheduler controls the
degree of multi programming (the number of processes in memory). Most
processes can be described as either I/O bound or CPU bound. An I/O bound
process is one that spends more of its time doing I/O than it spends doing
computations. A CPU bound process generates I/O requests infrequently
using more of its time doing computations. Long term scheduler must
select a good mix of I/O bound and CPU bound processes. A system
with best performance will have a combination of CPU bound and I/O bound
processes.
Some operating systems such as time sharing systems may introduce
an additional, intermediate level of scheduling. The idea behind the medium-term
scheduler is that sometimes it can be advantageous to remove
processes from memory and thus reduce the degree of
multiprogramming. Later the process can be reintroduced into memory
and its execution can be continued where it left off. This scheme is
called swapping. The process is swapped out and is later swapped in by the
medium term scheduler.
Context switch
Interrupts cause the OS to change a CPU from its current task and to run a
kernel routine. When an interrupt occurs, the system needs to save the
current context of the process currently running on the CPU so that it can
restore that context when its processing is done, essentially suspending the
process and then resuming it. The context is represented in the PCB of the process; it includes the value of the CPU registers, the process state, and memory-management information. We perform a state save of the current state of the CPU and then a state restore to resume operations.
Switching the CPU to another process requires performing a state save of the
current process and a state restore of a different process. This task is known as a context switch. The kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
Context switch time is pure overhead. Its speed varies from machine to
machine depending on the memory speed, the number of registers that
must be copied and the existence of special instructions. Context switch
times are highly dependent on hardware support.
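A real context switch happens inside the kernel, but the idea of saving one execution context and restoring another can be imitated at user level with the (obsolescent but still widely available) ucontext API; a small sketch:

/* ctxswitch.c - user-level illustration of saving and restoring a context */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_task;
static char task_stack[64 * 1024];

static void task(void)
{
    printf("task: running, now switching back\n");
    swapcontext(&ctx_task, &ctx_main);      /* state save of task, state restore of main */
    printf("task: resumed where it left off\n");
}

int main(void)
{
    getcontext(&ctx_task);                  /* capture a template context            */
    ctx_task.uc_stack.ss_sp   = task_stack; /* give the new context its own stack    */
    ctx_task.uc_stack.ss_size = sizeof task_stack;
    ctx_task.uc_link          = &ctx_main;  /* return here when the task finishes    */
    makecontext(&ctx_task, task, 0);

    swapcontext(&ctx_main, &ctx_task);      /* save main's state, restore task's     */
    printf("main: back after first switch\n");
    swapcontext(&ctx_main, &ctx_task);      /* switch to the task once more          */
    printf("main: task completed\n");
    return 0;
}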
Multithreading
A thread is a basic unit of CPU utilization. Support for threads may be
provided either at the user level for user threads or by the kernel for
kernel threads. User threads are supported above the kernel and are
managed without kernel support whereas kernel threads are supported
and managed directly by the operating system.
Benefits of multi threaded programming are:
Responsiveness: Multi threading an interactive application may allow a
program to continue running even if part of it is blocked or is
performing a lengthy operation, thereby increasing responsiveness to the
user.
Resource sharing: Threads share the memory and the resources of the process to which they belong. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.
Economy: Allocating memory and resources for process creation is costly.
Because threads share resources of the process to which they belong, it is
more economical to create and context switch threads.
Utilization of multi processor architecture: The benefits of multi
threading can be greatly increased in a multi processor architecture where
threads may be running in parallel on different processors. Multi threading
on a multi CPU machine increases concurrency.
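A small sketch of kernel-supported multithreading using the POSIX threads library, in which two threads of the same process share its data section (compile with the -pthread option):

/* threads.c - two threads sharing the address space of one process (POSIX threads) */
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;                 /* data section shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* threads share data, so synchronize */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* each thread gets its own stack and registers */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final counter = %d\n", shared_counter);   /* 200000 */
    return 0;
}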
CPU scheduling is the basis of multi programmed operating systems. By
switching CPU among processes, the OS can make the computer more
productive. On operating systems that support threads, it is kernel-level threads, not processes, that are scheduled by the operating system. However, the terms process scheduling and thread scheduling are often used interchangeably.
In a single processor system, only one process can run at a time; others
must wait until CPU is free and can be rescheduled. The objective of multi
programming is to have some process running at all times to
maximize CPU utilization. Under multi programming, several processes are
kept in memory at one time. When one process has to wait, the OS takes the
CPU away from that process and gives the CPU to another process. As CPU is
one of the primary computer resources, its scheduling is central to operating
system design.
Three common ways of establishing a relationship between user level
and kernel level threads are:
Many to one model - Maps many user level threads to one kernel thread.
Thread management is done by the thread library in user space hence
it is efficient but entire process will block if a thread makes a blocking
system call. As only one thread can access the kernel at a time, multiple
threads are unable to run in parallel on multi processors.
One to one model - Maps each user thread to a kernel thread. It
allows more concurrency by allowing another thread to run when a
thread makes a blocking system call; it also allows multiple threads to
run in parallel on multi processors. Disadvantage is that creating a user
thread requires creating the corresponding kernel thread.
Many to many model - Multiplexes many user level threads to a
smaller or equal number of kernel level threads. The number of kernel
threads may be specific to either a particular application or a particular
machine. Developers can create as many user threads as necessary and the
corresponding kernel threads can run in parallel on a multi processor.
Also, when a thread performs a blocking system call, the kernel can
schedule another thread for execution.
On operating systems that support threads, it is kernel level threads that are
being scheduled by operating system. User level threads are managed by
thread library. To run on a CPU, user level threads must be mapped to an
associated kernel-level thread, although this mapping may be indirect and may use a lightweight process.
PROCESS SCHEDULING:
Traditionally, a process is an executing program with a single thread of control. Most modern operating systems provide features enabling a process to contain multiple
threads of control. A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set and a stack. It
shares with other threads belonging to the same process its code
section, data section and other operating system resources such as open
files and signals. A traditional or heavy weight process has a single thread of
control.
CPU – I/O burst cycle:
Process execution consists of a cycle of CPU execution and I/O wait.
Processes alternate between these two states. Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, and so on. The final CPU burst ends with a system request to terminate execution.
An I/O bound program has many short CPU bursts. A CPU bound
program might have a few long CPU bursts.
CPU scheduler:
Whenever the CPU becomes idle, the operating system must select one
of the processes in the ready queue to be executed. The selection
process is carried out by the short term scheduler or CPU scheduler.
The scheduler selects a process from the processes in memory that are ready
to execute and allocates the CPU to that process.
The ready queue is not necessarily a first in first out queue. A ready queue
can be implemented as a FIFO queue, a priority queue, a tree or an
unordered linked list. All the processes in the ready queue are lined
up waiting for a chance to run on the CPU. The records in the queue are
process control blocks of the processes.
Pre-emptive scheduling:
CPU scheduling decisions may take place under the following four
conditions-
a) When a process switches from the running state to the waiting state
b) When a process switches from the running state to the ready state
c) When a process switches from the waiting state to the ready state
d) When a process terminates
When scheduling takes place only under conditions (a) and (d), the scheduling scheme is non-preemptive or cooperative; otherwise it is preemptive.
Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. Preemptive scheduling incurs a cost associated with access to shared data, and preemption also affects the design of the operating-system kernel.
Dispatcher:
The dispatcher is the module that gives control of the CPU to the
process selected by the short term scheduler. This function involves-
a) Switching context
b) Switching to user mode
c) Jumping to the proper location in the user program to restart that
program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.
Scheduling criteria:
Different CPU scheduling algorithms have different properties. Criteria for
comparing CPU scheduling algorithms-
a) CPU utilization: Keep the CPU as busy as possible
b) Throughput: One measure of the work done by the CPU is the number of processes that are completed per time unit, called throughput.
c) Turnaround time: The interval from the time of submission of a
process to the time of completion is the turnaround time. Turnaround
time is the sum of the periods spent waiting to get into memory, waiting
in the ready queue, executing on the CPU and doing I/O.
d) Waiting time: It is the sum of the periods spent waiting in the ready
queue.
e) Response time: The time from the submission of a request until the first response is produced, not the time it takes to output the entire response.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time and response time.
Scheduling algorithms:
First Come First Serve:
This is the simplest CPU scheduling algorithm. The process that
requests the CPU first is allocated the CPU first. The implementation of
FCFS is managed with a FIFO queue. When a process enters the ready
queue, its PCB is linked onto the tail of the queue. When the CPU is
free, it is allocated to the process at the head of the queue. The running
process is then removed from the queue.
The average waiting time under FCFS is often quite long. The FCFS
scheduling algorithm is non pre-emptive.
Once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O. FCFS is simple and fair, but its performance can be poor: the average queuing time may be long.
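A small sketch that computes waiting and turnaround times under FCFS for a fixed set of CPU bursts (the burst values are made up for illustration, and all processes are assumed to arrive at time 0 in the order P1, P2, P3):

/* fcfs.c - waiting and turnaround times under first come first serve */
#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };                    /* illustrative CPU bursts, arrival order P1 P2 P3 */
    int n = sizeof burst / sizeof burst[0];
    int time = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int wait = time;                           /* waits until all earlier jobs finish */
        int turnaround = wait + burst[i];
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_tat  += turnaround;
        time += burst[i];
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}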
Shortest Job First:
This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
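The same kind of calculation for non-preemptive SJF: sort the ready processes by their predicted next CPU burst and then run the FCFS-style loop (burst values are again illustrative, and all processes are assumed to arrive at time 0):

/* sjf.c - waiting times under non-preemptive shortest job first */
#include <stdio.h>
#include <stdlib.h>

struct job { int id; int burst; };

static int by_burst(const void *a, const void *b)
{
    return ((const struct job *)a)->burst - ((const struct job *)b)->burst;  /* order by burst length */
}

int main(void)
{
    struct job jobs[] = { {1, 6}, {2, 8}, {3, 7}, {4, 3} };   /* illustrative bursts */
    int n = sizeof jobs / sizeof jobs[0];

    qsort(jobs, n, sizeof jobs[0], by_burst);                 /* smallest next burst runs first */

    int time = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d: waiting=%d\n", jobs[i].id, time);
        total_wait += time;
        time += jobs[i].burst;
    }
    printf("average waiting=%.2f\n", (double)total_wait / n);
    return 0;
}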
Priority:
The SJF is a special case of the general priority scheduling algorithm.
A priority is associated with each process and CPU is allocated to the
process with highest priority. Equal priority processes are scheduled in
FCFS order. An SJF algorithm is simply a priority algorithm where the
priority is the inverse of the predicted next CPU burst. The larger the
CPU burst, the lower the priority. Priorities are generally indicated by some
fixed range of numbers usually 0 to 7.
Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity to compute the priority of a process. External priorities are set by criteria outside the operating system.
Priority scheduling can be either pre emptive or non pre emptive. When a
process arrives at the ready queue, its priority is compared with the
priority of the currently running process. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A non-preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
The major problem with priority scheduling algorithm is indefinite
blocking or starvation. A process that is ready to run but waiting for
the CPU can be considered blocked. A priority scheduling algorithm
can leave some low priority processes waiting indefinitely. A solution
to the problem of indefinite blockage of low priority processes is
aging. Aging is a technique of gradually increasing the priority of
processes that wait in the system for a long time.
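A hedged sketch of priority selection with aging: after every scheduling decision, each process still waiting has its priority number decreased (here a smaller number means a higher priority), so a low-priority process cannot starve forever. The burst and priority values are made up for illustration:

/* priority_aging.c - priority scheduling with aging (illustrative simulation) */
#include <stdio.h>

#define N 3

int main(void)
{
    int burst[N]    = { 10, 1, 2 };     /* illustrative CPU bursts          */
    int priority[N] = { 3, 1, 4 };      /* smaller number = higher priority */
    int done[N]     = { 0 };

    for (int finished = 0; finished < N; finished++) {
        int pick = -1;
        for (int i = 0; i < N; i++)                 /* choose the highest-priority ready job */
            if (!done[i] && (pick < 0 || priority[i] < priority[pick]))
                pick = i;

        printf("run P%d (priority %d, burst %d)\n", pick + 1, priority[pick], burst[pick]);
        done[pick] = 1;

        for (int i = 0; i < N; i++)                 /* aging: every waiting process rises */
            if (!done[i] && priority[i] > 0)
                priority[i]--;
    }
    return 0;
}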
Round Robin:
The round robin scheduling algorithm is designed for time sharing systems.
It is similar to FCFS scheduling but preemption is added to switch
between processes. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds.
The ready queue is treated as a circular queue. The CPU scheduler goes
around the ready queue allocating the CPU to each process for a time
interval of up to 1 time quantum. To implement RR scheduling, ready
queue is implemented as a FIFO queue. New processes are added to
the tail of ready queue. The CPU scheduler picks the first process
from the ready queue, sets the timer to interrupt after 1 time quantum
and dispatches the process.
The process may have a CPU burst of less than 1 time quantum, in which case the process itself will release the CPU voluntarily and the scheduler will proceed to the next process in the ready queue. Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and cause an interrupt to the operating system. A context switch will be executed and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue.
In the RR scheduling algorithm, no process is allocated the CPU for more
than 1 time quantum in a row. If a process's CPU burst exceeds 1 time quantum, that process is preempted and is put back in the ready queue.
Thus this algorithm is preemptive. The performance of the RR algorithm
depends heavily on the size of time quantum. If the time quantum is
extremely large, the RR policy is same as the FCFS policy. If the time
quantum is extremely small, the RR approach is called processor sharing.
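A small sketch simulating round-robin scheduling with a fixed time quantum (the quantum and bursts are illustrative, and all processes are assumed to be ready at time 0):

/* rr.c - round robin simulation with a time quantum */
#include <stdio.h>

#define N 3
#define QUANTUM 4

int main(void)
{
    int burst[N]     = { 24, 3, 3 };        /* illustrative CPU bursts            */
    int remaining[N] = { 24, 3, 3 };
    int finish[N]    = { 0 };
    int time = 0, left = N;

    while (left > 0) {
        for (int i = 0; i < N; i++) {       /* circular scan acts as the ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time += slice;                  /* run for up to one quantum          */
            remaining[i] -= slice;
            if (remaining[i] == 0) {        /* burst finished: record completion  */
                finish[i] = time;
                left--;
            }
        }
    }

    for (int i = 0; i < N; i++)
        printf("P%d: turnaround=%d waiting=%d\n", i + 1, finish[i], finish[i] - burst[i]);
    return 0;
}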
Algorithm evaluation
Selection of an algorithm is difficult. The first problem is defining the criteria to be used in selecting an algorithm. Criteria are often defined in terms of CPU utilization, response time or throughput, and may include several measures, such as:
a) Maximizing CPU utilization under the constraint that the maximum response time is 1 second.
b) Maximizing throughput such that turnaround time is linearly proportional to total execution time.
More Related Content

PPTX
SCHEDULING ALGORITHMS
PPTX
CPU Scheduling in OS Presentation
PPTX
System calls
PPTX
Architecture of operating system
DOC
operating system lecture notes
PPTX
Cpu scheduling in operating System.
PPT
Multiprocessor Systems
PDF
Monitors
SCHEDULING ALGORITHMS
CPU Scheduling in OS Presentation
System calls
Architecture of operating system
operating system lecture notes
Cpu scheduling in operating System.
Multiprocessor Systems
Monitors

What's hot (20)

PPT
Scheduling algorithms
PPTX
File system structure
PPT
Services provided by os
PPT
deadlock avoidance
PPTX
Computing Environments.pptx
DOCX
Operating System Process Synchronization
PPTX
Ram & rom
PDF
OS - Process Concepts
PDF
Operating System
PPT
Memory management
PPT
Operating system.ppt (1)
PDF
Distributed Operating System_1
PPTX
Os unit 3 , process management
PPTX
Operating system 11 system calls
PPTX
Structure of operating system
PPS
Virtual memory
Scheduling algorithms
File system structure
Services provided by os
deadlock avoidance
Computing Environments.pptx
Operating System Process Synchronization
Ram & rom
OS - Process Concepts
Operating System
Memory management
Operating system.ppt (1)
Distributed Operating System_1
Os unit 3 , process management
Operating system 11 system calls
Structure of operating system
Virtual memory
Ad

Viewers also liked (12)

PPTX
Computer Graphics
DOC
Dbms Lecture Notes
PDF
Unit 4 Multimedia CSE Vth sem
PDF
3D transformation - Unit 3 Computer grpahics
DOCX
Bce notes unit 1 be 205
PDF
Unit 5 animation notes
DOCX
Database Management System
PDF
Imp notes dbms
PDF
Operating system concepts (notes)
PDF
Computer Network notes (handwritten) UNIT 1
PDF
Notes 2D-Transformation Unit 2 Computer graphics
PPTX
3D transformation in computer graphics
Computer Graphics
Dbms Lecture Notes
Unit 4 Multimedia CSE Vth sem
3D transformation - Unit 3 Computer grpahics
Bce notes unit 1 be 205
Unit 5 animation notes
Database Management System
Imp notes dbms
Operating system concepts (notes)
Computer Network notes (handwritten) UNIT 1
Notes 2D-Transformation Unit 2 Computer graphics
3D transformation in computer graphics
Ad

Similar to Operating system notes (20)

PDF
Os notes
PPTX
Operating system
PDF
Unit I OS.pdf
PDF
Os-unit1-Introduction to Operating Systems.pdf
PPTX
PDF
3330701_unit-1_operating-system-concepts.pdf
PDF
Os notes 1_5
PPTX
Advanced computer architecture lesson 1 and 2
PPT
Unit 1_Operating system
PDF
Intermediate Operating Systems
PPTX
Os unit i
PPT
Os concepts
PPTX
Introduction to Operating Systems
PDF
Operating system Concepts
PPTX
EE469-ch1.pptx
PPTX
EE469-ch1.pptx
DOCX
A brief introduction about an operating system and its architecture
PPT
Introduction to OS 1.ppt
PPT
chapter 1 intoduction to operating system
PDF
Operating System Overview.pdf
Os notes
Operating system
Unit I OS.pdf
Os-unit1-Introduction to Operating Systems.pdf
3330701_unit-1_operating-system-concepts.pdf
Os notes 1_5
Advanced computer architecture lesson 1 and 2
Unit 1_Operating system
Intermediate Operating Systems
Os unit i
Os concepts
Introduction to Operating Systems
Operating system Concepts
EE469-ch1.pptx
EE469-ch1.pptx
A brief introduction about an operating system and its architecture
Introduction to OS 1.ppt
chapter 1 intoduction to operating system
Operating System Overview.pdf

More from SANTOSH RATH (20)

DOCX
Lesson plan proforma database management system
DOCX
Lesson plan proforma progrmming in c
DOCX
Expected questions tc
PDF
Expected questions tc
PDF
Module wise format oops questions
PDF
2011dbms
PDF
2006dbms
PDF
( Becs 2208 ) database management system
PDF
Rdbms2010
DOCX
Expected Questions TC
PDF
Expected questions tc
DOCX
Expected questions for dbms
PDF
Expected questions for dbms
PDF
Oops model question
PDF
System programming note
PDF
OS ASSIGNMENT 2
PDF
OS ASSIGNMENT-1
PDF
OS ASSIGNMENT 3
PDF
Ds using c 2009
PDF
Data structure using c bcse 3102 pcs 1002
Lesson plan proforma database management system
Lesson plan proforma progrmming in c
Expected questions tc
Expected questions tc
Module wise format oops questions
2011dbms
2006dbms
( Becs 2208 ) database management system
Rdbms2010
Expected Questions TC
Expected questions tc
Expected questions for dbms
Expected questions for dbms
Oops model question
System programming note
OS ASSIGNMENT 2
OS ASSIGNMENT-1
OS ASSIGNMENT 3
Ds using c 2009
Data structure using c bcse 3102 pcs 1002

Recently uploaded (20)

PPTX
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PPTX
UNIT-1 - COAL BASED THERMAL POWER PLANTS
DOCX
573137875-Attendance-Management-System-original
PDF
Model Code of Practice - Construction Work - 21102022 .pdf
PDF
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
PPTX
UNIT 4 Total Quality Management .pptx
PDF
Structs to JSON How Go Powers REST APIs.pdf
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PPTX
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
PDF
Arduino robotics embedded978-1-4302-3184-4.pdf
PPTX
Geodesy 1.pptx...............................................
PDF
Digital Logic Computer Design lecture notes
PDF
Embodied AI: Ushering in the Next Era of Intelligent Systems
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPTX
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
PPTX
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
PPTX
Welding lecture in detail for understanding
PDF
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
DOCX
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
PPTX
additive manufacturing of ss316l using mig welding
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
UNIT-1 - COAL BASED THERMAL POWER PLANTS
573137875-Attendance-Management-System-original
Model Code of Practice - Construction Work - 21102022 .pdf
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
UNIT 4 Total Quality Management .pptx
Structs to JSON How Go Powers REST APIs.pdf
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
Arduino robotics embedded978-1-4302-3184-4.pdf
Geodesy 1.pptx...............................................
Digital Logic Computer Design lecture notes
Embodied AI: Ushering in the Next Era of Intelligent Systems
CYBER-CRIMES AND SECURITY A guide to understanding
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
Welding lecture in detail for understanding
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
additive manufacturing of ss316l using mig welding

Operating system notes

  • 1. GANDHI INSTITUTE FOR EDUCATION & TECHNOLOGY Baniatangi, Bhubaneswar FOR 6TH SEM (CSE, ECE, EEE) Prepared By: Santosh Kumar Rath
  • 2. Operating System 1 Introduction to Operating System: An operating system is a program that acts as an intermediary between a user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute program. System Components An operating system is an important part of almost every computer system. A computer system can be divided roughly into four components: (i) Hardware (ii) Operating system (iii) Applications programs (iv) The users 1. Hardware  The hardware- the central processing unit (CPU), the memory, and the input /output (I/O) devices - provides the basic computing resources for the system. 2. Operating System  An operating system is a control program.  A control program controls the execution of user programs to prevent errors and improper use of the computer.
  • 3. Operating System 2  It provides a basis for the application program and acts as an intermediary between the user and computer. 3. Application Program  It help the user to perform the singular or multiple related tasks.  It help the program to solve in real work.  The applications programs- such as compilers, database systems, games, and business programs- define the ways in which these resources are used to solve the computing problems of the users. Simple Batch Systems To speed up processing, jobs with similar needs were batched together and were run through the computer as a group. Thus, the programmers would leave their programs with the operator. The operator would sort programs into batches with similar requirements and, as the computer became available, would run each batch. The output from each job would be sent back to the appropriate programmer. Demerits  The lack of interaction between the user and the job while that job is executing.  In this execution environment, the CPU is often idle. SPOOLING The expansion of spooling is the simultaneous peripheral operation on-line .For example if two or more users issue the print command, and the
  • 4. Operating System 3 printer can accept the requests .Even the printer printing some other jobs. The printer printing one job at the same time the “spool disk” can load some other jobs. „Spool disk‟ is a temporary buffer, it can read data from the secondary storage devices directly. Simultaneously the CPU executing some other job in the spool disk , at the same time the printer printing the third job. So three jobs are running simultaneously. Multi-programming Systems: It is a technique to execute the number of the program simultaneously by a single processor. In this case the number of the processes are reside in main memory at a time. The operating system picks and begins to execute one of the job in the main memory. In non-multiprogramming system , the CPU can execute only one program at a time , if the running program waiting for any I/O device , the CPU become idle, so it will effect on the performance of the CPU. But in the multiprogramming the CPU switches from that job to another job in the job pool. So the CPU is not idle at any time. This idea is common in other life situations. A lawyer does not have only one client at a time. Rather, several clients may be in the process of being served at the same time..) Multiprogramming is the first instance where the operating system must make decisions for the users. Advantages:-
  • 5. Operating System 4  Multiprogramming increases CPU utilization .  CPU is never idle , so the performance of CPU will increase.  Throughput of the CPU may also increase. Time-Sharing Systems Time sharing, or multitasking, is a logical extension of multiprogramming. Multiple jobs are executed by the CPU switching between them, but the switches occur so frequently that the users may interact with each program while it is running. Time-sharing systems were developed to provide interactive use of a computer system at a reasonable cost. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer. Time-sharing operating systems are CTTS , Multics , Cal ,Unix . Distributed Systems A distributed system is a collection of physically separate, possibly heterogeneous computer systems that are networked to provide the users with access to the various resources that the system maintains. The processor can‟t share the memory or clock , each processor has its. Advantages:  Increases computation speed  Increases functionality  Increases the data availability, and reliability.
  • 6. Operating System 5 Network Operating System : A network operating system is an operating system that provides features such as file sharing across the network and that includes a communication scheme that allows different processes on different computers to exchange messages. A computer running a network operating system acts autonomously from all other computers on the network, although it is aware of the network and is able to communicate with other networked computers. Parallel System: A system consisting of more than one processor and it is tightly coupled then it is known as parallel system. In parallel systems number of the processor executing their job parallel. Tightly Coupled:-The system having more than one processor in close communication ,sharing the computer bus , the clock , and some times memory and peripheral devices. These are referred to as the “tightly coupled” system. Advantages:- Increases throughput. Increases reliability. Special-Purpose Systems:
  • 7. Operating System 6 The discussion thus far has focused on general-purpose computer systems that we are all familiar with. There are, however, different classes of computer systems whose functions are more limited and whose objective is to deal with limited computation domains. Real-Time Embedded Systems A real-time operating system (RTOS) is an operating system (OS) intended to serve real- time application requests. It must be able to process data as it comes in, typically without buffering delays. A real-time system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application. Sensors bring data to the computer. The computer must analyze the data and possibly adjust controls to modify the sensor inputs. Systems that control scientific experiments, medical imaging systems, industrial control systems, and certain display systems are realtime systems. Some automobile-engine fuel-injection systems, home-appliance controllers, and weapon systems are also real- time systems. A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail. For instance, it would not do for a robot arm to be instructed to halt after it had smashed into the car it was building. A real-time system functions correctly only if it returns the correct result within its time constraints. Contrast this system with a time-sharing system, where it is desirable (but not mandatory) to respond quickly, or a batch system, which may have no time constraints at all.
  • 8. Operating System 7 SYSTEM STRUCTURES Operating-System Services OS provides an environment for execution of programs. It provides certain services to programs and to the users of those programs. OS services are provided for the convenience of the programmer, to make the programming task easier. One set of SOS services provides functions that are helpful to the user –  User interface: All OS have a user interface(UI).Interfaces are of three types- Command Line Interface: uses text commands and a method for entering them Batch interface: commands and directives to control those commands are entered into files and those files are executed. Graphical user interface: This is a window system with a pointing device to direct I/O, choose from menus and make selections and a keyboard to enter text.  Program execution: System must be able to load a program into memory and run that program. The program must be able to end its execution either normally or abnormally.
  • 9. Operating System 8  I/O operations: A running program may require I/O which may involve a file or an I/O device. For efficiency and protection, users cannot control I/O devices directly.  File system manipulation: Programs need to read and write files and directories. They also need to create and delete them by name, search for a given file, and list file information.  Communications: One process might need to exchange information with another process.Such communication may occur between processes that are executing on the same computer or between processes that are executing on different computer systems tied together by a computer network. Communications may be implemented via shared memory or through message passing.  Error detection: OS needs to be constantly aware of possible errors. Errors may occur in the CPU and memory hardware, in I/O devices and in the user program. For each type of error, OS takes appropriate action to ensure correct and consistent computing. Another set of OS functions exist for ensuring efficient operation of the system. They are- a. Resource allocation: When there are multiple users or multiple jobs running at the same time, resources must be allocated to each of them. Different types of resources such as CPU cycles, main memory and file storage are managed by the operating system.
  • 10. Operating System 9 b. Accounting: Keeping track of which users use how much and what kinds of computer resources. c. Protection and security: Controlling the use of information stored in a multiuser or networked computer system. Protection involves ensuring that all access to system resources is controlled. Security starts with requiring each user to authenticate himself or herself to the system by means of password and to gain access to system resources. System Calls System calls provide an interface to the services made available by an operating system.
  • 11. Operating System 10 An example to illustrate how system calls are used: Writing a simple program to read data from one file and copy them to another file- a) First input required is names of two files – input file and output file. Names can be specified in many ways- One approach is for the program to ask the user for the names of two files. In an interactive system, this approach will require a sequence of system calls, to write a prompting message on screen and then read from the keyboard the characters that define the two files. On mouse based and icon based systems, a menu of file names is displayed in a window where the user can use the mouse to select the source names and a window can be opened for the destination name to be specified. b) Once the two file names are obtained, program must open the input file and create the output file. Each of these operations requires another system call. Possible error conditions –When the program tries to open input file, no file of that name may exist or file is protected against access. Program prints a message on console and terminates abnormally. If input file exists, we must create a new output file. If the output file with the same name exists, the situation caused the program to abort or delete the existing file and create a new one. Another option is to ask the user(via a sequence of system calls) whether to replace the existing file or to abort the program.
  • 12. Operating System 11 When both files are set up, a loop reads from the input file and writes to the output file (system calls respectively). Each read and write must return status information regarding various possible error conditions. After entire file is copied, program closes both files, write a message to the console or window and finally terminate normally. Application developers design programs according to application programming interface (API). API specifies set of functions that are available to an application programmer. The functions that make up the API typically invoke the actual system calls on behalf of the application programmer. Benefits of programming rather than invoking actual system calls:  Program portability – An application programmer designing a program using an API can expect program to compile and run on any system that supports the same API.  Actual system calls can be more detailed and difficult to work with than the API available to an application programmer. The run time support system ( a set of functions built into libraries included with a compiler) for most programming languages provides a system call
The run-time support system (a set of functions built into libraries included with a compiler) for most programming languages provides a system call interface that serves as a link to the system calls made available by the OS. The system call interface intercepts function calls in the API and invokes the necessary system call within the operating system. A number is associated with each system call, and the system call interface maintains a table indexed according to these numbers. The system call interface then invokes the intended system call in the OS kernel and returns the status of the system call and any return values.
Invoking a system call often requires more information than simply the identity of the desired call; the exact type and amount of information vary according to the particular OS and call. Three general methods are used to pass parameters to the OS:
I. Passing the parameters in registers.
II. Storing the parameters in a block or table in memory, and passing the address of the block as a parameter in a register.
III. Pushing the parameters onto the stack by the program and popping them off the stack by the OS.
Types of system calls
Five major categories:
1) Process control
end, abort
load, execute
create process, terminate process
get process attributes, set process attributes
wait for time
wait event, signal event
allocate and free memory
2) File management
create file, delete file
open, close
read, write, reposition
get file attributes, set file attributes
3) Device management
request device, release device
read, write, reposition
get device attributes, set device attributes
logically attach or detach devices
4) Information maintenance
get time or date, set time or date
get system data, set system data
get process, file or device attributes
set process, file or device attributes
5) Communications
create, delete communication connection
send, receive messages
transfer status information
attach or detach remote devices
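As an illustration of the process-control category above, the sketch below uses the POSIX calls fork, execlp and waitpid to create a child process, load a new program into it, and wait for its termination; the choice of "ls" as the program to run is only an illustrative assumption:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>     /* fork, execlp */
#include <sys/wait.h>   /* waitpid */

int main(void)
{
    pid_t pid = fork();          /* create process */

    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        /* child: load and execute a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");        /* reached only if exec fails */
        exit(1);
    } else {
        /* parent: wait for the child to terminate */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}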
System Programs
System programs provide a convenient environment for program development and execution. They can be divided into these categories:
File management: These programs create, delete, copy, rename, print, dump, list and manipulate files and directories.
Status information: Some programs ask the system for the date, time, amount of available memory or disk space, and number of users.
File modification: Text editors may be available to create and modify the content of files stored on disk or other storage devices.
Programming language support: Compilers, assemblers, debuggers and interpreters for common programming languages are often provided to the user with the OS.
Program loading and execution: Once a program is assembled or compiled, it must be loaded into memory to be executed. The system provides absolute loaders, relocatable loaders, linkage editors and overlay loaders.
Communications: These programs provide the mechanism for creating virtual connections among processes, users and computer systems.
In addition to system programs, operating systems are supplied with programs that are useful in solving common problems or performing common operations. Such programs include web browsers, word processors and text formatters, spreadsheets, database systems, etc. These programs are known as system utilities or application programs.
PROCESS MANAGEMENT
Current-day computer systems allow multiple programs to be loaded into memory and executed concurrently. A process is nothing but a program in execution. A process is the unit of work in a modern time-sharing system. A system is a collection of processes: operating system processes executing operating system code and user processes executing user code. By switching the CPU between processes, the operating system can make the computer more productive.
Overview
A batch system executes jobs, whereas a time-shared system has user programs, or tasks. On a single-user system, a user may be able to run several programs at one time: a word processor, a web browser and an e-mail package. All of these are called processes.
The Process
A process is a program in execution. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers. A process generally also includes the process stack, which contains temporary data, and a data section, which contains global variables. A process may also include a heap, which is memory that is dynamically allocated during process run time.
A program by itself is not a process; a program is a passive entity, such as a file containing a list of instructions stored on disk (called an executable file), whereas a process is an active entity with a program counter specifying the next instruction to execute and a set of associated resources. A program becomes a process when an executable file is loaded into memory. Although two processes may be associated with the same program, they are considered two separate execution sequences.
Process State
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur.
Ready: The process is waiting to be assigned to the processor.
Terminated: The process has finished execution.
Process Control Block
Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains many pieces of information associated with a specific process, including:
Process state: The state may be new, ready, running, waiting, halted, etc.
Process number: A unique number assigned to the process when it is created.
Program counter: The counter indicates the address of the next instruction to be executed for this process.
CPU registers: The registers vary in number and type depending on the computer architecture.
CPU scheduling information: This information includes the process priority, pointers to scheduling queues, and other scheduling parameters.
Memory management information: This information may include the values of the base and limit registers, etc.
Accounting information: This information includes the amount of CPU and real time used, time limits, etc.
I/O status information: This information includes the list of I/O devices allocated to the process, etc.
(A sketch of a PCB as a data structure is given after the note on threads below.)
Threads
A traditional process is a program that performs a single thread of execution. A single thread of control allows the process to perform only one task at a time.
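A minimal sketch of how the process states and the PCB fields listed above might be collected into a C structure; the field names, types and sizes are illustrative assumptions, not the layout of any particular kernel:

/* possible process states */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* a simplified process control block (PCB) */
struct pcb {
    enum proc_state state;        /* process state */
    int             pid;          /* process number */
    unsigned long   prog_counter; /* address of the next instruction */
    unsigned long   regs[16];     /* saved CPU registers */
    int             priority;     /* CPU scheduling information */
    unsigned long   base, limit;  /* memory management information */
    long            cpu_time;     /* accounting information */
    int             open_devs[8]; /* I/O status information */
    struct pcb     *next;         /* link to the next PCB in a queue */
};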
Process Scheduling
The objective of multiprogramming is to have some process running at all times, so as to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process for program execution on the CPU. On a single-processor system, there will never be more than one running process.
Scheduling Queues
Job queue: As processes enter the system, they are put into the job queue, which consists of all processes in the system.
Ready queue: The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list.
A ready queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue.
Device queue: When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event such as the completion of an I/O request. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.
Queuing diagram
A common representation for process scheduling is the queuing diagram. Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. Circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
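A minimal sketch of the ready queue as a linked list of PCBs, as described above; the stripped-down PCB and the enqueue/dequeue names are assumptions introduced only for illustration:

#include <stdio.h>

/* a stripped-down PCB: just enough to demonstrate the queue links */
struct pcb {
    int         pid;
    struct pcb *next;    /* pointer to the next PCB in the ready queue */
};

/* ready queue header: pointers to the first and final PCBs in the list */
struct ready_queue {
    struct pcb *head;
    struct pcb *tail;
};

/* add a PCB to the tail of the ready queue */
static void enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* remove and return the PCB at the head of the ready queue */
static struct pcb *dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

int main(void)
{
    struct ready_queue rq = { NULL, NULL };
    struct pcb a = { 1, NULL }, b = { 2, NULL };

    enqueue(&rq, &a);
    enqueue(&rq, &b);
    printf("dispatching pid %d\n", dequeue(&rq)->pid);  /* prints pid 1 */
    return 0;
}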
A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched. Once the process is allocated the CPU and is executing, one of the following events might occur:
a) The process could issue an I/O request and then be placed in an I/O queue.
b) The process could create a new subprocess and wait for the subprocess's termination.
c) The process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue.
Schedulers
A process migrates among the various scheduling queues throughout its lifetime. The OS must select, for scheduling purposes, processes from these queues, and this selection is carried out by a scheduler. In a batch system, more processes are submitted than can be executed immediately. These processes are spooled to a mass storage device (disk) where they are kept for later execution. The long-term scheduler, or job scheduler, selects processes from this pool and loads them into memory for execution. The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them. The distinction between these two lies in the frequency of execution. The long-term scheduler controls the degree of multiprogramming (the number of processes in memory).
Most processes can be described as either I/O bound or CPU bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process generates I/O requests infrequently, using more of its time doing computations. The long-term scheduler must select a good mix of I/O-bound and CPU-bound processes; a system with the best performance will have a combination of both.
Some operating systems, such as time-sharing systems, may introduce an additional, intermediate level of scheduling. The idea behind the medium-term scheduler is that sometimes it can be advantageous to remove processes from memory and thus reduce the degree of multiprogramming. Later, the process can be reintroduced into memory and its execution can be continued where it left off. This scheme is called swapping. The process is swapped out, and is later swapped in, by the medium-term scheduler.
Context switch
Interrupts cause the OS to change a CPU from its current task and to run a kernel routine.
When an interrupt occurs, the system needs to save the current context of the process running on the CPU so that it can restore that context when its processing is done, essentially suspending the process and then resuming it. The context is represented in the PCB of the process; it includes the values of the CPU registers, the process state and memory management information. We perform a state save of the current state of the CPU and then a state restore to resume operations. Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. The kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context switch time is pure overhead. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied and the existence of special instructions. Context switch times are highly dependent on hardware support.
Multithreading
A thread is a basic unit of CPU utilization. Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads. User threads are supported above the kernel and are managed without kernel support, whereas kernel threads are supported and managed directly by the operating system. The benefits of multithreaded programming are:
Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
Resource sharing: Threads share the memory and the resources of the process to which they belong. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.
Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context switch threads.
Utilization of multiprocessor architectures: The benefits of multithreading can be greatly increased in a multiprocessor architecture, where threads may be running in parallel on different processors. Multithreading on a multi-CPU machine increases concurrency.
CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the OS can make the computer more productive. On operating systems that support threads, it is kernel-level threads, not processes, that are scheduled by the operating system, but the terms process scheduling and thread scheduling are often used interchangeably.
In a single-processor system, only one process can run at a time; the others must wait until the CPU is free and can be rescheduled. The objective of multiprogramming is to have some process running at all times, so as to maximize CPU utilization. Under multiprogramming, several processes are kept in memory at one time. When one process has to wait, the OS takes the CPU away from that process and gives the CPU to another process. As the CPU is one of the primary computer resources, its scheduling is central to operating system design.
Three common ways of establishing a relationship between user-level and kernel-level threads are:
Many-to-one model: Maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space, hence it is efficient, but the entire process will block if a thread makes a blocking system call. Because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.
One-to-one model: Maps each user thread to a kernel thread. It allows more concurrency by allowing another thread to run when a thread makes a blocking system call; it also allows multiple threads to run in parallel on multiprocessors. The disadvantage is that creating a user thread requires creating the corresponding kernel thread.
Many-to-many model: Multiplexes many user-level threads to a smaller or equal number of kernel-level threads. The number of kernel threads may be specific to either a particular application or a particular machine. Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.
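A minimal sketch of creating and joining threads with the POSIX Pthreads library, which on most systems maps user threads to kernel threads one-to-one; the worker function and the number of threads are illustrative assumptions:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

/* each thread runs this function; the argument identifies the thread */
static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    /* create the threads */
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    /* wait for all threads to terminate */
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    return 0;
}

On Linux this compiles with gcc -pthread; all four threads share the process's address space and open files, as described above.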
On operating systems that support threads, it is kernel-level threads that are scheduled by the operating system. User-level threads are managed by a thread library. To run on a CPU, user-level threads must be mapped to an associated kernel-level thread, although this mapping may be indirect and may use a lightweight process.
PROCESS SCHEDULING:
A traditional process is an executing program with a single thread of control. Most operating systems provide features enabling a process to contain multiple threads of control. A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set and a stack. It shares with other threads belonging to the same process its code section, data section and other operating system resources such as open files and signals. A traditional, or heavyweight, process has a single thread of control.
CPU–I/O burst cycle: Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst, followed by an I/O burst, and so on. The final CPU burst ends with a system request to terminate execution.
An I/O-bound program has many short CPU bursts. A CPU-bound program might have a few long CPU bursts.
CPU scheduler: Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term scheduler, or CPU scheduler. The scheduler selects a process from the processes in memory that are ready to execute and allocates the CPU to that process. The ready queue is not necessarily a first-in, first-out queue.
A ready queue can be implemented as a FIFO queue, a priority queue, a tree or an unordered linked list. All the processes in the ready queue are lined up waiting for a chance to run on the CPU. The records in the queue are the process control blocks of the processes.
Pre-emptive scheduling: CPU scheduling decisions may take place under the following four conditions:
a) When a process switches from the running state to the waiting state
b) When a process switches from the running state to the ready state
c) When a process switches from the waiting state to the ready state
d) When a process terminates
When scheduling takes place only under conditions a) and d), the scheduling scheme is non-pre-emptive, or co-operative; otherwise it is called pre-emptive. Under non-pre-emptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. Pre-emptive scheduling incurs a cost associated with access to shared data. Pre-emption also affects the design of the operating system kernel.
Dispatcher: The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:
a) Switching context
b) Switching to user mode
c) Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is called the dispatch latency.
Scheduling criteria: Different CPU scheduling algorithms have different properties. Criteria for comparing CPU scheduling algorithms:
a) CPU utilization: Keep the CPU as busy as possible.
b) Throughput: One measure of the work done by the CPU is the number of processes that are completed per time unit, called throughput.
c) Turnaround time: The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU and doing I/O.
d) Waiting time: The sum of the periods spent waiting in the ready queue.
e) Response time: The time it takes for the process to start responding, not the time it takes to output the response.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time and response time.
Scheduling algorithms:
First Come First Serve:
This is the simplest CPU scheduling algorithm. The process that requests the CPU first is allocated the CPU first. The implementation of FCFS is managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. The average waiting time under FCFS is often quite long. The FCFS scheduling algorithm is non-pre-emptive: once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU either by terminating or by requesting I/O. FCFS is simple and fair, but its performance is poor: the average queuing time may be long.
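A minimal sketch that computes FCFS waiting and turnaround times for a few hypothetical CPU bursts; the burst values 24, 3 and 3 are assumptions chosen only for illustration, and all processes are assumed to arrive at time 0:

#include <stdio.h>

int main(void)
{
    /* hypothetical CPU burst times, in the order the processes arrive */
    int burst[] = {24, 3, 3};
    int n = sizeof burst / sizeof burst[0];

    int time = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int wait = time;             /* FCFS: waiting time = time already elapsed */
        int turnaround = wait + burst[i];
        printf("P%d: waiting %2d, turnaround %2d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_turnaround += turnaround;
        time += burst[i];            /* CPU runs this process to completion */
    }

    printf("average waiting time    = %.2f\n", (double)total_wait / n);
    printf("average turnaround time = %.2f\n", (double)total_turnaround / n);
    return 0;
}

For these bursts the waiting times are 0, 24 and 27, so the average waiting time is (0 + 24 + 27)/3 = 17 time units: one long job at the head of the queue delays every job behind it.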
Shortest Job First:
This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
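As a small worked example (the burst lengths are hypothetical): suppose four processes have next CPU bursts of 6, 8, 7 and 3 time units and all are ready at time 0. SJF runs them in the order 3, 6, 7, 8, giving waiting times of 0, 3, 9 and 16, so the average waiting time is (0 + 3 + 9 + 16)/4 = 7 time units; under FCFS in the original order the average would be (0 + 6 + 14 + 21)/4 = 10.25 time units.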
Priority:
SJF is a special case of the general priority scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm where the priority is the inverse of the predicted next CPU burst: the larger the CPU burst, the lower the priority. Priorities are generally indicated by some fixed range of numbers, usually 0 to 7. Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity to compute the priority of a process. External priorities are set by criteria outside the operating system.
Priority scheduling can be either pre-emptive or non-pre-emptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A pre-emptive priority scheduling algorithm will pre-empt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A non-pre-emptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
The major problem with priority scheduling is indefinite blocking, or starvation. A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely.
A solution to the problem of indefinite blockage of low-priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
Round Robin:
The round-robin scheduling algorithm is designed for time-sharing systems. It is similar to FCFS scheduling, but pre-emption is added to switch between processes. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
To implement RR scheduling, the ready queue is kept as a FIFO queue. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets the timer to interrupt after 1 time quantum and dispatches the process. The process may have a CPU burst of less than 1 time quantum, in which case the process itself releases the CPU voluntarily and the scheduler proceeds to the next process in the ready queue. Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and cause an interrupt to the operating system. A context switch will be executed and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue.
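As a small worked example (the burst lengths and quantum are hypothetical): with a time quantum of 4 and three processes with CPU bursts of 24, 3 and 3 arriving in that order at time 0, RR runs P1 for 4 units, P2 for 3, P3 for 3, and then P1 for its remaining 20. The waiting times are 6, 4 and 7 respectively, so the average waiting time is (6 + 4 + 7)/3 ≈ 5.67 time units, compared with 17 under FCFS for the same bursts.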
In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a row. If a process's CPU burst exceeds 1 time quantum, that process is pre-empted and put back in the ready queue. Thus this algorithm is pre-emptive. The performance of the RR algorithm depends heavily on the size of the time quantum. If the time quantum is extremely large, the RR policy is the same as the FCFS policy. If the time quantum is extremely small, the RR approach is called processor sharing.
Algorithm evaluation
Selecting a scheduling algorithm is difficult. The first problem is defining the criteria to be used in selecting an algorithm. Criteria are often defined in terms of CPU utilization, response time or throughput. The criteria may include several measures, such as:
a) Maximizing CPU utilization under the constraint that the maximum response time is 1 second.
b) Maximizing throughput such that turnaround time is linearly proportional to total execution time.