UNIT 2
Process and
Threads
Management
Operating System (OS)
GTU # 3140702
Process concept
● A process is a program under execution.
● It is an instance of an executing program, including the current values of the program counter, registers and variables.
● A process is an abstraction of a running program.
● Each process has its own address space:
1) Text region: stores the code of the process.
2) Data region: stores variables and dynamically allocated memory.
3) Stack region: stores instructions and local variables for active procedure calls.
What is a Process?
● A process can run to completion only after it gets all the requested hardware and software resources.
● A process may be independent of the other processes in the system.
● It has its own private memory area and a virtual CPU on which it runs.
● A process is an abstraction of a running program.
Difference between process and program
What is a Process Control Block (PCB)?
● A Process Control Block (PCB) is a data structure maintained by the operating system for every process.
● The PCB stores a collection of information about the process.
● The PCB is identified by an integer process ID (PID).
● A PCB keeps all the information needed to keep track of a process.
● The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates.
● The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems.
● The PCB lies in kernel memory space.
Fields of Process Control Block (PCB)
● Process ID - Unique identification for each process in the operating system.
● Process State - The current state of the process, i.e., whether it is ready, running or waiting.
● Pointer - A pointer to the parent process.
● Priority - Priority of the process.
● Program Counter - A pointer to the address of the next instruction to be executed for this process.
● CPU Registers - The contents of the CPU registers, which must be saved when the process leaves the running state so that it can resume later.
● I/O Status Information - A list of I/O devices allocated to the process.
● Accounting Information - The amount of CPU time used for process execution, time limits, etc.
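The fields above can be sketched as a data structure. The sketch below is purely illustrative — the field names are assumptions mirroring the slide, and real kernels use far richer structures (e.g. Linux's `task_struct`):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Simplified Process Control Block (illustrative field names)."""
    pid: int                        # unique process ID
    state: str = "new"              # new / ready / running / waiting / terminated
    parent_pid: Optional[int] = None  # pointer to the parent process
    priority: int = 0
    program_counter: int = 0        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU register contents
    io_devices: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0      # accounting information

# The OS would allocate one PCB per process in kernel memory:
pcb = PCB(pid=42, parent_pid=1, priority=5)
pcb.state = "ready"
```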
● The state of a process is defined by the current activity of that process.
● During execution, a process changes its state.
Process states
1. New: The OS creates the process; resources are not yet allocated.
2. Ready: A newly created process directly enters the ready state, in which it waits for the CPU to be assigned. Processes in this state are ready for execution.
3. Running: The process that is currently being executed; all its resources are allocated.
4. Waiting: The process waits until some event occurs, such as completion of an input-output operation.
5. Completion or termination: When a process finishes its execution, it enters the terminated state. All the context of the process (its Process Control Block) is deleted and the process is terminated by the operating system.
Sr No | State Transition | Remarks
1 | Ready to Running | Process is dispatched
2 | Running to Ready | Process's time slice expires
3 | Running to Blocked | The process blocks (e.g. waits for I/O)
4 | Blocked to Ready | The event for which it has been waiting occurs
5 | Ready to Exit | The parent process may terminate it
6 | Running to Exit | The currently running process completes
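The legal transitions in the table can be captured as a small lookup — a sketch only, with illustrative event names (real kernels encode this in scheduler code, not a table):

```python
# (current state, event) -> next state, per the transition table above.
TRANSITIONS = {
    ("ready", "dispatch"): "running",      # 1. process is dispatched
    ("running", "timeout"): "ready",       # 2. time slice expires
    ("running", "event_wait"): "blocked",  # 3. process blocks on an event
    ("blocked", "event_occurs"): "ready",  # 4. awaited event occurs
    ("ready", "kill"): "exit",             # 5. parent terminates the process
    ("running", "release"): "exit",        # 6. running process completes
}

def next_state(state, event):
    """Return the next process state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state!r} on {event!r}")
```

For example, a ready process that is dispatched becomes running, and a running process whose time slice expires returns to ready.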
Process states transition
● When and how do these transitions occur (process moves from one state to another)?
1. The process blocks for input or waits for an event (e.g. the printer is not available).
2. The scheduler picks another process.
▪ End of time slice, or pre-emption.
3. The scheduler picks this process.
4. Input becomes available, or the event arrives (e.g. the printer becomes available).
● Processes are always either executing (running), waiting to execute (ready) or waiting for an event to occur (blocked).
[State diagram: Running, Ready and Blocked states connected by transitions 1-4 listed above]
Queue Diagram
[Queue diagram: a newly admitted process enters the Ready Queue; on Dispatch the process is scheduled to run; on Time-out it returns to the Ready Queue; on Event Wait it moves to the Blocked Queue until the Event Occurs; when the process is completed it Exits.]
[Five-state diagram: New -(Admit)-> Ready -(Dispatch)-> Running -(Release)-> Exit; Running -(Time-out)-> Ready; Running -(Event Wait)-> Blocked; Blocked -(Event Occurs)-> Ready. New processes reside on the HDD; Ready, Running and Blocked processes reside in RAM.]
Process Creation
● The operating system creates a process in the following situations:
● Starting a new job
● A user request to create a new job
● To provide new services by the OS
● A system call from a currently running process
The OS creates the process with the specified attributes and identifiers, and the process may in turn create new sub-processes.
Process Creation
1. System initialization
⮩ At the time of system (OS) booting, various processes are created.
⮩ Both foreground and background processes are created.
⮩ Background process – does not interact with the user, e.g. a process to accept mail.
⮩ Foreground process – interacts with the user.
2. Execution of a process-creation system call (fork) by a running process
⮩ A running process can issue a system call (fork) to create one or more new processes to help it.
⮩ For example, a process that fetches a large amount of data and then processes it may create two processes: one to fetch the data and another to process it.
[Diagram: process P1 and its child processes P2 and P3]
Process Creation
3. A user request to create a new process
⮩ Start a process by clicking an icon (e.g. opening a Word file with a double click) or by typing a command.
4. Initialization of a batch process
⮩ Applicable only to batch systems found on large mainframes.
Process Termination
❏ When a process finishes its normal execution, it is deleted using exit().
❏ The process memory space becomes free.
❏ The OS passes the child's exit status to the parent process.
❏ The OS deallocates the resources held by the process.
❏ Reasons:
❏ Normal completion of operation
❏ Memory is not available
❏ Time slice expired
❏ Parent terminated
❏ Failure of I/O
❏ Request from the parent process
Process Termination
1. Normal exit (voluntary)
⮩ The process terminates because it has done its work.
2. Error exit (voluntary)
⮩ The process discovers a fatal error, e.g. the user types the command cc foo.c to compile the program foo.c and no such file exists; the compiler simply exits.
Process Termination
3. Fatal error (involuntary)
⮩ An error caused by the process, often due to a program bug, e.g. executing an illegal instruction, referencing nonexistent memory or dividing by zero.
4. Killed by another process (involuntary)
⮩ A process executes a system call telling the OS to kill some other process, using the kill system call.
Process Hierarchies
● A parent process can create child processes, and a child process can create its own child processes.
● UNIX has a hierarchy concept, known as a process group.
● Windows has no concept of hierarchy.
⮩ All processes are treated as equal (Windows uses the handle concept).
[Diagram: a tree of parent and child processes P1–P6]
Handle
● When a process is created, the parent process is given a special token called a handle.
● This handle is used to control the child process.
● A process is free to pass this token to some other process.
Multiprogramming
● The real CPU switches back and forth from process to process.
● This rapid switching back and forth is called multiprogramming.
● The number of processes loaded simultaneously in memory is called the degree of multiprogramming.
Multiprogramming execution
● There are three processes, one processor (CPU), three logical program counters (one for each process) in memory, and one physical program counter in the processor.
● Here the CPU is free (no process is running).
● There is no data in the physical program counter.
[Diagram: memory holds P1, P2 and P3, each with its own logical program counter; the processor's physical program counter is empty]
Multiprogramming execution
● The CPU is allocated to process P1 (process P1 is running).
● The data of process P1 is copied from its logical program counter to the physical program counter.
[Diagram: the physical program counter now holds P1's logical program counter value; P1 is running]
Multiprogramming execution
● The CPU switches from process P1 to process P2.
● The CPU is allocated to process P2 (process P2 is running).
● The data of process P1 is copied back to its logical program counter.
● The data of process P2 is copied from its logical program counter to the physical program counter.
[Diagram: P1's context is saved back to its logical program counter; the physical program counter now holds P2's value; P2 is running]
Multiprogramming execution
● The CPU switches from process P2 to process P3.
● The CPU is allocated to process P3 (process P3 is running).
● The data of process P2 is copied back to its logical program counter.
● The data of process P3 is copied from its logical program counter to the physical program counter.
[Diagram: P2's context is saved back to its logical program counter; the physical program counter now holds P3's value; P3 is running]
Context switching
● A context switch means stopping one process and restarting another process.
● When an event occurs, the OS saves the state of the active process (into its PCB) and restores the state of the new process (from its PCB).
● Context switching is pure overhead, because the system does not perform any useful work during the switch.
● Sequence of actions:
1. The OS takes control (through an interrupt).
2. It saves the context of the running process in that process's PCB.
3. It reloads the context of the new process from the new process's PCB.
4. It returns control to the new process.
Context switching
● A context switch can occur only in kernel mode.
● It is highly dependent on hardware support.
● It is one of the most costly operations in an operating system.
● Situations in which a context switch is needed: multitasking, interrupt handling, and user/kernel mode switching.
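The save/restore sequence can be mimicked in miniature. This is a pure illustration under the assumption that a PCB and the CPU registers are plain dictionaries — real context switches happen in kernel-mode assembly, not application code:

```python
def context_switch(old_pcb, new_pcb, cpu):
    """Save the CPU context into the old PCB and load the new one (sketch)."""
    old_pcb["registers"] = dict(cpu)   # 2. save context of running process in its PCB
    old_pcb["state"] = "ready"
    cpu.clear()
    cpu.update(new_pcb["registers"])   # 3. reload context of new process from its PCB
    new_pcb["state"] = "running"       # 4. control returns to the new process

# Hypothetical CPU state and two PCBs:
cpu = {"pc": 100, "ax": 7}
p1 = {"registers": {}, "state": "running"}
p2 = {"registers": {"pc": 500, "ax": 0}, "state": "ready"}
context_switch(p1, p2, cpu)
# No useful work happened for either process: the switch is pure overhead.
```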
THREADS
● A thread is a lightweight process created by a process.
● Processes are used to execute large, 'heavyweight' jobs such as working in Word, while threads are used to carry out smaller, 'lightweight' jobs such as auto-saving a Word document.
What is a Thread?
● A thread is a lightweight process created by a process.
● A thread is a single sequential stream of execution within a process.
● A thread has its own:
● program counter, which keeps track of which instruction to execute next;
● system registers, which hold its current working variables;
● stack, which contains the execution history.
● Every program has at least one thread.
● Each thread can execute a set of instructions independently of other threads and processes.
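The point that each thread has its own stack and program counter while sharing the process's data can be demonstrated with Python's standard `threading` module (a minimal sketch; the variable and function names are illustrative):

```python
import threading

counter = 0                  # lives in the process's shared address space
lock = threading.Lock()

def worker(n):
    # Each thread runs this function on its own stack with its own
    # program counter, but `counter` is shared by all of them.
    global counter
    for _ in range(n):
        with lock:           # synchronize access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                 # wait for all threads to finish
# counter == 4000: all four threads updated the same shared variable
```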
Difference between thread and process
THREAD | PROCESS
Lightweight process | Heavyweight process
The operating system is not required for thread switching. | The operating system is required for process switching.
One thread can read, write or even completely clean another thread's stack. | Each process operates independently of the others.
If one thread is blocked and waiting, a second thread in the same task can run. | If one process is blocked, other processes cannot execute until the first process is unblocked.
Threads use fewer resources. | Processes use more resources.
Similarities between Process & Thread
● Like processes, threads share the CPU, and only one thread is running at a time.
● Like processes, threads within a process execute sequentially.
● Like processes, threads can create children.
● Like a traditional process, a thread can be in any one of several states: running, blocked, ready or terminated.
● Like processes, threads have a program counter, stack, registers and state.
Benefits/Advantages of Threads
● Threads minimize the context-switching time.
● Use of threads provides concurrency within a process.
● Efficient communication.
● It is easier to create and context-switch threads than processes.
● Threads can execute in parallel on multiprocessors.
● With threads, an application can avoid per-process overheads.
⮩ Thread creation, deletion and switching are easier than for processes.
● Threads have full access to the address space (easy sharing).
Thread Lifecycle
1. New: The thread is created.
2. Ready: The OS puts the thread into the ready queue while it waits to execute.
3. Running: The highest-priority ready thread enters the running state.
4. Blocked: The thread is waiting for a lock to access an object.
5. Waiting: The thread is waiting for another thread to perform an action.
6. Sleeping: The thread sleeps for a specified time; after the time expires it enters the ready state.
7. Dead: The thread completes its task or operation.
Single Threaded Process VS Multiple Threaded Process
▪ A single-threaded process is a process with a single thread.
▪ A multi-threaded process is a process with multiple threads.
▪ Each of the multiple threads has its own registers, stack and counter, but they share the code and data segments.
Types of Threads
1. Kernel Level Thread
2. User Level Thread
User level thread
● User-level threads are small and much faster than kernel-level threads.
● They are represented by a program counter (PC), stack and registers.
● There is no kernel involvement in synchronization for user-level threads.
● They are created by runtime libraries and are transparent to the OS.
● They do not invoke the kernel for scheduling decisions.
● This is also called many-to-one mapping.
Advantages
● User-level threads are easier and faster to create.
● They can also be more easily managed.
● User-level threads can run on any operating system.
● No kernel-mode privileges are required for thread switching with user-level threads.
Disadvantages
● Multithreaded applications using user-level threads cannot take advantage of multiprocessing.
● The entire process is blocked if one user-level thread performs a blocking operation.
Kernel level thread
● Kernel-level threads are handled by the operating system directly; thread management is done by the kernel.
● They are slower than user-level threads.
● Threads are created and controlled through system calls.
Advantages
● Multiple threads of the same process can be scheduled on different processors.
● The kernel routines themselves can be multithreaded.
● If one thread is blocked, another thread of the same process can be scheduled by the kernel.
Disadvantages
● Slower than user-level threads.
● There is overhead, and kernel complexity increases.
Difference between user level and kernel level thread
USER LEVEL THREAD KERNEL LEVEL THREAD
USER LEVEL THREAD | KERNEL LEVEL THREAD
User threads are implemented by users. | Kernel threads are implemented by the OS.
The OS doesn't recognize user-level threads. | Kernel threads are recognized by the OS.
Implementation of user threads is easy. | Implementation of kernel threads is complex.
Context switch time is less. | Context switch time is more.
If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread within the same process can continue execution.
Multithreaded applications cannot take advantage of multiprocessing. | Multithreaded applications can take advantage of multiprocessing.
Hybrid Threads
● Combines the advantages of user-level and kernel-level threads.
● It uses kernel-level threads and then multiplexes user-level threads onto some or all of the kernel threads.
● Gives the programmer flexibility in how many kernel-level threads to use and how many user-level threads to multiplex onto each one.
● The kernel is aware only of the kernel-level threads and schedules those.
Multi Threading Models
1) ONE TO ONE
● Each user thread is mapped to one kernel thread.
● Creates more concurrency.
● Multiple threads can run in parallel on a multiprocessor system.
● As the number of threads increases, memory use also increases.
● System performance slows down.
● The problem with this model is that creating a user thread requires creating the corresponding kernel thread.
Multi Threading Models
2) MANY TO ONE
● Multiple user threads are mapped to one kernel thread.
● System performance can be increased by customizing the thread library's scheduling.
● The problem with this model is that one user thread can block the entire process, because there is only one kernel thread.
● It does not allow an individual process to be split across multiple CPUs.
● Multiple threads cannot run in parallel, as only one thread can access the kernel at a time.
Multi Threading Models
3) MANY TO MANY
● Multiple user threads are multiplexed onto more than one kernel thread.
● The advantage of this model is that a user thread cannot block the entire process, because there are multiple kernel threads.
● There can be as many user threads as required, and their corresponding kernel threads can run in parallel on a multiprocessor.
● The one limitation is that the OS design becomes complicated.
System calls
● A system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on.
● A system call is a way for programs to interact with the operating system.
● A computer program makes a system call when it makes a request to the operating system's kernel.
● System calls provide the services of the operating system to user programs via the Application Program Interface (API).
● They provide an interface between a process and the operating system, allowing user-level processes to request services of the operating system.
● System calls are the only entry points into the kernel.
● All programs needing resources must use system calls.
System calls
● ps (process status):- The ps command provides information about the currently running processes, including their process identification numbers (PIDs).
● fork:- The fork system call creates a new process, called the child process, which runs concurrently with the process that makes the fork() call (the parent process).
● wait:- The wait system call blocks the calling process until one of its child processes exits or a signal is received. After the child process terminates, the parent continues its execution at the instruction after the wait system call.
● exit:- The exit system call terminates the running process normally.
● exec family:- The exec family of functions replaces the currently running process image with a new one.
System calls
● ps (process status):- The ps command provides information about the currently running processes, including their process identification numbers (PIDs).
Syntax: ps [option]
If it is run without any option, it displays at least the two processes currently on the system: the shell and ps itself.
PID – the unique process ID
TTY – the terminal that the user is logged in to
TIME – the time, in minutes and seconds, that the process has been running
CMD – the command that launched the process
System calls
● fork:- The fork system call creates a new process, called the child process, which runs concurrently with the process that makes the fork() call (the parent process).
Syntax: pid = fork();
What the fork system call does:
1. Creates a child process that is a clone of the parent.
2. The child runs the same program as the parent.
3. The child inherits open file descriptors from the parent.
4. The child begins life with the same register values as the parent.
What the kernel does:
1. Allocates a slot in the process table for the new process.
2. Assigns a unique ID number to the child.
3. Makes a logical copy of the parent process.
4. Increments the file and table counters.
System calls
● Uses of fork:-
Fork is used when a process wants to duplicate itself so that the parent and child can execute different sections of code at the same time.
Example: in networking, the parent waits for service requests from clients. When a request arrives, the parent calls fork and lets the child handle the request, then goes back to waiting for the next request.
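The fork/wait pattern described above can be demonstrated on a POSIX system with Python's standard `os` module (a minimal sketch; `os.fork` is POSIX-only and will not run on Windows, and the exit status 7 is an arbitrary illustrative value):

```python
import os

pid = os.fork()                 # clone the calling process
if pid == 0:
    # Child: a copy of the parent's memory, same program, inherited
    # file descriptors. It terminates, passing an exit status to the OS.
    os._exit(7)
else:
    # Parent: fork() returned the child's PID. wait() blocks until the
    # child terminates, then returns its PID and encoded exit status.
    done_pid, status = os.wait()
    exit_code = os.WEXITSTATUS(status)   # decodes to 7, as set by the child
```

After `wait` returns, the parent continues execution with the child's exit status in hand, exactly as in the networking example above where the parent loops back to accept the next request.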
SCHEDULING
What is scheduling?
● Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
● It is an essential part of a multiprogramming operating system.
Objectives
● Share time fairly
● Prevent starvation of any process
● Use the processor efficiently
● Have low overhead
SCHEDULING
Characteristics of a good scheduling algorithm
CPU Utilization − The algorithm should keep the CPU as busy as possible.
Throughput − Throughput is the amount of work completed in a unit of time. The scheduling algorithm must try to maximize the number of jobs processed per time unit.
Response time − The time taken to start responding to a request. A scheduler must aim to minimize response time for interactive users.
Turnaround time − The time between the moment of submission of a job and the time of its completion.
Turnaround Time = Waiting Time + Burst Time (Execution Time)
Waiting time − The time a job waits for resources. The aim is to minimize the waiting time.
Waiting Time = Turnaround Time − Burst Time
Scheduling Queue
● All processes, upon entering into the system, are stored in the Job Queue.
● Processes in the Ready state are placed in the Ready Queue.
● Processes waiting for a device to become available are placed in Device Queues.
There are unique device queues available for each I/O device.
A new process is initially put in the Ready Queue, where it waits until it is selected for execution (dispatched). Once the process is assigned to the CPU and is executing, one of the following events can occur:
● The process could issue an I/O request and then be placed in an I/O queue.
● The process could create a new subprocess and wait for its termination.
● The process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue.
Type of Scheduler
[Diagram: long-term, medium-term and short-term schedulers]
Type of Scheduler
1) LONG TERM SCHEDULER
● It decides which programs are admitted to the job queue. From the job queue, the job scheduler selects processes and loads them into memory for execution.
● Its primary aim is to maintain a good degree of multiprogramming. An optimal degree of multiprogramming means the average rate of process creation is equal to the average departure rate of processes from memory.
● It is mainly used in batch systems.
2) SHORT TERM SCHEDULER
● This is also known as the CPU scheduler and dispatcher.
● It runs very frequently.
● It is responsible for ensuring there is no starvation owing to processes with long burst times.
Type of Scheduler
3) MEDIUM TERM SCHEDULER
● This scheduler removes processes from memory and thereby reduces the degree of multiprogramming.
● This can also be described as suspending and resuming processes.
● It is also useful for improving the mix of I/O-bound and CPU-bound processes in memory.
Difference between type of scheduler
Scheduling Algorithm
● Scheduling is the process of determining which process will own the CPU for execution while another process is on hold.
● The main task of CPU scheduling is to make sure that whenever the CPU would otherwise remain idle, the OS selects one of the processes available in the ready queue for execution.
Preemptive Scheduling
In preemptive scheduling, tasks are mostly assigned priorities. Sometimes it is necessary to run a higher-priority task before a lower-priority task, even if the lower-priority task is still running. The lower-priority task resumes when the higher-priority task finishes its execution.
Non-Preemptive Scheduling
In non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps it until it terminates or switches to the waiting state.
Difference between Preemptive and non Preemptive
Preemptive | Non-Preemptive
Resources are allocated to a process for a limited time. | Once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
A process can be interrupted in between. | A process cannot be interrupted until it terminates.
CPU utilization is high. | CPU utilization is lower.
Incurs the cost associated with accessing shared data. | Does not incur that cost.
It is more complex. | Simple, but can be very inefficient.
Type of Scheduling Algorithm
1) FCFS (FIRST COME FIRST SERVE)
EXAMPLE.1 (burst times: P1 = 21, P2 = 3, P3 = 6, P4 = 2)
● Gantt chart: P1 (0–21), P2 (21–24), P3 (24–30), P4 (30–32)
● Average waiting time: (0+21+24+30)/4 = 18.75 ms
● Turnaround Time = Waiting Time + Burst Time
p1 = 0+21 = 21
p2 = 21+3 = 24
p3 = 24+6 = 30
p4 = 30+2 = 32
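The FCFS computation in Example 1 can be written out directly. A small sketch, assuming burst times of 21, 3, 6 and 2 ms (as implied by the turnaround figures above) and all processes arriving at t=0:

```python
def fcfs(bursts):
    """Waiting and turnaround times under FCFS, all processes arriving at t=0."""
    waiting, turnaround, clock = [], [], 0
    for b in bursts:
        waiting.append(clock)      # time spent queued before the first run
        clock += b
        turnaround.append(clock)   # turnaround = waiting + burst
    return waiting, turnaround

# Burst times implied by Example 1: P1=21, P2=3, P3=6, P4=2
w, t = fcfs([21, 3, 6, 2])
avg_wait = sum(w) / len(w)         # (0+21+24+30)/4 = 18.75
```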
1)FCFS (FIRST COME FIRST SERVE)
● It is a non-preemptive scheduling algorithm that is easy to understand and use.
● The process that arrives first is executed first.
● Example of FCFS: buying tickets at a ticket counter.
● It is simple to use and implement.
● It is poor in performance due to high waiting times.
● Resource utilization is poor in FCFS.
EXAMPLE.2
PROCESS BURST TIME
P1 4
P2 7
P3 3
P4 3
P5 5
PROCESS BURST TIME
P1 10
P2 6
P3 12
P4 15
EXAMPLE.3
SOLUTION OF EXAMPLE.2
PROCESS BURST TIME/
priority
P1 4 3
P2 7 5
P3 3 2
P4 3 1
P5 5 4
PROCESS BURST TIME
P1 10
P2 6
P3 12
P4 15
SOLUTION OF
EXAMPLE 3
● Average waiting time: (0+4+11+14+17)/5 = 9.2 ms
● Average waiting time: (0+10+16+28)/4 = 13.5 ms
2) SJF (SHORTEST JOB FIRST)
● It schedules based on the length of the processes' CPU cycle time.
● It reduces the average waiting time compared to FCFS.
● It is a non-preemptive algorithm.
ALGORITHM:
1) Sort all the processes according to arrival time.
2) Select the process with minimum arrival time and minimum burst time.
3) After a process completes, form a pool of the processes that arrived during its execution and select the process from the pool with the minimum burst time.
EXAMPLE:
Average waiting time: (6+0+11+2)/4 =4.75ms
PROCESS BURST TIME
P1 5
P2 2
P3 6
P4 4
PROCESS WAITING TIME TURNAROUND TIME
P1 6 6+5=11
P2 0 0+2=2
P3 11 11+6=17
P4 2 2+4=6
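The SJF example above can be reproduced in a few lines. A sketch for the simple case where all processes arrive at t=0, using the burst times from the table (P1=5, P2=2, P3=6, P4=4):

```python
def sjf(bursts):
    """Non-preemptive SJF with all processes arriving at t=0.
    Returns {process_index: (waiting_time, turnaround_time)}."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    result, clock = {}, 0
    for i in order:                    # shortest remaining job runs next
        result[i] = (clock, clock + bursts[i])
        clock += bursts[i]
    return result

# Bursts from the example: P1=5, P2=2, P3=6, P4=4 (indices 0..3)
res = sjf([5, 2, 6, 4])
avg_wait = sum(w for w, _ in res.values()) / 4   # (6+0+11+2)/4 = 4.75
```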
PROCESS BURST
TIME/PRIORITY
P1 10 2
P2 6 1
P3 12 4
P4 15 3
EXAMPLE 1.
PROCESS BURST
TIME/priority
P1 4 2
P2 7 1
P3 3 3
P4 2 4
EXAMPLE 2.
SOLUTION OF EXAMPLE
PROCESS BURST TIME
P1 10
P2 6
P3 12
P4 15
Average waiting time: (0+6+16+28)/4
=12.5ms
PROCESS WAITING TIME TURNAROUND TIME
P1 6 6+10=16
P2 0 0+6=6
P3 16 16+12=28
P4 28 28+15=43
PROCESS BURST TIME
P1 4
P2 7
P3 3
P4 2
PROCESS WAITING TIME TURNAROUND TIME
P1 5 5+4=9
P2 9 9+7=16
P3 2 2+3=5
P4 0 0+2=2
Average waiting time: (0+2+5+9)/4 = 4 ms
3) PRIORITY SCHEDULING
● Each process is assigned a priority. The process with the highest priority is executed first, and so on.
● Processes with the same priority are executed on a first come, first served basis.
PROBLEM: Waiting time is high for lower-priority processes (starvation problem).
EXAMPLE:
Average waiting time: (10+15+0+6)/4 = 7.75 ms
PROCESS BURST TIME PRIORITY
P1 5 3
P2 2 4
P3 6 1
P4 4 2
PROCESS WAITING TIME TURNAROUND TIME
P1 10 10+5=15
P2 15 15+2=17
P3 0 0+6=6
P4 6 6+4=10
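The priority-scheduling example above can be checked the same way. A sketch for non-preemptive priority scheduling with all processes arriving at t=0, using the table's values and taking the lower priority number as the higher priority:

```python
def priority_schedule(jobs):
    """Non-preemptive priority scheduling, all arriving at t=0.
    `jobs` maps name -> (burst, priority); lower number = higher priority.
    Returns {name: (waiting_time, turnaround_time)}."""
    order = sorted(jobs, key=lambda name: jobs[name][1])
    out, clock = {}, 0
    for name in order:                 # highest-priority job runs next
        burst = jobs[name][0]
        out[name] = (clock, clock + burst)
        clock += burst
    return out

# Values from the example: (burst, priority) per process
res = priority_schedule({"P1": (5, 3), "P2": (2, 4), "P3": (6, 1), "P4": (4, 2)})
avg_wait = sum(w for w, _ in res.values()) / 4   # (10+15+0+6)/4 = 7.75
```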
4) ROUND ROBIN SCHEDULING
● Round Robin is a preemptive process scheduling algorithm.
● Each process is given a fixed time to execute, called a quantum.
● Once a process has executed for the given time period, it is preempted and another process executes for its time period.
● Context switching is used to save the states of preempted processes.
● QUANTUM SLICE: 2 UNITS
PROCESS WAITING TIME TURNAROUND TIME
P1 0+(9-2)+(13-11)=9 9+5=14
P2 2+(11-4)=9 9+3=12
P3 4 4+1=5
P4 5 5+2=7
P5 7+(12-9)=10 10+3=13
Average waiting time:
(9+9+4+5+10)/5=7.4ms
Gantt chart: P1 | P2 | P3 | P4 | P5 | P1 | P2 | P5 | P1
            0    2    4    5    7    9    11   12   13   14
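The round-robin walk-through above can be simulated with a ready queue. A sketch assuming all processes arrive at t=0 and burst times of P1=5, P2=3, P3=1, P4=2, P5=3 (the values implied by the turnaround figures in the table):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin simulation, all processes arriving at t=0.
    Returns a list of (waiting_time, turnaround_time) per process."""
    queue = deque(range(len(bursts)))      # FIFO ready queue of process indices
    remaining = list(bursts)
    clock, completion = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run one time slice (or less)
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # preempted: back of the queue
        else:
            completion[i] = clock
    # waiting = completion - burst when arrival time is 0
    return [(completion[i] - bursts[i], completion[i])
            for i in range(len(bursts))]

# Bursts implied by the table: P1=5, P2=3, P3=1, P4=2, P5=3; quantum 2
res = round_robin([5, 3, 1, 2, 3], quantum=2)
avg_wait = sum(w for w, _ in res) / 5      # (9+9+4+5+10)/5 = 7.4
```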
ADVANTAGE
● In terms of average response time, this algorithm gives the best performance.
● With the help of this algorithm, all the jobs get a fair allocation of CPU.
● In this algorithm, there are no issues of starvation or convoy effect.
● This algorithm is cyclic in nature.
DISADVANTAGE
● This algorithm spends more time on context switches.
● With a small quantum, scheduling overhead becomes significant.
● This algorithm can have larger waiting times and response times.
● It has low throughput.
● If the time quantum is small, the Gantt chart becomes very large.
EXAMPLE
QUANTUM SLICE 1 UNIT
PROCESS BURST TIME
A 4
B 1
C 8
D 1
PROCESS BURST TIME
P1 6
P2 5
P3 2
P4 3
P5 7
QUANTUM SLICE 2 UNIT
SOLUTION EXAMPLE
Average waiting time: (5+1+6+3)/4 =
3.75ms
PROCESS WAITING TIME TURNAROUND TIME
A 0+3+1+1=5 5+4=9
B 1 1+1=2
C 2+2+1+1=6 6+8=14
D 3 3+1=4
PROCESS WAITING TIME TURNAROUND TIME
P1 0+8+5=13 13+6=19
P2 2+8+5=15 15+5=20
P3 4 4+2=6
P4 6+6=12 12+3=15
P5 8+5+3=16 16+7=23
Average waiting time:
(13+15+4+12+16)/5 =12ms
PROCESS BURST TIME
A 4
B 1
C 8
D 1
PROCESS BURST TIME
P1 6
P2 5
P3 2
P4 3
P5 7
COMPARISON BETWEEN FCFS AND ROUND ROBIN
FCFS Round Robin
non preemptive in nature preemptive in nature
Response time is high provide good response time
FCFS is inconvenient to use in the time
sharing system.
It is mainly designed for the time sharing
system and hence convenient to use.
Average waiting time is generally not
minimal in First Come First Served
Scheduling Algorithm.
In Round Robin Scheduling Algorithm
average waiting time is minimal.
The process is simply processed in the
order of their arrival in FCFS.
It is similar to FCFS in processing but uses
a time quantum.
COMPARISON OF CPU SCHEDULING ALGORITHM
| FCFS | Round Robin | Priority | SJF
POLICY | non-preemptive | preemptive | non-preemptive | non-preemptive
ADVANTAGES | easy to implement, minimum overhead | provides good response time, fair CPU time | ensures fast completion of important jobs | minimizes average waiting time
DISADVANTAGES | average waiting time is high | requires selection of a good time slice | starvation problem | indefinite postponement of some jobs
THREAD SCHEDULING
LOAD SHARING
● Threads are not assigned to a particular processor; a thread is selected from a global queue serving all processors.
● Load is evenly distributed.
● FCFS: on arrival, a thread is placed in the global queue; the next thread is selected when a processor becomes idle.
● Smallest number of threads first: the queue is organized by priority and the highest-priority thread is selected.
● Preemptive smallest number of threads first.
Advantages: no centralized scheduler is required; load distribution is equal.
Disadvantages: a high degree of coordination is required.
THREAD SCHEDULING
GANG SCHEDULING
● A group of related threads is scheduled as a unit.
● All members of the gang run simultaneously on different time-shared CPUs.
DEDICATED PROCESSOR ASSIGNMENT
● Provides implicit scheduling for the duration of program execution.
● Processors are chosen from the available pool.
DYNAMIC SCHEDULING
● The number of threads in a process is altered dynamically by the application.
● The operating system is involved in making scheduling decisions.
● It adjusts the load to improve utilization.
REAL TIME SCHEDULING
● Hard real-time system: tasks must be completed within their required deadlines. Failure to meet a deadline leads to critical, catastrophic system failure such as physical damage or loss of life.
● Firm real-time system: a few missed deadlines will not lead to total failure, but missing more than a few may lead to complete and catastrophic failure.
● Soft real-time system: gives real-time tasks priority over non-real-time tasks. Performance degradation from missing several deadline constraints is tolerated, with decreased quality but no critical consequences.
REAL TIME SCHEDULING
CHARACTERISTICS
Determinism: operations are performed at fixed times or within fixed intervals.
Responsiveness: the time required to service an interrupt.
User control: the user can specify paging or process swapping, decide which processes must reside in main memory, select disk scheduling algorithms, establish the rights of processes, and control task priorities.
Reliability: control process failures; the mean time between failures should be very high.
CLASS OF ALGORITHM
Static table-driven approaches:
● These algorithms usually perform a static analysis associated with scheduling
and capture the schedules that are advantageous.
● This helps in providing a schedule that can point out a task with which the
execution must be started at run time.
● Input required: periodic arrival time, execution time, periodic ending ,deadline
and priority of task
Static priority-driven preemptive approaches:
● it provides a useful way of assigning priorities among various tasks in
preemptive scheduling.
● Priority is related to the time constraints on the task
CLASS OF ALGORITHM
Dynamic planning-based approaches:
Here, feasible schedules are identified dynamically (at run time). A task is accepted for a certain fixed time interval and executed if and only if it satisfies its time constraints.
Dynamic best-effort approaches:
These approaches consider deadlines instead of feasible schedules, so a task is aborted if its deadline is reached. This approach is widely used in most real-time systems.
RATE MONOTONIC PRIORITY ASSIGNMENT(RM)
● Static priority scheduling.
● The process with the lowest period gets the highest priority.
● Schedulability test: U = C1/T1 + C2/T2 + ... + Cn/Tn, where n is the number of processes in the process set, Ci is the computation time of a process, Ti is the time period for the process to run and U is the processor utilization.
ADVANTAGES: simple to understand; easy to implement; a stable algorithm.
DISADVANTAGES: lower CPU utilization; the processes involved should not share resources with other processes; a highest-priority process that needs to run will preempt all the lower-priority processes.
RATE MONOTONIC PRIORITY ASSIGNMENT(RM)
U = 0.5/3 + 1/4 + 2/6 = 0.167 + 0.25 + 0.333
= 0.75 (less than 1, so the task set can be executed)
NOTE:
T1 executes 0.5 unit in every 3 time slices
T2 executes 1 unit in every 4 time slices
T3 executes 2 units in every 6 time slices
RATE MONOTONIC PRIORITY ASSIGNMENT(RM)
● The task with the shorter period has the higher priority, so T1 has the highest priority,
T2 has intermediate priority and T3 has the lowest priority. At t=0 all the tasks are
released; T1 has the highest priority, so it executes first, till t=0.5.
● At t=0.5 task T2 has higher priority than T3, so it executes first for one time unit, till
t=1.5. After its completion only T3 remains, so it starts its execution and executes till t=3.
● At t=3 T1 is released; as it has higher priority than T3, it preempts T3 and
executes till t=3.5. After that the remaining part of T3 executes.
● At t=4 T2 is released and completes its execution, as no other task is running in the system
at this time.
● At t=6 both T1 and T3 are released at the same time, but T1 has higher priority due to its
shorter period, so it preempts T3 and executes till t=6.5; after that T3 starts running
and executes till t=8.
● At t=8 T2, which has higher priority than T3, is released, so it preempts T3 and starts its
execution.
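The preemptive RM timeline described above can be reproduced with a small step-driven simulator. This is only a sketch for illustration (the function name and the 0.5-unit time step are choices made here, not taken from the slides); exact `Fraction` arithmetic avoids floating-point drift with the 0.5-unit task:

```python
from fractions import Fraction

def rm_schedule(tasks, horizon, step=Fraction(1, 2)):
    """Fixed-priority rate-monotonic simulation in discrete steps.

    tasks maps name -> (execution_time, period); a shorter period means
    a higher priority. Returns a list of (time, running_task_or_None).
    """
    remaining = {name: Fraction(0) for name in tasks}
    timeline = []
    t = Fraction(0)
    while t < horizon:
        # A new job of each task is released at every multiple of its period.
        for name, (c, p) in tasks.items():
            if t % p == 0:
                remaining[name] = Fraction(c)
        # RM rule: among ready tasks, run the one with the shortest period.
        ready = [name for name in tasks if remaining[name] > 0]
        running = min(ready, key=lambda name: tasks[name][1]) if ready else None
        if running is not None:
            remaining[running] -= step
        timeline.append((t, running))
        t += step
    return timeline

# Task set from the example: T1 = (0.5, 3), T2 = (1, 4), T3 = (2, 6).
example = {"T1": (Fraction(1, 2), 3), "T2": (1, 4), "T3": (2, 6)}
```

Running it over one hyperperiod (12 time units) yields the same order of events as the walkthrough: T1 first, T1 preempting T3 at t=3, and T2 preempting T3 at t=8.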
RATE MONOTONIC PRIORITY ASSIGNMENT(RM)
TASK   CPU BURST TIME / EXECUTION TIME   PERIOD / TIME
T1     6                                 9
T2     5                                 15
T3     1                                 5

U = 6/9 + 5/15 + 1/5 = 1.2
It is more than 1, so the test fails.

TASK   CPU BURST TIME / EXECUTION TIME   PERIOD / TIME
T1     50                                100
T2     30                                200
T3     100                               500

U = 50/100 + 30/200 + 100/500 = 0.85
It is less than 1, so the task set is schedulable.
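The two utilization computations above can be checked mechanically. A small sketch (function names are illustrative): the slides compare U against 1, which is a necessary condition, while the classical Liu–Layland bound n(2^(1/n) − 1) is the sufficient RM test:

```python
def utilization(tasks):
    """Total processor utilization U = sum(Ci / Ti) over (Ci, Ti) pairs."""
    return sum(c / t for c, t in tasks)

def liu_layland_bound(n):
    """Sufficient RM bound: n tasks are RM-schedulable if U <= n(2^(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

set_a = [(6, 9), (5, 15), (1, 5)]           # U = 1.2  -> fails (more than 1)
set_b = [(50, 100), (30, 200), (100, 500)]  # U = 0.85 -> passes the U <= 1 check
```

Note that for the second set U = 0.85 exceeds the Liu–Layland bound for three tasks (about 0.78), so the sufficient test is inconclusive there; the slides' U < 1 check alone does not guarantee RM schedulability.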
EARLIEST DEADLINE FIRST SCHEDULING (EDF)
● An optimal dynamic priority scheduling algorithm used in real-time systems.
● It can be used for both static and dynamic real-time scheduling.
● EDF assigns priorities to tasks according to their absolute deadlines: the task whose
deadline is closest gets the highest priority.
● In EDF, an executing task is preempted whenever another periodic instance with an
earlier deadline becomes ready for execution.
ADVANTAGES: It is optimal
Gives the best CPU utilization
DISADVANTAGES: Needs dynamic priorities
Performance degrades under overload
Difficult to implement
● U = 1/4 + 2/6 + 3/8 = 0.25 + 0.333 + 0.375 = 0.958
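For EDF with deadlines equal to periods, the schedulability condition is simply U ≤ 1, so the computation above can be verified directly (a sketch; the variable names are illustrative):

```python
# (execution time, period) pairs from the example task set
edf_tasks = [(1, 4), (2, 6), (3, 8)]

u = sum(c / t for c, t in edf_tasks)  # 1/4 + 2/6 + 3/8 = 23/24 ~ 0.958
schedulable = u <= 1                  # EDF condition when deadline == period
```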
EARLIEST DEADLINE FIRST SCHEDULING (EDF)
● At t=0 all the tasks are released. T1 has the highest priority as its deadline (t=4) is
earlier than T2's (t=6) and T3's (t=8), so T1 executes first.
● At t=1 the absolute deadlines are compared again; T2 has the shorter deadline, so it
executes, and after that T3 starts execution. At t=4 T1 is released; at this instant T1 and
T3 have the same deadline (t=8), so the tie is broken randomly and we continue to execute T3.
● At t=6 T2 is released, but T1's deadline (t=8) is earlier than T2's (t=12), so T1 executes
first and after that T2 begins. At t=8 T1 is released and again T1 and T2 have the same
deadline (t=12), so the tie is broken randomly: T2 continues its execution and then T1
completes. At t=12 T1 and T3 have the same deadline (t=16), so again the tie is broken
randomly and we continue to execute T3.
● At t=13 T1 begins its execution and ends at t=14. Now T2 is the only task in the system, so
it completes its execution.
● At t=16 T1 and T3 are released together; priorities are decided according to absolute
deadlines, so T1 executes first as its deadline is t=20 while T3's deadline is t=24. After
T1's completion T3 starts; at t=18 T2 is released with the same deadline as T3 (t=24), so
the tie is broken randomly and we continue to execute T3.
● At t=20 both T1 and T2 are in the system and both have the same deadline (t=24), so again
the tie is broken randomly: T2 executes and after that T1 completes its execution. The
schedule continues in the same way.
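The EDF timeline above can also be sketched as a unit-step simulator. This is an illustration, not the slides' own code; deadline ties are broken here by letting the currently running task continue, which is one deterministic way to realize the "ties are broken randomly" rule in the walkthrough:

```python
def edf_schedule(tasks, horizon):
    """Dynamic-priority EDF simulation in unit time steps.

    tasks maps name -> (execution_time, period); each job's absolute
    deadline is its release time plus its period.
    Returns the task run at each step (None when the CPU is idle).
    """
    remaining = {name: 0 for name in tasks}
    deadline = {name: None for name in tasks}
    timeline, running = [], None
    for t in range(horizon):
        for name, (c, p) in tasks.items():
            if t % p == 0:            # a new job is released
                remaining[name] = c
                deadline[name] = t + p
        ready = [name for name in tasks if remaining[name] > 0]
        if ready:
            # EDF rule: earliest absolute deadline first;
            # on a tie, prefer the task that ran in the previous step.
            running = min(ready, key=lambda n: (deadline[n], n != running))
            remaining[running] -= 1
        else:
            running = None
        timeline.append(running)
    return timeline

# Task set from the example: T1 = (1, 4), T2 = (2, 6), T3 = (3, 8).
edf_example = {"T1": (1, 4), "T2": (2, 6), "T3": (3, 8)}
```

With this tie-break rule, the first 16 steps match the walkthrough: T3 keeps running at t=4 and t=12 despite the ties, and T1 runs at t=6 because its deadline is earliest.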
  • 2. Process concept? ● Process is a program under execution. ● It is an instance of an executing program, including the current values of the pro- ● gram counter, registers & variables. ● Process is an abstraction of a running program. ● each process has its own address space 1) text region: store code of process 2) data region :store dynamically allocated memory 3) stack region :store instruction and
  • 3. What is Process? ● Process can run to completion after getting all the resources and requested hardware and software ● process may be a independent of other processes in the system ● It has its own private memory area and virtual CPU in which its run ● Process is an abstraction of a running program.
  • 5. What is Process Control Block (PCB)? ? A Process Control Block (PCB) is a data structure maintained by the operating system for every process. ? PCB is used for storing the collection of information about the processes. ? The PCB is identified by an integer process ID (PID). ? A PCB keeps all the information needed to keep track of a process. ? The PCB is maintained for a process throughout its lifetime and is deleted once the process terminates. ? The architecture of a PCB is completely dependent on operating system and may contain different information in different operating systems. ? PCB lies in kernel memory space.
  • 6. Fields of Process Control Block (PCB) ? Process ID - Unique identification for each of the process in the operating system. ? Process State - The current state of the process i.e., whether it is ready, running, waiting. ? Pointer - A pointer to parent process. ? Priority - Priority of a process. ? Program Counter - Program Counter is a pointer to the address of the next instruction to be executed for this process. ? CPU registers - Various CPU registers where process need to be stored for execution for running state. ? IO status information - This includes a list of I/O devices allocated to the process. ? Accounting information - This includes the amount of CPU used for process execution, time limits etc.
  • 7. ● The state of a process is defined by the current activity of that process. ● During execution, process changes its state. Process states
  • 8. 1. New: OS creates a new process and resources are not allocated 2. Ready: Whenever a process is created, it directly enters in the ready state, in which, it waits for the CPU to be assigned. The processes which are ready for the execution 3. Running: Process that is currently being executed and all the resources are allocated 4. Waiting: waiting until some event occur such as completion of an input - output operation 5. Completion or termination: When a process finishes its execution, it comes in the termination state. All the context of the process (Process Control Block) will also be deleted the process will be terminated by the Operating system. .
  • 9. Sr No State Transition Remarks 1 Ready to running Process to dispatch 2 Running to Ready Process time slices expires 3 Running to Blocked When a process block 4 Blocked to Ready the event for which it has been waiting occur 5 Ready to exit parent process may terminate 6 Running to Exit currently running process is complete Process states transition
  • 10. ? When and how these transitions occur (process moves from one state to another)? 1. Process blocks for input or waits for an event (i.e. printer is not available) 2. Scheduler picks another process ▪ End of time-slice or pre-emption. 3. Scheduler picks this process 4. Input becomes available, event arrives (i.e. printer become available) ? Processes are always either executing (running) or waiting to execute (ready) or waiting for an event (blocked) to occur. Runni ng Blocke d Ready 3 2 1 4
  • 12. Process Creation ? Operating system creates process in following situation ● Starting a new job ● User request for creating a new job ● To provide new services by OS ● System call from currently running process OS creates process with specified attributes and identifiers and may create new sub processes
  • 13. Process Creation 1. System initialization ⮩ At the time of system (OS) booting various processes are created ⮩ Foreground and background processes are created ⮩ Background process – that do not interact with user e.g. process to accept mail ⮩ Foreground Process – that interact with user 2. Execution of a process creation system call (fork) by running process ⮩ Running process will issue system call (fork) to create one or more new process to help it. ⮩ A process fetching large amount of data and execute it will create two different processes one for fetching data and another to execute it. P 3 P 2 P1
  • 14. Process Creation 3. A user request to create a new process ⮩ Start process by clicking an icon (opening word file by double click) or by typing command. 4. Initialization of batch process ⮩ Applicable to only batch system found on large mainframe
  • 15. Process Termination ❏ When process finishes its normal execution that is delete using exit() ❏ process memory space become free ❏ OS passes the child exit status to the parent process ❏ deallocates the resources hold by the process ❏ Reasons ❏ Normal completion of operation ❏ memory is not available ❏ time slices expired ❏ parent termination ❏ FAilure of I/O ❏ Request from parent process
  • 16. Process Termination 1. Normal exit (voluntary) ⮩ Terminated because process has done its work. 2. Error exit (voluntary) ⮩ The process discovers a fatal error e.g. user types the command cc foo.c to compile the program foo.c and no such file exists, the compiler simply exit.
  • 17. Process Termination 3. Fatal error (involuntary) ⮩ An error caused by a process often due to a program bug e.g. executing an illegal instruction, referencing nonexistent memory or divided by zero. 4. Killed by another process (involuntary) ⮩ A process executes a system call telling the OS to kill some other process using kill system call.
  • 18. Process Hierarchies ? Parent process can create child process, child process can create its own child process. ? UNIX has hierarchy concept which is known as process group ? Windows has no concept of hierarchy ⮩ All the process as treated equal (use handle concept) P 1 P 3 P 4 P 2 Parent process Child process Parent process Child process P 5 P 6 P 3 P 5 P 6
  • 19. Handle ? When a process is created, the parent process is given a special token called handle. ? This handle is used to control the child process. ? A process is free to pass this token to some other process. P 1 P 3 P 4 P 2 Parent process Child process
  • 20. Multiprogramming ? The real CPU switches back and forth from process to process. ? This rapid switching back and forth is called multiprogramming. ? The number of processes loaded simultaneously in memory is called degree of multiprogramming.
  • 21. Multiprogramming execution ? There are three processes, one processor (CPU), three logical program counter (one for each processes) in memory and one physical program counter in processor. ? Here CPU is free (no process is running). ? No data in physical program counter. Physical Program Counter Logical Program Counter Logical Program Counter Logical Program Counter P1 P2 P3 Memor y Processo r P1 P2 P3
  • 22. Multiprogramming execution ? CPU is allocated to process P1 (process P1 is running). ? Data of process P1 is copied from its logical program counter to the physical program counter. Physical Program Counter Logical Program Counter Logical Program Counter Logical Program Counter P1 P2 P3 Memor y Processo r P1 P2 P3 P 1
  • 23. Multiprogramming execution ? CPU switches from process P1 to process P2. ? CPU is allocated to process P2 (process P2 is running). ? Data of process P1 is copied back to its logical program counter. ? Data of process P2 is copied from its logical program counter to the physical program counter. Physical Program Counter Logical Program Counter Logical Program Counter Logical Program Counter P1 P2 P3 Memor y Processo r P1 P2 P3 P 1 P 2
  • 24. Multiprogramming execution ? CPU switches from process P2 to process P3. ? CPU is allocated to process P3 (process P3 is running). ? Data of process P2 is copied back its logical program counter. ? Data of process P3 is copied from its logical program counter to the physical program counter. Physical Program Counter Logical Program Counter Logical Program Counter Logical Program Counter P1 P2 P3 Memor y Processo r P2 P3 P 2 P 3 P1
  • 25. Context switching ? Context switch means stopping one process and restarting another process. ? When an event occur, the OS saves the state of an active process (into its PCB) and restore the state of new process (from its PCB). ? Context switching is purely overhead because system does not perform any useful work while context switch. ? Sequence of action: 1. OS takes control (through interrupt) 2. Saves context of running process in the process PCB 3. Reload context of new process from the new process PCB 4. Return control to new process
  • 26. Context switching ? Context switch means can occur in only kernel mode ? It is highly dependent on hardware support ? It will be the most costly operation on operating system ? Situation in which context switch needs :- multitasking , interrupt handling ,user and kernel mode switching
  • 27. THREADS ? Thread is light weight process created by a process. ? Processes are used to execute large, ‘heavyweight’ jobs such as working in word, while threads are used to carry out smaller or ‘lightweight’ jobs such as auto saving a word document. Thread s
  • 28. What is Threads? ? Thread is light weight process created by a process. ? Thread is a single sequence stream within a process. ? Thread has it own ? program counter that keeps track of which instruction to execute next. ? system registers which hold its current working variables. ? stack which contains the execution history. ? Every program has at least on e thread ? Each thread can execute a set of instruction independent of other thread and processes
  • 29. Difference between thread and process THREAD PROCESS Lightweight process heavyweight process operating system is not required for thread switching operating system is required for thread switching One thread can read ,write or even completely clean another threads stacks each process operates independently If one thread is blocked and waiting then second thread in the same task can run If one process is blocked then other process can not execute until first process is unblocked use fewer resources use more resources
  • 30. Similarities between Process & Thread ? Like processes threads share CPU and only one thread is running at a time. ? Like processes threads within a process execute sequentially. ? Like processes thread can create children's. ? Like a traditional process, a thread can be in any one of several states: running, blocked, ready or terminated. ? Like process threads have Program Counter, Stack, Registers and State.
  • 31. Benefits/Advantages of Threads ? Threads minimize the context switching time. ? Use of threads provides concurrency within a process. ? Efficient communication. ? It is more easy to create and context switch threads. ? Threads can execute in parallel on multiprocessors. ? With threads, an application can avoid per-process overheads ⮩ Thread creation, deletion, switching easier than processes. ? Threads have full access to address space (easy sharing).
  • 33. 1. New: Thread is created 2. Ready: Executing a thread OS put thread into ready queue 3. Running: Highest priority ready thread enters the running running state 4. Blocked: Thread is waiting for a lock to access an object 5. Waiting: waiting for another thread to perform an action 6. Sleeping: Sleep for specified time after the expiration of time it enters into ready state 7. Dead: thread complete it task or operation .
  • 34. Single Threaded Process VS Multiple Threaded Process ▪ A single-threaded process is a process with a single thread. ▪ A multi-threaded process is a process with multiple threads. ▪ The multiple threads have its own registers, stack and counter but they share the code and data segment.
  • 35. Types of Threads 1. Kernel Level Thread 2. User Level Thread User Level Threa ds Kernel Level Threa ds
  • 36. User level thread ? User-level threads are small and much faster than kernel level threads. ? They are represented by a program counter(PC), stack, registers ? there is no kernel involvement in synchronization for user-level threads. ? Created by runtime libraries that are transparent to the OS ? Do not invoke the Kernel for scheduling decision ? Also called many to one mapping Advantages ● User-level threads are easier and faster to create ● They can also be more easily managed. ● User-level threads can be run on any operating system. ● There are no kernel mode privileges required for thread switching in user-level threads. Disadvantages ● Multithreaded applications in user-level threads cannot use multiprocessing to their advantage. ● The entire process is blocked if one user-level thread performs blocking operation.
  • 37. Kernel level thread ? handled by the operating system directly and the thread management is done by the kernel. ? slower than user-level threads. ? Thread controlled and created by system call Advantages ● Multiple threads of the same process can be scheduled on different processors ● It can also be multithreaded. ● If l thread is blocked, another thread of the same process can be scheduled by the kernel. Disadvantages ● Slower than the user level thread ● There will be overhead and increased in Kernel complexity
  • 38. Difference between user level and kernel level thread USER LEVEL THREAD KERNEL LEVEL THREAD User thread are implemented by users. Kernel threads are implemented by OS. OS doesn’t recognize user level threads. Kernel threads are recognized by OS. Implementation of user threads is easy. Implementation of kernel thread is complex. Context switch time is less. Context switch time is more. If one user level thread perform blocking operation then entire process will be blocked. If one kernel thread perform blocking operation then another thread with in same process can continue execution. multithread application cannot take multithread application take
  • 39. Hybrid Threads ? Combines the advantages of user level and kernel level thread. ? It uses kernel level thread and then multiplex user level thread on to some or all of kernel threads. ? Gives flexibility to programmer that how many kernel level threads to use and how many user level thread to multiplex on each one. ? Kernel is aware of only kernel level threads and schedule it.
  • 40. Multi Threading Models 1) ONE TO ONE ● Each user threads mapped to one kernel thread. ● Create more concurrency ● multiple thread are run in parallel on multiprocessor system ● number of threads increase memory is also increased ● System performance is slow ● Problem with this model is that creating a user thread requires the corresponding kernel thread.
  • 41. Multi Threading Models 2) MANY TO ONE ● Multiple user threads mapped to one kernel thread. ● System performance is increase by customizing the thread library scheduling process ● Problem with this model is that a user thread can block entire process because we have only one kernel thread. ● does not allow individual process to be split across multiple CPU ● multiple threads cannot run in parallel as only one thread can access the kernel at a time.
  • 42. Multi Threading Models 3) MANY TO MANY ● Multiple user threads multiplex to more than one kernel threads. ● Advantage with this model is that a user thread can not block entire process because we have multiple kernel thread. ● There can be as many user threads as required and their corresponding kernel threads can run in parallel on a multiprocessor. ● there is one limitation that OS design becomes complicated
  • 43. System calls ? A system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on. ? A system call is a way for programs to interact with the operating system. ? A computer program makes a system call when it makes a request to the operating system’s kernel. ? System call provides the services of the operating system to the user programs via Application Program Interface(API). ? It provides an interface between a process and operating system to allow user- level processes to request services of the operating system. ? System calls are the only entry points into the kernel system. ? All programs needing resources must use system calls.
  • 44. System calls ? ps (process status):- The ps (process status) command is used to provide information about the currently running processes, including their process identification numbers (PIDs). ? fork:- Fork system call is used for creating a new process, which is called child process, which runs concurrently with the process that makes the fork() call (parent process). ? wait:- Wait system call blocks the calling process until one of its child processes exits or a signal is received. After child process terminates, parent continues its execution after wait system call instruction. ? exit:- Exit system call terminates the running process normally. ? exec family:- The exec family of functions replaces the current running process with a new process.
  • 45. System calls ? ps (process status):- The ps (process status) command is used to provide information about the currently running processes, including their process identification numbers (PIDs). Syntax:-ps [option] it sends without any option then it display at least two processes currently on the system :the shell and ps PID – This is the unique process ID TTY – This is the typeof terminal that the user is logged in to TIME – This is the time in minutes and seconds that the process has been running CMD – The command that launched the process
  • 46. System calls ? fork:- Fork system call is used for creating a new process, which is called child process, which runs concurrently with the process that makes the fork() call (parent process). Syntax:-pid=fork(); what fork system do:- 1. create child process that is clone of parent 2. child running same program of parents 3. child inherit open file descriptor from the parents 4. child begin life with the same register values as parent what kernel does:- 5. it allocates a slot in the process table for the new process 6. assigns unique ID number to the childs 7. make logical copy of the parent process 8. increments the file and table counter
  • 47. System calls ? Uses of Fork:- when process wants to duplicate itself so parents and child can execute different sections of code at same time Example : in networking if parents wait for the service request from client when the request arrives the parents called fork and let the child handle the request and then parent goes back for waiting for next service
  • 48. SCHEDULING what is scheduling ? The process scheduling is the activity of the process manager that handles the removal of ? the running process from the CPU and the selection of another process on the basis of a ? particular strategy. ? It is an essential part of a Multiprogramming operating system. objective ? Share time fairly ? prevent starvation of a process ? Use processor efficiently ? Have low overhead
  • 49. SCHEDULING characteristics of good scheduling algorithm CPU Utilization should be designed so that CPU remains busy as possible. − Throughput Throughput is the amount of work completed in a unit of time. The − scheduling algorithm must look to maximize the number of jobs processed per time unit. Response time It is the time taken to start responding to the request. A scheduler − must aim to minimize response time for interactive users. Turnaround time Turnaround time refers to the time between the moment of − submission of a job and the time of its completion. Turnaround Time= Waiting Time + Burst Time(Execution Time) Waiting time It is the time a job waits for resource .The aim is to minimize the waiting − time. Waiting time = Response Time- Arrival Time
  • 51. Scheduling Queue ● All processes, upon entering into the system, are stored in the Job Queue. ● Processes in the Ready state are placed in the Ready Queue. ● Processes waiting for a device to become available are placed in Device Queues. There are unique device queues available for each I/O device. A new process is initially put in the Ready queue. It waits in the ready queue until it is selected for execution(or dispatched). Once the process is assigned to the CPU and is executing, one of the following several events can occur: ● The process could issue an I/O request, and then be placed in the I/O queue. ● The process could create a new subprocess and wait for its termination. ● The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
  • 53. Type of Scheduler 1) LONG TERM SCHEDULER ● It decide which program must get into the job queue. From the job queue, the Job Processor, selects processes and loads them into the memory for execution. ● Primary aim is to maintain a good degree of Multiprogramming. An optimal degree of Multiprogramming means the average rate of process creation is equal to the average departure rate of processes from the execution memory. ● used in real time OS 1) SHORT TERM SCHEDULER ● This is also known as CPU Scheduler and Dispatcher ● It runs very frequently. ● is responsible for ensuring there is no starvation(If it selects a process with a
  • 54. Type of Scheduler 3) MEDIUM TERM SCHEDULER ● This scheduler removes the processes from memory ● reduces the degree of multiprogramming. ● This can also be called as suspending and resuming the process. ● also useful to improve the mix of I/O bound and CPU bound processes in the memory.
  • 55. Difference between type of scheduler
  • 56. Scheduling Algorithm ● is a process of determining which process will own CPU for execution while another process is on hold. ● The main task of CPU scheduling is to make sure that whenever the CPU remains idle, the OS at least select one of the processes available in the ready queue for execution. Preemptive Scheduling the tasks are mostly assigned with their priorities. Sometimes it is important to run a task with a higher priority before another lower priority task, even if the lower priority task is still running. The lower priority task resumes when the higher priority task finishes its execution. Non-Preemptive Scheduling
  • 57. Difference between Preemptive and non Preemptive Preemptive Non- Preemptive In this resources are allocated to a process for a limited time. Once resources are allocated to a process, the process holds it till it completes its burst time or switches to waiting state. Process can be interrupted in between. Process can not be interrupted until it terminates CPU utilization is high. It is low in non preemptive scheduling. incurs the cost associated with access shared data does not increase the cost it is more complex Simple but very inefficient
  • 58. Types of Scheduling Algorithms
  • 59. 1) FCFS (FIRST COME FIRST SERVE) EXAMPLE 1
● Execution order: P1 → P2 → P3 → P4
● Average waiting time: (0+21+24+30)/4 = 18.75 ms
● Turnaround time = Waiting time + Burst time
P1 = 0+21 = 21
P2 = 21+3 = 24
P3 = 24+6 = 30
P4 = 30+2 = 32
  • 60. 1) FCFS (FIRST COME FIRST SERVE)
● FCFS is a non-preemptive scheduling algorithm that is easy to understand and use.
● The process that arrives first is executed first.
● Everyday example of FCFS: buying tickets at a ticket counter.
● It is simple to use and implement.
● It is poor in performance due to high waiting times.
● Resource utilization is poor in FCFS.
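The FCFS arithmetic from Example 1 can be reproduced with a short Python sketch (illustrative only; the burst times 21, 3, 6, 2 and an arrival time of 0 for every process are taken from the example):

```python
# FCFS: each process waits for the total burst time of all
# processes that arrived before it (all arrivals at t=0 here).
def fcfs(bursts):
    waiting, elapsed = [], 0
    for burst in bursts:
        waiting.append(elapsed)   # waits until all earlier processes finish
        elapsed += burst
    turnaround = [w + b for w, b in zip(waiting, bursts)]
    return waiting, turnaround

# Example 1: P1=21, P2=3, P3=6, P4=2
w, t = fcfs([21, 3, 6, 2])
print(w)                # [0, 21, 24, 30]
print(sum(w) / len(w))  # 18.75
print(t)                # [21, 24, 30, 32]
```

The long first burst (21) delays every later process, which is the "poor performance due to high waiting times" noted above.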
  • 61. EXAMPLE 2
PROCESS / BURST TIME: P1 4, P2 7, P3 3, P4 3, P5 5
EXAMPLE 3
PROCESS / BURST TIME: P1 10, P2 6, P3 12, P4 15
  • 62. SOLUTION OF EXAMPLE 2
PROCESS / BURST TIME / PRIORITY: P1 4/3, P2 7/5, P3 3/2, P4 3/1, P5 5/4
● Average waiting time: (0+4+11+14+17)/5 = 9.2 ms
SOLUTION OF EXAMPLE 3
PROCESS / BURST TIME: P1 10, P2 6, P3 12, P4 15
● Average waiting time: (0+10+16+28)/4 = 13.5 ms
  • 63. 2) SJF (SHORTEST JOB FIRST)
● Processes are ordered by the length of their CPU burst time.
● It reduces the average waiting time compared with FCFS.
● It is a non-preemptive algorithm.
ALGORITHM:
1) Sort all the processes according to arrival time.
2) Select the process that has the minimum arrival time and minimum burst time.
3) After a process completes, build a pool of the processes that arrived during its execution and select the one with the minimum burst time.
EXAMPLE:
PROCESS / BURST TIME: P1 5, P2 2, P3 6, P4 4
PROCESS / WAITING TIME / TURNAROUND TIME: P1 6 / 6+5=11, P2 0 / 0+2=2, P3 11 / 11+6=17, P4 2 / 2+4=6
● Average waiting time: (6+0+11+2)/4 = 4.75 ms
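A minimal Python sketch of non-preemptive SJF for the simple case the example uses, where all processes arrive at t=0 (so step 3 of the algorithm reduces to running jobs in increasing burst-time order):

```python
# SJF (non-preemptive, all processes arrive at t=0):
# run jobs in increasing order of burst time.
def sjf(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting = [0] * len(bursts)
    elapsed = 0
    for i in order:
        waiting[i] = elapsed      # waits for all shorter jobs to finish
        elapsed += bursts[i]
    turnaround = [w + b for w, b in zip(waiting, bursts)]
    return waiting, turnaround

# Slide example: P1=5, P2=2, P3=6, P4=4
w, t = sjf([5, 2, 6, 4])
print(w)                # [6, 0, 11, 2]
print(sum(w) / len(w))  # 4.75
```

Running the same burst times through FCFS order gives a higher average wait, which is the improvement SJF claims over FCFS.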
  • 64. EXAMPLE 1
PROCESS / BURST TIME / PRIORITY: P1 10/2, P2 6/1, P3 12/4, P4 15/3
EXAMPLE 2
PROCESS / BURST TIME / PRIORITY: P1 4/2, P2 7/1, P3 3/3, P4 2/4
  • 65. SOLUTION OF EXAMPLE 1
PROCESS / BURST TIME: P1 10, P2 6, P3 12, P4 15
PROCESS / WAITING TIME / TURNAROUND TIME: P1 6 / 6+10=16, P2 0 / 0+6=6, P3 16 / 16+12=28, P4 28 / 28+15=43
● Average waiting time: (0+6+16+28)/4 = 12.5 ms
SOLUTION OF EXAMPLE 2
PROCESS / BURST TIME: P1 4, P2 7, P3 3, P4 2
PROCESS / WAITING TIME / TURNAROUND TIME: P1 5 / 5+4=9, P2 9 / 9+7=16, P3 2 / 2+3=5, P4 0 / 0+2=2
● Average waiting time: (0+2+5+9)/4 = 4 ms
  • 66. 3) PRIORITY SCHEDULING
● Each process is assigned a priority. The process with the highest priority is executed first, and so on.
● Processes with the same priority are executed on a first come, first served basis.
PROBLEM: waiting time is longer for lower-priority processes, which can lead to starvation.
EXAMPLE:
PROCESS / BURST TIME / PRIORITY: P1 5/3, P2 2/4, P3 6/1, P4 4/2
PROCESS / WAITING TIME / TURNAROUND TIME: P1 10 / 10+5=15, P2 15 / 15+2=17, P3 0 / 0+6=6, P4 6 / 6+4=10
● Average waiting time: (10+15+0+6)/4 = 7.75 ms
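The example can be reproduced with a small Python sketch (non-preemptive, all processes arriving at t=0; a lower number means higher priority, as in the example, and ties fall back to arrival order):

```python
# Non-preemptive priority scheduling, all arrivals at t=0.
# Lower priority number = higher priority; ties broken by index (FCFS).
def priority_schedule(bursts, priorities):
    order = sorted(range(len(bursts)), key=lambda i: (priorities[i], i))
    waiting = [0] * len(bursts)
    elapsed = 0
    for i in order:
        waiting[i] = elapsed
        elapsed += bursts[i]
    return waiting

# Slide example: bursts P1..P4 = 5, 2, 6, 4 with priorities 3, 4, 1, 2
w = priority_schedule([5, 2, 6, 4], [3, 4, 1, 2])
print(w)                # [10, 15, 0, 6]
print(sum(w) / len(w))  # 7.75
```

Note how P2, the lowest-priority process, waits longest (15 units); with a steady stream of higher-priority arrivals it could starve, which is exactly the problem stated above.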
  • 67. 4) ROUND ROBIN SCHEDULING
● Round Robin is a preemptive process scheduling algorithm.
● Each process is given a fixed time to execute, called a quantum.
● Once a process has executed for the given time period, it is preempted and another process executes for its time period.
● Context switching is used to save the states of preempted processes.
● QUANTUM SLICE: 2 UNITS
Gantt chart: P1 P2 P3 P4 P5 P1 P2 P5 P1 (boundaries at 0, 2, 4, 5, 7, 9, 11, 12, 13, 14)
PROCESS / WAITING TIME / TURNAROUND TIME:
P1 0+(9-2)+(13-11)=9 / 9+5=14
P2 2+(11-4)=9 / 9+3=12
P3 4 / 4+1=5
P4 5 / 5+2=7
P5 7+(12-9)=10 / 10+3=13
● Average waiting time: (9+9+4+5+10)/5 = 7.4 ms
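A minimal Python simulation of the quantum-2 example above (the burst times 5, 3, 1, 2, 3 are inferred from the Gantt chart and the turnaround-minus-waiting figures; all processes are assumed to arrive at t=0):

```python
from collections import deque

# Round Robin with a fixed quantum; all processes arrive at t=0,
# so waiting time = completion time - burst time.
def round_robin(bursts, quantum):
    remaining = list(bursts)
    completion = [0] * len(bursts)
    queue = deque(range(len(bursts)))   # ready queue of process indices
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)             # preempted: back of the queue
        else:
            completion[i] = clock
    return [completion[i] - bursts[i] for i in range(len(bursts))]

# Slide example: bursts 5, 3, 1, 2, 3 with quantum 2
w = round_robin([5, 3, 1, 2, 3], 2)
print(w)                # [9, 9, 4, 5, 10]
print(sum(w) / len(w))  # 7.4
```

The deque models the circular ready queue: a preempted process rejoins at the tail, which is what makes the schedule cyclic and starvation-free.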
  • 68. ADVANTAGES
● In terms of average response time, this algorithm performs well.
● All jobs get a fair allocation of CPU time.
● There are no starvation or convoy-effect issues.
● The algorithm is cyclic in nature.
DISADVANTAGES
● It spends more time on context switches.
● For a small quantum, scheduling itself becomes time-consuming.
● It can give larger waiting and turnaround times than SJF.
● Throughput is comparatively low.
● If the time quantum is small, the Gantt chart becomes very long.
  • 69. EXAMPLES
QUANTUM SLICE 1 UNIT — PROCESS / BURST TIME: A 4, B 1, C 8, D 1
QUANTUM SLICE 2 UNITS — PROCESS / BURST TIME: P1 6, P2 5, P3 2, P4 3, P5 7
  • 70. SOLUTIONS
Quantum 1 — PROCESS / BURST TIME: A 4, B 1, C 8, D 1
PROCESS / WAITING TIME / TURNAROUND TIME:
A 0+3+1+1=5 / 5+4=9
B 1 / 1+1=2
C 2+2+1+1=6 / 6+8=14
D 3 / 3+1=4
● Average waiting time: (5+1+6+3)/4 = 3.75 ms
Quantum 2 — PROCESS / BURST TIME: P1 6, P2 5, P3 2, P4 3, P5 7
PROCESS / WAITING TIME / TURNAROUND TIME:
P1 0+8+5=13 / 13+6=19
P2 2+8+5=15 / 15+5=20
P3 4 / 4+2=6
P4 6+6=12 / 12+3=15
P5 8+5+3=16 / 16+7=23
● Average waiting time: (13+15+4+12+16)/5 = 12 ms
  • 71. COMPARISON BETWEEN FCFS AND ROUND ROBIN
FCFS | Round Robin
Non-preemptive in nature. | Preemptive in nature.
Response time is high. | Provides good response time.
Inconvenient for time-sharing systems. | Designed mainly for time-sharing systems and hence convenient to use.
Average waiting time is generally not minimal. | Average waiting time is generally lower.
Processes are simply executed in order of arrival. | Similar to FCFS in ordering, but uses a time quantum.
  • 72. COMPARISON OF CPU SCHEDULING ALGORITHMS
FCFS — Policy: non-preemptive. Advantages: easy to implement, minimum overhead. Disadvantage: average waiting time is high.
Round Robin — Policy: preemptive. Advantages: good response time, fair CPU time for each process. Disadvantage: requires selection of a good time slice.
Priority — Policy: non-preemptive (in this form). Advantage: ensures fast completion of important jobs. Disadvantage: starvation problem.
SJF — Policy: non-preemptive. Advantage: minimizes average waiting time. Disadvantage: indefinite postponement of some jobs.
  • 73. THREAD SCHEDULING — LOAD SHARING
● Threads are not assigned to a particular processor; an idle processor selects a thread from a global queue serving all processors, so the load is distributed evenly.
Queue disciplines:
● FCFS: on arrival, a thread is placed in the queue; the next thread is selected when a processor becomes idle.
● Smallest number of threads first: the queue is organized by priority, and the highest-priority thread is selected.
● Preemptive smallest number of threads first.
Advantages: no centralized scheduler is required; load distribution is even.
Disadvantage: a high degree of coordination is required.
  • 74. THREAD SCHEDULING
GANG SCHEDULING
● A group of related threads is scheduled as a unit.
● All members of the gang run simultaneously on different time-shared CPUs.
DEDICATED PROCESSOR ASSIGNMENT
● Provides implicit scheduling for the duration of program execution.
● Processors are chosen from the available pool.
DYNAMIC SCHEDULING
● The number of threads in a process is altered dynamically by the application.
● The operating system is involved in making scheduling decisions.
● It adjusts the load to improve utilization.
  • 75. REAL TIME SCHEDULING
● Hard real-time system: tasks must be completed within their required deadlines. Failure to meet a deadline leads to critical, catastrophic system failure such as physical damage or loss of life.
● Firm real-time system: a few missed deadlines will not lead to total failure, but missing more than a few may lead to complete and catastrophic failure.
● Soft real-time system: gives real-time tasks priority over non-real-time tasks. Performance degradation from missing several deadlines is tolerated, with decreased quality but no critical consequences.
  • 76. REAL TIME SCHEDULING CHARACTERISTICS
● Determinism: operations are performed at fixed times or within fixed intervals.
● Responsiveness: the time required to service an interrupt.
● User control: specify paging or process swapping, decide which processes must reside in main memory, select the disk scheduling algorithm, establish the rights of processes, and control task priorities.
● Reliability: control process failures; the mean time between failures should be very high.
  • 77. CLASSES OF ALGORITHMS
Static table-driven approaches:
● These algorithms perform a static analysis of the scheduling problem and capture the schedules that are feasible.
● This yields a schedule that determines, at run time, when each task must begin execution.
● Input required: periodic arrival time, execution time, periodic ending deadline, and priority of each task.
Static priority-driven preemptive approaches:
● These provide a useful way of assigning priorities among tasks in preemptive scheduling.
● Priority is related to the time constraints of the task.
  • 78. CLASSES OF ALGORITHMS
Dynamic planning-based approaches:
● Feasible schedules are identified dynamically (at run time). An arriving task is accepted for execution only if its time constraint can be satisfied.
Dynamic best effort approaches:
● These approaches consider deadlines rather than precomputed feasible schedules, so a task is aborted if its deadline is missed. This approach is widely used in most real-time systems.
  • 79. RATE MONOTONIC PRIORITY ASSIGNMENT (RM)
● Static priority scheduling.
● The process with the shortest period gets the highest priority.
● Schedulability test: U = C1/T1 + C2/T2 + ... + Cn/Tn <= n(2^(1/n) - 1), where n is the number of processes in the process set, Ci is the computation time of process i, Ti is the time period of process i, and U is the processor utilization.
ADVANTAGES: simple to understand, easy to implement, a stable algorithm.
DISADVANTAGES: lower CPU utilization; the processes involved should not share resources with other processes; the highest-priority process that needs to run will preempt all lower-priority processes.
  • 80. RATE MONOTONIC PRIORITY ASSIGNMENT (RM)
U = 0.5/3 + 1/4 + 2/6 = 0.167 + 0.25 + 0.333 = 0.75 (less than 1, so the task set can be executed)
NOTE:
T1 executes 0.5 unit in every 3 time slices
T2 executes 1 unit in every 4 time slices
T3 executes 2 units in every 6 time slices
  • 81. RATE MONOTONIC PRIORITY ASSIGNMENT (RM)
● The task with the shorter period has the higher priority, so T1 has the highest priority, T2 intermediate priority, and T3 the lowest. At t=0 all the tasks are released; T1 has the highest priority, so it executes first, until t=0.5.
● At t=0.5, T2 has higher priority than T3, so it executes first, for one time unit, until t=1.5. After its completion only T3 remains, so it starts executing and runs until t=3.
● At t=3, T1 is released; as it has higher priority than T3, it preempts T3 and executes until t=3.5. After that the remaining part of T3 executes.
● At t=4, T2 is released and completes its execution, as no other task is running in the system at this time.
● At t=6, both T1 and T3 are released at the same time, but T1 has higher priority due to its shorter period, so it preempts T3 and executes until t=6.5; after that T3 runs until t=8.
● At t=8, T2, with higher priority than T3, is released, so it preempts T3 and starts its execution.
  • 82. RATE MONOTONIC PRIORITY ASSIGNMENT (RM)
TASK / EXECUTION TIME / PERIOD: T1 6/9, T2 5/15, T3 1/5
U = 6/9 + 5/15 + 1/5 = 1.2. It is more than 1, so the test fails.
TASK / EXECUTION TIME / PERIOD: T1 50/100, T2 30/200, T3 100/500
U = 50/100 + 30/200 + 100/500 = 0.85. It is less than 1, so the task set is schedulable.
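The utilization check used on these slides can be sketched in Python. Note this follows the slides' U <= 1 criterion, which is a necessary condition only; the stricter Liu & Layland sufficient bound for RM is n(2^(1/n) - 1):

```python
# Processor utilization for a set of periodic tasks.
# tasks = [(execution_time, period), ...]
def utilization(tasks):
    return sum(c / t for c, t in tasks)

# First task set from the slide: fails the test (U > 1)
u1 = utilization([(6, 9), (5, 15), (1, 5)])
print(round(u1, 2), u1 <= 1)   # 1.2 False

# Second task set: passes the slide's U <= 1 check
u2 = utilization([(50, 100), (30, 200), (100, 500)])
print(round(u2, 2), u2 <= 1)   # 0.85 True
```

For the second set, the Liu & Layland bound for n=3 is 3(2^(1/3) - 1) ≈ 0.78, so U = 0.85 passes the necessary test here but not the sufficient one; RM schedulability would then need a more exact analysis.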
  • 83. EARLIEST DEADLINE FIRST SCHEDULING (EDF)
● An optimal dynamic priority scheduling algorithm used in real-time systems.
● It can be used for both static and dynamic real-time scheduling.
● EDF assigns priorities to tasks according to their absolute deadlines: the task whose deadline is closest gets the highest priority.
● In EDF, an executing task can be preempted whenever another periodic instance with an earlier deadline becomes ready for execution.
ADVANTAGES: it is optimal and gives the best CPU utilization.
DISADVANTAGES: it needs dynamic priorities, performance degrades under overload, and it is difficult to implement.
  • 84. EARLIEST DEADLINE FIRST SCHEDULING (EDF)
● U = 1/4 + 2/6 + 3/8 = 0.25 + 0.333 + 0.375 ≈ 0.958 (less than 1, so the task set is schedulable)
  • 85. EARLIEST DEADLINE FIRST SCHEDULING (EDF)
● At t=0 all the tasks are released. T1 has the highest priority, as its deadline (t=4) is earlier than T2's (t=6) and T3's (t=8).
● At t=1 the absolute deadlines are compared again; T2 has the shorter deadline, so it executes, and after that T3 starts executing. At t=4 T1 is released; at this instant T1 and T3 have the same deadline (t=8), so the tie is broken randomly and we continue to execute T3.
● At t=6 the deadline of T1 is earlier than T2's, so T1 executes and then T2 begins. At t=8 T1 and T2 again have the same deadline (t=12), so the tie is broken randomly, T2 continues its execution, and then T1 completes. At t=12 T1 and T3 have the same deadline (t=16), so again the tie is broken randomly and we continue to execute T3.
● At t=13 T1 begins its execution and ends at t=14. T2 is now the only task in the system, so it completes its execution.
● At t=16 T1 and T3 are released together; priorities are decided by absolute deadline, so T1 executes first (its deadline is t=20, while T3's is t=24). After T1 completes, T3 starts; at t=18 T2 is released with deadline t=24, the same as T3's, so the tie is broken randomly and we continue to execute T3.
● At t=20 both T1 and T2 are in the system with the same deadline (t=24), so again the tie is broken randomly and T2 executes; after that T1 completes its execution. The schedule continues in the same way.
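The priority rule in the walkthrough can be illustrated with a small discrete-time EDF simulator (a sketch only: time advances in unit slots, a job's absolute deadline is its next release time, and ties are broken by lowest task index, whereas the walkthrough above breaks them randomly, so the tie slots can differ):

```python
# Minimal discrete-time EDF simulator for periodic tasks.
# tasks = [(execution_time, period), ...]; deadline = next release.
def edf(tasks, horizon):
    remaining = [0] * len(tasks)      # work left for the current job
    deadline = [0] * len(tasks)       # absolute deadline of current job
    schedule = []
    for t in range(horizon):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:            # new job of task i released
                remaining[i] = c
                deadline[i] = t + p
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            i = min(ready, key=lambda j: deadline[j])  # earliest deadline
            remaining[i] -= 1
            schedule.append(i)        # run task i for one slot
        else:
            schedule.append(None)     # CPU idle
    return schedule

# Task set from the example: T1=(1,4), T2=(2,6), T3=(3,8)
print(edf([(1, 4), (2, 6), (3, 8)], 8))  # [0, 1, 1, 2, 0, 2, 2, 1]
```

The first slots match the walkthrough (T1, then T2, then T3); at t=4 this sketch runs T1 because the tie at deadline t=8 is broken by index rather than randomly.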