Operating systems 2018
1DT044 and 1DT096
2018-02-07 karl.marklund@it.uu.se Uppsala University
CPU scheduling
Lecture 3
Module 3
Module 3-cpu-scheduling
ready queue CPU
I/O queue
I/O
event
I/O
request
job
termination
job
creation
Multiprogramming
A schematic view of multiprogramming
A job only leaves the CPU when requesting I/O.
In RAM
In RAM
ready queue CPU
I/O queue
I/O
event
I/O
request
job
termination
job
creation
Multitasking
A schematic view of multitasking
time slice
expires
A job can be forced to
leave the CPU by a
timer.
In RAM
In RAM
ready queue CPU
I/O queue
I/O
event
I/O
request
process
termination
time slice
expires
fork a
child
a new child process is created
Process creation
Processes are created by forking, i.e., the parent process creates a new process that is a copy of the parent.
In RAM
In RAM
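As a rough illustration (not part of the original slides), a parent process can create a copy of itself with the fork() system call. A minimal sketch, assuming a Unix-like system, using Python's os.fork():

```python
import os

# Minimal sketch (Unix only): the parent forks a child that starts out as a
# copy of the parent; both continue from the point where fork() returns.
pid = os.fork()
if pid == 0:                      # return value 0 => we are in the child
    print(f"child : pid={os.getpid()} parent={os.getppid()}")
    os._exit(0)                   # the child terminates
else:                             # in the parent, fork() returns the child's PID
    os.waitpid(pid, 0)            # wait for the child to terminate
    print(f"parent: pid={os.getpid()} reaped child {pid}")
```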
ready queue
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
time slice
expires
fork a child
STS
The short-term scheduler (STS), aka CPU scheduler, selects which process in the in-memory ready queue should be executed next and allocates the CPU.
Short-term scheduler
In RAM
In RAM
Scheduler dispatch
The CPU scheduler selects one process from among the processes in memory that are READY to execute. The dispatcher then gives the selected process control of the CPU. This action is called scheduler dispatch (SD).
ready running terminated
waiting
new
SD
Dispatcher module gives control of the CPU to
the process selected by the short-term
scheduler; this involves:
★ switching context
★ switching to user mode
★ jumping to the proper location in the user
program to resume execution of that
program.
Dispatch latency: time it takes for the
dispatcher to stop one process and start another.
Process
Control Block
(PCB)
Process Control Block (PCB)
The process control block (PCB) is a data
structure in the operating system kernel
containing the information needed to
manage a particular process. 
Source https://en.wikipedia.org/wiki/Process_control_block 2018-01-21
In brief, the PCB serves as the repository
for any information that may vary from
process to process.
Process Control Block (PCB)
Process id (PID)
Process state (new, ready, running,
waiting or terminated)
CPU Context
I/O status information
Memory management information
CPU scheduling information
Example of information stored in the PCB
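To make the list above concrete, here is a toy PCB written as a Python dataclass. The field names are illustrative only; a real kernel (e.g. Linux's struct task_struct) keeps this information in C and with far more detail.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:                                            # toy process control block
    pid: int                                          # process id (PID)
    state: str = "new"                                # new, ready, running, waiting, terminated
    context: dict = field(default_factory=dict)       # saved CPU registers, program counter
    open_files: list = field(default_factory=list)    # I/O status information
    memory: dict = field(default_factory=dict)        # memory management information
    priority: int = 0                                 # CPU scheduling information

pcb = PCB(pid=42)
pcb.state = "ready"
print(pcb)
```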
ready queue
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
time slice
expires
fork a child   LTS
STS
The Long-term scheduler (LTS) (aka job scheduler) decides whether a new
process should be brought into the ready queue in main memory or delayed.
When a process is ready to execute, it is added to the job pool (on disk). When
RAM is sufficiently free, some processes are brought from the job pool to the
ready queue (in RAM).
Long-term scheduler (1)
In RAM
In RAM
Job pool
In secondary storage
ready queue
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
time slice
expires
fork a child   LTS
STS
On some systems, the long-term scheduler may be absent or
minimal. For example, time-sharing systems such as UNIX and
Microsoft Windows systems often have no long-term scheduler but
simply put every new process in memory for the short-term scheduler.
In RAM
In RAM
Job pool
In secondary storage
Long-term scheduler (2)
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
time slice
expires
fork a child   LTS
STS
Swapped out   MTS
The medium-term scheduler (MTS) temporarily removes processes from
main memory and places them in secondary storage and vice versa,
which is commonly referred to as "swapping in" and "swapping out".
Medium-term scheduler
In RAM
In RAM
In secondary storage
Job pool
In secondary storage
Scheduling
algorithms
ready
queue
CPU
I/O queue   I/O event   I/O request
process
termination
time slice
expires
fork a child
Short Term
Scheduler (STS)
Algorithms?
Scheduling algorithms
Performance
Performance is a context dependent metric.
What do we mean by performance?
Scheduling
criteria
ready queue CPU
I/O queue
I/O
event
I/O
request
job
termination
job
creation
Multiprogramming
Multiprogramming maximises CPU utilisation.
Scheduling
criteria
CPU utilisation is not the only criterion ...
Criteria, definition and goal:
★ CPU utilization: the % of time the CPU is executing user-level process code (maximize).
★ Throughput: number of processes that complete their execution per time unit (maximize).
★ Turnaround time: amount of time to execute a particular process (minimize).
★ Waiting time: amount of time a process has been waiting in the ready queue (minimize).
★ Response time: amount of time it takes from when a request was submitted until the first response is produced (minimize).
Scheduling criteria
All animals are
equal, but some
animals are more
equal than others.
George Orwell
Animal farm (1945)
(An example of political satire)
Are all processes
equal, or are
some processes
more equal than
others?
Karl Marklund
Operating systems 2018
Classification
of processes
Do all processes behave the same?
Do all processes have the same needs?
CPU bursts and I/O bursts
When a program executes it alternates between CPU
bursts and I/O bursts.
Histogram of CPU-burst times
Number of
CPU bursts
CPU burst
duration
(ms)
Long CPU bursts are very rare - why?
A long CPU burst means a long period of working with memory and registers without any output to the screen, any input from a user, or any file input/output.
But, eventually, every program does at least one of these. Even background programs are usually reading/writing files.
An I/O-bound process spends
more time doing I/O than
computations and is characterised
by many short CPU bursts.
A CPU-bound process spends
more time doing computations and
is characterised by few very long
CPU bursts.
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
time slice
expires
fork a child   LTS
STS
Swapped out   MTS
The medium-term scheduler (MTS) can be used to maintain a good
balance between I/O bound and CPU bound processes in the ready
queue.
I/O bound and CPU bound processes
In RAM
In RAM
In secondary storage
Process A Process B Process Z
The operating system controls the hardware and coordinates its use among the various application programs for the various users. An operating system provides an environment for the execution of programs.
Computer hardware
Human user Human user
Not all processes interact with human users.
★ Interactive
★ Batch
★ Real-Time
★ I/O Bound
★ CPU Bound
In general, processes can be classified by the following
characteristics.
Classification of processes
Interactive
Interactive processes interact constantly with their human users.
★ Spend a lot of time waiting for keypresses and mouse
operations.
★ When input is received, the process must be woken up
quickly, or the user will find the system to be
unresponsive.
★ Typically, the average delay must fall between 50 and 150
ms. The variance of such delay must also be bounded, or
the user will find the system to be erratic.
Typical examples:
★ Command shells and interpreters, text editors, graphical
applications and games.
Batch
Batch processes do not interact with human users.
★ Do not need to be responsive.
★ Often run in the background.
★ Often penalised by the scheduler.
Typical examples:
★ Compilers, database search engines, and scientific
computations.
Real-time
Real-time processes have very strong scheduling
requirements.
★ Such processes should never be blocked by lower-
priority processes.
★ Should have a short response time.
★ Most important, response time should have a minimum
variance.
Typical examples:
★ Video and sound applications, robot controllers, and
programs that collect data from physical sensors.
The two classifications above are somewhat independent. For
instance, a batch process can be either I/O-bound (e.g., a database
server) or CPU-bound (e.g., an image-rendering program).
Interactive
Batch
Real-time
CPU-bound
IO-bound
Relation?
In general, there is no way to distinguish between interactive and
batch programs. In order to offer a good response time to
interactive applications, Linux (like all Unix kernels) implicitly
favours I/O-bound processes over CPU-bound ones.
Scheduling
algorithms
ready
queue
CPU
I/O queue   I/O event   I/O request
process
termination
fork a child
Short Term Scheduler (STS)
Algorithms
• First-Come, First-Served (FCFS)
• Shortest Job First (SJF)
• Round-Robin (RR)
• Other alternatives?
Scheduling algorithms
Evaluation of
CPU scheduling
algorithms
Trace with CPU and I/O burst times
Evaluation of CPU schedulers by simulation
To evaluate different CPU scheduling algorithms, data obtained from
instrumented executables can be used as input in simulations.
Model of
CPU
scheduling
ready running terminated
waiting
new
Simplified model of CPU
scheduling
To make it easy to reason about CPU scheduling we will
use a simplified model.
Events causing a
scheduler dispatch
ready running terminated
waiting
new
SD
1
2
5
4
3
1 A new process arrives at the ready queue.
ready running terminated
waiting
new
SD
1
2
5
4
3
2 Time slice expires or other forms of preemption.
3 The running process terminates.
4 The running process requests I/O.
5 An I/O request completes.
Preemptive and nonpreemptive
Scheduler dispatch can be preemptive or nonpreemptive.
ready running terminated
waiting
new
SD
1
2
5
4
3
A preemptive dispatch is caused by an event external to the
running process.
A nonpreemptive dispatch is caused by the running process
itself.
Preemptive and nonpreemptive
Scheduler dispatch can be preemptive or nonpreemptive.
Events causing a preemptive scheduler dispatch: 1 (a new process arrives at the ready queue), 2 (the time slice expires or some other form of preemption) and 5 (an I/O request completes).
Events causing a nonpreemptive scheduler dispatch: 3 (the running process terminates) and 4 (the running process requests I/O).
ready running terminated
waiting
new
SD
ready running terminated
waiting
new
4
3
CPU burst
With CPU burst we mean the time spent by a running process
using the CPU before terminating or performing a
blocking system call.
3 4
ready running terminated
waiting
new
Preemption
A process may get preempted, i.e., forced off the CPU and put back in the ready queue before completing its CPU burst.
P
P
ready running terminated
waiting
new
Process arrival
We make no distinction between a new process arriving at the ready queue and a process coming back to the ready queue after completion of a blocking system call.
1
5
1
5
ready running terminated
waiting
new
Waiting time
With waiting time we mean the total time a process has been waiting in the
ready queue until terminating or performing a blocking system call.
A process that is preempted, forced off the CPU and put back in the ready queue before completing its CPU burst accumulates additional waiting time.
4
3
3 4
P
P
ready running terminated
waiting
new
Response time
The model we use to study and compare different CPU scheduling algorithms is abstract and doesn't take into account what a response is, or that it may take time to produce a response once a task gets to execute on the CPU. In this model, response time is defined as the time from when a task enters the ready queue (event 1 or 5) to the time the task first gets to execute on the CPU (scheduler dispatch).
PID
CPU burst
time
Process representation
When studying CPU scheduling a process will be
represented by process ID (PID) and the next
CPU burst time.
FCFS
First-Come, First-Served
ready queue
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
fork a child
FCFS
The first come, first served (commonly called FIFO ‒ first in, first out) process scheduling algorithm is the simplest process scheduling algorithm. Processes are executed on the CPU in the same order they arrive at the ready queue.
First-Come, First-Served (FCFS)
Ready queue (FIFO)
CPU
I/O queue   I/O event   I/O request
process
termination
fork a child
FCFS
Scheduling algorithms
P3 P2 P1
3 3 24
Gantt Chart for the FCFS schedule
PID Waiting time
P1 0
P2 24
P3 27
Average waiting time
(0 + 24 + 27)/3 = 17
PID P3 P2 P1
CPU burst
time
3 3 24
P1 P2 P3
0 24 27 30
The processes arrive at the ready queue in the order P1, P2, P3.
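The waiting times above can be reproduced with a few lines of code. This is an illustrative sketch only; it assumes all three processes are already in the FIFO ready queue at time 0:

```python
# FCFS: processes run in arrival order; each process waits for the total
# burst time of everything queued in front of it.
order = [("P1", 24), ("P2", 3), ("P3", 3)]      # (PID, CPU burst)

clock, waiting = 0, {}
for pid, burst in order:
    waiting[pid] = clock        # time spent in the ready queue before dispatch
    clock += burst              # run to completion (nonpreemptive)

print(waiting)                                  # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waiting.values()) / len(waiting))     # 17.0
```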
PID P1 P3 P2
CPU burst
time
24 3 3
What if the same processes arrive at the ready queue in the order P2, P3, P1?
PID Waiting time
P1 6
P2 0
P3 3
Average waiting time
(6 + 0 + 3)/3 = 3
P2 P3 P1
0 3 6 30
Gantt Chart for the FCFS schedule
The convoy effect
When using FCFS scheduling, if I/O bound (short CPU
burst) processes are scheduled after CPU bound (long
CPU burst) processes, the average waiting time increases.
Long CPU burst   Short CPU burst   Short CPU burst
PID P1 P3 P2
CPU burst time 24 3 3
PID Waiting time
P1 6
P2 0
P3 3
P2 P3 P1
0 3 6 30
Gantt Charts for the FCFS schedules
PID P3 P2 P1
CPU burst time 3 3 24
P1 P2 P3
0 24 27 30
Average waiting
time
PID Waiting time
P1 0
P2 24
P3 27
(0 + 24 + 27)/3 = 17
Average waiting
time
(6 + 0 + 3)/3 = 3
The convoy effect
When using FCFS scheduling, if I/O bound (short CPU burst)
processes are scheduled after CPU bound (long CPU burst)
processes, the average waiting time increases.
First-Come, First-Served (FCFS)
Source: https://en.wikibooks.org/wiki/Operating_System_Design/Scheduling_Processes/FCFS 2016-02-02
Advantages
★ Simple
★ Easy
★ First come, first served
Disadvantages
★ This scheduling method is nonpreemptive, that is, a
process will execute its CPU burst until it finishes.
★ Because of this nonpreemptive scheduling, short processes at the back of the queue have to wait for the long process at the front to finish, making the average waiting time increase (the convoy effect).
What is the optimal
schedule?
To answer this question we must first define what we mean by optimal.
What schedule
minimises the
average waiting
time?
A better question
In general, if we have CPU bursts x1, ... xn, calculate the total
waiting time Twait.
Twait = 0 +
x1 +
x1 + x2 +
x1 + x2 + x3 +
... +
x1 + x2 + x3 + ... + xn-1
= (n-1)x1 + (n-2)x2 + ... + xn-1
Now, calculate the average waiting time.
Average(Twait) = [(n-1)x1 + (n-2)x2 + ... + xn-1]/n
The average waiting time is reduced if the xi's that are multiplied the most times are the smallest ones. Hence, scheduling the shortest job first gives the minimal average waiting time.
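The closed-form expression can be checked against a direct simulation; a small sketch (not from the slides) for the two burst orders used earlier:

```python
def total_wait_formula(x):
    # (n-1)*x1 + (n-2)*x2 + ... + 1*x_{n-1}, with x given in service order
    n = len(x)
    return sum((n - 1 - i) * x[i] for i in range(n - 1))

def total_wait_sim(x):
    clock, wait = 0, 0
    for burst in x:
        wait += clock           # this job waited while all earlier jobs ran
        clock += burst
    return wait

for order in ([24, 3, 3], [3, 3, 24]):
    assert total_wait_formula(order) == total_wait_sim(order)
    print(order, total_wait_formula(order) / len(order))   # 17.0 and 3.0
```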
SJF
Shortest Job First
ready queue
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
fork a child
SJF
Shortest Job First (SJF) scheduling assigns the process estimated to complete fastest, i.e., the process with the shortest CPU burst, to the CPU as soon as CPU time is available.
Shortest Job First (SJF)
Shortest Job First (SJF)
Shortest Job First (SJF) scheduling assigns the process estimated
to complete fastest to the CPU as soon as CPU time is available.
★ Associate with each process the length of its next CPU
burst. Use these lengths to schedule the process with the
shortest burst time.
★ Also known as Shortest Process Next (SPN) scheduling.
★ Also known as Shortest job next (SJN) scheduling.
★ SJF is optimal – gives minimum average waiting time for
a given set of processes.
Source: https://en.wikibooks.org/wiki/Operating_System_Design/Scheduling_Processes/SPN 2016-02-02
https://en.wikipedia.org/wiki/Shortest_job_next 2016-02-02
The difficulty is knowing the length of the next CPU burst.
Burst from the future?
Can the length of the next CPU burst be determined dynamically?
Exponential
averaging
We can only estimate the length of the next CPU burst. The estimate can be computed from the lengths of previous CPU bursts using exponential averaging:

τn+1 = α tn + (1 − α) τn

where tn is the length of the most recent actual CPU burst and τn is the previous estimate.

α = 0: τn+1 = τn. Recent history does not affect the estimate.
α = 1: τn+1 = tn. Only the actual last CPU burst affects the estimate.

Analysis: what happens when α → 0 or when α → 1?
Exponential averaging example
τ0 = 10, α = 0.5, so τn+1 = 0.5*(previous burst + previous estimate),
e.g. τ1 = 0.5*(t0 + τ0) = 0.5*(6 + 10) = 8.

Time (n):         0    1    2    3    4    5    6    7
CPU burst (tn):   6    4    6    4   13   13   13
Estimate (τn):   10    8    6    6    5    9   11   12
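The table can be regenerated with a couple of lines of code; a sketch using the burst sequence, τ0 = 10 and α = 0.5 from the example:

```python
alpha, tau = 0.5, 10.0                   # alpha and the initial guess tau_0
bursts = [6, 4, 6, 4, 13, 13, 13]        # observed CPU bursts t_0 .. t_6

for t in bursts:
    print(f"estimate {tau:4.0f}   actual burst {t}")
    tau = alpha * t + (1 - alpha) * tau  # exponential averaging update
print(f"next estimate {tau:4.0f}")       # estimates: 10 8 6 6 5 9 11 12
```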
Exponential averaging
If we expand the formula for τn+1 by substituting for τn we get:

τn+1 = α tn + (1 − α) α tn−1 + … + (1 − α)^j α tn−j + … + (1 − α)^(n+1) τ0

Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
ready queue
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
fork a child
SJF
Can use exponential averaging to estimate the next CPU
burst for each process.
Shortest Job First (SJF)
Gantt Chart for the SJF schedule
Average waiting time
(3 + 16 + 9 + 0)/4 = 28/4 = 7
PID P4 P3 P2 P1
CPU burst
time
3 7 8 6
P4 P1 P3 P2
0 3 9 16 24
Processes in the ready
queue.
Process
Waiting
time
P1 3
P2 16
P3 9
P4 0
Estimated values of CPU bursts
ready queue
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
fork a child
SJF
But do all processes arrive at the ready queue at the same time?
Arrival time
Must keep track of when a
process arrives to the ready
queue.
Ready queue
Process
Arrival
time
Burst
Time
P1 0 7
P2 2 4
P3 4 1
P4 5 4
Example of SJF
P1
0
P3
7
P2
8
P4
12 16
Time Action
0 Only P1 is ready to execute.
7 When P1 finishes, the process in the ready queue with the shortest burst time is selected for dispatch, in this example P3.
8 When P3 finishes, both P2 and P4 have burst time 4. Use FIFO to break ties. In this example P2 arrives before P4, hence P2 is selected for execution.
12 When P2 finishes, only P4 remains in the ready queue.
16 The ready queue is now empty, all processes done.
Gantt Chart for the SJF schedule
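A compact way to reproduce this schedule is to keep the ready queue as a heap ordered by (burst time, arrival time), so that ties are broken FIFO. The sketch below is illustrative only and assumes the four processes in the table above:

```python
import heapq

def sjf(procs):
    """Nonpreemptive SJF. procs: list of (name, arrival time, CPU burst)."""
    pending = sorted(procs, key=lambda p: p[1])   # not yet arrived, by arrival time
    ready, schedule, clock, i = [], [], 0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][1] <= clock:
            name, arrival, burst = pending[i]
            heapq.heappush(ready, (burst, arrival, name))   # FIFO on equal bursts
            i += 1
        if not ready:                   # CPU idle until the next arrival
            clock = pending[i][1]
            continue
        burst, arrival, name = heapq.heappop(ready)
        schedule.append((name, clock))  # dispatch time
        clock += burst                  # run to completion (nonpreemptive)
    return schedule

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# [('P1', 0), ('P3', 7), ('P2', 8), ('P4', 12)]
```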
Ready queue SJF
Process
Arrival
time
Burst
Time
Tdispatch - Tarrival =
Waiting
time
P1 0 7 0 - 0 = 0
P2 2 4 8 - 2 = 6
P3 4 1 7 - 4 = 3
P4 5 4 12 - 5 = 7
Example of SJF - Average waiting time
Average waiting time
(0 + 6 + 3 + 7)/4 = 16/4 = 4
P1
0
P3
7
P2
8
P4
12 16
Gantt Chart for the SJF schedule
ready queue
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
fork a child
?
When a process with a shorter burst time than the currently running process arrives at the ready queue ...
... wouldn't it be better (minimising the average waiting time) to preempt the running process and switch to the newly arrived process?
A
P
A
P
ready running terminated
waiting
new
Preemption
A process may get preempted, i.e., forced off the CPU and put back in the ready queue before completing its CPU burst.
P
P
PSJF
Preemptive Shortest Job
First
ready queue
ready queue CPU
I/O queue   I/O event   I/O request
process
termination
fork a child
PSJF
P
An extension of SJF where the currently running process is preempted if the CPU burst of a process arriving at the ready queue is shorter than the remaining CPU burst of the currently running process.
Preemptive Shortest Job First (PSJF)
P
A
Preemptive Shortest Job First (PSJF)
The currently running process is preempted if the CPU burst of a process arriving at the ready queue is shorter than the remaining CPU burst of the currently running process.
★ Also known as shortest remaining time first (SRTF).
★ The currently executing process will always run until completion
or until a new process is added to the ready queue that requires
a smaller amount of time to complete.
★ Shortest remaining time is advantageous because short
processes are handled very quickly.
★ Requires very little overhead since a decision is made only
when a process completes or a new process is added, and
when a new process is added the algorithm only needs to
compare the currently executing process with the new process,
ignoring all other processes currently in the ready queue.
Source: https://en.wikipedia.org/wiki/Shortest_remaining_time 2016-02-02
SJF: The currently running process is allowed to continue
to execute.
PSJF: The currently running process is preempted if the
CPU burst of the newly arrived process is shorter than the
remaining CPU burst of the currently running process.
ready running terminated
new
1
5
waiting
SJF vs PSJF
Ready queue
Process
Arrival
time
Burst
time
P1 0 7
P2 2 4
P3 4 1
P4 5 4
Example of PSJF
P1
0
P2
2
P3
4
P2
5
P1
11 16
T Action
0 P1 is the only process ready to run.
2
When P2 arrives, P1 is preempted since the burst time of P2 (4)
is smaller than the remaining burst time of P1 (5).
4
When P3 arrives, P2 is preempted since the burst time of P3 (1)
is smaller than the remaining burst time of P1 (5) and P2 (2).
5
P3 is done and P4 (4) arrives at the ready queue where P1 (5) and P2 (2) are already waiting. P2 has the smallest remaining burst time and is selected to run next.
7
P2 is done. P1 (5) and P4 (4) wait in the ready queue. P4 (4) has the smallest remaining burst time and is selected to run next.
11
P4 is done. Only P1 (5) waits in the ready queue and is selected
to run next.
16 The ready queue is now empty, all processes done.
Gantt Chart for the PSJF schedule
P4
7
P
P
P1
0
P2
2
P3
4
P2
5
P1
11 16
Gantt Chart for the PSJF schedule
P4
7
READY QUEUE PSJF
Process
Arrival
time
Burst
time
Tdispatch - Tarrival =
Waiting
time
P1 0 7
0 - 0 = 0
11 - 2 = 9
P2 2 4
2 - 2 = 0
5 - 4 = 1
P3 4 1 4 - 4 = 0
P4 5 4 7 - 5 = 2
Processes may
have to wait in
the ready queue
more than once.
Average waiting time
(9 + 1 + 0 + 2)/4 = 12/4 = 3
Average waiting time
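The same example can be run through a small preemptive (shortest remaining time first) simulation, one time unit at a time. The sketch is illustrative only; it re-evaluates the choice of process every time unit, which is simple but not how a real kernel would implement it:

```python
procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}  # name: (arrival, burst)

remaining = {name: burst for name, (arrival, burst) in procs.items()}
finish, clock = {}, 0
while remaining:
    ready = [n for n in remaining if procs[n][0] <= clock]
    if not ready:
        clock += 1
        continue
    # shortest remaining time first; FIFO (earliest arrival) on ties
    run = min(ready, key=lambda n: (remaining[n], procs[n][0]))
    remaining[run] -= 1
    clock += 1
    if remaining[run] == 0:
        finish[run] = clock
        del remaining[run]

waits = {n: finish[n] - arrival - burst for n, (arrival, burst) in procs.items()}
print(waits)                              # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2}
print(sum(waits.values()) / len(waits))   # 3.0
```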
SJF vs PSJF SJF, average waiting time = 4
READY QUEUE
Process
Arrival
time
Burst
Time
P1 0 7
P2 2 4
P3 4 1
P4 5 4
PSJF, average waiting time = 3
SJF gives the optimal average waiting time for a given set
of processes currently in the ready queue.
PSJF aims at decreasing the average waiting time by
allowing a newly arriving process to preempt the currently
running process if the CPU burst of the new process is
shorter than what remains of the currently running
process.
P1
0
P3
7
P2
8
P4
12 16
P1
0
P2
2
P3
4
P2
5
P1
11 16
P4
7
Priority
Scheduling
A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority
(smallest integer = highest priority).
★ Preemption – should a higher priority process be
allowed to preempt a running process with lower priority?
★ Starvation – low priority processes may never execute.
★ Ageing – ensure that jobs with lower priority will
eventually complete their execution. Ageing can be
implemented by increasing the priority of a process as
time progresses.
★ SJF is a priority scheduling algorithm where priority is
the predicted next CPU burst time.
Priority scheduling
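A ready queue for priority scheduling is naturally kept as a min-heap keyed on the priority number, so the dispatcher always picks the smallest number first. A minimal sketch, not from the slides, and without the ageing mechanism:

```python
import heapq

ready = []                                   # ready queue as a priority heap
heapq.heappush(ready, (3, "editor"))
heapq.heappush(ready, (1, "audio player"))   # smallest number = highest priority
heapq.heappush(ready, (7, "batch job"))

while ready:
    priority, name = heapq.heappop(ready)    # the dispatcher picks the highest priority
    print(f"dispatch {name} (priority {priority})")

# Ageing (not shown) would periodically decrease the stored priority number of
# entries that have waited a long time, so that "batch job" cannot starve.
```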
RR
Round Robin
Robin (the bird) is called "rödhake" in Swedish.
In general, round-robin refers to a pattern or
ordering whereby items are encountered or
processed sequentially, often beginning again at the
start in a circular manner.
Source: https://en.wikipedia.org/wiki/Round-robin 2016-01-31
Round Robin is one of the simplest CPU scheduling
algorithms that also prevents starvation.
Etymology
The phrase round-robin actually has nothing whatever to do with a bird, robin or otherwise.
Source: https://en.wikipedia.org/wiki/Round-robin_(document) 2016-02-02
★ The term round-robin dates from the 17th-century French
ruban rond (round ribbon).
★ Originally, a round-robin was a document signed by multiple parties in a circle.
★ Round-robin described the practice of signatories to
petitions against authority (usually Government officials
petitioning the Crown) appending their names on a
document in a non-hierarchical circle or ribbon pattern (and
so disguising the order in which they have signed) so that
none may be identified as a ringleader.
ready queue
ready queue
I/O queue   I/O event   I/O request
process
termination
fork a child
RR
Round Robin (RR) is a scheduling algorithm where time slices
are assigned to each process in equal portions and in circular
order.
Round Robin (RR)
CPU
T
time slice
Metric and description:
★ Performance: a context dependent metric. What do we mean by performance?
★ Waiting time: amount of time a process has been waiting in the ready queue.
★ Turnaround time: amount of time to execute a particular process.
★ Response time: amount of time it takes from when a request was submitted until the first response is produced.
Characteristics of CPU scheduling algorithms
Each process gets a small unit of CPU time (time quantum),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready
queue.
If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time in
chunks of at most q time units at once.
Round Robin (RR)
Response time and extreme behaviours (RR)
A nice property of RR is that there is an upper bound on the response time.
Upper bound for response time: (n - 1)q time units. For example, with n = 5 processes in the ready queue and q = 20 ms, a newly ready process waits at most 4 × 20 = 80 ms before it first gets the CPU.
Extreme behaviours
★ q large ⇒ RR degenerates into FCFS/FIFO.
★ q small ⇒ q must be large with respect to the context switch time, otherwise the overhead is too high.
The Gantt chart
READY QUEUE
Process Burst Time
P1 24
P2 3
P3 3
Example of RR with Time Quantum = 4
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
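The Gantt chart above can be reproduced by modelling the ready queue as a FIFO deque; a sketch (illustrative only) assuming all three processes arrive at time 0:

```python
from collections import deque

def round_robin(bursts, q):
    """bursts: dict name -> CPU burst; all processes arrive at time 0."""
    ready = deque(bursts.items())
    clock, finish = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(q, remaining)
        clock += run
        remaining -= run
        if remaining:
            ready.append((name, remaining))   # time slice expired: back of the queue
        else:
            finish[name] = clock              # completion time = turnaround (arrival at 0)
    return finish

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, q=4))
# {'P2': 7, 'P3': 10, 'P1': 30}   (same order of execution as the Gantt chart)
```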
Turnaround time
Amount of time to execute a
particular process.
Response time
Amount of time it takes from
when a request was
submitted until the first
response is produced.
preemption
Round Robin typically has higher average turnaround times than SJF, but better response times.
Time Quantum and Context Switch Time
The RR time quantum (q) affects the number of context switches for
a process.
Average Turnaround Time = (13 + 6 + 7 + 17)/4 = 10.75
q = 3 P1 P2 P3 P4 P1 P4 P4
0 3 6 7 10 13 16 17
q = 7 P1 P2 P3 P4
0 6 9 10 17
Average Turnaround Time = (6 + 9 + 10 + 17)/4 = 10.5
Average Turnaround Time = (15 + 8 + 9 + 17)/4 = 12.25
q = 5 P1 P2 P3 P4 P1 P4
0 5 8 9 14 15 17
Turnaround time varies with the time quantum
Turnaround time = amount of time to execute a particular process.
When using Round Robin scheduling, the average turnaround
time will depend on the time quantum (q).
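Reusing the round_robin() sketch from the earlier example, the three average turnaround times can be reproduced. The burst times are not stated in the extracted slide text, but can be read off the q = 7 Gantt chart above (P1 = 6, P2 = 3, P3 = 1, P4 = 7, all assumed to arrive at time 0):

```python
# Assumes the round_robin() helper defined in the earlier Round Robin sketch.
bursts = {"P1": 6, "P2": 3, "P3": 1, "P4": 7}
for q in (3, 5, 7):
    finish = round_robin(bursts, q)                  # turnaround times (arrival at 0)
    print(q, sum(finish.values()) / len(finish))
# q=3 -> 10.75,  q=5 -> 12.25,  q=7 -> 10.5
```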
Foreground
and
background
processes
Foreground and background
processes
Sometimes processes can easily be classified into two groups: one set of processes that interacts with users and one set that doesn't.
Background process (batch)
A process that don’t interacts with any user is called a
background process or a batch process.
Foreground process (interactive)
A process that interacts with users is called a foreground process or an interactive process.
Multilevel
queue
scheduling
General classification of processes:
★ foreground (interactive)
★ background (batch)
In a multi-level queue scheduling algorithm, there will be n queues, where n is the number of groups the processes are classified into.
★ Each queue will be assigned a priority and will have its own
scheduling algorithm like Round-robin scheduling or FCFS.
Multilevel queue scheduling (1)
A multi-level queue scheduling algorithm is used in scenarios where the processes can be classified into groups based on properties like process type, CPU time, I/O access, memory size, etc.
Source: https://en.wikipedia.org/wiki/Multilevel_queue 2016-02-02
Each queue has its own
scheduling algorithm.
There must be scheduling
among the queues.
For example, the foreground queue may have absolute priority over the
background queue. If an interactive process enters the ready queue while a
batch process is running, the batch process will be preempted.
Multilevel queue scheduling (2)
Use several ready queues. A process is permanently assigned to
one queue, generally based on some property of the process, such
as memory size, process priority, or process type.
Ready queue is partitioned into separate queues:
★ foreground (interactive)
★ background (batch)
Each queue has its own scheduling algorithm
★ foreground – RR
★ background – FCFS
Multilevel queue scheduling (3)
How to schedule between the various queues?
★ Fixed priority scheduling; (i.e., serve all from
foreground then from background). Possibility of
starvation.
★ Time slice – each queue gets a certain amount of
CPU time which it can schedule amongst its
processes.
Multilevel queue scheduling (4)
Scheduling must be done between the queues.
CPU time
80 %
Round Robin (RR)
Foreground processes
20 %
First Come First Served (FCFS)
Background processes
Multilevel queue scheduling (5)
Example of time slicing among multilevel queues. Foreground processes are given 80 % of the CPU time for RR scheduling and background processes are given 20 % of the CPU time for FCFS scheduling.
Multilevel
feedback queue
scheduling
The idea is to separate processes according
to the characteristics of their CPU bursts. If
a process uses too much CPU time it will be
moved to a lower-priority queue.
Design objectives
Multilevel feedback queue scheduling design objectives.
★ Give preference to short CPU bursts.
★ Give preference to I/O bound processes.
★ Separate processes into categories based on
their need for the CPU.
Source: https://en.wikipedia.org/wiki/Multilevel_feedback_queue 2018-02-07
Q0 - RR, q = 8 ms
Q1 - RR, q = 16 ms
Q2 - FCFS
Priorities
• The scheduler first executes all
processes in Q0.
• Only when Q0 is empty will it
execute processes in Q1.
• Similarly, processes in Q2 will only
be executed if Q0 and Q1 are
empty.
Preemption
• A process that arrives at Q1 will
preempt a process in Q2.
• A process in Q1 will in turn be
preempted by a process arriving at
Q0.
Example
Q0 - RR, q = 8 ms
Q1 - RR, q = 16 ms
Q2 - FCFS
1. A new job enters queue Q0, which is served FCFS. Each job gets at most 8 milliseconds of CPU time.
2. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
3. At Q1 the job is again served FCFS and receives 16 additional milliseconds.
4. If it still does not complete, it is preempted and moved to queue Q2.
5. Once in Q2, processes are scheduled using FCFS but are run only when Q0 and Q1 are empty.
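A toy version of this three-level feedback queue, illustrative only: it models CPU demand alone (no I/O, and no preemption when a new job arrives at a higher queue) and demotes a job whenever it uses up its whole quantum. A real scheduler would also preempt a Q2 job as soon as something arrives at Q0 or Q1, as described above.

```python
from collections import deque

def mlfq(jobs, quanta=(8, 16, None)):        # None = run to completion (FCFS)
    """jobs: dict name -> total CPU time needed (ms). Returns completion times."""
    queues = [deque(jobs.items()), deque(), deque()]
    clock, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name, need = queues[level].popleft()
        quantum = quanta[level] if quanta[level] is not None else need
        run = min(quantum, need)
        clock += run
        need -= run
        if need == 0:
            finish[name] = clock
        else:
            queues[min(level + 1, 2)].append((name, need))   # used its quantum: demote
    return finish

print(mlfq({"A": 5, "B": 30, "C": 12}))      # {'A': 5, 'C': 41, 'B': 47}
```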
Indefinite
blocking
(starvation)
In computer science, starvation is a problem encountered in
multitasking where a process is perpetually denied necessary
resources. Without those resources, the program can never finish
its task.
Problem with any sort of priority scheduling?
A process that is ready to run but waiting for the CPU can be
considered blocked.
A priority scheduling algorithm can leave some low-priority
processes waiting indefinitely.
A steady stream of higher-priority processes can prevent a low-
priority process from ever getting the CPU.
Indefinite blocking (starvation)
Ageing
Ageing is a scheduling technique
used to avoid starvation.
Ageing is used to ensure that jobs with lower priority will eventually
complete their execution.
★ Ageing can be implemented by increasing the priority of a
process as time progresses.
Ageing
A process can move between the
various queues.
★ Ageing can be implemented
this way.
Q0 - RR, q = 8 ms
Q1 - RR, q = 16 ms
Q2 - FCFS
Multilevel feedback queue
scheduling
Source: https://en.wikipedia.org/wiki/Aging_(scheduling) 2016-02-05

More Related Content

PPTX
Software development process
PDF
Inter Process Communication
PDF
U-Boot - An universal bootloader
PPTX
Cgroups, namespaces and beyond: what are containers made from?
PPTX
Computer hardware
PDF
The Mobile Ecosystem
PPTX
System engineering
PPTX
THE COMPUTER MOTHERBOARD AND ITS COMPONENTS
Software development process
Inter Process Communication
U-Boot - An universal bootloader
Cgroups, namespaces and beyond: what are containers made from?
Computer hardware
The Mobile Ecosystem
System engineering
THE COMPUTER MOTHERBOARD AND ITS COMPONENTS

What's hot (20)

PPTX
Parallel Programming
PPTX
Operating system 08 time sharing and multitasking operating system
PDF
PPTX
Linux Device Driver’s
PPTX
Device Drivers
PDF
Embedded Android : System Development - Part IV
PPT
Lecture 2
PPT
Learning AOSP - Android Linux Device Driver
PDF
CPU vs. GPU presentation
PPT
Chapter 13 - I/O Systems
PDF
Iterative process planning.pdf
PPTX
Round robin scheduling
PPT
CHAPTER 6 REQUIREMENTS MODELING: SCENARIO based Model , Class based moddel
PPT
HCI 3e - Ch 17: Models of the system
PPTX
Linux device drivers
PPSX
Requirement Elicitation
PPTX
Operating system 23 process synchronization
PPTX
Best Basic Computer Training in Ambala ! BATRA COMPUTER CENTRE
PPT
Prototype model
PPT
Features of tcp (part 2) .68
Parallel Programming
Operating system 08 time sharing and multitasking operating system
Linux Device Driver’s
Device Drivers
Embedded Android : System Development - Part IV
Lecture 2
Learning AOSP - Android Linux Device Driver
CPU vs. GPU presentation
Chapter 13 - I/O Systems
Iterative process planning.pdf
Round robin scheduling
CHAPTER 6 REQUIREMENTS MODELING: SCENARIO based Model , Class based moddel
HCI 3e - Ch 17: Models of the system
Linux device drivers
Requirement Elicitation
Operating system 23 process synchronization
Best Basic Computer Training in Ambala ! BATRA COMPUTER CENTRE
Prototype model
Features of tcp (part 2) .68
Ad

Similar to Module 3-cpu-scheduling (20)

PPTX
CPU Scheduling Criteria CPU Scheduling Criteria (1).pptx
PPTX
Operating System.pptx
PPTX
Module- Operating Systems presentation Two
DOCX
Process concept
PPTX
Os unit 3 , process management
PPTX
Lecture 4 process cpu scheduling
PDF
Operating System
DOCX
Process scheduling
PPTX
Process Scheduling Algorithms | Interviews | Operating system
PDF
operating systems classification university lec 2
PPT
OS_Unit_II_Ch3 Process and CPU Scheduling
PDF
Operating System-Concepts of Process
PDF
Lecture- 2_Process Management.pdf
PPT
Chapter No 4 CPU Scheduling and Algorithms.ppt
PPTX
Unit 2_OS process management
PPTX
LM9 - OPERATIONS, SCHEDULING, Inter process xommuncation
PDF
Ch6 cpu scheduling
PDF
OS - Process Concepts
PPTX
Operating System Process Management.pptx
PDF
OS-Process.pdf
CPU Scheduling Criteria CPU Scheduling Criteria (1).pptx
Operating System.pptx
Module- Operating Systems presentation Two
Process concept
Os unit 3 , process management
Lecture 4 process cpu scheduling
Operating System
Process scheduling
Process Scheduling Algorithms | Interviews | Operating system
operating systems classification university lec 2
OS_Unit_II_Ch3 Process and CPU Scheduling
Operating System-Concepts of Process
Lecture- 2_Process Management.pdf
Chapter No 4 CPU Scheduling and Algorithms.ppt
Unit 2_OS process management
LM9 - OPERATIONS, SCHEDULING, Inter process xommuncation
Ch6 cpu scheduling
OS - Process Concepts
Operating System Process Management.pptx
OS-Process.pdf
Ad

Recently uploaded (20)

PDF
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
PPTX
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
PPTX
Understanding_Digital_Forensics_Presentation.pptx
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PDF
Network Security Unit 5.pdf for BCA BBA.
PPTX
Cloud computing and distributed systems.
PDF
Empathic Computing: Creating Shared Understanding
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PDF
Electronic commerce courselecture one. Pdf
DOCX
The AUB Centre for AI in Media Proposal.docx
PDF
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
PPTX
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
PDF
MIND Revenue Release Quarter 2 2025 Press Release
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PDF
Chapter 3 Spatial Domain Image Processing.pdf
PPTX
Programs and apps: productivity, graphics, security and other tools
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
Reach Out and Touch Someone: Haptics and Empathic Computing
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
Understanding_Digital_Forensics_Presentation.pptx
Building Integrated photovoltaic BIPV_UPV.pdf
Network Security Unit 5.pdf for BCA BBA.
Cloud computing and distributed systems.
Empathic Computing: Creating Shared Understanding
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
Electronic commerce courselecture one. Pdf
The AUB Centre for AI in Media Proposal.docx
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
MIND Revenue Release Quarter 2 2025 Press Release
Agricultural_Statistics_at_a_Glance_2022_0.pdf
Chapter 3 Spatial Domain Image Processing.pdf
Programs and apps: productivity, graphics, security and other tools

Module 3-cpu-scheduling

  • 1. Operating systems 2018 1DT044 and 1DT096 2018-02-07 karl.marklund@it.uu.se Uppsala University CPU scheduling Lecture 3 Module 3
  • 3. ready queue CPU I/O queue I/O event I/O request job termination job creation Multiprogramming A schematic view of multiprogramming A job only leave the CPU when requesting I/O. In RAM In RAM
  • 4. ready queue CPU I/O queue I/O event I/O request job termination job creation Multitasking A schematic view of multitasking time slice expires A job can be forced to leave the CPU by a timer. In RAM In RAM
  • 5. ready queue CPU I/O queue I/O event I/O request process termination time slice expires fork a child a new child process is created Process creation Processes are created by forking, i.e., the parent process creates a new process where the new process is a copy of the parent. In RAM In RAM
  • 6. ready queue ready queue CPU I/O queueI/O event I/O request process termination time slice expires fork a child STS The Short-term scheduler (STS), aka CPU scheduler, selects which process in the in memory ready queue that should be executed next and allocates CPU. Short-term scheduler In RAM In RAM
  • 7. Scheduler dispatch The CPU scheduler selects one process from among the processes in memory that are READY to execute. The scheduler dispatcher then gives the selected process control of the CPU. This action is called scheduler dispatch (SD). ready running terminated waiting new SD
  • 8. Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: ★ switching context ★ switching to user mode ★ jumping to the proper location in the user program to resume execution of that program. Dispatch latency: time it takes for the dispatcher to stop one process and start another.
  • 10. Process Control Block (PCB) The process control block (PCB) is a data structure in the operating system kernel containing the information needed to manage a particular process.  Source https://en.wikipedia.org/wiki/Process_control_block 2018-01-21 In brief, the PCB serves as the repository for any information that may vary from process to process.
  • 11. Process Control Block (PCB) Process id (PID) Process state (new, ready, running, waiting or terminated) CPU Context I/O status information Memory management information CPU scheduling information Example of information stored in the PCB
  • 12. ready queue ready queue CPU I/O queueI/O event I/O request process termination time slice expires fork a childLTS STS The Long-term scheduler (LTS) (aka job scheduler) decides whether a new process should be brought into the ready queue in main memory or delayed. When a process is ready to execute, it is added to the job pool (on disk). When RAM is sufficiently free, some processes are brought from the job pool to the ready queue (in RAM). Long-term scheduler (1) In RAM In RAM Job pool In secondary storage
  • 13. ready queue ready queue CPU I/O queueI/O event I/O request process termination time slice expires fork a childLTS STS On some systems, the long-term scheduler may be absent or minimal. For example, time-sharing systems such as UNIX and Microsoft Windows systems often have no long-term scheduler but simply put every new process in memory for the short-term scheduler. In RAM In RAM Job pool In secondary storage Long-term scheduler (2)
  • 14. ready queue CPU I/O queueI/O event I/O request process termination time slice expires fork a childLTS STS Swapped outMTS MTS The medium-term scheduler (MTS) temporarily removes processes from main memory and places them in secondary storage and vice versa, which is commonly referred to as "swapping in" and "swapping out". Medium-term scheduler In RAM In RAM In secondary storage Job pool In secondary storage
  • 16. ready queue CPU I/O queueI/O event I/O request process termination time slice expires fork a child Short Term Scheduler (STS) Algorithms? Scheduling algorithms
  • 17. Perfomance Performance is a context dependent metric. What do we mean by performance?
  • 19. ready queue CPU I/O queue I/O event I/O request job termination job creation Multiprogramming Multiprogramming maximises CPU utilisation.
  • 20. Scheduling criteria CPU utilisation is not the only criteria ...
  • 21. Criteria Definition Goal CPU utilization The % of time the CPU is executing user level process code. Maximize Throughput Number of processes that complete their execution per time unit. Maximize Turnaround time Amount of time to execute a particular process. Minimize Waiting time Amount of time a process has been waiting in the ready queue. Minimize Response time Amount of time it takes from when a request was submitted until the first response is produced. Minimize Scheduling criteria
  • 22. All animals are equal, but some animals are more equal than others. George Orwell Animal farm (1945) (An example of political satire)
  • 23. Are all processes equal, or are some processes more equal than others? Karl Marklund Operating systems 2018
  • 24. Classification of processes Do all processes behave the same? Do all processes have the same needs?
  • 25. CPU bursts and I/O bursts When a program executes it alternates between CPU bursts and I/O bursts.
  • 26. Histogram of CPU-burst times Number of CPU bursts CPU burst duration (ms) Long CPU burst are very rare - why? Long CPU burst means long period working with memory and registers without any output to screen or any input from a user or any input from file or any output to file. But, eventually, every program does at least one of these. Even background programs are usually reading/writing files.
  • 27. An I/O-bound process spends more time doing I/O than computations and is characterised by many short CPU bursts. An CPU-bound process spends more time doing computations and is characterised by few very long CPU bursts.
  • 28. ready queue CPU I/O queueI/O event I/O request process termination time slice expires fork a childLTS STS Swapped outMTS MTS The medium-term scheduler (MTS) can be used to maintain a good balance between I/O bound and CPU bound processes in the ready queue. I/O bound and CPU bound processes In RAM In RAM In secondary storage
  • 29. Process A Process B Process Z The operating systems controls the hardware and coordinates its use among the various application programs for the various user. An operating system provides an environment for the execution of programs. Computer hardware Human user Human user Not all processes interacts with human users.
  • 30. ★ Interactive ★ Batch ★ Real-Time ★ I/O Bound ★ CPU Bound In general, processes can be classified by the following characteristics. Classification of processes
  • 31. Interactive Interactive processes interact constantly with their human users. ★ Spend a lot of time waiting for keypresses and mouse operations. ★ When input is received, the process must be woken up quickly, or the user will find the system to be unresponsive. ★ Typically, the average delay must fall between 50 and 150 ms. The variance of such delay must also be bounded, or the user will find the system to be erratic. Typical examples: ★ Command shells and interpreters, text editors, graphical applications and games.
  • 32. Batch Batch processes do not interact with human users. ★ Do not need to be responsive. ★ Often run in the background. ★ Often penalised by the scheduler. Typical examples: ★ Compilers, database search engines, and scientific computations.
  • 33. Real-time Real-time processes have very strong scheduling requirements. ★ Such processes should never be blocked by lower- priority processes. ★ Should have a short response time. ★ Most important, response time should have a minimum variance. Typical examples: ★ Video and sound applications, robot controllers, and programs that collect data from physical sensors.
  • 34. The two classifications above are somewhat independent. For instance, a batch process can be either I/O-bound (e.g., a database server) or CPU-bound (e.g., an image-rendering program). Interactive Batch Real-time CPU-bound IO-bound Relation? In general, there is no way to distinguish between interactive and batch programs. In order to offer a good response time to interactive applications, Linux (like all Unix kernels) implicitly favours I/O-bound processes over CPU-bound ones.
  • 36. ready queue CPU I/O queueI/O event I/O request process termination fork a child Short Term Scheduler (STS) Algorithms • First-Come, First-Served (FCFS) • Shortest Job First (SJF) • Round-Robin (RR) • Other alternatives? Scheduling algorithms
  • 38. Trace with CPU and I/O burst times Evaluation of CPU schedulers by simulation To evaluate different CPU scheduling algorithms, data obtained from instrumented executables can be used as input in simulations.
  • 40. ready running terminated waiting new Simplified model of CPU scheduling To make it easy to reason about CPU scheduling we will use a simplified model.
  • 41. Events causing a scheduler dispatch ready running terminated waiting new SD 1 2 5 4 3
  • 42. 1 A new process arrives at the ready queue. ready running terminated waiting new SD 1 2 5 4 3 2 Time slice expires or other forms of preemption. 3 The running process terminates. 4 The running process requests I/O. 5 An I/O request completes.
  • 43. Preemptive and nonpreemptive Scheduler dispatch can be preemptive or nonpreemptive. ready running terminated waiting new SD 1 2 5 4 3 A preemptive dispatch is caused by an event external to the running running process. A nonpreemptive dispatch is caused by the running process itself.
  • 44. Events causing a preemptive scheduler dispatch: Events causing a nonpreemptive scheduler dispatch: Preemptive and nonpreemptive Scheduler dispatch can be preemptive or nonpreemptive. 1 2 3 4 ready running terminated waiting new SD 1 2 5 4 3 5
  • 45. ready running terminated waiting new 4 3 CPU burst With CPU burst we mean the time spent by a running process using the CPU before terminating or performing a blocking system call. 3 4
  • 46. ready running terminated waiting new Preemption A process may get preempted , i.e., forced off the CPU and put back in the ready queue before completing its CPU burst. P P
  • 47. ready running terminated waiting new Process arrival We make no difference between a new process arriving to the ready queue and a process coming back to the ready queue after completion of a blocking system call. 1 5 1 5
  • 48. ready running terminated waiting new Waiting time With waiting time we mean the total time a process has been waiting in the ready queue until terminating or performing a blocking system call. A process may get preempted and forced off the CPU and put back in the ready queue before completing its CPU burst and have more waiting time added. 4 3 3 4 P P
  • 49. ready running terminated waiting new Response time The model we use to study and compare different CPU scheduling algorithms is abstract and don't take into account what a response is and that it may take time to produce a response once a task gets to execute on the CPU. In this model response time is defined as the time from when a task enters the ready queue ( or ) to the time the task first gets to execute on the cpu .1 1 5 SD 5 SD
  • 50. PID CPU burst time Process representation When studying CPU scheduling a process will be represented by process ID (PID) and the next CPU burst time.
  • 52. ready queue ready queue CPU I/O queueI/O event I/O request process termination fork a child FCFS The first come, first served (commonly called FIFO ‒ first in, first out) process scheduling algorithm is the simplest process scheduling algorithm. Processes are executed on the CPU in the same order they arrive to the ready queue. First-Come, First-Served (FCFS)
  • 53. Ready queue (FIFO) CPU I/O queueI/O event I/O request process termination fork a child FCFS Scheduling algorithms P3 P2 P1 3 3 24
  • 54. Gantt Chart for the FCFS schedule PID Waiting time P1 0 P2 24 P3 27 Average waiting time (0 + 24 + 27)/3 = 17 PID P3 P2 P1 CPU burst time 3 3 24 P1 P2 P3 0 24 27 30 The processes arrive to the ready queue in the order: P1, P2, P3.
  • 55. PID P1 P3 P2 CPU burst time 24 3 3 What if the same processes arrive to the ready queue in the order: P2, P3, P1. PID Waiting time P1 6 P2 0 P3 3 Average waiting time (6 + 0 + 3)/3 = 3 P2 P3 P1 0 3 6 30 Gantt Chart for the FCFS schedule
  • 56. The convoy effect When using FCFS scheduling, if I/O bound (short CPU burst) processes are scheduled after CPU bound (long CPU burst) processes, the average waiting time increases. Long CPU burstShort CPU burstShort CPU burst
  • 57. PID P1 P3 P2 CPU burst time 24 3 3 PID Waiting time P1 6 P2 0 P3 3 P2 P3 P1 0 3 6 30 Gantt Charts for the FCFS schedules PID P3 P2 P1 CPU burst time 3 3 24 P1 P2 P3 0 24 27 30 Average waiting time PID Waiting time P1 0 P2 24 P3 27 (0 + 24 + 27)/3 = 17 Average waiting time (6 + 0 + 3)/3 = 3 The convoy effect When using FCFS scheduling, if I/O bound (short CPU burst) processes are scheduled after CPU bound (long CPU burst) processes, the average waiting time increases.
  • 58. First-Come, First-Served (FCFS) Source: https://en.wikibooks.org/wiki/Operating_System_Design/Scheduling_Processes/FCFS 2016-02-02 Advantages ★ Simple ★ Easy ★ First come, first served Disadvantages ★ This scheduling method is nonpreemptive, that is, a process will execute its CPU burst until it finishes. ★ Because of this nonpreemptive scheduling, short processes which are at the back of the queue have to wait for the long process at the front to finish making the average waiting time increase - the convoy effect.
  • 59. What is the optimal schedule? To answer this question we must first define what we mean with optimal.
  • 60. What schedule minimises the average waiting time? A better question
  • 61. In general, if we have CPU bursts x1, ... xn, calculate the total waiting time Twait. Twait = 0 + x1 + x1 + x2 + x1 + x2 + x3 + ... + x1 + x2 + x3 + ... + xn-1 = (n-1)x1 + (n-2)x2 + ... + xn-1 Now, calculate the average waiting time. Average(Twait) = [(n-1)x1 + (n-2)x2 + ... + xn-1]/n Scheduling the shortest job first gives the minimal average waiting time. The average waiting time is reduced if the xi's that are multiplied the most times are the smallest ones.
  • 63. ready queue ready queue CPU I/O queueI/O event I/O request process termination fork a child SJF Shortest Job First (SJF) scheduling assigns the process estimated to complete fastest, i.e, the process with shortest CPU burst, to the CPU as soon as CPU time is available. Shortest Job First (SJF)
  • 64. Shortest Job First (SJF) Shortest Job First (SJF) scheduling assigns the process estimated to complete fastest to the CPU as soon as CPU time is available. ★ Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest burst time. ★ Also knows as Shortest Process Next (SPN) scheduling. ★ Also known as Shortest job next (SJN) scheduling. ★ SJF is optimal – gives minimum average waiting time for a given set of processes. Source: https://en.wikibooks.org/wiki/Operating_System_Design/Scheduling_Processes/SPN 2016-02-02 https://en.wikipedia.org/wiki/Shortest_job_next 2016-02-02 The difficulty is knowing the length of the next CPU burst.
  • 65. Burst from the future?
  • 66. Can the the length of the next CPU burst be determined dynamically?
  • 67. Exponential averaging Can only estimate the length of the next CPU burst. Estimation can be done by using the length of previous CPU bursts, using exponential averaging.
  • 68. α = 0 τn+1 = τn Recent history does not affect the estimate. α =1 τn+1 = α tn Only the actual last CPU burst affect the estimate. Analysis: what happens when α → 0 or when α → 1 ?
  • 70. τ0 = 10, α = 0.5 = 0.5*(previous burst + previous estimate) 8 6 6 5 9 1110 τ1 = 0.5*( t0 + τ0) = 0.5*(6 + 10) = 8 0 1 2 3 4 5 6 7 4 6 4 13 13 136 Time 12 13 Exponential averaging example 13 8 13
  • 71. Exponential averaging If we expand the formula for Tn+1 by substituting for Tn we get: Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor.
  • 72. ready queue ready queue CPU I/O queueI/O event I/O request process termination fork a child SJF Can use exponential averaging to estimate the next CPU burst for each process. Shortest Job First (SJF)
  • 73. Gantt Chart for the SJF schedule Average waiting time (3 + 16 + 9 + 0)/4 = 28/4 = 7 PID P4 P3 P2 P1 CPU burst time 3 7 8 6 P4 P1 P3 P2 0 3 9 16 24 Processes in the ready queue. Process Waiting time P1 3 P2 16 P3 9 P4 0 Estimated values of CPU bursts
  • 74. ready queue ready queue CPU I/O queueI/O event I/O request process termination fork a child SJF But does all processes arrive at the same time to the ready queue?
  • 75. Arrival time Must keep track of when a process arrives to the ready queue.
  • 76. Ready queue Process Arrival time Burst Time P1 0 7 P2 2 4 P3 4 1 P4 5 4 Example of SJF P1 0 P3 7 P2 8 P4 12 16 Time Action 0 Only P1 ready to execute 7 When P1 finish, the process in the ready queue with the shortest burst time is selected for dispatch, in this example P3. 8 When P3 finish, both P2 and P4 have burst time 4. Use FIFO to break ties. In this example P2 arrives before P4, hence P2 is selected for execution 12 When P2 finish, only P4 remains in the ready queue. 16 The ready queue is now empty, all processes done. Gantt Chart for the SJF schedule
  • 77. Ready queue SJF Process Arrival time Burst Time Tdispatch - Tarrival = Waiting time P1 0 7 0 - 0 = 0 P2 2 4 8 - 2 = 6 P3 4 1 7 - 4 = 3 P4 5 4 12 - 5 = 7 Example of SJF - Average waiting time Average waiting time (0 + 6 + 3 + 7)/4 = 16/4 = 4 P1 0 P3 7 P2 8 P4 12 16 Gantt Chart for the SJF schedule
  • 78. ready queue ready queue CPU I/O queueI/O event I/O request process termination fork a child ? When a process with a shorter burst time compared to the currently scheduled process arrives to the ready queue ... ... wouldn’t it be more optimal (minimising the average waiting time) to preempt the running process and switch to newly arrived process? A P A P
  • 79. ready running terminated waiting new Preemption A process may get preempted , i.e., forced off the CPU and put back in the ready queue before completing its CPU burst. P P
  • 81. ready queue ready queue CPU I/O queueI/O event I/O request process termination fork a child PSJFA P An extension of SJF where the currently running process is preempted if the CPU burst of a process arriving to the ready queue is shorted than the remaining CPU burst of the currently running process. Preemptive Shortest Job First (PSJF) P A
  • 82. Preemptive Shortest Job First (PSJF) The currently running process is preempted if the CPU burst of a process arriving to the ready queue is shorted than the remaining CPU burst of the currently running process. ★ Also known as shortest remaining time first (SRTF). ★ The currently executing process will always run until completion or until a new process is added to the ready queue that requires a smaller amount of time to complete. ★ Shortest remaining time is advantageous because short processes are handled very quickly. ★ Requires very little overhead since a decision is made only when a process completes or a new process is added, and when a new process is added the algorithm only needs to compare the currently executing process with the new process, ignoring all other processes currently in the ready queue. Source: https://en.wikipedia.org/wiki/Shortest_remaining_time 2016-02-02
  • 83. SJF: The currently running process is allowed to continue to execute. PSJF: The currently running process is preempted if the CPU burst of the newly arrived process is shorter than the remaining CPU burst of the currently running process. ready running terminated new 1 5 waiting SJV vs PSJF 1 5&
  • 84. Ready queue Process Arrival time Burst time P1 0 7 P2 2 4 P3 4 1 P4 5 4 Example of PSJF P1 0 P2 2 P3 4 P2 5 P1 11 16 T Action 0 P1 is the only process ready to run. 2 When P2 arrives, P1 is preempted since the burst time of P2 (4) is smaller than the remaining burst time of P1 (5). 4 When P3 arrives, P2 is preempted since the burst time of P3 (1) is smaller than the remaining burst time of P1 (5) and P2 (2). 5 P3 is done and P4 (4) arrives to the ready queue where P1 (5) and P2 (2) already waits. P2 has the smallest remaining burst time and is selected to run next. 7 P2 is done. P1 (5) and P4 (4) waits in the ready queue. P4 (4) has the smallest remaining burst time and is selected to run next. 11 P4 is done. Only P1 (5) waits in the ready queue and is selected to run next. 16 The ready queue is now empty, all processes done. Gantt Chart for the PSJF schedule P4 7 P P
  • 85. Example of PSJF - Average waiting time

Gantt chart for the PSJF schedule: P1 [0-2] | P2 [2-4] | P3 [4-5] | P2 [5-7] | P4 [7-11] | P1 [11-16]

Process  Arrival time  Burst time  Waiting time (Tdispatch - Tarrival)
P1       0             7           0 - 0 = 0, 11 - 2 = 9
P2       2             4           2 - 2 = 0, 5 - 4 = 1
P3       4             1           4 - 4 = 0
P4       5             4           7 - 5 = 2

Processes may have to wait in the ready queue more than once.

Average waiting time: (9 + 1 + 0 + 2)/4 = 12/4 = 3
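Not part of the original slides: a minimal Python sketch of PSJF/SRTF, assuming time advances in whole time units and that the waiting time is computed as turnaround time minus burst time (equivalent to summing the per-dispatch waits in the table above); the process data is taken from the example.

# Preemptive SJF (SRTF): every time unit, run the arrived process with the
# smallest remaining burst time; dict order (FIFO by arrival) breaks ties.
processes = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]

def psjf(procs):
    arrival = {name: arr for name, arr, _ in procs}
    burst = {name: b for name, _, b in procs}
    remaining = dict(burst)
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                                  # CPU idle until the next arrival
            time += 1
            continue
        name = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[name] -= 1
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = time
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrival[n] - burst[n] for n in finish}

w = psjf(processes)
print(w)                         # {'P3': 0, 'P2': 1, 'P4': 2, 'P1': 9}
print(sum(w.values()) / len(w))  # 3.0, as on the slide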
  • 86. SJF vs PSJF

Process  Arrival time  Burst time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF, average waiting time = 4:  P1 [0-7] | P3 [7-8] | P2 [8-12] | P4 [12-16]
PSJF, average waiting time = 3: P1 [0-2] | P2 [2-4] | P3 [4-5] | P2 [5-7] | P4 [7-11] | P1 [11-16]

SJF gives the optimal average waiting time for a given set of processes currently in the ready queue. PSJF aims at decreasing the average waiting time by allowing a newly arriving process to preempt the currently running process if the CPU burst of the new process is shorter than what remains of the currently running process.
  • 88. Priority scheduling. A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). ★ Preemption – should a higher-priority process be allowed to preempt a running process with lower priority? ★ Starvation – low-priority processes may never execute. ★ Ageing – ensure that jobs with lower priority will eventually complete their execution. Ageing can be implemented by increasing the priority of a process as time progresses. ★ SJF is a priority scheduling algorithm where the priority is the predicted length of the next CPU burst.
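Not part of the original slides: a minimal Python sketch of a priority-ordered ready queue using a min-heap, following the slide's convention that a smaller integer means a higher priority; the process names and priority values are made up for illustration.

import heapq

# Min-heap keyed on (priority, arrival order): heappop always returns the
# highest-priority process, and the arrival counter breaks ties FIFO.
ready, counter = [], 0

def admit(name, priority):
    global counter
    heapq.heappush(ready, (priority, counter, name))
    counter += 1

def dispatch():
    priority, _, name = heapq.heappop(ready)
    return name

admit("editor", 1)      # hypothetical processes and priorities
admit("backup", 5)
admit("compiler", 3)
print(dispatch())       # editor (priority 1 = highest priority)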
  • 90. Robin = Rödhake in Swedish In general, round-robin refers to a pattern or ordering whereby items are encountered or processed sequentially, often beginning again at the start in a circular manner. Source: https://en.wikipedia.org/wiki/Round-robin 2016-01-31
  • 91. Round Robin is one of the simplest CPU scheduling algorithms that also prevents starvation.
  • 92. Etymology The phrase round-robin actually has nothing whatever to do with a bird, robin or any other kind. Source: https://en.wikipedia.org/wiki/Round-robin_(document) 2016-02-02 ★ The term round-robin dates from the 17th-century French ruban rond (round ribbon). ★ Originally, a round-robin was a document signed by multiple parties in a circle. ★ Round-robin described the practice of signatories to petitions against authority (usually Government officials petitioning the Crown) appending their names on a document in a non-hierarchical circle or ribbon pattern (thereby disguising the order in which they signed) so that none could be identified as a ringleader.
  • 93. Round Robin (RR). Round Robin (RR) is a scheduling algorithm where time slices are assigned to each process in equal portions and in circular order. (Diagram: the multitasking queueing diagram, with the CPU running each process for one time slice at a time.)
  • 94. Characteristics of CPU scheduling algorithms

Metric           Description
Performance      A context-dependent metric. What do we mean by performance?
Waiting time     Amount of time a process has been waiting in the ready queue.
Turnaround time  Amount of time to execute a particular process.
Response time    Amount of time from when a request was submitted until the first response is produced.
  • 95. Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. Round Robin (RR)
  • 96. Response time and extreme behaviours (RR). A nice property of RR is that there is an upper bound on the response time. Upper bound for response time: (n - 1)q time units. Extreme behaviours: ★ q large ⇒ RR degenerates to FCFS/FIFO. ★ q small ⇒ very frequent context switches; q must be large with respect to the context-switch time, otherwise the overhead is too high.
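Worked example of the bound (numbers made up for illustration): with n = 5 processes in the ready queue and a quantum of q = 20 ms, no process waits more than (5 - 1) × 20 = 80 ms before getting its next turn on the CPU.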
  • 97. Example of RR with Time Quantum = 4

Ready queue:
Process  Burst time
P1       24
P2       3
P3       3

The Gantt chart: P1 [0-4] | P2 [4-7] | P3 [7-10] | P1 [10-14] | P1 [14-18] | P1 [18-22] | P1 [22-26] | P1 [26-30] (P1 is preempted at the end of each 4-unit quantum)

Turnaround time: amount of time to execute a particular process. Response time: amount of time from when a request was submitted until the first response is produced. Round Robin typically has higher average turnaround times than SJF, but better response times.
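Not part of the original slides: a minimal Python sketch of Round Robin for the example above (all three processes arriving at time 0); the quantum q is a parameter, so the same sketch can be rerun with the q values used on the following slides.

from collections import deque

# Round Robin: take the process at the head of the queue, run it for at most
# one quantum, and put it back at the tail of the queue if it is not finished.
processes = [("P1", 24), ("P2", 3), ("P3", 3)]

def rr(procs, q):
    queue = deque(procs)                            # (name, remaining burst time)
    time, dispatches, finish = 0, [], {}
    while queue:
        name, remaining = queue.popleft()
        dispatches.append((name, time))
        run = min(q, remaining)
        time += run
        if remaining > run:
            queue.append((name, remaining - run))   # quantum expired: requeue at the tail
        else:
            finish[name] = time
    return dispatches, finish

dispatches, finish = rr(processes, q=4)
print(dispatches)  # P1 at 0, P2 at 4, P3 at 7, then P1 at 10, 14, 18, 22, 26 as in the Gantt chart
print(finish)      # {'P2': 7, 'P3': 10, 'P1': 30}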
  • 98. Time Quantum and Context Switch Time The RR time quantum (q) affects the number of context switches for a process.
  • 99. Turnaround time varies with the time quantum

Turnaround time = amount of time to execute a particular process. When using Round Robin scheduling, the average turnaround time depends on the time quantum (q).

q = 7: P1 [0-6] | P2 [6-9] | P3 [9-10] | P4 [10-17]                                             Average turnaround time = (6 + 9 + 10 + 17)/4 = 10.5
q = 5: P1 [0-5] | P2 [5-8] | P3 [8-9] | P4 [9-14] | P1 [14-15] | P4 [15-17]                     Average turnaround time = (15 + 8 + 9 + 17)/4 = 12.25
q = 3: P1 [0-3] | P2 [3-6] | P3 [6-7] | P4 [7-10] | P1 [10-13] | P4 [13-16] | P4 [16-17]        Average turnaround time = (13 + 6 + 7 + 17)/4 = 10.75
  • 101. Foreground and background processes. Sometimes processes can easily be classified into two groups: one set of processes that interacts with users and one set that doesn't. Background process (batch): a process that does not interact with any user is called a background process or a batch process. Foreground process (interactive): a process that interacts with users is called a foreground process or an interactive process.
  • 103. Multilevel queue scheduling (1). A multilevel queue scheduling algorithm is used in scenarios where the processes can be classified into groups based on properties like process type, CPU time, I/O access, memory size, etc. General classification of processes: ★ foreground (interactive) ★ background (batch). In a multilevel queue scheduling algorithm there will be n queues, where n is the number of groups the processes are classified into. ★ Each queue will be assigned a priority and will have its own scheduling algorithm, such as Round Robin or FCFS. Source: https://en.wikipedia.org/wiki/Multilevel_queue 2016-02-02
  • 104. Each queue has its own scheduling algorithm. There must be scheduling among the queues. For example, the foreground queue may have absolute priority over the background queue. If an interactive process enters the ready queue while a batch process is running, the batch process will be preempted. Multilevel queue scheduling (2) Use several ready queues. A process is permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.
  • 105. Ready queue is partitioned into separate queues: ★ foreground (interactive) ★ background (batch) Each queue has its own scheduling algorithm ★ foreground – RR ★ background – FCFS Multilevel queue scheduling (3) How to select scheduling algorithms for the various queues?
  • 106. ★ Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation. ★ Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes. Multilevel queue scheduling (4) Scheduling must be done between the queues.
  • 107. CPU time: 80 % Round Robin (RR) for foreground processes, 20 % First Come First Served (FCFS) for background processes. Multilevel queue scheduling (5) Example of time slicing among multilevel queues. Foreground processes are given 80 % of the CPU time for RR scheduling and background processes are given 20 % of the CPU time for FCFS scheduling.
  • 108. Multilevel feedback queue scheduling The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time it will be moved to a lower-priority queue.
  • 109. Design objectives Multilevel feedback queue scheduling design objectives. ★ Give preference to short CPU bursts. ★ Give preference to I/O bound processes. ★ Separate processes into categories based on their need for the CPU. Source: https://en.wikipedia.org/wiki/Multilevel_feedback_queue 2018-02-07
  • 110. Q0 - RR, q = 8 ms Q1 - RR, q = 16 ms Q2 - FCFS Priorities • The scheduler first executes all processes in Q0. • Only when Q0 is empty will it execute processes in Q1. • Similarly, processes in Q2 will only be executed if Q0 and Q1 are empty. Preemption • A process that arrives at Q1 will preempt a process in Q2. • A process in Q1 will in turn be preempted by a process arriving at Q0.
  • 111. Example Q0 - RR, q = 8 ms; Q1 - RR, q = 16 ms; Q2 - FCFS. 1. A new job enters queue Q0, which is served FCFS; each job gets at most 8 milliseconds of CPU time. 2. If it does not finish in 8 milliseconds, the job is moved to queue Q1. 3. At Q1 the job is again served FCFS and receives 16 additional milliseconds. 4. If it still does not complete, it is preempted and moved to queue Q2. 5. Once in Q2, processes are scheduled using FCFS but are run only when Q0 and Q1 are empty.
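Not part of the original slides: a minimal Python sketch of the demotion logic for the three queues above (Q0: RR, q = 8 ms; Q1: RR, q = 16 ms; Q2: FCFS). It is simplified: it schedules one job at a time and does not model the preemption of lower queues by new arrivals described on the previous slide; the job name and burst time are made up for illustration.

from collections import deque

# Q0: RR q = 8 ms, Q1: RR q = 16 ms, Q2: FCFS (None = run to completion).
QUANTA = [8, 16, None]
queues = [deque(), deque(), deque()]

def admit(name, burst):
    queues[0].append((name, burst))              # new jobs always enter Q0

def schedule_one():
    for level, queue in enumerate(queues):       # highest-priority non-empty queue
        if queue:
            name, remaining = queue.popleft()
            q = QUANTA[level]
            run = remaining if q is None else min(q, remaining)
            remaining -= run
            if remaining > 0:                    # used its whole quantum: demote it
                queues[min(level + 1, 2)].append((name, remaining))
            return name, level, run
    return None                                  # all queues empty

admit("job_a", 30)                               # hypothetical 30 ms CPU burst
while (step := schedule_one()) is not None:
    print(step)   # ('job_a', 0, 8), ('job_a', 1, 16), ('job_a', 2, 6)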
  • 113. Indefinite blocking (starvation). In computer science, starvation is a problem encountered in multitasking where a process is perpetually denied necessary resources. Without those resources, the program can never finish its task. What is the problem with any sort of priority scheduling? A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely: a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
  • 114. Ageing Ageing is a scheduling technique used to avoid starvation.
  • 115. Ageing is used to ensure that jobs with lower priority will eventually complete their execution. ★ Ageing can be implemented by increasing the priority of a process as time progresses. ★ In multilevel feedback queue scheduling (Q0 - RR, q = 8 ms; Q1 - RR, q = 16 ms; Q2 - FCFS) a process can move between the various queues; ageing can be implemented this way, e.g., by promoting a process that has waited a long time in Q2 back up to Q1 or Q0. Source: https://en.wikipedia.org/wiki/Aging_(scheduling) 2016-02-05
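Not part of the original slides: a minimal Python sketch of ageing for priority scheduling, assuming a lower number means a higher priority (as on slide 88) and that every clock tick the priority number of each waiting process is decreased by one; the process names, priorities and number of ticks are made up for illustration.

# Ageing: every tick, each process still waiting in the ready queue has its
# numeric priority lowered (i.e., its priority raised), clamped at 0, so even
# a low-priority process eventually reaches the top and gets the CPU.
AGEING_STEP = 1

def age(ready_queue):
    for proc in ready_queue:
        proc["priority"] = max(0, proc["priority"] - AGEING_STEP)

ready_queue = [{"name": "batch_job", "priority": 20},
               {"name": "editor", "priority": 2}]
for tick in range(18):       # simulate 18 clock ticks of waiting
    age(ready_queue)
print(ready_queue)  # [{'name': 'batch_job', 'priority': 2}, {'name': 'editor', 'priority': 0}]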