Scheduling topic in operating systems (OS)
Silberschatz, Galvin and Gagne ©2013 – Operating System Concepts, 9th Edition
Chapter 6: CPU Scheduling
• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
• Thread Scheduling
• Multiple-Processor Scheduling
• Real-Time CPU Scheduling
• Operating Systems Examples
• Algorithm Evaluation
Objectives
• To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
• To describe various CPU-scheduling algorithms
• To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system
• To examine the scheduling algorithms of several operating systems
Basic Concepts
• In a single-processor system, only one process can run at a time. Others must wait until the CPU is free and can be rescheduled.
• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The idea is relatively simple.
• A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU then just sits idle.
• All this waiting time is wasted; no useful work is accomplished.
• With multiprogramming, we try to use this time productively. Several processes are kept in memory at one time.
• When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process.
• This pattern continues. Every time one process has to wait, another process can take over use of the CPU.
• Scheduling of this kind is a fundamental operating-system function.
• Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources.
• Thus, its scheduling is central to operating-system design.
Basic Concepts
• Maximum CPU utilization is obtained with multiprogramming
• CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
• A CPU burst is followed by an I/O burst
• The CPU burst distribution is of main concern
CPU–I/O Burst Cycle
• The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states.
• Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on.
• Eventually, the final CPU burst ends with a system request to terminate execution.
• The durations of CPU bursts have been measured extensively. Although they vary greatly from process to process and from computer to computer, they tend to have a frequency curve similar to that shown in Figure 6.2.
• The curve is generally characterized as exponential or hyperexponential, with a large number of short CPU bursts and a small number of long CPU bursts.
CPU Scheduler
• The short-term scheduler selects from among the processes in the ready queue and allocates the CPU to one of them
  - The queue may be ordered in various ways
• CPU-scheduling decisions may take place when a process:
  1. Switches from running to waiting state
  2. Switches from running to ready state (for example, when an interrupt occurs)
  3. Switches from waiting to ready (for example, at completion of I/O)
  4. Terminates
• For situations 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, for situations 2 and 3.
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive
  - Consider access to shared data
  - Consider preemption while in kernel mode
  - Consider interrupts occurring during crucial OS activities
Nonpreemptive & Preemptive Scheduling
• Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
• This scheduling method was used by Microsoft Windows 3.x.
• Windows 95 introduced preemptive scheduling, and all subsequent versions of Windows operating systems have used preemptive scheduling.
• Preemptive scheduling can result in race conditions when data are shared among several processes.
• Consider the case of two processes that share data. While one process is updating the data, it is preempted so that the second process can run.
• The second process then tries to read the data, which are in an inconsistent state.
• Preemption also affects the design of the operating-system kernel.
• During the processing of a system call, the kernel may be busy with an activity on behalf of a process.
• Such activities may involve changing important kernel data (for instance, I/O queues).
Dispatcher
• The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
  - switching context
  - switching to user mode
  - jumping to the proper location in the user program to restart that program
• Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
• Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another.
• In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.
• The criteria include the following:
• CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily loaded system).
• Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be ten processes per second.
• Turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
• Waiting time. Waiting time is the sum of the periods spent waiting in the ready queue.
• Response time. In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.
Scheduling Algorithm Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
First-Come, First-Served (FCFS) Scheduling

Process | Burst Time
P1      | 24
P2      | 3
P3      | 3

• Suppose that the processes arrive in the order: P1, P2, P3. The Gantt chart for the schedule is:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
• If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the preceding Gantt chart, which is a bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes.
FCFS Scheduling (Cont.)
• Suppose that the processes arrive in the order: P2, P3, P1. The Gantt chart for the schedule is:
| P2 (0–3) | P3 (3–6) | P1 (6–30) |
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case
• Convoy effect – short processes stuck behind a long process
  - Consider one CPU-bound and many I/O-bound processes
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst
  - Use these lengths to schedule the process with the shortest time
• SJF is optimal – it gives the minimum average waiting time for a given set of processes
  - The difficulty is knowing the length of the next CPU request
  - Could ask the user
Example of SJF

Process | Arrival Time | Burst Time
P1      | 0.0          | 6
P2      | 2.0          | 8
P3      | 4.0          | 7
P4      | 5.0          | 3

• SJF scheduling chart:
| P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |
• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
• The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4.
• Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
• By comparison, if we were using the FCFS scheduling scheme, the average waiting time would be 10.25 milliseconds.
• The real difficulty with the SJF algorithm is knowing the length of the next CPU request.
• For long-term (job) scheduling in a batch system, we can use the process time limit that a user specifies when submitting the job.
• In this situation, users are motivated to estimate the process time limit accurately, since a lower value may mean faster response but too low a value will cause a time-limit-exceeded error and require resubmission.
• SJF scheduling is used frequently in long-term scheduling.
• Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term CPU scheduling.
• With short-term scheduling, there is no way to know the length of the next CPU burst.
• One approach to this problem is to try to approximate SJF scheduling.
• We may not know the length of the next CPU burst, but we may be able to predict its value.
• We expect that the next CPU burst will be similar in length to the previous ones.
• By computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted CPU burst.
Determining Length of Next CPU Burst
• The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts.
• Define:
  1. t_n = actual length of the nth CPU burst
  2. τ_{n+1} = predicted value for the next CPU burst
  3. α, 0 ≤ α ≤ 1
  4. τ_{n+1} = α t_n + (1 − α) τ_n
• The value of t_n contains our most recent information, while τ_n stores the past history.
• The parameter α controls the relative weight of recent and past history in our prediction.
Prediction of the Length of the Next CPU Burst
Examples of Exponential Averaging
• α = 0
  - τ_{n+1} = τ_n
  - Recent history does not count (current conditions are assumed to be transient)
• α = 1
  - τ_{n+1} = t_n
  - Only the actual last CPU burst counts
• If we expand the formula, we get:
  τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
Example of Shortest-Remaining-Time-First
• Now we add the concepts of varying arrival times and preemption to the analysis

Process | Arrival Time | Burst Time
P1      | 0            | 8
P2      | 1            | 4
P3      | 2            | 9
P4      | 3            | 5

• Preemptive SJF Gantt chart:
| P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26) |
• Average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 msec
Priority Scheduling
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
  - Preemptive
  - Nonpreemptive
• SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time
• Problem: Starvation – low-priority processes may never execute
• Solution: Aging – as time progresses, increase the priority of the process
Example of Priority Scheduling

Process | Burst Time | Priority
P1      | 10         | 3
P2      | 1          | 1
P3      | 2          | 4
P4      | 1          | 5
P5      | 5          | 2

• Priority scheduling Gantt chart:
| P2 (0–1) | P5 (1–6) | P1 (6–16) | P3 (16–18) | P4 (18–19) |
• Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec
Round Robin (RR)
• Each process gets a small unit of CPU time (time quantum q), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units.
• A timer interrupts every quantum to schedule the next process
• Performance:
  - q large ⇒ behaves like FIFO
  - q small ⇒ q must still be large with respect to context-switch time, otherwise overhead is too high
Example of RR with Time Quantum = 4

Process | Burst Time
P1      | 24
P2      | 3
P3      | 3

• The Gantt chart is:
| P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) |
• Typically, higher average turnaround than SJF, but better response
• q should be large compared to the context-switch time
• q is usually 10 ms to 100 ms; a context switch takes < 10 μs
• Let's calculate the average waiting time for this schedule.
• P1 waits for 6 milliseconds (10 − 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds.
• Thus, the average waiting time is 17/3 = 5.66 milliseconds.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units.
• Each process must wait no longer than (n − 1) × q time units until its next time quantum.
• For example, with five processes and a time quantum of 20 milliseconds, each process will get up to 20 milliseconds every 100 milliseconds.
• In the RR scheduling algorithm, no process is allocated the CPU for more than one time quantum in a row (unless it is the only runnable process).
• If a process's CPU burst exceeds one time quantum, that process is preempted and put back in the ready queue.
• The RR scheduling algorithm is thus preemptive.
• The performance of the RR algorithm depends heavily on the size of the time quantum.
• At one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS policy.
• In contrast, if the time quantum is extremely small (say, 1 millisecond), the RR approach can result in a large number of context switches.
• Assume, for example, that we have only one process of 10 time units.
• If the quantum is 12 time units, the process finishes in less than 1 time quantum, with no overhead.
• If the quantum is 6 time units, however, the process requires 2 quanta, resulting in a context switch.
• If the time quantum is 1 time unit, then nine context switches will occur, slowing the execution of the process accordingly (Figure 6.4).
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum
80% of CPU bursts
should be shorter than q
Multilevel Queue
• The ready queue is partitioned into separate queues, e.g.:
  - foreground (interactive)
  - background (batch)
• Each process is permanently assigned to a given queue
• Each queue has its own scheduling algorithm:
  - foreground – RR
  - background – FCFS
• Scheduling must also be done between the queues:
  - Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
  - Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS
Multilevel Feedback Queue
• A process can move between the various queues; aging can be implemented this way
• A multilevel-feedback-queue scheduler is defined by the following parameters:
  - number of queues
  - scheduling algorithm for each queue
  - method used to determine when to upgrade a process
  - method used to determine when to demote a process
  - method used to determine which queue a process will enter when that process needs service
Example of Multilevel Feedback Queue
• Three queues:
  - Q0 – RR with time quantum 8 milliseconds
  - Q1 – RR with time quantum 16 milliseconds
  - Q2 – FCFS
• Scheduling:
  - A new job enters queue Q0, which is served FCFS
    - When it gains the CPU, the job receives 8 milliseconds
    - If it does not finish in 8 milliseconds, the job is moved to queue Q1
  - At Q1 the job is again served FCFS and receives 16 additional milliseconds
    - If it still does not complete, it is preempted and moved to queue Q2
Thread Scheduling
• Distinction between user-level and kernel-level threads
• When threads are supported, threads are scheduled, not processes
• In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP
  - Known as process-contention scope (PCS), since the scheduling competition is within the process
  - Typically done via a priority set by the programmer
• Scheduling a kernel thread onto an available CPU is system-contention scope (SCS) – competition among all threads in the system
Processes (heavyweight) vs. threads (user-level):
• System calls are involved in process switching; no system call is involved in switching user-level threads.
• The OS treats different processes differently; all user-level threads are treated as a single task by the OS.
• Different processes have different copies of data, files, and code (creating overhead); threads share the same copy of code and data.
• Context switching between processes is slower; context switching between threads is faster.
• Blocking one process will not block another; blocking one thread blocks the entire process.
• Processes are independent; threads are interdependent.
User-level threads vs. kernel-level threads:
• User-level threads are managed by a user-level library; kernel-level threads are managed by the OS (via system calls).
• User-level threads are typically fast; kernel-level threads are slower.
• Context switching between user-level threads is faster; between kernel-level threads it is slower.
• If one user-level thread performs a blocking operation, the entire process gets blocked; if one kernel-level thread is blocked, there is no effect on the others.
Pthread Scheduling
• The API allows specifying either PCS or SCS during thread creation
  - PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
  - PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
• Can be limited by the OS – Linux and Mac OS X allow only PTHREAD_SCOPE_SYSTEM
Pthread Scheduling API

#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param);  /* defined after main, on the next slide */

int main(int argc, char *argv[]) {
  int i, scope;
  pthread_t tid[NUM_THREADS];
  pthread_attr_t attr;
  /* get the default attributes */
  pthread_attr_init(&attr);
  /* first inquire on the current scope */
  if (pthread_attr_getscope(&attr, &scope) != 0)
    fprintf(stderr, "Unable to get scheduling scope\n");
  else {
    if (scope == PTHREAD_SCOPE_PROCESS)
      printf("PTHREAD_SCOPE_PROCESS");
    else if (scope == PTHREAD_SCOPE_SYSTEM)
      printf("PTHREAD_SCOPE_SYSTEM");
    else
      fprintf(stderr, "Illegal scope value.\n");
  }
Pthread Scheduling API
/* set the scheduling algorithm to PCS or SCS */
pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
/* create the threads */
for (i = 0; i < NUM_THREADS; i++)
pthread_create(&tid[i],&attr,runner,NULL);
/* now join on each thread */
for (i = 0; i < NUM_THREADS; i++)
pthread_join(tid[i], NULL);
}
/* Each thread will begin control in this function */
void *runner(void *param)
{
/* do some work ... */
pthread_exit(0);
}
Multiple-Processor Scheduling
• Load sharing becomes possible
• CPU scheduling is more complex when multiple CPUs are available
• Consider homogeneous processors within a multiprocessor
  - The processors are identical in functionality
  - We can then use any available processor to run any process in the queue
• Even with homogeneous multiprocessors, there are sometimes limitations on scheduling
Approaches to Multiple-Processor Scheduling
• Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing
  - All scheduling decisions, I/O processing, and other system activities are handled by a single processor – the master server
  - The other processors execute only user code
  - Simple, because only one processor accesses the system data structures, reducing the need for data sharing
• Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes may be in a common ready queue, or each processor may have its own private queue of ready processes
  - Currently the most common approach; Windows, Linux, and Mac OS X all support SMP
Processor Affinity
• Processor affinity – a process has an affinity for the processor on which it is currently running
  - The data most recently accessed by the process populate the cache for that processor
  - As a result, successive memory accesses by the process are often satisfied in cache memory
  - If the process migrates to another processor, the cache of the second processor must be repopulated
  - Because of the high cost of invalidating and repopulating caches, most SMP systems try to avoid migration of processes from one processor to another
• soft affinity
• hard affinity
• Variations, including processor sets
• Soft affinity: when an operating system has a policy of attempting to keep a process running on the same processor – but not guaranteeing that it will do so – we have a situation known as soft affinity.
• Hard affinity: some systems provide system calls that support hard affinity, thereby allowing a process to specify a subset of processors on which it may run.
• Linux implements soft affinity, but it also provides the sched_setaffinity() system call, which supports hard affinity.
Multiple-Processor Scheduling – Load Balancing
• With SMP, all CPUs need to be kept loaded for efficiency
• Load balancing attempts to keep the workload evenly distributed
• Push migration – a periodic task checks the load on each processor and, if it finds an imbalance, pushes tasks from an overloaded CPU to other CPUs
• Pull migration – an idle processor pulls a waiting task from a busy processor
Multicore Processors
• Recent trend: place multiple processor cores on the same physical chip
• Faster and consumes less power
• Multiple threads per core is also a growing trend
  - Takes advantage of a memory stall to make progress on another thread while the memory retrieval happens
Real-Time CPU Scheduling
• Real-time systems are systems that carry real-time tasks.
• These tasks need to be performed immediately, with a certain degree of urgency.
• They are classified into hard and soft real-time tasks.
Real-Time CPU Scheduling
• CPU scheduling for real-time operating systems involves special issues
• Soft real-time systems – no guarantee as to when a critical real-time process will be scheduled; they guarantee only that the process will be given preference over noncritical processes
  - Soft real-time systems have degraded performance if their timing needs cannot be met. Example: streaming video
• Hard real-time systems – a task must be serviced by its deadline; service after the deadline is considered the same as no service at all
  - Hard real-time systems have total failure if their timing needs cannot be met. Examples: assembly-line robotics, automobile air-bag deployment
Real-Time CPU Scheduling
• When an event occurs, the system must respond to and service it as quickly as possible.
• We refer to event latency as the amount of time that elapses from when an event occurs to when it is serviced.
• Different events have different latency requirements.
• Two types of latencies affect performance:
  1. Interrupt latency – the period of time from the arrival of an interrupt to the start of the routine that services the interrupt
  2. Dispatch latency – the time for the scheduler to take the current process off the CPU and switch to another
Minimizing Latency
• A real-time system is event driven in nature. When an event occurs, the system must respond to and service it as quickly as possible.
• Event latency is the time between the occurrence of a triggering event and the completion of the system's response to the event.
[Figure: timeline from t0, when event E first occurs, to t1, when the real-time system responds to E; the interval between them is the event latency]
Interrupt Latency
• When an interrupt occurs, the operating system must first complete the instruction it is executing and determine the type of interrupt that occurred.
• It must then save the state of the current process before servicing the interrupt using the specific interrupt service routine (ISR).
• The total time required to perform these tasks is the interrupt latency.
Real-Time CPU Scheduling (Cont.) – Interrupt Latency
Dispatch Latency
• The amount of time required for the scheduling dispatcher to stop one process and start another is known as dispatch latency.
• Providing real-time tasks with immediate access to the CPU mandates that real-time operating systems minimize this latency as well.
• The most effective technique for keeping dispatch latency low is to provide preemptive kernels.
Real-Time CPU Scheduling (Cont.) – Dispatch Latency
• The conflict phase of dispatch latency has two components:
  1. Preemption of any process running in kernel mode
  2. Release by low-priority processes of resources needed by high-priority processes
- 73.
• Using a technique known as an admission-control algorithm, each task must specify its needs at the time it attempts to launch.
• The scheduler does one of two things: it either admits the process, guaranteeing that the process will complete on time, or
• rejects the request as impossible if it cannot guarantee that the task will be serviced by its deadline.
• The order in which real-time tasks execute depends on their priorities.
• Task A: deadline = 5, execution time = 3
• Task B: deadline = 7, execution time = 4
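The admit-or-reject decision above can be sketched in a few lines. This is a minimal illustration (function and variable names are hypothetical, not from the text): run the candidate task set in earliest-deadline order and admit the new task only if every task still finishes by its deadline.

```python
def admit(tasks, new_task):
    """Admission-control sketch. Each task is (deadline, exec_time).
    Run tasks in earliest-deadline order; admit the new task only if
    every task would finish by its deadline."""
    candidate = sorted(tasks + [new_task])   # earliest deadline first
    finish = 0
    for deadline, exec_time in candidate:
        finish += exec_time                  # cumulative completion time
        if finish > deadline:                # a deadline would be missed
            return False                     # reject as impossible
    return True                              # admit: completion guaranteed

current = [(5, 3)]             # Task A: deadline = 5, execution time = 3
print(admit(current, (7, 4)))  # Task B (deadline = 7, exec time = 4): True
```

With the slide's figures, A finishes at time 3 and B at time 7, so B is admitted; a task B with deadline 6 would be rejected.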
- 74.
Priority-Based Scheduling
• For real-time scheduling, the scheduler must support preemptive, priority-based scheduling
• But this only guarantees soft real-time
• For hard real-time, the scheduler must also provide the ability to meet deadlines
• Processes have new characteristics:
• Periodic processes require the CPU at constant intervals.
• Once a periodic process has acquired the CPU, it has a fixed processing time t, a deadline d by which it must be serviced by the CPU, and a period p.
• 0 ≤ t ≤ d ≤ p
• The rate of a periodic task is 1/p
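The t, d, p characterization of a periodic process can be captured directly in code. A small sketch (the class name is hypothetical) that enforces the constraint 0 ≤ t ≤ d ≤ p and exposes the rate 1/p:

```python
from dataclasses import dataclass

@dataclass
class PeriodicTask:
    """A periodic real-time task: processing time t, deadline d,
    period p, subject to 0 <= t <= d <= p."""
    t: float   # fixed processing time per period
    d: float   # deadline by which each burst must complete
    p: float   # period between successive CPU requests

    def __post_init__(self):
        assert 0 <= self.t <= self.d <= self.p, "requires 0 <= t <= d <= p"

    @property
    def rate(self):
        return 1 / self.p   # how often the task requires the CPU

p1 = PeriodicTask(t=20, d=50, p=50)
print(p1.rate)   # 0.02, i.e. one burst every 50 time units
```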
- 76.
Rate Monotonic Scheduling
• What is unusual about this form of scheduling is that a process may have to announce its deadline requirements to the scheduler.
• Then, using a technique known as an admission-control algorithm, the scheduler does one of two things:
• it either admits the process, guaranteeing that the process will complete on time, or
• rejects the request as impossible if it cannot guarantee that the task will be serviced by its deadline.
- 77.
• The rate-monotonic scheduling algorithm schedules periodic tasks using a static priority policy with preemption.
• A higher-priority task preempts a lower-priority one.
• Upon entering the system, each periodic task is assigned a priority inversely based on its period:
• the shorter the period, the higher the priority.
• The policy is to assign a higher priority to tasks that require the CPU more often.
• This assumes that the processing time of a periodic process is the same for each CPU burst;
• that is, every time a process acquires the CPU, the duration of its CPU burst is the same.
- 78.
Rate Monotonic Scheduling
• A priority is assigned based on the inverse of the task's period:
• shorter period = higher priority; longer period = lower priority
• Example: two processes, P1 and P2
• Periods: 50 (P1), 100 (P2)
• Burst times: 20 (P1), 35 (P2)
• Deadlines: before the start of the next period
• P1 is therefore assigned a higher priority than P2.
- 79.
Rate Monotonic Scheduling
Consider two processes, P1 and P2. The periods for P1 and P2 are 50 and 100, respectively; that is, p1 = 50 and p2 = 100. The processing times are t1 = 20 for P1 and t2 = 35 for P2. The deadline for each process requires that it complete its CPU burst by the start of its next period.
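The P1/P2 example can be checked with a small discrete-time simulation. This is an illustrative sketch (the function is hypothetical, not from the text): tasks with shorter periods get higher priority, and a deadline is missed if a job is still unfinished when its next period begins.

```python
def rm_simulate(tasks, horizon):
    """Rate-monotonic simulation sketch. tasks: list of (period, burst).
    Returns True if every job finishes by the start of its next period
    within the horizon, False if any deadline is missed."""
    tasks = sorted(tasks)              # shorter period first = higher priority
    remaining = [0] * len(tasks)       # work left in each task's current job
    for now in range(horizon):
        for i, (period, burst) in enumerate(tasks):
            if now % period == 0:      # a new period begins
                if remaining[i] > 0:   # previous job still unfinished
                    return False       # deadline missed
                remaining[i] = burst   # release the next job
        for i in range(len(tasks)):    # run the highest-priority ready task
            if remaining[i] > 0:
                remaining[i] -= 1      # one time unit of CPU
                break
    return all(r == 0 for r in remaining)

# The slide's example: p1 = 50, t1 = 20; p2 = 100, t2 = 35.
print(rm_simulate([(50, 20), (100, 35)], 100))   # True: both meet deadlines

# A heavier load (figures assumed for illustration) can miss a deadline:
print(rm_simulate([(50, 25), (80, 35)], 400))    # False
```

In the first case, P1 runs at 0–20 and 50–70, and P2 fills the gaps (20–50 and 70–75), finishing well before its deadline at 100.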
- 80.
Missed Deadlines with Rate Monotonic Scheduling
- 81.
Earliest Deadline First Scheduling (EDF)
• Priorities are assigned dynamically according to deadlines: the earlier the deadline, the higher the priority; the later the deadline, the lower the priority.
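The EDF rule reduces to picking, among the ready jobs, the one with the earliest absolute deadline. A minimal sketch (names hypothetical):

```python
def edf_pick(ready_jobs):
    """EDF dispatch sketch. ready_jobs: list of (name, absolute_deadline).
    Returns the name of the job with the earliest deadline, i.e. the
    highest EDF priority."""
    return min(ready_jobs, key=lambda job: job[1])[0]

print(edf_pick([("P1", 50), ("P2", 30), ("P3", 90)]))   # P2
```

Because deadlines change as new jobs arrive, EDF priorities are dynamic: this selection is re-evaluated whenever a job becomes ready or completes.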
- 82.
Proportional Share Scheduling
• Proportional share schedulers operate by allocating T shares among all applications in the system.
• An application receives N shares, where N < T, ensuring that it will receive N/T of the total processor time.
- 83.
• Example: T = 100 shares is to be divided among three processes, A, B, and C.
A is assigned 50 shares (50%)
B is assigned 15 shares (15%)
C is assigned 20 shares (20%)
• Proportional share schedulers must work in conjunction with an admission-control policy to guarantee that an application receives its allocated shares of time.
• This policy will admit a client requesting a particular number of shares only if sufficient shares are available.
• A + B + C = 50 + 15 + 20 = 85 shares allocated
• If D requests 30 shares, the request is denied, since only 15 shares remain.
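The share-based admission check above is a one-line comparison. A minimal sketch using the slide's numbers (function and variable names are hypothetical):

```python
T = 100   # total shares in the system

def admit_shares(allocated, request):
    """Admit a request only if enough unallocated shares remain,
    so every admitted client keeps its guaranteed N/T of the CPU."""
    return sum(allocated.values()) + request <= T

allocated = {"A": 50, "B": 15, "C": 20}   # 85 of 100 shares in use
print(admit_shares(allocated, 30))        # False: D's request for 30 is denied
print(admit_shares(allocated, 15))        # True: 15 shares are still available
```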