CPU Scheduling
CPU Scheduling
Basic Concepts
CPU Scheduler (Short-term Scheduler)
Dispatcher
Scheduling Criteria
Optimization Criteria
First-Come, First-Served (FCFS) Algorithm
First-Come, First-Served (FCFS) Algorithm
Shortest Job First (SJF) Algorithm
Example of Non-preemptive SJF Algorithm
Example of preemptive SJF Algorithm
Priority CPU Scheduling
 In the Shortest Job First scheduling algorithm, the priority of a process is effectively the inverse of its
CPU burst time: the larger the burst time, the lower the priority of that process.
 In priority scheduling, the priority is not always the inverse of the CPU burst time;
rather, it can be set internally or externally.
 Scheduling is done on the basis of process priority: the most urgent process is
processed first.
 Processes with the same priority are executed in FCFS order.
 The priority of a process, when internally defined, can be decided based on memory requirements, time
limits, number of open files, the ratio of I/O burst to CPU burst, etc.
 External priorities, by contrast, are set based on criteria outside the operating system, such as the importance of the
process, funds paid for computer resource use, etc.
Types of Priority Scheduling Algorithm
 There are two types:
1. Preemptive Priority Scheduling:
If a newly arrived process in the ready queue has a higher priority than the
currently running process, the CPU is preempted.
2. Non-Preemptive Priority Scheduling:
If a new process arrives with a higher priority than the currently running
process, the incoming process is put at the head of the ready queue; it will
be processed only after the current process finishes execution.
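The non-preemptive case can be sketched in a few lines. The process names, bursts, and priority values below are illustrative assumptions (smaller number = higher priority), not data from the slides:

```python
# Non-preemptive priority scheduling sketch (smaller number = higher priority).
# The process list below uses made-up illustrative values.

def priority_schedule(processes):
    """processes: list of (name, burst, priority), all assumed to arrive at t=0.
    Returns {name: waiting_time}. Ties are broken in FCFS (input) order."""
    # A stable sort keeps FCFS order for processes with equal priority.
    order = sorted(processes, key=lambda p: p[2])
    waiting, clock = {}, 0
    for name, burst, _ in order:
        waiting[name] = clock   # time spent in the ready queue before running
        clock += burst          # run to completion: no preemption
    return waiting

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
print(priority_schedule(procs))  # P2 runs first, then P5, P1, P3, P4
```

With these values the waiting times are 0, 1, 6, 16, and 18 ms, for an average of 8.2 ms.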
Example of Priority Scheduling Algorithm
Example of Priority Scheduling Algorithm
Priority CPU Scheduling
 Priority scheduling can suffer from a major problem known as indefinite
blocking, or starvation, in which a low-priority task can wait forever because
there are always some other jobs around that have higher priority.
 If this problem is allowed to occur, then a starved process will either run eventually,
when the system load lightens (say, at 2:00 a.m.), or will eventually be lost
when the system is shut down or crashes. (There are rumors of jobs that have
been stuck for years.)
 One common solution to this problem is aging, in which priorities of jobs
increase the longer they wait. Under this scheme a low-priority job will
eventually get its priority raised high enough that it gets run.
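Aging can be sketched as a periodic priority boost. The priority range and boost rate below are illustrative assumptions (lower value = higher priority, boost of 1 per tick waited):

```python
# Aging sketch: a process's priority improves the longer it waits, so a
# low-priority job cannot starve. Assumptions: 0 is the highest priority,
# and priority improves by `boost_per_tick` for every tick spent waiting.

def age(priorities, waited_ticks, boost_per_tick=1):
    """Return new priorities after applying aging (lower value = higher priority)."""
    return {name: max(0, prio - boost_per_tick * waited_ticks[name])
            for name, prio in priorities.items()}

prios = {"batch_job": 120, "editor": 10}
# After the batch job has waited 120 ticks it reaches priority 0 and must run.
print(age(prios, {"batch_job": 120, "editor": 0}))  # {'batch_job': 0, 'editor': 10}
```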
Round Robin Scheduling
 Round robin scheduling is similar to FCFS scheduling, except that CPU
bursts are assigned time limits called the time quantum.
 When a process is given the CPU, a timer is set for whatever value has
been set for a time quantum.
 If the process finishes its burst before the time quantum expires,
then it releases the CPU voluntarily, just as in the normal FCFS algorithm.
 If the timer goes off first, then the process is swapped out of the CPU and
moved to the back end of the ready queue.
 The ready queue is maintained as a circular queue, so when all processes
have had a turn, then the scheduler gives the first process another turn,
and so on.
Round Robin Scheduling
 RR scheduling can give the effect of all processes sharing the CPU equally,
although the average wait time can be longer than with other scheduling algorithms.
Round Robin Scheduling
 In the above diagram, arrival time is not mentioned so it is
taken as 0 for all processes.
 Note: If arrival time is not given for any problem statement
then it is taken as 0 for all processes; if it is given then the
problem can be solved accordingly.
Round Robin Scheduling
average waiting time = (11 + 5 + 15 + 13)/4 = 44/4 = 11 ms
RR Scheduling
RR Scheduling
RR Scheduling
Process  Arrival Time  Burst Time
P1       0             24
P2       1             3
P3       2             3

Gantt chart (time quantum = 4):
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
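The Gantt chart above can be reproduced with a short simulation. This is a minimal sketch of the RR mechanism described on the previous slides (circular ready queue, preemption on quantum expiry), using the table's values:

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst), sorted by arrival time.
    Returns (gantt, waiting): gantt is [(name, start, end), ...]."""
    remaining = {name: burst for name, _, burst in processes}
    arrivals = deque(processes)
    ready, gantt, finish = deque(), [], {}
    clock = 0
    while arrivals or ready:
        # Admit everything that has arrived by now.
        while arrivals and arrivals[0][1] <= clock:
            ready.append(arrivals.popleft()[0])
        if not ready:                    # CPU idle until the next arrival
            clock = arrivals[0][1]
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        start, clock = clock, clock + run
        gantt.append((name, start, clock))
        remaining[name] -= run
        # Processes that arrived during this slice queue ahead of the preempted one.
        while arrivals and arrivals[0][1] <= clock:
            ready.append(arrivals.popleft()[0])
        if remaining[name]:
            ready.append(name)           # back of the circular ready queue
        else:
            finish[name] = clock
    waiting = {name: finish[name] - arr - burst for name, arr, burst in processes}
    return gantt, waiting

gantt, waiting = round_robin([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)], quantum=4)
print(gantt)    # P1 0-4, P2 4-7, P3 7-10, then P1 runs 10-30 in 4 ms slices
print(waiting)  # {'P1': 6, 'P2': 3, 'P3': 5}
```

The waiting times come out as P1 = 6, P2 = 3, P3 = 5, for an average of 14/3 ≈ 4.67 ms.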
Advantages and Disadvantages
 The performance of RR is sensitive to the time quantum selected.
 If the quantum is large enough, then RR reduces to the FCFS algorithm; if it is very small, then each
process gets 1/n of the processor time and they share the CPU equally.
 BUT, a real system incurs overhead for every context switch, and the smaller the time quantum, the
more context switches there are (see Figure 5.4 below).
 Most modern systems use time quanta between 10 and 100 milliseconds, with context-switch times on
the order of 10 microseconds, so the overhead is small relative to the time quantum.
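A quick back-of-the-envelope check of that last claim, using the figures from the slide (10–100 ms quanta, ~10 µs switch time):

```python
# Rough estimate of the CPU fraction lost to context switching,
# using the illustrative figures quoted on the slide.

def switch_overhead(quantum_us, switch_us):
    """Fraction of CPU time spent on context switches (times in microseconds)."""
    return switch_us / (quantum_us + switch_us)

print(f"{switch_overhead(10_000, 10):.2%}")   # 10 ms quantum -> 0.10% overhead
print(f"{switch_overhead(100, 10):.2%}")      # 0.1 ms quantum -> 9.09% overhead
```

So a 10 ms quantum wastes about 0.1% of the CPU, while a 0.1 ms quantum would waste about 9%.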
Multilevel Queue Scheduling
 Another class of scheduling algorithms has been created for situations in
which processes are easily classified into different groups.
 For example, a common division is made between foreground (or
interactive) processes and background (or batch) processes.
 These two types of processes have different response-time requirements,
and so might have different scheduling needs.
 In addition, foreground processes may have priority over background
processes.
Multilevel Queue Scheduling
 A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
 The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type.
 Each queue has its own scheduling algorithm.
 For example, separate queues might be used for foreground and background
processes.
 The foreground queue might be scheduled by the Round Robin algorithm, while the
background queue is scheduled by an FCFS algorithm.
Multilevel Queue Scheduling
 In addition, there must be scheduling among the queues, which is
commonly implemented as fixed-priority preemptive scheduling.
 For example, the foreground queue may have absolute priority over
the background queue.
 Note that under this algorithm jobs cannot switch from queue to queue.
 Once they are assigned a queue, that is their queue until they finish.
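The foreground/background example above can be sketched as follows. The process names, bursts, and quantum are illustrative assumptions, and the sketch is simplified (all jobs present at the start, no preemption of a running background job):

```python
from collections import deque

# Multilevel queue sketch: a foreground queue scheduled round-robin with
# absolute (fixed) priority over a background queue scheduled FCFS.

def multilevel_queue(foreground, background, quantum):
    """foreground/background: lists of (name, burst). Returns the execution order."""
    fg = deque(foreground)        # round robin
    bg = deque(background)        # FCFS, runs only when fg is empty
    order = []
    while fg or bg:
        if fg:                    # fixed priority: foreground goes first
            name, burst = fg.popleft()
            order.append(name)
            if burst > quantum:   # unfinished slice: back of the foreground queue
                fg.append((name, burst - quantum))
        else:
            name, _ = bg.popleft()   # background job runs to completion
            order.append(name)
    return order

print(multilevel_queue([("edit", 6), ("shell", 3)],
                       [("batch1", 20), ("batch2", 5)], quantum=4))
```

The two interactive jobs alternate until done; only then do the batch jobs run, in FCFS order.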
Multilevel Queue Scheduling
Let us consider an example of a multilevel queue-scheduling
algorithm with five queues:
Multilevel Queue Scheduling
 Each queue has absolute priority over lower-priority
queues.
 No process in the batch queue, for example, could run
unless the queues for system processes, interactive
processes, and interactive editing processes were all
empty.
 If an interactive editing process entered the ready
queue while a batch process was running, the batch
process would be preempted.
Multilevel Feedback-Queue Scheduling
 Multilevel feedback queue scheduling is similar to the ordinary
multilevel queue scheduling described above, except jobs may
be moved from one queue to another for a variety of reasons:
❖If the characteristics of a job change between CPU-intensive
and I/O intensive, then it may be appropriate to switch a job
from one queue to another.
❖Aging can also be incorporated, so that a job that has waited
for a long time can get bumped up into a higher priority
queue for a while.
Multilevel Feedback-Queue Scheduling
Multilevel Feedback-Queue Scheduling
 Multilevel feedback queue scheduling is the most
flexible, because it can be tuned for any situation.
 But it is also the most complex to implement because of
all the adjustable parameters.
 Some of the parameters which define one of these
systems include:
❖ The number of queues.
❖ The scheduling algorithm for each queue.
❖ The methods used to upgrade or demote processes from
one queue to another. ( Which may be different. )
❖ The method used to determine which queue a process
enters initially.
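Those parameters can be made concrete in a small sketch. The quanta and the demotion rule (a job that uses its whole quantum drops one level) are illustrative assumptions; real systems tune these values:

```python
from collections import deque

# Multilevel feedback queue sketch exercising the parameters listed above:
# number of queues, per-queue quantum, initial queue, and a demotion rule.

def mlfq(processes, quanta):
    """processes: list of (name, burst); quanta: quantum per level, the last
    level using a very large quantum (effectively FCFS). Returns a trace of
    (name, level, time_run) tuples."""
    queues = [deque() for _ in quanta]
    for p in processes:
        queues[0].append(p)           # every job starts in the top-level queue
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, burst = queues[level].popleft()
        run = min(quanta[level], burst)
        trace.append((name, level, run))
        if burst > run:               # used the whole quantum: demote one level
            queues[min(level + 1, len(quanta) - 1)].append((name, burst - run))
    return trace

print(mlfq([("cpu_hog", 20), ("quick_io", 2)], quanta=[4, 8, 1_000_000]))
```

The short job finishes in the top queue, while the CPU-bound job sinks level by level, exactly the behavior the slides describe.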
Multiple-Processor Scheduling
 In multiple-processor scheduling, multiple CPUs are available and hence
load sharing becomes possible.
 However, multiple-processor scheduling is more complex than
single-processor scheduling.
 When the processors are identical (HOMOGENEOUS) in terms of their
functionality, any available processor can be used to run any process in the
queue.
Multiple-Processor Scheduling
 Approaches to Multiple-Processor Scheduling –
 Asymmetric Multiprocessing: All scheduling decisions and I/O
processing are handled by a single processor, called the master
server; the other processors execute only user code. This approach is simple
and reduces the need for data sharing.
 Symmetric Multiprocessing: Each processor is self-scheduling. All
processes may be in a common ready queue, or each processor may have its
own private queue of ready processes. Scheduling proceeds by
having the scheduler for each processor examine its ready queue and select
a process to execute.
Multiple-Processor Scheduling
 Processor Affinity –
 Processor Affinity means a process has an affinity for the processor on which it
is currently running.
 When a process runs on a specific processor there are certain effects on the
cache memory.
 The data most recently accessed by the process populate the cache of that
processor, so successive memory accesses by the process are often
satisfied from cache memory.
 If the process then migrates to another processor, the contents of the cache
memory must be invalidated on the first processor, and the cache of the
second processor must be repopulated.
 Because of the high cost of invalidating and repopulating caches, most
SMP (symmetric multiprocessing) systems try to avoid migrating processes
from one processor to another and instead keep a process running on the same
processor. This is known as PROCESSOR AFFINITY.
Multiple-Processor Scheduling
 There are two types of processor affinity:
 Soft Affinity – When an operating system has a policy of attempting to keep
a process running on the same processor but not guaranteeing it will do so,
this situation is called soft affinity.
 Hard Affinity – Hard affinity allows a process to specify a subset of
processors on which it may run. Some systems, such as Linux, implement
soft affinity but also provide system calls such as sched_setaffinity() that
support hard affinity.
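On Linux, Python exposes sched_setaffinity() through the os module, so hard affinity can be demonstrated directly. This sketch is Linux-specific (the calls raise AttributeError elsewhere); pid 0 means "the calling process":

```python
import os

# Linux-specific sketch: os.sched_getaffinity / os.sched_setaffinity wrap the
# sched_setaffinity() system call mentioned above. Pid 0 = this process.

allowed = os.sched_getaffinity(0)          # set of CPU ids we may run on
print("current affinity:", allowed)

if len(allowed) > 1:
    os.sched_setaffinity(0, {min(allowed)})  # hard affinity: pin to one CPU
    print("pinned to:", os.sched_getaffinity(0))
    os.sched_setaffinity(0, allowed)         # restore the original mask
```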
Multiple-Processor Scheduling
 Load Balancing –
 Load balancing is the practice of keeping the workload evenly distributed across all
processors in an SMP system.
 Load balancing is necessary only on systems where each processor has its own private
queue of processes eligible to execute.
 On SMP (symmetric multiprocessing) systems, it is important to keep the workload balanced
among all processors to fully utilize the benefits of having more than one processor; otherwise,
one or more processors will sit idle while other processors have high workloads and
lists of processes awaiting the CPU.
Multiple-Processor Scheduling
 There are two general approaches to load balancing :
 Push Migration – In push migration, a specific task routinely checks the load on each
processor; if it finds an imbalance, it evenly distributes the load by moving
processes from overloaded processors to idle or less busy ones.
 Pull Migration – Pull Migration occurs when an idle processor pulls a waiting task from a
busy processor for its execution.
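Push migration can be sketched as a balancer that repeatedly moves work from the busiest per-CPU run queue to the idlest one. The queue contents are illustrative assumptions:

```python
# Push-migration sketch: a periodic balancer task moves processes from the
# most-loaded per-CPU run queue to the least-loaded one.

def push_migrate(run_queues):
    """run_queues: {cpu_id: [process names]}. Moves one process per call from
    the busiest to the idlest queue if the imbalance exceeds one task.
    Returns True if a migration happened."""
    busiest = max(run_queues, key=lambda c: len(run_queues[c]))
    idlest = min(run_queues, key=lambda c: len(run_queues[c]))
    if len(run_queues[busiest]) - len(run_queues[idlest]) > 1:
        run_queues[idlest].append(run_queues[busiest].pop())
        return True
    return False

queues = {0: ["a", "b", "c", "d"], 1: []}
while push_migrate(queues):     # keep pushing until the queues are balanced
    pass
print(queues)   # {0: ['a', 'b'], 1: ['d', 'c']}
```

Pull migration would be the mirror image: the idle CPU's scheduler runs the check itself and takes a task from a busy queue.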
Multiple-Processor Scheduling
 Multicore Processors
 In multicore processors multiple processor cores are placed on the same physical chip.
 Each core has a register set to maintain its architectural state and thus appears to the
operating system as a separate physical processor.
 SMP systems that use multicore processors are faster and consume less power than
systems in which each processor has its own physical chip.
Multiple-Processor Scheduling
 Multicore Processors
 However multicore processors may complicate the scheduling problems.
 When a processor accesses memory, it can spend a significant amount of time waiting for
the data to become available.
 This situation is called a MEMORY STALL.
Multiple-Processor Scheduling
 Multicore Processors
 A memory stall occurs for various reasons, such as a cache miss (accessing data
that is not in the cache memory).
 In such cases the processor can spend up to fifty percent of its time waiting for data to
become available from memory.
 To solve this problem, recent hardware designs have implemented multithreaded processor
cores in which two or more hardware threads are assigned to each core.
 Thus, if one thread stalls while waiting for memory, the core can switch to another
thread.
Multiple-Processor Scheduling
 Multicore Processors: There are two ways to multithread a processor :
 Coarse-grained multithreading: When a long-latency event such as a memory stall
occurs, the processor switches to another thread.
 The cost of switching is relatively high, as the instruction pipeline must be flushed before
the other thread can begin execution.
 Fine-grained multithreading: The processor switches between
threads at a much finer granularity, typically at the boundary of an instruction cycle.
 The architectural design of fine-grained systems includes logic for thread switching,
so the cost of switching between threads is small.
Multiple-Processor Scheduling
 Virtualization and Threading
 With virtualization, even a single-CPU system can act like a
multiple-processor system.
 The virtualization layer presents one or more virtual CPUs to
each of the virtual machines running on the system and then schedules the use of the
physical CPUs among the virtual machines.
Multiple-Processor Scheduling- Virtualization and Threading
 Most virtualized environments have one host operating system and many guest
operating systems.
 The host operating system creates and manages the virtual machines.
 Each virtual machine has a guest operating system installed and applications run within
that guest.
 Each guest operating system may be dedicated to specific use cases, applications, or users,
including time sharing or even real-time operation.
 The guest OSes do not realize that their processors are virtual, and they make scheduling
decisions as though they were running on real processors.
