Operating System 10
MULTIPROCESSOR
AND REAL-TIME
SCHEDULING
MULTIPROCESSOR
SCHEDULING
• When a computer system contains more than a single processor,
several new issues are introduced into the design of the
scheduling function. We begin with a brief overview of multiprocessors
and then look at the rather different considerations when scheduling
is done at the process level and the thread level. We can classify
multiprocessor systems as follows:
• Loosely coupled or distributed multiprocessor, or cluster
• Functionally specialized processors
• Tightly coupled multiprocessing
Granularity
 A good way of characterizing multiprocessors and placing them in context with other architectures is to consider the synchronization granularity, or frequency of synchronization, between processes in a system. We can distinguish five categories of parallelism that differ in the degree of granularity.
Independent Parallelism
 With independent parallelism, there is no explicit synchronization among processes. Each represents a separate, independent application or job. A typical use of this type of parallelism is in a time-sharing system. Each user is performing a particular application, such as word processing or using a spreadsheet.
 It is possible to achieve a similar performance gain by providing each
user with a personal computer or workstation. If any files or
information are to be shared, then the individual systems must be
hooked together into a distributed system supported by a network.
Coarse and Very Coarse-Grained
Parallelism
 With coarse and very coarse-grained parallelism, there is synchronization among processes, but at a very gross level. This kind of situation is easily handled as a set of concurrent processes running on a multiprogrammed uniprocessor and can be supported on a multiprocessor with little or no change to user software.
 In general, any collection of concurrent processes that need to communicate or synchronize can benefit from the use of a multiprocessor architecture. In the case of very infrequent interaction among processes, a distributed system can provide good support. However, when the interaction is somewhat more frequent, the overhead of communicating across the network may negate much of the potential speedup.
 In that case, the multiprocessor organization provides the most effective support.
Medium-Grained Parallelism
 In this case, the programmer must explicitly specify the potential parallelism of an application. Typically, there will need to be a rather high degree of coordination and interaction among the threads of an application, leading to a medium-grain level of synchronization.
 Whereas independent, very coarse, and coarse-grained parallelism can be supported on either a multiprogrammed uniprocessor or a multiprocessor with little or no impact on the scheduling function, we need to reexamine scheduling when dealing with threads. Because the various threads of an application interact so frequently, scheduling decisions concerning one thread may affect the performance of the entire application. We return to this issue later in this section.
Fine-Grained Parallelism
 Fine-grained parallelism represents a much more complex
use of parallelism than is found in the use of threads.
Although much work has been done on highly parallel
applications, this is so far a specialized and fragmented
area, with many different approaches.
Granularity Example: Valve Game Software
 Valve is an entertainment and technology company that has developed a number of popular games, as well as the Source engine, one of the most widely played game engines available. From Valve's perspective, threading granularity options are defined as follows:
• Coarse threading
• Fine-grained threading
• Hybrid threading
 Valve found that a hybrid threading approach was the most promising and would scale the best as multiprocessors with eight or sixteen processors became available. Valve identified systems that operate very effectively when permanently assigned to a single processor.
• Figure 10.1 illustrates the thread structure for the rendering module. In this
hierarchical structure, higher-level threads spawn lower-level threads as
needed.
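To make the hybrid idea concrete, here is a minimal sketch (invented thread and function names, not Valve's Source engine code): one coarse-grained thread owns the rendering loop and farms fine-grained per-object work items out to a small worker pool.

```python
import threading, queue

work = queue.Queue()              # fine-grained work items shared by a worker pool

def worker():
    while True:
        item = work.get()
        if item is None:          # sentinel tells the worker to exit
            break
        item()                    # run one small unit of work
        work.task_done()

def render_loop(frames=2):
    # Coarse-grained thread: the rendering subsystem owns this loop and spawns
    # fine-grained items (e.g. per-object tasks) that any worker may pick up.
    for frame in range(frames):
        for obj in range(4):
            work.put(lambda f=frame, o=obj: print(f"frame {f}: build object {o}"))
        work.join()               # wait until this frame's fine-grained work is done

workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()
render = threading.Thread(target=render_loop)   # the coarse "rendering" thread
render.start()
render.join()
for _ in workers:
    work.put(None)                # shut the pool down
for w in workers:
    w.join()
```

The coarse thread corresponds to the higher-level threads of Figure 10.1; the pool models the lower-level threads they spawn as needed.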
Design Issues
 Scheduling on a multiprocessor involves three interrelated issues:
• The assignment of processes to processors
• The use of multiprogramming on individual processors
• The actual dispatching of a process
In looking at these three issues, it is important to keep in
mind that the approach taken will depend, in general, on the degree
of granularity of the applications and on the number of processors
available.
Assignment of Processes to
Processors
 If a process is permanently assigned to one processor from activation until its completion, then a dedicated short-term queue is maintained for each processor. An advantage of this approach is that there may be less overhead in the scheduling function, because the processor assignment is made once and for all.
 A disadvantage of static assignment is that one processor can be idle, with an
empty queue, while another processor has a backlog. To prevent this
situation, a common queue can be used.
 Regardless of whether processes are dedicated to processors, some means is needed to assign processes to processors. Two approaches have been used: master/slave and peer. With a master/slave architecture, key kernel functions of the operating system always run on a particular processor. The other processors may only execute user programs. The master is responsible for scheduling jobs.
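As an illustration of the two queueing arrangements above, here is a minimal sketch (the class names are invented, and a real dispatcher must also handle priorities, affinity, and locking):

```python
from collections import deque

class StaticAssignmentScheduler:
    """Each process is bound to one processor; every processor has its own queue."""
    def __init__(self, n_processors):
        self.queues = [deque() for _ in range(n_processors)]

    def admit(self, pid):
        # Assignment is made once and for all (here, simply by pid modulo the
        # processor count); the process never migrates afterwards.
        self.queues[pid % len(self.queues)].append(pid)

    def dispatch(self, cpu):
        # A processor with an empty queue idles even if another has a backlog.
        return self.queues[cpu].popleft() if self.queues[cpu] else None

class CommonQueueScheduler:
    """All processors share one ready queue, avoiding the idle-with-backlog problem."""
    def __init__(self):
        self.ready = deque()

    def admit(self, pid):
        self.ready.append(pid)

    def dispatch(self, cpu):
        # Whichever processor becomes free takes the next ready process.
        return self.ready.popleft() if self.ready else None
```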
The Use of Multiprogramming on
Individual Processors
 When each process is statically assigned to a processor for the
duration of its lifetime, a new question arises: Should that processor
be multiprogrammed? The reader’s first reaction may be to wonder
why the question needs to be asked; it would appear particularly
wasteful to tie up a processor with a single process when that process
may frequently be blocked waiting for I/O or because of
concurrency/synchronization considerations.
 In the traditional multiprocessor, which deals with coarse-grained or independent synchronization granularity, it is clear that each individual processor should be able to switch among a number of processes to achieve high utilization and therefore better performance.
Process Dispatching
 The final design issue related to multiprocessor scheduling is the
actual selection of a process to run. We have seen that, on a
multiprogrammed uniprocessor, the use of priorities or of
sophisticated scheduling algorithms based on past usage may
improve performance over a simple-minded first-come-first-served
strategy. When we consider multiprocessors, these complexities may be unnecessary or even counterproductive, and a simpler approach may be more effective with less overhead. In the case of thread scheduling, new issues come into play that may be more important than priorities or execution histories. We address each of these topics in turn.
Process Scheduling
 In most traditional multiprocessor systems, processes are not dedicated to
processors. Rather, there is a single queue for all processors, or if some sort
of priority scheme is used, there are multiple queues based on priority, all
feeding into the common pool of processors. In any case, we can view the
system as being a multiserver queuing architecture.
 The study reported in [SAUE81] is concerned with process service time, which measures the amount of processor time a process needs, either for a total job or for each interval during which the process is ready to use the processor.
 That study repeated the analysis under a number of assumptions about degree of multiprogramming, mix of I/O-bound versus CPU-bound processes, and the use of priorities. The general conclusion is that the specific scheduling discipline is much less important with two processors than with one. It should be evident that this conclusion is even stronger as the number of processors increases.
Thread Scheduling
 On a uniprocessor, threads can be used as a program structuring aid and to
overlap I/O with processing. Because of the minimal penalty in doing a
thread switch compared to a process switch, these benefits are realized with
little cost.
 Among the many proposals for multiprocessor thread scheduling and processor assignment, four general approaches stand out (a sketch of gang scheduling follows the list):
 Load sharing
 Gang scheduling
 Dedicated processor assignment
 Dynamic scheduling
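Of these four, gang scheduling is the easiest to sketch: all threads of an application run simultaneously, one thread per processor, within a time slice. A minimal illustration (assuming a simple first-fit packing of gangs into slices; the function name is invented):

```python
def gang_schedule(gangs, n_processors):
    """Assign whole gangs (the thread groups of one application) to time slices.

    All threads of a gang run simultaneously, one thread per processor; a gang
    that does not fit in the remaining processors of a slice waits for the next
    slice (first-fit policy).
    """
    slices = [[]]                       # each slice is a list of (app, thread) pairs
    free = n_processors
    for app, n_threads in gangs:
        if n_threads > n_processors:
            raise ValueError(f"{app} needs more processors than exist")
        if n_threads > free:            # start a new time slice
            slices.append([])
            free = n_processors
        slices[-1].extend((app, t) for t in range(n_threads))
        free -= n_threads
    return slices

# Example: three applications with 4, 3, and 2 threads on a 4-processor system.
for i, s in enumerate(gang_schedule([("A", 4), ("B", 3), ("C", 2)], 4)):
    print(f"slice {i}: {s}")
```

The printed slices also show the approach's main cost: processors sit idle whenever a gang does not fill its slice.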
REAL-TIME SCHEDULING
Background
 Real-time computing is becoming an increasingly important discipline. The
operating system, and in particular the scheduler, is perhaps the most
important component of a real-time system.
 Real-time computing may be defined as that type of computing in which the
correctness of the system depends not only on the logical result of the
computation but also on the time at which the results are produced.
 Another characteristic of real-time tasks is whether they are periodic or
aperiodic. An aperiodic task has a deadline by which it must finish or start,
or it may have a constraint on both start and finish time.
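A common formalization (assuming each instance of a periodic task must complete by the end of its period): for a periodic task with period T first released at time r_0, the k-th instance is released and must finish according to

r_k = r_0 + kT, \qquad d_k = r_0 + (k+1)T, \qquad k = 0, 1, 2, \ldots

where r_k is the ready time and d_k the completion deadline of the k-th instance; an aperiodic task, by contrast, carries a single starting and/or completion deadline, as stated above.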
Characteristics of Real-Time
Operating Systems
 Real-time operating systems can be characterized as having unique requirements in five general areas:
• Determinism
• Responsiveness
• User control
• Reliability
• Fail-soft operation
 An operating system is deterministic to the extent that it performs operations at fixed, predetermined times or within predetermined time intervals. When multiple processes are competing for resources and processor time, no system will be fully deterministic.
 Aspects of responsiveness include the following (a simple worst-case bound combining them is sketched after the list):
1. The amount of time required to initially handle the interrupt and begin
execution of the interrupt service routine (ISR).
2. The amount of time required to perform the ISR.
3. The effect of interrupt nesting.
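One simple way to combine these three aspects into a worst-case bound on the time from interrupt assertion to completed service (an illustrative formalization; the symbols L, C_ISR, and C_j are assumptions, not from the text) is

R_{\text{worst}} \approx L + C_{\text{ISR}} + \sum_{j \in \text{nested}} C_j

where L is the latency before the ISR begins executing (aspect 1), C_ISR is the execution time of the ISR itself (aspect 2), and the sum covers higher-priority ISRs that nest on top of it (aspect 3).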
• User control is generally much broader in a real-time operating system
than in ordinary operating systems. In a typical non-real-time operating system,
the user either has no control over the scheduling function of the operating
system or can only provide broad guidance, such as grouping users into more
than one priority class.
• Reliability is typically far more important for real-time systems than non-real-time systems. A transient failure in a non-real-time system may be solved by simply rebooting the system. A processor failure in a multiprocessor non-real-time system may result in a reduced level of service until the failed processor is repaired or replaced.
• To meet the foregoing requirements, real-time operating systems typically include the following features:
• Fast process or thread switch
• Small size (with its associated minimal functionality)
• Ability to respond to external interrupts quickly
• Preemptive scheduling based on priority
• Minimization of intervals during which interrupts are disabled
• Special alarms and timeouts
Real-Time Scheduling
 Real-time scheduling is one of the most active areas of
research in computer science. In this subsection, we
provide an overview of the various approaches to
real-time scheduling and look at two popular classes of
scheduling algorithms.
• Based on these considerations, real-time scheduling algorithms can be grouped into the following classes:
• Static table-driven approaches
• Static priority-driven preemptive approaches
• Dynamic planning-based approaches
• Dynamic best effort approaches
• Static table-driven scheduling is applicable to tasks that are periodic. Input to the analysis consists of the periodic arrival time, execution time, periodic ending deadline, and relative priority of each task (a sketch of this approach follows these descriptions).
• Static priority-driven preemptive scheduling makes use of the priority-
driven preemptive scheduling mechanism common to most non-real-
time multiprogramming systems. In a non-real-time system, a variety
of factors might be used to determine priority.
• With dynamic planning-based scheduling, after a task arrives,
but before its execution begins, an attempt is made to create a
schedule that contains the previously scheduled tasks as well as the
new arrival.
• Dynamic best effort scheduling is the approach used by many real-
time systems that are currently commercially available.
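As a minimal sketch of the static table-driven approach (assumptions: a single processor, integer time units, implicit end-of-period deadlines, and an offline earliest-deadline-first analysis; the function name is invented), the analysis below precomputes a dispatch table over one hyperperiod:

```python
from math import lcm

def static_table(tasks):
    """Precompute a dispatch table for periodic tasks, one entry per time unit.

    tasks: list of (name, period, execution_time) tuples; each release is
    assumed to have an implicit deadline at the end of its period, and the
    table is built offline by earliest-deadline-first over one hyperperiod.
    """
    period_of = {name: period for name, period, _ in tasks}
    horizon = lcm(*(p for _, p, _ in tasks))         # one hyperperiod
    remaining = {}                                   # (name, release_time) -> work left
    table = []
    for t in range(horizon):
        for name, period, exe in tasks:
            if t % period == 0:                      # a new release of this task
                remaining[(name, t)] = exe
        # choose the ready job with the earliest absolute deadline
        ready = [(rel + period_of[name], name, rel)
                 for (name, rel), work in remaining.items() if work > 0]
        if not ready:
            table.append("idle")
            continue
        deadline, name, rel = min(ready)
        if t >= deadline:
            raise RuntimeError(f"{name} released at {rel} misses its deadline")
        remaining[(name, rel)] -= 1
        table.append(name)
    return table

# Example: utilization 1/4 + 2/5 + 3/10 = 0.95, so a feasible table exists.
print(static_table([("A", 4, 1), ("B", 5, 2), ("C", 10, 3)]))
```

At run time the dispatcher only indexes this precomputed table, which makes the behavior predictable but means any change to the task set requires redoing the analysis offline.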
Deadline Scheduling
 Most contemporary real-time operating systems are designed with the
objective of starting real-time tasks as rapidly as possible, and hence
emphasize rapid interrupt handling and task dispatching. In fact, this is not a
particularly useful metric in evaluating real-time operating systems.
 There have been a number of proposals for more powerful and appropriate
approaches to real-time task scheduling. All of these are based on having
additional information about each task. In its most general form, the following information about each task might be used (a deadline-driven dispatch sketch follows the list):
• Ready time
• Starting deadline
• Completion deadline
• Processing time
• Resource requirements
• Priority
• Subtask structure
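As a minimal runtime sketch using a few of these fields (assumptions: one processor, nonpreemptive dispatching by earliest starting deadline; the class and field names are invented):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    starting_deadline: float          # latest time the task may begin
    name: str = field(compare=False)
    ready_time: float = field(compare=False)
    processing_time: float = field(compare=False)

def dispatch(tasks):
    """Nonpreemptive 'earliest starting deadline first' on one processor."""
    clock, pending = 0.0, sorted(tasks, key=lambda t: t.ready_time)
    ready = []                                     # heap ordered by starting deadline
    while pending or ready:
        while pending and pending[0].ready_time <= clock:
            heapq.heappush(ready, pending.pop(0))  # task has arrived
        if not ready:                              # idle until the next arrival
            clock = pending[0].ready_time
            continue
        task = heapq.heappop(ready)
        status = "ok" if clock <= task.starting_deadline else "MISSED start"
        print(f"t={clock:4.1f} run {task.name} ({status})")
        clock += task.processing_time              # runs to completion (nonpreemptive)

# C arrives last but has the tightest starting deadline, so it runs before B.
dispatch([Task(20, "A", 0, 10), Task(30, "B", 5, 10), Task(15, "C", 8, 5)])
```

Dispatching by earliest completion deadline is the same idea with the completion deadline as the ordering key.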
Priority Inversion
 Priority inversion is a phenomenon that can occur in any
priority-based preemptive scheduling scheme but is
particularly relevant in the context of real-time
scheduling. The best-known instance of priority inversion
involved the Mars Pathfinder mission.
 In any priority scheduling scheme, the system should
always be executing the task with the highest priority.
Priority inversion occurs when circumstances within the system force a higher-priority task to wait for a lower-priority task.
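To make the phenomenon concrete, here is a minimal trace-based sketch (invented task names, priorities, and durations; not the Pathfinder code) in which a medium-priority task delays a high-priority task that is blocked on a lock held by a low-priority task, and in which priority inheritance removes the delay:

```python
def run(inheritance):
    """Tiny time-step trace of three tasks H > M > L sharing one lock.

    L needs 4 units of work inside the lock, M needs 4 units of pure CPU,
    H needs 2 units inside the same lock; release times are staggered so
    that L grabs the lock first. All numbers are invented for illustration.
    """
    work = {"L": 4, "M": 4, "H": 2}
    release = {"L": 0, "M": 2, "H": 1}
    base = {"H": 3, "M": 2, "L": 1}
    lock_owner = None
    trace = []
    for t in range(12):
        ready = [x for x in work if work[x] > 0 and release[x] <= t]
        if not ready:
            break
        # H blocks while another task holds the lock it needs
        runnable = [x for x in ready
                    if not (x == "H" and lock_owner not in (None, "H"))]
        def priority(x):
            # priority inheritance: the lock owner inherits H's priority while H waits
            if inheritance and x == lock_owner and x != "H" and "H" in ready:
                return base["H"]
            return base[x]
        task = max(runnable, key=priority)
        if task in ("L", "H") and lock_owner is None:
            lock_owner = task                    # acquire the shared lock
        work[task] -= 1
        trace.append(task)
        if task == lock_owner and work[task] == 0:
            lock_owner = None                    # release the lock
    return "".join(trace)

print("without inheritance:", run(False))   # M's whole burst delays H
print("with inheritance:   ", run(True))    # L finishes quickly, then H runs
```

In the first trace the medium-priority task runs its entire burst while H sits blocked behind L; in the second, L briefly runs at H's priority, so H gets the lock and the processor as soon as L releases it.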
The End.
