Multiprocessor
Real-Time Scheduling
Supervised by
Prof. Dr. Dhuha Basheer
By
Asmaa Mowafaq ALQassab & Nagham Salim ALLaila
Multiprocessor Models
 Identical (Homogeneous): All processors have the
same characteristics, i.e., the execution time of a
job is independent of the processor on which it is executed.
 Uniform: Each processor has its own speed, i.e., the
execution time of a job on a processor is inversely
proportional to the speed of that processor.
 A faster processor always executes a job faster than slower
processors do.
 Unrelated (Heterogeneous): Each job has its own
execution time on each processor.
 A job might execute faster on one processor, while other
jobs might run slower on that same processor.
Scheduling Models
 Global Scheduling:
 A job may execute on any processor.
 The system maintains a global ready queue.
 Execute the M highest-priority jobs in the ready queue, where M
is the number of processors.
 It requires high on-line overhead.
 Partitioned Scheduling:
 Each task is statically assigned to one processor.
 Schedulability analysis is done individually on each processor.
 It requires no additional on-line overhead.
 Semi-partitioned Scheduling:
 Adopt task partitioning first and reserve time slots (bandwidths)
for tasks that allow migration.
 It requires some on-line overhead.
Scheduling Models (figure)
 Global Scheduling: a single global waiting queue feeds CPU 1, CPU 2, and CPU 3; any job may be dispatched to any CPU.
 Partitioned Scheduling: each of CPU 1, CPU 2, and CPU 3 has its own set of statically assigned tasks.
 Semi-partitioned Scheduling: as in partitioned scheduling, but some tasks (tasks 3 and 7 in the figure) are split across two CPUs.
Global Scheduling
 All ready tasks are kept in a global queue
 A job can be migrated to any processor.
 Priority-based global scheduling:
 Among the jobs in the global queue, the M highest priority jobs
are chosen to be executed on M processors.
 Task migration here is assumed to have no overhead.
 Global-EDF: When a job finishes or arrives at the global
queue, the M jobs in the queue with the earliest
absolute deadlines are chosen to be executed on the M
processors.
 Global-RM: When a job finishes or arrives at the global
queue, the M jobs in the queue with the highest priorities
are chosen to be executed on the M processors.
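As an illustration only (not part of the original slides), here is a minimal Python sketch of this dispatching rule: at every scheduling event, pick the M pending jobs with the earliest absolute deadlines. The `Job` class and `pick_jobs` helper are made-up names, and ties are broken arbitrarily.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    abs_deadline: float                      # priority key for Global-EDF
    name: str = field(compare=False)
    remaining: float = field(compare=False, default=0.0)

def pick_jobs(ready_queue, m):
    """Return the m highest-priority (earliest-deadline) jobs to run now."""
    return heapq.nsmallest(m, ready_queue)

# Example: 3 ready jobs, 2 processors -> the two earliest deadlines run.
ready = [Job(10.0, "J1", 5.0), Job(12.0, "J3", 8.0), Job(10.0, "J2", 5.0)]
print([j.name for j in pick_jobs(ready, 2)])   # ['J1', 'J2']
```

For Global-RM the only change would be the comparison key: fixed task priorities (derived from periods) instead of absolute deadlines.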
Global Scheduling
 Advantages:
 Effective utilization of processing resources (if it works)
 Unused processor time can easily be reclaimed at run-
time (mixture of hard and soft RT tasks to optimize
resource utilization)
 Disadvantages:
 Scheduling anomalies: adding processors, or reducing computation
times and other parameters, can actually make a task set
unschedulable in some scenarios!
 Poor resource utilization for hard timing constraints
 Few results from single-processor scheduling can be
used
Schedule Anomaly
 Anomaly 1
A decrease in processor demand from higher-priority tasks can
increase the interference on a lower-priority task because of
the change in the times when the tasks execute.
 Anomaly 2
A decrease in processor demand of a task can negatively affect
the task itself, because the change in the task's arrival times
makes it suffer more interference.
Anomaly 1
Anomaly 2
Dhall effect
 Dhall effect: For Global-EDF or Global-RM, the
utilization-based least upper bound for schedulability is
at most 1 (out of a total capacity of M), regardless of the
number of processors.
 On 2 processors, T3 is not schedulable:
Task | T  | D  | C | U
T1   | 10 | 10 | 5 | 0.50
T2   | 10 | 10 | 5 | 0.50
T3   | 12 | 12 | 8 | 0.67
At time 0, T1 and T2 have the earlier deadlines and occupy both processors until time 5; T3 can then run only from time 5 and would finish at time 13, missing its deadline of 12, even though the total utilization is only 1.67 on 2 processors.
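The speaker notes sketch the general construction behind this effect: one heavy task and M light tasks whose deadlines are marginally earlier. Written out (my arithmetic, using the parameters from the notes):

$$
\tau_k:\ C_k = D_k = T_k, \qquad
\tau_1,\dots,\tau_M:\ C_i = \epsilon,\ D_i = T_i = C_k - \epsilon .
$$

At time 0 the M light tasks have the earlier deadlines and occupy all M processors until time $\epsilon$, so $\tau_k$ finishes at $C_k + \epsilon > D_k$, while

$$
U_{\text{total}} \;=\; 1 + \frac{M\,\epsilon}{C_k - \epsilon} \;\longrightarrow\; 1
\qquad (\epsilon \to 0).
$$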
Schedulability Test
 A set of periodic tasks t1, t2, . . . , tN with implicit
deadlines is schedulable on M processors by using
preemptive Global EDF scheduling if
where tk is the task with the largest utilization Ck/Tk
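The condition itself did not survive extraction; assuming it is the standard Goossens–Funk–Baruah sufficient test for global EDF (which matches the wording above), it reads:

$$
\sum_{i=1}^{N} \frac{C_i}{T_i} \;\le\; M\left(1 - \frac{C_k}{T_k}\right) + \frac{C_k}{T_k}.
$$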
Weakness of Global
Scheduling
 Migration overhead
 Schedule Anomaly
Partitioned Scheduling
 Two steps:
 Determine a mapping of tasks to processors
 Perform run-time single-processor scheduling
• Partitioned with EDF:
 Assign tasks to the processors such that no processor’s
capacity is exceeded (utilization bounded by 1.0)
 Schedule each processor using EDF
Bin-packing Problem
The problem is NP-complete!
Bin-packing to
Multiprocessor Scheduling
 The problem concerns packing objects of varying sizes
into boxes ("bins") with the objective of minimizing the
number of bins used.
 Solutions (heuristics): First Fit
 Application to multiprocessor systems:
 Bins are represented by processors and objects by tasks.
 The decision whether a processor is ”full” or not is derived
from a utilization-based schedulability test.
Partitioned Scheduling
 Advantages:
 Most techniques for single-processor scheduling are also
applicable here
 Partitioning of tasks can be automated
 By solving a bin-packing problem
 Disadvantages:
 Cannot exploit/share all unused processor time
 May have very low utilization; in the worst case the guaranteed utilization bound is only about 50%
Partitioned Scheduling
Problem
Given a set of tasks with arbitrary deadlines, the objective
is to decide a feasible task assignment onto M processors
such that all the tasks meet their timing constraints, where
Ci is the execution time of task ti on any processor m.
Partitioned Algorithm
 First-Fit: among the processors on which the task fits, choose the one with the smallest index
 Best-Fit: among the processors on which the task fits, choose the one with the maximal utilization
 Worst-Fit: among the processors on which the task fits, choose the one with the minimal utilization
Partitioned Example
 0.2 -> 0.6 -> 0.4 -> 0.7 -> 0.1 -> 0.3
First Fit:  CPU 1 = {0.2, 0.6, 0.1}, CPU 2 = {0.4, 0.3}, CPU 3 = {0.7}
Best Fit:   CPU 1 = {0.2, 0.6, 0.1}, CPU 2 = {0.4},      CPU 3 = {0.7, 0.3}
Worst Fit:  CPU 1 = {0.2, 0.7},      CPU 2 = {0.6},      CPU 3 = {0.4, 0.1, 0.3}
(each processor's capacity is 1.0)
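A minimal Python sketch (my own illustration, not from the slides) of the three utilization-based partitioning heuristics; capacity 1.0 corresponds to the per-processor EDF utilization test, and the function name `partition` is illustrative. Run on the arrival order above, it reproduces the three allocations.

```python
def partition(utilizations, m, policy="first"):
    """Assign task utilizations to m processors; return per-processor loads
    and assignments, or None if a task does not fit (capacity 1.0 = EDF bound)."""
    load = [0.0] * m
    assign = [[] for _ in range(m)]
    for u in utilizations:
        candidates = [p for p in range(m) if load[p] + u <= 1.0]
        if not candidates:
            return None                       # partitioning failed
        if policy == "first":                 # smallest index
            p = candidates[0]
        elif policy == "best":                # maximal current utilization
            p = max(candidates, key=lambda q: load[q])
        else:                                 # "worst": minimal current utilization
            p = min(candidates, key=lambda q: load[q])
        load[p] += u
        assign[p].append(u)
    return load, assign

tasks = [0.2, 0.6, 0.4, 0.7, 0.1, 0.3]
for pol in ("first", "best", "worst"):
    print(pol, partition(tasks, 3, pol))
```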
EDF with First Fit
Schedulability Test
López et al. [3] prove that, when every task has a utilization
factor C/T of at most α, the worst-case achievable utilization
for EDF scheduling with First-Fit allocation (EDF-FF) on m
processors is

$$
U_{wc}^{\mathrm{EDF\text{-}FF}}(m, \beta) \;=\; \frac{\beta\, m + 1}{\beta + 1},
\qquad \beta = \left\lfloor \frac{1}{\alpha} \right\rfloor .
$$
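For example (my arithmetic), with m = 4 processors and every task utilization at most α = 0.5 (so β = 2), the bound evaluates to

$$
U_{wc}^{\mathrm{EDF\text{-}FF}}(4, 2) \;=\; \frac{2 \cdot 4 + 1}{2 + 1} \;=\; 3,
$$

i.e., 75% of the platform capacity; with no restriction on α (β = 1) the bound degrades to (m + 1)/2.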
Demand Bound Function
 Define the demand bound function as shown below.
 We need an approximation to obtain a polynomial-time
schedulability test.
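The slide's formula was an image; the standard demand bound function for a sporadic task τi = (Ci, Ti, Di), and the linear approximation that the speaker notes bound within a factor of 2, are presumably:

$$
\mathrm{dbf}(\tau_i, t) = \max\!\left(0,\ \left\lfloor\frac{t - D_i}{T_i}\right\rfloor + 1\right) C_i,
\qquad
\mathrm{dbf}^{*}(\tau_i, t) =
\begin{cases}
0 & t < D_i,\\[4pt]
C_i + \dfrac{C_i}{T_i}\,(t - D_i) & t \ge D_i,
\end{cases}
$$

with $\mathrm{dbf}(\tau_i, t) \le \mathrm{dbf}^{*}(\tau_i, t) < 2\,\mathrm{dbf}(\tau_i, t)$.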
Deadline Monotonic Partition
Schedulability Test
Weakness of Partitioned
Scheduling
 Restricting a task to a single processor reduces
schedulability.
 Finding a feasible assignment of tasks to processors is
NP-hard.
 Example: Suppose there are M processors and M + 1
tasks with the same period T, and the (worst-case)
execution time of each of these M + 1 tasks is T/2 + e with
e > 0.
 With partitioned scheduling this set is not schedulable:
some processor must host two of these tasks, whose combined
demand T + 2e exceeds the period T.
Semi-partitioned Scheduling
 Tasks are first partitioned onto processors.
 To keep per-processor utilization low, we again pick the
processor with the minimum utilization (as in worst-fit).
 If a task cannot fit into the picked processor, we will
have to split it into multiple (two or even more) parts.
 If ti is split and assigned to a processor m and the
utilization on processor m after assigning ti is at most
U(scheduler,N), then ti is so far schedulable.
Semi-partitioned EDF
 Tmin is the minimum period among all the tasks.
 By a user-defined parameter k, we divide time into slots of length S =
Tmin/k.
 We can use the first-fit approach by splitting a task into 2 subtasks, in
which one is executed on processor m and the other is executed on
processor m + 1.
 Execution of a split task is only possible in the reserved time window in
the time slot.
 Apply the first-fit algorithm, taking SEP as the upper bound on the
utilization of each processor.
 If a task does not fit, split it into two subtasks and allocate a new
processor: one subtask is assigned to the processor under consideration,
and the other is assigned to the newly allocated processor (a sketch of
this assignment phase follows below).
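A highly simplified Python sketch (my own illustration, not the exact algorithm from the slides or from Andersson and Bletsas): tasks are packed first-fit up to the SEP bound, and a task that does not fit is split between the last processor and a newly opened one. Slot reservation and the f overhead term are omitted, and `semi_partition` is a made-up name.

```python
def semi_partition(utilizations, sep):
    """First-fit packing up to the bound `sep`; a task that does not fit is split
    between the last processor and a newly opened one.
    Returns per-processor lists of (task_index, utilization_share)."""
    procs = [[]]                              # start with one open processor
    loads = [0.0]
    for i, u in enumerate(utilizations):
        placed = False
        for p in range(len(procs)):
            if loads[p] + u <= sep:           # whole task fits on processor p
                procs[p].append((i, u))
                loads[p] += u
                placed = True
                break
        if not placed:
            # split: fill the last processor up to sep, move the rest to a new one
            p = len(procs) - 1
            low_share = max(0.0, sep - loads[p])   # part kept on processor p
            high_share = u - low_share             # part moved to processor p + 1
            if low_share > 0:
                procs[p].append((i, low_share))
                loads[p] = sep
            procs.append([(i, high_share)])
            loads.append(high_share)
    return procs

print(semi_partition([0.4, 0.35, 0.3, 0.5], sep=0.65))
```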
Semi-partitioned EDF
 For each time slot, we reserve two parts:
If a task ti is split, it can be served only within
two pre-defined windows of length xi and yi inside
each time slot.
A processor can host two split tasks, ti and tj:
ti is served at the beginning of the time slot,
and tj is served at the end.
The schedule is EDF, but whenever an instance of a split task is in the
ready queue, it is executed inside its reserved time region.
Semi-partitioned EDF
 We can assign every task ti with Ui > SEP
to a dedicated processor, so we only need to
consider tasks with Ui no larger than SEP.
When executing, the reservation to serve ti sets
xi to S × (f + lo_split(ti)) and yi to S × (f + high_split(ti)).
SEP is set as a constant.
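Restating the reservation in equation form (the relation between the split fractions and the task utilization is taken from the speaker notes; this restatement is my own):

$$
x_i = S\,\bigl(f + \mathrm{lo\_split}(\tau_i)\bigr), \qquad
y_i = S\,\bigl(f + \mathrm{high\_split}(\tau_i)\bigr),
$$
$$
\mathrm{lo\_split}(\tau_i) + \mathrm{high\_split}(\tau_i) = \frac{C_i}{T_i} < \mathrm{SEP}.
$$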
Two Split Tasks on a Processor
 For split tasks to be schedulable, the following
sufficient conditions have to be satisfied
 lo_split(ti) + f + high_split(ti) + f <= 1 for any split task
ti.
 lo_split(tj) + f + high_split(ti) + f <= 1 when ti and tj
are assigned to the same processor.
 Therefore, the "magic value" SEP is chosen so that these
conditions hold.
 However, we still have to guarantee the
schedulability of the non-split tasks; it can be
shown that a sufficient condition for this exists as well.
Schedulability Test
Magic Values: f
Magic Values: SEP
Reference
 Multiprocessor Real-Time Scheduling
 Dr. Jian-Jia Chen: Multiprocessor Scheduling. Karlsruhe
Institute of Technology (KIT): 2011-2012
 Global Scheduling
 Sanjoy K. Baruah: Techniques for Multiprocessor Global
Schedulability Analysis. RTSS 2007: 119-128
 Partitioned Scheduling
 Sanjoy K. Baruah, Nathan Fisher: The Partitioned
Multiprocessor Scheduling of Sporadic Task Systems. RTSS 2005:
321-329
 Semi-partitioned Scheduling
 Björn Andersson, Konstantinos Bletsas: Sporadic Multiprocessor
Scheduling with Few Preemptions. ECRTS 2008: 243-252
See You Next
Week
Critical Instants?
 The analysis for uniprocessor scheduling is based on the
classical critical instant theorem.
 Synchronous release of events does not lead to the
critical instant for global multiprocessor scheduling.
Editor's Notes
  • #7 (Global Scheduling): Supported by most multiprocessor operating systems: Windows NT, Solaris, Linux, ...
  • #9 (Anomaly 1): Task 3 misses its deadline when task 1 increases its period. This can happen for the following schedulable task set (with priorities assigned according to RM): (T1 = 3, C1 = 2), (T2 = 4, C2 = 2), (T3 = 12, C3 = 8). When task 1 increases its period to 4, task 3 becomes unschedulable, because task 3 is saturated and the interference it suffers increases from 4 to 6.
  • #10 (Anomaly 2): Consider the following schedulable task set (with priorities assigned according to RM): (T1 = 4, C1 = 2), (T2 = 5, C2 = 3), (T3 = 10, C3 = 7). If we increase T3 to 11, the second instance of T3 misses its deadline, because the interference increases from 3 to 5 and T3 is already saturated.
  • #11 (Dhall effect): One heavy task tk with Dk = Tk = Ck, and M light tasks ti with Ci = e and Di = Ti = Ck - e, where e is a positive number very close to 0.
  • #23 (Demand Bound Function): dbf(ti, t) <= dbf*(ti, t) < 2 · dbf(ti, t).
  • #30 (Semi-partitioned EDF): (x + y) / S = C/T.
  • #31 (Two Split Tasks on a Processor): lo_split(ti) + high_split(ti) = C/T < SEP.