Multiprocessor
Real-Time Scheduling
Embedded System Software Design
Used with the permission of Prof. Ya-Shu Chen (NTUST)
Outline
• Multiprocessor Real-Time Scheduling
• Global Scheduling
• Partitioned Scheduling
• Semi-partitioned Scheduling
Multiprocessor Models
• Identical (Homogeneous): All the processors have the same
characteristics, i.e., the execution time of a job is
independent of the processor on which it is executed.
• Uniform: Each processor has its own speed, i.e., the
execution time of a job on a processor is proportional to the
speed of the processor.
– A faster processor always executes a job faster than slow
processors do.
– For example, multiprocessors with the same instruction set but
with different supply voltages/frequencies.
• Unrelated (Heterogeneous): Each job has its own execution
time on each specific processor.
– A job might be executed faster on a processor, but other jobs
might be slower on that processor.
– For example, multiprocessors with different instruction sets.
Scheduling Models
• Global Scheduling:
– A job may execute on any processor.
– The system maintains a global ready queue.
– Execute the M highest-priority jobs in the ready queue, where M
is the number of processors.
– It requires high on-line overhead.
• Partitioned Scheduling:
– Each task is assigned to a dedicated processor.
– Schedulability analysis is done individually on each processor.
– It requires no additional on-line overhead.
• Semi-partitioned Scheduling:
– Adopt task partitioning first and reserve time slots (bandwidths)
for tasks that allow migration.
– It requires some on-line overhead.
Scheduling Models
[Figure: example schedules on CPU 1–3 — global scheduling draws jobs from a shared waiting queue, partitioned scheduling fixes each task to one CPU, and semi-partitioned scheduling splits tasks 3 and 7 across two CPUs each]
Global Scheduling
• All ready tasks are kept in a global queue
• A job can be migrated to any processor.
• Priority-based global scheduling:
– Among the jobs in the global queue, the M highest-priority
jobs are chosen to be executed on the M processors.
– Task migration is assumed to incur no overhead.
• Global-EDF: When a job finishes or arrives at the global
queue, the M jobs in the queue with the shortest
absolute deadlines are chosen to be executed on the M
processors.
• Global-RM: When a job finishes or arrives at the global
queue, the M jobs in the queue with the highest
priorities are chosen to be executed on the M processors.
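As a minimal sketch (assuming zero-cost migration; the function and job names are illustrative), the Global-EDF dispatch decision is just "pick the M pending jobs with the earliest absolute deadlines":

```python
import heapq

def global_edf_pick(ready_jobs, M):
    """Pick up to M jobs with the earliest absolute deadlines.

    ready_jobs: list of (absolute_deadline, job_id) tuples.
    Returns the job ids chosen to run on the M processors.
    """
    # heapq.nsmallest orders tuples by deadline first; ties break on job_id.
    return [job_id for _, job_id in heapq.nsmallest(M, ready_jobs)]

# Example: 4 ready jobs, 2 processors -> the two earliest deadlines run.
jobs = [(12, "J3"), (10, "J1"), (10, "J2"), (20, "J4")]
print(global_edf_pick(jobs, 2))  # ['J1', 'J2']
```

Global-RM works the same way with fixed rate-monotonic priorities in place of absolute deadlines.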
Global Scheduling
• Advantages:
– Effective utilization of processing resources (if it works)
– Unused processor time can easily be reclaimed at run-time
(a mixture of hard and soft RT tasks can optimize
resource utilization)
• Disadvantages:
– Adding processors, or reducing computation times and
other task parameters, can actually decrease
performance in some scenarios (scheduling anomalies)!
– Poor resource utilization for hard timing constraints
– Few results from single-processor scheduling can be
used
Schedule Anomaly
• Anomaly 1
A decrease in the processor demand from higher-priority
tasks can increase the interference on a lower-priority
task, because of the change in the times when the tasks
execute
• Anomaly 2
A decrease in the processor demand of a task can negatively
affect the task itself, because the change in the task
arrival times makes it suffer more interference
Anomaly 1
Anomaly 2
Dhall effect
• Dhall effect: For Global-EDF or Global-RM, the utilization
least upper bound for schedulability analysis is at most 1,
regardless of the number of processors.
• On 2 processors, T3 below is not schedulable:
Task T D C U
T1 10 10 5 0.5
T2 10 10 5 0.5
T3 12 12 8 0.67
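The table above can be checked with a minimal unit-time Global-EDF simulation (a sketch assuming synchronous release, free migration, and integer time; the function name is mine): T1 and T2 occupy both processors first, and T3's first job cannot finish by its deadline of 12.

```python
def simulate_global_edf(tasks, M, horizon):
    """Unit-time Global-EDF simulation for implicit-deadline periodic tasks.

    tasks: list of (period, wcet). Returns the finish time of each task's
    first job (None if it did not finish within the horizon).
    Sketch only: it ignores the no-intra-task-parallelism constraint.
    """
    jobs = []                      # active jobs: [deadline, task_id, remaining]
    first_finish = [None] * len(tasks)
    for t in range(horizon):
        for i, (T, C) in enumerate(tasks):
            if t % T == 0:         # periodic release
                jobs.append([t + T, i, C])
        jobs.sort()                # earliest absolute deadline first
        for job in jobs[:M]:       # run the M most urgent jobs for one unit
            job[2] -= 1
            if job[2] == 0 and first_finish[job[1]] is None:
                first_finish[job[1]] = t + 1
        jobs = [j for j in jobs if j[2] > 0]
    return first_finish

# Dhall-style set from the table: T1=T2=(10,5), T3=(12,8) on M=2 processors.
print(simulate_global_edf([(10, 5), (10, 5), (12, 8)], M=2, horizon=13))
# [5, 5, 13] -> T3's first job finishes at 13, past its deadline of 12.
```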
Schedulability Test
• A set of periodic tasks t1, t2, . . . , tN with
implicit deadlines is schedulable on M
processors by using preemptive Global-EDF
scheduling if
U1 + U2 + . . . + UN <= M − (M − 1) · Uk
where tk is the task with the largest utilization
Uk = Ck/Tk
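Assuming the sufficient condition above is the Goossens–Funk–Baruah bound (total utilization at most M − (M − 1) · Umax), it is a one-line check:

```python
def gfb_global_edf_test(utilizations, M):
    """Sufficient Global-EDF test for implicit-deadline periodic tasks:
    schedulable if total utilization <= M - (M - 1) * max utilization."""
    u_max = max(utilizations)
    return sum(utilizations) <= M - (M - 1) * u_max

# The Dhall-style task set fails the test on 2 CPUs: 1.67 > 2 - 0.67.
print(gfb_global_edf_test([0.5, 0.5, 8 / 12], M=2))  # False
```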
Weakness of Global Scheduling
• Migration overhead
• Schedule Anomaly
Partitioned Scheduling
• Two steps:
– Determine a mapping of tasks to processors
– Perform run-time single-processor scheduling
• Partitioned with EDF
– Assign tasks to the processors such that no
processor’s capacity is exceeded (utilization
bounded by 1.0)
– Schedule each processor using EDF
Bin-packing Problem
The problem is NP-complete!
Bin-packing to
Multiprocessor Scheduling
• The problem concerns packing objects of
varying sizes into boxes ("bins") with the
objective of minimizing the number of used boxes.
– Solutions (Heuristics): First Fit
• Application to multiprocessor systems:
– Bins are represented by processors and objects by
tasks.
– The decision whether a processor is "full" or not is
derived from a utilization-based schedulability test.
Partitioned Scheduling
• Advantages:
– Most techniques for single-processor scheduling
are also applicable here
• Partitioning of tasks can be automated
– by solving a bin-packing problem
• Disadvantages:
– Cannot exploit/share all unused processor time
– Worst-case achievable utilization can be as low as 50%
Partitioned Scheduling Problem
Given a set of tasks with arbitrary deadlines, the
objective is to decide a feasible task assignment
onto M processors such that all the tasks meet
their timing constraints, where Ci is the
execution time of task ti on any processor m.
Partitioned Algorithm
• First-Fit: choose the feasible processor with the
smallest index
• Best-Fit: choose the feasible processor with the
maximal utilization
• Worst-Fit: choose the feasible processor with the
minimal utilization
Partitioned Example
• Task order: 0.2 -> 0.6 -> 0.4 -> 0.7 -> 0.1 -> 0.3, three processors, capacity 1.0
• First Fit: P1 = {0.2, 0.6, 0.1}, P2 = {0.4, 0.3}, P3 = {0.7}
• Best Fit: P1 = {0.2, 0.6, 0.1}, P2 = {0.4}, P3 = {0.7, 0.3}
• Worst Fit: P1 = {0.2, 0.7}, P2 = {0.6}, P3 = {0.4, 0.1, 0.3}
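A sketch of the three heuristics on a fixed set of M processors (the fixed-M setting, smallest-index tie-breaking, and utilizations in integer tenths to avoid floating-point rounding are my assumptions):

```python
def partition(tasks, M, capacity, policy):
    """Assign task utilizations to M processors by a fit heuristic.

    tasks: utilizations in integer tenths (2 means 0.2).
    policy: 'first', 'best' (max load that fits) or 'worst' (min load).
    Returns per-processor task lists, or None if some task does not fit.
    """
    load = [0] * M
    assign = [[] for _ in range(M)]
    for u in tasks:
        fits = [p for p in range(M) if load[p] + u <= capacity]
        if not fits:
            return None
        if policy == 'first':
            p = fits[0]
        elif policy == 'best':
            p = max(fits, key=lambda q: load[q])   # fullest feasible processor
        else:
            p = min(fits, key=lambda q: load[q])   # emptiest feasible processor
        load[p] += u
        assign[p].append(u)
    return assign

tasks = [2, 6, 4, 7, 1, 3]   # 0.2 -> 0.6 -> 0.4 -> 0.7 -> 0.1 -> 0.3
for policy in ('first', 'best', 'worst'):
    print(policy, partition(tasks, M=3, capacity=10, policy=policy))
```

Running it reproduces the three assignments of the example above.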
EDF with First Fit
Schedulability Test
López et al. [3] prove that, if all the tasks have a utilization
factor C/T at most α, the worst-case achievable utilization for
EDF scheduling with First-Fit allocation (EDF-FF) takes the value

U_wc^{EDF-FF}(m, β) = (β · m + 1) / (β + 1), where β = ⌊1/α⌋

and m is the number of processors.
Demand Bound Function
• Define the demand bound function as
dbf(ti, t) = max(0, ⌊(t − Di)/Ti⌋ + 1) · Ci
• We need approximation to enforce a polynomial-time
schedulability test
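Assuming the standard sporadic-task demand bound function above (jobs whose release and deadline both fall inside an interval of length t), a direct implementation:

```python
import math

def dbf(C, T, D, t):
    """Demand bound function: cumulative execution demand of jobs that
    are both released and due within any interval of length t."""
    if t < D:
        return 0
    return (math.floor((t - D) / T) + 1) * C

# Task with C=2, T=5, D=5: deadlines at 5 and 10 fall inside [0, 12].
print(dbf(2, 5, 5, 12))  # 4
```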
Deadline Monotonic Partition
Schedulability Test
Weakness of Partitioned Scheduling
• Restricting a task to one processor reduces
schedulability
• Finding a feasible task-to-processor assignment is
NP-hard
• Example: Suppose that there are M processors
and M + 1 tasks with the same period T and
the (worst-case) execution times of all these
M + 1 tasks are T/2 + e with e > 0
– With partitioned scheduling, some processor must host two of
these tasks, giving demand 2(T/2 + e) = T + 2e > T per period,
so the set is not schedulable
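The pigeonhole argument can be checked exhaustively for a small instance (M = 2 processors, three tasks of utilization 0.51, i.e., C = T/2 + e with T = 100 and e = 1; the brute-force helper is mine):

```python
from itertools import product

def partitionable(utils, M):
    """Brute force: does any assignment of tasks to M processors keep
    every processor's total utilization at most 1?"""
    return any(
        all(sum(u for u, p in zip(utils, assign) if p == proc) <= 1
            for proc in range(M))
        for assign in product(range(M), repeat=len(utils))
    )

# M + 1 = 3 tasks, each of utilization 0.51: no feasible partition exists.
print(partitionable([0.51, 0.51, 0.51], M=2))  # False
```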
Semi-partitioned Scheduling
• Tasks are first partitioned onto processors.
• When assigning a task, we again pick the
processor with the minimum total utilization.
• If a task cannot fit into the picked processor, we
will have to split it into multiple (two or even
more) parts.
• If ti is split and assigned to a processor m, and the
utilization on processor m after assigning ti is at
most U(scheduler,N), then ti is so far schedulable.
Semi-partitioned EDF
• Tmin is the minimum period among all the tasks.
• Using a user-defined parameter k, we divide time into slots
of length S = Tmin/k.
• We can use the first-fit approach and split a task into 2
subtasks, one executed on processor m and the
other executed on processor m + 1.
• Execution of a split task is only possible in the reserved
time window of each time slot.
• Apply the first-fit algorithm, taking SEP as the upper
bound on the utilization of a processor.
• If a task does not fit, split it into two subtasks and
allocate a new processor: one subtask is assigned to the
processor under consideration, and the other to the
newly allocated processor.
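A hedged sketch of the splitting step (the function name, the `remaining` argument, and the greedy fill policy are illustrative, not the exact algorithm from the slides): the task fills whatever capacity the current processor still has under the SEP bound, and the overflow carries to the next processor.

```python
def split_task(U, remaining):
    """Split a task of utilization U across a processor boundary: place as
    much as fits (remaining capacity of the current processor, already
    capped by the SEP bound) and carry the rest to the next processor.

    Returns (lo_share, hi_share) with lo_share + hi_share == U.
    """
    lo = min(U, remaining)
    return lo, U - lo

# Current processor has 0.25 capacity left under SEP; a 0.5-utilization
# task is split into 0.25 here and 0.25 on the next processor.
print(split_task(0.5, remaining=0.25))  # (0.25, 0.25)
```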
Semi-partitioned EDF
• For each time slot, we reserve two parts. If a task ti is
split, it can be served only within these two pre-defined
windows, of lengths xi and yi.
• A processor can host two split tasks, ti and tj: ti is
served at the beginning of the time slot, and tj is served
at the end.
• The schedule is EDF, but when a split task instance is in the
ready queue, it is executed in its reserved time region.
Semi-partitioned EDF
• We can assign every task ti with Ui > SEP to a dedicated
processor. So, we only consider tasks with Ui no larger than SEP.
• At run time, the reservation to serve ti is set as
xi = S × (f + lo_split(ti)) and yi = S × (f + high_split(ti)).
• SEP is set as a constant.
Two Split Tasks on a Processor
• For split tasks to be schedulable, the following sufficient
conditions have to be satisfied:
– lo_split(ti) + f + high_split(ti) + f <= 1 for any split task ti
– lo_split(tj) + f + high_split(ti) + f <= 1 when ti and tj are assigned on
the same processor
• These conditions determine the "magic value" SEP.
• However, we still have to guarantee the schedulability of the
non-split tasks. It can be shown that the sufficient condition is
Schedulability Test
Magic Values: f
Magic Values: SEP
Reference
• Multiprocessor Real-Time Scheduling
– Dr. Jian-Jia Chen: Multiprocessor Scheduling. Karlsruhe Institute
of Technology (KIT): 2011-2012
• Global Scheduling
– Sanjoy K. Baruah: Techniques for Multiprocessor Global
Schedulability Analysis. RTSS 2007: 119-128
• Partitioned Scheduling
– Sanjoy K. Baruah, Nathan Fisher: The Partitioned Multiprocessor
Scheduling of Sporadic Task Systems. RTSS 2005: 321-329
• Semi-partitioned Scheduling
– Björn Andersson, Konstantinos Bletsas: Sporadic Multiprocessor
Scheduling with Few Preemptions. ECRTS 2008: 243-252
See You Next Week
Critical Instants?
• The analysis for uniprocessor scheduling is
based on the classical critical instant theorem.
• Synchronous release of events does not lead
to the critical instant for global multiprocessor
scheduling
Editor's Notes
• #8: Global scheduling is supported by most multiprocessor operating systems (Windows NT, Solaris, Linux, ...).
• #10: Task 3 misses its deadline when task 1 increases its period. This can happen for the following schedulable task set (with priorities assigned according to RM): (T1 = 3, C1 = 2), (T2 = 4, C2 = 2), (T3 = 12, C3 = 8). When t1 increases its period to 4, t3 becomes unschedulable, because t3 is saturated and its interference increases from 4 to 6.
• #11: Consider the following schedulable task set (with priorities assigned according to RM): (T1 = 4, C1 = 2), (T2 = 5, C2 = 3), (T3 = 10, C3 = 7). When T3 increases its period to 11, the second instance of T3 misses its deadline, because the interference increases from 3 to 5 and T3 is already saturated.
• #12: One heavy task tk with Dk = Tk = Ck; M light tasks ti with Ci = e and Di = Ti = Ck − e, in which e is a positive number very close to 0.
• #24: DBF(ti, t) <= DBF*(ti, t) < 2 · DBF(ti, t)
• #31: (xi + yi)/S = Ci/Ti
• #32: (xi + yi)/S = Ci/Ti; lo_split(ti) + high_split(ti) = Ci/Ti < SEP