Optimal and Adaptive
Multiprocessor
Real-Time Scheduling:
The Quasi-Partitioned Approach
Γ = { { E.Massa, G.Lima, P.Regnier }, { G.Levin, S.Brandt } }
Real-Time Systems
Andrei Petrov
Padua, October 13th, 2015
Outline
+ Motivation
+ QPS Algorithm
+ Evaluation
+ Conclusion
Introduction
QPS
#optimal
#partitioned proportionate fairness
#preserves space & time locality of tasks
#gentle off-line bin-packing
Motivation
QPS
#on-line adaptation
#efficient switch mechanism P-EDF <--> G-Scheduling Rules
#low preemption and migration overhead
QPS Algorithm
Off-line phase
❏ Major/minor execution partitions
❏ Allocates processors to execution sets

On-line phase
❏ Generates the schedule
❏ Manages the activation/deactivation of QPS servers

Quasi-Partition: the cornerstone in dealing with sporadic tasks
QPS Algorithm /1
System Model
❏ Γ = { τi:(Ri, Di) } for i = 1,…,n, a sporadic task set
❏ independent, implicit-deadline tasks
❏ Ri ≤ 1 execution rate
❏ Di infinite set of deadlines
❏ m identical processors
❏ migrations and preemptions allowed
❏ A job J:(r,c,d) ∈ τi
  ❏ J.r ∈ { 0 } ∪ Di
  ❏ J.d = min{ d ∈ Di, d > J.r }
  ❏ J.c = Ri·(J.d − J.r)
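The job model above can be sketched in a few lines; `next_deadline` and `make_job` are illustrative names, not from the paper.

```python
# Sketch of the job model above. A job released at time r takes the
# next deadline in the task's deadline set D, and an execution budget
# proportional to the task's rate R.

def next_deadline(D, r):
    """J.d = min{ d in D : d > J.r }"""
    return min(d for d in D if d > r)

def make_job(R, D, r):
    d = next_deadline(D, r)
    c = R * (d - r)              # J.c = R * (J.d - J.r)
    return (r, c, d)

# e.g. a task with rate 0.4 and deadlines every 15 time units
print(make_job(0.4, [15, 30, 45], 0))   # (0, 6.0, 15)
```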
QPS Algorithm /2
Definition
❏ A server S is a scheduling abstraction
  ❏ acts like a proxy for a collection of client tasks/servers
  ❏ schedules its clients according to the EDF policy
❏ Fixed-Rate EDF Server S:(RS, DS)
  ❏ RS = Σ_{τ∈T} R(τ)
  ❏ RS ≤ 1
  ❏ DS = ⋃_{τ∈T} D(τ)
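The server parameters above are simple aggregates of the clients'. A minimal sketch, assuming a client is a (rate, deadline-set) pair (this layout is an illustration, not the paper's data structure):

```python
# Minimal sketch of the Fixed-Rate EDF Server parameters above.
# A client is modeled as (rate, deadline_set).

def server_rate(clients):
    """R_S = sum of the client rates; the server must fit one processor."""
    R = sum(rate for rate, _ in clients)
    assert R <= 1, "fixed-rate server rate must not exceed 1"
    return R

def server_deadlines(clients):
    """D_S = union of the client deadline sets."""
    return set().union(*(set(D) for _, D in clients))

clients = [(0.4, [15, 30]), (0.5, [10, 20, 30])]
print(server_rate(clients))               # 0.9
print(sorted(server_deadlines(clients)))  # [10, 15, 20, 30]
```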
QPS Algorithm /3
Definition
❏ A quasi-partition Q(Γ,m) is a partition of Γ on m processors s.t.
1. |Q(Γ,m)| ≤ m
  ➢ restricts the cardinality of execution sets to the number of available processors
2. ∀P∈Q(Γ,m), 0 < R(P) < 2
  ➢ each element P in Q(Γ,m) is
    ○ a minor execution set if R(P) ≤ 1
    ○ a major execution set if R(P) > 1
  ➢ P requires at most two processors to be correctly scheduled
3. ∀P∈Q(Γ,m), ∀σ∈P, R(P) > 1 ⟹ R(σ) > R(P) − 1
  ➢ P is a major execution set
  ➢ P's excess ratio must be less than the ratio of any other server σ in P
  ➢ keystone for QPS's flexibility
N.B. A graphical example follows
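The three conditions can be checked mechanically. A minimal sketch, modeling an execution set simply as a list of server/task rates (a simplification of the paper's server structures):

```python
# Sketch checking the three quasi-partition conditions above.

def is_quasi_partition(partition, m):
    if len(partition) > m:                 # cond. 1: |Q(Gamma,m)| <= m
        return False
    for P in partition:
        RP = sum(P)
        if not 0 < RP < 2:                 # cond. 2: 0 < R(P) < 2
            return False
        # cond. 3: a major set's excess ratio R(P)-1 must be smaller
        # than the rate of every server in P
        if RP > 1 and any(r <= RP - 1 for r in P):
            return False
    return True

# The slide example with tau4's rate raised to 0.3:
print(is_quasi_partition([[0.8, 0.3], [0.6, 0.3]], 2))   # True
print(is_quasi_partition([[0.9, 0.6, 0.4]], 1))          # False: 0.4 <= excess 0.9
```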
QPS Algorithm /4
Example
➢ given 2 processors and a task set P = { τ1:(0.6), τ2:(0.3), τ3:(0.8), τ4:(0.2) }
➔ R(P) = 1.9 ≤ 2
  ◆ 1st condition is OK
  ◆ 2 execution sets, Proc1 and Proc2
  ◆ max(capacity(Proci∈{1,2})) = 1
➔ R(P) = 1.9 > 1
  ◆ 2nd condition is OK
  ◆ more than 1 processor is required to schedule P
➔ let P = {PA, PB} such that
  ◆ PA = { τ3, τ4 }, PB = { τ1, τ2 }
  ◆ R(PA) = 1, R(PB) = 0.9
    ● no excess ratio, fully partitioned
    ● the slack 1 − R(PB) may be an idle task on Proc2
➔ if τ4's ratio is modified to 0.3, the processor system becomes 100% utilized
  ◆ the 3rd condition holds
  ◆ the excess rate (shown in red in the original figure) is less than the rate of all tasks in the Proc1 execution set
    ● otherwise a better re-allocation of tasks exists
QPS Algorithm /5
Definitions
❏ QPS servers
➔ P is a major execution set with rate R(P) = 1 + x, where x is the excess ratio
➔ P = {PA, PB} is a bi-partition
➔ dedicated servers: σA:(R(PA) − x, PA) and σB:(R(PB) − x, PB)
➔ master/slave servers: σM:(x, P) and σS:(x, P)
➢ at any time t, all QPS servers associated with P share the same deadline D(P,t)
➢ each QPS server σ is a Fixed-Rate EDF Server
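The four QPS servers above can be derived from the bi-partition in a few lines. A hypothetical sketch where an execution set is a list of rates and a server is a (rate, label) pair:

```python
# Sketch of deriving the four QPS servers for a major execution set
# P = PA ∪ PB with R(P) = 1 + x. Data layout is an assumption.

def qps_servers(PA, PB):
    x = sum(PA) + sum(PB) - 1        # excess ratio
    assert 0 < x < 1, "P must be a major execution set"
    sigma_A = (sum(PA) - x, "PA")    # dedicated server for PA
    sigma_B = (sum(PB) - x, "PB")    # dedicated server for PB
    sigma_M = (x, "P")               # master server, on the shared processor
    sigma_S = (x, "P")               # slave server, on the dedicated processor
    return sigma_A, sigma_B, sigma_M, sigma_S

# e.g. PA = {0.4}, PB = {0.4, 0.5}: R(P) = 1.3, x = 0.3, so the
# server rates come out (approximately) 0.1, 0.6, 0.3, 0.3
servers = qps_servers([0.4], [0.4, 0.5])
```

Note that the four rates sum back to R(P): (R(PA) − x) + (R(PB) − x) + x + x = R(P).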
QPS Algorithm /6
Fixed-Rate Servers, Master/Slave servers
❏ Γ = { τi:(2/3, 3) } for i = 1,2,3
❏ two processors
❏ P1 = { τ1, τ2 }, P2 = { τ3 }
❏ R(P1) = 4/3, R(P2) = 2/3
❏ the [2,3] time interval denotes parallel execution
QPS Algorithm /7
Off-line phase: Processor Hierarchy
❏ Γ = { σi } for i = 1,2,…,5, a server set on three processors
❏ R(σi) = 0.6
❏ Proc.1 and Proc.2 are dedicated processors
❏ Proc.3 is a shared processor
➢ σ6, σ7 are external servers that reserve computing capacity for the exceeding parts

Lemma IV.1.
Any server set Γ0 with ceiling(R(Γ0)) ≤ m will be allocated no more than m processors
QPS Algorithm /8
On-line phase: Scheduling
❏ servers and tasks are scheduled according to the following rules:
  ❏ visit each processor in reverse order of its allocation
  ❏ select via EDF the highest-priority server/task
  ❏ if a master server is selected, also select its associated slave server
  ❏ for all selected servers, select their highest-priority clients
  ❏ dispatch all selected tasks to execute
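The selection rules above can be sketched as follows; the ready-queue layout and the master-to-(slave, processor) map are assumptions for illustration, not the paper's data structures.

```python
# Minimal sketch of the on-line selection rules. `processors` is a
# list of ready queues in allocation order; `slave_proc` maps a master
# server's name to its slave's (name, dedicated-processor index).

def select(processors, slave_proc):
    chosen = {}
    # visit processors in reverse order of their allocation
    for i in reversed(range(len(processors))):
        if i in chosen or not processors[i]:
            continue                  # already claimed by a slave, or idle
        e = min(processors[i], key=lambda s: s["d"])   # EDF pick
        chosen[i] = e["name"]
        if e["name"] in slave_proc:                    # master selected:
            slave, j = slave_proc[e["name"]]           # drag in its slave
            chosen[j] = slave
    return chosen

# Proc1 is dedicated (sigma_A waiting), Proc2 is shared (sigma_M vs tau4)
procs = [[{"name": "sigma_A", "d": 15}],
         [{"name": "sigma_M", "d": 10}, {"name": "tau4", "d": 20}]]
print(select(procs, {"sigma_M": ("sigma_S", 0)}))
# {1: 'sigma_M', 0: 'sigma_S'}
```

Selecting the master on the shared processor displaces sigma_A on Proc1 in favor of the slave, which is what forces the parallel execution of P's excess.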
QPS Algorithm /9
On-line phase: Scheduling
❏ Γ = { τ1:(6,15), τ2:(12,30), τ3:(5,10), τ4:(3.5,5) }, a set of sporadic tasks
❏ to be scheduled by QPS servers on two processors
❏ all tasks release their first job at time 0
❏ the second job of τ3 arrives at time t = 16, whereas the other tasks behave periodically
❏ Q(Γ,2) = {P1, P2} with P1 = { τ1, τ2, τ3 } and P2 = { τ4 }
❏ R(P1) = 1.3 > 1 ⟹ σA:(0.1, {τ1}), σB:(0.6, {τ2, τ3}), σM:(0.3, P1), σS:(0.3, P1)
QPS Correctness /1
Assumptions: Time Definitions
❏ Δ = [t, t*) is a time interval such that
  ❏ P is a major execution set
  ❏ Δ is a complete EDF interval if the Manager activates P at time t and next activates QPS mode at t*
  ❏ Δ is a complete QPS interval if all tasks in P are active
  ❏ Δ is a QPS job interval if some task in P releases a job at time t and t* = D(Γ,t)
QPS components: the Partitioner, the Manager, the Dispatcher
QPS Correctness /2
Theorem V.1
QPS produces a valid schedule for a set of implicit-deadline sporadic tasks Γ on
m ≥ ceiling(R(Γ)) identical processors
Proof: Let P be a major execution set
Lemma V.1.
Consider a complete QPS interval Δ. If the master server σM
in charge of P is scheduled on its
shared processor so that it meets all its deadlines in Δ, then the other three QPS servers will
also meet theirs on P’s dedicated processor
Lemma V.2.
The individual tasks and servers in P will meet all their deadlines provided that the master server
in charge of P meets its deadlines
Implementation
QPS implemented1 on top of LITMUSRT

#off-line decisions may influence run-time performance
#RUN vs QPS comparison
#empirical evaluation

1 Compagnin D., Mezzetti E., Vardanega T., “Experimental evaluation of optimal schedulers based on partitioned proportionate fairness”, ECRTS 2015
Evaluation /1
QPS vs RUN1 vs U-EDF2

#optimal scheduling algorithms
#the performance of the algorithms was assessed via simulation
#QPS's performance is influenced by the “processor hierarchy”: tasks running on the kth processor may migrate to any of m − k processors

1 Regnier P., Lima G., Massa E., Levin G., Brandt S., “RUN: Optimal Multiprocessor Real-Time Scheduling via Reduction to Uniprocessor”, 32nd IEEE Real-Time Systems Symposium (RTSS), 2011
2 Nelissen G., Berten V., Nelis V., Goossens J., Milojevic D., “U-EDF: An Unfair but Optimal Multiprocessor Scheduling Algorithm for Sporadic Tasks”, ECRTS 2012
Evaluation /2
Developing intuition about run-time overhead (QPS vs RUN vs U-EDF)
❏ given a set of
  ❏ m processors
  ❏ m+1 periodic tasks that fully utilize the m processors
❏ the average hierarchy size is (m − 1)/2
❏ for instance, with m = 5:
  ❏ m+1 = 6 periodic tasks
  ❏ the average hierarchy size is (5 − 1)/2 = 2
❏ in the worst case, the hierarchy has as many levels as there are available processors (i.e. 5 levels for m = 5)
❏ bin over-packing propagation phenomenon
Evaluation /3
Continuing the case study (QPS vs RUN vs U-EDF)
❏ as the size of the task set grows
  ❏ for example, from m+1 tasks to 2m
❏ the run-time overhead drops, owing to
  ❏ lighter tasks
  ❏ a more linear task-set partitioning
  ❏ EDF mode, which behaves like Partitioned EDF
Conclusion
QPS
❏ compares favorably with other state-of-the-art global schedulers
  ★ while showing a manageable run-time overhead
❏ outperforms similar schedulers
  ★ when the processor hierarchy is small
  ★ in the presence of a fully partitioned task system (i.e. P-EDF behavior)
➔ the off-line phase may induce significant run-time overhead
  ◆ inter-server coordination may be non-trivial
❏ offers a simple server-based scheduling abstraction model
  ★ dynamic adaptation as a function of system load variations
  ★ parallel execution needs are assured by the master/slave relation
Future Work
❏ extend QPS to broader problem constraints
  ❏ arbitrary deadlines, heterogeneous multiprocessors, etc.
❏ explore different adaptation strategies that extend the current master/slave abstraction model and enable less expensive inter-processor communication mechanisms
❏ improve the quasi-partitioning implementation
  ❏ extend it with different implementations
  ❏ evaluate the performance differences between them
