An Efficient Implementation of Slot Shifting
Algorithm based on deferred Update
L.J. GokulVasan
Chair of Real-Time Systems
TU-KL, Germany
Email: lakshama@rhrk.uni-kl.de
Dr. Mitra Nasri
Max Planck Institute for Software Systems
Kaiserslautern, Germany
Email: mitra@mpi-sws.org
Prof. Dipl.-Ing. Dr. Gerhard Fohler
Chair of Real-Time Systems
TU-KL, Germany
Email: fohler@eit.uni-kl.de
Abstract
The slot-shifting algorithm provides a mechanism to accommodate event-triggered tasks alongside periodic tasks.
Slot shifting uses the residual bandwidth of the periodic tasks, called spare capacity, to improve the chances of
admitting aperiodic tasks. In contrast to bandwidth servers, slot shifting preserves the unoccupied capacity of the
system by shifting the spare capacity into the future.
Slot shifting uses a discrete time model whose units are termed slots. It assumes a global time whose progression
is driven by equidistant events detected by an independent external observer. At the beginning of each slot the
scheduler is invoked to determine which job of which task to schedule in that slot; the scheduler also updates
the spare capacity of the system. Invoking the scheduler in every slot is computationally expensive and introduces
considerable overhead. In this work we present an approach that removes slots and replaces them with a larger
quantity that is naturally related to the release times and deadlines of the jobs. Moreover, we provide a method for
deferred updates of the spare capacities that reduces the runtime computational complexity while preserving the
notion introduced by slot shifting. We show that this new approach reduces the scheduling overhead of slot shifting
by around 60% in the best case and 45% in the average case.
Finally, the work provides a new approach to guarantee firm aperiodic tasks. This new solution has a time
complexity of O(1) per interval, compared to the existing (slot-shifting) algorithm, whose complexity is O(n) per interval,
where n is the number of jobs that belong to an interval.
I. Introduction
Real-time systems must fulfill a twofold constraint in order to be considered functionally correct: first, they
must process information and produce correct output behaviour; second, their interaction with the
environment must happen within stringent timing constraints dictated by that environment. In contrast
to many other systems, real-time systems are thus not primarily optimized for speed (in the sense of maximizing
throughput), but to provide determinism and worst-case guarantees.
To ensure meeting their strict timing constraints, real-time systems utilize real-time scheduling algorithms. A
scheduling algorithm determines the execution order of the jobs of the workload on the processors of the system.
There exist many different classification schemes for scheduling algorithms, e.g., depending on the method to prioritize
different jobs, or depending on the moment in time when scheduling decisions are made. An important classification
of scheduling algorithms is time-triggered and event-triggered algorithms. Event-triggered algorithms are based on a
set of rules that are used at runtime of the system to make the scheduling decisions. In contrast to this, time-triggered
algorithms determine the execution order of the jobs statically, i.e., prior to the runtime of the system. Static scheduling
has been shown to be appropriate for a variety of hard real time systems mainly due to the verifiable timing behaviour
of the system and the complex task models supported. Another classification of scheduling algorithms is established
on the criterion by which priorities are assigned to the tasks. If each task is assigned a fixed priority that does not
change at runtime from job to job of the same task, then this is called fixed (task) priority scheduling. If the priority
of jobs of the same task may change over time or from job to job, this is called dynamic (task) priority scheduling.
Most real-time applications have both periodic and aperiodic tasks. Typically, periodic tasks are time driven
and execute the demanding activities with hard timing constraints aimed at guaranteeing regular activation rates.
Aperiodic tasks are usually event driven and may have hard, soft, or non real time requirements depending on the
specific application. When dealing with hybrid task sets, the main objective of the system is to assure the schedulability
of all guaranteed tasks in worst case conditions and provide good average response times for soft and non-realtime
activities. Offline guarantee of event driven aperiodic tasks with critical timing constraints can be done only by making
proper assumptions on the environment, i.e., by assuming the maximum arrival rate for each critical event. If the
maximum arrival rate of some event cannot be bounded a priori, the associated aperiodic task cannot be guaranteed
statically, although an online guarantee of individual aperiodic requests can still be done. Aperiodic tasks requiring
online guarantee on individual instances are called firm. Whenever a firm aperiodic request enters the system, an
acceptance test can be executed by the kernel to verify whether the request can be served within its deadline. If such
a guarantee cannot be given, the request is rejected. The rejection of a firm aperiodic task does not affect the
system's predictability or deterministic behaviour and does not cause a safety hazard for the system.
A. Related Work
Server algorithms for fixed priority scheduling [2, 3, 4], as well as for dynamic priority scheduling [5, 6], aim at
reserving a fraction of the processor bandwidth to the aperiodic jobs. The server algorithms introduce an additional
periodic task, the server task, into the schedule. The purpose of the periodic server task is to service aperiodic requests
as soon as possible. Like any periodic task, a server is characterized by a period Ts and a computation time Cs called
server capacity.
Polling Server (PS) [2]. At each period Ts the server becomes active and serves pending aperiodic requests with its capacity Cs.
If no aperiodic requests are pending, PS suspends itself until the beginning of its next period, and the time originally
allocated for aperiodic service is not preserved for aperiodic execution but is used by periodic tasks. If no aperiodic
task arrives, the capacity is wasted. Note that if an aperiodic request arrives just after the server has suspended itself, it must
wait until the beginning of the next polling period, when the server capacity is replenished to its full value. The server
is scheduled under Rate Monotonic scheduling.
Deferrable Server (DS) [2, 3]. The DS algorithm creates a periodic task (usually having a high priority) for servicing
aperiodic requests. DS preserves its capacity if no requests are pending upon the invocation of the server. The capacity
is maintained until the end of the period, so that aperiodic requests can be serviced at the server's priority
at any time, as long as the capacity has not been exhausted. At the beginning of each server period, the capacity is
replenished to its full value.
Priority Exchange [2, 3]. The algorithm uses a periodic server (usually at a high priority) for servicing aperiodic
requests. The server preserves its high-priority capacity by exchanging it for the execution time of a lower-priority
periodic task. At the beginning of each server period, the capacity is replenished to its full value. If aperiodic requests
are pending and the server is the ready task with the highest priority, then the requests are serviced using the available
capacity; otherwise Cs is exchanged for the execution time of the active periodic task with the highest priority. When a
priority exchange occurs between a periodic task and server, the periodic task executes at the priority level of the server
while the server accumulates a capacity at the priority level of the periodic task. Thus, the periodic task advances its
execution, and the server capacity is not lost but preserved at a lower priority.
Slack Stealing [4]. The Slack Stealing algorithm does not create a periodic server for aperiodic task service. Rather
it creates a passive task, referred to as the Slack Stealer, which attempts to make time for servicing aperiodic tasks
by ”stealing” all the processing time it can from the periodic tasks without causing their deadlines to be missed. The
main idea behind slack stealing is that, typically, there is no benefit in completing the periodic tasks early. When
an aperiodic request arrives, the Slack Stealer steals all the available slack from periodic tasks and uses it to execute
aperiodic requests as soon as possible. If no aperiodic requests are pending, periodic tasks are scheduled normally by
the rate monotonic algorithm.
Constant Bandwidth Server [5]. When a new job enters the system, it is assigned a suitable scheduling deadline
(to keep its demand within the reserved bandwidth) and it is inserted in the EDF ready queue. If the job tries to
execute more than expected, its deadline is postponed (i.e., its priority is decreased) to reduce the interference on
the other tasks. Note that by postponing the deadline, the task remains eligible for execution. In this way, the CBS
behaves as a work-conserving algorithm, exploiting the available slack in an efficient way.
Total Bandwidth Server [6]. Deadlines are assigned in such a way that the overall processor utilization
of the aperiodic load never exceeds a specified maximum value. Each time an aperiodic request enters the system,
the total bandwidth of the server is immediately assigned to it, whenever possible. Once the deadline is assigned, the
request is inserted into the ready queue of the system and scheduled by EDF like any other periodic instance. When the
k-th aperiodic request arrives at time t = rk, it receives the deadline

dk = max(rk, dk−1) + Ck / Us

where Ck is the execution time of the request and Us is the server utilization factor (which is the bandwidth).
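As a brief illustration of the deadline assignment above, the following C sketch computes the TBS deadline for a sequence of requests. The function and parameter names (tbs_deadline, server_utilization) are our own illustrative choices, not part of any cited implementation.

#include <stdio.h>

/* Sketch of the Total Bandwidth Server deadline assignment:
 * d_k = max(r_k, d_{k-1}) + C_k / U_s.
 * Times are in arbitrary units; all names are illustrative. */
static double tbs_deadline(double arrival, double exec_time,
                           double prev_deadline, double server_utilization)
{
    double start = arrival > prev_deadline ? arrival : prev_deadline;
    return start + exec_time / server_utilization;
}

int main(void)
{
    double d_prev = 0.0;                 /* d_0 = 0                       */
    double us = 0.25;                    /* server bandwidth U_s          */
    double arrivals[] = { 3.0, 4.0, 10.0 };
    double costs[]    = { 1.0, 2.0, 1.0 };

    for (int k = 0; k < 3; k++) {
        d_prev = tbs_deadline(arrivals[k], costs[k], d_prev, us);
        printf("request %d: deadline %.1f\n", k + 1, d_prev);
    }
    return 0;
}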
As can be noticed, bandwidth servers need an explicit bandwidth allocation. The main drawback of this approach is
that a substantial amount of the CPU utilization might be reserved for future aperiodic jobs that do not necessarily
arrive. Another drawback is that using a server algorithm can result in a lower schedulability bound of the system.
To the best of our knowledge, it has not been shown how to combine server algorithms with arbitrary time-triggered
scheduling tables. Even though bandwidth is allocated, servers do not provide a guarantee for an aperiodic task whose execution
time exceeds Cs. Methods like slack stealing do try to admit aperiodic tasks without explicit bandwidth, but they
do not guarantee the admittance of firm aperiodic tasks. Moreover, the bandwidth utilisation achieved by servers
tends to be low, because of the random arrival nature of the events.
The slot-shifting algorithm combines the benefits of both time- and event-triggered scheduling. Slot
shifting resolves the complex constraints of the task set by constructing a static offline scheduling table.
Similar to the aforementioned slack stealing algorithm, slot shifting expresses the leeway of tasks in this derived offline
schedule by spare capacities. At runtime, slot shifting performs acceptance tests for the individual jobs
of aperiodic tasks and integrates them feasibly into the schedule. These advantages, however, come with overhead on the
system. This paper addresses one of the prominent overheads, namely slots. We remove this notion entirely,
while still preserving the idea of slot shifting, leading to a new algorithm.
We also present a new guarantee algorithm that has much lower latency, is easier to implement, and needs less
data space for its computation than the existing slot-shifting algorithm.
B. Organization of the report
The work is organized as follows: Section 2 provides the system model and notation, followed by a
description of the slot-shifting algorithm. Section 3 provides an elaborate version of the problem statement, which
helps in establishing the solution. The problem statement is purposefully deferred until after the algorithmic explanation of
slot shifting, because this provides a deeper understanding of the presented complication. Section 4
provides the proposed solution; the solution is presented incrementally, with the problems behind each phase, enabling the reader
to understand the real intuition behind the final solution. Section 5 provides the experimental results, followed
by the conclusion and future work.
II. System Model and Background
Task set. Consider a real-time system with n periodic tasks, τ = {τ1, ..., τn}. Each task τi has a worst-case execution
time (WCET) Ci, a period Ti, an initiation time or offset relative to some time origin, ϕi, where 0 ≤ ϕi ≤ Ti, and a
hard deadline (relative to the initiation time), di. The parameters Ci, Ti, ϕi, di are assumed to be known deterministic
quantities. We require that the tasks be scheduled according to the Earliest-Deadline-First (EDF) scheduling algorithm,
with τ1 having the highest priority and τn the lowest; however, we do not require this priority assignment to
hold, but we do assume that di ≤ Ti.
A periodic task, say τi, generates an infinite sequence of jobs. The k-th such job, Ji,k, is ready at time
RJi,k = ϕi + (k−1)Ti, and its CJi,k units of required execution must be completed by time
dJi,k = ϕi + (k−1)Ti + di, or else a timing fault occurs, causing the job to terminate. We assume a fully preemptive system.
We now introduce the aperiodic tasks, γk, k ≥ 1. Each aperiodic job γk has an associated arrival time αk, a
processing requirement pk, and an optional hard deadline Dk. A task that arrives with such a hard deadline is called a
firm aperiodic job, Γ, and a task with no hard deadline is called a soft aperiodic job, ζ; if the aperiodic job does not have
a hard deadline, we set Dk = ∞. The aperiodic jobs are indexed such that 0 ≤ αk ≤ αk+1, k ≥ 1. We assume that the
aperiodic job sequence is not known in advance. On arrival of a firm aperiodic job Γ, the job is temporarily placed on a list before the acceptance
test is applied to it. The list that holds the set of firm aperiodic jobs waiting to be accepted is called the unresolved list, ιA.
The list holding the set of γk which could not be guaranteed and are not ζ is called the not-guaranteed list, ιS; a job from this list
is named Ji,k,S. The list holding the set of jobs that are ready to run is called the ready list, ιξ. Irrespective of the list, the job
that is selected and running in the system is called the current job, Ji,k,curr.
Interval. A layer of certainty is added around the guaranteed jobs, called an interval, defined as
follows. Each interval has an id, i. The end of an interval is ei = dJi,k; the end of an interval determines the owner or
nativity of the jobs, and there can be more than one job associated with an interval. The earliest start time of an
interval is ξi = min(RJi,k), i.e., the minimum of the release times of the jobs that belong to the interval. The start of an
interval is si = max(ei−1, ξi). The spare capacity ωi of an interval is defined by

ωi = |ωi| − Σ_{Ji,k ∈ Ii} CJi,k + min(0, ωi+1),    (1)

where |ωi| denotes the length of the interval, ei − si.
The spare capacity and interval calculation can create an interval with negative spare capacity. This can create a
consecutive backward wave of negative spare-capacity intervals, until an interval is reached that can completely satisfy the
capacity needed. This ripple effect, caused by the backward propagation of negative spare capacity, forms a
relation among these intervals, denoted as a relation window, ϱi (diagrammatic representation in Fig. 1). The first
interval of this relation window is named the lender, li, and the last in the relation window is named lent-till, bi. If an interval
is not part of any relation window then li = bi = ∞. There can be many such relation windows in a system within its
hyper-period.
Fig. 1: Relation window
In general, each interval Ii is represented as a set Ii = {i, s, e, ω, l, b}. Each element within the set is accessed using
the member access operator "." (dot operator¹).
The interval progression is closely associated with time; the interval associated with the current time is called the
current interval, Icurr. The interval to which a job Ji,k belongs is named IJi,k. The list of intervals is simply named
I. Each job holds a reference to the interval to which it belongs.
In general, each job Ji,k is represented as a set Ji,k = {C, T, ϕ, d, I}. Each element within the set is accessed using
the member access operator "." (dot operator¹).
The system widely uses the notion of a list. A list can be described as an enumerated collection of objects.
Like a set, it contains members; unlike a set, the elements have a position. The position of an element in a list is its rank
or index, and the number of elements is called the length of the list. Formally, a list can be defined as an abstract
data type whose domain is either the set of the natural numbers (for infinite sequences) or the set of the first n natural
numbers (for a list of finite length n). The logical behaviour is defined by a set of values and a set of operations:
dequeue() removes the element with rank one and queue() adds an element at the appropriate rank. The list keeps
track of the rank of the currently accessed element. The traversal of the list from the current rank i to i−1 is called
the prev operator; similarly, the traversal from the current rank i to i+1 is done through the nxt operator. In this work,
behavioural access to the list is done through the symbol "." (dot operator¹).
During the online phase, a timer λ is triggered based on the parameter λexpiry, which in the case of slot shifting is
a fixed relative period called a slot. A slot is defined as follows: an external observer (λ) counts the ticks of the globally
synchronized clock with a granularity of the slot length |s|, i.e., λexpiry ← |s|, and assigns numbers from 0 to ∞ to every slot,
called the slot count scnt. We denote by "in slot i" the time between the start and end of slot i, i.e., the time interval
[|s| · i, |s| · (i + 1)]. Slots have uniform time length, and on expiry of the timer the function schedule is triggered.
A. Background
We now sketch the existing version of the slot-shifting algorithm. Describing the existing solution enables a hand-in-hand
comparison of the slot-shifting algorithm with the proposed one, giving intuition for the problem and
justifying the proposed solution.
The slot-shifting algorithm works in two phases, namely an offline phase and an online phase. In the offline phase, the periodic
tasks are broken into their corresponding jobs and a layer of certainty is added around the jobs through intervals; then
the spare capacity of each interval is calculated with equation (1) (the example below gives a diagrammatic interpretation).
This yields a table of jobs and corresponding intervals.
1same as ANSI C’s dot operator
Example: exemplification of the spare capacity calculation²
The intervals are created by applying the formulation ei = dJi,k, which gives the end of an interval; the start
of an interval is then calculated using si = max(ei−1, ξi). The created intervals are assigned unique ids:
the 1st interval gets id 1, the 2nd interval id 2, and so on, i.e., a single integer increment is assigned to each
consecutive interval, starting with the value 1 for the 1st interval up to the last interval. Along with the id assignment, the initial
spare capacity is set to the length of the interval; then equation (1) is applied as below.
The table provides a rudimentary exemplification of applying equation (1) with some fictitious values. The figure
below the table gives an equivalent pictorial view of the table. We presume that such an approach assists the
reader in better understanding the intuition behind the algorithm.
TABLE I: Calculation of spare capacity

        Interval-id (Ii)
step |  1 |  2 |  3 |  4 | equation applied
  1  |  2 |  1 |  1 | -3 | Tωi ← |ωi| − Σ CJi,k
  2  |  1 | -1 | -2 | -3 | ωi ← Tωi + min(0, ωi+1)
Fig. 2: Calculation of spare capacity
The example gives a clear picture of the ripple of negative intervals. The graphical representation shows that I1 is not the
only lender interval; rather, many intermediate intervals partially satisfy the computational need of I4. Besides having many
lenders, a relation window might also have multiple borrowers, or a combination of both.
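The backward pass of equation (1) over a table such as TABLE I can be phrased as a short C routine. The sketch below reuses the interval and job structures sketched in the system-model section, initializes each spare capacity with the interval length minus the native demand, and then absorbs min(0, ωi+1) while walking backwards; it is an illustration of the offline calculation, not code from an existing implementation.

/* Offline spare-capacity calculation, equation (1):
 *   omega_i = |I_i| - sum of C over the jobs native to I_i + min(0, omega_{i+1}).
 * 'last' points to the last interval of the table; traversal is backwards. */
static void offline_spare_capacity(struct interval *last)
{
    for (struct interval *it = last; it != NULL; it = it->prev) {
        long demand = 0;
        for (struct job *j = it->jobs; j != NULL; j = j->nxt)
            demand += j->C;

        it->omega = (it->e - it->s) - demand;   /* step 1: length minus native demand */
        if (it->nxt && it->nxt->omega < 0)
            it->omega += it->nxt->omega;        /* step 2: absorb the borrow of the successor */
    }
}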
The same approach of assurance is provided during online phase for accepted firm aperiodic tasks.
Interval progression. During the online phase, as time progresses, the end of the current interval is checked against the current
time, and the current interval is advanced to the next interval when needed. This is represented algorithmically as the function update-slot.
Spare capacity update. For each slot, a job is selected based on EDF. The selected job is checked for the interval to
which it belongs. If the job belongs to the current interval, the spare capacity of the interval is left untouched, since the
needed computation was already accounted for either during the offline phase (in the case of periodic tasks) or the online phase (in the case of
accepted and guaranteed firm aperiodic tasks). If the job does not belong to the current interval, the spare capacity of
the current interval is decreased by one slot and the spare capacity of the job's interval is tested for negativity. If it is negative,
then along with the job's interval, all the consecutively negative intervals going backwards have one slot added to their spare capacity, until
a positive interval is reached. If the job's interval is positive, it is simply increased by one slot. This process is functionally
represented in the algorithm labelled update-sc. This functionality repeats itself in every slot.
1) Neutralisation effect: On executing the spare capacity update in every slot, an effect is observed when the current
interval Icurr and the interval of the job Ji,k,curr are within the same relation window ϱi and IJi,k.ω is negative. This effect is named the
neutralisation effect. The observation is as follows:
1. If Ji,k,curr's interval is not Icurr, we first decrease Icurr by one slot.
2. Then we go to interval IJi,k, test its spare capacity for negativity, and add one slot to it.
3. Now we traverse backwards over each consecutive negative interval and add one slot to each. This is repeated until the
first positive interval is reached, and one slot is added to it as well. The first positive interval in this case is Icurr, so we
neutralise what we negated in step 1.
This effect is important because it leads to a rule of application in the deferred update, explained later in this paper, called the
neutralisation rule.
Example: exemplification of the neutralisation effect²
Let us assume IJi,k,curr = I4 and Icurr = I1, so both the current job's interval and the current interval are within the same relation
window ϱi. The following is an elementary view of the intervals' spare capacities before Ji,k,curr is scheduled, with
fictitious values.
Fig. 3-(a): Initial state before Ji,k,curr is scheduled.
Fig. 3-(b): First, the job's interval nativity is checked. In this scenario the job does not belong to the
current interval, so one slot is negated from Icurr, i.e., from I1, and I1's spare capacity becomes 0.
Fig. 3-(c): Next, the job's interval I4 is tested for negativity. In this scenario it is negative, i.e., "-3". Irrespective
of the negativity, the job's interval is incremented by one, making I4 = −2.
Fig. 3-(d): Since the job's interval tested negative in the negativity check, all the consecutive preceding intervals, up to and
including the first positive interval, are incremented by one, i.e., intervals I3, I2 and I1 are incremented by one.
Fig. 3: Neutralisation effect
As can be noticed, interval I1 is negated and incremented by the same value at the same point in time, causing the neutralisation effect on
Icurr, i.e., I1.
²The illustration provided is from an interval perspective, because the paper reasons about the problems from an abstract interval viewpoint.
Algorithm 1: Slotshifting Online Phase
1 Function schedule(Ji,k,Curr);
Input : Ji,k,Curr: job previously selected
Output: Ji,k,nxt: next selected job
2 update-sc(Ji,k,Curr, I) ◃ update the interval;
3 if ιA ≠ ∅ then
4 Test-aperiodic(ιA) ◃ Test and try Guaranteeing firm Aperiodic’s if any;
5 end
6 Ji,k,nxt ←selection-fn() ◃ Get next eligible job;
7 update-slot();
8 return Ji,k,nxt;
9 Function update-sc(Ji,k,Curr, I);
Input : Ji,k,Curr: previous selected job, I: Interval list
Output: ∅
Result: Spare Capacity of the respective intervals get updated
10 if Ji,k,Curr.I == Icurr then
11 return;
12 end
13 Icurr.ω ← Icurr.ω − 1;
14 Itmp ← Ji,k,Curr.I;
15 while Itmp do
16 if Itmp.ω ≥ 0 then
17 Itmp.ω ← Itmp.ω + 1;
18 break;
19 end
20 else
21 Itmp.ω ← Itmp.ω + 1;
22 Itmp ← Itmp.prev;
23 end
24 end
25 Function update-slot();
Input : ∅
Output: ∅
Result: scnt gets added by 1; also check and move Icurr to I.nxt
26 scnt ← scnt + 1;
27 if scnt ≥ Icurr.end then
28 Icurr ← Icurr.nxt;
29 end
30 Function Test-aperiodic(ιA, I);
Input : ιA: Unconcluded firm aperiodic list;
I: list of intervals
Output: ∅
Result: Job in ιA is moved to either ιG or ιS
31 while ιA ̸= None do
32 Γ ← ιA.dequeue();
33 if Iacc ← Acceptance-test(Γ, I) ̸= None then
34 Guarantee-job(I, Iacc, Γ);
35 ιG.queue(Γ)
36 end
37 else
38 ιS.queue(Γ)
39 end
40 end
1 Function Acceptance-test(Γ, I);
Input : Γ: Unconcluded firm aperiodic job;
I: list of intervals
Output: Iacc: interval in which task was accepted or None
2 Itmp ← Icurr;
3 ωtmp ← 0;
4 while Itmp.end ≤ Γ.d do
5 if Itmp.ω > 0 then
6 ωtmp ← Itmp.ω + ωtmp;
7 end
8 Itmp ← Itmp.nxt;
9 end
10 if ωtmp ≥ Γ.C then
11 return Itmp.prev;
12 end
13 else
14 return None;
15 end
16 Function Guarantee-Job(I, Iacc, Γ);
Input : Γ: Unconcluded firm aperiodic job;
I: list of intervals;
Iacc: interval at which Γ.d falls
Output: None
Result: Γ gets guaranteed
17 if Iacc.end > Γ.d then
18 INew = split-intr(Iacc, Γ.d);
19 end
20 Iiter ← Iacc;
21 while Iiter.id ≥ Icurr.id do
22 Jlst ← Iiter.Jlst;
23 ω ← 0;
24 J ← Jlst.head;
25 while J do
26 ω ← ω + J.c;
27 J ← J.nxt;
28 end
29 Iiter.ω ← (Iiter.e − Iiter.s) − ω + min(0, Iiter.nxt.ω);
30 Iiter ← Iiter.prev;
31 end
III. Problem Statement
In real-time systems, scheduling algorithms are required to improve the use of the computational resources of the
system. Thus, these scheduling algorithms must feature low overheads to leverage as much of the computational resources as
possible for the applications at runtime. On the one hand, they must provide a satisfactory level of predictability and
worst-case guarantees for the tasks. On the other hand, they are expected to provide flexibility to react to aperiodic
tasks. Last but not least, it is desirable that these algorithms are able to handle increasingly complex
constraints between interacting applications. In this section we provide a detailed overview of the overheads in the
current implementation of slot shifting.
• Problem 1: Updating spare capacity
The function update-sc is applied in every slot. The problem arises when a job that does not belong to the current
interval is executing in the current interval and the spare capacity of the job's interval is negative. If this is the
case, we need to traverse backwards through all the consecutive negative intervals and increment their spare capacities
until we reach either a positive interval or the current interval.
The function update-sc becomes a consistent overhead when:
* the currently executing job's interval is not the current interval and the job interval's spare capacity is a large negative
number. A large negative spare capacity intuitively means that the amount borrowed from the lender interval is large;
* the interval to which the job belongs is far away from the lender, i.e., there are many intervals between the lender and the
job's interval;
* the overhead is further compounded when the current job is the only job selected in the next consecutive slots
of the current interval, which is a normal scenario under slot-level EDF. In other words, once the backward
spare-capacity update procedure is started between intervals due to a negative spare capacity of the job's interval, the
probability of repeating it in the next consecutive slots, until the job's interval's spare capacity becomes positive, is almost
1.
• Problem 2: Slots
Although there exist systems, such as avionics systems [29], that are based on the notion of slots, most existing
real-time systems do not use slots. The notion of slots adds the following overheads to the system:
* It increases the number of times a scheduling decision needs to be made.
* The worst-case execution time of a task is calculated based on

Ci = ⌈ (calculated upper bound of the job's execution time) / (slot length) ⌉    (2)

In the above equation, we notice that the computation time of a job is rounded up to the ceiling to fit the
notion of a slot. There may be cases where this rounding takes more time from the system than is actually needed by the
tasks. This approximation could have been part of the time needed to accept a firm aperiodic task during the online phase,
causing an admittance failure.
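To see how much budget the ceiling in equation (2) can inflate, the small C example below converts a hypothetical execution-time bound into slots and reports the over-reservation; the numbers are made up for illustration.

#include <stdio.h>

/* Equation (2): C_i = ceil(upper bound of job execution time / slot length).
 * The difference C_i * slot_length - bound is time reserved but never needed. */
int main(void)
{
    const long slot_length_us = 5000;    /* 5 ms slots, as in the evaluation section */
    const long bound_us       = 12300;   /* hypothetical execution-time upper bound  */

    long slots  = (bound_us + slot_length_us - 1) / slot_length_us;   /* ceiling */
    long wasted = slots * slot_length_us - bound_us;

    printf("C_i = %ld slots, over-reservation = %ld us per job\n", slots, wasted);
    return 0;
}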
• Problem 3: Acceptance and guarantee algorithm
When an aperiodic task arrives and gets accepted, it needs to be guaranteed. As described, the guarantee algorithm
creates an interval if needed and applies equation (1) starting from the last interval in which the job was accepted back to the
current interval. The traditional spare capacity calculation might be helpful in the offline phase for the initial spare capacity
calculation, but in the online phase this calculation adds the following complications:
* It needs more data space. Both job and interval need to hold references³ to each other, needing an unnecessary
additional place holder in either the interval or the job.
* Implementation complexity. It adds additional difficulty in implementation, because along with the traversal of
intervals, we also need to traverse through the jobs of each interval.
* More latency. It increases runtime complexity due to the application of equation (1) to calculate the spare capacity for each
interval.
* Reduced predictability. The calculation of the spare capacity of each interval takes a different runtime. The
calculation runtime per interval is directly proportional to the number of jobs associated with the interval; in other
words, the complexity of calculating the spare capacity per interval is not constant, reducing the determinism of the
system.
IV. Proposed Solution
In this section we define a new algorithm inspired by slot shifting, with the mentioned problems addressed. To
make the exposition easier, from here on we call the existing algorithm the traditional algorithm, so that we can seamlessly distinguish
between the existing algorithm and the proposed one.
A. Deferred Update of Spare Capacity.
As explained in Problem 1 of Section 3, the function that updates the spare capacity every slot has considerable
overhead. The complexity of the spare capacity update is O(n) per slot, where n is the computational cost involved
in traversing the intervals backwards until the first positive interval. The whole update procedure is a redundant, repeated
procedure that occurs in each slot; this aggregates the complexity of the spare capacity update for an interval to O(n · i),
where i is the number of slots within the interval.
From this observation, we infer that accumulating the spare capacity updates of the i slots and applying them later (when
necessary, i.e., either at the end of the interval or on aperiodic task arrival) curtails the redundancy of the update
procedure and thus reduces the computation to O(n) per interval from O(n · i). This procedure of accumulating and
updating the spare capacity on a need basis is termed the deferred update.
Offline and preparation phase. To make the deferred update work, the offline phase is extended with an additional
computation that interweaves the references of the relation window, i.e., if a relation window ϱ exists between
a set of intervals, then each interval in ϱ has its lender field l and lent-till field b assigned the respective references of the
lender and lent-till intervals³.
1) O(1) update of spare capacity: Additionally, every interval is extended with an additional field named update-val, u. The
update-val is updated during the online phase through the algorithm termed O1-Update-SC, described below.
Algorithm 1: Slot based O1-Update-SC
1 Function o1-update-sc(Ji,k,curr, I);
Input : Ji,k,curr: Previous selected job, I: Interval table
Output: None
Result: Checks and Updates only spare capacity of job and current interval
2 if Ji,k,curr.I == Icurr then
3 return;
4 end
5 tskω ← Ji,k,curr.I.ω ◃ take a local copy of task’s spare capacity before updating;
6 if Ji,k,curr.I.ω < 0 then
7 Ji,k,curr.I.u ← Ji,k,curr.I.u + 1;
8 end
9 Ji,k,curr.I.ω ← Ji,k,curr.I.ω + 1;
10 /* neutralisation rule: 1. check whether the job's lender and the current interval's lender are the same;
11 2. also check whether the job's interval was negative before the update;
12 if both conditions are satisfied, simply do not negate Icurr;
13 */;
14 if Ji,k,curr.I.l == Icurr.l and tskω < 0 then
15 return;
16 end
17 Icurr.ω ← Icurr.ω − 1;
The O1-Update-SC described above is triggered at the end or beginning of every slot. Note that the algorithm has
no loops; it just checks three conditions to update the spare capacity and update-val of the job's interval, or just the current
interval's spare capacity. A verbose explanation of the algorithm is as follows:
• Check whether the spare capacity of the job's interval, ωj, is negative before updating; if it is, then along with ωj also update the
job's interval's update-val, uj.
• Neutralisation rule. Check whether the lender of the job's interval, lj, and the lender of the current interval, lcurr, are the same, and also check whether ωj
was negative before updating. If both conditions are satisfied, then do not negate ωcurr.
In any given scenario, the complexity of the spare capacity update per slot is now O(1).
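A compact C rendering of the slot-based O1-Update-SC makes the two checks explicit. It reuses the interval and job structures sketched earlier and is only a sketch of the pseudocode above, not a reference implementation.

/* Slot-based O1-Update-SC: O(1) bookkeeping per slot.
 * 'curr' is the current interval, 'job' the job that just ran for one slot. */
static void o1_update_sc(struct job *job, struct interval *curr)
{
    if (job->I == curr)
        return;                          /* native job: nothing to account for   */

    long omega_before = job->I->omega;   /* spare capacity before the update     */

    if (omega_before < 0)
        job->I->u += 1;                  /* remember the deferred +1 for later   */
    job->I->omega += 1;

    /* Neutralisation rule: if both intervals share the same lender and the
     * job's interval was negative, the -1/+1 on the current interval would
     * cancel anyway, so skip the decrement. */
    if (job->I->l == curr->l && omega_before < 0)
        return;

    curr->omega -= 1;
}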
2) Naive deferred update: The above-mentioned algorithm O1-Update-SC is just a facilitator of the deferred-update
algorithm. The real spare capacity update of the intervals happens during the deferred update. The function deferred-update
gets triggered under 2 conditions:
• when Icurr is moving to the next interval;
• when an aperiodic task arrives and needs to be tested for guarantee.
To understand the intuition behind the working deferred update, we now sketch a non-working algorithm, which we
call naive-deferred-update, and explain certain challenges and scenarios that defy the straightforward
notion of deferring the update of the intervals within ϱi. We presume this approach provides good insight into the
complete deferred-update solution.
³References are a means to directly access the specified object without any traversal or computational overhead, similar to ANSI C's
pointer notion.
Algorithm 2: naive-deferred-update
1 Function naive-deferred-update(I);
Input : I: Interval list
Result: spare capacity of the interval within ϱi is updated with right data
2 lentTill ← Icurr.b;
3 updateV al ← 0;
4 while lentTill ̸= Icurr do
5 lentTill.ω ← lentTill.ω + updateV al ◃ Update the interval with previous interval’s deferred spare capacity;
6 updateV al ← updateV al + lentTill.U ◃ Accumulate previous intervals within ϱi update-val ;
7 lentTill.U ← 0;
8 lentTill ← lentTill.prev;
9 end
To brief on Algorithm 2 (naive-deferred-update) above: the algorithm traverses backwards, starting from lent-till, bi, towards Icurr.
As it moves backwards it does the following:
• It accumulates the update-vals, Ui, of the intervals within ϱi generated during O1-Update-SC.
• As it progresses backwards, before accumulating the update-val Ui of the current lent-to interval, it adds the
previously accumulated Ui to that interval's spare capacity ω.
Symbolizing the functions. To make the explanation easier, we symbolize certain functions. The function
Update-SC is symbolized as IJi,k → Icurr, where Icurr is the current interval that is executing a job belonging to
the interval IJi,k and the function Update-SC is applied. We symbolize the result as ωi, the spare capacity of
interval i; the subscript i representing the interval is made optional, since the tables in the examples hold a
column for the corresponding interval id. We symbolize the function O1-Update-SC as IJi,k →x Icurr, where the
superscript x denotes the O(1) version of Update-SC, i.e., O1-Update-SC. The result of →x is represented as ωi[U],
where the bracketed superscript U is the update-val of interval i. We write curr → Ii (no entity on the left) when
the current interval moves from Icurr to Ii, i.e., Ii becomes Icurr. We also symbolize the function deferred-update
as →N ϱi, which means the deferred update is applied on ϱi; the superscript N denotes the naive version of the
deferred update.
Scenario 1: The lender interval is not the only lender. In the naive-deferred-update (Algorithm 2), the deferred update happens under the assumption
that the lender l of the relation window ϱi is the only lender. This is generally not the case; there can be an interval
Iϱi,btw in the middle of ϱi that partially satisfies the capacity constraint of some interval Iϱi,aft that comes after Iϱi,btw
within ϱi.
Example: existence of multiple lenders within ϱi
To make the explanation more justifiable and apparent, we first derive the offline version of the spare capacity
calculation in 2 steps.
TABLE II: Offline calculation of spare capacity

        Interval-id
step |  1 |  2 |  3 |  4 | function applied
  1  |  4 |  1 |  1 | -3 | Tωi ← |ωi| − Σ CJi,k
  2  |  3 | -1 | -2 | -3 | ωi ← Tωi + min(0, ωi+1)
The above offline calculation of ωi gives a simple insight into how the intermediate intervals Iϱi,btw, precisely I2 and I3, partially satisfy the
capacity constraint of Iϱi,aft, i.e., I4. Assuming the deferred-update offline phase is applied to this set of intervals, we label
this relation window as ϱ1. In this exemplification, let us assume Icurr starts at I1 and I4 is the last interval. During the
online phase, for spare capacity maintenance, the slot-based O1-Update-SC is applied in each slot and the naive-deferred-update is applied when
necessary, i.e., either when an aperiodic task arrives or at the end of an interval. The table below shows the slot-by-slot update procedure.
TABLE III: Online phase using deferred update

        Interval-id
slot |  1 |  2 |  3 |  4     | function applied
  0  |  3 | -1 | -2 | -3     | curr → I1
  1  |  3 | -1 | -2 | -2[1]  | I4 →x Icurr
  2  |  3 | -1 | -2 | -1[2]  | I4 →x Icurr
  3  |  3 | -1 | -2 |  0[3]  | I4 →x Icurr
  4  |  2 | -1 | -2 |  0[3]  | ∅ →x Icurr
  5  |  2 |  2 |  1 |  0     | curr → I2 and →N ϱ1
However, this calculation is wrong for ω1 and ω2. Let us run the same scenario with the traditional algorithm, i.e., Algorithm
1, to understand what went wrong in TABLE III.
TABLE IV: Online phase using the traditional algorithm

        Interval-id
slot |  1 |  2 |  3 |  4 | function applied
  0  |  3 | -1 | -2 | -3 | curr → I1
  1  |  3 |  0 | -1 | -2 | I4 → Icurr
  2  |  2 |  1 |  0 | -1 | I4 → Icurr
  3  |  1 |  1 |  1 |  0 | I4 → Icurr
  4  |  0 |  1 |  1 |  0 | ∅ → Icurr
  5  |  0 |  1 |  1 |  0 | curr → I2
On running the naive-deferred-update we find that ω2 has 1 unit of spare capacity more and ω1 has 2 units of spare capacity more than
under the traditional algorithm. This additional spare capacity is an error, caused by an interval Iϱi,btw becoming positive
during the deferred update. The value above 0 of the interval Iϱi,btw in the relation window should not be propagated to
earlier intervals; in other words, the value accumulated after the interval Iϱi,btw becomes positive should be treated as an error with respect to the
earlier intervals.
Scenario 2: During the online phase, an interval in the middle of ϱi becomes positive before the deferred update. During the online
phase of the deferred update, irrespective of multiple lenders, there can be an interval Iϱi,btw that becomes positive during
O1-Update-SC and before deferred-update is applied. To explain this scenario, let us construct an example.
Example: Iϱi,btw becomes positive during O1-Update-SC and before deferred-update
To make this scenario simple and easy to understand, let us assume that the lender l is the only interval that
has lent to all the intervals in the relation window ϱ1 and that ϱ1 is the only relation window generated.
TABLE V: Offline calculation of spare capacity

        Interval-id
step |  1 |  2 |  3 |  4 | function applied
  1  |  7 | -1 | -1 | -3 | Tωi ← |ωi| − Σ CJi,k
  2  |  2 | -5 | -4 | -3 | ωi ← Tωi + min(0, ωi+1)
TABLE VI: Online phase using deferred update

        Interval-id
slot |  1 |  2     |  3     |  4 | function applied
  0  |  2 | -5     | -4     | -3 | curr → I1
  1  |  2 | -4[1]  | -4     | -3 | I2 →x Icurr
  2  |  2 | -3[2]  | -4     | -3 | I2 →x Icurr
  3  |  2 | -2[3]  | -4     | -3 | I2 →x Icurr
  4  |  2 | -1[4]  | -4     | -3 | I2 →x Icurr
  5  |  2 |  0[5]  | -4     | -3 | I2 →x Icurr
  6  |  1 |  1[5]  | -4     | -3 | I2 →x Icurr
  7  |  1 |  1[5]  | -3[1]  | -3 | I3 →x Icurr
  8  |  1 |  2     | -3     | -3 | curr → I2 and →N ϱ1
On execution of a job of I3 while in Icurr, Icurr should be negated by one, because I2 has already become positive, so the
neutralisation effect should stop at I2. The problem is due to the fact that I3 is not aware that I2 has already
become positive. This is an error that needs to be addressed in the naive-deferred-update.
The above-mentioned scenarios can occur in any combination.
3) Understanding the problem with the naive deferred update: In the above-mentioned scenarios there is an error in the
deferred update due to the assumption that the lender l is the only lender. The second assumption is that no interval within
ϱi becomes positive while executing in Icurr. These problems are due to the fact that the function →x is not aware
of its past intervals; this unawareness was not the case with the traditional approach. Making intervals aware of each
other is only possible by traversing backwards and updating the past intervals on each scheduling decision, which would defeat
the purpose of the deferred-update approach. To address the problem we need to rectify the error during the deferred update
→N ϱi. We call this error correction the ripple effect correction (REC), ε.
4) Formalizing the solution: To correct the error we now define ε in 2 steps, each rectifying the error due to one of the above-mentioned
scenarios, and then conglomerate them into one single solution. This conglomerated solution is
applied on the naive-deferred-update to turn it into a complete working deferred spare-capacity update solution.
Addressing Scenario 1. Scenario 1 leads to an error because some intervals which were partial lenders become
positive during deferred-update while the Ui of later intervals still propagates backwards. To address this scenario we define ε
as

εc ← max(0, [ωc − Σ_{j=bi−1}^{c+1} εj] + Σ_{k=bi}^{c+1} Uk)    (3)

so the spare capacity of the interval that needs to get updated becomes

ωc ← [ωc − Σ_{j=bi−1}^{c+1} εj] + Σ_{k=bi}^{c+1} Uk    (4)

where
• c is the interval whose ω is currently getting updated,
• j ranges from the interval before lent-till, bi − 1, to the interval just after c within ϱi,
• k ranges from lent-till, bi, to the interval just after c.
When we probe equations (3) and (4) more closely, both are variants of the same formula applied iteratively over ϱi.
The REC in this scenario is just the summation of all ωi ≥ 0.
Addressing Scenario 2. To understand the solution, let us first understand the cause of the error. The error is due to the
assumption that during the online phase Iϱi,btw will not become positive before the deferred update is applied, but this
assumption is wrong. An interval in between, Iϱi,btw, can become positive, satisfying the borrowed values of Iϱi,aft
before the deferred update, because some job Ji,k,curr belonging to Iϱi,btw runs and satisfies the constraints. The intuition of the
solution is that the spare capacity of the intervals Iϱi,aft that execute in Icurr should not propagate their U beyond Iϱi,btw.
To make this work, the U accumulated after Iϱi,btw should be turned into an error for Iϱi,bfr. This inference leads to

εc ← [ωc,bfr ≥ 0] ? Σ_{k=bi}^{c+1} Uk : Σ_{j=bi−1}^{c+1} εj    (5)

where
• c is the interval whose ω is currently getting updated,
• j ranges from the interval before lent-till, bi − 1, to the interval just after c within ϱi,
• k ranges from lent-till, bi, to the interval just after c,
• ωc,bfr is the spare capacity of interval c before the deferred update is applied to it.
In words, the interval that is currently getting updated is first checked for whether its spare capacity is greater than or
equal to zero; if yes, then the update-val from the later intervals, Iϱi,aft, becomes the current REC value, εc; otherwise we proceed
with equations (3) and (4). If the spare capacity of the interval that is getting updated, Iϱi,btw+x, is found positive before the
update, then the REC value collected previously is applied on the updating interval, and the update-val from all the later
intervals becomes the REC for the earlier intervals, Iϱi,bfr.
Addressing the current interval Icurr. The current interval Icurr need not be updated with the update-val U; rather, it just needs
to be corrected with the REC, ε, collected from the previous intervals, i.e.,

Icurr.ω ← Icurr.ω − Σ_{j=bi−1}^{curr+1} εj    (6)

where j ranges from the interval before lent-till, bi − 1, to the interval just after Icurr within ϱi.
Fig. 4: Interval between the ϱi becoming positive before deferred update applied.
5) Complete working solution: Conglomerating the above 3 solutions into one single algorithm leads to the complete
deferred-update algorithm, which is as follows.
Algorithm 3: deferred-update
1 Function deferred-update(I, Icurr);
Input : I: Interval list, Icurr: Current interval
Result: spare capacity of the intervals in ϱi is updated with right data
2 lentTilltemp ← Icurr.b;
3 if !lentTilltemp then
4 return;
5 end
6 updateV al ← 0;
7 ε ← 0;
8 while lentTilltemp ≠ Icurr do
9 ωbefore ← lentTilltemp.ω ◃ Make a copy of ω before updating;
10 lentTilltemp.ω ← lentTilltemp.ω + updateVal − ε ◃ Update the deferred spare capacity, corrected by the REC (equation (4));
11 if ωbefore < 0 and lentTilltemp ≠ Icurr.b then
12 ε ← ε + max(0, lentTilltemp.ω) ◃ Positive overshoot becomes REC (equation (3));
13 end
14 else
15 ε ← updateVal ◃ Interval already positive: accumulated update-val becomes REC (equation (5));
16 end
17 updateVal ← updateVal + lentTilltemp.U ◃ Accumulate previous intervals' update-val within ϱi;
18 lentTilltemp.U ← 0;
19 lentTilltemp ← lentTilltemp.prev;
20 end
21 Icurr.ω ← Icurr.ω − ε ◃ Correct the current interval with the collected REC (equation (6));
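The deferred update with the ripple effect correction of equations (3) to (6) can be sketched in C as follows. It follows our reading of Algorithm 3 and reuses the interval structure sketched earlier; it is offered as an illustration under those assumptions, not as the authoritative implementation.

/* Deferred update with ripple effect correction (REC).
 * Walks backwards from Icurr.b to Icurr, replaying the accumulated
 * update-vals and turning any positive overshoot into an error term. */
static void deferred_update(struct interval *curr)
{
    struct interval *it = curr->b;
    long update_val = 0;    /* accumulated update-vals, equation (4)            */
    long rec        = 0;    /* ripple effect correction, equations (3) and (5)  */

    if (it == NULL)
        return;             /* interval is not part of a relation window        */

    while (it != curr) {
        long omega_before = it->omega;

        it->omega += update_val - rec;             /* equation (4)              */

        if (omega_before < 0 && it != curr->b) {
            if (it->omega > 0)
                rec += it->omega;                  /* equation (3): overshoot   */
        } else if (omega_before >= 0) {
            rec = update_val;                      /* equation (5): already positive */
        }

        update_val += it->u;                       /* take over the deferred slots   */
        it->u = 0;
        it = it->prev;
    }

    curr->omega -= rec;                            /* equation (6)              */
}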
B. Removing Slots
The previous section, deferred update of spare capacity almost removed the complexity involved due to spare capacity
update per slot, by deferring it. In this section we would remove the notion of slot, making the slot level spare capacity
shifting into a time based capacity shifting. To remove the notion of slots we need to make the following alterations
along with the existing one.
• We just need to make the tasks continuous, i.e., we need to remove the additional computation mentioned in
equation (2) and compute the start and end of intervals with the notion of continuous time.
• Decision timer in slot shifting gets triggered for every slot, here on needs to be triggered at the end of each
interval.i.e.,
λexpiry ← Icurr.end (7)
The timer expiry is no more constant as it was in slot shifting.
• we need to alter the slot based O1-Update-SC into capacity based spare capacity update as mentioned in Algorithm
4.
Algorithm 4: Capacity Based O1-Update-SC
1 Function o1-update-sc(Ji,k,curr, I);
Input : Ji,k,curr: Previous selected job, I: Interval table
Output: ∅
Result: Checks and Updates only spare capacity of job and current interval
2 if Ji,k,curr.I == Icurr then
3 return;
4 end
5 ◃ check for the first invocation and initialize sched-time;
6 if first-time then
7 sched-time ← 0;
8 first-time ← 0;
9 end
10 run-time ← [curr-time] − [sched-time] ◃ compute the difference between the previous and current schedule time;
11 sched-time ← curr-time;
12 tsk-ω ← Ji,k,curr.I.ω ◃ take a local copy of the task's spare capacity before updating;
13 if Ji,k,curr.I.ω < 0 then
14 ◃ here update update-val only with the required value;
15 Ji,k,curr.I.ω ← Ji,k,curr.I.ω + run-time;
16 if Ji,k,curr.I.ω > 0 then
17 Ji,k,curr.I.u ← Ji,k,curr.I.u + [[run-time] − [Ji,k,curr.I.ω]];
18 end
19 else
20 Ji,k,curr.I.u ← Ji,k,curr.I.u + run-time;
21 end
22 end
23 else
24 Ji,k,curr.I.ω ← Ji,k,curr.I.ω + run-time;
25 end
26 ◃ check that the job's lender and the current interval's lender are the same and that the job's interval was negative before updating;
27 if Ji,k,curr.I.l == Icurr.l and tsk-ω < 0 then
28 return;
29 end
30 Icurr.ω ← Icurr.ω − run-time;
In words: the notion of the slot is removed and a task can run in Icurr continuously, even beyond the ω that
was borrowed, so we need to update the update-val of the current job's interval, uJi,k,curr, only by the required amount; the remainder should be
avoided. Hence the following equations:

Ji,k,curr.I.ω ← Ji,k,curr.I.ω + R    (8)
Ji,k,curr.I.u ← [Ji,k,curr.I.ω > 0] ? (Ji,k,curr.I.u + [R − Ji,k,curr.I.ω]) : (Ji,k,curr.I.u + R)    (9)

where R is the runtime between the previous schedule and the current schedule.
With the mentioned algorithm (Algorithm 4), the natural decision process happens, except for the trigger at the end of
the interval. A scheduling decision happens in the following scenarios:
• when a job arrives,
• when a job exits,
• at the end of the interval, i.e., when the timer λ expires.
We symbolize the capacity-based O1-Update-SC as →C, with the same notion as →x, but rather than the slot-based
O1-Update-SC, Algorithm 4 is applied.
C. Guarantee Algorithm
In this section we derive a new way of guaranteeing a firm aperiodic task, which takes constant time per
interval and stops the traversal once the computation constraint Γ.C of the aperiodic task is satisfied, thus solving
the problems mentioned in Section 3, Problem 3.
Algorithm 5: New Guarantee Algorithm
1 Function Guarantee-algorithm(I, Iacc, Γ);
Input : I: list of intervals;
Iacc: reference of the interval at which Γ deadline falls;
Γ : The aperiodic job that got accepted
Output: ∅
Result: Γ is guaranteed
2 if Iacc.end > Γ.d then
3 INew = split-intr(Iacc, Γ.d);
4 end
5
6 if INew then
7 Iiter ← INew;
8 end
9 else
10 Iiter ← Iacc
11 end
12 ∆ ← Γ.C;
13 while ∆ do
14 if Iiter.ω < 0 then
15 Iiter.ω ← Iiter.ω − ∆
16 end
17 else if Iiter.ω > 0 then
18 if Iiter.ω ≥ ∆ then
19 Iiter.ω ← Iiter.ω − ∆;
20 ∆ ← 0;
21 end
22 else
23 ∆ ← ∆ − Iiter.ω;
24 Iiter.ω ← −∆;
25 end
26 end
27 Iiter ← Iiter.prev;
28 end
29 Function split-intr(Iright,sp);
Input : Iright: reference of the interval at which Gamma falls;
sp: split point at which separation should happen
Output: Ileft: New interval that was created and added to list
30 Ileft.start ← Iright.start;
31 Ileft.end ← sp;
32 Ileft.sc ← Ileft.end − Ileft.start;
33 Iright.start ← sp;
34 Iright.sc ← Iright.sc − Ileft.sc;
35 Ileft.sc ← Ileft.sc + min(0, Iright.sc);
36 insert-before(Iright, Ileft) ◃ list function that just inserts the node before first parameter;
37 return Ileft
In words, the algorithm uses a delta approach: ∆ initially holds Γ.C, and the intervals are updated until
∆ becomes 0. The negation of ∆ happens in the steps below, which take place as we traverse backwards
from either Inew (in case a new interval was created) or Iacc, i.e., the interval in which the deadline falls.
• When the interval's ω is positive and less than the required ∆, then reduce ∆ by ω and make ω the negative
of the remaining ∆.
• When the interval's ω is positive and greater than or equal to the required ∆, then just reduce ω by ∆ and set ∆ to 0.
• Finally, when the interval's ω is negative, simply subtract ∆ from ω, i.e., ω ← ω − ∆.
In the case of new interval creation we start from Inew, because the ω of Iacc is already accounted for by the split-intr
function.
The idea behind this computation is to reuse the spare capacity calculation of the offline phase and update the
new spare capacity with the same ideology, but from a different perspective.
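The ∆ bookkeeping of the new guarantee algorithm condenses into a few lines of C. The sketch below covers only the backward loop (the splitting of Iacc is omitted) and reuses the interval structure sketched earlier; it is an illustrative reading of Algorithm 5, not a reference implementation.

/* New guarantee step: charge the accepted aperiodic demand 'demand'
 * against the spare capacities, walking backwards from the interval
 * in which the deadline falls until the demand is fully covered. */
static void guarantee_job(struct interval *start, long demand)
{
    long delta = demand;                 /* remaining demand, Gamma.C           */

    for (struct interval *it = start; it != NULL && delta > 0; it = it->prev) {
        if (it->omega >= delta) {        /* enough spare capacity right here    */
            it->omega -= delta;
            delta = 0;
        } else if (it->omega > 0) {      /* partially covered: borrow the rest  */
            long covered = it->omega;
            it->omega -= delta;          /* becomes -(delta - covered)          */
            delta -= covered;
        } else {                         /* already borrowing: pass demand back */
            it->omega -= delta;
        }
    }
}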
D. Putting it all together
This section shows how the capacity-shifting algorithm is incorporated into the online phase. The algorithm
removes the notion of the slot s and the slot count scnt.
Algorithm 6: Capacity Shifting
1 Function schedule(Ji,k,Curr);
Input : Ji,k,Curr: job previously selected
Output: Ji,k,nxt: next selected job
2 O1-Update-SC(Ji,k,Curr, I) ◃ update the intervals with the run time;
3 if ιA ̸= ∅ then
4 deferred-update(I, Icurr) ◃ Deferred update happens;
5 Test-aperiodic(ιA) ◃ Test and try Guarantee Aperiodic’s if any;
6 end
7 update-time() ◃ Here deferred update happens if needed;
8 Ji,k,nxt ←selection-fn() ◃ Get next eligible job;
9 return Ji,k,nxt;
10 Function o1-update-sc(Ji,k,curr, I);
Input : Ji,k,curr: Previous selected job, I: Interval table
Output: ∅
Result: Checks and Updates only spare capacity of job and current interval
11 if Ji,k,curr.interval == Icurr then
12 return;
13 end
14 ◃ check for the first invocation and initialize sched-time;
15 if first-time then
16 sched-time ← 0;
17 first-time ← 0;
18 end
19 run-time ← [curr-time] − [sched-time] ◃ compute the difference between the previous and current schedule time;
20 sched-time ← curr-time;
21 tsk-ω ← Ji,k,curr.interval.ω ◃ take a local copy of the task's spare capacity before updating;
22 if Ji,k,curr.interval.ω < 0 then
23 ◃ here update update-val only with the required value;
24 Ji,k,curr.interval.ω ← Ji,k,curr.interval.ω + run-time;
25 if Ji,k,curr.interval.ω > 0 then
26 Ji,k,curr.interval.u ← Ji,k,curr.interval.u + [[run-time] − [Ji,k,curr.interval.ω]];
27 end
28 else
29 Ji,k,curr.interval.u ← Ji,k,curr.interval.u + run-time;
30 end
31 end
32 else
33 Ji,k,curr.interval.ω ← Ji,k,curr.interval.ω + run-time;
34 end
1 ◃ check that the job's lender and the current interval's lender are the same and that the job's interval was negative before updating;
2 if Ji,k,curr.interval.l == Icurr.l and tsk-ω < 0 then
3 return;
4 end
5 Icurr.ω ← Icurr.ω − run-time;
6 Function update-time();
Input : ∅
Output: ∅
Result: checks and moves Icurr to Icurr.nxt along with deferred-update
7 if get-curr-time() ≥ Icurr.end then
8 deferred-update(I,Icurr);
9 Icurr ← Icurr.nxt;
10 end
11 Function Test-aperiodic(ιA, I);
Input : ιA: Unconcluded firm aperiodic list;
I: list of intervals
Output: ∅
Result: Job in ιA is moved to either ιG or ιS
12 while ιA ̸= ∅ do
13 Γ ← ιA.dequeue();
14 if Iacc ← Acceptance-test(Γ, I) ̸= ∅ then
15 Guarantee-job(I, Iacc, Γ);
16 ιG.queue(Γ)
17 end
18 else
19 ιS.queue(Γ)
20 end
21 end
22 Function deferred-update(I, Icurr);
Input : I: Interval list, Icurr: Current interval
Result: spare capacity of the interval within ϱi is updated with right data
23 lentTilltemp ← Icurr.b;
24 if !lentTilltemp then
25 return;
26 end
27 updateV al ← 0;
28 ε ← 0;
29 while lentTilltemp ≠ Icurr do
30 ωbefore ← lentTilltemp.ω ◃ Make a copy of ω before updating;
31 lentTilltemp.ω ← lentTilltemp.ω + updateVal − ε ◃ Update the deferred spare capacity, corrected by the REC (equation (4));
32 if ωbefore < 0 and lentTilltemp ≠ Icurr.b then
33 ε ← ε + max(0, lentTilltemp.ω) ◃ Positive overshoot becomes REC (equation (3));
34 end
35 else
36 ε ← updateVal ◃ Interval already positive: accumulated update-val becomes REC (equation (5));
37 end
38 updateVal ← updateVal + lentTilltemp.U ◃ Accumulate previous intervals' update-val within ϱi;
39 lentTilltemp.U ← 0;
40 lentTilltemp ← lentTilltemp.prev;
41 end
42 Icurr.ω ← Icurr.ω − ε ◃ Correct the current interval with the collected REC (equation (6));
1 Function Acceptance-test(Γ, I);
Input : Γ: Unconcluded firm aperiodic job;
I: list of intervals
Output: Iacc: interval in which task was accepted or ∅
2 Itmp ← Icurr;
3 ωtmp ← 0;
4 while Itmp.end ≤ Γ.d do
5 if Itmp.ω > 0 then
6 ωtmp ← Itmp.ω + ωtmp;
7 end
8 Itmp ← Itmp.nxt;
9 end
10 if ωtmp ≥ Γ.C then
11 return Itmp.prev;
12 end
13 else
14 return ∅;
15 end
16 /* New Version of guarantee algorithm;
17 Notice that algorithm has only one loop and stops at a point ∆ satisfies;
18 */;
19 Function Guarantee-job(I, Iacc, Γ);
Input : I: list of intervals;
Iacc: reference of the interval at which Γ deadline falls;
Γ : The aperiodic job that got accepted
Output: ∅
Result: Γ is guaranteed
20 if Iacc.end > Γ.d then
21 INew = split-intr(Iacc, Γ.d);
22 end
23
24 if INew then
25 Iiter ← INew;
26 end
27 else
28 Iiter ← Iacc
29 end
30 ∆ ← Γ.C;
31 while ∆ do
32 if Iiter.ω < 0 then
33 Iiter.ω ← Iiter.ω − ∆
34 end
35 else if Iiter.ω > 0 then
36 if Iiter.ω ≥ ∆ then
37 Iiter.ω ← Iiter.ω − ∆;
38 ∆ ← 0;
39 end
40 else
41 ∆ ← ∆ − Iiter.ω;
42 Iiter.ω ← −∆;
43 end
44 end
45 Iiter ← Iiter.prev;
46 end
1 Function split-intr(Iright,sp);
Input : Iright: reference of the interval at which Gamma falls;
sp: split point at which separation should happen
Output: Ileft: New interval that was created and added to list
2 Ileft.start ← Iright.start;
3 Ileft.end ← sp;
4 Ileft.sc ← Ileft.end − Ileft.start;
5 Iright.start ← sp;
6 Iright.sc ← Iright.sc − Ileft.sc;
7 Ileft.sc ← Ileft.sc + min(0, Iright.sc);
8 insert-before(Iright, Ileft) ◃ list function that just inserts the node before first parameter;
9 return Ileft
Algorithm 6 is just conglomeration of all the explanations in this paper making the reader easy to understand the
whole flow of online phase. When we notice in scenarios like aperiodic arrival deferred-update might happen twice
where the 2nd iteration would be a simply dumb loop, which could be avoided by simple conditional flag checks, but
we haven’t made it part of this algorithm just to keep the gist of the algorithm simple and understandable.
When the whole algorithm is observed, the complexity of scheduler within interval due to the notion of updating
spare capacity is made O(1), but the interval movement still holds a complexity of O(n), where n is the number of
intervals in the ϱi. This was the complexity of the traditional algorithm per slot, which is now reduced to per interval.
Guarantee-job. In the slot shifting algorithm, the recalculation of spare capacity per interval during the guarantee algorithm has complexity O(n), where n is the number of jobs in that interval. The proposed algorithm reduces this to O(1) per interval; moreover, the backward traversal is cut short from “Iacc to Icurr” to “Iacc to the point where the computational demand of the aperiodic job is satisfied”.
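The acceptance and guarantee steps can be sketched in the same style. The code below assumes the Interval record from the previous sketch; the interval split at the deadline is omitted, and the names are again hypothetical rather than the actual implementation. It illustrates the single backward walk that stops as soon as the remaining demand ∆ is covered, without ever iterating over the jobs of an interval.

def acceptance_test(i_curr, deadline, demand):
    # Sum the positive spare capacities of the intervals ending no later than the deadline.
    node, last, available = i_curr, None, 0
    while node is not None and node.end <= deadline:
        available += max(0, node.omega)
        last = node
        node = node.nxt
    return last if available >= demand else None

def guarantee_job(i_acc, demand):
    # Charge the accepted demand backwards, one interval at a time, until it is covered.
    delta, node = demand, i_acc
    while delta > 0 and node is not None:
        if node.omega < 0:
            node.omega -= delta              # a negative interval forwards the full remaining demand
        elif node.omega >= delta:
            node.omega -= delta
            delta = 0
        else:
            delta -= node.omega              # consume this interval's positive capacity first
            node.omega = -delta              # the remainder is borrowed from earlier intervals
        node = node.prev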
V. Experimental Results
Above, we proposed a new version of the slot shifting scheduling algorithm and argued that it often makes better use of the available computing resources than pure slot shifting. We now experimentally evaluate the proposed algorithm and compare its performance with that of slot shifting.
We have made use of the pseudo-random task set generators by Ripoll et al. [11] and UUniFast [12], both available in the SimSo framework, to evaluate the algorithm. Workloads produced by these generators have been widely used for experimentally evaluating real-time scheduling algorithms and have been available to the research community for several years. We believe that using these generators provides context for our simulation results and allows them to be compared with results obtained by other researchers.
UUniFast generator: It generates task set configurations with a fixed number of tasks (N), a specified total utilization (U), and a number of task sets to generate (nsets). UUniFast produces independent tasks with randomly generated, unbiased utilization factors: the utilization of each task is sampled, and only configurations with the correct total utilization are kept. The task count is drawn from [5; 20], the total utilization is restricted to [10; 90] percent, and nsets is kept constant at 1.
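For reference, a minimal Python sketch of the UUniFast sampling scheme of Bini and Buttazzo [12] is given below; it only illustrates the generator we rely on and is not the SimSo code.

import random

def uunifast(n_tasks, total_util):
    # Draw n_tasks unbiased utilizations whose sum is total_util.
    utilizations, sum_u = [], total_util
    for i in range(1, n_tasks):
        next_sum_u = sum_u * random.random() ** (1.0 / (n_tasks - i))
        utilizations.append(sum_u - next_sum_u)
        sum_u = next_sum_u
    utilizations.append(sum_u)
    return utilizations

# One configuration: task count drawn from [5, 20], total utilization from [0.10, 0.90].
print(uunifast(random.randint(5, 20), random.uniform(0.10, 0.90)))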
Ripoll generator: It generates task set configurations based on the number of task sets to generate (nsets), the maximum computation time of a task (compute), the maximum slack time (deadline), the maximum delay after the deadline (period), and the total utilization each set should reach (target-util). The first three parameters are used to generate each task of the task set, while the target processor utilization determines the number of tasks. In our setup, the utilization of the system is uniformly drawn from [10; 90] percent, the computation times are uniformly chosen from the interval [1; 20], the deadlines from the interval [2; 170], and the periods from the interval [3; 650].
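A simplified sketch of this kind of generation, under the parameter ranges above and with hypothetical helper names (the actual generator is the one by Ripoll et al. [11] shipped with SimSo), could look as follows.

import random

def ripoll_like_taskset(max_compute=20, max_slack=170, max_delay=650, target_util=0.5):
    # Keep adding tasks until the target utilization is reached.
    tasks, util = [], 0.0
    while util < target_util:
        c = random.randint(1, max_compute)
        d = c + random.randint(1, max_slack)   # deadline = computation time plus slack
        t = d + random.randint(1, max_delay)   # period = deadline plus extra delay
        if util + c / t > target_util:
            break
        tasks.append((c, d, t))
        util += c / t
    return tasks

print(ripoll_like_taskset(target_util=random.uniform(0.10, 0.90)))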
Furthermore, the range of each input parameter is itself randomized by a simple random generator. The output of the generator is passed as input to the offline phase of slot shifting, which derives the job and interval tables from the task set so that it can be used by the slot shifting algorithm. The same framework is used for the aperiodic task set, but the earliest start time is additionally randomized over the hyperperiod.
Fig. 5: Offline Task Generator and Slot shifting offline table generator framework.
To obtain a fair benchmark, the online and offline phases of both the existing slot shifting and the proposed algorithm were implemented in the SimSo [18, 19] simulation framework. A generic object-oriented framework was created within SimSo so that the implementation can easily accommodate future changes to slot shifting.
A maximum of up to 5000 jobs was used to test correctness and to benchmark the computational needs of the traditional and the new algorithm. The slot length was kept constant at 5 milliseconds for slot shifting.
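For orientation, a scheduler in SimSo is a Python class that plugs into the simulator through a small set of callbacks. The skeleton below follows SimSo's documented scheduler interface; the slot-shifting-specific parts (interval tables, spare capacities, deferred updates) are only indicated by comments and are not the actual implementation from [7].

from simso.core import Scheduler

class CapacityShiftingSketch(Scheduler):
    # Skeleton of an EDF-based scheduler in SimSo; interval handling is only sketched.

    def init(self):
        self.ready_list = []
        # The precomputed job and interval tables (offline phase output) would be loaded here.

    def on_activate(self, job):
        self.ready_list.append(job)
        job.cpu.resched()            # ask SimSo to invoke schedule() again

    def on_terminated(self, job):
        self.ready_list.remove(job)
        job.cpu.resched()

    def schedule(self, cpu):
        # Plain EDF selection; capacity shifting would additionally consult the current
        # interval and defer the spare-capacity update to interval boundaries.
        job = min(self.ready_list, key=lambda j: j.absolute_deadline, default=None)
        return (job, cpu)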
Fig. 6: Performance improvement in number of scheduling decisions with Capacity shifting against slot shifting.
The vertical axis shows the number of scheduling decisions taken: on average, 2604 jobs lead to about 4400 slot-level decisions in slot shifting, which are compared against capacity shifting. We can infer that capacity shifting takes on average 45 percent fewer scheduling decisions than slot shifting; in the best cases the improvement goes up to 60 percent.
Guarantee Algorithm. To benchmark the guarantee algorithm of slot shifting against that of capacity shifting, we exploited the SimSo feature of assigning a computation time to individual pieces of functionality. To make the comparison visible, we scaled the time to iterate over one job within an interval when applying equation (1) during the guarantee process to 1 nanosecond and accounted for other miscellaneous computations as 0.5 nanoseconds.
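In other words, the benchmark models the per-interval cost of the two guarantee algorithms as follows (our reading of the setup above, expressed as a small Python helper):

NS_PER_JOB = 1.0   # time to iterate over one job when applying equation (1)
NS_MISC = 0.5      # remaining per-interval bookkeeping

def guarantee_cost_slot_shifting(jobs_in_interval):
    # Traditional recomputation: cost grows with the number of jobs in the interval.
    return jobs_in_interval * NS_PER_JOB + NS_MISC

def guarantee_cost_capacity_shifting(jobs_in_interval):
    # New guarantee step: independent of the number of jobs in the interval.
    return NS_MISC

print(guarantee_cost_slot_shifting(10), guarantee_cost_capacity_shifting(10))  # 10.5 0.5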
Fig. 7 below shows the constant computation time of the new guarantee algorithm, which is as small as the miscellaneous computation of the traditional slot shifting algorithm. Conceptually, the new guarantee algorithm updates the spare capacities of the intervals at a small constant cost compared with slot shifting; its computation time is not affected by the number of jobs an interval owns.
The vertical axis in Figure 7 shows the computation time per interval and the horizontal axis shows the number of jobs in the interval. We can infer a constant computation time per interval of 0.5 nanoseconds for the new guarantee algorithm, against a computation time that grows proportionally with the number of jobs for the slot shifting algorithm.
Fig. 7: Constant time taken by new Guarantee algorithm against slot-shifting Guarantee algorithm.
VI. Conclusion and future work
This paper presented three novel solutions, namely the deferred update, the slotless notion, and the O(1) guarantee algorithm. Solutions 1 and 3 can be adopted in other scenarios independently, whereas solution 2 depends on solution 1; in other words, solution 2 is an extension of solution 1. The conglomeration of all three solutions is named capacity shifting.
The following are further observations and areas to explore with respect to improving slot shifting or the proposed capacity shifting algorithm.
• Compute when required. We deferred the computation of spare capacity to the boundaries of intervals; this notion could be extended further to an approach that updates the spare capacity only when it is actually needed.
• Capacity shifting could be extended to multi-core platforms.
• The presented deferred spare capacity update is executed even when no update is necessary. This could be avoided by small conditional checks in the implementation.
• The remaining challenges:
– A small drift of the system clock against the computed slot clock can cause the whole system to fail.
– Slot shifting was implemented on real-time platforms such as LITMUSRT and in simulation environments such as SimSo. The major problem was the preparation and injection of the precomputed job and interval tables into the system: slot shifting needs jobs and intervals to be aware of each other, and creating these mutual references for the offline table is a strenuous effort. Getting such data into the system requires separate preparation software, and implementing both the offline and online phases of this preparation software takes an effort comparable to implementing the slot shifting algorithm itself.
– The offline table can become very large. In some cases a non-harmonic task set of 10 tasks produced interval tables with thousands of entries for a single hyperperiod, which we believe is a normal situation in real environments.
Acknowledgment
I would like to personally thank John Gamboa for standing by my side while deriving the new guarantee algorithm. I would like to thank Mitra Nasri for being patient whenever I came up with deviating implementation ideas and for providing valuable suggestions when needed. Finally, I would like to thank my professor Gerhard Fohler for devising the novel slot shifting algorithm and for giving me the opportunity to implement a new version of it.
References
[1] Gerhard Fohler. Flexibility in Statically Scheduled Hard Real-Time Systems. PhD dissertation on the slot shifting algorithm.
[2] J. P. Lehoczky, L. Sha, and J. K. Strosnider. Enhanced aperiodic responsiveness in hard real-time environments. In Proceedings of the
IEEE Real-Time Systems Symposium, December 1987
[3] J. K. Strosnider, J. P. Lehoczky, and L. Sha. The deferrable server algorithm for enhancing aperiodic responsiveness in hard-real-time
environments. IEEE Transactions on Computers, 44(1), January 1995.
[4] J. P. Lehoczky and S. Ramos-Thuel. An optimal algorithm for scheduling soft-aperiodic tasks in fixed-priority preemptive systems. In
Proceedings of the IEEE Real-Time Systems Symposium, December 1992.
[5] M. Spuri and G. C. Buttazzo. Efficient aperiodic service under earliest deadline scheduling. In Proceedings of the IEEE Real-Time
Systems Symposium, December 1994.
[6] C. W. Mercer, S. Savage, and H. Tokuda. Processor capacity reserves for multimedia operating systems. Technical Report CMU-CS-93-
157, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, May 1993.
[7] Implementation of Capacity shifting and Slot shifting in SimSo.
https://github.com/gokulvasan/CapacityShifting
[8] Giorgio C. Buttazzo. Rate monotonic vs. EDF: judgment day. Real-Time Systems, 29(1):5–26, January 2005.
[9] Gowri Sankar Ramachandran. Integrating enhanced slot-shifting in µC/OS-II. School of Innovation, Design and Engineering, Mälardalen University, Sweden.
[10] Giorgio C. Buttazzo. Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications. Kluwer Academic Publishers, Norwell, MA, USA, 1997.
[11] I. Ripoll, A. Crespo, and A. K. Mok. Improvement in feasibility testing for real-time tasks. Real-Time Systems: The International
Journal of Time-Critical Computing, 11:19–39, 1996.
[12] E. Bini and G. C. Buttazzo. Measuring the performance of schedulability tests. Real-Time Systems, 30(1-2):129–154, 2005.
[13] G. C. Buttazzo and E. Bini. Optimal dimensioning of a constant bandwidth server. In Proc. of the IEEE 27th Real-Time Systems
Symposium (RTSS 2006), Rio de Janeiro, Brasil, December 6-8, 2006.
[14] Gerhard Fohler. Joint scheduling of distributed complex periodic and hard aperiodic tasks in statically scheduled systems. In Proceedings
of the 16th Real-Time Systems Symposium, Pisa, Italy, December 1995.
[15] G. Fohler. Analyzing a Pre Run-Time Scheduling Algorithm and Precedence Graphs. Research Report, Institut für Technische Informatik, Technische Universität Wien, Vienna, Austria, September 1992.
[16] C. Lin and S. A. Brandt. Improving soft real-time performance through better slack management. In Proc. of the IEEE Real-Time
Systems Symposium (RTSS 2005), Miami, Florida, USA, December 5, 2005.
[17] R. I. Davis, K. W. Tindell, and A. Burns. Scheduling slack time in fixed priority pre-emptive systems. In Proceedings of the IEEE
Real-Time Systems Symposium, December 1993.
[18] Maxime Chéramy, Pierre-Emmanuel Hladik, and Anne-Marie Déplanche. SimSo: A Simulation Tool to Evaluate Real-Time Multiprocessor Scheduling Algorithms. 5th International Workshop on Analysis Tools and Methodologies for Embedded and Real-time Systems (WATERS), Madrid, Spain, July 2014.
[19] SimSo simulator framework official website.
http://projects.laas.fr/simso
[20] C. W. Mercer, S. Savage, and H. Tokuda. Processor capacity reserves for multimedia operating systems. Technical Report CMU-CS-93-157, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, May 1993.
[21] M. Spuri and G. C. Buttazzo. Efficient aperiodic service under earliest deadline scheduling. In Proceedings of the IEEE Real-Time
Systems Symposium, December 1994.
[22] M. Spuri and G. C. Buttazzo. Scheduling aperiodic tasks in dynamic priority systems. Journal of Real-Time Systems, 10(2), 1996.
[23] Fabian Scheler and Wolfgang Schroeder-Preikschat. Time-triggered vs. event-triggered: A matter of configuration? Model-Based Testing,
ITGA FA 6.2 Workshop on and GI/ITG Workshop on Non-Functional Properties of Embedded Systems, 2006 13th GI/ITG Conference
-Measuring, Modelling and Evaluation of Computer and Communication (MMB Workshop), pages 1-6, March 2006.
[24] Sanjoy K. Baruah, Aloysius K. Mok, and Louis E. Rosier. Pre-emptively scheduling hard-real-time sporadic tasks on one processor. In
Proc. Real-Time Systems Symposium (RTSS), pages 182-190, 1990.
[25] Damir Isovic. Flexible Scheduling for Media Processing in Resource Constrained Real-Time Systems. PhD thesis, Malardalen University,
Sweden, November 2004.
[26] Damir Isovic and Gerhard Fohler. Handling mixed task sets in combined offline and online scheduled real-time systems. Real-Time Systems Journal, 43(3):296-325, December 2009.
[27] Hermann Kopetz. Event-triggered versus time-triggered real-time systems. In Proceedings of the International Workshop on Operating
Systems of the 90s and Beyond, pages 87-101, London, UK, 1991. Springer-Verlag.
[28] Martijn M. H. P. van den Heuvel, Mike Holenderski, Reinder J. Bril, and Johan J.Lukkien. Constant-bandwidth supply for priority
processing. In IEEE Transactions on Consumer Electronics (TCE), volume 57, pages 873-881, May 2011.
[29] Aeronautical Radio, Incorporated (ARINC), https://en.wikipedia.org/wiki/ARINC

More Related Content

PPTX
Hard versus Soft real time system
PPTX
Real time Scheduling in Operating System for Msc CS
PPTX
(Slides) Task scheduling algorithm for multicore processor system for minimiz...
PDF
Comparision of different Round Robin Scheduling Algorithm using Dynamic Time ...
PDF
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A...
PPTX
Reference model of real time system
PDF
STATISTICAL APPROACH TO DETERMINE MOST EFFICIENT VALUE FOR TIME QUANTUM IN RO...
PDF
10. resource management
Hard versus Soft real time system
Real time Scheduling in Operating System for Msc CS
(Slides) Task scheduling algorithm for multicore processor system for minimiz...
Comparision of different Round Robin Scheduling Algorithm using Dynamic Time ...
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A...
Reference model of real time system
STATISTICAL APPROACH TO DETERMINE MOST EFFICIENT VALUE FOR TIME QUANTUM IN RO...
10. resource management

What's hot (19)

PPTX
Cpu scheduling
PPTX
Reference Model of Real Time System
PPT
task scheduling in cloud datacentre using genetic algorithm
PPTX
Operating system
PPTX
Approaches to real time scheduling
PPTX
Cpu scheduling algorithm on windows
PPTX
Functional Parameter & Scheduling Hierarchy | Real Time System
PPT
Real time scheduling - basic concepts
PPT
Process Management-Process Migration
PPTX
Load Balancing In Distributed Computing
PPTX
Adaptive Replication for Elastic Data Stream Processing
PDF
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTING
PDF
Task Scheduling in Grid Computing.
PPSX
Survey of task scheduler
PPTX
Optimization of Continuous Queries in Federated Database and Stream Processin...
PPTX
Cs 704 d aos-resource&processmanagement
PPTX
INTERRUPT LATENCY AND RESPONSE OF THE TASK
PPT
334839757 task-assignment
PDF
Learning scheduler parameters for adaptive preemption
Cpu scheduling
Reference Model of Real Time System
task scheduling in cloud datacentre using genetic algorithm
Operating system
Approaches to real time scheduling
Cpu scheduling algorithm on windows
Functional Parameter & Scheduling Hierarchy | Real Time System
Real time scheduling - basic concepts
Process Management-Process Migration
Load Balancing In Distributed Computing
Adaptive Replication for Elastic Data Stream Processing
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTING
Task Scheduling in Grid Computing.
Survey of task scheduler
Optimization of Continuous Queries in Federated Database and Stream Processin...
Cs 704 d aos-resource&processmanagement
INTERRUPT LATENCY AND RESPONSE OF THE TASK
334839757 task-assignment
Learning scheduler parameters for adaptive preemption
Ad

Similar to capacityshifting1 (20)

PPTX
PERIODIC TASK SCHEDULING - Chap.5 Periodic Task Scheduling
PDF
slot_shifting
PPTX
scheduling.pptx
PDF
Real Time most famous algorithms
PPTX
Scheduling algorithm in real time system
PDF
Periodic
PDF
SCHEDULING DIFFERENT CUSTOMER ACTIVITIES WITH SENSING DEVICE
PPTX
Clock driven scheduling
PDF
Scheduling Problems from Workshop to Collaborative Mobile Computing: A State ...
DOCX
Novel Scheduling Algorithm in DFS9(1)
PPTX
Computer System Scheduling
PDF
COMPARATIVE ANALYSIS OF FCFS, SJN & RR JOB SCHEDULING ALGORITHMS
PDF
Comparative Analysis of FCFS, SJN & RR Job Scheduling Algorithms
PPTX
programming .pptx
PPT
ESC UNIT 3.ppt
PPT
task_sched2.ppt
PDF
6_RealTimeScheduling.pdf
DOCX
Rate.docx
PERIODIC TASK SCHEDULING - Chap.5 Periodic Task Scheduling
slot_shifting
scheduling.pptx
Real Time most famous algorithms
Scheduling algorithm in real time system
Periodic
SCHEDULING DIFFERENT CUSTOMER ACTIVITIES WITH SENSING DEVICE
Clock driven scheduling
Scheduling Problems from Workshop to Collaborative Mobile Computing: A State ...
Novel Scheduling Algorithm in DFS9(1)
Computer System Scheduling
COMPARATIVE ANALYSIS OF FCFS, SJN & RR JOB SCHEDULING ALGORITHMS
Comparative Analysis of FCFS, SJN & RR Job Scheduling Algorithms
programming .pptx
ESC UNIT 3.ppt
task_sched2.ppt
6_RealTimeScheduling.pdf
Rate.docx
Ad

capacityshifting1

  • 1. An Efficient Implementation of Slot Shifting Algorithm based on deferred Update L.J. GokulVasan Chair of Real-Time Systems TU-KL, Germany Email: lakshama@rhrk.uni-kl.de Dr.Mitra Nasri Max Plank Institute for Software Systems Kaiserslautern, Germany Email: mitra@mpi-sws.org Prof.Dipl.-Ing.Dr.Gerhard Fohler Chair of Real-Time Systems TU-KL, Germany Email: fohler@eit.uni-kl.de Abstract Slot-shifting algorithm provides a mechanism to accommodate event triggered tasks along with periodic tasks. Slot-shifting uses the residual bandwidth of the periodic tasks to accommodate aperiodic tasks, called spare capacity, to improve the chances of admitting aperiodic tasks. In contrast to bandwidth servers, slot-shifting preserves the unoccupied capacity of the system by shifting the spare capacity to future. Slot-shifting uses a discrete time model, termed as slots. Slot-shifting assumes a global time, whose progression is triggered by equidistant events, detected by an independent external observer. At the beginning of each slot the scheduler is invoked to determine which job of which task to schedule in that slot. The scheduler will also update the spare capacity of the system. This activation of scheduler for each slot is computationally expensive and has a considerable overhead. In this work we will provide an approach to remove the slots and replace it with a larger quantity that is naturally related to the release time and deadlines of the jobs. Moreover, we provide a method for delayed updates of the space capacities in order to reduce the runtime computational complexity, but preserving the notion incepted by slot shifting. We will show that this new approach reduces the scheduling overhead of slot shifting by around 60% in best cases and 45% on average cases. Finally, The work will provide a new approach to guarantee the aperiodic task. This new solution will have time complexity of O(1) per interval compared to existing (slot-shifting) algorithm having complexity of O(n) per interval, where n is the complexity involved to iterate over n tasks that belongs to an interval. I. Introduction Real-time systems must fulfill twofold constraints, in order to be considered working functionally correct: First, they must process information and consecutively produce a correct output behaviour. Second, their interaction with the environment must happen within stringent timing constraints dictated by the corresponding environment. In contrast to many other systems, real-time systems are thus not primarily optimized for speed (in the sense of maximized for throughput), but to provide determinism and worst case guarantees. To ensure meeting their strict timing constraints, real-time systems utilize real-time scheduling algorithms. A scheduling algorithm determines the execution order of the jobs of the workload on the processors of the system. There exist many different classification schemes for scheduling algorithms, e.g., depending on the method to prioritize different jobs, or depending on the moment in time when scheduling decisions are made. An important classification of scheduling algorithms is time-triggered and event-triggered algorithms. Event-triggered algorithms are based on a set of rules that are used at runtime of the system to make the scheduling decisions. In contrast to this, time-triggered algorithms determine the execution order of the jobs statically, i.e., prior to the runtime of the system. 
Static scheduling has been shown to be appropriate for a variety of hard real time systems mainly due to the verifiable timing behaviour of the system and the complex task models supported. Another classification of scheduling algorithms is established on the criterion by which priorities are assigned to the tasks. If each task is assigned a fixed priority that does not change at runtime from job to job of the same task, then this is called fixed (task) priority scheduling . If the priority of jobs of the same task may change over time or from job to job, this is called dynamic task priority scheduling. Most of real time applications have both periodic tasks and aperiodic tasks. Typically, periodic tasks are time driven and execute the demanding activities with hard timing constraints aimed at guaranteeing regular activation rates. Aperiodic tasks are usually event driven and may have hard, soft, or non real time requirements depending on the specific application. When dealing with hybrid task sets, the main objective of the system is to assure the schedulability of all guaranteed tasks in worst case conditions and provide good average response times for soft and non-realtime activities. Offline guarantee of event driven aperiodic tasks with critical timing constraints can be done only by making proper assumptions on the environment, i.e., by assuming the maximum arrival rate for each critical event. If the maximum arrival rate of some event cannot be bounded a priori, the associated aperiodic task cannot be guaranteed statically, although an online guarantee of individual aperiodic requests can still be done. Aperiodic tasks requiring
  • 2. online guarantee on individual instances are called firm. Whenever a firm aperiodic request enters the system, an acceptance test can be executed by the kernel to verify whether the request can be served within its deadline. If such a guarantee cannot be done, the request is rejected. On rejection of firm aperiodic tasks the systems predictability or deterministic behaviour will not be effected and will not cause safety hazard for the system. A. Related Work Server algorithms for fixed priority scheduling [2, 3, 4], as well as for dynamic priority scheduling [5, 6], aim at reserving a fraction of the processor bandwidth to the aperiodic jobs. The server algorithms introduce an additional periodic task, the server task, into the schedule. The purpose of the periodic server task is to service aperiodic requests as soon as possible. Like any periodic task, a server is characterized by a period Ts and a computation time Cs called server capacity. Polling Server, (PS) [2]. At periods Ts server becomes active and serves aperiodic requests with its capacity Cs. If no aperiodic requests are pending, PS suspends itself until the beginning of its next period, and the time originally allocated for aperiodic service is not preserved for aperiodic execution but is used by periodic tasks. If no aperiodic task arrives the capacity is wasted.Note that if an aperiodic request arrives just after the server has suspended, it must wait until the beginning of the next polling period, when the server capacity is replenished at its full value. The server is based on Rate Monotonic Scheduling Deferrable Server, (DS) [2,3].DS algorithm creates a periodic task (usually having a high priority) for servicing aperiodic requests. DS preserves its capacity if no requests are pending upon the invocation of the server. The capacity is maintained until the end of the period, so that aperiodic requests can be serviced at the same server’s priority at any-time, as long as the capacity has not been exhausted. At the beginning of any server period, the capacity is replenished at its full value. Priority Exchange [2,3].The algorithm uses a periodic server (usually at a high priority) for servicing aperiodic requests.The server preserves its high-priority capacity by exchanging it for the execution time of a lower-priority periodic task. At the beginning of each server period, the capacity is replenished at its full value. If aperiodic requests are pending and the server is the ready task with the highest priority, then the requests are serviced using the available capacity; otherwise Cs is exchanged for the execution time of the active periodic task with the highest priority.When a priority exchange occurs between a periodic task and server, the periodic task executes at the priority level of the server while the server accumulates a capacity at the priority level of the periodic task. Thus, the periodic task advances its execution, and the server capacity is not lost but preserved at a lower priority. Slack Stealing [4]. The Slack Stealing algorithm does not create a periodic server for aperiodic task service. Rather it creates a passive task, referred to as the Slack Stealer, which attempts to make time for servicing aperiodic tasks by ”stealing” all the processing time it can from the periodic tasks without causing their deadlines to be missed. The main idea behind slack stealing is that, typically, there is no benefit in early completion of the periodic tasks. 
when an aperiodic request arrives, the Slack Stealer steals all the available slack from periodic tasks and uses it to execute aperiodic requests as soon as possible. If no aperiodic requests are pending, periodic tasks are normally scheduled by rate monotonic algorithm. Constant Bandwidth Server [5]. When a new job enters the system, it is assigned a suitable scheduling deadline (to keep its demand within the reserved bandwidth) and it is inserted in the EDF ready queue. If the job tries to execute more than expected, its deadline is postponed (i.e., its priority is decreased) to reduce the interference on the other tasks. Note that by postponing the deadline, the task remains eligible for execution. In this way, the CBS behaves as a work conserving algorithm, exploiting the available slack in an efficient Total Bandwidth Server [6]. The assignment must be done in such a way that the overall processor utilization of the aperiodic load never exceeds a specified maximum value. Each time an aperiodic request enters the system, the total bandwidth of the server is immediately assigned to it, whenever possible. Once the deadline is assigned, the request is inserted into the ready queue of the system and scheduled by EDF as any other periodic instance. when the kth aperiodic request arrives at time t = rk, it receives a deadline. dk = max(rk, dk−1) + Ck Us where ck is the execution time of the request and Us is the server utilization factor( which is the bandwidth). When noticed the bandwidth servers needs an explicit bandwidth allocation. The main drawback of this approach is that a substantial amount of the CPU utilization might be reserved for future aperiodic jobs that will not necessarily arrive. Another drawback is that using the server algorithm can result in a lower schedulability bound of the system. To the best of our knowledge, it has not been shown how to combine server algorithms with arbitrary time-triggered scheduling tables. Though bandwidth allocated, servers don’t provide guarantee for the aperiodic task whose execution time is beyond Cs. Methods like slack stealing do try to admit the aperiodic tasks without explicit bandwidth, but
  • 3. does not guarantee the admittance of firm aperiodic task. Moreover, the allocation of bandwidth utilisation in servers is very minimal, because of random arrival nature of events. Slot shifting algorithm which combines the benefits of both time- and event-triggered scheduling for systems. Slot shifting resolves the complex constraints of a set of online tasks by constructing an static offine scheduling table. Similar to the aforementioned slack stealing algorithm, slot shifting expresses the leeway of tasks in this derived offine schedule by spare capacities. At runtime of the system, slot shifting performs acceptance tests for the individual jobs of aperiodic tasks and integrates them feasibly into the schedule. With the advantages also comes overhead on the system. This paper tries to address one among the prominent overheads, slots. We would address this problem by completely removing this notion, but still preserving the idea of slot shifting leading to a new notion of algorithm. We will also present a new guarantee algorithm which has a much less latency, easy to implement and needs less data space for computation compared to existing slot shifting algorithm. B. Organization of the report The following work is organized as follows: The Section 2 provides the system-model and notations followed by description of slot shifting algorithm. The section 3 would provide an elaborate version of problem statement which would succor in establishing solution. Purposefully the problem statement is deferred after algorithmic explanation of slot shifting, because this might provide a profound realization of the presented complication. The section 4 would provide the proposed solution. Incremental solution is presented with problems behind each phase; Enabling the assayer understand the real intuitiveness behind the final solution. The section 5 would provide the experimental results followed by conclusion and future work. II. System Model and Background Task set. Consider a realtime system with n periodic tasks, τ = {τ1, ..., τn}. Each task τi has a worst case execution time (WCET) Ci. A period Ti, an initiation time or offset relative to some time origin Φ, where 0 ≤ ϕi ≤ Ti and a hard deadline (relative to the initiation time), di. The parameters Ci, Ti, ϕi, di are assumed to be known deterministic quantities. We require that the tasks be scheduled according to the Earlies-Deadline-First (EDF) scheduling algorithm with τ1 having highest priority 1 and τn having lowest priority; however we do not require this priority assignment to hold, but we do assume that di ≤ Ti. A periodic task, say τi, generates infinite sequence of jobs. The kth such job , Ji,k is ready at time RJi,k = ϕi+(K−1)Ti and its CJi,k units of required execution must be completed by time dJi,k = ϕi + (K − 1)Ti + di or else timing fault will occur causing the job to terminate.We assume a fully preemptive system. We now introduce the aperiodic tasks, γk, k ≥ 1. Each aperiodic job, γk, has an associated arrival time, αk, a processing requirement, pk, and an optional hard deadline, Dk. The tasks that arrives with such hard deadline is called firm aperiodic job, Γ and tasks with no hard deadline is named soft aperiodic job, ζ, if the aperiodic job does not have a hard deadline, we set Dk = ∞. The aperiodic job are indexed such that 0 ≤ αk ≤ αk+1, K ≥ 1. We assume that the aperiodic job sequence is not known in advance. On arrival of Γ the job is temporarily placed on a list before acceptance test applied on it. 
The list that holds the set of firm aperiodic task waiting to be accepted is called unresolved list,ιA. The list holding set of γk which could not be guaranteed and not a ζ is called not guaranteed list,ιS, job from list,ιS is named Ji,k,S. The list holding a set of jobs that is ready to run is called ready list ,ιξ. Irrespective of the list, Job that is selected and running in the system is called current job, Ji,k,curr. Interval. A layer of certainty is added around the guaranteed jobs called interval. The definition of interval is as follows, Each interval has an id,i. The end of an interval, ei = dJi,k . The end of an interval determines the owner or nativity of the jobs, and there could be more than one job associated with an interval. The early start time of an interval ξi = min(RJi,k ), i.e., is the minimum of the start time of the job that belongs to the interval. The start of an interval si = max(ei−1, ξi). The spare capacity of an interval ωi for an interval is defined by ωi = |ωi| − n∑ i=0 CJi,k + min(0, ωi+1). (1) Spare capacity and interval calculation notion could create a negative spare capacity interval. This could create a consecutive backward wave of negative spare capacity intervals, till an interval which could completely satisfy the capacity needed for such interval. This ripple effect due to backward propagation of negative spare capacity forms a relation among these intervals is denoted as relation window, ϱi (Diagrammatic representation in Fig 1). The first
  • 4. interval of this relation window is named lender, li and the last in the relation window is named lent-till, bi. If interval is part of no relation window then li = bi = ∞. There could be many such relation window in a system within its hyper period. Fig. 1: Relation window In general, each interval Ii is represented as a set Ii = {i, s, e, ω, l, b}. Each element within the set is accessed using member access operator “ . ”( Dot operator 1 ). The interval progression is closely associated with time, the interval that is associated with the current time is called current interval, Icurr. The interval to which job Ji,k is associated is named IJi,k . The list of intervals is simply named I. Each job will hold the reference to the interval to which it belongs. In general, each Job,Ji,k is represented as a set Ji,k = {C, T, ϕ, d, I}. Each element within the set is accessed using member access operator “ . ”( Dot operator 1 ). System widely uses the notion of list. The list in general can be described as an enumerated collection of objects. Like a set, it contains members. Unlike set the elements have position. The position of an element in a list is its rank or index. The number of elements is called the length of the sequence. Formally, a list can be defined as an abstract data type whose domain is either the set of the natural numbers (for infinite sequences) or the set of the first n natural numbers (for a sequence of finite length n). The logical behaviour is defined by a set of values and a set of operations. The order of arrangement is an abstract functionality, where dequeue() removes the element with rank one and queue() would add an element at appropriate rank. The list holds the current accessed element rank. The traversal of the list from the current rank i to i−1 is called prev operator. Similarly the traversal from current rank to i to i−1 is through nxt operator. In this work, behavioural access of the list is done through symbol “ . ”( Dot operator1 ). During online phase, the timer λ is triggered based on the parameter λexpiry, which in case of slot shifting is a fixed relative period called slots. A slot s is defined as an external observer (λ) counts the ticks of the globally synchronized clock with granularity of slot length,|s|, i.e., λexpiry ← |s| and assigns numbers from 0 to ∞ on every slot called slot count scnt. We denote by ”in slot i” the time between the start and end of slot i, si, i.e., the time-interval [|s| ∗ i, |s| ∗ (i + 1)]. slot has uniform time length and on expiry triggers function schedule. A. Background Now we sketch the existing version of slot shifting algorithm. Describing the existing solution would enable us hand on hand comparison of the slot shifting algorithm with the proposed one, giving us intuitiveness of the problem and justifying the proposed solution. The slot shifting algorithm works at two phases namely, offline phase and online phase. In Offline Phase, the periodic tasks are broken into its corresponding jobs and a layer of certainty is added around the jobs through interval, then the spare capacity for each interval is calculated with equation(1) (Example below gives diagrammatic interpretation). This yields to a table of jobs and corresponding intervals. 1same as ANSI C’s dot operator
  • 5. Example: Exemplification of spare capacity calculation2 The intervals are created by applying the formulation e = dJi,k which calculates end of an interval, then the start of an interval is calculated using s = max(ei−1, ϵi). The created intervals are assigned with unique identification by assigning the 1st interval 1, 2nd interval 2,...so on, i.e., single count integer increment value is assigned to each consecutive interval starting with value 1 for 1st interval till the last interval. Along with the id assignment initial spare capacity is assigned with length of the interval, then equation 1 is applied as below. The table provides a rudimentary exemplification of applying equation 1 with some apocryphal values. The figurine below the table gives synonymic pictorial view of the table. We presume that such an approach would assist the reader in better understanding the intuitiveness behind the algorithm. TABLE I: Calculation of spare capacity Interval-id( Ii) step 1 2 3 4 Equation applied 1 2 1 1 -3 Tωi ← |ωi| − ∑n i=0 CJi,k 2 1 -1 -2 -3 ωi ← Tωi + min(0, ωi+1) Fig. 2: Calculation of spare capacity The example gives a clear picture of ripple of negative intervals. Graphical representation shows that I1 is not the only lender interval, rather many intermediate intervals partially satisfy the computational need of I4. Like many lenders the relation window might also have multiple borrowers and also combination of both. The same approach of assurance is provided during online phase for accepted firm aperiodic tasks. Interval progression. During online phase as time progresses the current interval’s end is checked against current time and progressed to next interval when needed. This is represented algorithmically as function update-slot. Spare Capacity update. For each slot the job is selected based on EDF. Selected job is checked for interval to which it belongs. If job belongs to the current interval, then the spare capacity of the interval is untouched as the needed computation is already done either during offline phase(in case of periodic tasks) or online phase(in case of accepted and guaranteed firm aperiodic task). If the task does not belong to the current interval the spare capacity of the current interval is negated by one slot and spare capacity of the job’s interval is tested for negativity. If negative then along with job’s interval all the intervals that are negative backwards are added one slot to its spare capacity till it reaches a positive interval. If the job’s interval is positive, it is simply added with one slot. This process is functionally represented in algorithm labelled update-sc. This functionality repeats itself for every slot.
  • 6. 1) Neutralisation Effect.: On execution of spare capacity update for every slot, an effect is observed when the current interval Icurr and the job Ji,k,curr is within the same relation window,ϱi, and IJi,k .ω is negative.This effect is named neutralisation effect. The observation is as follows: 1 If the Ji,k,curr’s interval is not Icurr, then we first negate the Icurr by one slot. 2 Then we go to interval IJi,k , test for spare capacity negativity and add one slot to it. 3 Now we traverse backward to each consecutive negative interval and add one slot. Repeat this process till the first positive interval is noticed and also add one slot to it.The first positive interval in this case is Icurr, so we neutralise what we negated in the step 1. This effect is important because it leads to a rule of application in deferred update, explained later in this paper called neutralisation rule. Example: Exemplification of neutralisation effect2 Let’s assume IJi,k,curr = 4 and Icurr = I1, so both the current job and current interval is within the same relation window ϱi. The following is an elementary view of the interval’s spare capacity before Ji,k,curr is scheduled with apocryphal values. Fig 3-(a): Initial state before Ji,k,curr is scheduled. Fig 3-(b): On decision 1st the job’s interval nativity is checked. In this scenario, the job does not belong to the current interval, so one slot is negated from the Icurr, i.e., from I1, so the I1’s spare capacity becomes 0. Fig 3-(c): Next the job’s interval,I4 is tested for negativity. In this scenario it is negative, i.e., “-3”. Irrespective of negativity, the job’s interval is incremented by one, making the interval I4 = −2. Fig 3-(d): Since the job’s interval is tested positive in negativity check, all the consecutive back intervals including the 1s t positive interval is incremented by one, i.e., intervals I3, I2 and I1 is incremented by one. Fig. 3: Neutralisation effect If noticed interval I1 is negated and added with same value at same point of time, causing neutralisation effect on Icurr, i.e., I1. 2Illustration provided is from interval perspective, because the paper tries to reason the problems from an abstract interval viewpoint.
  • 7. Algorithm 1: Slotshifting Online Phase 1 Function schedule(Ji,k,Curr); Input : Ji,k,Curr: job previously selected Output: Ji,k,nxt: next selected job 2 update-sc(Ji,k,Curr, I) ◃ update the interval; 3 if A ̸= ∅ then 4 Test-aperiodic(ιA) ◃ Test and try Guaranteeing firm Aperiodic’s if any; 5 end 6 Ji,k,nxt ←selection-fn() ◃ Get next eligible job; 7 update-slot(); 8 return Ji,k,nxt; 9 Function update-sc(Ji,k,Curr, I); Input : Ji,k,Curr: previous selected job, I: Interval list Output: ∅ Result: Spare Capacity of the respective intervals get updated 10 if Ji,k,Curr.I == Icurr then 11 return; 12 end 13 Icurr ← Icurr − 1; 14 Itmp ← Ji,k,Curr.I; 15 while Itmp do 16 if Itmp.ω ≥ 0 then 17 Itmp.ω ← Itmp.ω + 1; 18 break; 19 end 20 else 21 Itmp.ω ← Itmp.ω + 1; 22 Itmp.ω ← Itmp.prev; 23 end 24 end 25 Function update-slot(); Input : ∅ Output: ∅ Result: scnt gets added by 1; also check and move Icurr to I.nxt 26 scnt ← scnt + 1; 27 if scnt ≥ Icurr.end then 28 Icurr ← Icurr.nxt; 29 end 30 Function Test-aperiodic(ιA, I); Input : ιA: Unconcluded firm aperiodic list; I: list of intervals Output: ∅ Result: Job in ιA is moved to either ιG or ιS 31 while ιA ̸= None do 32 Γ ← ιA.dequeue(); 33 if Iacc ← Acceptance-test(Γ, I) ̸= None then 34 Guarantee-job(I, Iacc, Γ); 35 ιG.queue(Γ) 36 end 37 else 38 ιS.queue(Γ) 39 end 40 end
  • 8. 1 Function Acceptance-test(Γ, I); Input : Γ: Unconcluded firm aperiodic job; I: list of intervals Output: Iacc: interval in which task was accepted or None 2 Itmp ← Icurr; 3 ωtmp ← 0; 4 while Itmp.end ≤ Γ.d do 5 if Itmp.ω > 0 then 6 ωtmp ← Itmp.ω + ωtmp; 7 end 8 Itmp ← Itmp.nxt; 9 end 10 if ωtmp ≥ Γ.C then 11 return Itmp.prev; 12 end 13 else 14 return None; 15 end 16 Function Guarantee-Job(I, Iacc, Γ); Input : Γ: Unconcluded firm aperiodic job; I: list of intervals; Iacc: interval at which Γ.d falls Output: None Result: Γ gets guaranteed 17 if Iacc.end > Γ.C then 18 INew = split-intr(Iacc, Γ.C); 19 end 20 Iiter ← Iacc; 21 while Iiter.id ≥ Icurr.id do 22 Jlst ← Iiter.Jlst; 23 ω ← 0; 24 J ← Jlst.head; 25 while J do 26 ω ← ω + J.c; 27 J ← J.nxt; 28 end 29 Iiter.ω ← ω + max(0, Iiter.nxt.ω); 30 Iiter ← Iiter.prev; 31 end III. Problem Statement In real-time systems, scheduling algorithms are required to improve the use of the computational resources of the system. Thus, these scheduling algorithms must feature low overheads to leverage as much computational resources as possible for the applications at runtime. On the one hand, they must provide a satisfactory level of predictability and worst case guarantees for the tasks. On the other hand, they are expected to provide flexibility to react to aperiodic tasks. Last but not least, it is desirable that these algorithms must be able to handle the more and more complex constraints between interacting applications. In this section we will provide a detailed overview of the overheads in the current implementation of slot shifting. •Problem1: Updating Spare Capacity The function update-sc is applied for every slot.The problem arises when job that does not belong to the current interval is executing in current interval and spare capacity of the job’s Interval is negative. If the mentioned is the
  • 9. case, we need to traverse Backwards through all the consecutive negative intervals and increment the spare capacity of the intervals till either we reach positive interval or current interval. The function update-sc becomes an consistent Overhead when: * current executing job’s Interval is not current interval and the job interval’s spare capacity is a big negative number. The big negative spare capacity intuitively means the borrow from the lender interval is big. * Interval to which task belongs is far away from lender, i.e., there are many intervals in between the lender and job’s interval. * overhead further complicates when current job is the only task being selected for the next consecutive slots of current interval, which would be a normal scenario in slot level EDF. In other words, once the backward spare capacity update procedure is started between intervals due to negative job’s interval’s spare capacity, the probability of repeating for next consecutive slots till the job’s interval’s spare capacity becomes positive is almost 1. •Problem2: Slots Although there exists systems, such as avionic systems[29] that are based on the notion of slots, most of the existing real-time systems do not use slot. The notion of slots adds following overhead on system: * Increases the number of time scheduling decision needs to be made. * The worst case execution time of the task is calculated based on Ci = ⌈ calculated upper bound of Job execution time slotlength ⌉ (2) In the above defined equation, we notice the approximation of computation time of the jobs to its ceiling to fit the notion of slot. There might be some cases where the rounding of computation might take off more time from the system than needed by the tasks. This approximation could be part of the time needed to accept a firm aperiodic tasks during online phase, causing admittance failure. •Problem3: Acceptance and Guarantee Algorithm When an aperiodic arrives and gets accepted then it needs to be guaranteed. As described, guarantee algorithm creates an interval if needed and applies equation (1) starting from the last interval in which job was accepted till the current interval. The traditional spare capacity calculation might be helpful in offline phase for initial spare capacity calculation; but in online phase this calculation adds following complications: * Needs More data space. Both job and interval needs to hold reference3 of each other, needing an unnecessary additional place holder in either interval or job. * Implementation complexity. Adds additional difficulty in implementation, because along with the traversal of intervals, we also need to traverse through the jobs per interval. * More latency. Increases runtime complexity due to application of equation 1 to calculate spare capacity for each interval. * Reduces predictability. Calculation of spare capacity on each interval would take different runtime. The calculation runtime per interval directly proportionates the number of jobs associated with the interval, in other words, complexity of calculation of spare capacity per interval is not constant, reducing the determinism of the system. IV. Proposed Solution In this section we define a new algorithm inspired from slot shifting but with the mentioned problems addressed. To make this approach easier we from here on call the existing algorithm traditional so that we could seamlessly distinguish between the existing algorithm and proposed one. A. Deferred Update of Spare Capacity. 
As explained in Section 3 problem 1, the function that updates the spare capacity every slot has considerable overhead. The complexity of spare capacity update is O(n) per slot, where n is the computational complexity involved to traverse the intervals backwards till the first positive interval. The whole update procedure is a redundant repeatable procedure that occurs for each slot, this aggregates the complexity of spare capacity update for an interval to O(n ∗ i), where i is computational complexity to update the spare capacity every slot within the interval. On observation, we infer that accumulating i slots spare capacity update procedure and applying them later (when necessary, i.e., either at end of the interval or on aperiodic task arrival) would curtail the redundancy in update
  • 10. procedure and thus reducing computation to O(n) per interval from O(n ∗ i). This procedure of accumulating and updating the spare capacity on need basis is termed as deferred update. Offline and preparation phase. To make this deferred update work, offline phase is complicated with an additional computation that would interweave the references of the relation window, i.e., if the relation window, ϱ exists between a set of intervals, then each interval in ϱ’s lender, l and lent-till, b field is delegated with the respective reference of lender and lent-till3 . 1) O1 update spare capacity: Additionally, every Interval is added with an additional field named update-val, u. The update-val is updated during online phase through algorithm termed as O1-Update-sc, described below. Algorithm 1: Slot based O1-Update-SC 1 Function o1-update-sc(Ji,k,curr, I); Input : Ji,k,curr: Previous selected job, I: Interval table Output: None Result: Checks and Updates only spare capacity of job and current interval 2 if Ji,k,curr.I == Icurr then 3 return; 4 end 5 tskω ← Ji,k,curr.I.ω ◃ take a local copy of task’s spare capacity before updating; 6 if Ji,k,curr.I.ω < 0 then 7 Ji,k,curr.I.u ← Ji,k,curr.I.u + 1; 8 end 9 Ji,k,curr.I.ω ← Ji,k,curr.I.ω + 1; 10 /*neutralisation rule 1.check job’s lender and current interval’s lender is same.; 11 2. also check if job’s interval is negative before update.; 12 if both condition satisfies just don’t negate Icurr; 13 */; 14 if Ji,k,curr.I.l == Icurr.l and tskω < 0 then 15 return; 16 end 17 Icurr.ω ← Icurr.ω − 1; Described O1-Update-sc will get triggered at the end or beginning of every slot. When noticed the algorithm has no loops, but just checks on three conditions to update spare capacity and update-val of job’s interval or just current interval’s spare capacity. The verbose explanation of algorithm is as follows: • Check the spare capacity of job’s interval,ωj is negative before updating, if yes, then along with ωj update also job’s interval’s update-val, Uj. • Neutralisation rule. Check lender of job’s interval,lj and lender of current interval, lcurr is same and also check ωj is negative before updating. If both these conditions are satisfied then don’t negate the ωcurr. At any given scenario now the complexity of spare capacity update per slot is O(1). 2) Naive Deferred update: The above mentioned algorithm O1-Update-sc is just a facilitator of deferred update algorithm. The real spare capacity update for intervals happen during deferred update. The function deferred update gets triggered on 2 conditions. • When ICurr is moving to next interval. • When an aperiodic task arrives and needs to be tested to guarantee. To understand the intuitiveness of the working deferred update, we will now sketch a non working algorithm which will be called naive-deferred-update and explain certain challenges or scenarios that would defy straight forward notion of deferring the update of intervals within ϱi. We presume this approach would provide a good insight on the complete solution on deferred update. 3References are method to directly access the specified object without any traversal or computational overhead. Similar to ANSI C’s pointer notion.
  • 11. Algorithm 2: naive-deferred-update 1 Function naive-deferred-update(I); Input : I: Interval list Result: spare capacity of the interval within ϱi is updated with right data 2 lentTill ← Icurr.b; 3 updateV al ← 0; 4 while lentTill ̸= Icurr do 5 lentTill.ω ← lentTill.ω + updateV al ◃ Update the interval with previous interval’s deferred spare capacity; 6 updateV al ← updateV al + lentTill.U ◃ Accumulate previous intervals within ϱi update-val ; 7 lentTill.U ← 0; 8 lentTill ← lentTill.prev; 9 end To brief on the Algorithm 1 mentioned above, the algorithm traverses backwards starting from lent-till,bi till Icurr. As it moves backwards it does the following • It starts accumulating intervals update-val, Ui, within ϱi generated during O1-Update-SC • As it progresses backwards and before accumulating the current lent to interval’s update-val. Ui, we add the previously accumulated Ui to the current lent to interval’s spare capacity bi.ω. Symbolizing the functions. To make the explanation easier, we will symbolize certain functions. The function Update-SC symbolized as IJi,k Icurr, where Icurr is the current interval that is executing a job that belongs to the interval IJi,k and the function Update-SC is applied. We will symbolize the result as ωi. The ωi represents the spare capacity of the interval i, the subscript,i that represent a interval is made optional, rather the explanation in examples holds a column representing the corresponding interval id. We would Symbolize the function O1-Update-SC as IJi,k x Icurr, The super script x represents O(1) version of Update-SC, i.e., O1-Update-SC. The result of the x will be represented ω [U] i , where superscript U is the update val of the interval i. We say curr Ii (No entity on the left), when the current interval moves from Icurr to Ii, .i.e., Ii becomes Icurr. We will also symbolize function deferred-update as ϱi, which means deferred update is applied on the ϱi. To symbolize the naive version of deferred update we say N . Scenario 1: lender interval is not the only lender. In Algorithm (2) the deferred update happens with an assumption that lender,l of the relation window ϱi is the only lender. This is generally not the case; There could be an interval, Iϱi,btw in the middle of ϱi that could partially satisfy the capacity constraint of some interval, Iϱi,aft after the Iϱi,btw within ϱi. Example: existence of multiple lenders within ϱi To make the explanation more justifiable and apparent, we will first derive the offline version of spare capacity calculation in 2 steps. TABLE II: Offline calculation of spare capacity Interval-id step 1 2 3 4 function applied 1 4 1 1 -3 Tωi ← |ωi| − ∑n i=0 CJi,k 2 3 -1 -2 -3 ωi ← Tωi + min(0, ωi+1) The above offline calculation of ωi gives an simple insight on how Iϱi,btw, precisely, I2 and I3, partially satisfies capacity constraint of Iϱi,aft, i.e., I4. Assuming deferred update offline phase is applied on this set of intervals, we label this relation window as ϱ1. In this exemplification, let’s assume Icurr starts at I1 and I4 is the last interval. During online phase for spare capacity maintenance Algorithm(2) is applied on each slot and Algorithm(3) is applied when necessary, i.e., either when aperiodic arrives or end of interval. The table below shows slot by slot update procedure. TABLE III: Online phase using deferred update
  • 12. Interval-id slot 1 2 3 4 function-applied 0 3 -1 -2 -3 curr I1 1 3 -1 -2 −2[1] I4 x Icurr 2 3 -1 -2 −1[2] I4 x Icurr 3 3 -1 -2 0[3] I4 x Icurr 4 2 -1 -2 0[3] ∅ x Icurr 5 2 2 1 0 curr I2 and N ϱ1 but, this calculation is wrong in ω1 and ω2. lets run the same scenario with traditional algorithm, i.e., algorithm 1 to understand what went wrong in the TABLE III. TABLE IV: Online phase using Traditional algorithm Interval-id slot 1 2 3 4 function-applied 0 3 -1 -2 -3 curr I1 1 3 0 -1 -2 I4 Icurr 2 2 1 0 -1 I4 Icurr 3 1 1 1 0 I4 Icurr 4 0 1 1 0 ∅ Icurr 5 0 1 1 0 curr I2 On running naive-deferred-update we find ω2 has 1 spare capacity extra and ω1 has 2 spare capacity extra than the traditional algorithm. This additional spare capacity is an error, due to an interval, Iϱi,btw becoming positive during deferred update. The value above 0 of the interval,Iϱi,btw in relation window should not be propagated to back intervals, or the value after the interval,Iϱi,btw becoming positive should be treated as an error to backward interval. Scenario 2: During online phase, middle of ϱi some interval becomes positive before deferred update. During online phase of deferred update, irrespective of multiple lenders there could be an interval Iϱi,btw that becomes positive during O1-update-SC and before deferred-update is applied. To explain this scenario lets make an example. Example: Iϱi,btw becomes positive during O1-update-SC and before deferred-update To make this scenario simple and easy to understand lets make an assumption that lender l is the only interval that has lent to all the intervals in relation window ϱ1 and ϱ1 is the only set of intervals generated. TABLE V: Offline calculation of spare capacity Interval-id step 1 2 3 4 function applied 1 7 -1 -1 -3 Tωi ← |ωi| − ∑n i=0 CJi,k 2 2 -5 -4 -3 ωi ← Tωi + min(0, ωi+1) TABLE VI: Online phase using deferred update Interval-id slot 1 2 3 4 function-applied 0 2 -5 -4 -3 curr I1 1 2 −4[1] -4 -3 I2 x Icurr 2 2 −3[2] -4 -3 I2 x Icurr 3 2 −2[3] -4 -3 I2 x Icurr 4 2 −1[4] -4 -3 I2 x Icurr 5 2 0[5] -4 -3 I2 x Icurr 6 1 1[5] -4 -3 I2 x Icurr 7 1 1[5] −3[1] -3 I3 x Icurr 8 1 2 -3 -3 curr I2 and N ϱ1 On execution of I3 in Icurr, The Icurr should be negated by one, because I2 has already become positive so
On the execution of I3's job in Icurr, Icurr should be decremented by one, because I2 has already become positive and the neutralisation effect should therefore stop at I2. The problem is that I3 is not aware that I2 has already become positive. This is an error that needs to be addressed in naive-deferred-update. The scenarios above can occur in any combination.

3) Understanding the problem with the naive deferred update: In the scenarios above, the deferred update is erroneous because of the assumption that the lender l is the only lender, and the further assumption that no interval within ϱi becomes positive while executing in Icurr. Both problems stem from the function →x not being aware of its past intervals, an awareness the traditional approach had. Making the intervals aware of each other is only possible by traversing backwards and updating the past intervals on every scheduling decision, which would defeat the purpose of the deferred update. Instead, we rectify the error during the deferred update N(ϱi). We call this error correction the ripple effect correction (REC), ε.

4) Formalizing the solution: To correct the error, we now define ε in two steps, each rectifying the error of one of the scenarios above, and then conglomerate them into one single solution. This conglomerated solution is applied on naive-deferred-update to turn it into a complete working deferred spare-capacity update.

Addressing Scenario 1. Scenario 1 leads to an error because some intervals that were partial lenders become positive during deferred-update, while the update-val Ui of the intervals after them still travels backwards. To address this we define ε as

    εc ← max(0, [ωc − Σ j=bi−1..c+1 εj] + Σ k=bi..c+1 Uk)    (3)

so the spare capacity of the interval being updated becomes

    ωc ← [ωc − Σ j=bi−1..c+1 εj] + Σ k=bi..c+1 Uk    (4)

  • c is the interval whose ω is currently being updated.
  • j ranges from the interval before lent-till, bi − 1, to the interval just after c within ϱi.
  • k ranges from lent-till, bi, to the interval just after c.

Probing equations (3) and (4) more closely, both are variants of the same formula applied iteratively over ϱi. The REC in this scenario is simply the summation of all updated ωi ≥ 0.

Addressing Scenario 2. To understand the solution, consider the cause of the error: the assumption that during the online phase Iϱi,btw will not become positive before the deferred update is applied. This assumption is wrong: an in-between interval Iϱi,btw can become positive, satisfying the borrowed values of Iϱi,aft before the deferred update, because some job Ji,k,curr belonging to Iϱi,btw runs and satisfies the constraints. The intuition of the solution is that the update-val U of an interval Iϱi,aft that executes in Icurr must not propagate beyond Iϱi,btw; instead, the U accumulated after Iϱi,btw must be turned into an error for the intervals Iϱi,bfr before it. This inference leads to

    εc ← [ωc,bfr ≥ 0] ? Σ k=bi..c+1 Uk : Σ j=bi−1..c+1 εj    (5)

  • c is the interval whose ω is currently being updated.
  • j ranges from the interval before lent-till, bi − 1, to the interval just after c within ϱi.
  • k ranges from lent-till, bi, to the interval just after c.
  • ωc,bfr is the spare capacity of interval c before the deferred update touches it.

In words, the interval that is currently being updated is first checked for whether its spare capacity is greater than or equal to zero; if yes, then the update-val of the after intervals Iϱi,aft becomes the current REC value εc, otherwise we proceed with equations (3) and (4).
If the spare capacity of the interval being updated, Iϱi,btw, is found positive before the update, the previously collected REC value is applied on that interval, and the update-val of all the after intervals becomes the REC for the before intervals Iϱi,bfr.

Addressing the current interval Icurr. The current interval Icurr need not be incremented with the update-val U; it only needs to be corrected with the REC ε collected from the previous intervals, i.e.,

    Icurr.ω ← Icurr.ω − Σ j=bi−1..curr+1 εj    (6)

  • j ranges from the interval before lent-till, bi − 1, to the interval just after Icurr within ϱi.
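As a quick check of equations (3)–(6), take the Scenario 1 example in the state of Table III at slot 4, i.e., ω = [2, −1, −2, 0] with U4 = 3, bi = I4 and Icurr = I1; traversing backwards from I4 gives

    ω3 ← (−2 − 0) + 3 = 1,    ε3 = max(0, 1) = 1
    ω2 ← (−1 − ε3) + 3 = 1,   ε2 = max(0, 1) = 1
    ω1 ← 2 − (ε2 + ε3) = 0

which matches the final row of the traditional update in Table IV, [0, 1, 1, 0].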
Fig. 4: An interval in the middle of ϱi becoming positive before the deferred update is applied.

5) Complete working solution: Conglomerating the three corrections above into one single algorithm leads to the complete deferred update, which is as follows.

Algorithm 3: deferred-update
  Function deferred-update(I, Icurr)
    Input : I: interval list, Icurr: current interval
    Result: the spare capacity of the intervals in ϱi is updated with the right data
    lentTill ← Icurr.b
    if lentTill = ∅ then
        return
    end
    updateVal ← 0
    ε ← 0
    while lentTill ≠ Icurr do
        ωbefore ← lentTill.ω                      ◃ keep a copy of ω before updating
        lentTill.ω ← lentTill.ω + updateVal − ε   ◃ apply the deferred update, equation (4)
        if lentTill ≠ Icurr.b then
            if ωbefore < 0 then
                ε ← ε + max(0, lentTill.ω)        ◃ Scenario 1, equation (3)
            else
                ε ← updateVal                     ◃ Scenario 2, equation (5)
            end
        end
        updateVal ← updateVal + lentTill.U        ◃ accumulate the update-val of the intervals within ϱi
        lentTill.U ← 0
        lentTill ← lentTill.prev
    end
    Icurr.ω ← Icurr.ω − ε                         ◃ correct the current interval, equation (6)
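The following is a minimal Python sketch of the deferred update with REC as formalized in equations (3)–(6); the doubly linked interval objects and the field names (omega, U, prev, b) are illustrative assumptions, not the data structures of the actual implementation.

    def deferred_update(i_curr):
        """Deferred spare-capacity update with ripple effect correction (REC).

        Walks backwards from the lent-till interval i_curr.b towards i_curr,
        propagating the accumulated update-val (eq. 4) and collecting the REC
        epsilon (eqs. 3 and 5); finally i_curr itself is corrected (eq. 6).
        """
        lent_till = i_curr.b
        if lent_till is None:
            return
        update_val = 0
        eps = 0
        while lent_till is not i_curr:
            omega_before = lent_till.omega
            # eq. (4): apply the deferred update-val, minus the REC collected so far
            lent_till.omega += update_val - eps
            if lent_till is not i_curr.b:
                if omega_before < 0:
                    # Scenario 1 (eq. 3): capacity pushed above zero must not travel
                    # further back; it becomes part of the REC instead.
                    eps += max(0, lent_till.omega)
                else:
                    # Scenario 2 (eq. 5): the interval was already positive, so the
                    # update-val accumulated behind it becomes the REC.
                    eps = update_val
            update_val += lent_till.U   # accumulate this interval's deferred update-val
            lent_till.U = 0
            lent_till = lent_till.prev
        i_curr.omega -= eps             # eq. (6): correct the current interval

On the Scenario 1 example this produces ω = [0, 1, 1, 0], i.e., the same result as the traditional per-slot update in Table IV.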
B. Removing Slots

The deferred update of spare capacity in the previous section removes most of the complexity caused by updating the spare capacity in every slot, by deferring the update. In this section we remove the notion of the slot itself, turning slot-level spare-capacity shifting into time-based capacity shifting. To remove the notion of slots we need the following alterations on top of the existing ones.
  • The tasks are made continuous, i.e., we remove the additional computation mentioned in equation (2) and compute the start and end of the intervals in continuous time.
  • The decision timer, which in slot shifting is triggered at every slot, now only needs to be triggered at the end of each interval, i.e.,

        λexpiry ← Icurr.end    (7)

    The timer expiry is no longer constant as it was in slot shifting.
  • The slot-based O1-Update-SC is altered into the capacity-based spare-capacity update of Algorithm 4.

Algorithm 4: Capacity-based O1-Update-SC
  Function o1-update-sc(Ji,k,curr, I)
    Input : Ji,k,curr: previously selected job, I: interval table
    Output: ∅
    Result: checks and updates only the spare capacity of the job's interval and of the current interval
    if Ji,k,curr.I = Icurr then
        return
    end
    ◃ on the first invocation, initialize sched-time
    if first-time then
        sched-time ← 0
        first-time ← 0
    end
    run-time ← curr-time − sched-time   ◃ time elapsed between the previous and the current scheduling decision
    sched-time ← curr-time
    tsk-ω ← Ji,k,curr.I.ω               ◃ local copy of the job interval's spare capacity before updating
    if Ji,k,curr.I.ω < 0 then
        ◃ update the update-val only by the required amount
        Ji,k,curr.I.ω ← Ji,k,curr.I.ω + run-time
        if Ji,k,curr.I.ω > 0 then
            Ji,k,curr.I.u ← Ji,k,curr.I.u + [run-time − Ji,k,curr.I.ω]
        else
            Ji,k,curr.I.u ← Ji,k,curr.I.u + run-time
        end
    else
        Ji,k,curr.I.ω ← Ji,k,curr.I.ω + run-time
    end
    ◃ if the job's lender and the current interval's lender are the same and the job's interval was negative before the update, Icurr does not pay
    if Ji,k,curr.I.l = Icurr.l and tsk-ω < 0 then
        return
    end
    Icurr.ω ← Icurr.ω − run-time
In words: with the notion of slots removed, a task can run in Icurr continuously, even beyond the capacity ω that was borrowed. We therefore update the update-val of the current job's interval, uJi,k,curr, only by the amount that is actually required and avoid the remainder, which yields the following equations:

    Ji,k,curr.I.ω ← Ji,k,curr.I.ω + R    (8)
    Ji,k,curr.I.u ← [Ji,k,curr.I.ω > 0] ? (Ji,k,curr.I.u + [R − Ji,k,curr.I.ω]) : (Ji,k,curr.I.u + R)    (9)

  • R is the run time between the previous and the current scheduling decision.

With Algorithm 4, the natural decision process happens as before, except for the end-of-interval trigger. A scheduling decision happens in the following situations:
  • when a job arrives,
  • when a job exits,
  • at the end of the interval, i.e., when the timer λ expires.

We symbolize the capacity-based O1-Update-SC as →C, with the same notation as →x, but with Algorithm 4 applied in place of the slot-based O1-Update-SC.
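A minimal Python sketch of the run-time accounting of Algorithm 4 is given below; run_time is assumed to be the time elapsed since the previous scheduling decision (the listing derives it from curr-time and sched-time), and the interval fields omega, u and l (lender) are illustrative names, not the implementation's API.

    def o1_update_sc(prev_job, i_curr, run_time):
        """Capacity-based O(1) spare-capacity update (Algorithm 4 sketch).

        Touches only the previously running job's interval and the current interval.
        """
        if prev_job is None:
            i_curr.omega -= run_time            # idle time consumes Icurr's spare capacity, as in the slot-by-slot tables
            return
        if prev_job.I is i_curr:
            return                              # the job ran on its own interval's budget
        iv = prev_job.I
        omega_before = iv.omega
        if iv.omega < 0:
            iv.omega += run_time                # first repay the interval's own deficit
            if iv.omega > 0:
                # eq. (9): only the part that filled the deficit is deferred backwards
                iv.u += run_time - iv.omega
            else:
                iv.u += run_time
        else:
            iv.omega += run_time                # eq. (8): interval already non-negative
        # the current interval pays for the borrowed execution, unless the job's
        # interval shares Icurr's lender and was still in deficit before the update
        if iv.l is i_curr.l and omega_before < 0:
            return
        i_curr.omega -= run_time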
C. Guarantee Algorithm

In this section we derive a new way of guaranteeing a firm aperiodic task that takes constant time per interval and stops the traversal as soon as the computation constraint Γ.C of the aperiodic task is satisfied, thereby solving the problem mentioned in Section III, Problem 3.

Algorithm 5: New Guarantee Algorithm
  Function Guarantee-algorithm(I, Iacc, Γ)
    Input : I: list of intervals; Iacc: the interval in which Γ's deadline falls; Γ: the aperiodic job that was accepted
    Output: ∅
    Result: Γ is guaranteed
    if Iacc.end > Γ.d then
        INew ← split-intr(Iacc, Γ.d)
    end
    if INew then
        Iiter ← INew
    else
        Iiter ← Iacc
    end
    ∆ ← Γ.C
    while ∆ ≠ 0 do
        if Iiter.ω < 0 then
            Iiter.ω ← Iiter.ω − ∆
        else if Iiter.ω > 0 then
            if Iiter.ω ≥ ∆ then
                Iiter.ω ← Iiter.ω − ∆
                ∆ ← 0
            else
                lent ← Iiter.ω
                Iiter.ω ← Iiter.ω − ∆
                ∆ ← ∆ − lent
            end
        end
        Iiter ← Iiter.prev
    end

  Function split-intr(Iright, sp)
    Input : Iright: the interval in which Γ's deadline falls; sp: the split point at which the separation should happen
    Output: Ileft: the new interval that is created and added to the list
    Ileft.start ← Iright.start
    Ileft.end ← sp
    Ileft.sc ← Ileft.end − Ileft.start
    Iright.start ← sp
    Iright.sc ← Iright.sc − Ileft.sc
    Ileft.sc ← Ileft.sc + min(0, Iright.sc)
    insert-before(Iright, Ileft)   ◃ list helper that inserts the node before its first parameter
    return Ileft

In words, the algorithm uses a remaining demand ∆, which initially holds Γ.C, and updates the intervals until ∆ becomes 0.
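A minimal Python sketch of Algorithm 5 follows, using a small illustrative Interval class (start, end, omega as spare capacity, prev as the backward link) and an aperiodic job with fields C (execution demand) and d (absolute deadline); these names are assumptions made for the sketch, not the implementation's API.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Interval:
        start: int = 0
        end: int = 0
        omega: int = 0                     # spare capacity
        prev: Optional["Interval"] = None  # backward link in the interval list

    def split_intr(i_right: Interval, sp: int) -> Interval:
        """split-intr: cut i_right at time sp and link the new left part before it."""
        i_left = Interval(start=i_right.start, end=sp, prev=i_right.prev)
        i_left.omega = i_left.end - i_left.start
        i_right.start = sp
        i_right.omega -= i_left.omega
        i_left.omega += min(0, i_right.omega)   # fold any deficit of the right part back
        i_right.prev = i_left                   # insert the new interval before i_right
        return i_left

    def guarantee_job(i_acc: Interval, gamma) -> None:
        """New guarantee algorithm: charge gamma.C against spare capacities, backwards."""
        i_new = None
        if i_acc.end > gamma.d:
            i_new = split_intr(i_acc, gamma.d)  # exclude the part beyond the deadline
        i_iter = i_new if i_new is not None else i_acc
        delta = gamma.C
        while delta:
            if i_iter.omega < 0:
                i_iter.omega -= delta           # keep propagating the full remaining demand
            elif i_iter.omega > 0:
                if i_iter.omega >= delta:
                    i_iter.omega -= delta
                    delta = 0
                else:
                    lent = i_iter.omega         # this interval covers only part of delta
                    i_iter.omega -= delta       # its omega becomes the negative remainder
                    delta -= lent
            i_iter = i_iter.prev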
∆ is reduced in the following steps, which take place as we traverse backwards from either INew (in case a new interval is created) or Iacc, i.e., the interval in which the deadline falls:
  • When the interval's ω is positive but less than the required ∆, ∆ is reduced by ω and ω becomes the negative of the remaining ∆.
  • When the interval's ω is positive and greater than or equal to the required ∆, ω is simply reduced by ∆ and ∆ is set to 0.
  • Finally, when the interval's ω is negative, ∆ is simply charged to ω, i.e., ω ← ω − ∆.

In case a new interval is created we start from INew, because the ω of Iacc is already settled inside the split-intr function. The idea behind the computation is to reuse the spare-capacity calculation of the offline phase and to update the new spare capacities with the same reasoning, merely from a different perspective.

D. Putting it all together

This section shows how the capacity shifting algorithm is incorporated into the online phase. The algorithm removes the notion of the slot s and of the slot count scnt.

Algorithm 6: Capacity Shifting
  Function schedule(Ji,k,curr)
    Input : Ji,k,curr: the job previously selected
    Output: Ji,k,nxt: the next selected job
    O1-Update-SC(Ji,k,curr, I)          ◃ update the intervals with the run time
    if ιA ≠ ∅ then
        deferred-update(I, Icurr)       ◃ deferred update happens here
        Test-aperiodic(ιA)              ◃ test and try to guarantee pending aperiodics, if any
    end
    update-time()                       ◃ deferred update happens here too, if needed
    Ji,k,nxt ← selection-fn()           ◃ get the next eligible job
    return Ji,k,nxt

  Function o1-update-sc(Ji,k,curr, I)
    Input : Ji,k,curr: previously selected job, I: interval table
    Output: ∅
    Result: checks and updates only the spare capacity of the job's interval and of the current interval
    if Ji,k,curr.interval = Icurr then
        return
    end
    ◃ on the first invocation, initialize sched-time
    if first-time then
        sched-time ← 0
        first-time ← 0
    end
    run-time ← curr-time − sched-time   ◃ time elapsed between the previous and the current scheduling decision
    sched-time ← curr-time
    tsk-ω ← Ji,k,curr.interval.ω        ◃ local copy of the job interval's spare capacity before updating
    if Ji,k,curr.interval.ω < 0 then
        ◃ update the update-val only by the required amount
        Ji,k,curr.interval.ω ← Ji,k,curr.interval.ω + run-time
        if Ji,k,curr.interval.ω > 0 then
            Ji,k,curr.interval.u ← Ji,k,curr.interval.u + [run-time − Ji,k,curr.interval.ω]
        else
            Ji,k,curr.interval.u ← Ji,k,curr.interval.u + run-time
        end
    else
        Ji,k,curr.interval.ω ← Ji,k,curr.interval.ω + run-time
    end
    ◃ if the job's lender and the current interval's lender are the same and the job's interval was negative before the update, Icurr does not pay
    if Ji,k,curr.interval.l = Icurr.l and tsk-ω < 0 then
        return
    end
    Icurr.ω ← Icurr.ω − run-time

  Function update-time()
    Input : ∅
    Output: ∅
    Result: checks and moves Icurr to Icurr.nxt, together with a deferred-update
    if get-curr-time() ≥ Icurr.end then
        deferred-update(I, Icurr)
        Icurr ← Icurr.nxt
    end

  Function Test-aperiodic(ιA, I)
    Input : ιA: list of unconcluded firm aperiodic jobs; I: list of intervals
    Output: ∅
    Result: each job in ιA is moved to either ιG or ιS
    while ιA ≠ ∅ do
        Γ ← ιA.dequeue()
        if (Iacc ← Acceptance-test(Γ, I)) ≠ ∅ then
            Guarantee-job(I, Iacc, Γ)
            ιG.queue(Γ)
        else
            ιS.queue(Γ)
        end
    end

  Function deferred-update(I, Icurr)
    Input : I: interval list, Icurr: current interval
    Result: the spare capacity of the intervals within ϱi is updated with the right data
    lentTill ← Icurr.b
    if lentTill = ∅ then
        return
    end
    updateVal ← 0
    ε ← 0
    while lentTill ≠ Icurr do
        ωbefore ← lentTill.ω                      ◃ keep a copy of ω before updating
        lentTill.ω ← lentTill.ω + updateVal − ε   ◃ apply the deferred update, equation (4)
        if lentTill ≠ Icurr.b then
            if ωbefore < 0 then
                ε ← ε + max(0, lentTill.ω)        ◃ Scenario 1, equation (3)
            else
                ε ← updateVal                     ◃ Scenario 2, equation (5)
            end
        end
        updateVal ← updateVal + lentTill.U        ◃ accumulate the update-val of the intervals within ϱi
        lentTill.U ← 0
        lentTill ← lentTill.prev
    end
    Icurr.ω ← Icurr.ω − ε                         ◃ correct the current interval, equation (6)
  Function Acceptance-test(Γ, I)
    Input : Γ: an unconcluded firm aperiodic job; I: list of intervals
    Output: Iacc: the interval in which the task is accepted, or ∅
    Itmp ← Icurr
    ωtmp ← 0
    while Itmp.end ≤ Γ.d do
        if Itmp.ω > 0 then
            ωtmp ← ωtmp + Itmp.ω
        end
        Itmp ← Itmp.nxt
    end
    if ωtmp ≥ Γ.C then
        return Itmp.prev
    else
        return ∅
    end

  ◃ New version of the guarantee algorithm. Notice that it has only one loop and stops at the point where ∆ is satisfied.
  Function Guarantee-job(I, Iacc, Γ)
    Input : I: list of intervals; Iacc: the interval in which Γ's deadline falls; Γ: the aperiodic job that was accepted
    Output: ∅
    Result: Γ is guaranteed
    if Iacc.end > Γ.d then
        INew ← split-intr(Iacc, Γ.d)
    end
    if INew then
        Iiter ← INew
    else
        Iiter ← Iacc
    end
    ∆ ← Γ.C
    while ∆ ≠ 0 do
        if Iiter.ω < 0 then
            Iiter.ω ← Iiter.ω − ∆
        else if Iiter.ω > 0 then
            if Iiter.ω ≥ ∆ then
                Iiter.ω ← Iiter.ω − ∆
                ∆ ← 0
            else
                lent ← Iiter.ω
                Iiter.ω ← Iiter.ω − ∆
                ∆ ← ∆ − lent
            end
        end
        Iiter ← Iiter.prev
    end
  Function split-intr(Iright, sp)
    Input : Iright: the interval in which Γ's deadline falls; sp: the split point at which the separation should happen
    Output: Ileft: the new interval that is created and added to the list
    Ileft.start ← Iright.start
    Ileft.end ← sp
    Ileft.sc ← Ileft.end − Ileft.start
    Iright.start ← sp
    Iright.sc ← Iright.sc − Ileft.sc
    Ileft.sc ← Ileft.sc + min(0, Iright.sc)
    insert-before(Iright, Ileft)   ◃ list helper that inserts the node before its first parameter
    return Ileft

Algorithm 6 is simply the conglomeration of all the explanations in this paper, so that the reader can follow the whole flow of the online phase in one place. Note that in scenarios such as an aperiodic arrival, deferred-update may be invoked twice, where the second invocation is merely an idle pass over the intervals; this could be avoided with simple conditional flag checks, but we have left them out of the algorithm to keep its gist simple and understandable.

Looking at the whole algorithm, the complexity of the scheduler within an interval, as far as updating spare capacity is concerned, becomes O(1); the interval movement still carries a complexity of O(n), where n is the number of intervals in ϱi. This was the complexity the traditional algorithm paid per slot, and it is now paid only once per interval.

Guarantee-job. In the slot shifting algorithm, the complexity of recalculating the spare capacity of an interval during the guarantee algorithm is O(n), where n is the number of jobs in that interval. The proposed algorithm reduces this to O(1); moreover, the backward traversal is cut short from "Iacc to Icurr" to "Iacc to the point where the computation constraint of the aperiodic job is satisfied".

V. Experimental Results

Above, we proposed a new version of the slot shifting scheduling algorithm and argued that it often makes better use of the available computing resources than pure slot shifting. We now evaluate the proposed algorithm experimentally and compare its performance with that of slot shifting. We used the pseudo-random task set generators by Ripoll et al. [11] and UUniFast [12], as available in the SimSo framework, for the evaluation. Workloads produced by these generators have been widely used for experimentally evaluating real-time scheduling algorithms and have been available to the larger research community for several years. We believe that using these task generators provides context for our simulation results and allows them to be compared with results obtained by other researchers.

UUniFast generator: it generates task-set configurations for a fixed number of tasks (N), a specified total utilisation (U) and a number of task sets to generate (nset). UUniFast produces independent tasks with randomly generated, unbiased utilisation factors: the utilisation factor of each task is sampled, and only task sets with the correct total utilisation are kept in the configuration. The task count is drawn from [5; 20], the utilisation factor is restricted to [10; 90] percent, and nset is kept constant at 1.

Ripoll generator: it generates task-set configurations based on the number of task sets to generate (nsets), the maximum computation time of a task (compute), the maximum slack time (deadline), the maximum delay after the deadline (period) and the total utilisation each set should reach (target-util).
The first three parameters are used to generate each task of the task set; the processor utilisation is used to determine the number of tasks. The utilisation factor of the system is drawn uniformly from the interval [10; 90], the computation times from [1; 20], the deadlines from [2; 170] and the periods from [3; 650]. The interval ranges of the input parameters are further randomized by a simple random generator. The output of the generator is passed as input to the offline phase of slot shifting, which produces the job and interval tables from the task set so that it fits the slot shifting algorithm. The same framework is used for the aperiodic task sets, except that the earliest start time is additionally randomized over the range of the hyperperiod.
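For reference, the standard UUniFast recurrence of Bini and Buttazzo [12], which underlies the utilisation sampling described above, can be sketched as follows (a generic formulation, not the SimSo code used in the experiments):

    import random

    def uunifast(n, total_util):
        """UUniFast [12]: draw n unbiased task utilisations that sum to total_util."""
        utils = []
        remaining = total_util
        for i in range(1, n):
            # each step peels one utilisation off the remaining budget
            next_remaining = remaining * random.random() ** (1.0 / (n - i))
            utils.append(remaining - next_remaining)
            remaining = next_remaining
        utils.append(remaining)
        return utils

    # e.g. a 10-task set with total utilisation 0.6
    print(uunifast(10, 0.6))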
Fig. 5: Offline task generator and slot shifting offline table generator framework.

To give a fair benchmark, the online and offline phases of both the existing slot shifting algorithm and the proposed algorithm were implemented in the SimSo [18, 19] simulation framework. A generic object-oriented framework was created within SimSo so that the implementation can adapt to any future changes with respect to slot shifting. Up to 5000 jobs were used to verify correctness and to benchmark the computational demand of the traditional and the new algorithm. The slot length in slot shifting was kept constant at 5 milliseconds.

Fig. 6: Performance improvement in the number of scheduling decisions of capacity shifting against slot shifting.

The vertical axis shows the number of scheduling decisions taken: on average, a run of 2604 jobs requires about 4400 slot-wise decisions in slot shifting, compared with capacity shifting. We can infer that capacity shifting takes on average 45 percent fewer scheduling decisions than slot shifting; in the best cases the improvement goes up to 60 percent.
Guarantee Algorithm. To benchmark the guarantee algorithm of slot shifting against that of capacity shifting, we exploited the SimSo feature of assigning a computation time to each piece of functionality. To make the difference visible, we set the time to iterate over one job within an interval (applying equation (1) during the guarantee process) to 1 nanosecond and accounted all other miscellaneous computation as 0.5 nanoseconds. Fig. 7 below shows that the computation time of the new guarantee algorithm is constant and as small as the miscellaneous computation of the traditional slot shifting algorithm. Conceptually, the new guarantee algorithm computes the spare capacity of the intervals with a minimal, constant computational cost compared with slot shifting; its computation time is not affected by the number of jobs an interval owns. The vertical axis in Figure 7 shows the computation time per interval and the horizontal axis the number of jobs in the interval. We can infer a constant computation time of 0.5 nanoseconds per interval for the new guarantee algorithm, against a computation time that grows proportionally with the number of jobs for the slot shifting algorithm.

Fig. 7: Constant time taken by the new guarantee algorithm against the slot shifting guarantee algorithm.
VI. Conclusion and future work

This paper presented three novel solutions: the deferred update, the slotless notion and the O(1) guarantee algorithm. Solutions 1 and 3 can be adopted in other scenarios independently, while solution 2 depends on solution 1; in other words, solution 2 is an extension of solution 1. The conglomeration of all three solutions is named capacity shifting. The following are further observations and areas to explore for improving slot shifting or the present capacity shifting algorithm.
  • Compute only when required. We deferred the computation of spare capacity to the end and beginning of intervals, but this notion could be extended further to an approach that updates spare capacity only when it is actually needed.
  • Capacity shifting could be extended to multi-core platforms.
  • The presented deferred spare-capacity update is executed even when no update is necessary. This could be avoided with small conditional checks in the implementation.
  • Remaining challenges:
    – A small drift of the system clock against the computed slot clock will cause the whole system to fail.
    – Slot shifting has been implemented on real-time platforms such as LITMUS^RT and in simulation environments such as SimSo. The major problem is the preparation and injection of the precomputed job and interval tables into the system: slot shifting needs the intervals and jobs to be aware of each other, and creating these mutual references for the offline table is a strenuous effort. Getting such data into the system requires separate preparation software, and implementing the offline and online phases of that preparation software is an effort comparable to implementing the slot shifting algorithm itself.
    – The offline table can become really big. In some cases, a non-harmonic task set of 10 tasks produced interval tables with thousands of entries for one hyperperiod, which we believe is normal in real environments.

Acknowledgment

I would like to personally thank John Gamboa for working alongside me on the derivation of the new guarantee algorithm. I would like to thank Mitra Nasri for being patient whenever I came up with deviating implementation ideas and for providing valuable suggestions when needed. Finally, I would like to thank my professor Gerhard Fohler for devising the novel slot shifting algorithm and for giving me the opportunity to implement a new version of it.

References
[1] Gerhard Fohler. Flexibility in Statically Scheduled Hard Real-Time Systems. PhD dissertation (the dissertation on the slot shifting algorithm).
[2] J. P. Lehoczky, L. Sha, and J. K. Strosnider. Enhanced aperiodic responsiveness in hard real-time environments. In Proceedings of the IEEE Real-Time Systems Symposium, December 1987.
[3] J. K. Strosnider, J. P. Lehoczky, and L. Sha. The deferrable server algorithm for enhancing aperiodic responsiveness in hard-real-time environments. IEEE Transactions on Computers, 4(1), January 1995.
[4] J. P. Lehoczky and S. Ramos-Thuel. An optimal algorithm for scheduling soft-aperiodic tasks in fixed-priority preemptive systems. In Proceedings of the IEEE Real-Time Systems Symposium, December 1992.
[5] M. Spuri and G. C. Buttazzo. Efficient aperiodic service under earliest deadline scheduling. In Proceedings of the IEEE Real-Time Systems Symposium, December 1994.
[6] C. W. Mercer, S. Savage, and H. Tokuda. Processor capacity reserves for multimedia operating systems. Technical Report CMU-CS-93-157, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, May 1993.
[7] Implementation of Capacity Shifting and Slot Shifting in SimSo. https://github.com/gokulvasan/CapacityShifting
[8] Giorgio C. Buttazzo. Rate monotonic vs. EDF: judgment day. Real-Time Systems, Volume 29, Issue 1:5-26, January 2005.
[9] Gowri Sankar Ramachandran. Integrating enhanced slot-shifting in µC/OS-II. School of Innovation, Design and Engineering, Mälardalen University, Sweden.
[10] Giorgio C. Buttazzo. Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications. Kluwer Academic Publishers, 101 Philip Drive, Assinippi Park, Norwell, MA 02061, USA, 1997.
[11] I. Ripoll, A. Crespo, and A. K. Mok. Improvement in feasibility testing for real-time tasks. Real-Time Systems: The International Journal of Time-Critical Computing, 11:19-39, 1996.
[12] E. Bini and G. C. Buttazzo. Measuring the performance of schedulability tests. Real-Time Systems, 30(1-2):129-154, 2005.
[13] G. C. Buttazzo and E. Bini. Optimal dimensioning of a constant bandwidth server. In Proc. of the IEEE 27th Real-Time Systems Symposium (RTSS 2006), Rio de Janeiro, Brazil, December 6-8, 2006.
[14] Gerhard Fohler. Joint scheduling of distributed complex periodic and hard aperiodic tasks in statically scheduled systems. In Proceedings of the 16th Real-Time Systems Symposium, Pisa, Italy, December 1995.
[15] G. Fohler. Analyzing a Pre Run-Time Scheduling Algorithm and Precedence Graphs. Research Report, Institut für Technische Informatik, Technische Universität Wien, Vienna, Austria, September 1992.
[16] C. Lin and S. A. Brandt. Improving soft real-time performance through better slack management. In Proc. of the IEEE Real-Time Systems Symposium (RTSS 2005), Miami, Florida, USA, December 5, 2005.
[17] R. I. Davis, K. W. Tindell, and A. Burns. Scheduling slack time in fixed priority pre-emptive systems. In Proceedings of the IEEE Real-Time Systems Symposium, December 1993.
[18] Maxime Chéramy, Pierre-Emmanuel Hladik, and Anne-Marie Déplanche. SimSo: A simulation tool to evaluate real-time multiprocessor scheduling algorithms. In 5th International Workshop on Analysis Tools and Methodologies for Embedded and Real-time Systems (WATERS), Madrid, Spain, July 2014.
[19] SimSo simulator framework official website. http://projects.laas.fr/simso
[20] C. W. Mercer, S. Savage, and H. Tokuda. Processor capacity reserves for multimedia operating systems. Technical Report CMU-CS-93-157, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, May 1993.
[21] M. Spuri and G. C. Buttazzo. Efficient aperiodic service under earliest deadline scheduling. In Proceedings of the IEEE Real-Time Systems Symposium, December 1994.
[22] M. Spuri and G. C. Buttazzo. Scheduling aperiodic tasks in dynamic priority systems. Journal of Real-Time Systems, 10(2), 1996.
[23] Fabian Scheler and Wolfgang Schröder-Preikschat. Time-triggered vs. event-triggered: A matter of configuration? In Model-Based Testing, ITG FA 6.2 Workshop and GI/ITG Workshop on Non-Functional Properties of Embedded Systems, 13th GI/ITG Conference on Measuring, Modelling and Evaluation of Computer and Communication Systems (MMB Workshop), pages 1-6, March 2006.
[24] Sanjoy K. Baruah, Aloysius K. Mok, and Louis E. Rosier. Preemptively scheduling hard-real-time sporadic tasks on one processor. In Proc. Real-Time Systems Symposium (RTSS), pages 182-190, 1990.
[25] Damir Isovic. Flexible Scheduling for Media Processing in Resource Constrained Real-Time Systems. PhD thesis, Mälardalen University, Sweden, November 2004.
[26] Damir Isovic and Gerhard Fohler. Handling mixed task sets in combined offline and online scheduled real-time systems. Real-Time Systems Journal, Volume 43, Issue 3:296-325, December 2009.
[27] Hermann Kopetz. Event-triggered versus time-triggered real-time systems. In Proceedings of the International Workshop on Operating Systems of the 90s and Beyond, pages 87-101, London, UK, 1991. Springer-Verlag.
[28] Martijn M. H. P. van den Heuvel, Mike Holenderski, Reinder J. Bril, and Johan J. Lukkien. Constant-bandwidth supply for priority processing. IEEE Transactions on Consumer Electronics (TCE), volume 57, pages 873-881, May 2011.
[29] Aeronautical Radio, Incorporated (ARINC). https://en.wikipedia.org/wiki/ARINC