TRANSCRIPT
182.086 Real-Time Scheduling
SS 2022
Prof. Ulrich Schmid
http://ti.tuwien.ac.at/ecs/teaching/courses/rt_sched
Technische Universität Wien
Institut für Technische Informatik
Embedded Computing Systems Group (E191-02)
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt_sched) – p. 1/209
Course Overview
Paradigm
Use an advanced scheduling algorithm, namely Earliest Deadline First (EDF), for an in-depth treatment of:
Modeling and Analysis
Algorithms
Schedulability results
Prerequisites:
Basic mathematics and complexity theory
Basic real-time scheduling knowledge
Nice introduction: C. J. Fidge. Real-time scheduling theory. SVRC Services (UniQuest Pty Ltd) Consultancy Report 0036-2, April 2002 (http://eprint.uq.edu.au/archive/00001038/)
Course Content
What you will hear about:
Uniprocessor systems under EDF
Feasibility and response time analysis
Complexity analysis
Competitive analysis
What you will NOT hear about:
Different scheduling algorithms
Real-time programming
Worst-case execution time analysis
Scheduling in multiprocessor and distributed systems
Some Background Info
Textbook: John A. Stankovic, Marco Spuri, Krithi Ramamritham, Giorgio C. Buttazzo: Deadline Scheduling for Real-Time Systems, Kluwer Academic Publishers (now Springer Verlag), 1998, ISBN 0-7923-8269-2
Schedule of lectures available on the course homepage http://ti.tuwien.ac.at/ecs/teaching/courses/rt_sched
What to Do?
2 Homework assignments (65%), to be carefully, rigorously and completely worked out by yourself
LaTeX paper version
uploaded in TUWEL course
3 student presentations (35%), where you will be asked on the spot to present a certain topic from the previous lectures (using my or your own slides/presentations)
Participation in discussions in class (± 10%)
General Rules
Passing requires ≥ 60% of the achievable maximum
Presence in class is expected!
Advance reading of textbook is recommended.
All work must be done on your own and written up in your own words; all sources of information must be properly referenced (except textbook and slides)
Graduate courses like this one adhere to “pull-based” learning: it is up to you to make the best use of this course!
Expected Achievements
Having passed this course, you should
have gained a basis for your own work in this area
have a somewhat improved formal/mathematical understanding (a major rationale of most graduate basic courses in Mag-TI)
have seen another example of “computer science ⊃ programming”
http://ti.tuwien.ac.at/ecs/teaching/courses/rt_sched
Follow-up Courses
Scientific Project in Technischer Informatik 182.759
First steps in your own scientific work in a (self-)assigned distributed algorithms project
Guided writing of a small scientific paper + presentation
Learning about the scientific community in the field
Master thesis, Dissertation
Typically funded positions (Forschungsbeihilfe, Master-level research contract, PhD research contract)
Learn about top-level international research
Try out your (first) own steps in real scientific research
http://ti.tuwien.ac.at/ecs/teaching/courses/valg/misc/success_st
Questions ?
Introduction to Real-Time Scheduling (Ch. 2)
Tasks (I)
A task is an executable entity of work, characterized by
task execution properties
task release constraints
task completion constraints
task dependencies
Tasks are executed in a system made up of
one processor (CPU)
other resources (communication links, shared data etc.)
Tasks (II)
Task execution properties:
Worst-case execution time
Importance value
Preemptive / non-preemptive execution
Task can / cannot be suspended during execution
Task release constraints:
Periodic tasks: Activated periodically
Sporadic tasks: Minimum inter-activation time
Aperiodic tasks: No constraints
Tasks (III)
Task completion constraints:
Deadlines:
Hard: Completion by the deadline mandatory
Firm: Completion by the deadline or no execution
Soft: Best-effort completion
[Jitter: Completion within some time interval]
Task dependencies:
Precedence relations among tasks
Resource sharing during task execution (shared data, communication links, . . . )
Real-Time Scheduling
Schedule execution of a set of tasks
on the processor and resources available in the system
such that all timing constraints are met
⇒ Need a feasible schedule
Two dimensions of classification for real-time scheduling approaches:
Static / dynamic
Off-line / on-line
Static vs. Dynamic
Static scheduling assumes a priori knowledge of
characteristic parameters of all tasks
all future release times (e.g. periodic tasks)
⇒ Typically allows guarantees and optimal algorithms
Dynamic scheduling
knows currently active task set
does not know future arrivals
⇒ Guarantees and optimality more difficult to establish [if they exist at all]
On-line vs. Off-line
On-line scheduling:
All scheduling decisions are taken at run-time
On-line algorithm could be
Planning-based: Admission control allows guarantees
Best-effort: Deadline misses possible
Off-line scheduling:
Given a priori knowledge of tasks and future arrivals
Compute off-line some information that simplifies on-line scheduling, like
dispatch tables for table-driven cyclic scheduler
priority mapping for rate-monotonic scheduling
Basic Notation
Task and Jobs:
System of n tasks τi, 1 ≤ i ≤ n, with WCET Ci
Task execution consists of a sequence of jobs [= task instances]: Ji,j denotes the j-th job of task τi
Characteristics of job Ji,j:
Release time ri,j when the job is ready for execution
Deadline di,j when the job must have completed
Sometimes: Relative deadline Di,j = di,j − ri,j
Sometimes there is no need to distinguish different tasks: Jj denotes the j-th job, according to some unique enumeration
Task Release Characteristics
Distinguish 3 types of tasks:
Periodic tasks: ri,j = si + (j − 1)Ti, with
period Ti, offset si (synchronous tasks: ∀i : si = 0)
Typical: Deadline = next release time, di,j = ri,j+1
Sporadic tasks: ri,j+1 ≥ ri,j + Ti, with
sporadicity interval Ti
Typical: Deadline = earliest next release time, di,j = ri,j + Ti
Aperiodic tasks: No release restrictions, typically relative deadlines
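The release formulas above can be sketched in a few lines of code; the concrete parameters (offset 2, period 5) and helper names are made-up examples, not part of the lecture.

```python
# Release times and deadlines per the formulas above; parameters are made up.

def periodic_releases(s_i, T_i, jobs):
    """r_{i,j} = s_i + (j - 1) * T_i for j = 1..jobs."""
    return [s_i + (j - 1) * T_i for j in range(1, jobs + 1)]

rel = periodic_releases(2, 5, 4)
print(rel)                          # [2, 7, 12, 17]
deadlines = [r + 5 for r in rel]    # d_{i,j} = r_{i,j+1} (deadline = next release)

def sporadic_ok(releases, T_i):
    """Sporadicity constraint: r_{i,j+1} >= r_{i,j} + T_i."""
    return all(b >= a + T_i for a, b in zip(releases, releases[1:]))

print(sporadic_ok([0, 7, 15], 5))   # True
print(sporadic_ok([0, 3], 5))       # False
```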
Fundamentals of EDF Scheduling (Ch. 3)
Underlying Assumptions
General constraints in this part:
Independent tasks on single processor
Varying constraints:
Preemptive/non-preemptive
Idling/non-idling
Periodic/sporadic task sets
Synchronous/asynchronous task sets
Arbitrary/restricted deadlines
With/without release jitter, scheduling overhead, . . .
Basic Terms
A schedule is
feasible if all deadlines are met
non-idling if the processor is never idle when there are jobs ready for execution
A real-time scheduling algorithm A is optimal if
whenever a certain task set T can be feasibly scheduled by some scheduling algorithm B,
then T can also be feasibly scheduled by A
Note that optimality results are typically restricted:
Certain classes of algorithms (e.g. preemptive ones)
Certain task sets (e.g. periodic ones)
Flavor of Upcoming Results
Optimality & Complexities:
EDF optimality for single processor scheduling [no overloads]
NP-hardness of feasibility checking for asynchronous periodic task sets with arbitrary deadlines
Feasibility analysis: EDF guarantees a feasible schedule if
maximum loading factor [processor demand/time] u ≤ 1
processor utilization U ≤ 1
maximum loading factor during busy period L = W(L):
u ≤ 1 (preemptive EDF)
u < 1 (non-preemptive EDF)
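For intuition, the utilization condition U ≤ 1 is a one-liner; the task set below is hypothetical.

```python
# Hypothetical implicit-deadline periodic task set, given as (C_i, T_i) pairs.
tasks = [(1, 4), (2, 6), (1, 8)]

# Processor utilization U = sum of C_i / T_i.
U = sum(c / t for c, t in tasks)
print(U <= 1)   # True: the utilization condition is satisfied
```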
Optimality and Complexity
Optimality of Jackson’s Rule (I)
Consider the following scheduling problem:
n independent jobs Ji with execution time Ci and deadline di, 1 ≤ i ≤ n
all jobs simultaneously released at time 0
non-preemptive scheduling
minimize the maximum lateness ℓ = max1≤i≤n ℓi with ℓi = fi − di, where fi is the completion time of job Ji
Theorem 25 (Jackson’s rule (= EDF)). Any schedule that puts the jobs in order of non-decreasing deadlines minimizes the maximum lateness ℓ.
Note: The EDF schedule is not unique if di = dj for some i ≠ j.
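Jackson’s rule can be sanity-checked by brute force on a tiny, made-up job set: the non-decreasing-deadline order attains the minimum maximum lateness over all permutations. The exchange argument of the proof generalizes this observation.

```python
from itertools import permutations

# Hypothetical jobs as (C_i, d_i) pairs, all released at time 0, non-preemptive.
jobs = [(2, 3), (1, 7), (3, 5)]

def max_lateness(order):
    """Maximum lateness l = max_i (f_i - d_i) of a non-preemptive order."""
    t, worst = 0, float("-inf")
    for c, d in order:
        t += c                      # f_i = completion time
        worst = max(worst, t - d)   # l_i = f_i - d_i
    return worst

edf = max_lateness(sorted(jobs, key=lambda job: job[1]))
best = min(max_lateness(p) for p in permutations(jobs))
print(edf == best)   # True: the EDF order attains the minimum
```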
Optimality of Jackson’s Rule (II)
Proof. (By contradiction.)
Assume the maximum lateness is minimized by a schedule S different from any EDF schedule.
We inductively construct schedules Sk = J1 J2 · · · Jn, S0 = S, with non-increasing maximum lateness, which coincide with the EDF schedule E = JE1 JE2 · · · JEn in the first k jobs.
Given Sk, let i = maxE mini {i | ∀ℓ < i : Jℓ = JEℓ ∧ Ji ≠ JEi} be the first index with Ji ≠ JEi in a maximally matching EDF execution E. Then,
i ≥ k + 1, Sy = Sk for k + 1 ≤ y < i, and
both Ji and JEi are released at the same time t = 0 in Sk and E
JEi = Jj with j > i, since i is the first violation of EDF
di > dj, since di = dj would allow choosing another EDF schedule where JEi = Ji, violating the definition of i
di+1 ≥ dj, . . . , dj−1 ≥ dj, since dj is the nearest deadline in Sk after Ji
Optimality of Jackson’s Rule (III)
Proof. (cont.) Hence, in Sk,
Jj is the job with maximum lateness ℓj among Ji, . . . , Jj
the lateness of job Jm, for i ≤ m ≤ j − 1, satisfies ℓm ≤ ℓj − Cj
Exchanging Ji and Jj thus yields a schedule Si where
Jj starts and completes strictly earlier, i.e., ℓ′j < ℓj
Jx, i + 1 ≤ x ≤ j − 1, start and complete at most Cj − Ci < Cj later, i.e., ℓ′x < ℓj
Ji completes at the same time as Jj did, but has a later deadline, i.e., ℓ′i < ℓj
Thus, Si coincides with E in the first i jobs, decreases the maximum lateness ℓ = maxi≤k≤j ℓk of the jobs k ∈ [i, j] in S, and preserves ℓ′x = ℓx for x ∉ [i, j]
Optimality of EDF
Is EDF still optimal if we enrich the model?
Adding non-zero release times:
Non-preemptive scheduling becomes NP-hard
⇒ Non-preemptive EDF cannot be optimal since it is not clairvoyant (cannot predict future arrivals)
How to re-establish optimality:
Restriction to non-idling policies: Non-idling non-preemptive EDF is optimal
Allowing preemption ⇒ Preemptive EDF is optimal
[EDF optimal also under various stochastic models.]
Non-idling Scheduling (I)
Useful lemma for every non-idling policy:
Lemma 29. Consider two non-idling schedules S and S′ of the same task set T. Then, the processor is idle at time t in S iff it is idle in S′.
Proof. By contradiction (embedded in an induction on the number of idle/busy transitions):
Let
t be the first time when, w.l.o.g., the processor is idle in S but busy in S′,
t0 < t be the time of the processor’s last transition from busy to idle before t (with t0 = 0 if none), both in S and in S′.
Clearly, the processor is idle in [t0, t) both in S and in S′.
Non-idling Scheduling (II)
Proof. (cont.)
Since the set J of job releases in [0, t) is the same in S and in S′,
the cumulative workload C = ∑j∈J Cj is the same in S and S′,
every job in J must have been processed by t0 < t, i.e., C ≤ t0 < t. [Note: Cj is independent of when a task is scheduled!]
If the processor is indeed busy in S′ at time t, there must be a job release at t.
This job would be assigned to the processor also in S at time t, since the policy is non-idling.
Hence, the processor cannot be idle at t in S.
Optimality non-idling non-pre EDF (I)
Theorem 31 (George et al.). Non-preemptive EDF is an optimal non-idling scheduling strategy that minimizes the maximum lateness.
Proof. Let any feasible schedule S be given. We inductively construct schedules Sk = J1 J2 · · · Jn, S0 = S, with non-increasing maximum lateness, which coincide with the EDF schedule E = JE1 JE2 · · · JEn in the first k jobs.
According to the “non-idling” lemma, we can restrict our attention to the case where the processor is busy in any Sk iff it is busy in E.
Given Sk, let again i = maxE mini {i | ∀ℓ < i : Jℓ = JEℓ ∧ Ji ≠ JEi}. Then,
JEi = Jj with j > i, since i is the first violation of EDF
di > dj, since di = dj would allow choosing another EDF schedule where JEi = Ji, violating the definition of i
di+1 ≥ dj, . . . , dj−1 ≥ dj, since dj is the nearest deadline in Sk after Ji
Optimality non-idling non-pre EDF (II)
Proof. (cont.)
Hence,
in Sk, every job Jm for i ≤ m ≤ j − 1 must actually complete by time dj − Cj ≤ dm − Cj at the latest
Jj is already released in E at the time t when Ji is started in Sk.
We can hence again swap Jj and Ji in Sk to derive a schedule Si, without violating any deadline:
Jj starts and hence completes strictly earlier
Ji+1, . . . , Jj−1 start and complete at most Cj − Ci < Cj later
Ji completes at the same time as Jj did, but has a later deadline
Clearly, Si now coincides with E in the first i jobs and does not increase the maximum lateness.
Optimality of Preemptive EDF (I)
Preemptive scheduling is
based upon a discrete time model with unit time "1"
unit time = granularity of preemption
usually non-idling (but of course not necessarily so)
Theorem 33 (Dertouzos). Preemptive EDF is an optimal preemptive
scheduling policy that minimizes maximum lateness.
Proof. Let any feasible schedule S be given. We inductively construct (by swapping time slices) schedules St, S0 = S, with non-increasing maximum lateness, which coincide with a preemptive EDF schedule E in the first t slots.
Induction basis: For t = 0, S and E are of course the same.
Optimality of Preemptive EDF (II)
Proof. (cont.)
Induction step: Suppose we have St that coincides with E in [0, t). Consider the time slice [t, t + 1):
If no job is executed in St, schedule the first unassigned slot of the nearest-deadline ready job (or an idle slot if none) in E.
If Ji with deadline di is executed, let
Jj be the job with the smallest deadline dj < di pending for execution at t [must exist, otherwise we already have EDF]
tj > t be the first time after t when Jj is executed (in slot [tj, tj + 1))
Since tj < dj < di, we may swap the slices [t, t + 1) and [tj, tj + 1) of Ji and Jj without violating feasibility.
Optimality of Preemptive EDF (III)
Proof. (cont.)
Why is the maximum lateness not increased?
All latenesses ℓ′m = ℓm are unchanged, except possibly for m = i, j:
If the last slot of Ji lies before [tj, tj + 1) in S:
ℓ′j ≤ ℓj and
ℓ′i > ℓi, but still ℓ′i < ℓj ≤ ℓmax since dj < di
If the last slot of Ji lies after [tj, tj + 1) in S:
ℓ′j ≤ ℓj and
ℓ′i = ℓi
Hence, the maximum lateness is never increased.
The induction step is completed, since we have transformed St into St+1, which coincides with E in [0, t + 1).
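The discrete time model used in the proof can be exercised with a small unit-slot EDF simulator. This is only an illustrative sketch: the function `edf_schedule` and its job-tuple format are our own, not from the lecture.

```python
import heapq

def edf_schedule(jobs):
    """Unit-slot preemptive EDF on jobs given as (release r, WCET C, deadline d).
    Returns the per-slot schedule (job indices, None = idle), or None on a miss."""
    releases = sorted((r, d, c, i) for i, (r, c, d) in enumerate(jobs))
    ready, schedule, t, k = [], [], 0, 0
    while k < len(releases) or ready:
        # move all jobs released by time t into the ready queue
        while k < len(releases) and releases[k][0] <= t:
            r, d, c, i = releases[k]
            heapq.heappush(ready, (d, i, c))   # ordered by nearest deadline
            k += 1
        if ready:
            d, i, c = heapq.heappop(ready)
            if t + 1 > d:
                return None                    # job would run past its deadline
            schedule.append(i)
            if c > 1:                          # remaining work: job stays ready
                heapq.heappush(ready, (d, i, c - 1))
        else:
            schedule.append(None)              # idle until the next release
        t += 1
    return schedule

print(edf_schedule([(0, 2, 4), (1, 1, 2)]))   # [0, 1, 0]: job 0 is preempted once
```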
Feasibility Analysis
The results established for EDF so far reveal that no algorithm can do better in many settings.
Shouldn’t just always using EDF solve the real-time scheduling issue?
What else can we do?
Unfortunately:
Optimality results deal with feasible schedules only
How can we know that some task set can be feasibly scheduled at all?
One needs techniques for feasibility analysis.
Feasibility Analysis
The purpose of feasibility tests is to decide whether some task set can be feasibly scheduled:
Off-line tests: Allow determining feasible settings of parameters like task periods
On-line tests: Allow acceptance tests for guaranteed real-time properties [firm deadlines]
Discouraging result:
Deciding whether a general asynchronous periodic task set can be feasibly scheduled is coNP-hard in the strong sense
We hence need to restrict our task sets to get computationally tractable feasibility analysis techniques
Complexity Theory Refresher
Complexity Theory Refresher (I)
Recommended literature:
Garey, Johnson: Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, 1979
Tutorial: Chapter 2 in Joseph Y.-T. Leung: Handbook of Scheduling: Algorithms, Models, and Performance Analysis. Chapman & Hall/CRC, 2004
In real-time scheduling, we are interested in 2 types ofproblems:
Optimization problems: Given a problem instance I, compute a minimum or maximum solution S(I)
Decision problems: Given a problem instance I, compute a decision D(I) ∈ {yes, no}
Complexity Theory Refresher (II)
Optimization example: Travelling Salesman Problem (TSP)
Problem instance I: A fully connected weighted graph with n nodes
Solution S(I): A tour of the n nodes with minimum weight sum
Decision example: Travelling Salesman Decision (TSD)
Problem instance I: A fully connected weighted graph with n nodes and a bound B
Decision D(I): Is there a tour of the n nodes with weight sum ≤ B?
Complement problem coTSD: Is there no tour of the n nodes with weight sum ≤ B?
Complexity Theory Refresher (III)
Problem complexity depends upon its size.
Input parameters in problem instances:
Problem size (e.g. number of nodes n in TSP)
Data size (e.g. size of variables holding the weights)
Problem complexity severely depends upon encoding ofdata:
Binary encoding: s symbols (0 or 1) allow to encodeV (s) = 2s values ⇒ exponential complexity w.r.t. s
Unary encoding: s symbols (1 only) allow to encodeV (s) = s values ⇒ linear complexity w.r.t. s
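A one-line illustration of the encoding gap, using Python’s integer bit length; the value 1000 is arbitrary.

```python
# Encoding the same value v: binary needs about log2(v) symbols, unary needs v.
v = 1000
binary_len = v.bit_length()   # number of bits in the binary encoding
unary_len = v                 # number of symbols in the unary encoding
print(binary_len, unary_len)  # 10 1000
```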
Complexity Theory Refresher (IV)
Theory of NP-completeness is restricted to (language) decision problems:
P: Problems solvable by a deterministic Turing machine in polynomial time
NP: Problems solvable by a non-deterministic Turing machine in polynomial time
We distinguish
Polynomial complexity: Polynomial time w.r.t. code length s
Pseudo-polynomial complexity: Polynomial time w.r.t. numerical value V(s) [= polynomial when using only unary encoding].
Complexity Theory Refresher (V)
Example: Travelling Salesman Decision TSD ∈ NP
Non-deterministically guess the [right] one of the n! permutations of nodes in I
Sum all weights along the resulting cycle and compare whether the weight sum is at most B
All these operations can be done in polynomial time ⇒ TSD ∈ NP
Complexity Theory Refresher (VI)
Example: Complement Travelling Salesman Decision coTSD ∈ coNP
Cannot solve coTSD via a solution algorithm for TSD:
Checking a yes-instance of coTSD requires checking a no-instance of TSD
A TSD solution algorithm need not be polynomial (need not even halt) for no-instances!
Assuming coNP ≠ NP (which implies P ≠ NP), it follows that coTSD ∉ NP
Complexity Theory Refresher (VII)
Alternative characterization of NP: Problems where
yes-instances provide a certificate of polynomial size that can be verified in polynomial time
no-instances need not provide such a certificate at all
Example: Travelling Salesman Decision TSD ∈ NP
Take the tour C(I) that is proposed as certificate
C(I) has polynomial size and can be polynomially verified to have weight ≤ B
Recall: No certificate needs to be provided for no-instances ⇒ cannot derive a certificate for coTSD from a TSD certificate!
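The certificate check for TSD can be sketched directly; the 4-node weight matrix and the proposed tours below are made up.

```python
# Made-up symmetric weight matrix of a fully connected 4-node graph.
w = [[0, 2, 9, 4],
     [2, 0, 3, 7],
     [9, 3, 0, 1],
     [4, 7, 1, 0]]

def verify(tour, B):
    """Polynomial-time check that `tour` visits every node once
    and its cycle weight is at most B."""
    n = len(w)
    if sorted(tour) != list(range(n)):
        return False
    total = sum(w[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total <= B

print(verify([0, 1, 2, 3], 10))  # True: tour weight 2 + 3 + 1 + 4 = 10
print(verify([0, 2, 1, 3], 10))  # False: tour weight 9 + 3 + 7 + 4 = 23
```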
Complexity Theory Refresher (VIII)
Reduction of decision problems:
Consider two decision problems P and Q
P ≤ Q: P can be (pseudo-)polynomially reduced to Q if
there is a (pseudo-)polynomially computable function f that maps every IP to some IQ = f(IP)
D(IP) = yes ⇔ D(IQ) = yes for every IQ = f(IP)
Properties: If P ≤ Q and Q ≤ R, then
P ≤ R (transitivity) [but not Q ≤ P (symmetry)]
if Q is polynomially solvable, then P is also polynomially solvable
if P cannot be solved polynomially, then Q cannot be solved polynomially
Complexity Theory Refresher (IX)
A decision problem P is
NP-hard if all problems Q ∈ NP satisfy Q ≤ P
coNP-hard if all problems Q ∈ NP satisfy Q ≤ coP
NP-complete if P is NP-hard and P ∈ NP
coNP-complete if P is coNP-hard and coP ∈ NP
A decision problem P is
(co)NP-hard/complete in the strong sense if it is so w.r.t. polynomial complexity [unary data encoding]
(co)NP-hard/complete in the ordinary sense if it is so w.r.t. pseudo-polynomial time [binary data encoding]
Complexity Theory Refresher (X)
How to do an NP-hardness/completeness proof?
Need to show that all problems Q ∈ NP satisfy Q ≤ P
Since we do not know all Q, this is impossible in a direct way
But:
There are problems like SATISFIABILITY ∈ NP, which have been shown (directly, via Turing machines) to satisfy P ≤ SATISFIABILITY for all P ∈ NP
For a new problem Q, it hence suffices to show that some problem P known to be NP-hard can be reduced to Q, i.e., satisfies P ≤ Q
Complexity Theory Refresher (XI)
Scheduling often deals with optimization problems, so the theory of NP-completeness is not directly applicable. Remedy:
Typically, it is possible to compute some lower bound LB and upper bound UB for the [discrete-valued] optimal solution S(I)
in polynomial time
with UB − LB being polynomial
Use a (binary) search over [LB, UB] for determining a bound B and run an instance of the corresponding decision problem with bound B
⇒ Solution to the optimization problem, with complexity mainly determined by the decision problem
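The search scheme above, sketched with a stand-in decision oracle (here simply: finding the minimum of a toy list). Only the structure matters; the oracle and values are made up.

```python
# Toy "optimization problem": find the minimum of a list of values.
values = [17, 42, 23, 8, 99]

def decide(B):
    """Decision oracle: is there a solution with value <= B?"""
    return any(v <= B for v in values)

lo, hi = 0, max(values)          # LB, UB with UB - LB polynomial
while lo < hi:
    mid = (lo + hi) // 2
    if decide(mid):
        hi = mid                 # a solution <= mid exists: tighten UB
    else:
        lo = mid + 1             # no solution <= mid: raise LB
print(lo)                        # 8 = optimum, via O(log(UB - LB)) oracle calls
```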
NP-Hardness of Feasibility Analysis
NP-Hard Feasibility Analysis (I)
A well-known decision problem:
Definition 51 (Simultaneous Congruences Problem (SCP)). Given a set A = {(x1, y1), (x2, y2), . . . , (xn, yn)} of ordered pairs of positive integers, and an integer k, 2 ≤ k ≤ n. Determine whether there is a subset A′ ⊆ A of k ordered pairs and a positive integer z such that z ≡ xi mod yi holds for all (xi, yi) ∈ A′.
A well-known result:
Theorem 51 (Leung and Whitehead; Baruah et al.). The SCP is NP-hard in the strong sense.
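To make the SCP concrete, here is a brute-force check of small, made-up instances; it enumerates subsets and residues and is exponential, in line with the hardness result.

```python
from itertools import combinations
from math import lcm   # requires Python 3.9+

def scp(A, k):
    """Is there a k-subset A' of pairs (x_i, y_i) and a z > 0
    with z = x_i (mod y_i) for all pairs in A'?"""
    for sub in combinations(A, k):
        M = lcm(*(y for _, y in sub))
        # any witness z can be taken in 1..M (residues repeat with period M)
        if any(all(z % y == x % y for x, y in sub) for z in range(1, M + 1)):
            return True
    return False

print(scp([(1, 2), (0, 3), (1, 4)], 2))   # True: z = 3 fits (1, 2) and (0, 3)
```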
NP-Hard Feasibility Analysis (II)
We will use polynomial reduction to show:
Theorem 52 (Leung and Merrill; Baruah et al.). The feasibility analysis problem for asynchronous periodic task systems is coNP-hard in the strong sense.
Note that this can be extended to arbitrarily small utilizations!
Proof. We have to show that every problem instance ISCP can be
mapped to a problem instance IcoFA, where coFA means that some
task set cannot be feasibly scheduled.
Note that we can select specific problem instances in coFA that suit our
needs [with Di = 1, Ci = 1/(k − 1)]; we do not need to cover the
whole set coFA!
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 52/209
NP-Hard Feasibility Analysis (III)
Proof. (cont.)
Proof intuition:
z ≡ xi mod yi means that there is some mi ensuring xi + mi · yi = z
if this is simultaneously true for k indices i, this can be mapped to k jobs that are simultaneously released at time z
if we make their cumulative execution time larger than their deadlines, the task set cannot be feasibly scheduled
Given ISCP = {(x1, y1), (x2, y2), . . . , (xn, yn)} and some k ≥ 2, we
choose the following periodic task set:
Offset si = xi, period Ti = yi
Relative deadline Di = 1, WCET Ci = 1/(k − 1)
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 53/209
NP-Hard Feasibility Analysis (IV)
Proof. Given a yes-instance of ISCP , this implies that
k jobs collide at time z
their cumulative execution time is k/(k − 1) > 1, so some job
misses its deadline
⇒ The task set cannot be feasibly scheduled
Given an infeasible task set (not schedulable by EDF)
let t0 be the first time of a deadline violation
all task parameters (except WCET) are integers ⇒ t0 is an integer
for a deadline miss, ≥ k jobs must have arrived at time z = t0 − 1
all k colliding tasks τi must satisfy si +mi · Ti = z for some mi
⇒ the corresponding ISCP is a yes-instance.
The mapping function can of course be computed in polynomial time.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 54/209
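The reduction can be illustrated with a small script; the instance, the helper names, and the brute-force collision search (which is exponential, unlike the polynomial mapping itself) are illustrative assumptions:

```python
from fractions import Fraction
from math import lcm

# Sketch of the mapping from an SCP instance to a task set, plus a
# brute-force check (for illustration only) that >= k congruences
# holding at z corresponds to >= k jobs released simultaneously at z.

def scp_to_tasks(pairs, k):
    # Offset s_i = x_i, period T_i = y_i, deadline D_i = 1, C_i = 1/(k-1)
    return [dict(s=x, T=y, D=1, C=Fraction(1, k - 1)) for x, y in pairs]

def collision_time(tasks, k, horizon):
    """Smallest integer z < horizon at which >= k tasks release a job
    (i.e. z = s_i + m_i * T_i for >= k indices i), or None."""
    for z in range(horizon):
        hits = sum(1 for t in tasks
                   if z >= t["s"] and (z - t["s"]) % t["T"] == 0)
        if hits >= k:
            return z
    return None

# SCP instance A = {(1,2), (0,3), (3,4)}, k = 2: z = 3 satisfies
# z ≡ 1 (mod 2), z ≡ 0 (mod 3), and z ≡ 3 (mod 4) simultaneously.
pairs, k = [(1, 2), (0, 3), (3, 4)], 2
tasks = scp_to_tasks(pairs, k)
horizon = max(x for x, _ in pairs) + lcm(*(y for _, y in pairs))
z = collision_time(tasks, k, horizon)
print(z)                                # a yes-instance: some z exists
print(k * Fraction(1, k - 1) > 1)       # k colliding jobs overload [z, z+1)
```

The last line mirrors the proof: k colliding jobs demand k/(k − 1) > 1 units within one time unit before their common deadline.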
EDF Optimality Revisited (I)
We established earlier that preemptive EDF is optimal and minimizes maximum lateness:
For every preemptive schedule S, there is some at least as good preemptive EDF schedule E.
We proved even more: For every schedule S,
preemptive or not
idling or not
there is some at least as good preemptive EDF schedule E.
Why not use preemptive EDF as a coFA test: If preemptive EDF
cannot schedule some task set, return “No”
can schedule some task set, return “Yes”
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 55/209
EDF Optimality Revisited (II)
Does this contradict coNP-hardness of feasibility analysis for asynchronous task sets with arbitrary deadlines?
There is no contradiction here:
How long do we need to run EDF until we find some deadline violation, if any?
May have arbitrary running time for periodic task sets with arbitrary deadlines Di ≤ Ti
But:
Will establish running time bounds later on, which are polynomial in case of Di = Ti and Di ≥ Ti, for example.
There are indeed efficient preemptive EDF-based feasibility tests for restricted cases.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 56/209
Feasibility Analysis Techniques
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 57/209
Processor Demand Analysis
Informal definitions:
Processor demand is the computation time requested by timely tasks in a given interval of time
Loading factor is the proportion of processor demand over the length of the time interval
Intuitively,
no scheduler can handle a task set where the processor demand in some interval exceeds this interval
the best one can hope for is an algorithm that schedules every task set with loading factor ≤ 1
We show that preemptive EDF accomplishes this ⇒ optimal.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 58/209
Definitions
Definition 59. Given a set of real-time jobs and a release time interval [t1, t2) — the interval [t1, t2] with task releases at t2 excluded — the processor demand h[t1,t2) of the job set in the interval [t1, t2) is

h[t1,t2) = Σ_{t1 ≤ rk, dk ≤ t2} Ck.

Definition 59. Given a set of real-time jobs, its loading factor u[t1,t2) in the interval [t1, t2) is

u[t1,t2) = h[t1,t2) / (t2 − t1).

The maximum loading factor u, simply called loading factor, is

u = sup_{0 ≤ t1 < t2 < ∞} u[t1,t2).
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 59/209
Example
Consider 3 jobs:
J1 with WCET C1 = 4, r1 = 3 and d1 = 18
J2 with WCET C2 = 4, r2 = 5 and d2 = 12
J3 with WCET C3 = 6, r3 = 6 and d3 = 14
u[3,18) = (4 + 4 + 6)/15 = 14/15
u[5,14) = (4 + 6)/9 = 10/9
u[5,12) = 4/7
u[6,14) = 6/8
Hence, u = 10/9 > 1 in this example.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 60/209
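The example can be checked mechanically; the sketch below recomputes the loading factor, using the fact that for a finite job set the supremum is attained on intervals delimited by a release time and a deadline:

```python
from fractions import Fraction

# Recomputing the example: the processor demand h[t1,t2) sums C_k over
# all jobs with r_k >= t1 and d_k <= t2; the loading factor is the
# supremum of h/(t2-t1). For finitely many jobs it suffices to check
# intervals starting at a release time and ending at a deadline.

jobs = [dict(C=4, r=3, d=18), dict(C=4, r=5, d=12), dict(C=6, r=6, d=14)]

def demand(jobs, t1, t2):
    return sum(j["C"] for j in jobs if t1 <= j["r"] and j["d"] <= t2)

starts = sorted({j["r"] for j in jobs})
ends = sorted({j["d"] for j in jobs})
u = max(Fraction(demand(jobs, t1, t2), t2 - t1)
        for t1 in starts for t2 in ends if t1 < t2)
print(u)          # 10/9, attained on [5, 14)
```

Exact rationals (`Fraction`) keep the comparison u > 1 free of rounding error.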
Busy Periods (I)
If the processor is busy at time t, we say that
t is properly within a busy period, iff some job released before t is executed after t
t delimits a busy period, iff no job released before t is executed after t
If the processor is busy at t, then the unique busy period BP(t) = [t1, t2) containing t is defined by its
start delimiting time t1 = inf{t′ | every t″ with t′ ≤ t″ ≤ t is properly within a BP}
end delimiting time t2 = sup{t′ | every t″ with t ≤ t″ ≤ t′ is properly within a BP}
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 61/209
Busy Periods (II)
Consequently,
a busy period is a “maximal” period of continuous processor activity
every busy period is started by a task release and ends upon a task completion
two busy periods [t1, t2) and [t′1, t′2) may occur back-to-back, i.e., t′1 = t2
Any schedule consists of alternating busy and idle periods,
independently of the scheduling policy (for non-idling policies)
possibly involving idle periods of duration 0
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 62/209
Deadline-t Busy Periods (I)
We say that some time t0 where the processor is busy with a job with deadline ≤ t is
properly within a deadline-t busy period, iff some job with deadline ≤ t released before t0 is executed after t0
delimits a deadline-t busy period, iff no job with deadline ≤ t [but possibly a job with deadline > t] released before t0 is executed after t0
If the processor is busy with a job with deadline ≤ t at t0, then the unique deadline-t busy period BPt(t0) = [t1, t2) containing t0 is defined by
t1 = inf{t′ | every t″ with t′ ≤ t″ ≤ t0 is properly within a t-BP}
t2 = sup{t′ | every t″ with t0 ≤ t″ ≤ t′ is properly within a t-BP}
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 63/209
Deadline-t Busy Periods (II)
In case of preemptive EDF scheduling, when there are pending jobs with deadline ≤ t, then
the processor cannot be idle
no job with deadline > t is executed
Consequently, in case of preemptive EDF scheduling, a deadline-t busy period
is a busy period where only jobs with deadlines ≤ t are executed
is started by the release of a job with deadline ≤ t and ends upon the completion of a job with deadline ≤ t
must end before t if there is no deadline miss
may immediately be followed by another deadline-t busy period
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 64/209
EDF Loading Factor Condition (I)
Theorem 65 (Spuri). A set of real-time jobs is feasibly scheduled by
preemptive EDF if and only if u ≤ 1.
Proof. Direction ⇐: We show that u ≤ 1 ⇒ feasible schedule by showing that non-feasible schedule ⇒ u > 1.
Assuming that the first deadline miss occurs at time t, this must happen in a deadline-t busy period [t1, t2) with t < t2. Hence, the processor demand h[t1,t) must be excessive, i.e.,

h[t1,t) = Σ_{t1 ≤ rk, dk ≤ t} Ck > t − t1

⇒ u[t1,t) > 1 and hence u > 1.
This contradicts u ≤ 1 and thus completes the proof for direction ⇐.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 65/209
EDF Loading Factor Condition (II)
Proof. (cont.)
Direction ⇒: Since the schedule is feasible, the processor demand in any interval [t1, t2) cannot exceed the interval’s length, i.e.,

∀[t1, t2) : h[t1,t2) = Σ_{t1 ≤ rk, dk ≤ t2} Ck ≤ t2 − t1

⇒ ∀[t1, t2) : u[t1,t2) ≤ 1 and hence u ≤ 1.
This completes the proof for direction ⇒.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 66/209
EDF Loading Factor Condition (III)
Since preemptive EDF is optimal and the loading factor test works for all task sets
there is a loading-factor-based feasibility test for arbitrary task sets
does this contradict coNP-hardness of feasibility analysis for asynchronous task sets with arbitrary deadlines?
However, there is no contradiction here:
All intervals [t1, t2) must be examined to determine u
Leads to exponential complexity for periodic task sets with arbitrary deadlines Di ≤ Ti
Again: One can derive efficient preemptive EDF feasibility tests for restricted cases [in particular, Di = Ti and Di ≥ Ti].
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 67/209
Invocations Lemma (I)
The following lemma will prove useful on several occasions:
Lemma 68. Consider a periodic task τi with period Ti and relative deadline Di = Ti, and two arbitrary points in time t1, t2. There are
(a) at most ⌊(t2 − t1)/Ti⌋ jobs from τi completely within [t1, t2), as well as within [t1, t2]
(b) at most ⌈(t2 − t1)/Ti⌉ job releases from τi within [t1, t2)
(c) at most 1 + ⌊(t2 − t1)/Ti⌋ job releases from τi within [t1, t2]
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 68/209
Invocations Lemma (II)
Proof. The number of job invocations within [t1, t2] is obviously
maximized when there is a job release at time t1.
Assume a job release at t1 and distinguish 2 cases:
Case Ti | (t2 − t1):
(a) exactly (t2 − t1)/Ti = ⌊(t2 − t1)/Ti⌋ complete jobs fit into [t1, t2) (as well as into [t1, t2])
(b) exactly (t2 − t1)/Ti = ⌈(t2 − t1)/Ti⌉ job releases occur within [t1, t2) (the release at t2 is not counted here)
(c) exactly 1 + (t2 − t1)/Ti = 1 + ⌊(t2 − t1)/Ti⌋ job releases occur within [t1, t2] (the release at t2 is counted here)
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 69/209
Invocations Lemma (III)
Proof. (cont.)
Case Ti ∤ (t2 − t1):
Here, [t1, t2) and [t1, t2] are the same since no release can take place at t2 [remember that we assumed a release at t1]
(a) exactly ⌊(t2 − t1)/Ti⌋ complete jobs fit completely into [t1, t2)
(b)+(c) exactly 1 + ⌊(t2 − t1)/Ti⌋ = ⌈(t2 − t1)/Ti⌉ job releases occur within [t1, t2).
Note that, for any x, it holds that
⌊x⌋ ≤ x ≤ ⌈x⌉ ≤ ⌊x⌋ + 1
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 70/209
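A small empirical check of the lemma's bounds, assuming the worst-case pattern with a release at t1 and Di = Ti (the helper name and parameter values are illustrative):

```python
from math import floor, ceil

# Empirical check of the Invocations Lemma for a release pattern with a
# job release at t1 (releases at t1, t1+Ti, ...), assuming Di = Ti.

def counts(t1, t2, Ti):
    releases = [t for t in range(t1, t2 + 1) if (t - t1) % Ti == 0]
    rel_half_open = [t for t in releases if t < t2]          # in [t1, t2)
    complete = [t for t in releases if t + Ti <= t2]         # job fits
    return len(complete), len(rel_half_open), len(releases)

for Ti in (2, 3, 5):
    for span in range(1, 20):
        a, b, c = counts(0, span, Ti)
        assert a <= floor(span / Ti)          # (a) complete jobs
        assert b <= ceil(span / Ti)           # (b) releases in [t1, t2)
        assert c <= 1 + floor(span / Ti)      # (c) releases in [t1, t2]
print("bounds hold")
```

With the release at t1, all three bounds are in fact attained with equality, matching the proof's two cases.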
Invocations Lemma with Deadlines
Lemma 71. Consider a task τi with period Ti and arbitrary relative deadline Di, and an interval [t1, t2] with t2 − t1 − Di ≥ 0. The number of jobs with release times within [t1, t2) and deadlines within [t1, t2] is bounded by

1 + ⌊(t2 − t1 − Di)/Ti⌋.

Proof. According to the Invocations Lemma (c),
at most 1 + ⌊(t2 − t1 − Di)/Ti⌋ task releases of τi may take place within [t1, t2 − Di]
⇒ this is also the maximum number of complete jobs within [t1, t2) in the presence of an arbitrary deadline Di ≤ t2 − t1.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 71/209
Feasibility Test by Liu & Layland (I)
Theorem 72. A set of n synchronous periodic tasks with ∀i : Di = Ti can be feasibly scheduled by preemptive EDF if and only if

U = Σ_{i=1}^{n} Ci/Ti ≤ 1.

Proof. Recalling the Invocations Lemma (a), for any [t1, t2) it follows that

h[t1,t2) = Σ_{t1 ≤ rk, dk ≤ t2} Ck ≤ Σ_{i=1}^{n} ⌊(t2 − t1)/Ti⌋ Ci ≤ Σ_{i=1}^{n} ((t2 − t1)/Ti) Ci

such that h[t1,t2) ≤ (t2 − t1)U and thus u[t1,t2) ≤ U.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 72/209
Feasibility Test by Liu & Layland (II)
Proof. (cont.)
Choosing t1 = 0 and t2 = H with H = lcm(T1, . . . , Tn) denoting the length of the hyper-period, we get

h[0,H) = Σ_{0 ≤ rk, dk ≤ H} Ck = Σ_{i=1}^{n} (H/Ti) Ci

such that h[0,H) = H · U and hence u[0,H) = U.
Recalling ∀[t1, t2) : u[t1,t2) ≤ U, this implies u = U.
The condition U ≤ 1 hence follows immediately from the loading factor condition u ≤ 1.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 73/209
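The resulting test is a one-liner; the sketch below uses exact rationals to avoid floating-point issues, with made-up task parameters:

```python
from fractions import Fraction

# Liu & Layland test for synchronous periodic tasks with Di = Ti:
# feasible under preemptive EDF iff U = sum(Ci/Ti) <= 1.

def edf_feasible_implicit_deadline(tasks):
    """tasks: list of (Ci, Ti) pairs, with Di = Ti implied."""
    U = sum(Fraction(C, T) for C, T in tasks)
    return U <= 1

print(edf_feasible_implicit_deadline([(1, 4), (2, 6), (3, 8)]))   # U = 23/24
print(edf_feasible_implicit_deadline([(2, 4), (2, 6), (3, 8)]))   # U = 29/24
```

This runs in O(n), in sharp contrast to the general arbitrary-deadline case.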
Feasibility Test by Coffman (I)
Theorem 74. Any set of n asynchronous periodic tasks with ∀i : Di = Ti can be feasibly scheduled by preemptive EDF if and only if

U = Σ_{i=1}^{n} Ci/Ti ≤ 1.

Proof. Recalling the Invocations Lemma (a), for any [t1, t2) it follows again that

h[t1,t2) = Σ_{t1 ≤ rk, dk ≤ t2} Ck ≤ Σ_{i=1}^{n} ⌊(t2 − t1)/Ti⌋ Ci ≤ Σ_{i=1}^{n} ((t2 − t1)/Ti) Ci

such that h[t1,t2) ≤ (t2 − t1)U and thus u[t1,t2) ≤ U.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 74/209
Feasibility Test by Coffman (II)
Proof. (cont.)
Choosing t1 = 0 and t2 = s + m · H, where
s = max{s1, . . . , sn} is the maximum of all offsets
H = lcm(T1, . . . , Tn) is the length of the hyper-period
m ≥ 1 is an arbitrary integer

h[0,s+mH) ≥ Σ_{i=1}^{n} h^i_[si,si+mH) = Σ_{i=1}^{n} (mH/Ti) Ci = mHU

such that u[0,s+mH) ≥ (mH/(s + mH)) U for any m.
Recalling ∀[t1, t2) : u[t1,t2) ≤ U, taking the limit m → ∞ reveals u = U. The condition U ≤ 1 hence follows again from the loading factor test condition u ≤ 1.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 75/209
Feasibility of Sporadic/Hybrid Tasks
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 76/209
Feasibility of Sporadic Tasks (I)
Definition 77. A sporadic task set is feasible if, for any choice of release
times compatible with the specified sporadicity intervals, the resulting job
set is feasible.
Note that
Dertouzos’ theorem revealed that preemptive EDF is also optimal for sporadic tasks
Intuition tells us that the worst case for a sporadic task set is the corresponding synchronous periodic one
Theorem 77 (Sporadic demand). Given a set of n sporadic tasks τi with sporadicity interval Ti, and the corresponding synchronous periodic task set τ′i with period Ti and maximum loading factor u′, any sequence of jobs J of the sporadic task set has a loading factor u ≤ u′.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 77/209
Feasibility of Sporadic Tasks (II)
Proof. Let h[t1,t2) and h′[t1,t2) be the processor demand of the sporadic job set J and the periodic job set J′, respectively.
We will show that, for all [t1, t2), there is some [t′1, t′2) with t2 − t1 = t′2 − t′1 such that h[t1,t2) ≤ h′[t′1,t′2), which obviously implies u ≤ u′.
Consider the interval [t1, t2). From the Invocations Lemma with Deadlines 71, we immediately obtain

h[t1,t2) ≤ Σ_{i: Di ≤ t2−t1} (1 + ⌊(t2 − t1 − Di)/Ti⌋) Ci
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 78/209
Feasibility of Sporadic Tasks (III)
Proof. (cont.)
Choosing t′1 = 0 and t′2 = t2 − t1, the synchronous set J′ obviously satisfies

h′[t′1,t′2) = Σ_{i: Di ≤ t′2−t′1} (1 + ⌊(t′2 − t′1 − Di)/Ti⌋) Ci
 = Σ_{i: Di ≤ t2−t1} (1 + ⌊(t2 − t1 − Di)/Ti⌋) Ci

Putting things together, h[t1,t2) ≤ h′[t′1,t′2) and hence u ≤ u′ follows.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 79/209
Feasibility of Hybrid Task Sets
A hybrid task set consists of synchronous periodic and/or sporadic tasks with arbitrary deadlines.
Due to the previous theorem, all results for synchronous periodic task sets generalize to hybrid task sets
⇒ It makes sense to further study feasibility analysis of synchronous periodic task sets
What is known for synchronous periodic/hybrid task sets with arbitrary deadlines?
The general asynchronous problem is coNP-hard
The general synchronous problem is (weakly) coNP-hard
Necessary / sufficient conditions for EDF schedulability do exist
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 80/209
Necessary Hybrid Feasibility Condition
Theorem 81 (Baruah et al.). If a given hybrid task set is feasible under preemptive EDF scheduling, then U ≤ 1.
Proof. Consider the interval [0, mH + Dmax), where
Dmax = max{D1, . . . , Dn} is the maximum relative deadline
H = lcm(T1, . . . , Tn) is the length of the hyper-period
m ≥ 1 is an arbitrary integer
For the worst case of periodic jobs, it follows that

h[0,mH+Dmax) ≥ Σ_{i=1}^{n} (mH/Ti) Ci = mHU

such that u[0,mH+Dmax) ≥ (mH/(mH + Dmax)) U for any m. Taking m → ∞ reveals u ≥ U, so U ≤ 1 by the EDF loading factor condition.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 81/209
Hybrid Feasibility Condition for Di ≥ Ti
Theorem 82 (Baruah et al.). Any hybrid task set with ∀i : Di ≥ Ti is feasible under preemptive EDF scheduling if and only if U ≤ 1.
Proof. The previous theorem established already that feasible EDF schedule ⇒ U ≤ 1. Hence, we only need to show the other direction U ≤ 1 ⇒ feasible schedule.
Consider the corresponding synchronous task set and any interval [t1, t2). The processor demand is

h[t1,t2) ≤ Σ_{i: Di ≤ t2−t1} (1 + ⌊(t2 − t1 − Di)/Ti⌋) Ci
 ≤ Σ_{i: Di ≤ t2−t1} ((t2 − t1)/Ti − (Di − Ti)/Ti) Ci
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 82/209
Hybrid Feasibility Condition for Di ≥ Ti (II)
Proof. (cont.)
h[t1,t2) ≤ (t2 − t1) Σ_{i: Di ≤ t2−t1} Ci/Ti ≤ (t2 − t1)U ≤ t2 − t1,

from where u ≤ 1 follows. Applying the EDF loading factor condition completes the proof of our theorem.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 83/209
Sufficient Hybrid Feasibility Condition (I)
Theorem 84. A hybrid task set of n tasks is feasible under preemptive EDF scheduling if

Σ_{i=1}^{n} Ci/min{Di, Ti} ≤ 1.

Proof. Consider the corresponding synchronous task set and any interval [t1, t2). The processor demand is

h[t1,t2) ≤ Σ_{i: Di ≤ t2−t1} (1 + ⌊(t2 − t1 − Di)/Ti⌋) Ci
 ≤ Σ_{i: Di ≤ t2−t1} (1 + (t2 − t1 − min{Di, Ti})/min{Di, Ti}) Ci
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 84/209
Sufficient Hybrid Feasibility Condition (II)
Proof. (cont.)
h[t1,t2) ≤ Σ_{i: Di ≤ t2−t1} ((t2 − t1)/min{Di, Ti}) Ci
 ≤ (t2 − t1) Σ_{i=1}^{n} Ci/min{Di, Ti} ≤ t2 − t1

from where u ≤ 1 follows. Applying the EDF loading factor condition completes the proof of our theorem.
Unfortunately, this condition is sufficient but not necessary (example Fig. 3.5).
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 85/209
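A sketch of this sufficient test, with illustrative task parameters:

```python
from fractions import Fraction

# Sufficient (but not necessary) hybrid test: sum Ci/min(Di, Ti) <= 1.
# Tasks are (Ci, Di, Ti) triples; the values below are made up for the demo.

def sufficient_hybrid_test(tasks):
    return sum(Fraction(C, min(D, T)) for C, D, T in tasks) <= 1

print(sufficient_hybrid_test([(1, 3, 4), (1, 8, 6), (1, 5, 5)]))
print(sufficient_hybrid_test([(3, 3, 4), (1, 8, 6)]))
```

A "False" answer here proves nothing, precisely because the condition is not necessary.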
Hybrid Loading Factor Condition
The general EDF loading factor condition can be simplified for hybrid task sets:
For any interval [t1, t2), it holds that h[t1,t2) ≤ h′[0,t2−t1), where h′[0,t2−t1) is the processor demand of the corresponding synchronous task set [see the proof of the sporadic demand theorem]
Consider the processor demand h(t) = h′[0,t) only
Theorem 86. A hybrid task set is feasible under preemptive EDF scheduling if and only if ∀t : h(t) ≤ t.
Proof. Follows immediately from h[t1,t2) ≤ h′[0,t2−t1) and the general loading factor condition.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 86/209
Hybrid Loading Factor Test (I)
Feasibility test based upon the hybrid loading factor condition ∀t : h(t) ≤ t?
Testing all intervals [0, t) is not practical, since even incremental computation needs O(n) time per step
Note that looking at intervals [0, t] with t = di,j is sufficient here, since h(t) increases only at deadlines
Are there conditions under which just all t ≤ tmax need to be tested? Yes.
Theorem 87 (Baruah et al.). If a hybrid task set of n tasks is not feasible under preemptive EDF scheduling and U < 1, then h(t) > t implies t < Dmax, or t < (U/(1 − U)) max_{1≤i≤n}{Ti − Di}
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 87/209
Hybrid Loading Factor Test (II)
Proof. Assume h(t) > t. If t < Dmax, we are done. Hence, assume
that also t ≥ Dmax.
h(t) ≤ Σ_{i: Di ≤ t} (1 + ⌊(t − Di)/Ti⌋) Ci
 ≤ Σ_{i=1}^{n} ((t + Ti − Di)/Ti) Ci
 ≤ t Σ_{i=1}^{n} Ci/Ti + Σ_{i=1}^{n} (Ci/Ti)(Ti − Di)
 ≤ tU + max_{1≤i≤n}{Ti − Di} U.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 88/209
Hybrid Loading Factor Test (III)
Proof. (cont.)
Recalling t < h(t), it follows that

t(1 − U) < U max_{1≤i≤n}{Ti − Di}

from where the statement in our theorem follows.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 89/209
Hybrid Loading Factor Test (IV)
From this theorem, we conclude the following:
Testing all intervals [0, t] for
t ≤ tmax = max{Dmax, (U/(1 − U)) max_{1≤i≤n}{Ti − Di}}
suffices
If U ≤ c < 1, the complexity of feasibility testing is pseudo-polynomial, since
(U/(1 − U)) max_{1≤i≤n}{Ti − Di} ≤ (c/(1 − c)) max_{1≤i≤n}{Ti − Di}
checking h(t) ≤ t can be done incrementally in O(n) time per step (= per job deadline)
⇒ test needs only O(n · max_{1≤i≤n}{Dmax, Ti − Di}) time
Are there further bounds on tmax?
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 90/209
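Putting the pieces together, the resulting pseudo-polynomial processor-demand test can be sketched as follows (assuming U < 1 so that the tmax bound of Theorem 87 applies; the task triples are illustrative):

```python
from fractions import Fraction

# Sketch of the processor-demand test for a hybrid task set with
# arbitrary deadlines: check h(t) <= t at every absolute deadline
# t = Di + j*Ti up to tmax = max{Dmax, U/(1-U) * max(Ti - Di)}.

def h(tasks, t):
    # h(t) = sum over tasks with Di <= t of (1 + floor((t - Di)/Ti)) * Ci
    return sum((1 + (t - D) // T) * C for C, D, T in tasks if D <= t)

def edf_feasible_hybrid(tasks):
    """tasks: list of (Ci, Di, Ti) triples (synchronous/sporadic)."""
    U = sum(Fraction(C, T) for C, D, T in tasks)
    if U > 1:
        return False                       # necessary condition violated
    assert U < 1, "the tmax bound below needs U < 1"
    Dmax = max(D for _, D, _ in tasks)
    slack = max(T - D for _, D, T in tasks)
    tmax = max(Dmax, U / (1 - U) * max(slack, 0))
    # h(t) only increases at absolute deadlines, so those suffice
    deadlines = sorted({D + j * T for C, D, T in tasks
                        for j in range(int((tmax - D) // T) + 1)
                        if D <= tmax})
    return all(h(tasks, t) <= t for t in deadlines)

# tasks = (Ci, Di, Ti)
print(edf_feasible_hybrid([(1, 2, 4), (2, 5, 6)]))
```

The candidate set of deadlines has pseudo-polynomially many points when U is bounded away from 1, matching the complexity discussion above.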
Hybrid Loading Factor Test (V)
Some additional results regarding the maximum time tmax where ∀t ≤ tmax : h(t) ≤ t must be checked:
tmax ≤ H + Dmax, where H = lcm(T1, . . . , Tn), for U = 1 [which implies exponential complexity]
Zheng and Shin’s bound:

tmax = max{Dmax, (Σ_{i=1}^{n} (1 − Di/Ti) Ci) / (1 − U)}

George et al.’s bound:

tmax = (Σ_{Di ≤ Ti} (1 − Di/Ti) Ci) / (1 − U)
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 91/209
Busy Period Analysis
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 92/209
Recall Busy Periods
Given some time t where the processor is busy, the unique busy period [t1, t2) with t1 ≤ t ≤ t2 is a period of continuous processor activity, such that
t1 is the last instant where no job that was released before t1 is executed after t1
t2 is the first instant where no job that was released before t2 is executed after t2.
Any schedule consists of alternating busy and idle periods, although idle periods may have duration 0.
The synchronous busy period is the first busy period in the schedule of a synchronous task set.
The cumulative workload within some interval [t1, t2) is the sum of the WCETs of all jobs released within [t1, t2).
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 93/209
Recall Deadline-t Busy Periods
A deadline-t busy period [t1, t2) is a busy period where only jobs with deadlines ≤ t are executed:
t1 < t is the last instant where no jobs with deadline ≤ t that were released before t1 are executed after t1
t2 is the first instant where no jobs with deadline ≤ t [but possibly jobs with deadline > t] that were released before t2 are executed after t2.
Corollary 94. In case of preemptive EDF scheduling, a deadline-t busy period [t1, t2)
is initiated by the arrival of a task with deadline ≤ t
is non-idle throughout [t1, t2)
guarantees that, if t2 > t, then the job with deadline t misses its deadline.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 94/209
Synchronous Busy Period Feasibility (I)
The following generic theorem allows us to further restrict the interval [0, tmax) where h(t) ≤ t must be checked:
Theorem 95 (Liu and Layland and others). If a synchronous periodic
task set is not feasible under preemptive EDF, then there is a deadline
miss in the synchronous busy period.
Proof. Suppose there is a deadline miss of a job of τi at time t somewhere in the schedule.
Let [t′, t′′) with t′ < t < t′′ be the deadline-t busy period containing the deadline miss.
Note that t must be the deadline of a job of τi.
⇒ The processor demand in [t′, t) must satisfy h[t′,t) > t − t′.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 95/209
Synchronous Busy Period Feasibility (II)
Proof. (cont.)
Consider a schedule derived from the original one by:
Dropping all task releases before t′. This cannot change h[t′,t), since a task released before t′ either
has a deadline > t and is hence not included in h[t′,t), or
has deadline d ≤ t but belongs to some preceding deadline-d busy period, which must have been completed before t′
Shifting all jobs from tasks τj ≠ τi that are within the deadline-t busy period left, so that they are all synchronously released at t′
This can at most increase h[t′,t), so τi will still miss its deadline at t
Synchronous Busy Period Feasibility (IV)
Proof. (cont.)
Now let also the first job of τi be released synchronously at t′:
The processor demand h[t′,t) in [t′, t) can only increase
Let t̄ be the largest deadline before or at t in the resulting schedule, with τℓ being the corresponding task
τℓ experiences a deadline miss at t̄:
The processor demand h[t′,t̄) = h[t′,t), since t̄ was the last deadline included
The length of the deadline-t̄ busy period is larger than t − t′ ≥ t̄ − t′
The resulting schedule is equivalent to the synchronous busy period (except that it starts at t′) and contains a deadline miss.
Busy Period Length (I)
Theorem 98. Independently of the [non-idling] scheduling policy employed, the length L of the synchronous busy period of a synchronous task set of n tasks is the [unique] limiting point of

L^(m+1) = W(L^(m)),   L^(0) = ∑_{i=1}^n Ci,

where

W(t) = ∑_{i=1}^n ⌈t/Ti⌉ Ci.
Busy Period Length (II)
Proof. For a given interval [0, t) the (proof of the) Invocation Lemma (b) reveals that W(t) is the cumulative workload within [0, t). Clearly,
W(t) is piecewise constant
W(ε) = ∑_{i=1}^n Ci > 0, for any arbitrarily small ε > 0
W(t + Δ) ≥ W(t) for all t, Δ ≥ 0 (weak monotonicity)
if W(t + Δ) ≠ W(t), then W(t + Δ) ≥ W(t) + Cmin
lim_{t→∞} W(t) = ∞
Define L^(−1) = ε. Using induction on m ≥ 0, we show that
(1) L^(m) ≥ L^(m−1)
(2) the length of the synchronous busy period L is at least L^(m)
Busy Period Length (III)
Proof. (cont.)
Basis m = 0: Dealing with a synchronous periodic task set, all tasks are released at time 0, hence:
The resulting busy period(s) must have length at least L^(0) = ∑_{i=1}^n Ci > ε = L^(−1), since all tasks released at 0 must be executed to completion
There is no idle time within [0, L^(0)] ⇒ it is contained in the synchronous busy period. Hence, L ≥ L^(0).
Busy Period Length (IV)
Proof. (cont.)
Induction step m → m + 1: Assuming L^(m) ≥ L^(m−1) and L ≥ L^(m), the cumulative workload within the interval [0, L^(m)) is W(L^(m)) = L^(m+1) ≥ L^(m) = W(L^(m−1)), since otherwise L^(m) < L^(m−1) by weak monotonicity of W(·).
There is no idle time within [0, L^(m+1)], since
there is no idle time within [0, L^(m)], and
any additional workload [if any] is caused by task releases within [0, L^(m))
⇒ the processor must be busy within [L^(m), L^(m+1)] as well.
⇒ L ≥ L^(m+1).
Busy Period Length (V)
Proof. (cont.)
Since L^(m) is monotonically increasing, it has a limit point:
L = lim_{m→∞} L^(m) = ∞: Non-terminating synchronous busy period.
L = lim_{m→∞} L^(m) < ∞: Since W(·) and hence L^(·) increases by at least Cmin whenever it increases at all, there must be a finite m ≥ 0 where L^(m) = L^(m+1).
There are no additional releases in [0, L^(m)) as compared to the releases in [0, L^(m−1))
There must be an idle period following t = L^(m)
⇒ The length of the synchronous busy period must be exactly L = L^(m) = L^(m+1) = …
Convergence Condition (I)
Theorem 103 (Spuri). If U ≤ 1, then the synchronous busy period length L < ∞ is computed in a finite number of iterations.
Proof. Let H = lcm(T1, …, Tn) be the length of the hyper-period. Then, U ≤ 1 implies

W(H) = ∑_{i=1}^n ⌈H/Ti⌉ Ci = H · U ≤ H

and, due to weak monotonicity, W(t) ≤ H for all 0 ≤ t ≤ H.
Convergence Condition (II)
Proof. (cont.)
Hence,
∀m: L^(m) ≤ H, since the iteration cannot exceed the value H
Since W(·) and hence L^(m) increases by at least Cmin in every step where it increases at all, there is a finite m such that L = L^(m) = L^(m+1) = …
Consequently, the iteration for computing L terminates after finitely many steps.
Example
Consider the task set of the previous example:
L^(0) = ∑ Ci = 3 + 2 + 1 = 6
L^(1) = W(6) = 2·3 + 1·2 + 1·1 = 9
L^(2) = W(9) = 3·3 + 1·2 + 1·1 = 12
L^(3) = W(12) = 3·3 + 1·2 + 2·1 = 13
L^(4) = W(13) = 4·3 + 1·2 + 2·1 = 16
L^(5) = W(16) = 4·3 + 1·2 + 2·1 = 16
L = 16
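The iteration above is easy to reproduce in code. A minimal sketch: the periods of the example task set are not restated on this slide, but C = (3, 2, 1) with the hypothetical periods T = (4, 16, 9) yields exactly the values shown.

```python
import math

def busy_period_length(C, T, max_iter=10_000):
    """Fixed-point iteration L(m+1) = W(L(m)) with L(0) = sum(C)."""
    W = lambda t: sum(math.ceil(t / Ti) * Ci for Ci, Ti in zip(C, T))
    L = sum(C)                      # L(0)
    for _ in range(max_iter):
        nxt = W(L)
        if nxt == L:                # fixed point reached: L = W(L)
            return L
        L = nxt
    raise RuntimeError("no convergence (check U <= 1)")

# Hypothetical periods T = (4, 16, 9) reproduce the slide's iteration:
# 6 -> 9 -> 12 -> 13 -> 16 -> 16, so L = 16.
```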
Recast Busy Period Length Theorem
Theorem 106. Independently of the [non-idling] scheduling policy employed, the length L of the synchronous busy period of a synchronous task set of n tasks is the smallest positive solution of

t = W(t) = ∑_{i=1}^n ⌈t/Ti⌉ Ci.

Proof. We only need to show that there cannot be a smaller solution L′ > 0 satisfying L′ < L = L^(m) = L^(m+1).
Clearly, L′ > ε = L^(−1), since W(ε) = ∑_{i=1}^n Ci > ε
Since W(t) ≤ L′ for all t ≤ L′ due to weak monotonicity, the iteration cannot exceed L′
⇒ ∀m′: L^(m′) ≤ L′ < L, which contradicts L^(m) = L.
Complexity of Busy Period Analysis (I)
Being able to restrict feasibility analysis of a task set to t ≤ tmax = L can speed up feasibility analysis, although the worst-case complexity is not improved: In particular, for U ≤ c < 1,

L = ∑_{i=1}^n ⌈L/Ti⌉ Ci ≤ ∑_{i=1}^n (1 + L/Ti) Ci ≤ ∑_{i=1}^n Ci + L ∑_{i=1}^n Ci/Ti ≤ ∑_{i=1}^n Ci + Lc,
Complexity of Busy Period Analysis (II)
Since this leads to L ≤ (1/(1 − c)) ∑_{i=1}^n Ci, we obtain:
Pseudo-polynomial time complexity O(n ∑_{i=1}^n Ci) for computing L, since every iteration takes O(n)
Pseudo-polynomial time complexity O(n ∑_{i=1}^n Ci) for feasibility analysis, since incrementally computing h(t + Δt) = h(t) + ∑_{i∈I_Δt} Ci takes O(n) per deadline
Final loading factor feasibility test for synchronous/hybrid task sets with arbitrary deadlines and U ≤ 1:
Check h(t) ≤ t for t = dk ∈ [0, tmax], where tmax = min{L, tmax^(ZS), tmax^(George), …}
At most exponential complexity; pseudo-polynomial complexity in many cases (U ≤ c < 1)
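The complete loading-factor test can be sketched as follows; this is a minimal sketch assuming a synchronous task set with integer parameters and U ≤ 1, and using only L as the tmax bound (without the Zheng & Shin or George et al. refinements).

```python
import math

def processor_demand(t, C, T, D):
    """h(t): total WCET of jobs with both release and deadline in [0, t]."""
    return sum((1 + (t - Di) // Ti) * Ci
               for Ci, Ti, Di in zip(C, T, D) if Di <= t)

def edf_feasible(C, T, D):
    """Check h(t) <= t at every absolute deadline up to tmax = L (U <= 1 assumed)."""
    W = lambda t: sum(math.ceil(t / Ti) * Ci for Ci, Ti in zip(C, T))
    L = sum(C)                          # synchronous busy period via fixed point
    while W(L) != L:
        L = W(L)
    deadlines = {k * Ti + Di
                 for Ti, Di in zip(T, D) if Di <= L
                 for k in range((L - Di) // Ti + 1)}
    return all(processor_demand(t, C, T, D) <= t for t in sorted(deadlines))
```

For example, C = (2, 3), T = (4, 6), D = (2, 6) has U = 1 but h(6) = 7 > 6, so the test reports infeasibility.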
Response Time Analysis (Ch. 4)
Alternative to Feasibility Analysis
Consider any job Ji,j of task τi:
Release time ri,j, relative deadline Di
Compute the worst-case response time rti,j for Ji,j
If rti = maxj rti,j ≤ Di for all i, the task set is feasible
Response time analysis
is more difficult than feasibility analysis (= the simple check h(t) ≤ t), since response times must be computed recursively
is more general and even allows end-to-end response time analysis in distributed systems (holistic schedulability analysis)
Worst Case Response Time (I)
Lemma 111. The worst-case response time of job Ji,j with deadline d = di,j occurs in a τi-peer-synchronous (PS) deadline-d busy period [t1, t2) containing Ji,j, in which all tasks except τi are released simultaneously at t1 and with maximum rate.
Proof. Consider a specific job Ji,j of τi, and let
a be its release time and d = a + Di its absolute deadline
f ≥ a + Ci be its finishing time
[t1, t2) be the deadline-d busy period containing f (actually, containing the whole interval [a, f]), with
t1 denoting the last time before f without a job with deadline ≤ d released before and executed after t1
t2 denoting the first time at or after f without a job with deadline ≤ d released before and executed after t2
Worst Case Response Time (II)
Proof. (cont.) Since [t1, t2) is a deadline-d busy period,
t1 must be the release time of a job with deadline ≤ d
only jobs with deadline ≤ d are executed during [t1, t2), without idle time in between
Ji,j need not be the last job in the deadline-d busy period if there are jobs Jk,l with the same deadline d
Consider the schedule where all tasks except τi are shifted left such that they are released simultaneously and at their maximum rate from t1 on.
Jobs released before t1 did not contribute to [t1, t2) ⇒ they do not contribute after the shift
The processor demand h[t1,t) for any t ∈ [t1, t2] cannot decrease
⇒ Neither Ji,j's completion time (t = f) nor the length of the deadline-d busy period (t = t2) can decrease.
Worst Case Response Time (III)
Consider initial τi-peer-synchronous (IPS) deadline-d busy periods [0, t′), where
τi has job release times ri,j = si + (j − 1)Ti with si ≥ 0
all tasks τk ≠ τi are released synchronously at time 0
The worst-case response time for task τi
need not occur in an IPS deadline-d busy period (i.e., possibly t1 ≠ 0 in the PS deadline-d busy period [t1, t2))
BUT: We can nevertheless restrict our attention to IPS deadline busy periods …
Worst Case Response Time (IV)
Given some arbitrary job Ji,j released at a ≥ 0, contained in the PS deadline-d busy period [t1, t2),
we can choose t1 as our new time origin 0
Ji,j is released at time a′ = a − t1, and has deadline d′ = d − t1
the resulting IPS deadline-d′ busy period [0, t2 − t1) ends at time Li(a′) = t2 − t1
a′ + Ci ≤ Li(a′) ≤ L, since
a′ + Ci is a trivial lower bound for Li(a′)
the synchronous busy period must include all possible IPS deadline-d busy periods
Worst Case Response Time (V)
Obviously,
Li(a′) is also the length of the original PS deadline-d busy period [t1, t2)
The worst-case response time of Ji,j released at time a satisfies rti(a) ≤ t2 − a = Li(a′) − a′
Hence, rti(a) is the same as rti(a′) of a job Ji,j′ released at time a′
It hence suffices to study all IPS deadline-d busy periods, where a specific job Ji,j of τi
arrives at some arbitrary 0 ≤ a ≤ Li(a) − Ci
has deadline d = a + Di
Note that the first job of τi has offset si(a) = a − ⌊a/Ti⌋ Ti
Li(a) Computation (I)
Lemma 116. Given a schedule where some job of τi is released at time a with deadline d = a + Di, and all other tasks are released at 0. The worst-case deadline-d relevant cumulated workload Wi(a, t) arriving in [0, t) is

Wi(a, t) = ∑_{k≠i, Dk≤a+Di} min{ ⌈t/Tk⌉, 1 + ⌊(a + Di − Dk)/Tk⌋ } Ck + Δi(a, t) Ci

where

Δi(a, t) = min{ ⌈(t − si(a))/Ti⌉, 1 + ⌊(a − si(a))/Ti⌋ } if t ≥ si(a), and 0 otherwise.
Li(a) Computation (II)
Proof. For any τk ≠ τi, the deadline-d-relevant workload created by τk within [0, t) is the minimum of
the max. number ⌈t/Tk⌉ of τk job releases in [0, t)
the max. number 1 + ⌊(d − Dk)/Tk⌋ of τk jobs with deadline ≤ d in [0, d]
The deadline-d-relevant workload created by τi within [0, t) is 0 if t is before or at the first job release, or else determined by the minimum of
the max. number ⌈(t − si(a))/Ti⌉ of job releases in [si(a), t)
the max. number 1 + ⌊(a − si(a) + Di − Di)/Ti⌋ = 1 + ⌊(a − si(a))/Ti⌋ of jobs with deadline ≤ d in [si(a), d] (and hence in [0, d])
Note that only jobs released before and at time a are relevant for Li(a), since later jobs of τi have a deadline past d = a + Di.
Li(a) Computation (III)
Theorem 118. Given the schedule where some job of τi is released at time a with deadline d = a + Di, the worst-case length of the IPS deadline-d busy period Li(a) can be computed iteratively as

Li^(0)(a) = ∑_{j≠i, Dj≤a+Di} Cj + δ_{si(a)=0} Ci

Li^(m+1)(a) = Wi(a, Li^(m)(a))

where δ_{si(a)=0} = 1 if si(a) = 0, and 0 otherwise.
Li(a) Computation (IV)
Proof. The construction is exactly the same as for ordinary busy periods, except that only the deadline-d relevant cumulative workload must be considered.
This is ensured by the minimum functions in Wi(a, t), which guarantee that
only the workload that arrived before t is considered
only the workload that is relevant for a deadline-d busy period is considered
Hence, the iterative construction indeed builds up the IPS deadline-d busy period.
Worst Case Response Time Analysis (I)
Wi(a, t) has the same properties as the cumulative workload W(t) for ordinary busy periods. Hence,
the fixed-point iteration used for determining the length L of the synchronous busy period can be used
Li(a) is the unique limit lim_{m→∞} Li^(m)(a)
Li(a) is also meaningful for irrelevant a (where a + Ci > Li(a)), in which case Li(a) is the length of the whole IPS busy period
If U = ∑_{i=1}^n Ci/Ti ≤ 1, it again follows that
Li(a) = Li^(m)(a) = Li^(m+1)(a) for some finite m
Li(a) is the smallest positive solution of t = Wi(a, t)
Worst Case Response Time Analysis (II)
To find out whether a task set is feasible, determine:
The worst-case response time of Ji,j released at time a, which is rti(a) = max{Ci, Li(a) − a} [Ci is an obvious lower bound]
Note that this definition provides rti(a) = Ci for irrelevant a, where Li(a) = L < a + Ci
The worst-case response time of τi is rti = max_{a≥0} rti(a)
Checking all a ≥ 0 is not practical, but Li(a) ≤ L for all τi
Checking 0 ≤ a ≤ L − Ci is hence sufficient
If ∀i : rti ≤ Di, the task set is feasible.
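The procedure above can be sketched compactly; this is a minimal sketch assuming integer task parameters so that checking integer arrival times a suffices (the textbook algorithm is more refined).

```python
import math

def worst_case_response_time(i, C, T, D, L):
    """rti = max over arrival times a of rti(a) = max{Ci, Li(a) - a} (sketch).

    Assumes integer task parameters and U <= 1, and checks only integer
    arrival times a in [0, L - Ci], with L the synchronous busy period."""
    n = len(C)

    def s(a):                       # offset of tau_i's first job: si(a) = a mod Ti
        return a % T[i]

    def Wi(a, t):                   # deadline-d relevant workload in [0, t), d = a + Di
        d = a + D[i]
        w = sum(min(math.ceil(t / T[k]), 1 + (d - D[k]) // T[k]) * C[k]
                for k in range(n) if k != i and D[k] <= d)
        if t >= s(a):               # tau_i contributes only from its first release on
            w += min(math.ceil((t - s(a)) / T[i]), 1 + (a - s(a)) // T[i]) * C[i]
        return w

    def Li(a):                      # IPS deadline-d busy period length (fixed point)
        t = sum(C[j] for j in range(n) if j != i and D[j] <= a + D[i])
        if s(a) == 0:
            t += C[i]
        while Wi(a, t) != t:
            t = Wi(a, t)
        return t

    return max(max(C[i], Li(a) - a) for a in range(L - C[i] + 1))
```

For the hypothetical set C = (1, 2), T = D = (4, 6) with L = 3, this yields rt1 = 1 and rt2 = 3, both within their deadlines.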
Worst Case Response Time Analysis (III)
The range of a for which rti(a) ≤ Di must be checked can be further reduced by exploiting some additional facts:
Wi(a, t) and hence Li(a) has a local maximum if a is such that a + Di − Dk = mTk for some m
Di ≤ Dj ⇒ maxa Li(a) ≤ maxa Lj(a)
Li(a) is non-decreasing in a for all τi
The detailed algorithm can be found in the textbook.
Variants of Hybrid Models
Non-Idling Non-preemptive Hybrid EDF (I)
Feasibility analysis for non-idling non-preemptive EDF for hybrid task sets is slightly more complicated:
An urgent task released during the execution of a less urgent one is delayed (priority inversion)
This may even lead to an infeasible schedule, as already shown by an earlier example
Fortunately, the maximum delay of any urgent job can be bounded
How can this be done?
Non-Idling Non-preemptive Hybrid EDF (II)
For some t, let t′ < t be the release time of the job that initiates the last deadline-t busy period [t1, t2):
t′ is the release time of a job with deadline ≤ t
A job Jlow with deadline > t that cannot be preempted could be executing at t′ ⇒ priority inversion during [t′, t1].
There is no idle time during [t′, t2).
After Jlow has completed, only jobs Jhigh with deadline ≤ t released at or after t′ [if any] are executed
⇒ Any Jhigh can experience at most one priority inversion
⇒ The maximum penalty of Jhigh due to non-preemption is max_{t′+Dj>t}{Cj − 1}, with penalty 0 if no such Dj exists
Non-Idling Non-preemptive Hybrid EDF (III)
Theorem 126 (George, Rivierre and Spuri 1996). Any hybrid task set with processor utilization U ≤ 1 is feasible under non-preemptive EDF if and only if

∀t ∈ S : h(t) + max_{Dj>t} {Cj − 1} ≤ t,

where tmax = min{L, t1, t2} with

S = ∪_{i=1}^n { kTi + Di, k = 0, …, ⌊(tmax − Di)/Ti⌋ }

t1 = max{ Dmax, ( ∑_{i=1}^n (1 − Di/Ti) Ci ) / (1 − U) }

t2 = ( ∑_{Di≤Ti} (1 − Di/Ti) Ci + max_{i=1,…,n} {Ci − 1} ) / (1 − U)
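The test can be sketched directly from the theorem; this is a minimal sketch assuming U < 1 (so that t1 and t2 are finite) and integer task parameters.

```python
import math

def np_edf_feasible(C, T, D):
    """George/Rivierre/Spuri-style test for non-preemptive EDF (sketch, U < 1)."""
    U = sum(c / t for c, t in zip(C, T))
    W = lambda t: sum(math.ceil(t / Ti) * Ci for Ci, Ti in zip(C, T))
    L = sum(C)                                  # synchronous busy period length
    while W(L) != L:
        L = W(L)
    t1 = max(max(D),
             sum((1 - Di / Ti) * Ci for Ci, Ti, Di in zip(C, T, D)) / (1 - U))
    t2 = (sum((1 - Di / Ti) * Ci for Ci, Ti, Di in zip(C, T, D) if Di <= Ti)
          + max(Ci - 1 for Ci in C)) / (1 - U)
    tmax = min(L, t1, t2)
    S = sorted({k * Ti + Di for Ti, Di in zip(T, D) if Di <= tmax
                for k in range(int((tmax - Di) // Ti) + 1)})
    h = lambda t: sum((1 + (t - Di) // Ti) * Ci
                      for Ci, Ti, Di in zip(C, T, D) if Di <= t)
    blocking = lambda t: max((Cj - 1 for Cj, Dj in zip(C, D) if Dj > t), default=0)
    return all(h(t) + blocking(t) <= t for t in S)
```

For example, C = (2, 3), T = (4, 8), D = (3, 6) fails at t = 3, where h(3) = 2 plus blocking 2 exceeds 3.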
Tasks with Release Jitter (I)
Up to now,
jobs are ready for execution from their release time on
relative deadlines are measured from release times on
In reality, however, it makes sense to distinguish two instants for the j-th job Ji,j of task τi:
Arrival time ai,j [starting the relative deadline]
Release time ai,j ≤ ri,j ≤ ai,j + Ji, where Ji ≥ 0 is the release jitter of task τi
A non-zero release jitter
models scheduling/dispatching overhead
needs to be accounted for in feasibility analysis
Tasks with Release Jitter (II)
If Ji > 0, then two job releases of τi may occur closer to each other than Ti, namely, Ti − Ji apart. The worst-case arrival pattern of a hybrid task set is:
All jobs experience their shortest interarrival time at the (synchronous) beginning of the schedule
First job released at t = 0, consecutive ones at max{kTi − Ji, 0} for k > 0
This is equivalent to the following arrival pattern:
The first job Ji,1 of τi arrives at t = −Ji, all subsequent jobs Ji,k, k ≥ 2, arrive equally spaced by Ti
All jobs arriving at t < 0 are simultaneously released at time 0, all others are released when they arrive
Tasks with Release Jitter (III)
The processor demand h(t) hence becomes

h(t) = ∑_{i:Di≤t+Ji} ( 1 + ⌊(t + Ji − Di)/Ti⌋ ) Ci

The maximum time tmax up to which the demand must be checked is

tmax = max{ max_{i=1,…,n} {Di − Ji}, (U/(1 − U)) max_{1≤i≤n} {Ti + Ji − Di} }
Appropriately modified variants of the Zheng & Shin and George et al. upper bounds also exist.
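The jitter-aware demand can be sketched as follows (integer parameters assumed); with ∀i: Ji = 0 it reduces to the ordinary jitter-free h(t).

```python
def demand_with_jitter(t, C, T, D, J):
    """Processor demand h(t) with release jitter J_i (sketch, integer parameters)."""
    return sum((1 + (t + Ji - Di) // Ti) * Ci
               for Ci, Ti, Di, Ji in zip(C, T, D, J) if Di <= t + Ji)
```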
Tasks with Release Jitter (IV)
By the same token, it is possible to adapt the busy period analysis to non-zero release jitter: The cumulative workload reads

W(t) = ∑_{i=1}^n ⌈(t + Ji)/Ti⌉ Ci

and everything else remains unchanged.
Consequently,
most results for ∀i: Ji = 0 can be adapted to non-zero release jitter
handling this important model extension is not too difficult.
Sporadically Periodic Tasks (I)
In real systems, processing requirements for a task τi are often bursty:
Started by some initial job, arriving sporadically with sporadicity interval Ti (outer period)
Followed by at most mi − 1 ≥ 0 additional jobs each, arriving sporadically with sporadicity interval ti (inner period)
Individual deadline for every job, but mandatory requirement: mi ti ≤ Ti
Examples:
Data reception via a serial line
Sensor polling triggered by some event, e.g., an alarm
Sporadically Periodic Tasks (II)
Worst-case arrival pattern for loading factor analysis:
Packing the maximum number of job releases at the beginning of the schedule
Incorporating release jitter as before, with the same jitter bound Ji applying to every (outer, inner) job of τi
Adaptation of the busy period and processor demand analysis: Need to consider the
number ki of complete outer periods of τi fitting into the time interval [0, t) in question
number of completely fitting inner periods from the last (incompletely fitted) outer period
Sporadically Periodic Tasks (III)
The cumulative workload then becomes W(t) = ∑_{i=1}^n Ii(t) Ci with

Ii(t) = ⌊(t + Ji)/Ti⌋ mi + min{ mi, ⌈( t + Ji − ⌊(t + Ji)/Ti⌋ Ti ) / ti⌉ }

The processor demand becomes h(t) = ∑_{Di≤t+Ji} Hi(t) Ci with

Hi(t) = ⌊(t + Ji − Di)/Ti⌋ mi + min{ mi, 1 + ⌊( t + Ji − Di − ⌊(t + Ji − Di)/Ti⌋ Ti ) / ti⌋ }
The feasibility test conditions etc. remain unchanged.
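The job-release count Ii(t) can be sketched as follows (integer parameters assumed); as a sanity check, with mi = 1 and ti = Ti it reduces to the ordinary ⌈(t + Ji)/Ti⌉.

```python
import math

def sporadic_releases(t, Ti, ti, mi, Ji=0):
    """Max. job releases I_i(t) of a sporadically periodic task in [0, t) (sketch)."""
    x = t + Ji
    outer = (x // Ti) * mi              # jobs from complete outer periods
    rest = x - (x // Ti) * Ti           # time reaching into the last outer period
    return outer + min(mi, math.ceil(rest / ti))

# Burst example: outer period Ti = 10, inner period ti = 2, burst size mi = 3.
```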
Scheduling Overhead (I)
In real systems,
the processor must also execute the scheduler and dispatcher ⇒ it is not 100% available for task execution
Example: A periodic tick interrupt activates the scheduler, which moves tasks from an arrival queue to the (dynamic) priority-ordered processing queue and dispatches the first-ranked one.
Consider processors with availability functions satisfying a(t) + a(T) ≤ a(t + T) for all T, t. Then, it can be shown that
the Liu & Layland Synchronous Busy Period Theorem is still valid
incorporating scheduling overhead in feasibility analysis is possible
Scheduling Overhead (II)
Scheduling overhead model by Tindell et al.:
Ctick is the WCET of the tick interrupt service routine
CQL is the WCET to move the first job
CQS is the WCET to move subsequent jobs
The scheduling overhead over [0, w] is then

OV(w) = T(w) Ctick + min{T(w), K(w)} CQL + max{K(w) − T(w), 0} CQS

with
T(w) = ⌈w/Ttick⌉ bounding the number of tick interrupts in [0, w)
K(w) = ∑_{i=1}^n ⌈(w + Ji)/Ti⌉ being the number of task moves
Scheduling Overhead (III)
The cumulative workload then becomes

W(t) = OV(t) + ∑_{i=1}^n ⌈(t + Ji)/Ti⌉ Ci,

and if the availability function a(t) is such that a(t) ≥ max{t − OV(t), 0}, then a sufficient feasibility test condition is

t − OV(t) ≥ ∑_{i:Di≤t+Ji} ( 1 + ⌊(t + Ji − Di)/Ti⌋ ) Ci
Further generalizations by Jeffay & Stone.
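OV(w) is straightforward to compute; a minimal sketch using the parameter names above:

```python
import math

def scheduling_overhead(w, Ttick, Ctick, CQL, CQS, T, Jit):
    """Tindell-style overhead OV(w) over [0, w] (sketch)."""
    Tw = math.ceil(w / Ttick)                                     # tick interrupts
    Kw = sum(math.ceil((w + Ji) / Ti) for Ti, Ji in zip(T, Jit))  # task moves
    return Tw * Ctick + min(Tw, Kw) * CQL + max(Kw - Tw, 0) * CQS
```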
EDF under Overloads (Ch. 5.1)
Overloads (I)
All previous “positive” results regarding EDF somehow assumed that
a feasible schedule of the task set in question exists
although maybe only God knows this schedule
Examples: EDF is guaranteed to find a feasible schedule if
the maximum loading factor u ≤ 1
the processor utilization U ≤ 1 (Di ≥ Ti)
In reality, however, overloads and hence infeasible task sets can and do occur.
Overloads (II)
Overloads are impossible to circumvent because of limited control over the environment, leading to
unanticipated changes of the arrival load over time
excessive job execution times
failures
When modeling and analyzing fault-tolerant distributed real-time systems in such advanced settings,
the environment is considered as the computer system’s adversary
the computer system plays against the adversary
a correct system design implements a winning strategy
How does EDF Perform under Overloads?
In the presence of overload,
it is impossible for any scheduling algorithm to meet all deadlines
some [less important] jobs must be aborted
EDF is optimal in the absence of overloads. However, under overloads,
EDF aborts jobs in an “uncontrolled” way
the arrival of a new job may cause all preempted ones to miss their deadlines (domino effect)
⇒ EDF performs poorly under overload!
Importance vs. Urgency
Apparently, one has to distinguish two properties of a task:
Urgency
Importance
In pure EDF [as well as in rate/deadline monotonic priority assignments],
urgency and importance are reflected by a single (dynamic) priority value
⇒ sub-optimal behavior under overloads.
Let’s assign each task τi an additional importance value Vi.
Utility Functions (I)
However, for the successful operation of a system,
the importance of task τi may depend upon the relative finishing times Fi,j = fi,j − ri,j of its jobs
a static importance value is often not sufficient
Generalize the importance value Vi to a utility function Vi(Fi,j):
hard
firm
soft
non-real-time
Utility Functions (II)
Scheduling with utility functions:
Scheduling algorithm A selects a schedule S, denoted S = A ⊢ Ji,j
Scheduling goal: Maximize the cumulative utility Γ = ∑_{A⊢Ji,j} Vi(Fi,j).
What can be achieved here?
On-line vs. Clairvoyant Scheduling
An on-line scheduling algorithm A
does not have any information (execution time, deadline etc.) about a job before it is released
is easy to implement
A clairvoyant scheduler C
has all information about all future job releases a priori
is a theoretical algorithm that cannot be implemented
Interesting question:
How does the cumulative utility ΓA relate to ΓC?
⇒ Competitive analysis
Competitive Analysis Basics (I)
Given a set of real-time tasks T with firm deadlines, let
vi be the importance value density of a task/job τi
uniform value density: ∀i: vi = 1
non-uniform value density: T’s importance ratio k = max_{τi∈T}{vi} / min_{τi∈T}{vi} > 1
Vi = Vi(Fi,j) = vi Ci be the importance value of τi
Given any sequence J = {Ji,j} of job releases of T, let
Γ*(J) = ∑_{C*⊢Ji,j} Vi be the maximum cumulative value obtained by the optimal clairvoyant algorithm C*
ΓA(J) = ∑_{A⊢Ji,j} Vi be the maximum cumulative value obtained by on-line algorithm A
Competitive Analysis Basics (II)
Question 1: Does an optimal on-line algorithm exist?
An algorithm A* that maximizes ΓA*(J) for every sequence J of job releases of T
A simple example reveals that this requires knowledge of future arrivals
⇒ No optimal on-line algorithm exists
Question 2: How well does a given algorithm A perform?
Determine the competitive factor ϕA:

ϕA = inf_J ΓA(J) / Γ*(J)
Competitive Analysis Basics (III)
Properties of ϕA:
For every sequence J of job releases, it must hold that ΓA(J) ≥ ϕA Γ*(J)
Sometimes: Restrict the sequences of job releases considered for computing ϕA, to guarantee things like:
sporadicity of task releases
limited maximum or average load
Further Questions:
What is the best achievable ϕA?
What is the algorithm A achieving it?
Competitive Analysis Basics (IV)
How to determine ϕA?
Two-player game between
on-line algorithm A
adversary (constructing J [+ running clairvoyant algorithm C∗])
Adversary tries to make the ratio between A’s and C∗’s gain as small as possible.
To show that ϕA ≤ r, a counter-example suffices:
Construct some sequence J of job releases of T
Determine Γ∗(J ) provided by C∗
Show that ΓA(J ) ≤ rΓ∗(J )
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 148/209
Simple Example: Competitive Factor EDF
Consider J consisting of two zero-slack time jobs J1 and J2:
V1 = C1 = K and V2 = C2 = εK
J1, J2 simultaneously released but d1 > d2
Both preemptive and non-preemptive EDF provide cumulative value ΓEDF(J) = εK
Clairvoyant scheduler C∗ provides ΓC∗(J ) = K
⇒ ϕEDF ≤ ε
Hence, letting ε → 0, EDF in fact has a zero competitive factor!
What is the competitive factor of other algorithms?
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 149/209
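The two-job scenario can be replayed with a small simulation (a sketch of this example only, not of general EDF; all names are ours): jobs run back-to-back, a job only contributes its value if it can still meet its deadline, EDF picks by earliest deadline, and the clairvoyant gain is the better of the two orders.

```python
K, eps = 100.0, 0.01
# Zero-slack jobs: release r, execution time C, deadline d, value V.
J1 = dict(r=0.0, C=K,       d=K,       V=K)
J2 = dict(r=0.0, C=eps * K, d=eps * K, V=eps * K)

def gain(order):
    """Run jobs back-to-back in the given order; a job only counts
    (and only occupies the processor) if it can still finish in time."""
    t, total = 0.0, 0.0
    for j in order:
        t = max(t, j["r"])
        if t + j["C"] <= j["d"]:     # completes by its deadline
            t += j["C"]
            total += j["V"]
    return total

gain_edf = gain(sorted([J1, J2], key=lambda j: j["d"]))  # J2 first
gain_opt = max(gain([J1, J2]), gain([J2, J1]))           # clairvoyant
print(gain_edf, gain_opt, gain_edf / gain_opt)           # εK, K, ε
```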
Lower-Bounds on Competitive Factors (I)
Arbitrary loading factor (in particular, u > 2):
Uniform value density: No on-line algorithm achieves ϕA > 1/4
Value density ratio k > 1: No on-line algorithm achieves ϕA > 1/(1 + √k)²
We will now work through the detailed proof of the first result, based upon the following assumptions:
Infinitely fine-grained preemptability
Finite durations of periods with overload
Adversary and player execute at same speed
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 150/209
Lower-Bounds on Competitive Factors (II)
Theorem 151 (Baruah et al.). For uniprocessor systems under finite
durations of overload, no online algorithm achieves a competitive factor
ϕ > 1/4.
Proof. Quite complex — done on blackboard . . .
[You may want to refresh your basic knowledge of difference equations,
generating functions, partial fraction expansions and complex numbers.]
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 151/209
An Algorithm with ϕA = 1/4?
The competitive factor lower-bounds can be shown to be tight, by giving an on-line algorithm:
TD1, which has ϕA = 1/4 (uniform value density)
Dover, which has ϕA = 1/(1 + √k)² (value density ratio k > 1)
We will study a simple version of algorithm TD1, which
works only for task sets with zero laxity
[but can be extended to work also with task sets with non-zero laxity]
exploits the adversary argument used in the lower-bound proof
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 152/209
Algorithm TD1 (I)
Intuition behind algorithm TD1:
We consider zero-laxity task sets only, hence
if, in a sequence J = J1, J2, . . . , Jm of jobs, any successive pair of jobs overlaps (i.e. rj ≤ rj+1 < dj),
at most one job Ji ∈ J can be feasibly scheduled
choose a job Ji that has sufficiently high value w.r.t. the accumulated total value of J
More specifically, inspired by the lower-bound proof,
abort the current job in favor of a newly arrived one if its value is < 1/4 of the duration ∆ of the currently “expected” busy period
the clairvoyant scheduler can accumulate a value of at most ∆
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 153/209
Algorithm TD1 (II)
Hence, given a sequence of overlapping jobs J1, J2, . . . starting a busy period:
TD1 eventually chooses a single job Ji that completes
J1, . . . , Ji−1 are discarded right away or aborted before completion
Ji+1, . . . , Jm, with Jm being the last job released before Ji’s termination, are discarded
For this purpose, TD1 maintains two intervals, both starting at the beginning of a busy period:
∆′ ends at the deadline of Jrun, i.e., is the busy period that would be generated if Jrun were not aborted later.
∆ extends ∆′ by the maximum deadline of jobs discarded during Jrun; ∆ = ∆′ after an abort.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 154/209
Algorithm TD1 (III)
Pseudo-Code Algorithm TD1 (Simple Version)
1 ∆′ := 0; ∆ := 0; vrun := 0
2 Jrun := ∅ /* currently no scheduled job */
When Jnext is released, where xrun > 0 is the remaining processing time of Jrun:
3 ∆ := max{∆, ∆′ − xrun + Cnext}
4 if vrun < ∆/4 then
5     abort Jrun
6     set ∆′ := ∆ /* Note: ∆′ − xrun + Cnext > ∆ here */
7     schedule Jrun := Jnext and set vrun := vnext = Cnext
Whenever Jrun completes:
8 ∆′ := 0; ∆ := 0; vrun := 0
9 Jrun := ∅ /* currently no scheduled job */
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 155/209
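The pseudo-code transliterates almost directly into Python. The surrounding job model (zero laxity, value = C, an explicit completion check before each release, starting a job on an idle processor) is our own sketch; the pseudo-code's line numbers are noted in the comments.

```python
def td1_schedule(releases):
    """Simple TD1 over (release time, C) pairs; returns the cumulative
    value of completed jobs. Zero laxity: value = C, deadline = r + C,
    so an uncompleted job contributes nothing."""
    dp = delta = v_run = 0.0                     # line 1: ∆', ∆, v_run
    finish = None                                # line 2: no scheduled job
    total = 0.0
    for t, c in sorted(releases):
        if finish is not None and finish <= t:   # J_run completed
            total += v_run
            dp = delta = v_run = 0.0             # lines 8-9: reset
            finish = None
        if finish is None:                       # idle processor: start the
            dp = delta = v_run = c               # new job (assumed behavior,
            finish = t + c                       # not in the pseudo-code)
            continue
        x_run = finish - t                       # remaining time of J_run
        delta = max(delta, dp - x_run + c)       # line 3
        if v_run < delta / 4:                    # line 4
            dp = delta                           # line 6
            v_run, finish = c, t + c             # lines 5, 7: abort & switch
        # else: J_next is discarded
    if finish is not None:                       # last job runs to completion
        total += v_run
    return total

# On the earlier two-job example, TD1 aborts the small job and gains K:
print(td1_schedule([(0.0, 1.0), (0.0, 100.0)]))    # 100.0
# A mid-period release worth less than ∆/4 is discarded:
print(td1_schedule([(0.0, 100.0), (50.0, 10.0)]))  # 100.0
```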
Algorithm TD1 (IV)
Lemma 156. Consider ∆′k and vk maintained by TD1 in ∆′ (line 6) and
vrun (line 7), in the subsequence J1, . . . , Ji of the aborted jobs in an
overlapping sequence of job arrivals; discarded jobs in between are not
counted here. For any 1 ≤ k ≤ i, it holds that vk > ∆′k/2.
Proof. By induction. For k = 1, obviously v1 = ∆′1 > ∆′1/2.
For the induction step, assume vk > ∆′k/2 and let ∆k be ∆ before the arrival of Jk+1.
We first show ∆′k+1 = max{∆k, ∆′k − xk + Ck+1} = ∆′k − xk + Ck+1 > ∆k:
Assuming the contrary, we have ∆′k+1 = ∆k.
If no job is discarded between Jk and Jk+1: ∆k = ∆′k, and
since Jk is aborted, 4vk < ∆′k+1 = ∆′k < 2vk, which is impossible.
If some Jℓ is discarded before Jk+1, then 4vk ≥ ∆k. Since by
assumption ∆k = ∆′k+1, Jk would not be aborted.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 156/209
Algorithm TD1 (V)
Proof. (cont.)
We hence obtain:
Since ∆′k+1 > ∆k and xk > 0, it follows that ∆′k+1 < ∆′k + vk+1.
Since Jk is aborted, we observe vk < ∆′k+1/4 by line 4.
Combining this leads to
vk+1 > ∆′k+1 − ∆′k > ∆′k+1 − 2vk > ∆′k+1 − ∆′k+1/2,
and hence to vk+1 > ∆′k+1/2 as claimed.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 157/209
Algorithm TD1 (VI)
Theorem 158 (Baruah et al.). For task sets with zero laxity and finite
durations of overload, the simple algorithm TD1 achieves a competitive
factor ϕTD1 = 1/4.
Proof. Consider a sequence J1, J2, . . . , Jm′ (finite, because the period
of overload must be finite) of overlapping job arrivals. It suffices to
distinguish two cases:
i = m′, so no overlapping job is released after the chosen
Ji = Jm′: By our lemma, vm′ > ∆m′/2 > ∆m′/4.
i < m′, so jobs Ji+1, . . . , Jm are released and discarded after Ji is
released [but before Ji terminates, hence m ≤ m′]: By the
threshold rule (line 4) of TD1, the running job’s vrun = vi must
satisfy vi ≥ ∆ℓ/4 for any i + 1 ≤ ℓ ≤ m, hence vi ≥ ∆m/4.
Since a clairvoyant scheduler can obtain a cumulative value of at most
∆m in J1, . . . , Jm, TD1 indeed provides ϕTD1 = 1/4 as asserted.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 158/209
Infinite Duration of Overloads ?
The competitive factor lower-bounds hold only for finite durations of overload
Is this restriction necessary?
YES (at least in the uniprocessor case)
For infinite durations of overload, no on-line uniprocessor algorithm has a non-zero competitive factor:
Adversary can generate a sequence of overlapping jobs of ever-increasing values
Player either abandons the current job (= no gain) or schedules it to completion (= loses the larger gain)
This goes on forever ⇒ adversary infinitely often “ahead” of player
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 159/209
Extended Competitive Lower-Bounds
Restricted loading factor 1 < u ≤ 2:
Uniform value density: No on-line algorithm achieves ϕA ≥ p, where p satisfies 4(1 − (u − 1)p)³ = 27p²
Value density ratio k > 1:
If q = k(u − 1) ≥ 1, no on-line algorithm achieves ϕA ≥ 1/(1 + √q)²
If q = k(u − 1) < 1, no on-line algorithm achieves ϕA ≥ p, where p satisfies 4(1 − qp)³ = 27p²
Can also be “simulated” by allowing the player to execute faster than the adversary ⇒ (much) better lower bounds [Baruah et al.]
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 160/209
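To get a feel for the uniform-value-density bound, the implicit equation for p can be solved numerically. The sketch below (our own; names hypothetical) uses plain bisection, which applies because 4(1 − (u − 1)p)³ − 27p² is positive at p = 0 and negative at p = 1 for 1 < u ≤ 2; for u = 2 it reproduces the unrestricted bound 1/4.

```python
def lower_bound_p(u, tol=1e-12):
    """Solve 4*(1 - (u-1)*p)**3 == 27*p**2 for p in (0, 1), 1 < u <= 2."""
    f = lambda p: 4 * (1 - (u - 1) * p) ** 3 - 27 * p ** 2
    lo, hi = 0.0, 1.0            # f(lo) > 0, f(hi) < 0: root in between
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(lower_bound_p(2.0), 6))   # 0.25, matching ϕA > 1/4 for u = 2
```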
Summarizing Observations
The considered lower-bounds
can be extended to multiprocessor scheduling
are quite conservative (infinite preemptability etc.)
⇒ real algorithms can be expected to work better in most cases
Interesting practical algorithms that can cope with finite-duration overloads:
Dover by Koren and Shasha
RED by Buttazzo and Stankovic
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 161/209
EDF Scheduling with Shared Resources(Ch. 6)
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 162/209
Where do we Stand?
We considered uniprocessor CPU scheduling using EDF for independent (simple) tasks
No shared resources
No precedence constraints
In reality, tasks are not independent:
shared data
shared communication channels
precedence constraints
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 163/209
Shared Resources—An Overview (I)
Accessing shared resources:
Semaphores resp. Critical sections
[Transactions]
Consider joint problem:
Scheduling periodic tasks with deadlines
Tasks access shared resources modeled as critical sections
Goals:
Meeting all deadlines
Avoiding unnecessary blocking
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 164/209
Shared Resources—An Overview (II)
Requires resource access protocol:
If job J wants to enter a CS and no job is currently in the CS, it enters and holds the CS while inside
If job J wants to enter a CS that is held by some job, it has to wait until it is granted access to the CS
If the job currently holding the CS leaves, the CS becomes free and some job waiting for the CS is granted access
Interferes with real-time scheduling . . .
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 165/209
Shared Resources—An Overview (III)
Complications:
Tasks may suffer from blocking:
Low priority job JL executed at time t, currently in CS z
High priority job JH released at t, also wants to enter CS z
⇒ JH is blocked until JL leaves z
Preemption within CS not always allowed:
CS representing access to communication channel
CS representing access to I/O-device
Feasibility checking for scheduling with shared resources is NP-hard
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 166/209
Priority Inversion
Preemptive priority driven scheduling principle:
Low priority job JL preempted upon release of high priority job JH
Principle “controllably” (while in CS only) violated during blocking
Principle uncontrollably violated during priority inversion:
Low-priority JL blocks high-priority JH via some CS
Medium-priority JM preempts JL (for its entire execution time CM!)
⇒ medium-priority JM indirectly blocks high-priority JH
Easily causes deadline violations
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 167/209
NASA Mars Pathfinder — July, 1997 (I)
http://research.microsoft.com/en-us/um/people/mbj/Mars
Some days into the mission: Total SW resets
Press statements:
“software glitches”
“the computer was trying to do too many things at once”
Wind River reported (VxWorks was the OS):
Preemptive priority scheduling
“Information bus”: shared memory to pass information
Bus access controlled via a mutex
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 168/209
NASA Mars Pathfinder — July, 1997 (II)
Part of the on-board task structure:
High priority bus management task H (access to bus)
Low priority meteo data gather task L (access to bus)
Medium priority communication task M
Watchdog resets system if bus mgmt. task takes too long
Pathfinder problem caused by classic priority inversion:
H was released while L held the bus
H waited for L to terminate
L was preempted by the long-running M
Watchdog timed out ⇒ reset
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 169/209
Assumptions and Notations (I)
Tasks:
A set of n ≥ 2 preemptible periodic (and/or sporadic) tasks τ1, . . . , τn, with Di ≤ Ti
The sequence of jobs Ji,1, Ji,2, . . . of τi sequentially
enters critical sections (CS) zi,1, zi,2, . . .
Constraints on critical sections:
Every CS is entirely contained in some job: zi,k ⊆ Ji,ℓ
For every τi, the CSs zi,1, zi,2, . . . may be different, but must be pairwise
disjoint (zi,k ∩ zi,ℓ = ∅) or
properly nested (zi,k ⊂ zi,ℓ)
The WCET for CS zi,k is ci,k
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 170/209
Assumptions and Notations (II)
CS and resources:
Every CS zi,k is associated with a single shared resource Ri,k
For every resource R, let ZR = {zi,k|Ri,k = R} denote
the set of CS associated with R
In case of nested CS zi,k1 ⊂ zi,k2 ⊂ · · · ⊂ zi,km,
(a job in) zi,kℓ holds all resources Ri,kℓ , . . . , Ri,km
resources are
acquired in the order Ri,km, . . . , Ri,k1
freed in the order Ri,k1 , . . . , Ri,km
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 171/209
The Classic Priority Inheritance Protocol
Avoid priority inversion as follows:
Schedule jobs according to static priority scheduling
If a job JL blocks a higher-priority job JH via some shared resource (= both in CS),
JL temporarily inherits the priority of JH
JL reverts to its original priority when it exits the CS
Can also be adapted to nested CS
Will look into it in detail . . .
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 172/209
Blocking with Static Priority Scheduling
With static priority scheduling, capturing blocking is easy:
A job Ji,k is blocked by a job Jj,ℓ in CS zj,m if
Ji,k cannot resume execution before Jj,ℓ has
completed zj,m, and
Jj,ℓ has lower priority than Ji,k
We can also say that task τj blocks task τi since all
jobs of a task have the same priority
If R is a resource held by Jj,ℓ in CS zj,m, we also say
that Ji,k (resp. task τi) is blocked by the resource R
For dynamic priority scheduling, capturing blocking is more difficult . . .
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 173/209
Blocking with EDF
Adapt blocking definition from static priority scheduling:
A job Ji,k is blocked by a job Jj,ℓ in CS zj,m if
Ji,k cannot resume execution before Jj,ℓ has
completed zj,m, and
Jj,ℓ has later deadline than Ji,k
Unfortunately, this property is not static (i.e., need not hold for all jobs Jj,ℓ of task τj)
We need a static measure πi associated with task τi to express worst-case blocking times in terms of ci,m only . . .
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 174/209
Preemption Levels (I)
With EDF, Jj,ℓ could block Ji,k on a shared resource only if
Jj,ℓ has a later deadline: dj,ℓ > di,k
otherwise, Ji,k would just be regularly preempted by Jj,ℓ
Jj,ℓ is released earlier: rj,ℓ < ri,k
otherwise, Ji,k would never give Jj,ℓ a chance to execute
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 175/209
Preemption Levels (II)
To capture blocking in EDF as in static priority scheduling, assign a preemption level πi to task τi:
Job Ji,k of τi could only be blocked by Jj,ℓ of τj on a
shared resource if πj < πi
The choice πi > πj iff Di < Dj does the job
Preemption levels will allow us
to analyze potential blocking, in a not overly conservative way
to express worst-case blocking times in terms of ci,m only
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 176/209
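The assignment rule πi > πj iff Di < Dj can be sketched in a few lines (a hypothetical helper of our own; a larger level number means a higher preemption level, and tasks with equal relative deadline share a level):

```python
def preemption_levels(relative_deadlines):
    """Return π_i for each task such that π_i > π_j iff D_i < D_j."""
    distinct = sorted(set(relative_deadlines), reverse=True)
    level = {d: k + 1 for k, d in enumerate(distinct)}  # largest D -> level 1
    return [level[d] for d in relative_deadlines]

# Tasks with relative deadlines D = 10, 4, 7, 4:
print(preemption_levels([10, 4, 7, 4]))   # [1, 3, 2, 3]
```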
The Priority Inheritance Protocol (I)
Deadline (= dynamic priority) of a job Ji,j of task τi:
nominal deadline di,j
active deadline di(t)
initialized to di,j upon job release
possibly modified, according to the rules below, when
Ji,j enters/leaves a CS, or
a lower-deadline job wants to enter a CS Ji,j is in
or queued for
Jobs are scheduled EDF, based on their active deadlines.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 177/209
The Priority Inheritance Protocol (II)
PIP update rules (I):
When Ji,k tries to enter zi,m but Ri,m is already held by a
higher-deadline job Jj,ℓ, then Ji,k is blocked by Jj,ℓ on Ri,m.
Otherwise, Ji,k enters zi,m and thus holds Ri,m.
If Ji,k is blocked by some Jj,ℓ, then Jj,ℓ inherits Ji,k’s
deadline:
Jj,ℓ’s active deadline dj(t) is set to di(t)
whenever di(t) changes (by blocking some even smaller-deadline job), dj(t) := min{dj(t), di(t)}
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 178/209
The Priority Inheritance Protocol (III)
PIP update rules (II):
If Jj,ℓ exits a CS, it resumes
the active deadline it had when entering the CS (for outermost CS)
the active deadline of the lowest active-deadline job queued for the enclosing CS (for nested CS)
The jobs blocked by a resource R are queued according to increasing active deadlines
Ties (inevitable due to inheritance!) usually broken by FIFO order
When freed, resource R is acquired by the job at thehead of R’s queue
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 179/209
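The two core update rules, inherit on blocking and restore on exit, can be illustrated with a toy model (entirely our own sketch: just deadlines and one resource, no actual scheduler). Entering a CS saves the current active deadline; a blocked job with a smaller active deadline pushes it onto the holder; leaving restores the saved one.

```python
class Job:
    def __init__(self, name, deadline):
        self.name = name
        self.d_active = deadline  # active deadline d_i(t), = nominal at release
        self.saved = []           # deadlines saved on (nested) CS entries

def enter_cs(job, holder):
    """job tries to enter a CS whose resource is held by `holder`
    (None if free). Returns True iff job acquired the resource."""
    if holder is None:
        job.saved.append(job.d_active)           # remember for exit_cs
        return True
    # Blocked: the holder inherits the smaller active deadline.
    holder.d_active = min(holder.d_active, job.d_active)
    return False

def exit_cs(job):
    job.d_active = job.saved.pop()               # resume deadline held at entry

L, H = Job("L", 100.0), Job("H", 10.0)
enter_cs(L, None)     # L acquires the resource
enter_cs(H, L)        # H blocks on it; L inherits H's active deadline
print(L.d_active)     # 10.0 -> no medium-deadline job can preempt L now
exit_cs(L)
print(L.d_active)     # 100.0 -> L reverts when leaving the CS
```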
Observations Priority Inheritance Protocol
From the PIP rules, we immediately observe:
Priority inheritance is transitive: If JL blocks JM, and JM blocks JH, then JL inherits the active deadline of JH via JM.
The job holding resource R always executes with an active deadline equal to the smallest active deadline of all jobs queued for R (or any of its enclosing R′)
But there are subtle details . . .
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 180/209
Priority Inheritance Blocking (I)
PIP leads to different types of blocking:
Direct blocking occurs when a job tries to acquire a resource already held by some other job
Transitive blocking occurs in case of nested CS:
large-deadline job JL acquires R1 in CS
medium-deadline job JM has nested CS, acquiring R2, R1
small-deadline job JH acquires R2 in CS
⇒ JH is transitively blocked by JL
There is even a worse form of blocking . . .
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 181/209
Priority Inheritance Blocking (II)
Push-through blocking occurs when
large-deadline JL blocks small-deadline JH (directly or transitively) via some resources
JL also blocks every medium-deadline JM via its inherited active deadline from JH
Note:
JM need not access any shared resource to be blocked!
Preemption levels are not meaningful here: JM can have πM > πH (“anomalous” push-through blocking), yet cannot preempt JH since it has a later deadline!
⇒ Push-through blocking can hit any τM with πM > πL
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 182/209
Properties of PIP (I)
We start our analysis with some technical lemmas . . .
Lemma 183. After a job Ji,k has been released, only jobs with active
deadline less than or equal to di,k are executed until Ji,k completes.
Proof. The lemma follows directly from the PIP rules:
If the job Ji,k is never blocked, the lemma is an obvious
consequence of EDF scheduling.
Otherwise, any task that blocks Ji,k inherits its deadline or an even
smaller one.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 183/209
Properties of PIP (II)
Lemma 184. A job JH can be blocked by a larger-deadline job JL only
if, at the time JH is released, JL is already within a CS that can (later)
block JH .
Proof. Suppose JL is not within a CS that can either (a) directly block
JH or (b) can lead to an active deadline of JL that is less than or equal
to the one of JH (transitive or push-through blocking).
By Lemma 183, only jobs with active deadline at most that of JH will
be executed after its release
Since JL’s active deadline when JH is released is larger than that of JH,
the lemma follows.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 184/209
Properties of PIP (III)
Lemma 185. A job Ji,k of the task τi can only be blocked by a
higher-deadline job Jj,ℓ of task τj if τi has a greater preemption level
than τj , i.e., πi > πj .
Proof. By Lemma 184, Jj,ℓ must already be in a potentially blocking CS
when Ji,k is released. Hence,
Jj,ℓ is released before Ji,k
Jj,ℓ has a larger deadline than Ji,k
⇒ τj has a greater relative deadline than τi, hence πi > πj .
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 185/209
Blocking Critical Sections
Definition 186 (a). Let βi,j denote the set of all CS of jobs of τj that can
block jobs of the task τi (via any kind of blocking):
βi,j = { zj,m : πj < πi and zj,m can block Ji,k }
Since we have properly nested CS, we are primarily interested in maximal elements of βi,j:
Definition 186 (b). Maximal elements in βi,j :
β∗i,j = {zj,m : (zj,m ∈ βi,j) and (∄zj,m′ ∈ βi,j such that zj,m ⊂ zj,m′)}
Note carefully that a job Jj,ℓ exiting zj,m ∈ β∗i,j always
assumes a deadline larger than di(t) afterwards.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 186/209
Blocking Critical Sections (II)
Lemma 187. For any j ≠ i, a job Ji,k can be blocked by larger-deadline
jobs Jj,ℓ of task τj for at most the duration of one critical section of β∗i,j .
Proof. By Lemma 184 and 185, Jj,ℓ must be in some potentially
blocking CS zj,m′ when Ji,k is released:
Jj,ℓ is then also within some maximal zj,m ∈ β∗i,j, with zj,m′ ⊆ zj,m in case of nesting
When Jj,ℓ exits zj,m, it returns to a deadline larger than the active
deadline di(t) of Ji,k
By Lemma 183, Ji,k cannot be blocked by Jj,ℓ (or any later job of
τj ) again.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 187/209
Blocking Critical Sections (III)
Theorem 188. Under EDF scheduling with PIP, each job of a task τi can
be blocked by at most the duration of one critical section in each of β∗i,j ,
1 ≤ j ≤ n and πi > πj .
Proof. The theorem follows from:
Lemma 187 can be applied for all tasks τj that can block τi
Lemma 185 implies that these are exactly the tasks τj with πj < πi.
This theorem only considered blocking by other tasks. Still open:
Consider blocking via specific resources
Computing the worst case blocking times
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 188/209
Longest Critical Sections (I)
Definition 189 (a). Let ζ∗i,j,k denote the set of all longest critical sections
of τj associated with resource Rk, which can block τi’s jobs (in any way).
ζ∗i,j,k = { zj,h : zj,h ∈ β∗i,j and Rj,h = Rk }
Definition 189 (b). Let ζ∗i,·,k denote the set of all longest critical sections
associated with resource Rk that can block jobs of τi.
ζ∗i,·,k = ⋃_{πj<πi} ζ∗i,j,k
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 189/209
Longest Critical Sections (II)
Theorem 190. For each resource Rk, a job of τi can be blocked by at
most one critical section in ζ∗i,·,k.
Proof. By Lemma 184 and Lemma 185, a lower priority job Jj,ℓ must be
within some CS zj,m associated with Rk when Ji,k is released, and
πj < πi.
When Jj,ℓ frees Rk, either
Rk is acquired by Ji,k, or
Rk is acquired by a smaller or equal active-deadline job
Either way, Ji,k subsequently cannot be blocked by higher-deadline
jobs still queued for Rk anymore.
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 190/209
Maximum Blocking Times
Corollary 191. Under EDF scheduling with PIP, if there are m resources
that can block the jobs of τi, each of these jobs can experience a
maximum blocking time
Bi = min{ Σ_{πj<πi} max{cj,ℓ : zj,ℓ ∈ β∗i,j} , Σ_{k=1}^{m} max{cj,ℓ : zj,ℓ ∈ ζ∗i,·,k} }
Meaning:
Left part: tasks
Right part: resources
Computation of this expression is somewhat tricky . . .
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 191/209
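Once the WCETs of the critical sections in each β∗i,j and each ζ∗i,·,k are known, the corollary is a min over two sums (a sketch with hypothetical inputs; names are ours):

```python
def max_blocking_time(beta_star, zeta_star):
    """B_i = min( Σ_j max over β*_{i,j}, Σ_k max over ζ*_{i,·,k} ).

    beta_star: {task j: [c_{j,l} of CS in β*_{i,j}]}, for all π_j < π_i
    zeta_star: {resource k: [c_{j,l} of CS in ζ*_{i,·,k}]}
    """
    per_task     = sum(max(cs) for cs in beta_star.values() if cs)
    per_resource = sum(max(cs) for cs in zeta_star.values() if cs)
    return min(per_task, per_resource)

# Hypothetical example: two lower-preemption-level tasks, two resources.
beta = {"tau2": [3, 5], "tau3": [2]}   # per task:     5 + 2 = 7
zeta = {"R1": [3, 2], "R2": [5]}       # per resource: 3 + 5 = 8
print(max_blocking_time(beta, zeta))   # 7
```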
Computation of Blocking Times
Computing βi,j = βDi,j ∪ βTi,j ∪ βPi,j (required for β∗i,j) actually
involves all CS zj,ℓ of τj that can block a job of τi via
direct blocking (βDi,j) - simple
transitive blocking (βTi,j) - complex
push-through blocking (βPi,j) - even more complex
Let βDTi,j = βDi,j ∪ βTi,j and consider the latter two . . .
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 192/209
Transitive Blocking
Recall example: 3 tasks τj , τh, τi with πj < πh < πi:
Large-deadline task τj acquires R1 in CS
Medium-deadline task τh has nested CS, acq. R2, R1
Small-deadline task τi acquires R2 in CS
⇒ Job Ji can be transitively blocked by job Jj
Incorporate this in βTi,j , for all τh with πj < πh < πi, via
θi,h,j = { zj,ℓ ∈ βDh,j : (Rj,ℓ = Rh,p) ∧ (zh,p ⊂ zh,q) ∧ (zh,q ∈ βDTi,h) }
Note that any CS in θi,h,j ⊆ βDTi,j will also cause
push-through blocking in βPh,j (next slide).
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 193/209
Push-Through Blocking Times
Recall example: 3 tasks τj , τh, τi with πj < πh, πi:
Large-deadline task τj blocks small-deadline τi (directly
or transitively) via some resources
τj then also blocks every medium-deadline τh via its
inherited active deadline from Ji
τh need not share any resource with τj or τi!
Incorporate this in βPh,j, for every πh > πj , by including all CS
∈ βDTi,j , for all τi with πi > πj via
βPh,j = ⋃_{i : πi > πj} βDTi,j =: ψj
As obviously ∀h : θi,h,j ⊆ ψj , we actually have βTi,j ⊆ βPi,j .
182.086 Real-Time Scheduling (http://ti.tuwien.ac.at/ecs/teaching/courses/rt sched) – p. 194/209
Push-Through Blocking Times (II)
In order to finally compute βi,j, we introduce:
Definition 195. Let σi be the set of resources accessed by jobs of τi.
For each j such that πj < πi, we get
βi,j = { zj,k : Rj,k ∈ σi } ∪ ψj
where
βDi,j = { zj,k : Rj,k ∈ σi } covers direct blocking
ψj covers both push-through and transitive blocking
Textbook: Algorithm for computing Bi via βi,j in O(cn³) time,
with c being the max. number of CS per task.
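Definition 195 can be sketched directly in code; the dict encodings of ψj, σi, and the CS-to-resource map are hypothetical (only pairs with πi > πj are assumed to be present in the β^DT table):

```python
def psi(j, beta_dt):
    """psi_j: union of beta^DT_{i,j} over all tasks tau_i with pi_i > pi_j.

    beta_dt[(i, j)] holds the set of CS names z_{j,l} in beta^DT_{i,j};
    only pairs with pi_i > pi_j are assumed present (hypothetical encoding).
    """
    return set().union(*[cs for (i, jj), cs in beta_dt.items() if jj == j])

def beta_ij(sigma_i, resource_of, j, beta_dt):
    """beta_{i,j} = {z_{j,k} : R_{j,k} in sigma_i} union psi_j (Def. 195).

    sigma_i     -- resources accessed by jobs of tau_i
    resource_of -- maps each CS z_{j,k} of tau_j to its resource R_{j,k}
    """
    direct = {z for z, r in resource_of.items() if r in sigma_i}
    return direct | psi(j, beta_dt)
```

Usage: with beta_dt = {(1, 3): {"z31"}, (2, 3): {"z32"}}, sigma_i = {"R3"} and resource_of = {"z31": "R1", "z32": "R2", "z33": "R3"}, the call beta_ij(...) yields the direct CS "z33" plus the push-through/transitive CS "z31" and "z32".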
Feasibility Check (I)
Assume now that
Bi is known for each τi
Tasks are ordered by decreasing preemption level (increasing relative deadlines), i.e., πi ≥ πj for i ≤ j.
Theorem 196. Given a set T of n periodic/sporadic tasks with Di ≤ Ti,
any job set generated by T is feasibly scheduled with EDF scheduling
and PIP, if
∑j=1..i−1 Cj/Dj + (Ci + Bi)/Di ≤ 1   for 1 ≤ i ≤ n.
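The sufficient condition of Theorem 196 is easy to check mechanically; a minimal sketch (function name and list encoding are our own choices):

```python
def feasible_with_blocking(C, D, B):
    """Sufficient feasibility test of Theorem 196 (EDF + PIP):
    sum_{j<i} C_j/D_j + (C_i + B_i)/D_i <= 1 must hold for every i.

    C, D, B -- WCETs, relative deadlines, and blocking times, with tasks
    already ordered by increasing relative deadline (decreasing
    preemption level).
    """
    for i in range(len(C)):
        load = sum(C[j] / D[j] for j in range(i)) + (C[i] + B[i]) / D[i]
        if load > 1:
            return False
    return True

# Example: two tasks; the small-deadline task can be blocked for 2 time units
print(feasible_with_blocking([1, 2], [4, 10], [2, 0]))  # -> True
```

In the example, the condition reads (1 + 2)/4 = 0.75 ≤ 1 for i = 1 and 1/4 + 2/10 = 0.45 ≤ 1 for i = 2. Being only sufficient, a failed check does not prove infeasibility.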
Feasibility Check (II)
Proof. By induction on i, we will prove that τ1, . . . , τi can be feasibly
scheduled, despite blocking from τi+1, . . . , τn.
Basis i = 1: Since
(C1 + B1)/D1 ≤ 1
and every job of τ1 can be blocked at most B1 time, Theorem 84 for
n = 1 [note that min{Ti, Di} = Di since Di ≤ Ti] ensures that τ1 is
feasibly schedulable.
Induction step: i−1 → i for i−1 ≥ 1: Assume that
the set of tasks τ1, . . . , τi−1 is feasibly schedulable, despite
blocking from τi, . . . , τn
the i-th condition also holds:
∑j=1..i−1 Cj/Dj + (Ci + Bi)/Di ≤ 1
Feasibility Check (III)
Proof. (cont.)
We have to show that τ1, . . . , τi is feasibly schedulable.
Choose any Jj,ℓ out of the first i tasks (with completion time fj,ℓ)
Let t ≤ rj,ℓ be the beginning of the deadline-dj,ℓ busy period
containing the release of Jj,ℓ.
In [t, fj,ℓ), only the following can be executed:
Jobs J(t, fj,ℓ) released at time ≥ t and with deadline ≤ dj,ℓ,
including Jj,ℓ (these do not cause blocking of Jj,ℓ)
CS of jobs of τh, h > j, that can cause blocking of Jj,ℓ; let
B(t, fj,ℓ) be the cumulative blocking time
Then,
fj,ℓ ≤ t + B(t, fj,ℓ) + ∑Jh,l∈J(t,fj,ℓ) Ch
Feasibility Check (IV)
Proof. (cont.)
Distinguish 2 cases:
If no job of τi occurs in J (t, fj,ℓ), then the resulting schedule is
identical to the one restricted to τ1, . . . , τi−1, which is feasible by
the induction hypothesis. Hence, fj,ℓ ≤ dj,ℓ as required.
Otherwise, at least one job of τi occurs in J(t, fj,ℓ):
Need to consider τi’s blocking by τi+1, . . . , τn, as this also
defers fj,ℓ (via ∑Jh,l∈J(t,fj,ℓ) Ch)!
All blocking times of jobs of the first i−1 tasks by τi+1, . . . , τn
are also push-through blocking for τi.
Hence, all blocking times are accounted for in Bi.
Feasibility Check (V)
Proof. (cont.)
Now construct a modified schedule (which need not respect critical sections):
Ignore blocking:
Schedule the first i−1 tasks without blocking by τi+1, . . . , τn
Additionally block some job(s) of τi by the (dropped) cumulated
blocking time (in addition to the natural blocking of τi by
τi+1, . . . , τn)
Since all these blocking times are accounted for in Bi, and the i-th
condition holds, the modified schedule is feasible by Theorem 84
The original schedule must also be feasible, since
the modification above did not change fj,ℓ
the loading factor in [t, fj,ℓ] did not change (i.e., it remains ≤ 1)
The Priority Ceiling Protocol for EDF
PIP’s major problem:
High priority job may be blocked at every CS
⇒ Chained blocking
The Priority Ceiling Protocol (PCP) for EDF:
∀R, let c(R) = c(R, t) be the ceiling of R at time t:
c(R) = mini { di,k | Ji,k will acquire or did not yet release R }
Let RH be the resource with the smallest ceiling among all
resources acquired at time t (by jobs ∉ τi)
Ji can enter and hold a CS only if di(t) < c(RH); otherwise Ji
blocks outside the CS, and JH, the job holding RH, inherits Ji’s
deadline.
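The PCP admission test above can be sketched as follows; the encoding of the protocol state (set of resources held by other jobs, ceiling table) is hypothetical:

```python
def may_enter(d_i, held_by_others, ceilings):
    """PCP admission test for EDF: job J_i with active deadline d_i may
    enter (and hold) a CS only if d_i < c(R_H), where R_H is the resource
    with the smallest ceiling among those currently held by other jobs.

    held_by_others -- resources acquired at time t by jobs not of tau_i
    ceilings[R]    -- current ceiling c(R, t)
    (hypothetical encoding of the protocol state)
    """
    if not held_by_others:
        return True  # nothing is locked by other jobs: free to enter
    c_RH = min(ceilings[r] for r in held_by_others)
    return d_i < c_RH
```

E.g., with ceilings {"R1": 5, "R2": 8} and both resources held by other jobs, a job with active deadline 4 may enter, while one with deadline 6 must block outside its CS.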
The Dynamic Priority Ceiling Protocol (II)
How the PCP works:
Let JH be a small-deadline job, with deadline dH
At most one lower-priority job JL may enter a CS Rk and block JH
later on, since c(RH) = c(Rk) = dH
Rk’s ceiling cannot become larger before JL leaves the CS
Subsequently, only jobs with deadline less than dH may enter any CS
Properties of the Priority Ceiling Protocol (PCP):
Any job is blocked at most once, when it enters its first
critical section
No deadlock, regardless of locking order
EDF Scheduling with Precedence Constraints (Ch. 7)
Precedence Constraints
Motivation:
Results of jobs may be inputs to other jobs
if Ji’s result is input to Jj, this induces a partial order Ji ≺ Jj
Ji and Jj must be scheduled in this order
We assume Ji ≺ Jj ⇒ ri ≤ rj for simplicity (because
otherwise EDF∗ would start executing Jj since it does
not yet know Ji)
Upcoming results:
Simple techniques to handle precedence constraints
[With additional shared resources]
EDF and Precedence Constraints
EDF can easily deal with precedence constraints
Idea: Modify active deadlines to also reflect precedences
Ensure that a job that causally depends on Ji has a larger deadline
We already introduced the terms nominal and active deadlines for a job Ji:
the nominal deadline di is the original deadline of Ji
the active deadline d∗i(t) of Ji is updated during execution
EDF∗ (I)
The algorithm:
For each job Ji, compute the active deadline d∗i as
d∗i = min{ di , minh{dh − Ch : Ji ≺ Jh} }
Schedule jobs using EDF, where ties are broken according to
precedence: If d∗i = d∗j, then
schedule Ji before Jj if Ji ≺ Jj
arbitrary otherwise
Lemma 206. It holds that Ji ≺ Jj ⇒ d∗i ≤ d∗j .
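The active-deadline computation of EDF∗ can be sketched as below; jobs are given as a precedence DAG via an adjacency dict (a hypothetical encoding), and the min runs over all transitive successors, as justified on the next slide:

```python
def edf_star_deadlines(d, C, succ):
    """EDF*: d*_i = min(d_i, min{d_h - C_h : J_i prec J_h}), where the min
    runs over all transitive successors J_h of J_i.

    d, C    -- nominal deadline and WCET per job id
    succ[i] -- immediate successors of J_i (hypothetical adjacency encoding)
    """
    def successors(i):
        # transitive closure of the precedence relation, starting from J_i
        seen, stack = set(), list(succ.get(i, ()))
        while stack:
            h = stack.pop()
            if h not in seen:
                seen.add(h)
                stack.extend(succ.get(h, ()))
        return seen

    return {i: min([d[i]] + [d[h] - C[h] for h in successors(i)]) for i in d}
```

For the chain J1 ≺ J2 ≺ J3 with d = (10, 9, 8) and C = (1, 2, 3), this yields d∗3 = 8, d∗2 = min(9, 8−3) = 5, and d∗1 = min(10, 9−2, 8−3) = 5; note that d∗1 = d∗2 illustrates why the lemma below only gives d∗i ≤ d∗j.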
EDF∗ (II)
Proof. Let mini = minh{dh − Ch : Ji ≺ Jh}. If Ji ≺ Jj, we have:
mini ≤ minj, since ≺ is transitive:
(Jj ≺ Jh) ∧ (Ji ≺ Jj) ⇒ Ji ≺ Jh (this also justifies using the
nominal deadlines dh in mini)
mini ≤ dj − Cj < dj
Hence,
d∗i ≤ mini < dj
d∗i ≤ mini ≤ minj
⇒ d∗i ≤ min{dj, minj} = d∗j as asserted
Note that d∗i = d∗j is possible if both Ji ≺ Jh and Jj ≺ Jh for some
low-deadline Jh (error in textbook)
EDF∗ (III)
Theorem 208. EDF∗ is optimal, i.e., EDF∗ finds a feasible schedule if
there exists one.
Proof. Assume that there is a feasible and precedence-compliant
non-EDF∗ schedule. We transform it into a feasible EDF∗ schedule:
Assume Jj is scheduled (in part) before Ji, yet d∗i < d∗j
From d∗i < d∗j, it follows that Jj ⊀ Ji by the previous lemma
From executing Jj before Ji finishes, it follows that Ji ⊀ Jj
Hence: There cannot be a precedence relation between Ji and Jj
EDF∗ (IV)
Proof. (cont.) No precedence relation between Ji and Jj :
Let ti resp. tj be the first time Ji resp. Jj is executed
Swap the maximum-length execution segments in [max{tj, ri}, fj) and
[ti, fi), i.e., execute Ji rather than Jj during this time
the completion time of Ji is shortened
the completion time of Jj is f′j = max{fi, fj} ≤ dj, since
d∗i < d∗j ≤ dj
This produces a feasible schedule
A finite number of such inversions eventually yields an EDF∗-compliant
schedule.