
ADAPTING AND EVALUATING ALGORITHMS FOR

DYNAMIC SCHEDULABILITY TESTING

February 1994

Charlie McElhone

Department of Computer Science

University of York, England.

ABSTRACT

This report describes an investigation into methods of dynamic schedulability testing. A sporadic task which arrives at a processor must be either rejected or guaranteed to be schedulable alongside the set of tasks already executing on the processor. Algorithms already exist for the static schedulability testing of a task set before run-time. This report describes how these may be adapted to make use of dynamic scheduling data and thus provide an optimal schedulability test for incoming sporadic tasks. The adapted algorithms trade off complexity against pessimism. Their performance may be improved by combining them in a hybrid algorithm. Further performance improvements may be made in all of the algorithms by inserting a timeout to limit their worst-case execution time. It is found that the hybrid algorithm still consistently outperforms all of the other adapted algorithms. The hybrid algorithm is then chosen for an investigation into the parameters which determine the optimum value of the performance-enhancing timeout. Finally, this optimum value is used in an investigation into the effect on total processor utilisation of changing the proportions of periodic and sporadic utilisation.


1. INTRODUCTION

Many real-time applications require distributed processors running tasks which must meet hard deadlines. According to the Spring Project [11], such real-time systems will be complex and will need to adapt to environmental changes, maintenance requirements, system faults, etc. Consequently, dynamic scheduling is required in order to provide load-sharing and a flexible use of processors. A processor should therefore be able to schedule not only its resident tasks but also any sporadic tasks which arrive from the environment or other parts of the system. The problem of scheduling both sporadic and resident periodic tasks has been tackled at various levels of sophistication. The crudest method is simply to run the sporadics in the background with no regard to their criticality. Better is to introduce a polling task which runs periodically and uses its large capacity to service sporadics. All tasks can then be scheduled, for example, according to the rate monotonic scheduling algorithm of Liu and Layland [7]. Obviously, a sporadic which arrives just after the polling task has run will have to wait a maximum time for it to run again. Improvements on this are the sporadic server due to Sprunt [9,10] and the deferrable server due to Strosnider [14], which allow the server task's capacity to be used throughout the server's period. Each of these has a different schedulability analysis which must be carried out if the sporadic is to be guaranteed to meet a hard deadline. However, the rate monotonic approach assumed by these methods is not optimal when a task's deadline is less than its period.

The Spring Project has a different method of dealing with sporadic tasks. A Spring node has a set of resident periodic tasks which are guaranteed schedulable before run-time. A sporadic task with a hard deadline which arrives at a Spring node will be tested for schedulability along with the resident set of (hard) periodic tasks. The node's guarantee algorithm [12] will use run-time data to attempt to construct a schedule which includes the extra task, and may also take into account resources and task precedence constraints. If a schedule cannot be constructed, the sporadic cannot be guaranteed, and an attempt may be made to offload the task to another processor which may be able to guarantee it. The drawback is the large overhead introduced by the construction of a schedule which guarantees the sporadic.

Other approaches are aimed at fixed-priority tasks in deadline monotonic ordering, which Leung and Whitehead [6] showed to be optimal even when a task's deadline is less than its period. For example, Lehoczky and Ramos-Thuel [3] have developed a slack stealing algorithm. When a sporadic task arrives at a node, the algorithm tries to steal time from the resident tasks without causing their deadlines to be missed. The slack stealer provides optimal service for sporadic tasks by delaying the periodic tasks as long as possible, until they have just sufficient time left to meet their deadlines. Lehoczky and Ramos-Thuel [4] have extended their slack stealer to guarantee sporadics with hard deadlines. As with Spring, the problem is that the overhead for the guarantee test is considerable.

Audsley [1,2] has developed two static schedulability tests for a task set with fixed priorities. The tests make the worst-case critical instant assumption that all tasks are released simultaneously. Both tests guarantee tasks, but one test is pessimistic and simple (O(N²)) while the other is optimal and pseudo-polynomial (henceforth referred to as PP). This report details an attempt to adapt Audsley's algorithms for dynamic use. The intention is to provide a rival dynamic schedulability test to that of Spring or Lehoczky. However, it is hoped that the overheads incurred by this test will be more acceptable.


2. SCHEDULABILITY TESTING ALGORITHMS

Pre-emptive fixed-priority scheduling is widely used in real-time systems due to its combination of predictability and flexibility. It simplifies real-time kernels and its use of priorities facilitates bounds on task blocking. Furthermore, the priority scheme has been chosen by the programming language Ada. The problem addressed in this report is how to provide the computationally cheapest method of guaranteeing the schedulability of a set of N fixed-priority tasks. To simplify the problem, all tasks are assumed to be independent. Guarantees must be performed dynamically in order for the resident task set to include a newly arrived sporadic. The simplest algorithm which could be hoped for is O(N), but no reference was found to an algorithm of this low complexity which would guarantee task deadlines. The cumulative nature of the interference of higher priority tasks with a lower priority task implies that an accurate method would be at least O(N²). Pessimistic methods could be used which assume the worst-case critical-instant release of all tasks, but these would be so pessimistic as to nullify the advantages of dynamic testing. Audsley [1] discusses two algorithms for static schedulability testing, which shall be referred to by their complexity: O(N²) and PP.

For O(N²), Audsley assumes a set of tasks ranked in priority order according to deadline monotonic. The period (T), deadline (D) and worst-case execution time (C) of each task are known. It is assumed that all tasks are released simultaneously (worst-case critical instant). If B is the worst-case blocking time a task may experience due to the operation of some concurrency control protocol, and I is the worst-case interference a task may suffer from higher priority tasks, then for any task to be schedulable:

D >= C + B + I (1)

The determination of C and B is beyond the scope of this report. Audsley presents an O(N²) algorithm for the determination of I over the duration of the deadline of whichever task (the test task i) is being schedulability tested. Obviously this may include interference which does not occur during the elapsed execution time of the test task. Hence this test is sufficient but not necessary. The list of higher priority tasks is scanned to provide the following sum, which is the total interference from all higher priority tasks j:

∑j ⌈Di/Tj⌉ Cj    (2)

Inequality (1) may then be used to test the schedulability of the test task. Note this is pessimistic, since the final interference by a higher priority task j may not be the full value of that task's computation time. Potentially every task in the list may need to take a turn as the test task, so the complexity of the algorithm is O(N²).
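As an illustrative sketch of how the O(N²) test might be coded (the function name and tuple layout are mine, not Audsley's), the following Python applies inequality (1) with the interference sum (2) to each task in deadline monotonic order:

```python
import math

def static_test(tasks):
    """Pessimistic O(N^2) static test: sufficient but not necessary.

    tasks: list of (C, D, T, B) tuples in deadline monotonic order
    (index 0 is the highest priority).
    """
    for i, (Ci, Di, Ti, Bi) in enumerate(tasks):
        # Sum (2): every release of a higher priority task j within
        # [0, Di) is counted in full -- the source of the pessimism.
        I = sum(math.ceil(Di / Tj) * Cj for (Cj, _Dj, Tj, _Bj) in tasks[:i])
        if Di < Ci + Bi + I:      # inequality (1) fails
            return False
    return True
```

For example, the two-task set {(C=1, D=T=4), (C=1, D=T=8)} passes, while raising both computation times to 3 makes the lower priority task fail the test.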

The PP algorithm is not pessimistic because it accurately calculates the total interference of all higher priority tasks during the course of the test task's execution. It therefore more finely calculates the response time of the test task. The algorithm proceeds by repeatedly increasing the test task's window (wi) in which higher priority tasks interfere. At each iteration the following sum over all higher priority tasks j is calculated:

∑j ⌈wi/Tj⌉ Cj    (3)


The initial value of the window is the worst-case computation time of the test task. The window size at the next iteration will be the value of sum (3) from the last iteration, and so on, until the window size does not increase. Audsley shows that the algorithm will converge if processor utilisation is less than 100%. This convergence yields the total interference required. In general the algorithm is pseudo-polynomial due to the difficulty in determining the number of iterations required. However, Audsley [2] points out that any particular test task deadline (assumed to be an integer number of ticks) will provide an upper bound on the number of iterations required. Unlike O(N²), the algorithm is optimal [6]. As before, the algorithm is repeated for all tasks in the list. Because the tasks are tested in deadline monotonic order, the algorithm can be sped up by using the final I value obtained for the ith test task as the initial value of w for the (i+1)th test task.
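A minimal Python sketch of the window iteration follows. It assumes, as is standard in response-time analysis, that the test task's own computation time is added to the interference of sum (3) at every iteration; the tuple layout is my own:

```python
import math

def response_time(i, tasks):
    """Window iteration for test task i; tasks are (C, D, T) tuples
    in priority order (index 0 highest).  Returns the converged
    response time, or None if the window exceeds the deadline first.
    """
    Ci, Di, _Ti = tasks[i]
    w = Ci                                    # initial window
    while True:
        # Ci plus sum (3): interference of higher priority tasks in [0, w)
        new_w = Ci + sum(math.ceil(w / Tj) * Cj
                         for (Cj, _Dj, Tj) in tasks[:i])
        if new_w > Di:
            return None                       # deadline missed
        if new_w == w:
            return w                          # window stable: converged
        w = new_w
```

For the set {(1, 4, 4), (2, 6, 6), (3, 12, 12)} the lowest priority task converges to a response time of 10, within its deadline of 12.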

3. ADAPTATIONS FOR DYNAMIC TESTING

Both O(N²) and PP can be adapted for dynamic schedulability testing by similar changes to the above equations. A more rigorous explanation of the changes may be seen in the Appendix. The requirement is to guarantee the deadline of an incoming sporadic which arrives, at some arbitrary time, with a known computation requirement, and is inserted in the deadline monotonically ordered task list. Note that all those tasks which fall below the sporadic in the list must also be schedulability tested to ensure they can still meet their deadlines. Both adapted algorithms avail themselves of the run-time data which is updated by the scheduler for all tasks: Rj, the current residual execution time of each higher priority task j, and NRj, the next release time of each task j. When a sporadic arrives, the schedulability tester uses the NRj for each task j to calculate the offset, Oj, of its next release, using the formula: Oj = NRj - current time. The changes required in (2) and (3) above allow for the following dynamic properties:

(i) A higher priority task cannot interfere with the test task until that higher priority task has been released.

(ii) If the next release of the higher priority task is after the expiry of the interference interval under consideration, a zero, not a negative, interference value must be produced.

(iii) Any residual execution of an interfering task must be added to that task's total interference with the test task.

In line with these, equations (2) and (3) above are adapted to:

(i) Reduce, by the offset Oj, the interval (Di, wi) considered for interference by a higher priority task j.

(ii) Ensure that the result (Di - Oj, wi - Oj) is not negative.

(iii) Add Rj.

Hence (2) becomes :

∑j ( ⌈(Di - Oj)/Tj⌉₀ Cj + Rj )    (4)

where ⌈X⌉₀ (i) returns 0 if X <= 0, and (ii) returns ⌈X⌉ if X > 0


and (3) becomes :

∑j ( ⌈(wi - Oj)/Tj⌉₀ Cj + Rj )    (5)

Apart from this, the algorithms proceed as in the static case, except that there is no need to schedulability test the tasks which are above the sporadic in priority ordering. When testing the sporadic using the above equations, (4) will use the sporadic's deadline for Di, and (5) will initialise wi to the sporadic's worst-case execution time.

Because the sporadic is a one-off, each lower task need only be tested against its next deadline. If the lower test task is active (i.e. non-zero residual execution time), then (4) will use the remainder of the deadline for Di and (5) will initialise wi to the residual execution time (Ri) of the test task. Note that in the dynamic case the remaining interference intervals do not necessarily increase monotonically down the task list, and therefore a final ith window value cannot be used to initialise the (i+1)th test task. If the lower test task is inactive (i.e. it has completed its current execution and is awaiting its next release), then we must check against the deadline of the task's next activation. Strictly, we should calculate interference in an interval starting at the test task's next release. However, this future data is not yet known by the scheduler, and to calculate it would incur unacceptable overheads. It is sufficient (see Appendix) to suppose that the next release of the test task is at the current time, and to test against the next deadline, i.e. the deadline has effectively been increased by the quantity: next release time - current time.
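The adapted PP test for a single test task can be sketched as follows, using sum (5) with the clamped ceiling; the function names and tuple layout are illustrative, not the report's:

```python
import math

def clamp0(x):
    """The subscript-0 operator: arguments of zero or less yield 0."""
    return x if x > 0 else 0

def dynamic_interference(w, higher):
    """Sum (5): interference in a window of length w starting now.

    higher: one (C, T, O, R) tuple per higher priority task --
    computation time, period, offset to next release, residual
    execution time.
    """
    return sum(math.ceil(clamp0(w - O) / T) * C + R
               for (C, T, O, R) in higher)

def dynamic_pp_test(Ci, Di, higher):
    """Window iteration against the dynamic interference of sum (5):
    True if the test task completes within Di of the current time."""
    w = Ci
    while True:
        new_w = Ci + dynamic_interference(w, higher)
        if new_w > Di:
            return False
        if new_w == w:
            return True
        w = new_w
```

A task needing 4 units against one higher priority task (C=2, T=10, O=3, R=1) converges to a window of 7, so it is guaranteed for a deadline of 12 but rejected for a deadline of 6.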

4. FURTHER VARIATIONS

Both static and dynamic O(N²) and PP algorithms trade off complexity against accuracy. In the dynamic case O(N²) will be pessimistic but quicker. Therefore, if the schedulability testing is on the same processor as the resident periodic task set, then O(N²) will allow more time for sporadics but may reject some sporadics which are schedulable. By contrast, the PP test takes (indefinitely) longer to arrive at an optimal result. On the same processor it would therefore leave less time for sporadics, but never pessimistically reject a schedulable sporadic. A more efficient algorithm may be to combine O(N²) and PP in a hybrid algorithm. All schedulability testing is performed by O(N²) until a task is found to be unschedulable; then PP is used to make a finer judgement on schedulability. Such a hybrid algorithm should be both optimal and faster than pure PP.
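The hybrid scheme can be expressed in a few lines; here `quick_test` and `exact_test` are placeholder parameters standing in for the adapted O(N²) and PP tests respectively, not the report's actual routines:

```python
def hybrid_test(tasks, quick_test, exact_test):
    """Hybrid scheme: run the cheap pessimistic test on every task,
    falling back to the exact (PP-style) test only where it fails.
    """
    for task in tasks:
        # The quick test may reject schedulable tasks; only the
        # exact test's rejection is definitive.
        if not quick_test(task) and not exact_test(task):
            return False
    return True
```

Because the exact test runs only on the (usually few) tasks the quick test rejects, the hybrid keeps the optimality of the exact test while approaching the speed of the quick one.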

Another variation would be to reverse the order of schedulability testing: the lowest priority task is tested first, working up the task list until the sporadic itself is schedulability tested. For a schedulable sporadic this would take the same time as the top-down order used previously. However, it may be that unschedulable sporadics are detected earlier. This will depend on where in the task list the unschedulable tasks are likely to occur. At one extreme (justifying top-down) only the sporadic may test as unschedulable, while lower tasks pass their tests. At the other extreme (justifying bottom-up), all tasks (including the sporadic) may be schedulable except the lowest in the list. In other words, if the unschedulable tasks are more likely to be found nearer the bottom of the task list, then bottom-up testing will on average be faster. To summarise, five dynamic schedulability tests are proposed:

(pure) O(N²)
(pure) PP
hybrid (O(N²)/PP)
bottom-up PP
bottom-up hybrid.

5. SIMULATION STUDIES

The above discussion points to further investigation of the statistical behaviour of all five adapted algorithms. The most cost-effective way of doing this seemed to be to build a simulation of a scheduler and schedulability tester which would input a large variety of periodic task sets and sporadic requests. It was decided that the schedulability testing part of the simulation would run in real time in order to measure exactly the overheads incurred by each algorithm. The scheduler itself ran as a simulation.

5.1 The Scheduling Model Adopted

It was decided, for the sake of flexibility and ease of later implementation, to adopt a co-operative, fixed-priority pre-emptive scheduling model. The co-operative nature of the scheduler would allow regular checking of sporadic arrivals without the overhead of descheduling currently executing tasks. A sufficiently small granularity (10ms) could be chosen for the scheduler slot size in order to minimise delay in testing sporadic arrivals and also to minimise release jitter. An even smaller slot size might have unreasonably increased the co-operative scheduling overhead. It was decided to allow only one sporadic arrival to be schedulability tested at the beginning of every tenth slot, in order to permit a smaller upper bound for schedulability testing.

The following is an outline of the scheduler/schedulability tester (henceforth the scheduler-tester) program. At the start of each slot the scheduler-tester checks for the arrival of a sporadic and, if one is present, tests its schedulability. If the sporadic is schedulable it is inserted in the task list (dispatch queue) in deadline monotonic order. The scheduler-tester then releases any periodic tasks whose reactivations are due, updates next release times and residual execution times, and finally dispatches the topmost runnable task. The scheduler-tester is also invoked in mid-slot when a task completes its execution. In this case the remainder of the slot is allocated to the next topmost runnable task. An indefinite number of sporadics may accumulate in the task list until each completes and is then deleted. The scheduler-tester also verifies that all guaranteed tasks actually complete within their deadlines.
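The slot structure described above might be sketched as follows; this is a hypothetical outline (the dict layout, field names and `test` hook are mine), not the simulator's actual code:

```python
def run_slot(state):
    """One slot of the scheduler-tester loop (an illustrative sketch).

    state holds 'task_list' (the deadline-monotonic dispatch queue),
    an optional 'pending_sporadic', and 'test', a guarantee function
    standing in for one of the five adapted schedulability tests.
    """
    sporadic = state.pop('pending_sporadic', None)
    if sporadic is not None and state['test'](sporadic, state['task_list']):
        # A guaranteed sporadic joins the dispatch queue in deadline order.
        state['task_list'].append(sporadic)
        state['task_list'].sort(key=lambda t: t['deadline'])
    # (Releases of due periodics, and bookkeeping of next-release and
    # residual execution times, would happen here.)
    # Dispatch the topmost runnable task, if any.
    return state['task_list'][0] if state['task_list'] else None
```

A rejected sporadic is simply discarded; a guaranteed one competes with the periodics from the next dispatch onwards.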

The co-operative scheduling overheads were simulated using the average values obtained from a feasibility study. These were (i) the minimum co-operative scheduling overhead (0.15ms) and (ii) the extra overhead per task release (0.06ms). Five versions of the simulation were built, one for each of the adapted algorithms.

5.2 Task Generators

A task-set generator was constructed to produce large numbers of schedulable sets of periodic tasks. All tasks were independent in order to simplify analysis. The co-operative scheduler was modelled as the highest priority periodic with a period of 10ms. The schedulability test was modelled separately as the second highest periodic with a period of 100ms, which was equal to the inverse of the maximum sporadic arrival rate. (The adoption of a maximum sporadic arrival rate is a pre-requisite for finding an upper bound on schedulability testing.) The generator produced random task sets with task periods, deadlines and (worst-case) computation times all uniformly distributed. Different numbers of periodics and different periodic processor utilisations were specified.


Sporadic task generators were constructed to produce random sporadic arrival times, deadlines and (worst-case) computation times. These parameters were either (i) all uniformly distributed, or (ii) arrival rates were Poisson, with deadlines and computation times normal. The generator allowed a minimum interarrival period for sporadics to be specified.

5.3 Measuring Performance

The first aim of the simulation study was to compare the performance of the five adapted algorithms. The best performance index seemed to be the guarantee ratio as used in Spring [13]. This is a ratio obtained over a complete simulation run:

number of sporadics guaranteed by the algorithm / total number of sporadics sent

Before guarantee ratios can be measured by a simulation, an upper bound for the time taken to run the schedulability test must be estimated. This will be the worst-case execution time for the high priority periodic which models schedulability testing. The task set generator uses this value when performing its own (static) schedulability test of the task sets which it generates. There is obviously a trade-off between the pessimism of this value and the time left for other tasks. It is a 'chicken and egg' problem: some working value of this bound/computation time must be used to generate the first schedulable task sets, which can then be used in a run to yield a better value for the bound. In practice, the approximate maximum values from a feasibility study were used to generate the first task sets. Simulation runs then allowed these values to be refined. It was found that algorithms based on PP are especially difficult to upper bound. Obviously, a particular maximum value is peculiar to a particular set of test data, and the question arises as to which maximum to use in practice. For example, is it overly pessimistic to use the highest value which has ever been obtained for a particular algorithm? This problem will be addressed later (see Section 7). Meanwhile, the general practice adopted is to use the maximum value for the particular set of test data used.

6. COMPARING THE ADAPTED ALGORITHMS

There are a number of parameters which affect the performance of all of the adapted algorithms. The most obvious is the number of periodic tasks in the periodic task set (i.e. N above). Related to this is the ratio:

average periodic deadline / average sporadic deadline (henceforth: PD/SD)

which determines the average position in the task list in which a sporadic will be placed. All tasks below the sporadic must be schedulability tested, so this ratio is an important factor in the actual time taken by the algorithms. Other parameters are the total periodic processor utilisation, and the intrinsic difficulty of scheduling a particular set of periodics. This latter parameter is referred to by Lehoczky [3]. Task sets whose periods are not harmonics have a relatively low breakdown utilisation, i.e. their uneven occurrence of slack time means that there are intervals of zero slack time during which no sporadics can be scheduled. Further parameters which may affect performance are sporadic arrival rates, and the average computation time per sporadic. For example, the average sporadic computation time may affect the number of iterations required to test individual tasks in the PP algorithm. The approach taken in the following investigations is to keep all parameters constant except one, and to measure the performance of the five algorithms whilst varying the single chosen parameter.

6.1 Varying the Periodic Characteristics

Table 1, Graph 1 and Table 2, all below, show the comparative performance of the five algorithms and two background scheduling methods, when characteristics of the periodic task set only are varied. The background scheduling methods accept all sporadics and execute them at the lowest priority in FIFO order. Their performance can be measured by a success ratio, i.e. the proportion of all the sporadics which are found to meet their deadlines. It should be noted that this background scheduling does not schedulability test sporadics and therefore does not guarantee them. In that sense it is not strictly comparable with the other five algorithms and serves only as a benchmark.

All simulation results shown in these tables and graph use the same set of 420 sporadics whose arrival rates are Poisson distributed (µ = 2.8, k = 10) over the total simulation time of 100,000 ms. The sporadic deadlines and computation times are normally distributed.

Table 1 shows the maximum schedulability test times (in ms) and guarantee ratios obtained from each of the schedulability test algorithms as the number of periodics (N) in the task set is increased. The PD/SD ratio and the number of tasks below the sporadic position also increase with N. The table also shows the success ratios (SR) obtained from two versions of background scheduling of the sporadics in FIFO order. (The difference between these versions is explained in the paragraph below.) Each result in Table 1 is the overall output of 10 simulation runs with 10 different sets of random periodics. Maximum values are the maximum from all 10 runs and guarantee ratios are the average for 10 runs. The periodic processor utilisation is always 85%, which includes scheduling overheads for the periodic tasks, but does not include any utilisation by the periodic which models schedulability testing. As explained earlier, the maximum schedulability test time (upper bound) must first be established before a simulation can produce a meaningful guarantee ratio. For high N the maximum schedulability test time can be greater than the scheduler slot size (10ms). It was decided not to allow the schedulability test to overrun the slot size, because this would violate the scheduler's upper bound on release jitter. In any case, more than 10ms constitutes an unacceptably high overhead for worst-case schedulability testing. Therefore, a timeout was placed in the scheduler-tester, which thus rejects sporadic requests taking more than 10ms to test. Obviously this impacts on the guarantee ratios shown in Table 1, but it is only significant when the maximum value (shown in brackets) is considerably more than the slot size of 10ms.

Two versions of background scheduling of the sporadics in FIFO order are included at the end of the table. It should be emphasised that neither version guarantees the sporadics in advance. Instead, they accept all the sporadics without the overhead of testing their schedulability. Hence the performance measure is better described as a success ratio, which is the proportion of the sporadics which turn out to meet their deadlines. The difference between the two versions of background scheduling is that Background 1 is strictly FIFO. It continues to queue, and then execute, all sporadics, even when their deadlines have expired. Background 2, however, deletes sporadics when their deadlines are found to have expired.

Table 1 shows that bottom-up hybrid consistently performs best. Clearly the guarantee ratios of the PP based algorithms are badly affected by the 10ms timeout when N = 20


| No. of periodics | PD/SD ratio | Ave no. of tasks below sporadic | Bottom-up Hybrid Max(ms) / GR | Hybrid Max(ms) / GR | Bottom-up PP Max(ms) / GR | PP Max(ms) / GR | O(N²) Max(ms) / GR | Background (FIFO) 1 SR | Background (FIFO) 2 SR |
|---|---|---|---|---|---|---|---|---|---|
| 5 | 0.340 | 0.0 | 3.53 / 0.742 | 3.80 / 0.737 | 3.41 / 0.742 | 3.80 / 0.734 | 0.53 / 0.424 | 0.170 | 0.700 |
| 10 | 0.627 | 3.1 | 6.30 / 0.727 | 6.70 / 0.719 | 7.50 / 0.699 | 8.00 / 0.672 | 1.34 / 0.606 | 0.144 | 0.649 |
| 15 | 0.808 | 4.4 | 8.80 / 0.710 | 9.20 / 0.695 | 10.00 (13.20) / 0.636 | 10.00 (15.50) / 0.580 | 2.76 / 0.585 | 0.128 | 0.624 |
| 20 | 1.017 | 7.5 | 10.00 (11.80) / 0.686 | 10.00 (14.00) / 0.659 | 10.00 (24.60) / 0.255 | 10.00 (25.70) / 0.173 | 4.00 / 0.633 | 0.099 | 0.584 |
| 30 | 1.393 | 13.0 | 10.00 (20.00) / 0.576 | 10.00 (25.00) / 0.529 | 10.00 (68.00) / 0.00 | 10.00 (69.00) / 0.00 | 9.80 / 0.547 | 0.085 | 0.555 |

Each algorithm entry gives the maximum schedulability test time in ms (with the untruncated maximum in brackets where the 10ms timeout applied), followed by the guarantee ratio (GR); the Background (FIFO) columns give success ratios (SR).

Table 1: Comparing performances when changing the number of periodic tasks


and N = 30. This explains why O(N²) produces the second best guarantee ratio when N = 30. Variations within the guarantee ratios obtained by O(N²) may be explained as follows. The deterioration in guarantee ratio between N = 20 and N = 30 is as expected, due to the increased number of schedulability tests needed for a longer task list. The small guarantee ratio value for N = 5 can be interpreted as the effect of the pessimism of the algorithm. For N = 5, all the periodic computation is above the sporadic in priority order. Therefore the pessimism of O(N²) due to the full extra hits of higher priority tasks is likely to be greater. The increase in the total laxity of the task set as N increases may also account for a greater guarantee ratio when N = 30 than N = 5. The shallow trough in guarantee ratio at N = 15 may be due to statistical variations: it was observed that guarantee ratios for O(N²) had a particularly wide standard deviation over the 10 task sets (approximately 10% of guarantee ratio).

Table 1 also shows the success ratios for the background methods. The low success ratios for Background 1 show the effect of continuing to queue and execute sporadics even after their deadlines have expired. Both background versions show a deterioration in success ratio as the number of periodics in the task list increases. This can be interpreted as the effect of the sporadics occupying less and less optimal positions in the task list as the periodic task list grows. For example, when N = 5, sporadics with an average deadline of 550 ms are being queued beneath periodics with a maximum deadline of 500 ms. When N = 30, however, the same sporadics queue below periodics with a maximum deadline of 3000 ms. It is clear that such a long periodic task list displaces the sporadics further downward from their optimal deadline monotonic position in the task list. Because the strictly FIFO Background 1 method gives such low success ratios, it was decided to omit it from the rest of the simulation studies. From now on only Background 2 is included in the results and it is simply referred to as 'Background'.

[Graph 1 (figure): guarantee ratio (y-axis, 0.6 to 1.0) plotted against periodic utilisation % (x-axis, 65 to 85) for Bottom-up Hybrid, Hybrid, Bottom-up PP, PP, O(N²) and Background (success ratio).]

Graph 1: Comparing performance with increasing periodic utilisation.

Graph 1 shows the comparative performance of the five algorithms and background scheduling when the periodic utilisation is varied from 65% up to 85%. Lower utilisations were not used because they cause the performance of all of the algorithms to converge at a guarantee ratio of 1.0. In this case N and PD/SD were kept constant (N = 10 and PD/SD as near 0.6 as random task set generation would allow). The sporadics were the same as in Table 1, and 10 sets of periodics were randomly generated for each processor utilisation in the graph. As before, the schedulability testing was not included in the total periodic utilisations. However, as before, the task sets generated were all statically schedulability tested using a worst-case figure of 10ms for the computation time of the periodic which models the dynamic schedulability test. Again a less pessimistic maximum schedulability test time was found, for each set of test data, by repeating each simulation and revising the maximum schedulability test time. Each guarantee ratio and success ratio produced is an average over 10 periodic task sets. It can be seen that the success ratio of background exceeds the guarantee ratio of O(N²) at high periodic utilisations. This can be interpreted as the effect of the pessimism of O(N²) increasing as the interferences from higher priority periodics grow larger in size (though not in number).

Table 2 compares the performance of the algorithms across a variety of periodic task sets. Periodic task set (1) is adapted from an avionics case study developed by Locke et al. [8]. It consists of 15 periodic tasks with a wide range of periods, from 250 ms to 10000 ms. Periodic task set (2) has a set of periodics with a low breakdown utilisation (80%), and task set (3) has a high breakdown utilisation (100%). Task sets (2) and (3) evaluate the performance of the algorithms with task sets which are intrinsically difficult to schedule (2) and easy to schedule (3). All three task sets have a periodic utilisation of 80% and are sent the same sporadics as previously. Task set (1) has 10 tasks below the average sporadic position in the task list, while task set (2) has 3 tasks below and (3) has 2 tasks below. It is worth noting that all the periodic tasks for Table 2 are rate monotonic (i.e. deadline = period) and have relatively large amounts of slack time associated with them.

Sched Test                 Bottom-up Hybrid  Hybrid           Bottom-up PP       PP                 O(N2)            Background
Algorithm                  Max(ms)   GR      Max(ms)   GR     Max(ms)    GR      Max(ms)    GR      Max(ms)  GR      SR

1. Avionics Case Study     8.80      0.798   9.50      0.786  (>)10.00   0.414   (>)10.00   0.369   2.76     0.762   0.667
   Task Set
2. Low Breakdown           3.53      0.824   3.80      0.824  3.41       0.824   3.80       0.824   0.53     0.783   0.652
   Utilisation Task Set
3. High Breakdown          3.53      0.945   3.80      0.945  3.41       0.943   3.80       0.941   0.53     0.926   0.498
   Utilisation Task Set

Table 2: Comparing performances over a variety of periodic task sets.

Clearly Bottom-up Hybrid consistently outperforms the other algorithms across the variety of periodic task sets. Once again the maximum schedulability test time was not allowed to exceed 10 ms, which badly affects the guarantee ratios for the PP algorithms in the first row of Table 2. The high guarantee ratios obtained for task sets (2) and (3) are due to the large amount of slack associated with these tasks. In addition it is noticeable that O(N2) performs better than previously. Again this may be due to greater slack, which means that the pessimism of O(N2) counts less against it while it retains the benefit of a small upper bound on schedulability testing. The small number of tasks in sets (2) and (3), together with the large amounts of slack, accounts for the closeness of the guarantee ratios across all schedulability test algorithms. It is interesting to note that Background runs against the trend by performing better with low breakdown utilisation than with high. This can be interpreted as the effect, at low breakdown utilisation, of concentrated intervals of high slack and intervals of zero slack. This benefits background scheduling because this algorithm is penalised less for its indiscriminate processing of sporadics in FIFO order. During high slack, less time is wasted processing sporadics which will eventually fail to meet their deadlines. During zero slack, none of the algorithms can perform sporadic processing in any case. With high breakdown utilisation, and a more even distribution of slack over time, Background wastes more time executing sporadics which eventually fail to meet their deadlines.

6.2 Varying Sporadic Characteristics

The above results compare the performance of the algorithms while periodic task set characteristics are changed. Now the results of varying sporadic task characteristics are presented. Graphs 2 and 3 show the effect of varying (1) the sporadic arrival rate and (2) the average sporadic computation requirement. All parameters relating to the periodic task sets remain constant. All results were obtained using the average guarantee ratio from 100 sets of 10 periodic tasks, all of which were schedulable and randomly generated to give 85% periodic utilisation. The sporadic arrival rates and computation times were randomly generated according to a uniform distribution. Realistic arrival rates may be more accurately modelled by a Poisson distribution; however, the objective here was to differentiate the performance of the algorithms under different arrival rates. Graph 2 uses sporadics with a fixed average computational requirement of 25 ms. Graph 3 uses a fixed average arrival rate of 0.004 sporadics per ms.

[Graph omitted: guarantee ratio (0.6 to 1.0) plotted against sporadic arrival rate (0.0004 to 0.01 sporadics per ms) for Bottom-up Hybrid, Hybrid, Bottom-up PP, PP, O(N2) and Background (success ratio).]

Graph 2: Comparing performances over a range of sporadic arrival rates.


As before there is the problem of setting an upper bound on schedulability testing. Here, a different approach is taken. Instead of assuming that the best guarantee ratio occurs when the upper bound has its maximum value, a series of simulation runs was carried out in which the upper bound was reduced in stages and the guarantee ratios measured. The guarantee ratio for each algorithm was seen to peak at a value less than the maximum schedulability test value. This was therefore the optimum trade-off between allowing time for schedulability testing and leaving more time for scheduled tasks to run. The peak occurs at what might be called the optimum upper bound or optimum timeout. The work involved in establishing this bound for each arrival rate, for each schedulability testing algorithm, was prohibitive, so the results in Graph 2 use the optimum upper bound established for the maximum arrival rate of 0.01 sporadics per ms. This bound was also used for Graph 3. The bounds for each algorithm are presented in Table 3:
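The staged search described above can be sketched as follows. This is an illustrative sketch only: run_simulation, the step size, and the toy guarantee-ratio model are hypothetical stand-ins, not the report's simulator.

```python
# Sketch of the staged search for the optimum upper bound (timeout).
# run_simulation is a hypothetical stand-in for the report's simulator:
# it returns the guarantee ratio achieved when the schedulability test
# is cut off after bound_ms milliseconds.

def find_optimum_bound(run_simulation, max_bound_ms, step_ms=0.5):
    """Reduce the timeout in stages and keep the bound giving the best
    guarantee ratio."""
    best_bound, best_ratio = max_bound_ms, run_simulation(max_bound_ms)
    bound = max_bound_ms - step_ms
    while bound > 0:
        ratio = run_simulation(bound)
        if ratio > best_ratio:
            best_bound, best_ratio = bound, ratio
        bound -= step_ms
    return best_bound, best_ratio

# Toy model: the guarantee ratio peaks at an interior bound, as observed
# in the report (the peak at 3.5 ms here is purely for illustration).
model = lambda b: 1.0 - 0.01 * (b - 3.5) ** 2
bound, ratio = find_optimum_bound(model, max_bound_ms=10.0)
```

The sweep simply records the guarantee ratio at each candidate timeout and returns the argmax, mirroring the report's stage-by-stage reduction of the bound.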

Algorithm:   Bottom-up Hybrid   Hybrid   Bottom-up PP   PP    O(N2)

Bound (ms)   5.5                6.0      7.0            7.5   1.5

Table 3: Optimum upper bounds used for the schedulability test algorithms.

[Graph omitted: guarantee ratio (0.6 to 1.0) plotted against average sporadic computation time (2.5 to 50 ms) for Bottom-up Hybrid, Hybrid, Bottom-up PP, PP, O(N2) and Background (success ratio).]

Graph 3: Comparing performances over a range of average sporadic computation times.

The use of the above bounds for low sporadic loads may be pessimistic, but this should not affect the comparison of the schedulability test algorithms. (An investigation into the parameters which determine the optimum bound follows in Section 7.) Again it is clear that Bottom-up Hybrid consistently outperforms all the other algorithms considered. Furthermore, with some minor exceptions, the results so far are consistent with the following list of algorithms in decreasing order of performance:

    bottom-up hybrid
    hybrid
    bottom-up PP
    PP

Background scheduling is omitted because it does not guarantee sporadic deadlines. The question of where O(N2) comes in the ordering is unclear. Tables 1 and 2 show that O(N2) can outperform the pure PP algorithms, and even the hybrid algorithm, when N is sufficiently large. However, this may be due to the PP and hybrid algorithms operating under the handicap of a 10 ms timeout. This seems especially likely for the hybrid algorithms because they are based on O(N2). At the other extreme, O(N2) gives consistently the poorest performance in Graphs 2 and 3. In summary, none of the results shows O(N2) outperforming Bottom-up Hybrid.

7. PARAMETERS OF THE OPTIMUM BOUND

The above investigations show that the optimum upper bound for each algorithm may depend upon a number of parameters: sporadic arrival rate, average sporadic computation time, PD/SD, the number of periodic tasks (N), and the periodic utilisation. The ratio PD/SD is defined above and takes account of both the average periodic deadline and the average sporadic deadline. Together with N, this ratio determines the average number of tasks below the sporadic which must be schedulability tested. Note that the average periodic computation time is not included as a separate parameter because it is taken into account by the periodic utilisation. The investigations which follow are an attempt to determine how sensitive the optimum upper bound is to each of these parameters. In other words, is:

Optimum Upper Bound = f (ave sporadic arrival rate, ave sporadic computation time, PD/SD, N, periodic utilisation) ?

Investigation of all the schedulability test algorithms would be too time-consuming, so it was decided to select the algorithm with the best overall performance, i.e. Bottom-up Hybrid. As before, the approach was to keep all parameters constant except the one to be varied. The constant values used were:

    average sporadic arrival rate (of uniformly distributed times) = 0.004 sporadics/ms
    average sporadic computation time (of uniformly distributed values) = 25 ms
    PD/SD = 0.6
    N = 10
    periodic utilisation = 85%

A uniform distribution of sporadic arrival times was chosen in order to provide a constant value and make clearer the effect of varying some other parameter.

Table 4 shows the complete set of guarantee ratios obtained when investigating the effect of average sporadic arrival rate on the optimum upper bound. All results are guarantee ratios, and the optimum upper bound values are emphasised. All guarantee ratios obtained were averages from 100 sets of 10 periodic tasks. Graph 4 is derived from Table 4. Noteworthy is the relatively large increase in bound as the sporadic arrival rate reaches its maximum permissible 0.01 sporadics/ms. This shows that, as sporadics accumulate in a lengthening task list, it rapidly becomes necessary to spend more time schedulability testing in order to catch those incoming sporadics which are schedulable. It should be noted in Table 4 that the sensitivity of the guarantee ratio to the value of the bound is higher at low sporadic arrival rates than at high arrival rates. Table 4 also shows that the variation of guarantee ratio with upper bound is a Poisson-like curve. As this curve is compressed by lower bound values, so its shape is emphasised. In other words, as the peaks occur at lower bound values, so they become sharper. This has implications for the choice of a best optimum bound across a range of sporadic arrival rates.

Upper Bound              Sporadic Arrival Rate (sporadics per ms)
(Timeout in ms)   0.0004   0.002    0.004    0.006    0.008    0.01

2.0               0.968    0.933
2.5               0.988    0.962    0.912    0.820    0.704
3.0               0.994    0.968    0.920    0.830    0.714    0.626
3.5               0.993    0.968    0.921    0.832    0.717    0.630
4.0               0.992    0.965    0.919    0.832    0.718    0.633
4.5               0.989    0.962    0.915    0.829    0.717    0.634
5.0               0.985    0.957    0.909    0.825    0.716    0.635
5.5               0.981    0.953    0.903    0.820    0.713    0.636
6.0                                                            0.635
6.5                                                            0.635
7.0                                                            0.634
7.5                                                            0.632

Table 4: The effect of sporadic arrival rate on optimum upper bound.

[Graph omitted: optimum upper bound (2 to 5.5 ms) plotted against sporadic arrival rate (0.0004 to 0.01 sporadics per ms).]

Graph 4: Variation in optimum upper bound with sporadic arrival rate.


[Graph omitted: optimum upper bound (2 to 4 ms) plotted against average sporadic computation time (2.5 to 50 ms).]

Graph 5: Variation in optimum upper bound with average sporadic computation time.

[Graph omitted: optimum upper bound (0 to 10 ms) plotted against the number of periodic tasks N, with the corresponding PD:SD ratio in brackets: 5 (0.340), 10 (0.627), 15 (0.808), 20 (1.017), 30 (1.393).]

Graph 6: Variation in optimum upper bound with the number of periodic tasks.

This increase in sensitivity at low peak values was also observed for Graph 5, which shows a general decrease in optimum upper bound as the average sporadic computation requirement increases. This can be interpreted as follows: as the average computational requirement of sporadics increases, it becomes less beneficial to spend a long time schedulability testing sporadics which are now more likely to prove unschedulable due to their large computation times. The exceptional result for an average sporadic computation time of 2.5 ms can be explained by the guarantee ratio value 'saturating'. The guarantee ratio values for this average sporadic computation time reach a plateau of 1.0 at a 3.5 ms upper bound and above. In other words, all the incoming sporadics are being found schedulable even within a tight upper bound of 3.5 ms. The algorithm is no longer being stressed, and its upper bound drops.

[Graph omitted: optimum upper bound (0 to 3.5 ms) plotted against periodic utilisation (65% to 85%).]

Graph 7: Variation in optimum upper bound with periodic utilisation.

Graph 6 shows the steady increase in the optimum upper bound as N and PD/SD increase. This reflects the need to spend more time on schedulability testing as the number of tasks below the average sporadic position increases. Unless this is done, schedulable sporadics will be rejected due to a premature timeout. Graph 7 shows the rise in optimum upper bound as the periodic utilisation rises. Obviously, more time is needed for schedulability testing sporadics when the computational demands of the periodic tasks are higher. Incidentally, the results used for Graph 7 show the expected fall in the best guarantee ratio obtained for each increase in periodic task utilisation.

From examination of the above graphs it appears that the optimum upper bound is most sensitive to changes in N and the PD/SD ratio. This is not surprising, since it is these parameters which determine the average number of periodic tasks below the sporadic position in the task list. This is obviously a major factor in the time taken by the schedulability test algorithm. Periodic utilisation has a smaller effect on the optimum bound, and sporadic arrival rates and computation times have less effect still. Therefore, in a practical choice of a best optimum upper bound, Graph 6 is the most important. This suggests an optimum bound of 3.5 ms for the final investigation below, which uses PD/SD ratios of around 0.6.


8. CHANGING THE PROPORTION OF SPORADIC AND PERIODIC UTILISATION

Table 5 records an investigation into the effect on total processor utilisation of varying the mix of periodic and sporadic processor utilisation. The constant parameter values were the same as those used above. The table shows a periodic utilisation of 85% and one of 75%. Added to each are different numbers of sporadics, to bring the total possible utilisations to 90, 95 and 100%. Each guarantee ratio obtained is an average result from 10 sets of 10 periodics of the stated periodic utilisation. The PD/SD ratio was again 0.6. All sporadic arrival times were generated from a uniform distribution and their average computation time was again 25 ms. The same optimum upper bound was used in all cases and its value was 3.5 ms, as discussed above. Table 5 shows the actual total utilisation obtained, which was calculated from the number of sporadics guaranteed and their average computation times, plus the periodic utilisation. Also to be added is an estimate of the utilisation used on schedulability testing. This was based on the number of sporadic requests made and measurements of the average schedulability test time for Bottom-up Hybrid. This estimate came to 0.48% utilisation per 400 sporadics.
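As a rough cross-check on how the 'actual total utilisation' figures in Table 5 are composed, the following sketch recomputes two entries. The 100-second simulation length is an assumption inferred from the published figures, not stated explicitly in the report.

```python
def actual_total_utilisation(periodic_util_pc, n_sporadics, guarantee_ratio,
                             avg_comp_ms, sim_length_ms=100_000):
    """Periodic utilisation plus the utilisation of the guaranteed sporadics.
    The 100 s simulation length (sim_length_ms) is an assumption inferred
    from Table 5, not a figure stated in the report."""
    sporadic_ms = n_sporadics * guarantee_ratio * avg_comp_ms
    return periodic_util_pc + 100.0 * sporadic_ms / sim_length_ms

# First row of Table 5: 85% periodics plus 200 sporadics guaranteed at 0.9645,
# and 75% periodics plus 600 sporadics guaranteed at 0.9855, each averaging 25 ms.
u85 = actual_total_utilisation(85.0, 200, 0.9645, 25.0)
u75 = actual_total_utilisation(75.0, 600, 0.9855, 25.0)
```

Under this assumption the recomputed values agree closely with the 89.823% and 89.783% entries in the first row of Table 5.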

Maximum Possible      85% Periodic Utilisation                75% Periodic Utilisation
Total Utilisation %   Number of   Guarantee   Actual Total    Number of   Guarantee   Actual Total
                      Sporadics   Ratio       Utilisation %   Sporadics   Ratio       Utilisation %

 90                   200         0.9645      89.823          600         0.9855      89.783
 95                   400         0.9153      94.153          800         0.9499      93.998
100                   600         0.8232      97.348          1000        0.8981      97.453

Table 5: Increasing sporadic utilisation by sporadic arrival rate.

Maximum Possible      85% Periodic Utilisation                75% Periodic Utilisation
Total Utilisation %   Average     Guarantee   Actual Total    Average     Guarantee   Actual Total
                      Sporadic    Ratio       Utilisation %   Sporadic    Ratio       Utilisation %
                      Comp. (ms)                              Comp. (ms)

 90                   12.50       0.9965      89.983          37.50       0.9523      89.285
 95                   25.00       0.9153      94.153          50.00       0.8655      92.310
100                   37.50       0.7610      96.415          62.50       0.7683      94.208
105                   50.00       0.6310      97.620          75.00       0.6858      95.574

Table 6: Increasing sporadic utilisation by average sporadic computation time (ms).


Table 6 shows the results of a similar investigation in which the sporadic utilisation is increased by increasing the average sporadic computation time while the number of sporadics is kept constant at 400. In this case the utilisation for schedulability testing was the same (about 0.48%) for all guarantee ratios obtained. The conclusions from these limited results are (1) a lower periodic utilisation and a correspondingly higher sporadic arrival rate makes no clear difference to the actual total utilisation, and (2) a lower periodic utilisation and a correspondingly higher average sporadic computation time can give a reduction in the actual total utilisation obtained. Conclusion (2) reflects the difficulty of scheduling sporadics with large computation requirements.

9. CONCLUSIONS

This work has investigated some algorithms for the dynamic schedulability testing of sporadic tasks which arrive singly at a target processor running its own set of schedulable periodic tasks. The schedulability testing takes place on the target processor itself and must therefore be bounded in order that worst-case analysis of that processor's load may be made. Knowledge of the minimum interarrival time of the sporadic tasks is a prerequisite for establishing this bound.

The algorithms used for dynamic schedulability testing were developed by adapting previously known algorithms for static schedulability testing. The adapted algorithms make use of dynamically updated scheduling data. Optimisations were made in order to reduce the run-time overheads incurred by the adapted algorithms. These involved combining two algorithms into a single hybrid algorithm and introducing timeouts into the algorithms in order to enforce tight upper bounds on schedulability testing. Specific conclusions from the simulation results are as follows:

(1) Dynamic schedulability testing of sporadic tasks, on the same processor as the periodic task set, can incur acceptable overheads of less than 1 ms per test.

(2) Bottom-up Hybrid is the most efficient of the dynamic schedulability test algorithms investigated.

(3) The performance of any of the dynamic schedulability test algorithms is sensitive to the choice of upper bound for the worst-case schedulability test.

(4) Constraining the schedulability test algorithm to time out before the worst-case test time can improve performance. The value of the timeout which gives the best performance is called the optimum upper bound.

(5) The optimum upper bound is most sensitive to N (the number of periodic tasks) and PD/SD (the ratio of average periodic deadline to average sporadic deadline). These parameters determine the average number of tasks below a sporadic position in the deadline monotonic ordering.

(6) Increasing the sporadic proportion of the total possible processor utilisation will, if anything, decrease the actual total utilisation achieved.


Of all the adapted algorithms, Bottom-up Hybrid consistently performed best over a range of test data which varied all the parameters discussed above. Introducing a timeout improved the performance of all the algorithms, but Bottom-up Hybrid still led the field. The above performance figures for Bottom-up Hybrid show that it provides a dynamic schedulability test which rivals those of Spring [13] and Lehoczky [4]. The algorithm shows promise for operational use within adaptive, distributed systems where load-sharing of hard real-time sporadic tasks is advantageous. In order to further substantiate this claim, it is planned to investigate the use of Bottom-up Hybrid in a real-time supernode architecture. The proposed supernode consists of a cluster of four processors, each of which runs a set of resident periodics. Three of the processors are targets for sporadic requests and attempt to guarantee them locally, as described in this report. However, the fourth processor acts as a mediator, forwarding individual sporadic requests from the wider system to whichever of its three targets it judges most likely to guarantee them. Such further investigation should help to establish whether Bottom-up Hybrid can be of benefit in an adaptive distributed system.

10. ACKNOWLEDGEMENT

The author would like to thank Dr A. Burns for his valuable suggestions during the course of this work, and also for his comments on earlier drafts of this report.

REFERENCES

[1] Audsley, N. C., DPhil thesis, Computer Science Department, University of York (September 1993).

[2] Audsley, N. C., Burns, A., Richardson, M. F., and Wellings, A. J., "Hard real-time scheduling: the Deadline Monotonic approach", Proceedings 8th IEEE Workshop on Real-Time Operating Systems and Software, Atlanta, USA (15-17 May 1991).

[3] Lehoczky, J. P. and Ramos-Thuel, S., "An optimal algorithm for scheduling soft-aperiodic tasks in fixed-priority pre-emptive systems", Proceedings 13th IEEE Real-Time Systems Symposium, Tucson, USA (December 1992).

[4] Lehoczky, J. P. and Ramos-Thuel, S., "On-line scheduling of hard deadline aperiodic tasks in fixed-priority systems", Proceedings 14th IEEE Real-Time Systems Symposium, Durham, N. Carolina, USA (December 1993).

[5] Lehoczky, J. P., "Fixed-priority scheduling of periodic task sets with arbitrary deadlines", Proceedings 11th IEEE Real-Time Systems Symposium, Lake Buena Vista, FL, USA, pp. 201-209 (December 1990).

[6] Leung, J. Y. T. and Whitehead, J., "On the complexity of fixed-priority scheduling of periodic real-time tasks", Performance Evaluation, Vol. 2, Part 4, pp. 237-250 (December 1982).

[7] Liu, C. L. and Layland, J. W., "Scheduling algorithms for multiprogramming in a hard real-time environment", Journal of the ACM 20(1), pp. 46-61 (1973).

[8] Locke, C. D., Vogel, D. R., and Mesler, T. J., "Building a predictable avionics platform in Ada: a case study", Proceedings 12th IEEE Real-Time Systems Symposium (December 1991).

[9] Sprunt, B., Sha, L., and Lehoczky, J. P., "Aperiodic task scheduling for hard real-time systems", Journal of Real-Time Systems, 1:27-60 (1989).

[10] Sprunt, B., "Aperiodic task scheduling for real-time systems", PhD thesis, Carnegie Mellon University (1990).


[11] Stankovic, J. A. and Ramamritham, K., "Overview of the Spring project", COINS Technical Report 89-03, University of Massachusetts, Amherst (January 1989).

[12] Stankovic, J. A. and Ramamritham, K., "The Spring kernel: a new paradigm for real-time systems", IEEE Software (May 1991).

[13] Stankovic, J. A., Ramamritham, K., and Zhao, W., "Distributed scheduling of tasks with deadlines and resource requirements", IEEE Transactions on Computers, pp. 1110-1123 (August 1989).

[14] Strosnider, J. K., "Highly responsive real-time token rings", PhD thesis, Carnegie Mellon University (1988).


APPENDIX

1. The O(N2) algorithm

In the O(N2) algorithm, the interference from all higher priority tasks is calculated for the duration of the deadline (Di) of the test task, i. The number of interferences by a higher priority task is calculated by taking the ceiling of Di/Tj, where Tj is the period of a higher priority task, j. A dynamic refinement is first to subtract from Di the offset (Oj) of the next release of the interfering task, j. Also, any residual execution time (Rj) of the interfering task must be added to the total interference of task j. Finally, schedulability is tested by comparing the test task's deadline (Di) with the sum of interferences over all higher priority tasks plus the current computational requirement of the test task itself. If the test task is currently active, its computational requirement will be its residual execution time Ri; otherwise its worst-case computation time (Ci) will have to be considered against Di, the deadline of the test task's next activation. In the case of the test task being the sporadic itself (see Figure 1), the total interference over all higher tasks, j, is calculated by:

Is = ∑j ( ⌈(Ds - Oj)/Tj⌉° · Cj + Rj )     (1)

where:
    Ds is the sporadic deadline
    Rj is the current residual execution time of the interfering task
    Oj is the offset of the interfering task
    Tj is the period of the interfering task
    Cj is the (worst-case) computation time of the interfering task
and X°: (i) returns 0 if X <= 0; (ii) returns X if X > 0.
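Equation (1) and the resulting test for the sporadic might be sketched as follows. This is a hedged illustration, not the report's implementation; packaging each interfering task as an (Oj, Tj, Cj, Rj) tuple is my own convention.

```python
import math

def sporadic_interference(Ds, higher_tasks):
    """Total interference Is on a sporadic with deadline Ds (equation 1).

    higher_tasks is a list of (Oj, Tj, Cj, Rj) tuples for the tasks above
    the sporadic in priority order: next-release offset, period, worst-case
    computation time, and current residual execution time."""
    total = 0.0
    for Oj, Tj, Cj, Rj in higher_tasks:
        hits = max(math.ceil((Ds - Oj) / Tj), 0)  # the X-degree clamp: no negative hit counts
        total += hits * Cj + Rj
    return total

def sporadic_schedulable(Ds, Cs, higher_tasks):
    """Sufficient test: the deadline must cover the interference plus the
    sporadic's own computation time Cs."""
    return Ds >= sporadic_interference(Ds, higher_tasks) + Cs
```

For example, a single higher priority task with offset 0, period 5 and computation time 2 hits a 10 ms deadline window twice, giving 4 ms of interference.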

[Figure omitted: timelines for two periodic tasks with periods T1 and T2, offsets O1 and O2 and computation times C, above a sporadic s with deadline Ds, all measured from the current time.]

Figure 1: Computation times (C) for periodics above the sporadic.


Figure 1 shows that the interference, I2, of task 2 in the sporadic will be pessimistically assumed to include all of the computation time, C2, of the final hit of task 2, despite the expiry of the sporadic deadline before that final hit finishes.

[Figure omitted: timelines for a sporadic s and two lower priority periodic tasks, k (inactive) and k+1 (active), showing their computation times C from the current time.]

Figure 2: Computation times (C) for periodics below the sporadic.

For a task, i, lower than the sporadic, the interference over all higher tasks, j, is calculated by:

Ii = ∑j ( ⌈(Di - Oj)/Tj⌉° · Cj + Rj )     (2)

Note that Ts is set to infinity, as are the periods of any other sporadic tasks which are currently in the task list.

The sporadic, s, is a one-off release, so the accumulated interference time in a lower task need only be tested against either (a) the current deadline of the lower task if it is active, or (b) the deadline of the next activation of the lower task if it is inactive. Respective examples from Figure 2 are task k+1 and task k. The following are more detailed explanations of the different tests for (a) and (b).

(a) If the lower task being schedulability tested is active (in other words Ri > 0), then test whether Di >= Ii + Ri.

(b) If the lower task is inactive, then the total interference in the lower task's next activation, plus the lower task's computation time, must be tested against its next deadline. A sufficient condition is to suppose that the next activation of the task starts at the current time, and to test whether Di >= Ii + Ci, where Di is the deadline of the next activation. The supposition that the next release of the task being tested is at the current time is made in order to make use of dynamic scheduling data, to greatly simplify calculation and thereby to reduce schedulability testing overheads. The following is a proof that the supposition provides a sufficient schedulability test.
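Cases (a) and (b) can be combined into a single sketch of the O(N2) test for a task below the sporadic. Again this is an illustrative sketch; the tuple packaging of task parameters is assumed, not taken from the report.

```python
import math

def lower_task_schedulable(Di, Ci, Ri, higher_tasks):
    """Sufficient O(N^2)-style test for a task below the sporadic
    (cases (a) and (b) above). higher_tasks holds (Oj, Tj, Cj, Rj)
    tuples for all higher priority tasks; sporadic periods can be
    passed as math.inf, so each sporadic contributes at most its
    residual execution time Rj."""
    Ii = sum(max(math.ceil((Di - Oj) / Tj), 0) * Cj + Rj
             for Oj, Tj, Cj, Rj in higher_tasks)
    if Ri > 0:                 # (a) active: residual time must fit before Di
        return Di >= Ii + Ri
    return Di >= Ii + Ci       # (b) inactive: assume release at current time
```

For instance, an inactive task with deadline 20 and computation time 3 passes against one higher task of period 5 and cost 2 (interference 8), while a task with deadline 10 and cost 5 fails against a higher task of period 5 and cost 3.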


Proof: The interval between the current time and the actual next release of test task i is either (i) filled by interferences from higher priority tasks or (ii) contains 'gaps' in which task i could execute if it were released. If (i), then this degenerates into the same condition as allowing task i to execute only after its next release. If (ii), then all higher priority tasks, including the sporadic, are satisfied, i.e. the interference of the sporadic itself, and its knock-on effects on the lower priority tasks which are above the test task i, have ended. In other words, the supposition is never falsely optimistic. Therefore the supposition provides a sufficient schedulability test.

2. The Pseudo-Polynomial (PP) algorithm

[Figure omitted: timelines for two periodic tasks with periods T1 and T2 and offsets O1 and O2, above a sporadic s with deadline Ds, showing response times w from the current time.]

Figure 3: Response times (w) for periodics above the sporadic.

The PP algorithm calculates the interference from higher priority tasks during the elapsed execution time of the test task. The algorithm therefore generates response times (wi) for each test task, i. Figure 3 shows the case of the sporadic itself as the test task. (Incidentally, note that w1 = C1.) The interference over all tasks, j, above the sporadic is found by:

Is = ∑j ( ⌈(ws - Oj)/Tj⌉° · Cj + Rj )     (3)

where:
    ws is the response time of the sporadic
    Rj is the current residual execution time of the interfering task
    Oj is the offset of the interfering task
    Tj is the period of the interfering task
    Cj is the (worst-case) computation time of the interfering task


Hence the recursive equation which determines the final value of ws:

ws^(n+1) = Cs + ∑j ( ⌈(ws^n - Oj)/Tj⌉° · Cj + Rj )     (4)
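The fixed-point iteration in (4) might be sketched as follows. This is an illustrative sketch; terminating the iteration once the response time passes the sporadic deadline Ds is an assumption consistent with, but not stated in, the report.

```python
import math

def sporadic_response_time(Cs, higher_tasks, Ds):
    """Solve recursion (4) for the sporadic's response time ws.

    higher_tasks holds (Oj, Tj, Cj, Rj) tuples. The iteration stops
    when ws converges, or returns None once ws exceeds the deadline
    Ds (the sporadic cannot be guaranteed)."""
    ws = Cs
    while True:
        new_ws = Cs + sum(max(math.ceil((ws - Oj) / Tj), 0) * Cj + Rj
                          for Oj, Tj, Cj, Rj in higher_tasks)
        if new_ws == ws:          # converged: ws is the response time
            return ws
        if new_ws > Ds:           # passed the deadline: not schedulable
            return None
        ws = new_ws
```

For example, a sporadic with Cs = 2 against one higher task (offset 0, period 5, cost 1) converges to a response time of 3, while Cs = 5 against a task of period 4 and cost 2 overruns a deadline of 10.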

[Figure omitted: timelines for a sporadic s and two lower priority tasks, k and k+1, showing their response times w from the current time.]

Figure 4: Response times (w) for periodics below the sporadic (before the sporadic arrives).

Figure 4 shows the response times for tasks below the sporadic in priority order. As with the O(N2) algorithm, the response times of lower tasks need only be tested against either (a) their current deadline if they are active or (b) the deadline of their next activation if they are inactive. The following are more detailed explanations of the different tests for (a) and (b).

(a) If the lower task being schedulability tested is active (in other words Ri > 0), then test whether Di >= wi, where wi is found when the following recursive equation converges:

wi^(n+1) = Ri + ∑j ( ⌈(wi^n - Oj)/Tj⌉° · Cj + Rj )     (5)

This is the same as (4), except that wi is the response time for the residual computation time of the task, i, being schedulability tested. As before, Ts is set to infinity, as are the periods of any other sporadic tasks which are currently in the task list.


(b) If the lower task is inactive, then the response time of the task's next activation must be tested against the task's next deadline. Here wi is calculated recursively in a similar way to (5):

wi^(n+1) = Ci + ∑j ( ⌈(wi^n - Oj)/Tj⌉° · Cj + Rj )     (6)

Again Ts is set to infinity.

As with the O(N2) algorithm, the supposition that the next release of the test task, i, is at the current time is made in order to reduce schedulability testing overheads. The proof that this supposition provides a sufficient schedulability test is the same as for the O(N2) algorithm above.
