
IJCSES International Journal of Computer Sciences and Engineering Systems, Vol. 5, No. 2, April 2011
CSES International © 2011, ISSN 0973-4406

Manuscript received May 25, 2010; revised December 15, 2010

Study and Development of a New Multi Level Feedback Queue Scheduler for Embedded Processor

M. V. Panduranga Rao 1 and K. C. Shet 2

1 Research Scholar, Department of Computer Engineering, National Institute of Technology Karnataka, Surathkal, Karnataka 575025, India. E-mail: [email protected], [email protected]

Abstract: We have developed a real-time process scheduling algorithm that guarantees low deadline miss ratios in systems where task execution times may deviate from estimates at run time. The real-time scheduler developed here offers a considerable gain in performance, about 10%, compared with a standard scheduler. The research draws a comparison among other schedulers and shows how effectively schedulers in RTOS systems must work. In this research, a new algorithm, the New Multi Level Feedback Queue (NMLFQ), is presented for solving the problems of resource allocation, scheduling processes to use CPU time efficiently, and minimizing response time. In this algorithm, an object-oriented approach is used to define the dynamic priority with scaling, deadline, urgency and fine-grained time-slice distribution of each queue, along with the number of queues. The NMLFQ algorithm offers 10% better response time and CPU utilization. The results are simulated using the Java language on a personal computer. The NMLFQ scheduler code is embedded using C++ on an ARM-7 RISC processor.

Key words: scheduling, queue, round robin, SJN, deadline, EDF, preemption, multilevel queue, NMLFQ, response time.

1. INTRODUCTION

Scheduling is the problem of assigning a set of processes (tasks) to a set of resources subject to a set of constraints. Examples of scheduling constraints include deadlines (i.e., job i must be completed by time T), resource capacities (i.e., limited memory space), precedence constraints on the order of tasks (i.e., sequencing of cooperating tasks according to their activities), and priorities on tasks (i.e., finish job P as soon as possible while meeting the other deadlines). A process will run for a while (the CPU burst), perform some I/O (the I/O burst), then run for a while more (the next CPU burst).

The basic assumptions behind most scheduling algorithms are: there is a pool of runnable processes contending for the CPU; the processes are independent and compete for resources; and the job of the scheduler is to distribute the scarce resource of the CPU to the different processes fairly and in a way that optimizes some performance measures. Typically there are a large number of short CPU bursts and a small number of long CPU bursts. Minimizing the mean completion time (the sum of the times at which each job completes, divided by the number of jobs) is a commonly used objective function.

The scheduling algorithm is one of the most important algorithms in an operating system and plays a key role. These algorithms are designed to make optimal use of the processor by the processes.

The New Multi Level Feedback Queue (NMLFQ) scheduler has been implemented using the concept of multiple waiting queues, in which each of the ready processes waits for its CPU cycle. This scheduler also considers the important parameters of preemption, priority, quantum time, deadline, CPU history and urgency of a process. These factors are considered while developing a scheduling policy for soft and hard real-time systems.

• Hard real-time systems – required to complete a critical task within a guaranteed amount of time.
o The scheduler must know how long each task will take to perform, in order to make resource reservations.
• Soft real-time systems – require that critical processes receive priority over less fortunate ones.
o must have priority scheduling


o “real-time” priorities must not degrade over time
o dispatch latency must be low

Dispatch latency can become very long in an operating system that requires a system call to complete, or an I/O block to occur, before performing a context switch. This is the case in the UNIX operating system [1]. One way to solve this issue is to make system calls preemptible by inserting preemption points into them. A preemption point is a location in the code that is not critical; interrupting the process at this point will not harm its data. Whenever a preemption point is reached, the system checks whether a higher-priority process needs to run. If it does, control of the CPU is given to that process. Once it is finished, the CPU goes back to the initial preemption point and continues execution.

Priority inversion occurs in a soft real-time system when one or more processes are using resources needed by a high-priority process. In this situation, the lower-priority processes are given a priority equal to that of the high-priority process until they are done using the desired resource or resources. Once the resource is released, each process returns to its previous priority level. This technique is known as the priority-inheritance protocol [2], and it prevents high-priority processes from spending a lot of time waiting for resources to become available.
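As a rough illustration of the priority-inheritance idea described above, the following C++ sketch shows a resource owner temporarily adopting the priority of a higher-priority waiter and restoring its own priority on release. The types and names here are hypothetical and are not taken from the paper's scheduler code.

#include <algorithm>

// Hypothetical task descriptor for illustration only.
struct Task {
    int id;
    int base_priority;       // priority originally assigned by the scheduler
    int effective_priority;  // may be boosted while holding a contended resource
};

// A resource guarded by a simple priority-inheritance rule.
struct Resource {
    Task *owner = nullptr;

    void acquire(Task &t) {
        if (owner == nullptr) {
            owner = &t;
        } else {
            // A higher-priority waiter boosts the owner's effective priority so
            // the owner cannot be preempted by medium-priority processes.
            owner->effective_priority =
                std::max(owner->effective_priority, t.effective_priority);
            // ... block t here until the resource is released ...
        }
    }

    void release() {
        // The owner drops back to its previous (base) priority level.
        owner->effective_priority = owner->base_priority;
        owner = nullptr;
    }
};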

The simpler processor scheduling model assumed here (Figure 1) makes the following assumptions:

• The application is assumed to consist of a fixed set of processes.
• All processes are periodic, with known periods.
• The processes are completely independent of each other.
• All system overheads, context-switching times and so on are ignored (i.e., assumed to have zero cost).
• All processes have a deadline equal to their period (that is, each process must complete before it is next released).
• All processes have a fixed worst-case execution time.

Figure 1: Simpler Processor Scheduling Model

1.1. Characteristics of RTOSs

• Most of the time, a real-time system will be an embedded system. The processor and the software are embedded within the equipment that they are controlling. Typical embedded applications are cellular phones, washing machines, microwave ovens, laser printers, electronic toys, video games, avionic controls, etc. The RTOS will be embedded along with the application code in the system. There is no hard disk or floppy drive from which the OS will be loaded; the entire code remains in the read-only memory of the system, so it must be small in size.

• Not only must the response time be predictable, it must be very fast as well. Many embedded applications control critical operations (for example, missile control), so the system must sense the signal and give a response fast enough to achieve the desired task. A late answer is as bad as a wrong answer. Therefore, the RTOS code must be very short and written efficiently to respond to process needs.

• Because of the small footprint requirement and the lack of other peripherals, many features found in desktop PC operating systems, such as a sophisticated memory manager or a file manager, are not needed.

1.2. Introduction to ARM-7 and the Keil Simulator

KEIL µVision3 IDE

The µVision3 IDE from Keil Software combines project management, make facilities, source code editing, program debugging, and complete simulation in one powerful environment. µVision3 helps you get programs working faster than ever while providing an easy-to-use development platform. The editor and debugger are integrated into a single application and provide a seamless embedded project development environment.

ARM-7 RISC Processor

The ARM® architecture is based on Reduced Instruction Set Computer (RISC) principles, and



the instruction set and related decode mechanism are much simpler than those of microprogrammed Complex Instruction Set Computers. This simplicity results in a high instruction throughput and impressive real-time interrupt response from a small and cost-effective processor core.

• The LPC2119 is based on a 16/32-bit ARM7TDMI-S™ CPU with real-time emulation and embedded trace support, together with 128/256 kilobytes (kB) of embedded high-speed flash memory. A 128-bit wide memory interface and a unique accelerator architecture enable 32-bit code execution at the maximum clock rate. For critical code-size applications, the alternative 16-bit Thumb® mode reduces code by more than 30% with minimal performance penalty.

• Multiple serial interfaces, including two UARTs (16C550), Fast I2C (400 kbit/s) and two SPIs.

• 60 MHz maximum CPU clock available from a programmable on-chip Phase-Locked Loop with a settling time of 100 ms.

The ARM7TDMI-S processor also employs a unique architectural strategy known as Thumb. The Thumb set's 16-bit instruction length allows it to approach twice the density of standard ARM code while retaining most of the ARM's performance advantage over a traditional 16-bit processor using 16-bit registers. This is possible because Thumb code operates on the same 32-bit register set as ARM code.

Pipeline techniques are employed so that all parts of the processing and memory systems can operate continuously. Typically, while one instruction is being executed, its successor is being decoded, and a third instruction is being fetched from memory.

2. LITERATURE REVIEW

This section provides a review of the research related to our work on the implementation of NMLFQ. We describe each approach, its distinguishing features, and how it differs from a generic scheduling mechanism.

2.1. Analysis of Policies for Real-time Scheduling

• Round robin scheduling
Round-robin scheduling gives every process with the same priority a pre-set share of time before making a context switch to the next task. When all tasks have had their time share, the first task gets the CPU back for its next processing.

• First-in-first-out (FIFO) scheduling
The scheduler runs the task with the highest priority first. If two or more tasks share the same priority level, they are scheduled in order of arrival, completing the first-arrived task before continuing with the next one. Each task occupies the CPU until it finishes or another task with higher priority arrives.

• Earliest-deadline-first (EDF) scheduling
This scheduling policy ignores the priority level of each task. Instead, it focuses on when each task has to be finished, choosing the task with the closest deadline for execution. The more accurate the provided deadlines are, the better the CPU utilization that can be expected.

• Rate-monotonic scheduling
The rate-monotonic scheduling algorithm sets the priority level for each task in order of its period length: tasks with short periods (which execute often) get a high priority, while tasks with long periods get a low priority. High-priority tasks then take precedence over lower-priority tasks. This scheduling algorithm is best used [3] when there are well-defined periodic tasks, preferably with the same CPU burst length (the time spent in the CPU by each instance of the task).

• Least Laxity First (LLF)
Tasks can be periodic or not and are scheduled according to their laxity.

A number of optimal scheduling algorithms exist for CPU loads of up to 100%. Optimality is defined as the algorithm's ability to find a feasible task ordering if such an ordering exists. The Earliest Deadline First (EDF) and Least Laxity First (LLF) algorithms are two such optimal algorithms.
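As a minimal sketch of the EDF rule described above (hypothetical structures, not the paper's implementation), the ready task with the nearest absolute deadline is always picked next:

#include <algorithm>
#include <vector>

// Hypothetical ready-task record for illustration only.
struct ReadyTask {
    int id;
    long absolute_deadline;  // e.g., in timer ticks
};

// Earliest-Deadline-First: choose the ready task whose deadline is closest.
// Returns nullptr if the ready list is empty.
ReadyTask *pick_edf(std::vector<ReadyTask> &ready) {
    if (ready.empty()) return nullptr;
    auto it = std::min_element(ready.begin(), ready.end(),
        [](const ReadyTask &a, const ReadyTask &b) {
            return a.absolute_deadline < b.absolute_deadline;
        });
    return &*it;
}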

The NMLFQ algorithm model discussed in this paper assumes that each task (a minimal task descriptor capturing these assumptions is sketched after this list):

• Repeatedly executes at a known fixed rate (its “period”).
• Must end before the beginning of its next period (its “deadline”).
• Does not need to synchronize with others in order to execute.


• Can be interrupted at any point in time and replaced by another task in the CPU.
• Does not suspend voluntarily.
• Has zero preemption cost (task-switch times and scheduling-algorithm execution load are neglected).
• Is ready while its assigned processing time is not exhausted. After running out of execution units, the task blocks until its next period.
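A minimal task descriptor capturing these assumptions might look as follows; the field names are illustrative and are not taken from the paper's code.

// Illustrative periodic-task descriptor matching the model's assumptions.
struct PeriodicTask {
    int  id;
    long period;            // known fixed release rate
    long deadline;          // equal to the period in this model
    long worst_case_exec;   // fixed worst-case execution time
    long remaining_budget;  // execution units left in the current period
    bool ready;             // ready while remaining_budget > 0

    // Called at the start of each new period: the task becomes ready again
    // with a full execution budget.
    void release() { remaining_budget = worst_case_exec; ready = true; }

    // Called when the budget is exhausted: block until the next period.
    void exhaust() { remaining_budget = 0; ready = false; }
};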

The scheduling problem of providing precise allocations has been studied extensively in the literature, but most of the work relies on some strict assumptions, such as full preemptibility of tasks. A responsive kernel with an accurate timing mechanism enables implementation of such CPU scheduling strategies because it makes the assumptions more realistic and improves the accuracy of scheduling analysis.

3. RESEARCH FINDINGS AND GAPS

Some of the problems with MLFQ are:
• the number of priority levels of queues,
• finding a suitable scheduling algorithm for each queue,
• assigning a time quantum to each queue,
• assigning initial static priorities,
• adjusting dynamic priorities,
• favoring I/O-bound processes, and
• differentiating foreground processes from background processes.

(i) The scheduler keeps a list of process queues. Every queue gets a priority assigned, and processes start in a given priority queue. Different OSs use different numbers for priorities. In Unix, for example, the highest priority (which is chosen first) is 0 and lower priorities have larger numbers; in other OSs this is inverted. Different OSs also use different numbers of priority levels, depending on the situations or purposes for which they are designed [4].

(ii) Processes in queues with a higher priority get less CPU time per dispatch, i.e., a smaller time quantum than processes in lower-priority queues. In this way, interactive processes get less CPU time per turn and computing processes more. However, how do we know which processes are interactive and which are not? There is an easy solution: let processes move from queue to queue. When a process blocks before its time quantum is spent, the scheduler increases its priority, so interactive processes, which normally just read some input and then quit, automatically move to higher priorities. Processes that use their full quantum get a lower priority, so computing processes, which take all the CPU time they can get, automatically move to the lower priorities (a minimal sketch of this promote/demote rule is given after this list). If there are multiple processes in a queue that are ready to be executed, they are scheduled round robin, which is reasonable because all processes in one queue are equally important. This scheme is the best approximation of SJF we can use: it has very little overhead and gives interactive processes fast response times. Real-time processes can get a fixed, high priority; in this way, they are always chosen when they need to be.

(iii) Some of the problems with MLFQ are the number of priority levels of queues, finding a suitable scheduling algorithm for each queue, finding a suitable scheduling mechanism for each queue, assigning a time quantum to each queue, assigning initial static priorities, adjusting dynamic priorities, favoring I/O-bound processes, differentiating foreground from background processes, and considering the client against the server environment. The MLFQ approach is used in the NMLFQ scheduling system in such a way that the response time is decreased and the functionality of the system is improved. The optimum number of queues and the quantum for each queue are found using a fault-tolerant mechanism to achieve these goals. As the proposed mechanism considers these objectives simultaneously, they do not have any negative impact on each other. In NMLFQ scheduling, the operating system can modify the number of queues and the quantum of each queue according to the existing processes.
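A minimal sketch of the promote/demote rule from item (ii) is given below. It is illustrative only: queue indices, the number of queues and the hook names are assumptions, not the paper's implementation.

// Illustrative MLFQ feedback rule: a process that blocks before its quantum
// expires is promoted; one that exhausts its quantum is demoted.
// Queue 0 is the highest-priority queue in this sketch.
struct Proc {
    int queue_level;  // index into the array of ready queues
};

const int NUM_QUEUES = 3;

void on_block_before_quantum_expiry(Proc &p) {
    if (p.queue_level > 0) --p.queue_level;               // interactive: raise priority
}

void on_quantum_expired(Proc &p) {
    if (p.queue_level < NUM_QUEUES - 1) ++p.queue_level;  // CPU-bound: lower priority
}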

3.1. Problem Statement

The aim of this research is to study the policy mechanisms of different real-time schedulers in the


embedded systems domain, to evaluate the performance of these mechanisms, and to arrive at a common solution for simulating a new scheduling policy.

4. THE NEW MULTI LEVEL FEEDBACK QUEUE SCHEDULER

The New MLFQ scheduler has been implemented using the concept of multiple queuing. It is intended to meet the following design requirements for multimode systems:

1. Give preference to short jobs.
2. Give preference to I/O-bound processes.
3. Quickly establish the nature of a process and schedule the process accordingly.

Multiple FIFO queues are used, and the operation is as follows:

1. A new process is positioned at the end of the top-level FIFO queue.

2. At some stage, the process reaches the head of the queue and is assigned the CPU.

3. If the process is completed, it leaves the system.

4. If the process voluntarily relinquishes control, it leaves the queuing network; when the process becomes ready again, it enters the system at the same queue level.

5. If the process uses all of its quantum, it is preempted and positioned at the end of the next lower-level queue.

6. This continues until the process completes or reaches the base-level queue.

7. Usually the scheduler re-evaluates process priorities at one-second intervals. The system maintains a queue for each priority level.

8. Every tenth of a second, the scheduler selects the topmost process in the runnable queue with the highest priority.

9. If a process is still runnable at the end of its allocated time, it joins the tail of the queue maintained for its priority level.

10. If a process is put to sleep awaiting an event, the scheduler allocates the processor to another process.

11. If a process awaiting an event returns from a system call within its allocated time interval but there is a runnable process with a higher priority, the process is interrupted and the higher-priority process is allocated the CPU.

12. The priority of a running process is decremented depending upon the history of the process and the change in deadline or urgency of the process.

At the base-level queue, the processes circulate in round-robin fashion until they complete and leave the system. In the multilevel feedback queue, a process is given just one chance to complete at a given queue level before it is forced down to a lower-level queue. It essentially means that the kernel allocates the Central Processing Unit (CPU) for a specific time quantum, preempts a process that exceeds its time quantum, and feeds it back into one of several priority queues. A process may need several iterations through the feedback loop before it finishes executing.
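The demotion path described in steps 1-6 above can be sketched as follows. This is a simplified model with hypothetical names, not the embedded implementation from Section 4.6.

#include <algorithm>
#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

// Simplified model of the multi-level feedback structure of steps 1-6.
struct Job { int id; long remaining; };

struct FeedbackQueues {
    std::vector<std::deque<Job>> levels;  // levels[0] is the top-level queue
    std::vector<long> quantum;            // time quantum per level

    explicit FeedbackQueues(std::vector<long> q)
        : levels(q.size()), quantum(std::move(q)) {}

    // Step 1: a new process enters at the tail of the top-level queue.
    void admit(Job j) { levels[0].push_back(j); }

    // Steps 2, 5 and 6: run the head of the highest non-empty queue for up to
    // one quantum; if it is not finished, demote it (or recycle it round robin
    // at the base level).
    void run_one_quantum() {
        for (std::size_t lvl = 0; lvl < levels.size(); ++lvl) {
            if (levels[lvl].empty()) continue;
            Job j = levels[lvl].front();
            levels[lvl].pop_front();
            long slice = std::min(quantum[lvl], j.remaining);
            j.remaining -= slice;           // "execute" for up to one quantum
            if (j.remaining > 0) {
                std::size_t next = std::min(lvl + 1, levels.size() - 1);
                levels[next].push_back(j);  // demote, or round robin at the base
            }
            return;                         // a completed job simply leaves the system
        }
    }
};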

Figure 2: Basic Analysis and Needs of NMLFQ


When the kernel does a context switch and restores the context of a process, the process resumes execution from the point where it had been suspended. At the conclusion of a context switch, the kernel executes the scheduling algorithm to select a process, picking the highest-priority process from those in the process queue in the states “ready to run” and “preempted”. If several processes tie for the highest priority, the kernel picks the one that has been ready to run for the longest time, following a round-robin scheduling policy. If there are no processes eligible for execution, the process scheduler idles until a process is ready to run [5].

This scheduler also considers an important parameter, priority, which is a factor to be considered while developing scheduling policies for soft and firm real-time systems. In this scheduler, multiple waiting queues have been implemented in which each of the ready processes waits for its CPU cycle. The processes go into each of the queues based on their priority levels: 0-49 low priority, 50-99 medium priority and 100-149 high priority. The high-priority queues are given more CPU cycles than the lower-priority ones, avoiding starvation while allowing the higher-priority processes more CPU time. The scheduling within the queues happens in a round-robin fashion, minimizing the possibility of a very high-priority process being ignored for long in its queue.
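The three priority bands described above (0-49 low, 50-99 medium, 100-149 high) map onto queues roughly as in this illustrative sketch:

// Map a process priority (0-149) to one of the three NMLFQ bands described
// in the text. Higher bands receive more CPU cycles; within a band,
// processes are served round robin.
enum class Band { Low, Medium, High };

Band band_for_priority(int priority) {
    if (priority < 50)  return Band::Low;     // 0-49
    if (priority < 100) return Band::Medium;  // 50-99
    return Band::High;                        // 100-149
}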

As mentioned before, in NMLFQ the operating system builds several separate queues and specifies the quantum for each queue. Generally, in this method all processes eventually end up in the designated queue and move out of the system. In conventional methods, the number of queues and the quantum sizes are fixed before the processes run, so the operating system has no role in controlling the number of queues and the quantum of each layer.

CPU scheduling algorithms receive, for every process submitted to the system, three major factors attached to each process. These factors are priority, arrival time and burst time.

Normally, if we sort the importance of each factor with respect to the process, the priority factor is more important than the arrival and burst times, and the burst time is more important than the arrival time.

A new factor f is suggested to be attached to each submitted process. This factor sums the effects of all three basic factors (priority, arrival time and burst time). The equation that summarizes this relation is:

f = Priority + Arrival Time + Burst Time    (1)

The value of the factor f is calculated for each process from equation (1). Depending on this new factor, the submitted processes are ordered in the ready queue.
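Equation (1) can be applied to order the ready queue as in the following sketch. The field names are hypothetical, and the ascending sort direction is an assumption of the sketch, since the paper does not state it explicitly.

#include <algorithm>
#include <vector>

// Illustrative process record ordered by the combined factor f of
// equation (1): f = priority + arrival time + burst time.
struct Pcb {
    int id;
    double priority;
    double arrival_time;
    double burst_time;
    double f() const { return priority + arrival_time + burst_time; }
};

// Order the ready queue by the combined factor.
void order_ready_queue(std::vector<Pcb> &ready) {
    std::sort(ready.begin(), ready.end(),
              [](const Pcb &a, const Pcb &b) { return a.f() < b.f(); });
}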

4.1. Design of the New Multilevel Queue with Positive Feedback

Figure 3: Confirmation of Completion of the Quantum and Burst Time of a Process

Multilevel queues are a timesharing scheme like round robin, but there are several different RR queues instead of just one. Each queue has time


quanta of different lengths. As processes age, they are moved to a different queue. Say a system has three different queues. Processes in the first queue get one time quantum each time they run. For short processes this is enough, and they will finish quickly without ever leaving the first queue. If a process stays in this queue for a while and does not finish, the system moves it to the second queue, where it gets two time quanta each time it runs. After even longer, it is moved to the final queue, where it gets, for example, five quanta each time it runs [6]. The point of this is that short processes are switched often, creating the illusion of concurrent multitasking, whereas long processes are switched less often, reducing the CPU time the system spends on switching and increasing throughput for those long tasks.

Here we are concerned with short-term scheduling, i.e., the analysis of processes residing in main memory waiting to be executed by the processor. The goal of this work is to allocate time in a way that optimizes some systematic behavior. Many criteria have been mentioned for evaluating scheduling in different research papers; among them we can refer to two important viewpoints:

1. From the viewpoint of the user.
2. From the viewpoint of the system.

Each of these two categories has many criteria to be discussed. In time-sharing, we try to reduce the variation in response time, because the goal of some operating systems is to provide services to all users in a suitable way while minimizing the response time for users. As is known, Multi Level Queue (MLQ) scheduling is built from several prepared queues, each with its respective service policy; MLFQ scheduling acts the same as MLQ, except that a process can move dynamically between the queues. So processes that need a large amount of CPU time are sent to the lower queues, while I/O-bound processes and those related to interactive work are sent to queues with a higher priority of response. The NMLFQ scheduling algorithm focuses on total time, response time and the application of priority, while trying not to let these criteria negatively influence one another. Our main job in this research paper is optimizing the response time of MLFQ by using a new object-oriented model [6]. NMLFQ scheduling organizes the queues to minimize the queuing delay and optimize the efficiency of the queuing environment.

• Give a newly runnable process a high priority and a very short time slice. If the process uses up the time slice without blocking, then decrease its priority by one and double the time slice for next time. Going through this rule with initial values of 1 ms and priority 100 gives the progression sketched after this list.

• Keep a history of recent CPU usage for each process: if it is getting less than its share, boost its priority; if it is getting more than its share, reduce its priority.

• A process can move between the various queues; aging can be implemented this way.

• A multilevel feedback queue scheduler is defined by the following parameters:
o the number of queues,
o the scheduling algorithm for each queue,
o the method used to determine when to upgrade a process,
o the method used to determine when to demote a process, and
o the method used to determine which queue a process will enter when it needs service.

• Overhead: the number of context switches.

• Efficiency: utilization of the CPU and devices.

• Response time: how long it takes to do something.
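As referenced in the first item above, a short trace of the "decrease priority by one and double the time slice" rule, starting from the stated initial values of 1 ms and priority 100, could look like this (illustrative sketch only):

#include <cstdio>

// Trace of the rule above: a newly runnable process starts at priority 100
// with a 1 ms slice; each time it uses its whole slice without blocking,
// its priority drops by one and its slice doubles.
int main() {
    int priority = 100;
    int slice_ms = 1;
    for (int round = 0; round < 5; ++round) {
        std::printf("round %d: priority %d, slice %d ms\n", round, priority, slice_ms);
        --priority;     // used the whole slice: demote by one level
        slice_ms *= 2;  // and double the slice for next time
    }
    // Prints (100, 1 ms), (99, 2 ms), (98, 4 ms), (97, 8 ms), (96, 16 ms).
}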

4.2. Background Significance

In developing the NMLFQ scheduler, we had a number of specific design criteria. The criteria and rationale for the scheduler design are:

1. The same scheduling policy should apply to every application; regardless of its scheduling needs, a uniform algorithm simplifies scheduling decisions.

Figure 4: Priority Levels of the New Multilevel Feedback Queue Scheduler with Distinguishing Processes


2. Users or developers need not provide any a priori information about processes.

3. The scheduler will enhance the performance of soft real-time applications.

4. The default behavior of the scheduler should be reasonable and consistent with general-purpose time-sharing schedulers.
o The scheduler should favor interactive processes over CPU-bound processes.
o No process should starve. The presence of compute-intensive periodic processes must not completely hinder the progress of other processes. Processes will receive time slices that prevent them from monopolizing the CPU and improve overall responsiveness.
o When the system is not fully loaded, changes to the workload should not affect the performance of already executing soft real-time processes. For fully loaded systems, performance should degrade gracefully.

4.3. Significance of the Work Done

There are several critical design choices to be made when building, or rebuilding, an operating system that is to be a hard real-time system. One of these design choices is how to implement CPU scheduling; in other words, how to decide which process is next to get hold of the CPU and execute its code. This research focuses on the design choices taken when creating NMLFQ for the embedded system domain.

The NMLFQ model has several advantages. It is simple and easy to understand, and the algorithm used in NMLFQ is efficient: since operations on the queues (insertion, removal) and finding the highest-priority task take constant time, O(1) scheduling can be achieved easily [7]. Because I/O-bound tasks always stay in the higher-priority queues, good responsiveness and interactivity can be guaranteed.

However, several drawbacks and problems arise when implementing an NMLFQ-based scheduler. First, the division between I/O-bound tasks and CPU-bound tasks is mostly based on CPU usage and sleep (waiting) time, and it is often hard to make this division precisely in an implementation. Besides, tasks doing disk I/O are normally deemed I/O-bound even though they have no interactive requirements. Thus, to prevent genuinely interactive tasks from being interfered with by tasks doing disk I/O, an identification mechanism is necessary.

Like any other priority scheduling algorithm, indefinite blocking and starvation need to be taken into careful consideration [8]. Since newly created tasks are inserted into the NMLFQ at arbitrary times, and tasks that have used up their time slice are allocated a new one and re-inserted, there is no upper bound on the starvation time of the lowest-priority task when the system is under heavy load. Moreover, preventing starvation often conflicts with interactivity, which can reduce the benefits of the NMLFQ model.

The best-known solution to starvation is aging, which gradually increases the priority of tasks that have been waiting in the system for a long period [8]. Since aging depends on how task priority is raised, the method used to calculate task priority becomes an important issue in this solution.

In conclusion, the NMLFQ model suggests a basic framework for building an interactive heuristic real-time scheduler with a dynamic priority mechanism. Some vital issues, such as how priority is calculated and how starvation should be prevented, depend heavily on other OS components and related implementations.

4.4. Gantt Chart Evaluations in Designing the NMLFQ Scheduler

The real key is designing the scheduler. Usually the data structure of the ready list in the scheduler is designed to minimize the worst-case length of time spent in the scheduler's critical section, during which preemption is inhibited and, in some cases, all interrupts are disabled. However, the choice of data structure also depends on the maximum number of tasks that can be on the ready list.

Process    Burst Time
P1         24
P2         3
P3         3

Figure 5: N Independent Process Queues. P1, P2, P3, P4… Jobs, One Queue per Priority and One Algorithm per Queue



Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

|   P1   |   P2   |   P3   |
0        24       27       30

Waiting time for P1 = 0; P2 = 24; P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17.

Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:

|   P2   |   P3   |   P1   |
0        3        6        30

Waiting time for P1 = 6; P2 = 0; P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3, which is much better than the previous case.
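The two average waiting times above can be reproduced with a short calculation; this is an illustrative sketch, not part of the scheduler code.

#include <cstdio>
#include <vector>

// Non-preemptive FCFS waiting times for a given arrival order of CPU bursts.
// Reproduces the two cases above: {24, 3, 3} -> 17 and {3, 3, 24} -> 3.
double average_waiting_time(const std::vector<int> &bursts) {
    long elapsed = 0, total_wait = 0;
    for (int b : bursts) {
        total_wait += elapsed;  // each job waits for everything scheduled before it
        elapsed += b;
    }
    return static_cast<double>(total_wait) / bursts.size();
}

int main() {
    std::printf("P1,P2,P3: %.1f\n", average_waiting_time({24, 3, 3}));  // 17.0
    std::printf("P2,P3,P1: %.1f\n", average_waiting_time({3, 3, 24}));  // 3.0
}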

4.5. Class Table of the NMLFQ Queue Scheduler

Class: SCHEDULER_NMLFQ
Attributes: pid, value, pri_first, pri_temp
Methods: Get_process_id(), Get_arrival_time(), Get_burst_time(), Get_turn_around_time(), Get_waiting_time(), Set_process_id(), Set_arrival_time(), Set_burst_time(), Set_turn_around_time(), Set_waiting_time(), Process(), Scheduler()

4.6. Code Snippet for the Scheduler

#include "scheduler_NMLFQ_queue.cpp"

struct priority {
    int pid;
    int value;
    struct priority *next;
};

class nmlfq_triple_queue : public scheduler {
    priority *pri_first, *pri_temp;
    priority *pri_second, *pri_temp_second;
    priority *pri_third, *pri_temp_third;
    int high_quantum, medium_quantum, low_quantum;
public:
    int set_values(int, int, int, int);
    void set_quantum(int);
    int compute();
    void destroy();
};

Figure 6: New MLFQ scheduler.

5. SIMULATION AND EXPERIMENTAL RESULTS OF THE NEW MLFQ SCHEDULER

The evaluation of results is performed in two phases.

In the first phase, the model of NMLFQ is simulated using Java. The simulation involves five processes A, B, C, D and E, as shown in Figure 7.

In the second phase, the model of NMLFQ is implemented and executed on the ARM-7 RISC embedded processor. As the research title is “Study of Scheduler for Real Time and Embedded System Domain”, the model of NMLFQ is thoroughly exercised on the embedded processor to achieve real-time results. Snapshots of the invocation are depicted in Figure 8 and Figure 9. The evaluation of results on the Keil simulator and on the ARM-7 processor is tabulated in Table 1.

Before invoking the NMLFQ model directly on the embedded system, it was successfully ported onto the KEIL embedded system simulator software, and all compatibility issues were resolved.

Process     Process type    Arrival time    Burst times
Pid 1, A    RT task         0               4, 5, 6
Pid 2, B    RT task         2               2, 7, 3
Pid 3, C    RT task         4               5, 2, 3
Pid 4, D    conventional    10              7, 10, 2
Pid 5, E    conventional    14              8, 6, 3

Figure 7: Simulation of the New MLFQ Scheduler using an Object-Oriented Approach with Five Processes in Execution. A, B, C are Regular Processes and D, E are Batch Processes





We define the processor utilization factor to be the fraction of processor time spent in the execution of the task set [9]. In other words, the utilization factor is equal to one minus the fraction of idle processor time. Since C_i / T_i is the fraction of processor time spent executing task i, for m tasks the utilization factor U is:

U = Σ_{i=1}^{m} (C_i / T_i)    (2)
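A direct reading of equation (2) in code, as an illustrative sketch:

#include <vector>

// Processor utilization factor U = sum over all tasks of C_i / T_i
// (equation 2), where C_i is the execution time and T_i the period of task i.
struct TaskTiming { double C; double T; };

double utilization(const std::vector<TaskTiming> &tasks) {
    double u = 0.0;
    for (const auto &t : tasks) u += t.C / t.T;
    return u;  // U <= 1 is necessary for the task set to be schedulable
}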

6. INFERENCE OF RESULTS

• The NMLFQ algorithm is efficient when the load on the scheduler is in the range of 20% to 80%. Beyond 80% load, it is difficult to trace the behavior manually.

• A response ratio about 10% better is achieved compared to other CPU scheduling algorithms such as round robin, first come first served and shortest job next.

6.1. Evaluation of the Response Ratio of Scheduled Processes on the Embedded System

Table 1: Evaluation of Results on the Keil Simulator and on the ARM-7 Processor

Figure 8: The Execution of the NMLFQ Model using C++ on the ARM-7 RISC Embedded System - Inputs as Given for Different Processes

Figure 9: The Execution of the NMLFQ Model using C++ on the ARM-7 RISC Embedded System - The Results Achieved for Turnaround Time, Waiting Time and Response Ratio

The graphical notation of the results achieved with respect to waiting time and turnaround time is drawn as follows.

For Two Processes

For Three Processes


For Four Processes

Figure 10: Graphical Notation of the Results Achieved with Respect to Waiting Time and Turnaround Time

Since the process arrival times are randomly distributed, we used discrete event simulation [10], so the system state changes when an event occurs during the simulation time. At first, we sort the processes by their arrival time and then find the first process to handle and provide its service. The NMLFQ average response time is better by 10% than that of the other scheduling algorithms.

7. CONCLUSIONS AND FUTURE WORK

7.1. Conclusion

The main contributions made by this research are as follows. We have developed the New Multi Level Feedback Queue, a real-time CPU scheduling algorithm that guarantees low deadline miss ratios. The scheduler code is developed using C++ as well as Java on the Linux operating system, and a Gantt chart log is provided along with a graphical user interface simulation. The simulations show that the NMLFQ algorithm gives 10% better performance compared to Multi Level Queue scheduling with respect to response time and waiting time. The model of NMLFQ is thoroughly exercised on the ARM-7 embedded processor to achieve real-time results. Before invoking the NMLFQ model directly on the embedded system, it was successfully ported onto the KEIL embedded system simulator software.

7.2. Future Work

• A bounded starvation mechanism with very little overhead can also be proposed in this scheduler. Involuntary sleeping time is therefore excluded from the priority computation, and interactive tasks are also properly identified with this mechanism. With this bounded starvation mechanism, no higher-priority task would be blocked by a lower-priority task unless the lower-priority one were starving.

• In the future, we plan to perform more experiments with periodic tasks. We will also experiment with a mixture of periodic and aperiodic tasks.

• Research is still needed regarding how to merge NMLFQ programmatically into kernel code and whether it is feasible to do so in real time.

• In addition, the NMLFQ scheduler could be improved to integrate the algorithm on a real-time fuzzy processor.

• Ways to improve CPU utilization and the adaptation of all available resources are still to be found.

ACKNOWLEDGMENTS

I am grateful to my guide and the Research Progress Assessment Committee members for their guidance and valuable suggestions in improving the quality of this research. My gratitude goes to the people who previously succeeded in implementing general scheduler mechanisms, process communication, event analysis, interrupt handling, etc., and made this work available for further development.

REFERENCES

[1] C. S. Wong, I. K. T. Tan, R. D. Kumari and W. Fun, "Towards Achieving Fairness in the Linux Scheduler", ACM SIGOPS Operating Systems Review: Research and Developments in the Linux Kernel, 42(5), 2008.

[2] Chih-Lin Hu, "On-Demand Real-Time Information Dissemination: A General Approach with Fairness, Productivity and Urgency", 21st International Conference on Advanced Information Networking and Applications (AINA '07), pp. 362-369, 21-23 May 2007.

[3] Gauthier L., Yoo S. and Jerraya A., "Automatic Generation and Targeting of Application-specific Operating Systems and Embedded Systems Software", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 20(11), pp. 1293-1301, November 2005.

[4] Ghosh S., Mosse D. and Melhem R., "Fault-Tolerant Rate Monotonic Scheduling", Journal of Real-Time Systems, pp. 149-181, 1998.



[5] Kenneth J. Duda and David R. Cheriton, "Borrowed-Virtual-Time (BVT) Scheduling: Supporting Latency-sensitive Threads in a General-purpose Scheduler", Proceedings of the Seventeenth ACM Symposium on Operating Systems Principles, pp. 261-276, December 12-15, 1999, Charleston, South Carolina, United States.

[6] Leung J. Y. T. and Whitehead J., "On the Complexity of Fixed-Priority Scheduling of Periodic Real-Time Tasks", Performance Evaluation, No. 2, pp. 237-250, 1982.

[7] Lu C., Stankovic A., Tao G. and Son H. S., "Feedback Control Real-time Scheduling: Framework, Modeling and Algorithms", Special Issue of the Real-Time Systems Journal on Control-Theoretic Approaches to Real-Time Computing, 23(1/2), pp. 85-126, 2002.

[8] Manimaran G. and Siva Ram Murthy C., "A Fault-Tolerant Dynamic Scheduling Algorithm for Multiprocessor Real-time Systems and its Analysis", IEEE Transactions on Parallel and Distributed Systems, 9(11), pp. 1137-1152, 1998.

[9] Wang J. and Ravindran Binoy, "Time-utility Function-driven Switched Ethernet: Packet Scheduling Algorithm, Implementation, and Feasibility Analysis", IEEE Transactions on Parallel and Distributed Systems, 15(2), pp. 119-133, 2004.

[10] Yamada S. and Kusakabe S., "Effect of Context Aware Scheduler on TLB", IEEE International Symposium on Parallel and Distributed Processing (IPDPS 2008), pp. 1-8, 14-18 April 2008, DOI 10.1109/IPDPS.2008.4536361.