IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. X, NO. XX, XXXXXXX 200X

On Wireless Scheduling Algorithms for Minimizing the Queue-Overflow Probability

V. J. Venkataramanan and Xiaojun Lin, Member, IEEE

Abstract— In this paper, we are interested in wireless scheduling algorithms for the downlink of a single cell that can minimize the queue-overflow probability. Specifically, in a large-deviation setting, we are interested in algorithms that maximize the asymptotic decay-rate of the queue-overflow probability, as the queue-overflow threshold approaches infinity. We first derive an upper bound on the decay-rate of the queue-overflow probability over all scheduling policies. We then focus on a class of scheduling algorithms collectively referred to as the "α-algorithms." For a given α ≥ 1, the α-algorithm picks the user for service at each time that has the largest product of the transmission rate multiplied by the backlog raised to the power α. We show that when the overflow metric is appropriately modified, the minimum-cost-to-overflow under the α-algorithm can be achieved by a simple linear path, and it can be written as the solution of a vector-optimization problem. Using this structural property, we then show that when α approaches infinity, the α-algorithms asymptotically achieve the largest decay-rate of the queue-overflow probability. Finally, this result enables us to design scheduling algorithms that are both close-to-optimal in terms of the asymptotic decay-rate of the overflow probability, and empirically shown to maintain small queue-overflow probabilities over queue-length ranges of practical interest.

Index Terms— Queue-Overflow Probability, Wireless Scheduling, Large Deviations, Asymptotically Optimal Algorithms, Cellular System.

Manuscript received September 22, 2008; revised April 29, 2009; accepted August 30, 2009. Recommended by Associate Editor Jean Walrand. This work has been partially supported by NSF grants CNS-0626703, CNS-0643145, CNS-0721484, CCF-0635202 and a grant from Purdue Research Foundation. An earlier version of this paper has appeared in 45th Annual Allerton Conference on Communication, Control, and Computing, 2007 [1]. The authors are with Center for Wireless Systems and Applications (CWSA) and School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47906 (Email: {vvenkat,linx}@ecn.purdue.edu).

I. INTRODUCTION

Link scheduling is an important functionality in wireless networks due to both the shared nature of the wireless medium and the variations of the wireless channel over time. In the past, it has been demonstrated that, by carefully choosing the scheduling decision based on the channel state and/or the demand of the users, the system performance can be substantially improved (see, e.g., the references in [2]). Most studies of scheduling algorithms have focused on optimizing the long-term average throughput of the users, or in other words stability.

Consider the downlink of a single cell in a cellular network. The base-station transmits to N users. There is a queue Q_i associated with each user i = 1, 2, ..., N. Due to interference, at any given time the base-station can only serve the queue of one user. Hence, this system can be modelled as a single server serving N queues. Assume that data for user i arrives at the base-station at a constant rate λ_i. Further, assume a slotted model in which, in each time-slot, the wireless channel can be in one of M states. In each state m = 1, 2, ..., M, if the base-station picks user i to serve, the corresponding service rate is F_i^m.
Hence, at each time-slot Q_i increases by λ_i, and if user i is served while the channel is in state m, Q_i decreases by F_i^m. We assume that perfect channel-state information is available at the base-station.

In a stability problem [3]–[5], the goal is to find algorithms for scheduling the transmissions such that the queues are stabilized at given offered loads. An important result along this direction is the development of the so-called "throughput-optimal" algorithms [3]. A scheduling algorithm is called throughput-optimal if, at any offered load under which any other algorithm can stabilize the system, this algorithm can stabilize the system as well. It is well-known that the following class of scheduling algorithms is throughput-optimal [3]–[5]: for a given α ≥ 1, the base-station picks for service at each time the user that has the largest product of the transmission rate multiplied by the backlog raised to the power α. In other words, if the channel is in state m, the base-station chooses the user i with the largest (Q_i)^α F_i^m. To emphasize the dependency on α, in the sequel we will refer to this class of throughput-optimal algorithms as α-algorithms.

While stability is an important first-order metric of success, for many delay-sensitive applications it is far from sufficient. In this paper, we are interested in the probability of queue overflow, which is equivalent to the delay-violation probability under certain conditions. The question that we attempt to answer is the following: Is there an optimal algorithm in the sense that, at any given offered load, the algorithm achieves the smallest probability that any queue overflows, i.e., the smallest value of P[max_{1≤i≤N} Q_i(T) ≥ B], where B is the overflow threshold?
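The α-algorithm's selection rule stated above is a one-line maximization. The following sketch implements it directly; the function name and argument layout are illustrative, not from the paper.

```python
def alpha_schedule(queues, rates, alpha):
    """Pick the user to serve under the alpha-algorithm.

    queues: current backlogs Q_i, one per user.
    rates:  service rates F_i^m of each user i in the current channel state m.
    alpha:  exponent alpha >= 1; larger alpha weights backlog more heavily.

    Returns the index i maximizing (Q_i)**alpha * F_i^m.
    """
    return max(range(len(queues)), key=lambda i: queues[i] ** alpha * rates[i])
```

Note how α trades off channel opportunism against backlog: with backlogs [4, 2] and rates [1, 3], α = 1 serves the fast-channel user (product 6 vs. 4), while α = 2 serves the longer queue (16 vs. 12).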
Note that if we impose a quality-of-service (QoS) constraint on each user in the form of an upper bound on the queue-overflow probability, then the above optimality condition also implies that the algorithm can support the largest set of offered loads subject to the QoS constraint.

Unfortunately, calculating the exact queue distribution is often mathematically intractable. In this paper, we use large-deviation theory [11], [12] and reformulate the QoS constraint in terms of the asymptotic decay-rate of the queue-overflow probability as B approaches infinity. In other words, we are interested in finding scheduling algorithms that can achieve the smallest possible value of

    lim sup_{B→∞} (1/B) log P[max_{1≤i≤N} Q_i(T) ≥ B].   (1)
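The overflow probability inside (1) can be estimated for a finite threshold B by Monte Carlo simulation of the slotted model. The sketch below assumes, purely for illustration, that the channel state is i.i.d. uniform over the M states each slot (the paper's analysis does not require this), and uses the α-algorithm as the scheduler; all names are ours.

```python
import random

def simulate_overflow_prob(lam, F, alpha, T, B, runs=2000, seed=0):
    """Monte Carlo estimate of P[max_i Q_i(T') >= B for some T' <= T]
    under the alpha-algorithm.

    lam: arrival rates lambda_i (constant per slot, as in the model).
    F:   F[m][i] = service rate of user i when the channel is in state m;
         each slot's state m is drawn uniformly here (an assumption of
         this sketch, not of the paper).
    alpha: exponent of the alpha-algorithm.
    T:   time horizon in slots.  B: overflow threshold.
    """
    rng = random.Random(seed)
    n, M = len(lam), len(F)
    overflows = 0
    for _ in range(runs):
        q = [0.0] * n
        hit = False
        for _ in range(T):
            m = rng.randrange(M)
            # alpha-algorithm: serve the user maximizing (Q_i)^alpha * F_i^m.
            i_star = max(range(n), key=lambda i: q[i] ** alpha * F[m][i])
            for i in range(n):
                q[i] += lam[i]
            q[i_star] = max(0.0, q[i_star] - F[m][i_star])
            if max(q) >= B:
                hit = True
                break
        overflows += hit
    return overflows / runs
```

For a fixed large B, the quantity -(1/B) log p, with p the estimated probability, gives a finite-B proxy for the decay rate in (1); the paper's contribution is characterizing its limiting value analytically rather than by simulation.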