Distributed Dynamic Capacity Contracting: A congestion pricing framework for Diff-Serv*

Murat Yuksel (CS Department, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180, USA; yuksem@cs.rpi.edu) and Shivkumar Kalyanaraman (ECSE Department, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180, USA; shivkuma@ecse.rpi.edu)

* This work is sponsored by NSF under contract number ANI9819112, and co-sponsored by Intel Corporation.

Abstract. In order to provide better Quality-of-Service (QoS) in large networks, several congestion pricing proposals have been made in the last decade. Usually, however, those proposals studied optimal strategies and did not focus on implementation issues. Our main contribution in this paper is to address implementation issues for congestion-sensitive pricing over a single domain of the differentiated-services (diff-serv) architecture of the Internet. We propose a new congestion-sensitive pricing framework, Distributed Dynamic Capacity Contracting (Distributed-DCC), which is able to provide a range of fairness (e.g. max-min, proportional) in rate allocation by using pricing as a tool. Within the Distributed-DCC framework, we develop an Edge-to-Edge Pricing Scheme (EEP) and present simulation experiments of it.

1 Introduction

As multimedia applications with extensive traffic loads become more common, better ways of managing network resources are necessary in order to provide sufficient QoS for those applications. Among the several methods for improving QoS in multimedia networks and services, one particular method is to employ congestion pricing. The main idea is to increase the service price when network congestion is high, and to decrease the price when congestion is low. Implementation of congestion pricing still remains a challenge, although several proposals have been made, e.g. [1, 2]. Among many others, two major implementation obstacles can be identified: the need for timely feedback to users about the price, and the determination of congestion information in an efficient, low-overhead manner.

The first problem, timely feedback, is very hard to solve in a large network such as the Internet. In [3], the authors showed that users do need feedback about the charging of the network service (such as the current price and a prediction of service quality in the near future). However, in our recent work [4], we illustrated that congestion control through pricing cannot be achieved if price changes are performed at a time-scale larger than roughly 40 round-trip-times (RTTs), which is not possible to implement in many cases.

We believe that the problem of timely feedback can be solved by placing intelligent intermediaries (i.e. software or hardware agents) between users and service providers. In this paper, we do not focus on this particular issue and leave the development of such intelligent agents for future research.

The second problem, obtaining congestion information, is also very hard to solve in a way that does not require a major upgrade of network routers. However, in diff-serv [5], it is possible to determine congestion information through good ingress-egress coordination. This flexible environment of diff-serv motivated us to develop a pricing scheme on top of it.

In our previous work [6], we presented a simple congestion-sensitive pricing "framework", Dynamic Capacity Contracting (DCC), for a single diff-serv domain. DCC treats each edge router as a station of a service provider, or a station of a coordinating set of service providers. Users (i.e. individuals or other service providers) make short-term contracts with the stations for network service. During the contracts, the station receives congestion information about the network core at a time-scale smaller than the contract length. The station then uses that congestion information to update the service price at the beginning of each contract. Several pricing "schemes" can be implemented within that framework.

Fig. 1. DCC framework on the diff-serv architecture.

DCC models a short-term contract for a given traffic class as a function of the price per unit traffic volume P_v, the maximum volume V_max (the maximum number of bytes that can be sent during the contract), and the contract length T:

    Contract = f(P_v, V_max, T)    (1)

Figure 1 illustrates the big picture of the DCC framework. Customers can only access the network core by making contracts with the provider stations placed at the edge routers. The stations offer contracts (i.e. V_max and T) to the users. Access to these available contracts can be arranged in different ways, which we call the edge strategy. Two basic edge strategies are "bidding" (many users bid for an available contract) and "contracting" (users negotiate P_v with the provider for an available contract). So, the edge strategy is the decision-making mechanism that determines which customer gets an available contract at the provider station.
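
To make the contract abstraction in (1) concrete, here is a minimal sketch of a DCC-style short-term contract as a data structure. The class and field names are ours and purely illustrative; they are not part of the framework definition.

```python
from dataclasses import dataclass

@dataclass
class Contract:
    """One DCC short-term contract, Contract = f(Pv, Vmax, T) as in (1)."""
    price_per_volume: float   # Pv, e.g. in $/Mb
    max_volume: float         # Vmax, maximum volume that may be sent during the term (Mb)
    length: float             # T, contract length in seconds

    def charge(self, volume_mb: float) -> float:
        """Charge for sending `volume_mb`, capped at the contracted maximum."""
        return self.price_per_volume * min(volume_mb, self.max_volume)

# Example: a 4-second contract at 1.5 $/Mb allowing at most 40 Mb.
offer = Contract(price_per_volume=1.5, max_volume=40.0, length=4.0)
```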

However, in DCC we assumed that all the provider stations advertise the same price value for the contracts, which is very costly to implement over a wide area network, simply because the price value cannot be communicated to all stations at the beginning of each contract. In this paper, we relax this assumption by letting the stations calculate their prices locally and advertise prices that differ from those of the other stations. We call this new version of DCC Distributed-DCC. We introduce ways of managing the overall coordination of the stations for the common purposes of fairness and stability. We then develop a pricing scheme, Edge-to-Edge Pricing (EEP), and illustrate its stability by simulation experiments. We address fairness problems related to pricing, and show that the framework can achieve max-min and proportional fairness by tuning a parameter called the fairness coefficient.

The paper is organized as follows: In the next section, we position our work and briefly survey relevant work in the area. In Section 3 we describe the Distributed-DCC framework and investigate various issues, such as price calculation, fairness, and scalability. In Section 4 we develop the pricing scheme EEP, and in Section 5 we present a comparative experimental evaluation of EEP. We conclude with a summary and discussion.

2 Related Work

There have been several pricing proposals, which can be classified in many ways, such as static vs. dynamic, or per-packet charging vs. per-contract charging. Although there are opponents to dynamic pricing in the area (e.g. [7, 8]), most of the proposals have been for dynamic pricing (specifically congestion pricing) of networks. Examples of dynamic pricing proposals are MacKie-Mason and Varian's Smart Market [1], Gupta et al.'s Priority Pricing [9], Kelly et al.'s Proportional Fair Pricing (PFP) [10], Semret et al.'s Market Pricing [11], and Wang and Schulzrinne's Resource Negotiation and Pricing (RNAP) [12, 2]. Odlyzko's Paris Metro Pricing (PMP) [13] is an example of a static pricing proposal. Clark's Expected Capacity [14] and Cocchi et al.'s Edge Pricing [15] allow both static and dynamic pricing. In terms of charging granularity, Smart Market, Priority Pricing, PFP and Edge Pricing employ per-packet charging, whilst RNAP and Expected Capacity do not.

Smart Market is based primarily on imposing per-packet congestion prices. Since Smart Market performs pricing on a per-packet basis, it operates at the finest possible pricing granularity, which makes it capable of ideal congestion pricing. However, Smart Market is not deployable because of its per-packet granularity and its many requirements on routers. In [16], we studied Smart Market and the difficulties of its implementation in more detail. While Smart Market holds one extreme in terms of granularity, Expected Capacity holds the other. Expected Capacity proposes to use long-term contracts, which can give a clearer performance expectation, for statistical capacity allocation and pricing. Prices are updated at the beginning of each long-term contract, which incorporates little dynamism into prices. Our work, Distributed-DCC, is a middle ground between Smart Market and Expected Capacity in terms of granularity.

Distributed-DCC performs congestion pricing over short-term contracts, which allows more dynamism in prices while keeping the pricing overhead small.

Another work close to ours is RNAP, which also focused mainly on implementation issues of congestion pricing over diff-serv. Although RNAP provides a complete picture for the incorporation of admission control and congestion pricing, it has excessive implementation overhead, since it requires all network routers to participate in the determination of congestion prices. This requires upgrades to all routers, similar to the case of Smart Market. Our work solves this problem by requiring upgrades only at the edge routers rather than at all routers.

3 Distributed-DCC: The Framework

Distributed-DCC is specifically designed for the diff-serv architecture, because the edge routers can perform the complex operations that are essential to implementing congestion pricing. Each edge router is treated as a station of the provider. Each station advertises locally computed prices based on information received from the other stations. The main framework describes how to preserve coordination among the stations such that stability and fairness of the overall network are preserved. A Logical Pricing Server (LPS), which can be implemented in a centralized or distributed manner (see Section 3.6), plays a crucial role. Figure 2 illustrates the basic functions of LPS, which will be better understood in the following sub-sections. The following sub-sections investigate several issues regarding the framework.

3.1 How to Calculate p_ij?

Each ingress station i keeps a "current" price vector p_i, where p_ij is the price for the flow from ingress i to egress j. So, how do we calculate the price-per-flow, p_ij? The ingresses estimate a budget for each edge-to-edge flow passing through them. Let b̂_ij be the currently estimated budget for flow f_ij (i.e. the flow from ingress i to egress j). The ingresses send the estimated budgets to the corresponding egresses (i.e. b̂_ij is sent from ingress i to egress j) at a deterministic time-scale. On the other side, the egresses receive budget estimates from all the ingresses, and they also estimate a capacity ĉ_ij for each particular flow. In other words, egress j calculates ĉ_ij and is informed about b̂_ij by ingress i. Egress j then penalizes or favors f_ij by updating its estimated budget value, i.e. b_ij = f(b̂_ij, [parameters]), where [parameters] are optional parameters used to decide whether to penalize or favor the flow. For example, if f_ij passes through more congested areas than the other flows, egress j can penalize f_ij by reducing its budget estimate b̂_ij.

At another time-scale (which can be larger than the ingress-egress time-scale, but should be smaller than the contract length), the egresses keep sending information to LPS. More specifically, for a diff-serv domain with n edge routers, egress j sends the following information to LPS:

1. the updated budget estimates of all flows passing through it, i.e. b_ij for i = 1..n and i ≠ j;
2. the estimated capacities of all flows passing through it, i.e. ĉ_ij for i = 1..n and i ≠ j.
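
Purely as an illustration of the egress-to-LPS report described above (the record and field names are ours, not the paper's), the per-flow state could be packaged as follows:

```python
from dataclasses import dataclass

@dataclass
class EgressReport:
    """What egress j reports to LPS for one edge-to-edge flow f_ij (illustrative names)."""
    ingress: int          # i, the ingress the flow enters from
    egress: int           # j, the egress issuing the report
    budget: float         # b_ij, budget estimate after the penalize/favor adjustment
    est_capacity: float   # c_hat_ij, congestion-based capacity estimate (Section 3.3)

def build_reports(egress_id: int, budgets: dict, capacities: dict) -> list:
    """One report per ingress i != j, given per-ingress budget and capacity maps."""
    return [EgressReport(i, egress_id, budgets[i], capacities[i])
            for i in budgets if i != egress_id]
```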

Fig. 2. Major functions of LPS.

LPS receives this information from the egresses and, for each f_ij, calculates an allowed capacity c_ij. Calculating the c_ij's is a complicated task which depends on the b_ij's. In general, flows sharing the same bottleneck should share its capacity in proportion to their budgets. We will later define a generic algorithm for this capacity allocation task. LPS then sends the following information to ingress i:

1. the total estimated network capacity C (i.e. C = Σ_i Σ_j ĉ_ij);
2. the allowed capacities of the edge-to-edge flows starting from ingress i, i.e. c_ij for j = 1..n and j ≠ i.

Now, the pricing scheme at ingress i can calculate a price for each flow by using c_ij and b̂_ij. An example pricing scheme will be described in Section 4.

3.2 Budget Estimation at Ingresses

The ingress stations perform a very simple operation to estimate the budget b̂_ij (i.e. the user's actual willingness-to-pay) of each flow. Ingress i knows its current price p_ij for each flow. When it receives a packet, it just needs to determine which egress station the packet is going to; given that the ingress station has the addresses of all egress stations of the same diff-serv domain, it can find out which egress that is. So, by monitoring the packets transmitted for each flow, the ingress can estimate the budget of each flow. Let x_ij be the traffic volume (number of packets) transmitted for f_ij per unit time; then the budget estimate for f_ij is b̂_ij = x_ij · p_ij. Notice that this operation must be done at the ingress rather than the egress, because some of the packets might be dropped before arriving at the egress. That would cause x_ij to be measured too small, and hence cause b̂_ij to be smaller than it is supposed to be.
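
A minimal sketch of this ingress-side bookkeeping, assuming a per-packet hook that already knows which egress a packet is headed to; the class, method names and units are ours.

```python
from collections import defaultdict

class IngressBudgetEstimator:
    """Estimate b_hat_ij = x_ij * p_ij per egress, as described in Section 3.2."""

    def __init__(self, prices):
        self.prices = prices                 # p_ij: egress id -> current price ($/Mb)
        self.volume = defaultdict(float)     # x_ij: egress id -> Mb seen this interval

    def on_packet(self, egress_id, size_bytes):
        # Count volume at the ingress, before any packet can be dropped in the core.
        self.volume[egress_id] += size_bytes * 8 / 1e6   # bytes -> Mb

    def end_of_interval(self, interval_s):
        # b_hat_ij = (volume per unit time) * price; reset counters for the next interval.
        budgets = {j: (mb / interval_s) * self.prices[j] for j, mb in self.volume.items()}
        self.volume.clear()
        return budgets
```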

3.3 Congestion-Based Capacity Estimation at Egresses

The essence of capacity estimation in Distributed-DCC is to decrease the capacity estimate when there are congestion indications and to increase it when there are none. This makes the prices congestion-sensitive, since the pricing scheme adjusts the price according to the available capacity. In this sense, several capacity estimation algorithms can be used, e.g. Additive Increase Additive Decrease (AIAD) or Additive Increase Multiplicative Decrease (AIMD). We now describe one such algorithm in full.

With a simple congestion detection mechanism (such as marking of packets at interior routers when they are congested), the egress stations make a congestion-based estimate of the capacity for the flows passing through them. Egress stations divide time into deterministic observation intervals and identify each observation interval as congested or non-congested. Basically, an observation interval is congested if a congestion indication was received during that interval. At the end of each observation interval, the egresses update the estimated capacity. Egress j calculates the estimated capacity for f_ij at the end of observation interval t as follows:

    ĉ_ij(t) = β · μ_ij(t),          if interval t was congested
    ĉ_ij(t) = ĉ_ij(t-1) + Δĉ,       if interval t was non-congested

where β is in (0,1), μ_ij(t) is the measured output rate of f_ij during observation interval t, and Δĉ is a pre-defined increase parameter. This algorithm is a variant of the well-known AIMD.
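
A minimal sketch of this AIMD-variant estimator under the stated rule (multiplicative decrease to β times the measured rate on congestion, additive increase by Δĉ otherwise); the class name and default values are our own choices, loosely following the settings used later in Section 5.

```python
class EgressCapacityEstimator:
    """Congestion-based capacity estimate c_hat_ij from Section 3.3."""

    def __init__(self, initial_mbps=0.1, beta=0.95, delta_c_mbps=0.008):
        self.c_hat = initial_mbps     # c_hat_ij
        self.beta = beta              # beta in (0, 1)
        self.delta_c = delta_c_mbps   # additive increase step per observation interval

    def end_of_observation_interval(self, measured_rate_mbps, congested):
        if congested:                 # at least one congestion mark seen in this interval
            self.c_hat = self.beta * measured_rate_mbps
        else:
            self.c_hat += self.delta_c
        return self.c_hat
```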

3.4 Capacity Allocation to Edge-to-Edge Flows

LPS is supposed to allocate the total estimated network capacity C to the edge-to-edge flows in such a way that flows passing through the same bottleneck share the bottleneck capacity in proportion to their budgets, while flows that are not competing with other flows get all the capacity available on their route. The complicated issue is to do this without knowledge of the topology of the network core. We now propose a simple, generic algorithm that performs this centralized rate allocation within the Distributed-DCC framework.

First, at LPS, we introduce a new piece of per-flow state: a flow f_ij is congested if egress j has been receiving congestion indications from that flow recently (we define "recent" below).

At LPS, let K_ij determine whether f_ij is congested or not: if K_ij > 0, LPS considers f_ij congested; otherwise it considers f_ij non-congested. Let us call the time-scale at which LPS and the egresses communicate the LPS interval. At every LPS interval t, LPS calculates K_ij as follows:

    K_ij(t) = k̂,                        if f_ij was congested at t-1
    K_ij(t) = max(0, K_ij(t-1) - 1),     if f_ij was non-congested at t-1        (2)

where k̂ is a positive integer. Notice that the parameter k̂ defines how long a flow stays in the "congested" state after the last congestion indication; in other words, k̂ defines the time-line that determines whether a congestion indication is "recent" or not. Instead of resetting K_ij to k̂ at every congestion indication, several other methods could be used for this purpose, but we proceed with the method in (2).

Given the above method for determining whether a flow is congested, we now describe the algorithm that allocates capacity to the flows. Let F be the set of all edge-to-edge flows in the diff-serv domain, and F_c the set of congested edge-to-edge flows. Let C_c be the sum of the ĉ_ij's over the flows f_ij in F_c, and B_c the sum of the b_ij's over the flows f_ij in F_c. Then, LPS calculates the allowed capacity for f_ij as follows:

    c_ij = (b_ij / B_c) · C_c,    if K_ij > 0
    c_ij = ĉ_ij,                  otherwise

The intuition is that if a flow is congested, then it must be competing with other congested flows. So, a congested flow is allowed a capacity proportional to its budget relative to the budgets of all congested flows. Since we assume no knowledge of the interior topology, we approximate the situation by treating these congested flows as if they traverse a single bottleneck. If knowledge of the interior topology is available, one can easily develop better algorithms by sub-grouping the congested flows that pass through the same bottleneck. If a flow is not congested, it is allowed to use its own estimated capacity, which gives it enough freedom to utilize the capacity available to it.
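
The allocation rule above can be sketched as a single LPS-interval pass. The data layout and the function name below are ours; a real implementation would sit inside LPS's processing of the egress reports.

```python
def allocate_capacity(flows, k_hat=25):
    """One LPS-interval pass of the generic allocation in Section 3.4 (a sketch).
    `flows` maps (i, j) -> {'K': counter, 'congested_now': bool fed back by egress j,
    'b': adjusted budget b_ij, 'c_hat': capacity estimate c_hat_ij}.
    Returns (i, j) -> allowed capacity c_ij."""
    # Update the per-flow congestion counters K_ij as in (2).
    for f in flows.values():
        f['K'] = k_hat if f['congested_now'] else max(0, f['K'] - 1)

    congested = [f for f in flows.values() if f['K'] > 0]
    B_c = sum(f['b'] for f in congested)        # total budget of congested flows
    C_c = sum(f['c_hat'] for f in congested)    # total estimated capacity of congested flows

    allowed = {}
    for key, f in flows.items():
        if f['K'] > 0 and B_c > 0:
            allowed[key] = (f['b'] / B_c) * C_c   # share the single "virtual bottleneck" by budget
        else:
            allowed[key] = f['c_hat']             # uncongested: keep its own estimate
    return allowed
```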

3.5 Fairness

We examine fairness in two main cases:

- Single-bottleneck case: The pricing protocol should charge the same price to users of the same bottleneck. In this way, among users of the same bottleneck, the ones with larger budgets are given more capacity. The intuition behind this reasoning is that the cost of providing capacity to each customer is the same.

- Multi-bottleneck case: The pricing protocol should charge more to users whose traffic traverses more bottlenecks and hence causes more cost. So, beyond proportionality to user budgets, we also want to allocate less capacity to users whose flows traverse more bottlenecks than the others. For multi-bottleneck networks, two main types of fairness have been defined: max-min and proportional [10]. In max-min fairness, flows get equal shares of the bottlenecks, while in proportional fairness flows are penalized according to the number of bottlenecks they traverse. Depending on the cost structure and user utilities, a provider may prefer max-min or proportional fairness. So, we would like to be able to tune the pricing protocol such that the fairness of its rate allocation is whatever the provider wants.

To achieve the above fairness objectives in Distributed-DCC, we introduce new parameters for tuning the rate allocation to flows. In order to penalize f_ij, egress j can reduce b̂_ij, which causes a decrease in c_ij. It uses the following function:

    b_ij = f(b̂_ij, r(t), α, r_min) = b̂_ij / (r_min + (r_ij(t) - r_min) · α)

where r_ij(t) is the estimated congestion cost caused by f_ij, r_min is the minimum possible congestion cost, and α is the fairness coefficient. Instead of b̂_ij, egress j now sends b_ij to LPS. When α is 0, Distributed-DCC employs max-min fairness. As α gets larger, the flow is penalized more and the rate allocation gets closer to proportional fairness. However, if α is too large, the rate allocation moves away from proportional fairness again. Let α* be the α value at which the rate allocation is proportionally fair; if the estimate r_ij(t) is accurate, then α* = 1.

Assuming that the severity of each bottleneck is the same, we can directly use the number of bottlenecks f_ij traverses to calculate r_ij(t) and r_min. In that case, r_min is 1 and r_ij(t) is the number of bottlenecks the flow passes through. If the interior nodes increment a header field of the packets at the time of congestion, then the egress station can estimate the number of bottlenecks the flow traverses. We skip the description of such an estimation algorithm to keep the reader's focus on the major issues.
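
A one-line sketch of this adjustment, using the bottleneck-count interpretation of r_ij(t) (so r_min = 1); the function name is ours.

```python
def adjust_budget(b_hat, r_ij, alpha, r_min=1.0):
    """Egress-side penalty from Section 3.5: scale the budget estimate down for flows
    with a higher congestion cost r_ij (e.g. more bottlenecks traversed).
    alpha = 0 leaves b_hat unchanged (max-min); alpha = 1 with an accurate r_ij
    approximates proportional fairness."""
    return b_hat / (r_min + (r_ij - r_min) * alpha)

# Example: a flow crossing 3 bottlenecks with alpha = 1 reports a third of its budget.
print(adjust_budget(b_hat=30.0, r_ij=3, alpha=1.0))   # -> 10.0
```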

3.6 Scalability

There are two main scalability issues: LPS and the number of flows. First, the flows are not per-connection flows: all the traffic going from edge router i to edge router j is counted as a single flow. This eases the scaling of operations that happen on a per-flow basis. The number of flows in the system is n(n-1), where n is the number of edge routers in the diff-serv domain. So, the number of flows is not a scalability problem for the current Internet, since the number of edge routers in a single diff-serv domain is small.

LPS can be scaled in two ways. The first is to implement LPS in a fully distributed manner, where the edge stations exchange information with each other (as in link-state routing): each station sends a total of n-1 messages to the other stations. This increases the overhead on the network because of the extra messages, i.e. the message complexity increases from O(n) to O(n²). Alternatively, LPS can be divided into multiple local LPSs which synchronize among themselves to maintain consistency. This reduces the message complexity, but at the cost of some optimality.

4 Edge-to-Edge Pricing Scheme (EEP)

For flow f_ij, the Distributed-DCC framework provides an allowed capacity c_ij and an estimate of the total user budget b̂_ij at ingress i. The provider station at ingress i can use these two pieces of information to calculate the price. We propose a simple price formula that balances the supply c_ij and the demand b̂_ij:

    p_ij = b̂_ij / c_ij    (3)

Ingress i also uses the total estimated network capacity C to calculate the V_max contract parameter defined in (1). A simple method is V_max = C · T, where T is the contract length. This allows all the available capacity to be contracted by a single flow, which amounts to a loose admission control; a more conservative admission control could be used to calculate V_max.

We now prove the optimality of (3) for a single-bottleneck network; we skip the proof for a multi-bottleneck network for space considerations. We model user i's utility with the well-known [10] function u_i(x) = w_i log(x), where x is the bandwidth given to the user and w_i is user i's budget (i.e. willingness-to-pay); Wang and Schulzrinne introduced a more complex version in [12]. Suppose p_i is the price advertised to user i. Then user i will maximize his surplus S_i by contracting for x_i = w_i / p_i.

The provider can now figure out what price to advertise to each user by maximizing the social welfare W = S + R, where S is the total user surplus and R is the provider revenue. Let K(x) = kx be the (linear) cost of providing x amount of capacity to a user, where k is a positive constant. The social welfare is then

    W = Σ_{i=1}^{n} [u_i(x_i) - k·x_i]

We maximize W subject to Σ_i x_i ≤ C, where C is the total available capacity. To maximize W, all the available capacity must be given to the users, since their utilities are strictly increasing. The Lagrangian of the system and the resulting allocation are:

    W = Σ_{i=1}^{n} [u_i(x_i) - k·x_i] + λ (Σ_{i=1}^{n} x_i - C)

    x_j = (w_j / Σ_{i=1}^{n} w_i) · C,    j = 1..n    (4)

This result shows that welfare maximization of the described system is achieved by allocating capacity to the users in proportion to their budgets w_i relative to the total user budget. Since a user contracts for x_i = w_i / p_i when advertised a price p_i, the optimum price p* for the provider to advertise is

    p* = p_i = (Σ_{i=1}^{n} w_i) / C

i.e. the ratio of the total user budget to the available capacity. So, the provider should charge the same price to users of the same bottleneck route, and EEP does exactly that for an edge-to-edge route.
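
A small sketch of the EEP computation and the loose V_max rule, followed by a numerical check of the single-bottleneck optimum p* using the budgets of the experiment in Section 5.1; the function names are ours.

```python
def eep_price(b_hat_ij, c_ij):
    """EEP price from (3): balance estimated demand (budget) against allowed capacity."""
    return b_hat_ij / c_ij

def max_volume(total_capacity_C, contract_length_T):
    """Loose admission control from Section 4: V_max = C * T."""
    return total_capacity_C * contract_length_T

# Check of p* = (sum of budgets) / C for one bottleneck: budgets 10, 20, 30 and
# C = 10 Mb/s give p* = 6 $/Mb, and w_i / p* recovers the proportional split.
budgets, C = [10.0, 20.0, 30.0], 10.0
p_star = sum(budgets) / C
print(p_star, [round(w / p_star, 2) for w in budgets])   # 6.0 [1.67, 3.33, 5.0]
```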

5 Simulation Experiments and Results

We present ns [17] simulation experiments of EEP on a single-bottleneck and a multi-bottleneck topology. Our goal is to illustrate the fairness and stability properties of the framework. The single-bottleneck topology is shown in Figure 3-a and the multi-bottleneck topology in Figure 3-b; the white nodes are edge nodes and the gray nodes are interior nodes. To ease understanding of the experiments, each user sends its traffic to a separate egress. For the multi-bottleneck topology, one user sends through all the bottlenecks (the long flow) and is crossed by the others. Bottleneck links have a capacity of 10 Mb/s and all other links 15 Mb/s. The propagation delay on each link is 5 ms, and users send UDP traffic with an average packet size of 1000 B. Interior nodes mark packets when their local queue exceeds 30 packets; in the multi-bottleneck topology they increment a header field instead of just marking. Buffer sizes are assumed infinite.

Fig. 3. Topologies for the Distributed-DCC experiments: (a) single-bottleneck network, (b) multi-bottleneck network.

Each user flow maximizes its surplus by contracting for b/p amount of capacity, where b is its budget and p is the price. b is randomized according to a truncated-Normal distribution with mean b̄; we refer to this mean as the flow's budget. Ingresses send budget estimates to egresses at every observation interval, and LPS sends information to ingresses at every LPS interval. Contracting takes place every 4 s, the observation interval is 0.8 s, and the LPS interval is 0.16 s. The parameter k̂ is set to 25 (see Section 3.4), Δĉ is set to 1 packet (i.e. 1000 B), the initial value of ĉ_ij for each flow f_ij is 0.1 Mb/s, and β is set to 0.95.
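
For reference, the settings listed above can be gathered in one place. This is only a restatement of the values in the text; the key names are our own.

```python
# Simulation settings from Section 5 (values from the text, key names ours).
SIMULATION_PARAMS = {
    "bottleneck_capacity_mbps": 10,
    "other_link_capacity_mbps": 15,
    "link_delay_ms": 5,
    "mean_packet_size_bytes": 1000,
    "marking_queue_threshold_pkts": 30,
    "contract_length_s": 4.0,
    "observation_interval_s": 0.8,
    "lps_interval_s": 0.16,
    "k_hat": 25,
    "delta_c_hat_bytes": 1000,    # capacity-estimate increase step (1 packet)
    "initial_c_hat_mbps": 0.1,
    "beta": 0.95,
}
```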

5.1 Experiment on the Single-Bottleneck Topology

We run a simulation experiment for EEP on the single-bottleneck topology of Figure 3-a. There are 3 users with budgets of 10, 20, and 30 for users 1, 2, and 3, respectively. The total simulation time is 15000 s. Initially only user 1 is active, and every 5000 s one of the other users becomes active.

Figure 4-a shows the flow rates averaged over 200 contract periods. The flows share the bottleneck capacity almost in proportion to their budgets; the distortion in the rate allocation is caused by the assumptions made by the generic edge-to-edge capacity allocation algorithm (see Section 3.4). Figure 4-b shows the average prices charged to the flows over 200 contract periods. As new users join, EEP increases the price to balance supply and demand.

Figure 4-c shows the bottleneck queue size. Notice that the queue size peaks transiently at the times when new users become active; otherwise, the queue size is controlled reasonably and the system is stable. The reason for the transient peaks is that the parameter V_max is not restricted, which lets a newly joining flow contract for much more than the available capacity. Also, the average utilization of the bottleneck link was more than 90%.

5.2 Experiments on the Multi-Bottleneck Topology

On a multi-bottleneck network, we would like to illustrate two properties:

- Property 1: provision of various types of fairness in rate allocation by changing the fairness coefficient α (see Section 3.5);
- Property 2: adaptiveness of the capacity allocation algorithm (see Section 3.4).

To illustrate Property 1, we run a series of experiments for EEP with different α values. We use a larger version of the topology shown in Figure 3-b, with 10 users and 9 bottleneck links. The total simulation time is 10000 s. Initially only the long flow is active; after every 1000 s, one of the other cross flows becomes active. So, as time passes, the number of bottlenecks in the system increases. We are interested in the long flow's rate, since it is the flow that causes more congestion cost than the others.

Figure 4-d shows the average rate of the long flow versus the number of bottlenecks in the system. As expected, the long flow gets less capacity as α increases. When α = 0, the rate allocation to flows is max-min fair; when α = 1, the rate allocation follows the proportionally fair rate allocation. This variation in fairness is achieved by advertising different prices to the flows. Figure 4-e shows the average price advertised to the long flow as the number of bottlenecks in the system increases. The price advertised to the long flow increases with the number of bottlenecks, and as α increases, the framework becomes more responsive to the long flow by increasing its price more sharply.

Finally, to illustrate Property 2, we ran an experiment on the topology of Figure 3-b with small changes. We increased the capacity of the bottleneck at node D from 10 Mb/s to 15 Mb/s. Initially, all the flows have an equal budget of 10 units. The total simulation time is 30000 s. Between times 10000 s and 20000 s, the budget of flow 1 is temporarily increased to 20 units. α is set to 0. All other parameters are exactly the same as in the single-bottleneck experiment of the previous section.

Figure 4-f shows the flow rates averaged over 200 contracting periods. Until time 10000 s, flows 0, 1, and 2 share the bottleneck capacities equally, giving a max-min fair allocation because α is 0. Flow 3's rate is larger, however, because the bottleneck at node D has extra capacity; this is made possible by the freedom the capacity allocation algorithm gives to individual flows (see Section 3.4). Between times 10000 s and 20000 s, flow 1 gets a step increase in its rate because of the step increase in its budget, and as a result flow 0 gets a step decrease in its rate. Flows 2 and 3 adapt themselves to the new situation and pick up a step increase in their rates, utilizing the extra capacity left over from the reduction in flow 0's rate. After time 20000 s, the flows return to their original rates, illustrating the adaptiveness of the framework.

6 Summary

In this paper, we presented a new framework, Distributed-DCC, for congestion pricing in a single diff-serv domain. Distributed-DCC can provide a contracting framework based on short-term contracts between a multimedia (or any other elastic, adaptive) application and the service provider. Contracting improves QoS if appropriate admission control and provisioning techniques are used; in this paper, we focused on the pricing issues.

Fig. 4. (a) Averaged flow rates, (b) prices of flows ($/Mb), and (c) bottleneck queue length (packets) for the single-bottleneck EEP experiment; (d) rate of the long flow, (e) price advertised to the long flow, and (f) averaged flow rates for the EEP experiments on the multi-bottleneck topology.

The main contribution of this paper is to develop an easy-to-implement congestion pricing architecture which provides flexibility in rate allocation.

We investigated fairness issues within Distributed-DCC and illustrated ways of achieving a range of fairness types (from max-min to proportional) through congestion pricing under certain conditions. The fact that it is possible to achieve various fairness types within a single framework is very encouraging. We also developed a pricing scheme, EEP, within the Distributed-DCC framework and presented several simulation experiments. Future work should include the investigation of issues related to running Distributed-DCC over multiple diff-serv domains. The framework should also be supported by admission control techniques that tune the contract parameter V_max and address minimum QoS settings in SLAs.

References

1. J. K. MacKie-Mason and H. R. Varian, "Pricing the Internet," B. Kahin and J. Keller (eds.), 1993.
2. X. Wang and H. Schulzrinne, "An integrated resource negotiation, pricing, and QoS adaptation framework for multimedia applications," IEEE Journal on Selected Areas in Communications, vol. 18, 2000.
3. A. Bouch and M. A. Sasse, "Why value is everything?: A user-centered approach to Internet quality of service and pricing," in Proceedings of IWQoS, 2001.
4. M. Yuksel, S. Kalyanaraman, and B. Sikdar, "Effect of pricing intervals on congestion-sensitivity of network service prices," Tech. Rep. ECSE-NET-2002-1, Rensselaer Polytechnic Institute, ECSE Networks Lab, 2002.
5. S. Blake et al., "An architecture for Differentiated Services," RFC 2475, 1998.
6. R. Singh, M. Yuksel, S. Kalyanaraman, and T. Ravichandran, "A comparative evaluation of Internet pricing models: Smart market and dynamic capacity contracting," in Proceedings of the Workshop on Information Technologies and Systems (WITS), 2000.
7. A. M. Odlyzko, "Internet pricing and history of communications," Tech. Rep., AT&T Labs, 2000.
8. I. Ch. Paschalidis and J. N. Tsitsiklis, "Congestion-dependent pricing of network services," IEEE/ACM Transactions on Networking, vol. 8, no. 2, 2000.
9. A. Gupta, D. O. Stahl, and A. B. Whinston, "Priority pricing of Integrated Services networks," L. McKnight and J. Bailey (eds.), MIT Press, 1997.
10. F. P. Kelly, A. K. Maulloo, and D. K. H. Tan, "Rate control in communication networks: Shadow prices, proportional fairness and stability," Journal of the Operational Research Society, vol. 49, pp. 237-252, 1998.
11. N. Semret, R. R.-F. Liao, A. T. Campbell, and A. A. Lazar, "Market pricing of differentiated Internet services," in Proceedings of IWQoS, 1999, pp. 184-193.
12. X. Wang and H. Schulzrinne, "Pricing network resources for adaptive applications in a Differentiated Services network," in Proceedings of INFOCOM, 2001.
13. A. M. Odlyzko, "A modest proposal for preventing Internet congestion," Tech. Rep., AT&T Labs, 1997.
14. D. Clark, "Internet cost allocation and pricing," L. McKnight and J. Bailey (eds.), MIT Press, 1997.
15. R. Cocchi, S. Shenker, D. Estrin, and L. Zhang, "Pricing in computer networks: Motivation, formulation and example," IEEE/ACM Transactions on Networking, vol. 1, December 1993.
16. M. Yuksel and S. Kalyanaraman, "Simulating the Smart Market pricing scheme on Differentiated Services architecture," in Proceedings of CNDS, part of SCS Western Multi-Conference, 2001.
17. "UCB/LBNL/VINT network simulator - ns (version 2)," http://www-mash.cs.berkeley.edu/ns, 1997.