
Context-aware RRM for Opportunistic Content Delivery in Cellular Networks

Pietro Lungaro, Zary Segall and Jens Zander
Wireless@KTH, The Royal Institute of Technology
Electrum 229, 164 40 Kista, Sweden
Email: [email protected], [email protected], [email protected]

Abstract—To opportunistically exploit the excess resources available at different times and locations, we propose in this paper to include context-aware information in the RRM schemes adopted in cellular networks. Both user-centric and network-centric approaches to content pre-fetching are described and evaluated for different network dimensioning and service scenarios. The obtained results show that, for different levels of accuracy in predicting future content requests, operator-controlled pre-fetching outperforms the user-controlled approach, and that the former can also bring robustness and significant cost reductions compared to “classical” RRM schemes: the achieved level of performance can be mapped into a three-dimensional gain region where fewer BSs are needed, more users can be served, or larger files can be delivered per user, while maintaining a given level of user-perceived service quality. Finally, considering also the deployment of content caches at the BSs, we show that the impact of backhaul limitations on experienced delays can be further mitigated when users have similar content interests.

Index Terms—Context-aware, wireless networks, mobile content delivery, user behavior, predictive context.

I. BACKGROUND

Throughout the last decade, significant research and economic efforts have been invested in developing advanced cellular systems capable of supporting a large variety of wireless data services. Currently, while voice and SMS still secure the largest share of revenues, mobile data services dominate in volume while bringing in lower revenues. This “revenue gap” can be explained by considering both the “flat rate” pricing policy adopted by the majority of mobile operators and the success of mobile applications and multimedia for smartphones. To prevent “data hogs” from monopolizing the wireless resources with their traffic [1], flat-rate subscriptions typically have a maximum data cap (a few GBs per month). While this cap is probably not large enough to substitute the Internet experience achievable with fixed connections, it is more than adequate to support a significant number of minutes of VoIP calls per day, which poses an additional threat to future operators’ revenues [2].

In order to deliver a real flat rate for data services, following the “everytime-everywhere” paradigm adopted for voice, operators may need to deploy networks capable of delivering much higher cell throughput. However, since the cost structure of a “classical” cellular infrastructure scales poorly with the bandwidth provided to the end users, wide-area support for higher data rates would require a substantial increase in Base Station (BS) density. This, in turn, would increase both the CAPEX and the OPEX of the network ([3],[4]) and may force operators to charge higher prices. Since the average revenue per user is not anticipated to increase with data volumes, this calls for models that exploit the characteristics of the services and of user behavior to optimize content delivery. Further, in a scenario where wireless access is gradually becoming a commodity, reducing production costs seems to be one of the key objectives for market survival.

Most currently deployed networks are dimensioned considering the “peak hour” traffic demand. However, throughout the day the instantaneous loads at the BSs differ substantially from the expected peak values.

Opportunistically utilizing these “excess” resources might be an effective way to improve utilization and lower “production” costs. This insight has recently generated a significant amount of research in the area of dynamic spectrum management, where several proposals have suggested smart mechanisms for dynamically re-assigning unused bandwidth to ongoing transmissions within the same system, or to other systems experiencing resource limitations.

An alternative approach for utilizing the instantaneous availability of resources, which has received attention in the current literature, is based on “pre-fetching”. This content delivery method decouples the time when user-requested content is delivered and stored in the terminals from the time when it is accessed and “consumed” by the end users. In order to be successful, pre-fetching requires accurate predictions of future user requests; however, it is still unclear how these requirements depend on different network and service conditions (BS and user densities, file sizes). In particular, it is not obvious what system performance to expect in a multiuser cellular environment when traffic generated by pre-fetching is mixed with “on-demand” content requests.

Different schemes for delivering content through pre-fetching in cellular networks have been proposed and evaluated in the literature (e.g., [5]-[9]). In some cases ([5],[7]), local storage of information at the terminals has been proposed to reduce the consumption of wireless resources for frequently accessed data items. In other investigations, pre-fetching solutions have been suggested for reducing the effects of channel quality fluctuations ([6]) and improving the performance of (streaming) protocols in wireless environments ([8],[9]).

Exploiting mobility information for content pre-fetching in heterogeneous networks has been investigated in [10]-[11], where pre-fetching was mainly performed within WLAN coverage. To prevent pre-fetching terminals from consuming the network’s resources too extensively, thresholds on the required probability of future access were included in the strategies for selecting candidate data objects for pre-fetching.

Providing a mix of pre-fetching and “on-demand” traffic in the same network might introduce some system inefficiencies. The main network costs associated with the delivery of “predicted” content are the increased interference in the system, which could damage (some of) the ongoing downlink sessions of active users, and the additional delay experienced by active users if their traffic is not prioritized over the pre-fetching traffic served in their cells. On the other hand, the costs introduced by pre-fetching can be significantly lower than those required for serving the same users’ requests in an “on-demand” fashion at future time instants, e.g., when cells are more congested. Thus, whenever a pre-fetched item is not consumed by the end-users, the already sustained costs cannot be recovered, potentially leading to lower performance.

In this work we propose to include context information in the radio resource management policies used when scheduling downlink transmissions directed to users of an HSDPA network. Context-awareness is here intended as information concerning the multimedia items that are most likely to be accessed by a given user at a given time and location. A crucial design aspect considered in our work is the selection of the “entity” that has access to the aforementioned context information. Two different approaches are considered: in one, context is available only to software agents located in the user terminals; in the other, the core part of the context-aware system is controlled by the network operator, even though some “sensing” functionalities might still be distributed to the individual user terminals.

Since the effectiveness of context-aware schedulers depends on the accuracy of the predictions concerning future users’ content requests, in this paper we investigate the impact of different agent prediction capabilities on user and network performance. Moreover, we compare these results, for both proposed implementations, with the performance that can be achieved in the same network settings by classical “on-demand” resource schedulers such as C/I and proportional fair.

Following the current trends, in which the data rates achievable on downlink connections are expected to increase substantially (e.g., in LTE), a more limiting role is anticipated to be played by the dimensioning of the BSs’ backhauls, which are likely to become the bottleneck of the communication [12]. Thus, in combination with pre-fetching, our proposed resource management schemes (schedulers) also include “caching” at the BSs as an additional way to improve user-experienced performance in backhaul-limited scenarios. In particular, caching is very effective in saving transmission resources when multiple users access the same data objects from the same geographical locations. In this work, for different degrees of similarity in the content preferences among users, and different cases of backhaul dimensioning, we quantify the additional performance improvements that can be expected by introducing memory caches in the BSs.

In some previous works (e.g., [13]), pre-fetching has been shown to be effective in hiding the sparsity of the network from the end users’ perception. In this paper, we further investigate the potential coverage extension gains introduced by pre-fetching in cellular networks. At the same time, we also express these potential gains in terms of the additional number of users that can be served and/or the larger data objects that can be delivered, for a given network configuration, while meeting a predefined level of user-perceived quality.

The paper is organized as follows: in Section II the different context-aware resource schedulers are presented, while in Section III the adopted models are described. The selected performance measures and investigation settings are described in Sections IV and V, respectively. The results of the different studies are presented in Section VI, while the key findings are summarized in Section VII.

II. CONTEXT-AWARE SCHEDULERS

When context information is only available at individual user terminals, the pre-fetching operations are completely transparent to the network operator. This approach to content pre-fetching is here defined as “Over-The-Top” (OTT). In our proposed implementation, an idle terminal enters the pre-fetching mode with probability ppre in each time slot of duration Ts seconds. After that, every Wt slots the terminal agent evaluates the average downlink rate R̄d(Δt) achieved during the previous Δt = Wt · Ts seconds and compares it with a reference target rate R̂d. If R̄d(Δt) ≥ R̂d, the terminal remains in the pre-fetching mode for another Wt slots and updates its wake-up probability for pre-fetching according to

ppre[(n + 1)Δt] = min{cup · ppre(nΔt), 1}.   (1)

If instead R̄d(Δt) < R̂d, the terminal goes back to sleep mode and updates its pre-fetching probability as

ppre[(n + 1)Δt] = cdn · ppre(nΔt).   (2)
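As a minimal illustration, the multiplicative wake-up adaptation of (1)-(2) can be sketched as follows. Function and variable names are ours; the default coefficients cup = 1.1 and cdn = 0.9 are the values used later in Section VI.

```python
import random

def update_ppre(ppre, avg_rate, target_rate, c_up=1.1, c_dn=0.9):
    """One adaptation step of the wake-up probability, following (1)-(2):
    multiplicative increase (capped at 1) while the measured average rate
    meets the target, multiplicative decrease otherwise."""
    if avg_rate >= target_rate:
        return min(c_up * ppre, 1.0)   # Eq. (1): remain in pre-fetching mode
    return c_dn * ppre                 # Eq. (2): go back to sleep mode

def wakes_up(ppre):
    """An idle terminal enters pre-fetching mode with probability ppre."""
    return random.random() < ppre
```

The increase/decrease coefficients implicitly trade battery drainage against pre-fetching opportunity: ppre decays geometrically whenever the achieved rate falls below R̂d, so terminals quickly stop waking up in poorly served conditions.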

In these equations, the coefficients cup and cdn are introduced to adapt the pre-fetching intensity to the actual network conditions. One of the key aspects implicitly captured by these equations is the fact that keeping radio interfaces switched on is highly expensive in terms of battery drainage, and should therefore be avoided if a minimum level of service quality cannot be met. Since in this approach the operator is not aware of the probability pij with which user j will request and consume item i, all requests are treated equally and scheduled according to a C/I scheduler, without any distinction between pre-fetching traffic (associated with pij < 1) and user-initiated sessions (the so-called “on-demand” traffic, with pij = 1)¹.

In the case in which context information is available at the operator, the operator also coordinates the pre-fetching operations, introducing a policy for discriminating between pre-fetching and on-demand traffic. In our proposed scheme, called “BS Polling with Random Wakeup” (BSP,RW), the transition between sleeping and pre-fetching mode can be triggered either by an explicit “poll” message sent by the BSs, or by a random wake-up with probability ppre.

In the first case, a polling message is sent by the operator whenever a BS has an excess of resources (e.g., available power for downlink transmission in HSDPA) at any given point in time. The reception of this message triggers the “wake-up” for pre-fetching of the sleeping terminals. The agents whose terminals are connected to that BS respond by communicating the pij values associated with the item they want to be served with. Once all reports are received, the operator selects the terminals to serve based on the effective rate associated with their requests. This is computed at time t as

Reff(i, j, t) = pij · Rj(t),   (3)

where Rj(t) is the instantaneous downlink rate for user j.

When instead some “sleeping” terminals are located in the coverage of a BS serving some active user requests, these can still try to pre-fetch multimedia content, with wake-up probability ppre. This probability is updated in a manner similar to the OTT case; the main difference is that here the resource allocation performed by the operator is also based on the effective rate Reff(i, j, t), with consumption probability pij = 1 associated with the requests of all active users.
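The operator's selection step can be sketched as follows. The data layout (a list of reported tuples) is an illustrative assumption of ours, while the selection criterion is the effective rate of Eq. (3); on-demand requests carry consumption probability 1 and hence naturally take precedence over speculative pre-fetching at comparable link rates.

```python
def effective_rate(p_consume, rate):
    """Eq. (3): effective rate of a request with consumption probability p."""
    return p_consume * rate

def select_terminal(requests):
    """Pick the request with the highest effective rate.
    `requests` is a list of (terminal_id, p_consume, downlink_rate) tuples
    reported by the agents connected to the polling BS."""
    return max(requests, key=lambda r: effective_rate(r[1], r[2]))[0]
```

For example, a pre-fetching request with p = 0.3 on a 10 Mbps link loses to an on-demand request on a 4 Mbps link (effective rates 3 vs. 4), but a well-predicted request with p = 0.9 would win.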

III. MODELS

This section describes the different models adopted in our investigations. In particular, we highlight the content request model and the agent prediction capabilities, the physical layer model, and both the backbone resource management and the caching schemes considered in this work.

A. Content request model and agent prediction capabilities

¹ Any item i, which is originally requested with an a priori probability pij, changes its probability value to pij = 1 after being requested by user j.

In order to test the effectiveness of context-based schedulers in providing multimedia content, we consider a case in which all users in the network are interested in a given set of Ni items. The request probability associated with a given item, with popularity rank k within the user population, is modeled according to a Zipf distribution with density function

ppop(k, αpopz, Ni) = (1/k^αpopz) / (Σ_{n=1}^{Ni} 1/n^αpopz),   (4)

where the parameter αpopz is introduced to modulate the request probability density associated with the different items.

At a given time instant, we assume that the context agents associated with each individual user (either at the network or at the terminal side) are uncertain over the Nu items among which that user will perform his next content request. These Nu items are a subset of the complete item set of size Ni, obtained through realizations of Zipf-distributed random variables over the complete set. Starting from the most important item and progressing in decreasing ranking order, through each realization we obtain one of the Nu candidate items (all different) for each individual user.

Whenever a user “wakes up” and performs a content request, the specific item selection is again obtained using a Zipf distribution, but this time over the set of Nu objects specific to that user, with αusz as the exponent of the distribution². While αpopz represents the skewness of interests among the users of a given population, the parameter αusz is instead introduced to model the prediction uncertainty associated with the software agents. The extreme case in which users have fully non-overlapping content interests can be obtained by letting Ni → +∞. In that case, the probability that two users have an overlapping object in their wanted data sets (Nu items), even in different ranking positions, tends to 0. This setting can be used to model a scenario in which the “long tail” dominates the consumption of mobile multimedia (full personalization), or a case in which caching at the BS is not possible. Conversely, the case in which users have fully overlapping (ID) content interests can be obtained by setting, for all users, identical items in all ranking positions of their individual sets of Nu wanted items. This setting can be used to model a scenario in which all users have a subscription with a content provider who needs to distribute his Nu latest multimedia files in a given order. For the purpose of this study, we do not investigate methods to extract information from usage patterns and to predict future user behavior; we simply postulate the aforementioned distributions and consider this prediction problem solved elsewhere (e.g., see [14]-[16]).
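One possible reading of this two-level Zipf model can be sketched as follows; the rejection-style construction of the candidate set is our interpretation of the “all different” realizations described above, and all function names are illustrative.

```python
import random

def zipf_pmf(alpha, n):
    """Zipf probability mass over ranks 1..n, as in Eq. (4)."""
    weights = [1.0 / (k ** alpha) for k in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def draw_candidate_set(alpha_pop, n_items, n_u, rng=random):
    """Draw the Nu (all different) candidate items of one user by repeated
    Zipf realizations (exponent alpha_pop) over the complete set of Ni items."""
    pmf = zipf_pmf(alpha_pop, n_items)
    items = list(range(1, n_items + 1))
    chosen = []
    while len(chosen) < n_u:
        pick = rng.choices(items, weights=pmf)[0]
        if pick not in chosen:          # keep the Nu candidates all different
            chosen.append(pick)
    return chosen

def draw_request(candidates, alpha_us, rng=random):
    """Select the actually requested item with a second Zipf law
    (exponent alpha_us) over the user's own ranked candidate list."""
    pmf = zipf_pmf(alpha_us, len(candidates))
    return rng.choices(candidates, weights=pmf)[0]
```

With a large αusz (e.g., 1.8, the “high predictability” case), the mass concentrates on the top-ranked candidates and the agent's prediction is accurate; a smaller αusz (0.9, “medium predictability”) spreads the requests across the candidate set.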

B. Physical layer and backhaul resource assignment

The data rate Rkj achievable on the physical layer by user j when served by BS k is computed as

Rkj = min{ ηw · W · log2(1 + Γkj/ηγ), R̂max },   (5)

where Γkj is the received SINR, W the bandwidth available for the communication, ηw the spectral efficiency coefficient, and ηγ an offset factor for the SINR (the “SINR gap”). In order to be effectively connected with a BS, a minimum SNR γ̂ needs to be achieved. Any terminal j experiencing an SNR γkj < γ̂, ∀k, is considered in “outage”, while if there are multiple BSs satisfying the SNR requirement, the one providing the highest value is selected for communication. The load at the different BSs is not considered when making the access selection decision. Each BS (node B) k is connected to a backhaul of capacity Rkback Mbps.

² In this case pus(k, αusz, Nu) for user i coincides with the previously introduced pij, for the value of k that corresponds to the item with identity i.

The backhaul plays a crucial role, since in some configurations it might limit the capacity of the communication, acting as a bottleneck for the achievable downlink data rates. In general, the end rate perceived by the user can be computed as

rkj = min{ Rkj, Rkjback },   (6)

where Rkjback represents the share of the total backbone capacity at BS k (Rkback) assigned for serving the requests of user j.
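A minimal sketch of the rate computation in (5)-(6), with illustrative parameter defaults taken from Table I (W = 5 MHz, ηw = 0.7, R̂max = 14.4 Mbps, and a 3 dB SINR offset, i.e., roughly a factor 2 in linear scale); all names are our own.

```python
import math

def phy_rate(sinr_linear, bandwidth_hz=5e6, eta_w=0.7,
             eta_gamma=2.0, r_max=14.4e6):
    """Eq. (5): Shannon-like rate in bps, with a spectral-efficiency
    coefficient, an SINR offset ("SINR gap"), and a hard cap at the
    maximum HSDPA rate."""
    shannon = eta_w * bandwidth_hz * math.log2(1.0 + sinr_linear / eta_gamma)
    return min(shannon, r_max)

def end_user_rate(phy, backhaul_share):
    """Eq. (6): the perceived rate is limited by the weaker of the
    radio link and the backhaul share assigned to the user."""
    return min(phy, backhaul_share)
```

The second function is where a tightly dimensioned backhaul (e.g., 6 Mbps shared per cell) overrides even excellent radio conditions, which is the regime the caching study targets.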

Given a set of users Nku, all simultaneously served by BS k, two different backhaul resource management schemes have been considered, one for each context-aware approach.

In the case of OTT, we consider a scheme in which the shares of backhaul assigned to the various users depend on the link quality experienced on their individual wireless links. This means that terminals with a better channel will also receive more bits at the BS. The backbone share assigned to terminal j is computed as

Rkjback = ( Γkj / Σ_{l=1}^{Nku} Γkl ) · Rkback,  ∀j ∈ Nku.   (7)

Instead, in the case of BSP,RW, a service differentiation between pre-fetching and on-demand traffic is also performed in the policy assigning backhaul shares, by taking into account the probability of consumption associated with the requested items. The corresponding backhaul shares are:

Rkjback = [ (pij · Rkj) / Σ_{l=1}^{Nku} (pil · Rkl) ] · Rkback,  ∀j ∈ Nku.   (8)
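The two backhaul sharing policies (7)-(8) can be sketched as list-based helpers (names ours); in both cases the shares are proportional weights that sum to the backhaul capacity Rkback.

```python
def ott_shares(sinrs, r_back):
    """Eq. (7), OTT: split the backhaul capacity proportionally to the
    SINR of each simultaneously served terminal."""
    total = sum(sinrs)
    return [g / total * r_back for g in sinrs]

def bsp_rw_shares(p_consume, rates, r_back):
    """Eq. (8), BSP,RW: split the backhaul proportionally to the effective
    rates pij * Rkj, so that on-demand traffic (p = 1) is favored over
    speculative pre-fetching (p < 1) at equal link rates."""
    eff = [p * r for p, r in zip(p_consume, rates)]
    total = sum(eff)
    return [e / total * r_back for e in eff]
```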

C. Caching at the BSs

In order to improve performance when Rkback < R̂max, we also considered the adoption of caches at the BSs. Whenever a user terminal requests an item, a copy of the delivered file is stored in a memory location available at the serving BS. Once the item is placed in the cache, all subsequent accesses to the item from the same location do not require additional backhaul resources. In this way, larger shares of Rkback are available for serving other users in the cell, and the backhaul does not limit the downlink rates when serving user terminals that request content already cached at their BSs.

Whenever shares of a BS’s backhaul capacity are not allocated in a given time slot³, they are used for retrieving and locally storing “popular” content in the cache. The decision on which specific item to cache is taken considering the overall population access probability over the available item set (ppop). Starting from the most popular objects, the operator scans the BS cache in decreasing popularity rank until the first not completely stored item is identified.
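The background cache-filling rule can be sketched as a hypothetical helper; `cache_complete` maps item ids to a flag indicating whether the item is completely stored at the BS, and the input list is assumed ordered by decreasing ppop.

```python
def next_item_to_cache(popularity_ranked_ids, cache_complete):
    """Scan items in decreasing popularity order and return the first one
    not yet completely stored, to be fetched with the currently unused
    backhaul capacity; None if the whole ranked set is already cached."""
    for item_id in popularity_ranked_ids:
        if not cache_complete.get(item_id, False):
            return item_id
    return None
```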

IV. PERFORMANCE MEASURES

In order to establish the impact of the different context-aware scheduling schemes and architectures, we have selected performance measures that highlight both user service perception and network utilization. In particular, at the user side we have selected the delay for retrieving the requested data objects in the users’ terminals, while at the network side we have quantified the interference experienced by active users and the number of competing terminals per sector.

The delay (D) is in our work defined as the time, in seconds, elapsing between a user request and the complete delivery of the wanted item into his terminal’s memory. In particular, when requesting an object that has been entirely pre-fetched in his terminal, we assume that the user experiences zero delay, even though we are aware that there might be some memory access delay. As previously described, content pre-fetching introduces some additional interference in the network. In this work, we quantify the total interference power received at the terminals of active users while they are being served.

³ This could be the case when a BS has no connected users, or if all connected users are requesting cached content.

Finally, the average number of active users per sector represents a measure of the load experienced by the network under the different content distribution schemes. By adopting opportunistic content pre-fetching, some users are likely to receive in advance information that they might access at a later time. This means that, if content prediction is accurate, fewer active users per sector will compete for radio resources. In practical terms, this also implies that an operator could use this excess capacity for serving more users, or larger multimedia items (higher definition).

V. INVESTIGATION SETTINGS

TABLE I
SUMMARY OF MAIN PARAMETERS USED IN THE SIMULATIONS

Parameter                      Value
Cellular layout                Hexagonal, 3-sector sites
Simulated cells                19, with wraparound
Cell radius [m]                [300; 350; 400; 450]
Propagation model [dB]         L = 128.1 + 37.6 log10(R), R in km
Max power [W]                  P̂T = 20, 20% signaling
Carrier frequency [GHz]        fc = 2
Channel bandwidth [MHz]        W = 5
SNR target [dB]                γ̂ = 2.5
Spectral efficiency [bps/Hz]   ηw = 0.7
SINR offset [dB]               3
Noise figure [dB]              9
Shadow fading stdev [dB]       8
Downlink correlation           0.5
Fast fading                    White noise filtered through Bessel function
Max rate [Mbps]                R̂max = 14.4

The evaluation of the proposed schemes has been performed through extensive simulations of an HSDPA network. The basic system-level assumptions used in the simulations are summarized in Table I. A set of four different cell sizes has been selected for the investigations, with associated outage probabilities equal to Pout = [6%; 10%; 16%; 22%]. These different values of Rcell have been introduced to represent different stages of an HSDPA network roll-out, ranging from a high density of BSs to initial roll-out values. Three cases of backhaul capacity have been considered: 6 Mbps, 20 Mbps, and 50 Mbps, representing Low, Medium and High dimensioning values [17]. The investigated user densities range between 15 and 35 Users/Km². In each case the user requests of multimedia items have been modeled according to a Poisson process with an expected inter-arrival time of 15 minutes. While in all cases the agents have been uncertain over Nu = 100 items, three different item sizes have been studied: 10, 30 and 50 MBytes, representing different quality settings associated with YouTube-like videos. Two content predictability cases have been considered: high predictability (H), with αusz = 1.8, and medium predictability (M), with αusz = 0.9. Concerning the similarity of interests among users, most of the studies have been performed either for “fully non-overlapping” or for “fully overlapping” interests, while a specific evaluation of caching has been performed with αpopz = [0.9; 1.8] and Ni = [100; 1000; 10000] items.

After an initialization period of 10 minutes, to let the system reach a steady state, in each realization we monitored the performance of all the terminals in the system while serving the first 100 incoming user requests until their completion.

[Figure 1 plots the cdf Pr[D ≤ D̂] versus the delay threshold D̂ [s] (0-600 s) for the schemes C/I; C/I, C; OTT(H); OTT(M); BSP,RW(H); BSP,RW(M); BSP,RW(H), C(ID); BSP,RW(M), C(ID); and PF.]

Fig. 1. Cdf of the delay for Rback = 6 Mbps, λU = 15 Users/Km², Si = 50 MBytes and Rcell = 300 m.

VI. RESULTS

The results obtained in three different studies are presented in this section. We start by describing the user and network performance achievable by context-aware and "classical" schemes in a full-deployment scenario, and then quantify the different types of gains (served users, object size) that can be achieved with context-based schedulers for different BS densities. Finally, we describe the impact of caching when users have partially overlapping interests.

A. Full-deployment scenario

This study investigates the aforementioned performance indicators in a network exposed to medium load and with Rcell = 300m. A constant user density of λu = 15 Users/Km2 is considered, with users requesting multimedia items according to a Poisson process with exponential inter-arrivals of average 15 minutes. The content requested by the users is modeled as a file of constant size equal to 50 MBytes. For both terminal and operator controlled context-aware approaches, Nu = 100 items and two different cases for content predictability are considered: high predictability (H) with α_z^us = 1.8 and medium predictability (M) with α_z^us = 0.9. In both cases cup = 1.1, cdn = 0.9, and R̂ = 1 Mbps or 3 Mbps depending on whether R_back^k ≤ 6 Mbps or R_back^k > 6 Mbps, respectively. Ideal caching with fully overlapping interests (C(ID)) is assumed, and three different values of R_back^k have been considered: 6, 20 or 50 Mbps.

The cumulative distribution functions of the delay D are shown in Figure 1 for Rback = 6 Mbps, and in Figure 2 for Rback = 20 Mbps. In Figure 1 we notice that an "over-the-top" pre-fetching scheme can degrade the performance for a significant number of users compared to both the C/I and PF resource schedulers. While with both high (H) and medium (M) agent prediction capabilities only a small portion of users achieves "instantaneous gratification" (a cache hit), a large share of the user population (33% with OTT(H) and 93% with OTT(M)) experiences increased delays. This is mainly because with OTT, in many cases, resources are allocated, and additional interference is introduced into the system, to pre-fetch content that has a low selection probability. In turn, users experience longer delays and remain longer in an "active state". This effect is also confirmed by the results on the average number of active users per sector, shown in Table II for both Rback = 6 and 20 Mbps.


[Figure: cdf curves for the same set of schemes as in Figure 1. Axes: D̂ [s] vs. Pr[D ≤ D̂].]
Fig. 2. Cdf of the delay for Rback = 20 Mbps, λU = 15 Users/Km2, Si = 50 MBytes and Rcell = 300m.

Even though some users have zero delay and are therefore not competing for resources, among the context-aware schedulers the highest average number of active users per sector is obtained with both versions (H and M) of OTT.

Still in Figure 1, the large distance between the points in the distribution associated with the C/I, C(ID) scheme and those of the "classical" C/I shows that the introduction of caches can lead to significant performance improvements, even without pre-fetching. The dual case, in which pre-fetching is not combined with caching, is represented by the curves corresponding to the BSP,RW scheme for both the H and M prediction cases. In particular, with high accuracy, pre-fetching can lead to a cache hit ratio of approximately 30%. Even if in the (M) case instantaneous gratification is obtained only in 5% of the cases, the overall delay experienced by the users is significantly smaller than with both reference cases. Finally, by considering both ideal caching and pre-fetching we obtain an upper bound on performance which delivers almost 55% cache hit probability, potentially leading to unprecedented levels of user satisfaction.

For Rback = 20 Mbps, the backbone capacity rarely limits the performance: for it to be limiting, at least 2 active users per Node B are required, an event with medium/low probability given the moderate average numbers of active users per sector shown in Table II. This is also confirmed by the almost perfect overlap between the cdf curves corresponding to BSP,RW and BSP,RW,C, and between those of C/I and C/I,C, as shown in Figure 2. There we can further notice that, also for medium-speed backhauls, OTT can have potentially negative effects on the delay, especially with only medium prediction capabilities at the terminal agents.

TABLE II
EXPECTED NUMBER OF COMPETING USERS PER SECTOR
Rcell = 300m, λu = 15 Users/Km2

Strategy            | Rback = 6 Mbps | Rback = 20 Mbps
C/I                 | 0.6558         | 0.3257
C/I, C              | 0.4401         | 0.2912
OTT (H)             | 0.6273         | 0.1856
OTT (M)             | 0.9150         | 0.4033
BSP, RW (H)         | 0.4356         | 0.1304
BSP, RW (M)         | 0.5393         | 0.2816
BSP, RW (H), C(ID)  | 0.2031         | 0.1430
BSP, RW (M), C(ID)  | 0.4285         | 0.3028
PF                  | 0.6272         | 0.3263

[Figure: iso-curves on the plane λU [Users/Km2] vs. File Size [MBytes], for Rcell = {300, 350, 400, 450}m, each with BSP,RW(H) and BSP,RW(M).]
Fig. 3. Iso-curves representing the operating points, on the plane [λu, File Size], that provide identical E[D] between the reference C/I scheme and the proposed BSP,RW schedulers. The results are parametric in Rcell, and obtained for Rback = 6 Mbps.

[Figure: iso-curves on the plane λU [Users/Km2] vs. File Size [MBytes], for Rcell = {300, 350, 400, 450}m, each with BSP,RW(H) and BSP,RW(M).]
Fig. 4. Iso-curves representing the operating points, on the plane [λu, File Size], that provide identical E[D] between the reference C/I scheme and the proposed BSP,RW schedulers. The results are parametric in Rcell, and obtained for Rback = 20 Mbps.

B. Tri-dimensional gains of context-aware schedulers

In Figures 3 and 4 we show the tri-dimensional gain space associated with context-aware schedulers, for Rback = 6 Mbps and 20 Mbps respectively. The results are obtained by considering the expected delay (E[D]) achieved in a reference system operating under a C/I scheduler, serving λu = 15 Users/Km2 with items of size equal to 30 MBytes and characterized by a high BS density (Rcell = 300m). The context-aware strategies considered in this study are BSP,RW(H) and BSP,RW(M). For each one, we collected the E[D] values obtained at different operating points located within a tri-dimensional space described by cell size, item size and user density. In particular, our investigation has been delimited by Rcell = [300, 350, 400, 450]m, item sizes varying between 10 and 50 MBytes, and user densities ranging between 15 and 35 Users/Km2. The curves shown in Figures 3-4 are essentially the projections on the plane {λu, File Size} of the points on the expected delay surfaces for which the BSP,RW schemes delivered E[D] values identical to that of the reference system (iso-curves).
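The iso-curve construction can be illustrated with a simple interpolation step: for a given cell size and user density, sample E[D] at several file sizes and find the size at which the scheme's delay matches the reference delay. This is only a sketch of one plausible way to extract such points; the delay values below are invented toy numbers, not taken from the paper.

```python
# For each (cell size, user density), find by linear interpolation the file
# size at which the scheme's expected delay equals the reference E[D].
def iso_point(file_sizes, delays, target_delay):
    """file_sizes ascending; delays assumed non-decreasing in file size.
    Returns the interpolated file size where delay == target_delay,
    or None if the target lies outside the sampled range."""
    for (s0, d0), (s1, d1) in zip(zip(file_sizes, delays),
                                  zip(file_sizes[1:], delays[1:])):
        if d0 <= target_delay <= d1:
            frac = (target_delay - d0) / (d1 - d0)
            return s0 + frac * (s1 - s0)
    return None

# Hypothetical E[D] samples [s] at file sizes 10..50 MBytes.
sizes = [10, 20, 30, 40, 50]
delays = [5.0, 9.0, 16.0, 28.0, 45.0]
ref_delay = 22.0  # hypothetical reference-system E[D]
print(iso_point(sizes, delays, ref_delay))  # → 35.0
```

Repeating this for each sampled (Rcell, λu) pair yields one point per iso-curve on the {λu, File Size} plane.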

For low backhaul capacity (Figure 3), only moderate gains, in terms of coverage extension, can be achieved, and exclusively for high prediction capabilities. This is due to the fact that the amount of bits that can be pre-fetched while a BS has excess resources does not provide a cache hit ratio large enough to compensate for the increased duration of the outage periods. However, when considering the same cell size as the reference system (300m), context-aware schedulers allow an operator either to serve up to 23 Users/Km2 or, for an identical user density as in the reference system, to provide each user with multimedia items as large as 42 MBytes, thus allowing video content of higher quality to be delivered.

[Figure: cdf curves for Diss; Full; and Ni = 100, 1K, 10K, each for α_z^pop = 1.8 and α_z^pop = 0.9. Axes: D̂ [s] vs. Pr[D ≤ D̂].]
Fig. 5. Cdf of D for different similarity of interests among the users, adopting the same agent and network settings considered in Figure 1.

Instead, when the backbone is not limiting the communication (Figure 4), it is also possible to convert the gains from context-aware schedulers into an increased coverage perception, allowing the cell sizes to be larger than 400m, provided that accurate content predictions can be made. This is an important result, which highlights the additional capability of context-based schedulers to provide effective means for cost-efficient network deployment and content delivery.

C. Caching with non-fully overlapping user interests

In order to understand the impact of different degrees of similarity in users' interests on the overall delay performance, we performed a specific investigation under the same system settings used in Section VI-A. The performance of the BSP,RW(H) scheme, when Rback = 6 Mbps and Rcell = 300m, is shown in Figure 5. Note that the curves labeled Diss and Full correspond respectively to the curves BSP,RW(H) and BSP,RW(H),C of Figure 1, and represent the two extreme cases in which users have completely dissimilar interests and fully overlapping interests. Three different cases for the total number of items in the system have been selected, corresponding to Ni = [100, 1000, 10000]. At the same time, we considered both medium and high values for the similarity of interests among users, with α_z^pop = [0.9, 1.8].

The results show that for a high level of similarity of interests among users the performance is not significantly affected by the size of the item set and remains close to the ideal caching performance (upper bound) previously described. For less focused interests, performance decreases with increasing Ni, and only asymptotically, for Ni → ∞, does it reach the same performance levels as the case without caches. While these results have been achieved considering only aggregated information on the "static" demand associated with the overall user population, we strongly believe that additional caching gains can be obtained by exploiting the finer granularity that could be provided by including location and request time information in the caching policies, as well as by grouping together sub-sets of users based on similarity of interests (e.g., through collaborative filtering).
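The sensitivity to Ni and α_z^pop has an intuitive back-of-the-envelope form: if a BS cache ends up holding roughly the most popular items, its hit ratio is simply the popularity mass of those items. The sketch below assumes a Zipf-like popularity and a hypothetical 50-slot cache; it is meant only to show the qualitative trend (hit ratio decaying with catalog size, much faster for flat popularity), not to reproduce the simulated curves.

```python
def top_c_hit_ratio(n_items: int, alpha: float, cache_slots: int) -> float:
    """Hit ratio of a cache that always holds the cache_slots most popular
    items, under Zipf-like demand Pr[item k] proportional to 1 / k^alpha."""
    weights = [1.0 / (k ** alpha) for k in range(1, n_items + 1)]
    return sum(weights[:cache_slots]) / sum(weights)

# Hypothetical 50-slot cache: focused interests (alpha = 1.8) keep the hit
# ratio high even for large catalogs, while for alpha = 0.9 it decays
# towards zero as the catalog grows.
for n in (100, 1_000, 10_000):
    print(n, round(top_c_hit_ratio(n, 1.8, 50), 3),
             round(top_c_hit_ratio(n, 0.9, 50), 3))
```

For α_z^pop > 1 the total popularity mass converges as n_items grows, so the top-50 mass stays bounded away from zero; for α_z^pop < 1 it diverges, and the hit ratio vanishes asymptotically, matching the Ni → ∞ behavior observed above.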

VII. CONCLUSION

To opportunistically exploit the excess resources available at different times and locations of a cellular network, we have proposed a set of RRM schemes that include context-aware information in their scheduling procedures. Both user and network centric approaches to content pre-fetching have been considered, and their performance evaluated with respect to different network configurations and different degrees of content prediction capabilities in software agents.

The results showed that whenever context information is only available at user terminals, and pre-fetching is therefore transparent to the network operators, user-experienced delays are significantly larger than with the operator centric approach. Moreover, in backhaul-limited scenarios, as well as for low content prediction capabilities, the user centric approach has also been shown to achieve larger delays than those experienced with "classical" RRM schemes, for a number of distribution percentiles. Furthermore, depending on the content prediction capabilities of the context servers and on the backhaul capacities, the gains achievable with network centric context-aware schedulers can also be mapped into a tri-dimensional gain region where up to 40% fewer BSs can be deployed, or up to 2.8 times more users can be served, or 2 times larger data objects can be provided to the end users, while maintaining the same user service perception as with the reference RRM schemes. Finally, considering an additional deployment of content caches at the BSs, we have also shown that user-experienced delays can be further reduced in backhaul-limited scenarios, provided that there is a certain similarity of content requests in the user population.

REFERENCES

[1] P. Thomasch and S. Orlofsky, Holiday bargain hunters jumpstart AT&T Web sales, article on the Reuters website, available at: http://www.reuters.com/article/idUSTRE5B84H220091209

[2] C. Yu and S. Corbett, Global Survey of Communications Executives Identifies Skype-like Services as Primary Cause of Declining Revenues, Oracle Press Release, 12th of March 2007.

[3] J. Zander, On the cost structure of future wideband wireless access, in Proceedings of VTC97, 1997.

[4] T. Giles et al., Cost drivers and deployment scenarios for future broadband wireless networks, VTC04-Spring, May 2004.

[5] D. Barbara, T. Imielinski, Sleepers and Workaholics: Caching Strategies in Mobile Environments, in Proceedings of SIGMOD94, 1994.

[6] S. Gitzenis, N. Bambos, Power-Controlled Data Prefetching/Caching in Wireless Packet Networks, in Proceedings of INFOCOM02, 2002.

[7] Y. Lin et al., Effects of Cache Mechanism on Wireless Data Access, IEEE Transactions on Wireless Communications, Vol. 2, November 2003.

[8] F. Fitzek, M. Reisslein, A Prefetching Protocol for Continuous Media Streaming in Wireless Environments, IEEE Journal on Selected Areas in Communications, Vol. 19, November 2001.

[9] J. Hu, G. Feng, K. Yeung, Hierarchical Cache Design for Enhancing TCP Over Heterogeneous Network with Wired and Wireless Links, IEEE Transactions on Wireless Communications, Vol. 2, March 2003.

[10] S. Drew and B. Liang, Mobility-aware web prefetching over heterogeneous wireless networks, in Proceedings of PIMRC04, Sept. 2004.

[11] B. Liang, S. Drew and D. Wang, Performance of multiuser network-aware prefetching in heterogeneous wireless systems, in Wireless Networks, Springer, Vol. 15, January 2009.

[12] Nokia Siemens Networks, Mobile backhaul - the power behind LTE, Executive Summary, 2009. Available at www.nokiasiemensnetworks.com

[13] J. Hultell, P. Lungaro, J. Zander, Service Provisioning with Ad-Hoc Deployed High-Speed Access Points in Urban Environments, in Proceedings of PIMRC05, 2005.

[14] A. Chen, Context-Aware Collaborative Filtering System: Predicting the User's Preference in the Ubiquitous Computing Environment, Location- and Context-Awareness, 2005, pp. 244-253.

[15] A.K. Dey, G.D. Abowd, Towards a better understanding of context and context-awareness, Proceedings of the Workshop on the What, Who, Where, When and How of Context-Awareness, ACM Press, New York.

[16] D. Ejigu et al., Semantic approach to context management and reasoning in ubiquitous context-aware systems, in Proceedings of ICDIM '07.

[17] Mobile broadband backhaul: Addressing the challenge, Ericsson Review, March 2008. Available at http://www.ericsson.com/ericsson/corpinfo/publications/review/2008 03/files/Backhaul.pdf