



CITY-WIDE SIGNAL STRENGTH MAPS: PREDICTION WITH RANDOM FORESTS
E. ALIMPERTIS∗, A. MARKOPOULOU∗, C. T. BUTTS∗, K. PSOUNIS+

UNIVERSITY OF CALIFORNIA, IRVINE ∗ UNIV. OF SOUTHERN CALIFORNIA+

MOBILE IS KING

Figure 1: Exponential Global Mobile Data Traffic Growth, Source 1.

Figure 2: More People with Mobile than Running Water, Source 1.

Figure 3: However, all of us have experienced: Poor Performance and Failed Calls.

1. Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2017-2022

SIGNAL STRENGTH MAPS OVERVIEW

• Mobile Signal (Coverage) Maps by Users’ Mobiles as Sensors!

• ✓ Large Scale, ✓ Long Periods.

• However: Measurements are ✗ sparse, ✗ inadequate, ✗ expensive.

• Cellular Providers' Signal Maps Data:

1. Themselves (wardriving, privacy concerns).

2. Mobile Analytics Companies ($$), e.g., OpenSignal, Tutela, RootMetrics.

Figure 4: T-Mobile LTE Map of UCI, Collected by Ourselves [1].

GOALS AND CONTRIBUTIONS

Goal: How to predict missing values in space, time, and other features?
Benefits and Contributions: Cheaper & More Accurate Maps for:

1. Users: Find Best Network.
2. Cellular Carriers:

(a) Monitor their and competitors’ Cellular Network.

(b) Network Management and Upgrades (e.g., Deploy more cells).

(c) SDN/SON e.g., network selection etc.

3. Mobile Analytics Companies: Reduce Operational Costs (e.g., AWS costs).

SIGNAL STRENGTH MAPS PRIOR WORK

Features Setup: Environment, Scale and Data

Criteria: Spatial | Time | Device | Network-Environment Agnostic | City-Wide | No Expensive LiDAR Data (✗ = criterion not met)

Log-Distance Path-Loss (LDPL) [2]: ✗ ✗ ✗
COST-231 / WINNER I-II / Ray Tracing: ✗ ✗ ✗
Geostatistics, SpecSense [3]: ✗ ✗ ✗ ✗
BCS [4]: ✗ ✗ ✗
RAIK-DNNs [5]: ✗ ✗ ✗ ✗
Our Work: Random Forests (no ✗)

Table 1: Signal Maps Approaches Compared with Our Work.

RANDOM FORESTS (RFs) LTE RSRP PREDICTION

• LDPL RSRP modeling: $P^{(t)}_{cID}(\vec{l}_j) = P^{(t)}_0 - 10\, n_j \log_{10}\!\left(\|\vec{l}_{BS} - \vec{l}_j\|_2 / d_0\right) + w^{(t)}_j$.

• RSRP Prediction: $\hat{P} \sim \mathcal{N}(\mathrm{RFs}_{\mu}(\mathbf{x}),\, \sigma^2_{\mathbf{x}})$, where $P_j$ is the target and $\mathbf{x}_j$ the feature vector.

• Random Forests (RFs) are an ensemble of multiple decision trees.

• RSRP Predictors:

1. RFs_{x,y}: Spatial Only (localization in [6]).

2. RFs_{x,y,t}: Spatiotemporal Features.

3. RFs_{all}: All Features.

• For each measurement $P_j$ we consider the full set of features (a minimal training sketch follows this list):

• Features $\mathbf{x}_j$: $\mathbf{x}^{full}_j = (l^x_j,\, l^y_j,\, d,\, h,\, cID,\, dev,\, out,\, \|\vec{l}_{BS} - \vec{l}_j\|_2,\, freq_{dl})$

1. Location, (l^x_j, l^y_j).

2. Time Features, tj = (d, h). RSRP Variance Is Time Dependent.

3. Cell-ID, cID. RSRP is defined per serving cell.

4. Device Hardware Type, dev. RSRP calculation differs across devices due to hardware differences.

5. Downlink Carrier Frequency, freq_dl. Radio propagation depends on freq_dl.

6. Outdoors, out. Inferred from the GPS velocity reported by Android's API.

7. Distance between UE and BS, ||l_BS - l_j||_2.
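To make this setup concrete, here is a minimal sketch (not the authors' code) of training an RFs_{all}-style predictor with scikit-learn's RandomForestRegressor on a matrix laid out like x_full above. The synthetic data, the integer encodings for cID and dev, and the hyperparameter values (borrowed from Table 5) are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N = 5_000  # synthetic stand-in for real crowdsourced measurements

# Columns mirror x_full: (l_x, l_y, day, hour, cID, dev, out, ||l_BS - l_j||_2, freq_dl).
# cID and dev are integer-coded categories; the trees split on them directly.
X = np.column_stack([
    rng.uniform(-117.85, -117.82, N),   # l_x (longitude), synthetic
    rng.uniform(33.64, 33.66, N),       # l_y (latitude), synthetic
    rng.integers(0, 7, N),              # d: day of week
    rng.integers(0, 24, N),             # h: hour of day
    rng.integers(0, 25, N),             # cID (integer-coded cell id)
    rng.integers(0, 10, N),             # dev (integer-coded device type)
    rng.integers(0, 2, N),              # out: outdoor indicator
    rng.uniform(10, 2_000, N),          # distance UE-BS in meters
    rng.choice([1950.0, 2132.5], N),    # freq_dl (MHz), illustrative values
])
y = rng.normal(-100, 10, N)             # synthetic RSRP targets in dBm

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rfs_all = RandomForestRegressor(n_estimators=100, max_depth=30, random_state=0)
rfs_all.fit(X_tr, y_tr)
rsrp_hat = rfs_all.predict(X_te)        # predicted RSRP (dBm) on the hold-out set
rmse = np.sqrt(np.mean((rsrp_hat - y_te) ** 2))
print(f"RMSE on synthetic hold-out: {rmse:.2f} dB")

Dropping the time and context columns from X reproduces the RFs_{x,y} and RFs_{x,y,t} variants above.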

Why RFs for Data-Driven Prediction?

1. RFs inherently consider all features x; geostatistics [3] uses only spatial features.

2. RFs automatically identify areas with spatially (and temporally) correlated RSRP, i.e., similar wireless propagation characteristics.

[Figure 5 map: longitude/latitude axes (deg.); markers for the Base Station (BS), a.k.a. cell tower, and the RSRP measurements; color scale for the RSRP (dBm) prediction, roughly -120 to -70 dBm.]

Figure 5: Example of decision boundaries chosen by RFs_{x,y} for Campus cell x306.

DATASETS

Campus (Source: Ourselves [1])
  Period: 02/10/17 - 06/18/17.
  Area: University campus, ≈ 3 km².
  Measurements: LTE KPIs: RSRP, [RSRQ]. Context: GPS location, timestamp, dev, cid. Features: x = (l^x_j, l^y_j, d, h, dev, out, ||l_BS - l_j||_2).
  Characteristics: No. cells = 25; No. measurements ≈ 180K; density (N/m²) per cell: 0.01 - 0.66 (Table 3); overall density: 0.06.

NYC & LA (Source: Mobile Analytics Company)
  Period: 09/01/17 - 11/30/17.
  Areas: NYC metropolitan ≈ 300 km²; LA metropolitan ≈ 1600 km².
  Measurements: LTE KPIs: RSRP, [RSRQ, CQI]. Context: GPS location, timestamp, dev, cid, EARFCN. Features: x = (l^x_j, l^y_j, d, h, cid, dev, out, ||l_BS - l_j||_2, freq_dl).
  Characteristics: NYC: No. measurements ≈ 4.2M, No. cells ≈ 88K, overall density ≈ 0.014 N/m²; LA: No. measurements ≈ 6.7M, No. cells ≈ 111K, overall density ≈ 0.0042 N/m².

Table 2: Overview of Signal Maps Datasets used in this study.

                          NYC Manhattan Midtown (Fig. 8) | NYC E. Brooklyn | LA Southern
MN-Carrier                MNC-1     | MNC-1    | MNC-2
No. Measurements          ≈ 63K     | ≈ 104K   | ≈ 20K
Area (km²)                1.8       | 44.8     | 220
Data Density (N/m²)       ≈ 0.035   | ≈ 0.002  | ≈ 0.0001
No. Cells |C|             429       | 721      | 353
Cell Density (|C|/km²)    238.3     | 16.1     | 1.6

Table 3: NYC and LA datasets: LTE TAs Examples.

DATASETS EXAMPLES AND FEATURE IMPORTANCE

[Map: MNCarrier-1 LTE RSRP (dBm), TAC xx640, Cell-ID x204; RSRP (dBm) legend; scale bar 250 m.]

Figure 6: Campus example cell x204: high density (0.66), low dispersion (325).

[Map: MNCarrier-1 LTE RSRP (dBm), TAC xx640, Cell-ID x355; RSRP (dBm) legend; scale bar 250 m.]

Figure 7: Campus example cell x355: small density (0.12), more dispersed data (573).

Figure 8: NYC: Manhattan Midtown LTE TA.

Figure 9: NYC: zooming in on Manhattan Midtown (Times Square) for some of the available cells (different color per cID).

Feature Importance

[Bar charts of MDI (Mean Decrease Impurity) feature importance over the RFs features.
Feat. Importance Campus - CID x204: features d, h, l_y, l_x, ||l_BS - l_j||_2, dev, out; MDI up to ≈ 0.30.
Feat. Importance Campus - CID x355: features l_x, l_y, ||l_BS - l_j||_2, h, dev, d, out; MDI up to ≈ 0.6.]
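The MDI importances plotted above can be read off a fitted scikit-learn forest via its feature_importances_ attribute. A minimal sketch on synthetic stand-in data, with column names only assumed to follow x_full:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
names = ["l_x", "l_y", "d", "h", "cID", "dev", "out", "dist_BS", "freq_dl"]
X = rng.normal(size=(2_000, len(names)))                  # synthetic stand-in features
y = 5 * X[:, 0] - 3 * X[:, 7] + rng.normal(size=2_000)    # toy RSRP-like target

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# feature_importances_ holds the Mean Decrease in Impurity (MDI), normalized to sum to 1
for name, mdi in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:8s} {mdi:.3f}")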

(1) Model-Based (Wireless Propagation): (a) LDPL; (b) LDPL-kNN (heterogeneous n_j).
(2) Geospatial Interpolation: (a) OK, Ordinary Kriging; (b) OKD, OK with Detrending.
(3) Random Forests (RFs): (a) RFs_{x,y}; (b) RFs_{x,y,t}; (c) RFs_{all}.

Table 4: Evaluation: Methods For Comparison.
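For the model-based baseline in (1), here is a minimal sketch of fitting the homogeneous LDPL model P = P0 - 10 n log10(d/d0) + w by least squares on (distance, RSRP) pairs. The synthetic measurements and d0 = 1 m are assumptions; LDPL-kNN would repeat such a fit locally around each prediction point to obtain heterogeneous n_j.

import numpy as np

rng = np.random.default_rng(0)
d0 = 1.0                                   # reference distance (m), assumed
d = rng.uniform(10, 2_000, 1_000)          # UE-BS distances (m), synthetic
true_p0, true_n = -40.0, 3.2               # illustrative ground-truth parameters
rsrp = true_p0 - 10 * true_n * np.log10(d / d0) + rng.normal(0, 4, d.size)

# Linear least squares in the regressor g = -10*log10(d/d0): rsrp = P0 + n * g
g = -10 * np.log10(d / d0)
A = np.column_stack([np.ones_like(g), g])
(p0_hat, n_hat), *_ = np.linalg.lstsq(A, rsrp, rcond=None)
print(f"P0 ≈ {p0_hat:.1f} dBm, path-loss exponent n ≈ {n_hat:.2f}")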

RFs Setup: Model per LTE TA; Model per cID.
RFs hyperparameters: n_trees = {20, 100}; max_depth = {20, 30}.

Table 5: RFs Setup and Hyperparameters.
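A sketch of sweeping the hyperparameter grid of Table 5 with scikit-learn's GridSearchCV; the synthetic feature matrix and the RMSE scorer are illustrative assumptions, not the authors' tuning procedure.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 9))            # synthetic stand-in for the x_full matrix
y = rng.normal(-100, 10, 2_000)            # synthetic RSRP targets (dBm)

param_grid = {"n_estimators": [20, 100], "max_depth": [20, 30]}   # values from Table 5
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid,
                      scoring="neg_root_mean_squared_error", cv=5)
search.fit(X, y)
print(search.best_params_, f"CV RMSE ≈ {-search.best_score_:.2f} dB")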

COMPARING PREDICTORS PER CELL

Cell characteristics and RMSE (dB) per predictor. Columns: cID | N | N/m² | SDD | E[P] | σ² | LDPL_hom | LDPL_kNN | OK | OKD | RFs_{x,y} | RFs_{x,y,t} | RFs_{all}.

x914 | 3215  | 0.007 | 791 | -94.5  | 96.3  | 13.3 | 3.47 | 3.59 | 2.28 | 3.43 | 1.71 | 1.67
x034 | 1564  | 0.010 | 441 | -101.2 | 337.5 | 19.5 | 7.82 | 7.44 | 5.12 | 7.56 | 3.82 | 3.84
x901 | 16051 | 0.162 | 355 | -107.9 | 82.3  | 8.9  | 4.60 | 4.72 | 3.04 | 4.54 | 1.73 | 1.66
x204 | 55566 | 0.666 | 325 | -96.0  | 23.9  | 6.9  | 3.84 | 3.85 | 2.99 | 3.83 | 2.30 | 2.27
x922 | 3996  | 0.107 | 218 | -102.7 | 29.5  | 5.6  | 3.10 | 3.16 | 2.01 | 3.10 | 1.92 | 1.82
x902 | 34193 | 0.187 | 481 | -111.5 | 8.1   | 21.0 | 2.60 | 2.47 | 1.64 | 2.50 | 1.37 | 1.37
x470 | 7699  | 0.034 | 533 | -107.3 | 16.9  | 24.8 | 3.64 | 2.73 | 1.87 | 2.78 | 1.26 | 1.26
x915 | 4733  | 0.042 | 376 | -110.6 | 203.9 | 14.3 | 7.54 | 7.39 | 4.25 | 7.31 | 3.29 | 3.15
x808 | 12153 | 0.035 | 666 | -105.1 | 7.7   | 4.40 | 2.41 | 2.42 | 1.60 | 2.34 | 1.75 | 1.59
x460 | 4077  | 0.040 | 361 | -88.0  | 32.8  | 11.2 | 2.35 | 2.28 | 1.56 | 2.31 | 1.84 | 1.84
x306 | 4076  | 0.011 | 701 | -99.2  | 133.3 | 18.3 | 4.85 | 4.30 | 2.80 | 3.94 | 3.10 | 3.06
x355 | 30084 | 0.116 | 573 | -94.3  | 42.6  | 9.3  | 2.42 | 2.31 | 1.85 | 2.26 | 1.79 | 1.79

UCI NETWORKING GROUP WEBPAGE

Networking Group@UCI: athinagroup.eng.uci.edu

MODEL GRANULARITY: cID vs. LTE TA
Question: At What Level of Granularity Should We Train Our RFs Models? (A comparison sketch follows Figures 10-12 below.)

[Bar charts: RMSE (dB) for RFs_{x,y} and RFs_{all}, comparing a Global RFs Model (cID as a feature) against a Unique RFs Model per cID.]

Figure 10: MNC-1, Manhattan Midtown (urban).

Figure 11: MNC-1, East Brooklyn (suburban).

Figure 12: MNC-2, Southern LA (suburban).
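One way to frame the comparison in Figures 10-12: train a single global forest with cID as an extra feature versus a separate forest per cID, then compare hold-out RMSE. A minimal sketch under synthetic data and hypothetical spatial features, not the authors' evaluation code:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N, n_cells = 6_000, 20
cid = rng.integers(0, n_cells, N)
X_spatial = rng.normal(size=(N, 2))                             # (l_x, l_y), synthetic
y = -95 + 3 * cid - 5 * X_spatial[:, 0] + rng.normal(0, 3, N)   # toy RSRP (dBm)

X_glob = np.column_stack([X_spatial, cid])                      # cID as an extra feature
Xg_tr, Xg_te, y_tr, y_te = train_test_split(X_glob, y, test_size=0.3, random_state=0)

# (a) one global model with cID as a feature
glob = RandomForestRegressor(n_estimators=100, random_state=0).fit(Xg_tr, y_tr)
rmse_glob = np.sqrt(np.mean((glob.predict(Xg_te) - y_te) ** 2))

# (b) a separate model per cID, trained on spatial features only
errs = []
for c in range(n_cells):
    tr, te = Xg_tr[:, 2] == c, Xg_te[:, 2] == c
    if tr.sum() < 10 or te.sum() == 0:
        continue
    m = RandomForestRegressor(n_estimators=100, random_state=0).fit(Xg_tr[tr, :2], y_tr[tr])
    errs.append(m.predict(Xg_te[te, :2]) - y_te[te])
rmse_per_cid = np.sqrt(np.mean(np.concatenate(errs) ** 2))

print(f"global model: {rmse_glob:.2f} dB, per-cID models: {rmse_per_cid:.2f} dB")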

NUMBER OF MEASUREMENTS vs. RMSE TRADE-OFF
Tradeoffs: (1) 80% Less Data: Same Accuracy. (2) Same Data: 17% Relative Error Reduction (or 1 dB error reduction).

[Line plot: RMSE (dB), roughly 2.0 to 4.5, vs. Train Data Size (%), 10% to 90%, for LDPL-kNN, OK, OKD, RFs_{x,y}, RFs_{x,y,t}, and RFs_{all}.]

Figure 13: Campus dataset: RMSE vs. Training Size. Our methodology (RFs with more than spatial features, i.e., RFs_{x,y,t}, RFs_{all}) significantly improves the RMSE-cost tradeoff: it can reduce RMSE by 17% for the same number of measurements compared to the state-of-the-art data-driven predictor (OKD); or it can achieve the lowest error attainable by OKD (≈ 2.8 dB) with 10% instead of 90% of the measurements (an 80% reduction).
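The RMSE-versus-cost curve of Figure 13 can be outlined by sweeping the training fraction and recording hold-out RMSE. A sketch with synthetic data and a generic RandomForestRegressor standing in for the predictors compared in the poster:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 9))                            # synthetic stand-in feature matrix
y = 4 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 2, 5_000)    # toy RSRP-like target

for frac in (0.1, 0.3, 0.5, 0.7, 0.9):                     # training sizes as in Figure 13
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=frac, random_state=0)
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    rmse = np.sqrt(np.mean((rf.predict(X_te) - y_te) ** 2))
    print(f"train {int(frac * 100):2d}%  RMSE {rmse:.2f} dB")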

CITY-WIDE MAPS: NYC and LA Datasets EXPERIMENTS
CDFs of RMSE per cID for two different LTE TAs of the same major carrier, MNC-1.
Benefits: (1) RFs_{all} offer a 2 dB gain over the baselines at the 90th percentile.

(2) A 2 dB difference, roughly one signal bar in VoLTE, corresponds to a 1-5% call drop rate [7].

[CDF plots: P(E ≤ e) vs. RMSE (dB), 0 to 18 dB, for RFs_{x,y} (spatial features), OK (geostatistics), RFs_{all} (full features), and LDPL-kNN.]

Figure 14: MNC-1, NYC Manhattan Midtown (urban).

Figure 15: MNC-1, LA Southern (suburban).
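The CDFs of Figures 14-15 are empirical distributions of per-cID RMSE. A minimal sketch of computing such a CDF and its 90th percentile from an array of per-cell RMSE values (synthetic here):

import numpy as np

rng = np.random.default_rng(0)
rmse_per_cell = rng.gamma(shape=2.0, scale=1.5, size=400)    # synthetic per-cID RMSEs (dB)

# Empirical CDF: P(E <= e) evaluated at each observed RMSE value
e = np.sort(rmse_per_cell)
cdf = np.arange(1, e.size + 1) / e.size
p90 = np.interp(0.9, cdf, e)                                 # 90th-percentile RMSE
print(f"90th-percentile RMSE ≈ {p90:.2f} dB")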

REFERENCES

[1] E. Alimpertis and A. Markopoulou. A system for crowdsourcing passive mobile network measurements. In 14th USENIX NSDI '17, Posters Sessions, Boston, Massachusetts, USA, March 2017. USENIX Association.

[2] E. Alimpertis, N. Fasarakis-Hilliard, and A. Bletsas. Community RF Sensing for Source Localization. IEEE Wireless Commun. Lett., 3(4):393-396, Aug 2014.

[3] A. Chakraborty et al. SpecSense: Crowdsensing for efficient querying of spectrum occupancy. In Proc. of the IEEE INFOCOM, Atlanta, Georgia, USA, May 2017.

[4] S. He and K. G. Shin. Steering crowdsourced signal map construction via Bayesian compressive sensing. In Proc. of the IEEE INFOCOM '18, pages 1016-1024. IEEE, April 2018.

[5] R. Enami et al. RAIK: Regional analysis with geodata and crowdsourcing to infer key performance indicators. In Proc. of the IEEE WCNC '18, pages 1-6, April 2018.

[6] A. Ray et al. Localization of LTE measurement records with missing information. In Proc. of the IEEE INFOCOM, pages 1-9, San Francisco, CA, USA, April 2016.

[7] Y. J. Jia et al. Performance characterization and call reliability diagnosis support for voice over LTE. In Proc. of the ACM MobiCom, pages 452-463, 2015.

ACKNOWLEDGMENTS
This work was supported by NSF Awards 1649372, 1526736, 1444060, as well as by Networked Systems and CPCC at UC Irvine and the Broadcom Foundation Fellowship program.