Misha Bilenko, Principal Researcher, Microsoft at MLconf SEA - 5/01/15
TRANSCRIPT
Many Shades of Scale: Big Learning Beyond Big Data
Misha Bilenko
Principal Researcher
Microsoft Azure Machine Learning
ML ≥ More Data
What we see in production [Banko and Brill, 2001]
What we [used to] learn in school [Mooney, 1996]
Is training on more examples all there is to it?
Big Learning ≠ Learning(BigData)
• Big data: size → distributing storage and processing
• Big learning: scale bottlenecks in training and prediction
• Classic bottlenecks: bytes and cycles; large datasets → distribute training on larger hardware (FPGAs, GPUs, cores, clusters)
• Other scaling dimensions: features; components/people
Learning from Counts with DRACuLa: Distributed Robust Algorithm for Count-based Learning
joint work with Chris Meek (MSR), Wenhan Wang, Pete Luferenko (Azure ML)
Scaling to many Features
Learning with relational data
P(click | ad, context, user)
adid = 1010054353, adText = K2 ski sale!, adURL = www.k2.com/sale
userid = 0xb49129827048dd9b, IP = 131.107.65.14
query = powder skis, qCategories = {skiing, outdoor gear}
#users ~ 10^9, #queries ~ 10^9+, #ads ~ 10^7, #ad × query ~ 10^10+
• Information retrieval
  • Advertising, recommending, search: item, page/query, user
• Transaction classification
  • Payment fraud: transaction, product, user
  • Email spam: message, sender, recipient
  • Intrusion detection: session, system, user
  • IoT: device, location
Learning with relational data
P(click | user, context, ad)
adid: 1010054353, adText: Fall ski sale!, adURL: www.k2.com/sale
userid: 0xb49129827048dd9b, IP: 131.107.65.14
query: powder skis, qCategories: {skiing, outdoor gear}
• Problem: representing high-cardinality attributes as features
  • Scalable: to billions of attribute values
  • Efficient: ~10^5+ predictions/sec/node
  • Flexible: for a variety of downstream learners
  • Adaptive: to distribution change
• Standard approaches: binary features, hashing
• What everyone should use in industry: learning with counts
  • Formalization and generalization
Standard approach 1: binary (one-hot, indicator)
Attributes are mapped to indices based on lookup tables
- Not scalable: cannot support high-cardinality attributes
- Not efficient: large value-index dictionary must be retained
- Not flexible: only linear learners are practical
- Not adaptive: doesn't support drift in attribute values
[Figure: concatenated one-hot vectors, one block per attribute space (#userIPs, #ads, #queries, #queries × #ads), with indices idx_u(131.107.65.14), idx_a(k2.com), idx_q(powder skis), idx(powder skis, k2.com)]
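The lookup-table scheme can be sketched in a few lines of Python (names here are illustrative, not from any specific library); the value-to-index dictionary must retain every value seen in training, which is exactly what breaks scalability and adaptivity:

```python
# Sketch of one-hot encoding via lookup tables.
def build_index(values):
    # dictionary grows with attribute cardinality: not scalable
    return {v: i for i, v in enumerate(dict.fromkeys(values))}

def one_hot(value, index):
    vec = [0] * len(index)
    vec[index[value]] = 1  # KeyError on unseen values: not adaptive to drift
    return vec

idx = build_index(["powder skis", "k2.com", "131.107.65.14"])
vec = one_hot("k2.com", idx)  # [0, 1, 0]
```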
Standard approach 1+: feature hashing
Attributes are mapped to indices via hashing: φ(x_i) = hash(x_i) mod b
• Collisions are rare; dot products unbiased
+ Scalable: no mapping tables
+ Efficient: low cost, preserves sparsity
- Not flexible: only linear learners are practical
± Adaptive: new values ok, no temporal effects
[Figure: a single hashed feature vector of dimension b ~ 10^7; values such as powder skis, k2.com, and 131.107.65.14 hash to indices, with occasional collisions sharing a slot]
[Moody '89, Tarjan-Skadron '05, Weinberger+ '08]
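A minimal sketch of the hashing trick (illustrative function names; a stable hash is used because Python's built-in hash() is randomized per process):

```python
import hashlib

# Feature hashing: map each attribute string to hash(x) mod b, no lookup table.
def hashed_features(attributes, b=2**20):
    vec = {}  # sparse representation: index -> value
    for a in attributes:
        idx = int(hashlib.md5(a.encode("utf-8")).hexdigest(), 16) % b
        vec[idx] = vec.get(idx, 0) + 1  # collisions simply add up
    return vec

x = hashed_features(["query=powder skis", "adURL=k2.com", "IP=131.107.65.14"])
```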
Learning with counts
• Features are per-label counts [+odds] [+backoff]
φ = [N+  N-  log(N+)-log(N-)  IsRest]
• log(N+)-log(N-) = log P(+)/P(-): the log-odds / Naïve Bayes estimate
• N+, N-: indicators of confidence of the naïve estimate
• IsRest: indicator of back-off vs. "real count"
[Figure: each attribute value is looked up in its count table: Counts(131.107.65.14), Counts(k2.com), Counts(powder skis), Counts(powder skis, k2.com), yielding features φ(Counts(IP)), φ(Counts(ad)), φ(Counts(query)), φ(Counts(query, ad))]

IP            | N+     | N-
173.194.33.9  | 46964  | 993424
87.250.251.11 | 31     | 843
131.107.65.14 | 12     | 430
…             | …      | …
REST          | 745623 | 13964931
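The per-value featurization can be sketched as follows (illustrative names; the small additive smoothing term is my addition, to guard against zero counts, and is not on the slide):

```python
import math

# Count-based featurization: look up per-label counts for one attribute value,
# backing off to the REST bin for unseen values, and emit
# [N+, N-, log-odds, IsRest]. Table contents taken from the IP table above.
ip_counts = {
    "173.194.33.9":  (46964, 993424),
    "87.250.251.11": (31, 843),
    "131.107.65.14": (12, 430),
    "REST":          (745623, 13964931),
}

def count_features(value, table, alpha=1.0):
    is_rest = value not in table
    n_pos, n_neg = table["REST"] if is_rest else table[value]
    # smoothed log-odds: log(N+ + alpha) - log(N- + alpha)
    log_odds = math.log(n_pos + alpha) - math.log(n_neg + alpha)
    return [n_pos, n_neg, log_odds, int(is_rest)]

phi = count_features("131.107.65.14", ip_counts)
```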
Learning with counts
• Features are per-label counts [+odds] [+backoff]
φ = [N+  N-  log(N+)-log(N-)  IsRest]
+ Scalable: "head" in memory + tail in backoff; or: count-min sketch
+ Efficient: low cost, low dimensionality
+ Flexible: low dimensionality works well with non-linear learners
+ Adaptive: new values easily added, back-off for infrequent values, temporal counts
Backoff is a pain. Count-min sketches to the rescue! [Cormode-Muthukrishnan '04]
Intuition: correct for collisions by using multiple hashes
• M is a d × w array of counts
• Count: for each hash function j, increment M[j][h_j(i)] (update time: O(d))
• Featurize: min over j of M[j][h_j(i)] (estimation time: O(d))
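A minimal count-min sketch implementing the two operations above (illustrative class name; a stable hash stands in for the pairwise-independent hash family of the paper):

```python
import hashlib

# Count-min sketch: d hash rows of width w; update increments one cell per
# row, estimate takes the min across rows to correct for (upward-biased)
# collisions. Both operations are O(d).
class CountMinSketch:
    def __init__(self, d=4, w=2**16):
        self.d, self.w = d, w
        self.M = [[0] * w for _ in range(d)]

    def _h(self, j, item):
        raw = hashlib.md5(f"{j}:{item}".encode("utf-8")).hexdigest()
        return int(raw, 16) % self.w

    def update(self, item, count=1):
        for j in range(self.d):
            self.M[j][self._h(j, item)] += count

    def estimate(self, item):
        # never underestimates the true count
        return min(self.M[j][self._h(j, item)] for j in range(self.d))

cms = CountMinSketch()
cms.update("powder skis", 5)
cms.update("facebook", 3)
```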
Learning from counts: aggregation
Aggregate Count(y, bin(x)) for different bin functions
• Standard MapReduce
• Bin function: any projection
• Backoff options: "tail bin", hashing, hierarchical (shrinkage)
IP            | N+     | N-
173.194.33.9  | 46964  | 993424
87.250.251.11 | 31     | 843
131.253.13.32 | 12     | 430
…             | …      | …
REST          | 745623 | 13964931

query       | N+      | N-
facebook    | 281912  | 7957321
dozen roses | 32791   | 640964
…           | …       | …
REST        | 6321789 | 43477252

Query × AdId     | N+      | N-
facebook, ad1    | 54546   | 978964
facebook, ad2    | 232343  | 8431467
dozen roses, ad3 | 12973   | 430982
…                | …       | …
REST             | 4419312 | 52754683

Hierarchical backoff (IP[2]: first two octets)
IP[2]       | N+    | N-
173.194.*.* | 46964 | 993424
87.250.*.*  | 6341  | 91356
131.253.*.* | 75126 | 430826
…           | …     | …

[Timeline: counting runs continuously over data up to T_now]
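The counting step can be sketched single-node as a stand-in for the MapReduce job (illustrative names; the "tail bin" backoff folds infrequent values into REST):

```python
from collections import defaultdict

# Aggregate Count(y, bin(x)) over labeled examples, with tail-bin backoff.
def aggregate_counts(examples, bin_fn, min_count=2):
    counts = defaultdict(lambda: [0, 0])        # value -> [N+, N-]
    for x, y in examples:
        counts[bin_fn(x)][0 if y > 0 else 1] += 1
    table = {"REST": [0, 0]}
    for value, (n_pos, n_neg) in counts.items():
        if n_pos + n_neg >= min_count:
            table[value] = [n_pos, n_neg]
        else:                                    # infrequent: fold into REST
            table["REST"][0] += n_pos
            table["REST"][1] += n_neg
    return table

examples = [({"ip": "1.2.3.4"}, 1), ({"ip": "1.2.3.4"}, -1),
            ({"ip": "5.6.7.8"}, -1)]
t = aggregate_counts(examples, bin_fn=lambda x: x["ip"])
```

The bin function is any projection of the example: an attribute, a cross (e.g. query × adid), or a coarsened value such as an IP prefix for hierarchical backoff.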
Learning from counts: combiner training

[Figure: count tables (IP, query, Query × AdId) aggregated up to T_now feed combiner training; the predictor's inputs are the aggregated count features (N+, N-, ln N+ - ln N-, IsBackoff) plus the original numeric features]

Train non-linear model on count-based features
• Counts, transforms, lookup properties
• Additional features can be injected
Prediction with counts

[Figure: count tables (IP, query, URL × Country) are updated continuously through T_now, while the combiner trained at T_train consumes the aggregated count features (N+, N-, ln N+ - ln N-, IsBackoff) plus the original numeric features]

• Counts are updated continuously
• Combiner re-training infrequent
Where did it come from?
Li et al. 2010
Pavlov et al. 2009
Lee et al. 1998
Yeh and Patt, 1991
Hillard et al. 2011
• De-facto standard in the online advertising industry
• Rediscovered by everyone who really cares about accuracy
Do we need to separate counting and training?
• Can we use the same data for both counting and featurization?
• Bad idea: leakage = count features contain labels → overfitting
  • Combiner dedicates capacity to decoding the example's label from its features
• Can we hold out each example's label during train-set featurization?
• Bad idea: leakage and bias
  • Illustration: two examples, same feature values, different labels (click and non-click)
  • Different representations are inconsistent and allow decoding the label
Counting → Train predictor

Example ID | Label | N+[a]    | N-[a]
1          | +     | n_a+ - 1 | n_a-
2          | -     | n_a+     | n_a- - 1
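The holdout leakage in the table can be shown in two lines of Python (illustrative names): two examples with the same attribute value but different labels receive different features, so the label is decodable:

```python
# Counts for a shared attribute value a over the train set.
n_pos, n_neg = 10, 10

def holdout_features(label):
    # hold this example's own label out of the counts before featurizing
    return (n_pos - 1, n_neg) if label > 0 else (n_pos, n_neg - 1)

f_click, f_nonclick = holdout_features(+1), holdout_features(-1)
# identical examples, different representations: the label leaks
```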
Solution via differential privacy
• What is leakage? Revealing information about any individual label
• Formally: a count table C_T is ε-leakage-proof if it yields the same features for all x under D and D′ = D \ (x_i, y_i)
• Theorem: adding noise sampled from Laplace(k/ε) makes counts ε-leakage-proof
• Typically 1 ≤ k ≤ 100
• Concretely: N+ = N+ + LaplaceRand(0, k/ε), N- = N- + LaplaceRand(0, k/ε)
• In practice: LaplaceRand(0, 1) is sufficient
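A sketch of the noising step (illustrative names; a Laplace sample is generated as the difference of two exponential samples, and the RNG is seeded for reproducibility):

```python
import random

_rng = random.Random(0)

def laplace_rand(scale):
    # difference of two independent Exp(1/scale) draws is Laplace(0, scale)
    return _rng.expovariate(1.0 / scale) - _rng.expovariate(1.0 / scale)

def leakage_proof_counts(n_pos, n_neg, k=1, eps=1.0):
    # add Laplace noise of scale k/eps to each per-label count
    scale = k / eps
    return n_pos + laplace_rand(scale), n_neg + laplace_rand(scale)

noisy_pos, noisy_neg = leakage_proof_counts(12, 430)
```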
Learning from counts: why it works
• State-of-the-art accuracy
• Easy to implement on standard clusters
• Monitorable and debuggable
  • Temporal changes easy to monitor
  • Easy emergency recovery (bot attacks, etc.)
  • Error debugging (which feature to blame)
• Modular (vs. monolithic)
  • Components: learners and count features
  • People: multiple feature/learner authors
Big Learning: Pipelines and Teams
Ravi: text features in R
Jim: matrix projections
Vera: sweeping boosted trees
Steph: count features on Hadoop
How to scale up Machine Learning to Parallel and Distributed Data Scientists?
AzureML
• Cloud-hosted, graphical environment for creating, training, evaluating, sharing, and deploying machine learning models
• Supports versioning and collaboration
• Dozens of ML algorithms, extensible via R and Python
Learning with Counts in Azure ML
Criteo 1TB dataset
Counting: an hour on an HDInsight Hadoop cluster
Training: minutes in AzureML Studio
Deployment: one click to an RRS service
Maximizing Utilization: Keeping it Asynchronous
• Macro-level: concurrently executing pipelines
• Micro-level: asynchronous optimization (with overwriting updates)
  • Hogwild SGD [Recht-Ré], Downpour SGD [Google Brain]
  • Parameter Server [Smola et al.]
  • GraphLab [Guestrin et al.]
  • SA-SDCA [Tran, Hosseini, Xiao, Finley, B.]
Semi-Asynchronous SDCA: state-of-the-art linear learning
• SDCA: Stochastic Dual Coordinate Ascent [Shalev-Shwartz & Zhang]
• Plot: SGD marries SVM and they have a beautiful baby
• Algorithm: for each example, update the example's α_i, then re-estimate weights
• Let's make it asynchronous, Hogwild-style!
• Problem: primal and dual diverge
• Solution: separate thread for primal-dual synchronization
• Taking it out-of-memory: block pseudo-random data loading
SGD update:
w_{t+1} ← w_t - γ_t (λ w_t - y_i ℓ'(w_t · x_i) x_i)

SDCA update:
α_i^(t) ← α_i^(t-1) + Δα_i
w_t ← w_{t-1} + (Δα_i / λn) x_i
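The SDCA update can be sketched as a toy, single-threaded loop for the hinge loss (my minimal version of the standard Shalev-Shwartz & Zhang procedure, not the semi-asynchronous production variant; all names are illustrative):

```python
import random

# SDCA for L2-regularized hinge loss: per example, take the closed-form dual
# coordinate step for alpha_i, then keep the primal
# w = (1/(lam*n)) * sum_i alpha_i * x_i in sync incrementally.
def sdca(X, y, lam=0.1, epochs=50, seed=0):
    n, dim = len(X), len(X[0])
    alpha = [0.0] * n
    w = [0.0] * dim
    rng = random.Random(seed)
    for _ in range(epochs):
        for i in rng.sample(range(n), n):     # random example order
            xi, yi = X[i], y[i]
            margin = yi * sum(wk * xk for wk, xk in zip(w, xi))
            norm2 = sum(xk * xk for xk in xi) or 1.0
            # closed-form dual step for hinge loss, clipped to [0, 1]
            delta = yi * max(0.0, min(1.0,
                    (1 - margin) * lam * n / norm2 + alpha[i] * yi)) - alpha[i]
            alpha[i] += delta
            w = [wk + delta * xk / (lam * n) for wk, xk in zip(w, xi)]
    return w

X = [[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]]
y = [1, 1, -1, -1]   # separable on the first coordinate
w = sdca(X, y)
```

The Hogwild-style variant runs the inner loop from many threads with overwriting updates; the divergence problem and the synchronization thread mentioned above address keeping w consistent with the alphas.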
In closing: Big Learning = Streetfighting
⒠Big features are resource-hungry: learning with counts, projections⦠⒠Make them distributed and easy to compute/monitor
β’ Big learners are resource-hungryβ’ Parallelize them (preferably asynchronously)
β’ Big pipelines are resource-hungry: authored by many humansβ’ Run them a collaborative cloud environment