Unexpected Challenges in Large Scale Machine Learning

Charles Parker, BigML, Inc.


DESCRIPTION

Talk by Charles Parker (BigML) at BigMine'12 at KDD 2012. In machine learning, scale adds complexity. The most obvious consequence of scale is that data takes longer to process. At certain points, however, scale makes trivial operations costly, thus forcing us to re-evaluate algorithms in light of the complexity of those operations. Here, we will discuss one important way a general large-scale machine learning setting may differ from the standard supervised classification setting and show the results of some preliminary experiments highlighting this difference. The results suggest that there is potential for significant improvement beyond the obvious solutions.

TRANSCRIPT

Page 1

Unexpected Challenges in Large Scale Machine Learning

Charles Parker, BigML, Inc.

Page 2

Who Am I?

Ph.D. from Oregon State University, 2007

Four years with Eastman Kodak Research Labs

− Data mining

− Computer vision/image processing

Currently with BigML

− Developing a scalable, available, and beautiful platform for machine learning

− Launched private beta in March

− Still early days (nine employees in Europe/U.S.)

Page 3

Brief Summary

Introduce you to BigML

Review some of the recent research in the large-scale ML community

Pose some research questions that may not be on the Big Data radar

This is all very, very preliminary (comments appreciated)

Page 4

A Little Bit More about BigML

Right now, only decision trees (more to come)

Going for a wide range of users

Goals

− All resources can be created and retrieved via our REST API (see the sketch below)

  Programmatic model creation

  Downloadable, white-box models

− A compelling front-end interface

− Ease-of-use: as few clicks as possible; easy-to-understand visualizations

A brief demo
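
For concreteness, here is a minimal sketch (in Python) of what programmatic model creation against a REST API like BigML's might look like. The endpoint paths, the "username=...;api_key=..." auth style, and the response field names are illustrative assumptions, not the documented API.

    import requests

    # Hypothetical credentials and base URL (assumptions for illustration).
    AUTH = "username=alice;api_key=YOUR_API_KEY"
    BASE = "https://bigml.io"

    # 1. Upload a data source (e.g., a CSV file).
    with open("traffic.csv", "rb") as f:
        source = requests.post(f"{BASE}/source?{AUTH}", files={"file": f}).json()

    # 2. Build a dataset (the tabular representation) from the source.
    dataset = requests.post(f"{BASE}/dataset?{AUTH}",
                            json={"source": source["resource"]}).json()

    # 3. Train a decision-tree model on the dataset.
    model = requests.post(f"{BASE}/model?{AUTH}",
                          json={"dataset": dataset["resource"]}).json()

    # 4. Models are white-box resources: retrieve one and inspect it.
    tree = requests.get(f"{BASE}/{model['resource']}?{AUTH}").json()
    print(list(tree.keys()))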

Page 5

Benefits

It's all in the cloud

− Easy to share with others

− Can “deploy” the model anywhere

− Can trigger learning from anywhere (couch-based machine learning)

Learns at scale

− Up to 64 GB (and counting)

− No specialized hardware or software required

Page 6

Where's The Big?

Among our users, we find that very few have data greater than 100 MB

Why is this?

− Takes too long?

− Inadequate infrastructure?

− Don't have the right algorithms?

Maybe it's something else . . .

Page 7

Research Direction #1: Algorithms

Speed, speed, speed

− Langford's Vowpal Wabbit

− Pegasos

Parallelism

− Domingos, 2001

− Bekkerman's Tutorial at KDD '11

Page 8

Research Direction #2: Tools

Setting up clusters for large-scale, parallel execution of jobs

− Hadoop

− Storm

Languages allowing for hardware-independent specification of parallel algorithms

− Spark

− Scalops

Using the GPU

Page 9

The Benefits of Big Data

Tools for processing massive data are crucial

Often, weaker learners can be “fixed” with more data

If the hypothesis space is large enough, accuracy improvement can be log-linear even out to billions of examples (Banko and Brill, 2001)

Page 10

Is This What Is Needed?

Processing big data is crucial, but many interesting ML algorithms are trivially parallel

Is the focus on parallelism and architecture really necessary, or just popular?

For most jobs, multi-machine architectures are probably not necessary

“No one ever got fired for using Hadoop on a cluster”

http://research.microsoft.com/pubs/163083/hotcbp12%20final.pdf

If not parallel architectures, then what?

Page 11

Some Old Assumptions

Much of the current work in large scale learning makes the standard assumptions about the data:

That it is drawn i.i.d. from a stationary distribution

That linear-time algorithms are cheap

That super-linear-time algorithms are expensive

Page 12

Big Data, Assumption Breaker

Could easily be non-i.i.d.

− Even shuffling is expensive

− What if it's not all there?

− For many common large datasets, the distribution is almost certainly not stationary, as the world itself isn't stationary

The easy solutions . . .

− Make a pass over the data to shuffle it

− Wait for it all to be there

. . . both break responsiveness.

Page 13

The New Complexity

Network latency and disk read times may dominate the cost of some learning algorithms

− One pass over the data is expensive

− Multiple passes may be out of the question

Because reading the data dominates cost, we can do intensive computation in a given locality without significantly impacting cost

− Read a block of data once into memory, do several hundred passes over it, read the next block, . . . (see the sketch below)

− Super-linear algorithms aren't so bad?
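
A minimal Python sketch of this read-once, compute-a-lot pattern. The linear model, squared-loss gradient steps, block size, and learning rate are illustrative assumptions; the point is that each block is read from storage once and then reused for many cheap in-memory passes.

    import numpy as np

    def read_blocks(path, block_rows=100_000):
        """Yield (X, y) blocks from a numeric CSV whose last column is the target."""
        rows = []
        with open(path) as f:
            for line in f:
                rows.append([float(v) for v in line.strip().split(",")])
                if len(rows) == block_rows:
                    block = np.array(rows)
                    rows = []
                    yield block[:, :-1], block[:, -1]
        if rows:
            block = np.array(rows)
            yield block[:, :-1], block[:, -1]

    def train(path, n_features, passes_per_block=200, lr=1e-3):
        w = np.zeros(n_features)
        for X, y in read_blocks(path):            # expensive: disk / network read
            for _ in range(passes_per_block):     # cheap: the block is already in RAM
                grad = X.T @ (X @ w - y) / len(y) # full-gradient step, squared loss
                w -= lr * grad
            # each example is read from storage exactly once
        return w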

Page 14

Example: The “Slow Arrival” Problem

A lot of big data doesn't arrive all at once

− Transactional data

− Sensor data

− Economic data

We only get a chunk of the data every so often

The distribution may be non-stationary

Page 15

Some Simple Solutions

Streaming algorithm, incremental updates

− Good, but limits our options somewhat

− Typically have to make choices about how long it takes for data to “expire” (e.g., learning rate)

Lazy accumulators / Reservoir sampling (sketched below)

− Lazy algorithms limit options

− Reservoir sampling doesn't use all of the data

− Implicit expiry of data is “never”

Window-based retraining

− Completely forgets past data

− Window size is an explicit choice
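
For reference, a standard reservoir-sampling sketch (Algorithm R) in Python: keep a fixed-size uniform sample of an unbounded stream, then retrain on the reservoir whenever a model is needed. The reservoir size is an illustrative choice.

    import random

    def reservoir_sample(stream, k=10_000):
        """Return k items chosen uniformly at random from an iterable of unknown length."""
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)
            else:
                j = random.randint(0, i)     # inclusive on both ends
                if j < k:
                    reservoir[j] = item      # keep new item with probability k / (i + 1)
            # note: data implicitly never "expires" -- old and new examples stay
            # in the sample with equal probability, exactly the limitation above
        return reservoir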

Page 16

Related Research #1: Theory

Strong mixing conditions – Analysis of time-series data whose observations are asymptotically independent when they are sufficiently far apart in time (one common formalization is sketched below)

Block-wise Stationarity – The data is drawn from the same distribution for some period of time before the distribution changes

Concept Drift – When the concept learned by a classifier becomes invalid due to changes in the generating distributions of either the input or the output
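
One common way to make "asymptotically independent when sufficiently far apart in time" precise is Rosenblatt's alpha-mixing coefficient; this is only one of several mixing notions, stated here as a minimal LaTeX snippet.

    % With \mathcal{F}_1^{t} = \sigma(X_1, \dots, X_t) and
    % \mathcal{F}_{t+k}^{\infty} = \sigma(X_{t+k}, X_{t+k+1}, \dots),
    % the alpha-mixing coefficient of a process {X_t} is
    \[
      \alpha(k) \;=\; \sup_{t}\;
        \sup_{A \in \mathcal{F}_1^{t},\; B \in \mathcal{F}_{t+k}^{\infty}}
        \bigl| P(A \cap B) - P(A)\,P(B) \bigr|
    \]
    % and the process is strongly (alpha-)mixing when \alpha(k) \to 0 as k \to \infty:
    % events far enough apart in time become asymptotically independent.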

Page 17

Some Slow Arrival Data

Simulated traffic data (closely mirrors some of our user data)

− Cars per minute on a busy street

− Predict: Number of cars that will be on the street in a given minute on a given date

Varies by time of day

− Rush hours have more traffic

− Night time has little

Varies by month of year

− Less weekend travel in the winter

− Less weekday travel in the summer

Gaussian noise added to make it interesting
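
A minimal Python sketch of how traffic data like this could be simulated: a cars-per-minute rate with rush-hour peaks, a seasonal (monthly) effect that flips between weekdays and weekends, and Gaussian noise. All rates, amplitudes, and the noise level are made-up illustrative values, not the actual simulation.

    import numpy as np

    rng = np.random.default_rng(0)

    def cars_per_minute(hour, month, weekday=True):
        base = 20.0                                          # typical off-peak rate
        rush = 40.0 * (np.exp(-((hour - 8) ** 2) / 4.0)      # morning rush hour
                       + np.exp(-((hour - 17) ** 2) / 4.0))  # evening rush hour
        night = -15.0 if hour < 6 or hour >= 22 else 0.0     # little traffic at night
        season = 5.0 * np.cos(2 * np.pi * (month - 7) / 12)  # seasonal term, peaks in July
        season = -season if weekday else season              # weekdays dip in summer, weekends in winter
        noise = rng.normal(0.0, 3.0)                         # Gaussian noise
        return max(0.0, base + rush + night + season + noise)

    # One simulated day in July, one reading per minute:
    day = [cars_per_minute(minute // 60, month=7) for minute in range(24 * 60)]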

Page 18

Algorithm and Strawmen

Basic algorithm (sketched after this list):

− Given: Classifier at time n and the data

− When a new block arrives at n + 1, train a classifier on half of the data

− Use the other half to estimate performance of the new classifier vs. the old

− Resample according to the amount of “drift” detected, train new classifier

− Repeat

Compare with

− Reservoir sampling

− Training only on last block

− Training on last four blocks
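
A rough Python reading of the block-handling loop above. The drift score (relative error of the old model against a model fit only to the new block) and the resampling rule are assumptions for illustration, not the exact method used in the experiments.

    import random

    def handle_new_block(history, old_model, block, train_model, error):
        """history: list of past examples; block: newly arrived examples.
        train_model(examples) -> model; error(model, examples) -> float (lower is better)."""
        random.shuffle(block)
        half = len(block) // 2
        fit_half, eval_half = block[:half], block[half:]

        new_model = train_model(fit_half)       # a model of the current block only
        err_old = error(old_model, eval_half)   # how well does the old model generalize?
        err_new = error(new_model, eval_half)

        # Crude drift score in [0, 1]: 0 means the old model still looks fine,
        # 1 means the history looks useless for the current block.
        drift = max(0.0, min(1.0, (err_old - err_new) / max(err_old, 1e-9)))

        # Resample: the more drift detected, the less old data enters the training set.
        n_keep = int((1.0 - drift) * len(history))
        training_set = random.sample(history, n_keep) + list(block)
        model = train_model(training_set)

        history.extend(block)                   # keep the raw data around for later blocks
        return model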

Page 19

Some Results #1: Regular Seasonal Effects

Training on the last n blocks does well in the present but not in general

Reservoir sampling trades a little present performance for better performance in the general case

Adaptive resampling does more or less the same

Page 20

Some Results #2: Dramatic Changes

Sampling fails completely, as history outside of the current block doesn't matter

Adaptive resampling is able to detect the uselessness of the history and maintain performance

Page 21

Summary

Processing big data quickly is important

But it isn't everything!

− Big data brings new problems

− Some of these might be new learning settings that are scientifically interesting

“Slow Arrival” data is one of these

− Seems general enough to be generally interesting

− Benefits from something more than the naïve approach

Page 22

Try BigML!

We're still in private beta, but go to:

www.bigml.com

And request an invitation!