
Page 1: Ensemble Methods

Construct a set of classifiers from the training data.

Predict the class label of previously unseen records by aggregating the predictions made by multiple classifiers.

In Olympic ice skating you have multiple judges. Why?

Page 2: General Idea

Original training data D

Step 1: Create multiple data sets D1, D2, ..., Dt-1, Dt

Step 2: Build multiple classifiers C1, C2, ..., Ct-1, Ct

Step 3: Combine the classifiers into C*

Page 3: Why does it work?

Suppose there are 25 base classifiers, each with error rate ε = 0.35, and assume the classifiers are independent. The majority-vote ensemble is wrong only when at least 13 of the 25 base classifiers err on the same record, so the probability that the ensemble classifier makes a wrong prediction is

    P(wrong) = Σ_{i=13}^{25} C(25, i) ε^i (1 − ε)^(25−i) ≈ 0.06

Practice has shown that even when independence does not hold, results are good.
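The binomial sum is easy to check numerically; a minimal sketch (the values 25, 0.35, and the majority threshold of 13 come from the slide):

```python
from math import comb

def ensemble_error(n=25, eps=0.35):
    """Probability that a majority vote of n independent base
    classifiers, each with error rate eps, is wrong: the chance
    that more than half of them err on the same record."""
    majority = n // 2 + 1  # 13 of 25
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(majority, n + 1))

print(round(ensemble_error(), 2))  # 0.06
```

With a single classifier (n = 1) the function just returns eps, which makes the variance-reduction benefit of the vote easy to see.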

Page 4: Methods for Generating Multiple Classifiers

Manipulate the training data: sample the data differently each time. Examples: bagging and boosting.

Manipulate the input features: sample the features differently each time. Makes especially good sense if there is redundancy. Example: Random Forest.

Manipulate the learning algorithm: vary some parameter of the learning algorithm (e.g., amount of pruning, ANN network topology), or use different learning algorithms.
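A minimal sketch of the "manipulate the input features" idea, as in the random-subspace approach behind Random Forest (the function name and sizes below are illustrative, not from the slides): each base classifier gets its own random subset of feature indices and trains using only those columns.

```python
import random

def feature_subsets(n_features, n_classifiers, subset_size, seed=0):
    """Draw one random subset of feature indices per base classifier;
    each classifier then trains on only its own columns."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subset_size))
            for _ in range(n_classifiers)]

# e.g. 3 classifiers, each seeing 4 of 10 features
subsets = feature_subsets(n_features=10, n_classifiers=3, subset_size=4)
```

If some features are redundant, different subsets still carry enough signal to learn from, which is why this works best with redundancy.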

Page 5: Background

Classifier performance can be impacted by:

Bias: assumptions made to help with generalization. "Simpler is better" is a bias.

Variance: a learning method will give different results based on small changes (e.g., in the training data). When I run experiments and use random sampling with repeated runs, I get different results each time.

Noise: measurements may have errors, or the class may be inherently probabilistic.

Page 6: How Ensembles Help

Ensemble methods can help with both bias and variance.

Averaging the results over multiple runs will reduce the variance. I observe this when I use 10 runs with random sampling and see that my learning curves are much smoother.

Ensemble methods are especially helpful for unstable classifier algorithms. Decision trees are unstable, since small changes in the training data can greatly impact the structure of the learned decision tree.

If you combine different classifier methods into an ensemble, then you are using methods with different biases. You are more likely to use a classifier with a bias that is a good match for the problem, and you may even be able to identify the best methods and weight them more.
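The variance-reduction effect is easy to see in a toy simulation (all numbers here are made up for illustration): averaging 10 independent noisy runs shrinks the spread of the estimate by roughly √10.

```python
import random
import statistics

rng = random.Random(42)

def one_run():
    """One run of a hypothetical learner: true accuracy 0.8
    plus Gaussian sampling noise from the random train/test split."""
    return 0.8 + rng.gauss(0, 0.05)

single   = [one_run() for _ in range(2000)]
averaged = [statistics.mean(one_run() for _ in range(10))
            for _ in range(2000)]

# The averaged estimates are far less spread out (smoother curves)
print(statistics.stdev(single), statistics.stdev(averaged))
```

This is exactly why 10-run learning curves look smoother than single-run ones.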

Page 7: Examples of Ensemble Methods

How to generate an ensemble of classifiers?

Bagging

Boosting

These methods have been shown to be quite effective.

A technique ignored by the textbook is to combine classifiers built separately: by simple voting, or by voting that factors in the reliability of each classifier.

Page 8: Bagging

Sampling with replacement: build a classifier on each bootstrap sample.

Each record has probability 1 − (1 − 1/n)^n of being selected (about 63% for large n, since (1 − 1/n)^n ≈ 1/e is the chance of never being drawn); some records will be picked more than once.

Combine the resulting classifiers, such as by majority voting.

Greatly reduces the variance when compared to a single base classifier.

Page 9: Boosting

An iterative procedure to adaptively change the distribution of the training data by focusing more on previously misclassified records.

Initially, all N records are assigned equal weights.

Unlike bagging, the weights may change at the end of each boosting round.

Page 10: Boosting

Records that are wrongly classified will have their weights increased; records that are classified correctly will have their weights decreased.

Original Data:       1   2   3   4   5   6   7   8   9   10
Boosting (Round 1):  7   3   2   8   7   9   4   10  6   3
Boosting (Round 2):  5   4   9   4   2   5   1   7   4   2
Boosting (Round 3):  4   4   8   10  4   5   4   6   3   4

• Example 4 is hard to classify.

• Its weight is increased, therefore it is more likely to be chosen again in subsequent rounds.
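A simplified AdaBoost-style reweighting round makes the rule concrete (this is a generic sketch with illustrative numbers, not the exact scheme behind the sampling table above):

```python
import math

def reweight(weights, correct, error_rate):
    """One boosting round: misclassified records gain weight,
    correctly classified records lose weight, then all weights
    are renormalized to sum to 1."""
    alpha = 0.5 * math.log((1 - error_rate) / error_rate)
    new = [w * math.exp(alpha if not ok else -alpha)
           for w, ok in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new]

w = [0.1] * 10                         # all N = 10 records start equal
correct = [i != 3 for i in range(10)]  # say example 4 was misclassified
w = reweight(w, correct, error_rate=0.1)
# example 4 now carries far more weight, so it is more likely
# to be sampled in the next round
```

Repeating this over rounds is what concentrates the sampling distribution on hard examples like example 4.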

Page 12: Netflix Prize Video

https://www.youtube.com/watch?v=ImpV70uLxyw

Page 13: Netflix

Netflix is a subscription-based movie and television show rental service that offers media to subscribers physically by mail and over the internet.

Has a catalog of over 100,000 movies and television shows.

Subscriber base of over 10 million.

Page 14: Recommendations

Netflix offers users the ability to rate movies and television shows that they have seen.

Depending on those ratings, Netflix provides recommendations of movies and television shows that the subscriber would like to watch.

These recommendations are based on an algorithm called Cinematch.

Page 15: Cinematch

Uses straightforward statistical linear models with a lot of data conditioning.

This means that the more a subscriber rates, the more accurate the recommendations become.

Page 16: Netflix Prize

Competition for the best collaborative filtering algorithm to predict user ratings for movies and television shows, based on previous ratings.

Offered a $1 million prize to the team that could improve Cinematch's accuracy by 10%.

Awarded a $50,000 progress prize each year to the team that made the most progress before the 10% mark was reached.

The contest started on October 2, 2006 and would run until at least October 2, 2011, depending on when a winner was chosen.

Page 17: Metrics

The accuracy of the algorithms was measured using root mean square error, or RMSE.

Chosen because it is a well-known single value that can account for and amplify the contributions of errors such as false positives and false negatives.
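For reference, RMSE on a handful of made-up ratings (the numbers are illustrative only); the squaring is what amplifies large individual errors:

```python
import math

def rmse(predicted, actual):
    """Root mean square error: square each error, average,
    then take the square root."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

print(rmse([3.5, 4.0, 2.0], [4, 4, 3]))  # ≈ 0.645
```

The single 1-star miss on the third rating dominates the result, which is exactly the amplification the contest wanted.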

Page 18: Metrics

Cinematch scored 0.9525 on the test subset.

The winning team needed to score at least 10% lower, with an RMSE of 0.8563.

Page 19: Results

The contest ended on June 26, 2009.

The threshold was broken by the teams "BellKor's Pragmatic Chaos" and "The Ensemble", both achieving a 10.06% improvement over Cinematch, with an RMSE of 0.8567.

"BellKor's Pragmatic Chaos" won the prize because they submitted their results 20 minutes before "The Ensemble".

Page 20: Netflix Prize Sequel

Due to the success of the contest, Netflix announced another contest to further improve their recommender system.

Unfortunately, it was discovered that the anonymized customer data provided to the contestants could actually be used to identify individual customers.

This, combined with a resulting FTC investigation and a lawsuit, led Netflix to cancel the sequel.

Page 21: Sources

http://blog.netflix.com/2010/03/this-is-neil-hunt-chief-product-officer.html
http://www.netflixprize.com
http://www.nytimes.com/2010/03/13/technology/13netflix.html?_r=1