A Family of Online Boosting Algorithms


TRANSCRIPT

Page 1: A Family of Online Boosting Algorithms

Boris Babenko¹, Ming-Hsuan Yang², Serge Belongie¹

1. University of California, San Diego
2. University of California, Merced

OLCV, Kyoto, Japan

Page 2: A Family of Online Boosting Algorithms

• Extending online boosting beyond supervised learning

• Some algorithms exist (e.g. MIL, Semi-Supervised), but we would like a single framework

[Oza ‘01, Grabner et al. ‘06, Grabner et al. ‘08, Babenko et al. ‘09]

Page 3: A Family of Online Boosting Algorithms

• Goal: learn a strong classifier

  H(x) = Σ_{m=1..M} h(x; θ_m)

  where h(x; θ_m) is a weak classifier, and θ = (θ_1, …, θ_M) is the learned parameter vector
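For concreteness, here is a minimal sketch of this additive model in Python; the tanh weak learner on a linear projection is an illustrative assumption, not necessarily the weak-learner family used in the talk.

```python
import numpy as np

def weak_classifier(x, theta):
    """Illustrative weak classifier h(x; theta): tanh of a linear projection of x.
    theta packs a weight vector and a bias (an assumed parameterization)."""
    w, b = theta[:-1], theta[-1]
    return np.tanh(x @ w + b)          # real-valued response in (-1, 1)

def strong_classifier(x, thetas):
    """Strong classifier H(x) = sum_m h(x; theta_m): the additive model above."""
    return sum(weak_classifier(x, t) for t in thetas)

# toy usage: M = 3 weak classifiers on 2-D inputs
rng = np.random.default_rng(0)
thetas = [rng.normal(size=3) for _ in range(3)]   # (w1, w2, b) per weak learner
x = rng.normal(size=(5, 2))                       # five example points
print(strong_classifier(x, thetas))               # scores; sign gives the predicted label
```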

Page 4: A Family of Online Boosting Algorithms

• Have some loss function L(H)

• Have the strong classifier built so far, H_{m−1}(x) = Σ_{k<m} h(x; θ_k)

• Find next weak classifier:

  θ_m = argmin_θ L( H_{m−1} + h(·; θ) )

Page 5: A Family of Online Boosting Algorithms

• Find some parameter vector θ = (θ_1, …, θ_M) that optimizes the loss

Page 6: A Family of Online Boosting Algorithms

• If the loss over the entire training data can be split into a sum of per-example losses,

  L(θ) = Σ_i ℓ(θ; x_i, y_i),

  we can use the following update:

  θ ← θ − η ∇_θ ℓ(θ; x_i, y_i)
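A minimal sketch of this stochastic update; the linear model and squared-error per-example loss below are placeholders chosen only to make the step runnable.

```python
import numpy as np

def sgd_step(theta, grad_example_loss, x, y, lr=0.05):
    """One stochastic step: theta <- theta - lr * d ell(theta; x, y) / d theta.
    Valid whenever the total loss is a sum of per-example losses ell."""
    return theta - lr * grad_example_loss(theta, x, y)

# placeholder per-example loss: squared error of a linear model (an assumption)
def grad_sq_loss(theta, x, y):
    return 2.0 * (x @ theta - y) * x

rng = np.random.default_rng(0)
theta = np.zeros(3)
true_w = np.array([1.0, -2.0, 0.5])
for _ in range(300):                      # stream of training examples
    x = rng.normal(size=3)
    theta = sgd_step(theta, grad_sq_loss, x, x @ true_w)
print(theta)                              # drifts toward true_w
```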

Page 7: A Family of Online Boosting Algorithms

• Recall, we want to solve

  θ* = argmin_θ L(θ)

• What if we use stochastic gradient descent to find θ*?

Page 8: A Family of Online Boosting Algorithms
Page 9: A Family of Online Boosting Algorithms
Page 10: A Family of Online Boosting Algorithms
Page 11: A Family of Online Boosting Algorithms

• For any differentiable loss function, can derive boosting algorithm…
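Read as code, this recipe is stochastic gradient descent over all weak-classifier parameters jointly (gradient descent in parameter space). The sketch below assumes tanh weak learners, a logistic loss, and finite-difference gradients purely for illustration; none of these specific choices are claimed from the talk.

```python
import numpy as np

def weak(x, theta):
    # illustrative weak classifier h(x; theta) = tanh(w . x + b)  (an assumption)
    return np.tanh(x @ theta[:-1] + theta[-1])

def strong(x, thetas):
    # strong classifier H(x) = sum_m h(x; theta_m)
    return sum(weak(x, t) for t in thetas)

def online_boost_step(thetas, x, y, lr=0.1, eps=1e-4):
    """One online update: a stochastic gradient step of a differentiable loss
    ell(y, H(x)) with respect to every weak classifier's parameters.
    Gradients are finite-difference here to keep the sketch loss-agnostic."""
    def loss(ts):
        return np.log1p(np.exp(-y * strong(x, ts)))      # assumed logistic loss
    for m in range(len(thetas)):
        grad = np.zeros_like(thetas[m])
        for j in range(len(thetas[m])):
            bumped = [t.copy() for t in thetas]
            bumped[m][j] += eps
            grad[j] = (loss(bumped) - loss(thetas)) / eps
        thetas[m] = thetas[m] - lr * grad
    return thetas

# toy stream: label is the sign of the first coordinate
rng = np.random.default_rng(0)
thetas = [rng.normal(scale=0.1, size=3) for _ in range(5)]   # M = 5 weak learners
for _ in range(500):
    x = rng.normal(size=2)
    y = 1.0 if x[0] > 0 else -1.0
    thetas = online_boost_step(thetas, x, y)

test = rng.normal(size=(200, 2))
print(np.mean(np.sign(strong(test, thetas)) == np.sign(test[:, 0])))  # held-out accuracy
```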

Page 12: A Family of Online Boosting Algorithms
Page 13: A Family of Online Boosting Algorithms

• Loss:

• Update rule:
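For concreteness, one plausible instantiation of this loss/update pair, assuming the logistic loss ℓ(y, H(x)) = log(1 + exp(−y·H(x))); the loss choice is an assumption, not necessarily the one on this slide. The derivative with respect to the ensemble output is what the chain rule feeds into each weak learner's parameter update.

```python
import numpy as np

def logistic_loss(y, score):
    """Assumed classification loss: ell(y, H(x)) = log(1 + exp(-y * H(x)))."""
    return np.log1p(np.exp(-y * score))

def dloss_dscore(y, score):
    """Derivative of the assumed loss w.r.t. the ensemble output H(x); by the
    chain rule, each weak learner's parameter gradient is scaled by this."""
    return -y / (1.0 + np.exp(y * score))

# a negative example (y = -1) that the ensemble currently scores at +0.8
print(logistic_loss(-1.0, 0.8), dloss_dscore(-1.0, 0.8))
```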

Page 14: A Family of Online Boosting Algorithms

• Training data: bags of instances and bag labels

• Bag is positive if at least one member is positive

Page 15: A Family of Online Boosting Algorithms

• Loss: negative log-likelihood of the bag labels,

  L = −Σ_i ( y_i log p_i + (1 − y_i) log(1 − p_i) )

  where p_i is the probability that bag i is positive, e.g. the Noisy-OR model p_i = 1 − Π_j (1 − σ(H(x_ij)))

[Viola et al. ‘05]

Page 16: A Family of Online Boosting Algorithms

• Update rule:
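For concreteness, a sketch of the Noisy-OR bag probability from [Viola et al. ‘05], the bag log-likelihood loss, and its gradient with respect to the instance scores; the sigmoid instance model σ(H(x_ij)) and the numerical gradient are illustrative assumptions.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def bag_prob(instance_scores):
    """Noisy-OR bag probability: p = 1 - prod_j (1 - sigma(H(x_ij)))."""
    return 1.0 - np.prod(1.0 - sigmoid(instance_scores))

def bag_loss(bag_label, instance_scores):
    """Negative log-likelihood of one bag (bag_label in {0, 1})."""
    p = bag_prob(instance_scores)
    return -(bag_label * np.log(p) + (1 - bag_label) * np.log(1 - p))

def dloss_dscores(bag_label, instance_scores, eps=1e-6):
    """Finite-difference gradient of the bag loss w.r.t. each instance score;
    this is the quantity each weak learner's update would be scaled by."""
    grad = np.zeros_like(instance_scores)
    for j in range(len(instance_scores)):
        bumped = instance_scores.copy()
        bumped[j] += eps
        grad[j] = (bag_loss(bag_label, bumped) - bag_loss(bag_label, instance_scores)) / eps
    return grad

# a positive bag whose instances currently score [-1.0, 0.5, 2.0]
scores = np.array([-1.0, 0.5, 2.0])
print(bag_loss(1, scores), dloss_dscores(1, scores))
```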

Page 17: A Family of Online Boosting Algorithms

• So far, only empirical results

• Compare:
  – OSB
  – BSB
  – Standard batch boosting algorithm
  – Linear & non-linear models trained with stochastic gradient descent (BSB with M=1)

Page 18: A Family of Online Boosting Algorithms

[LeCun et al. ‘98, Kanade et al. ‘00, Huang et al. ‘07]

Page 19: A Family of Online Boosting Algorithms

[UCI Repository, Ranganathan et al. ‘08]

Page 20: A Family of Online Boosting Algorithms

[LeCun et al. ‘97, Andrews et al. ‘02]

Page 21: A Family of Online Boosting Algorithms

• Friedman’s “Gradient Boosting” framework = gradient descent in function space
  – OSB = gradient descent in parameter space

• Similar to neural net methods (e.g. Ash et al. ‘89)

Page 22: A Family of Online Boosting Algorithms

• Advantages:
  – Easy to derive new online boosting algorithms for various problems / loss functions
  – Easy to implement

• Disadvantages:
  – No theoretical guarantees yet
  – Restricted class of weak learners

Page 23: A Family of Online Boosting Algorithms

• Research supported by:
  – NSF CAREER Grant #0448615
  – NSF IGERT Grant DGE-0333451
  – ONR MURI Grant #N00014-08-1-0638