Mapping Stacked Decision Forests to Deep and Sparse Convolutional Neural Networks for Semantic Segmentation

Presented by Daniela Massiceti, Reading Group - January 2016
v2 – 22 December 2015

David L Richmond¹, Dagmar Kainmueller¹, Michael Y Yang², Eugene W Myers¹, and Carsten Rother²
¹MPI-CBG, ²TU Dresden


Page 1:

Mapping Stacked Decision Forests to Deep and Sparse Convolutional Neural Networks for Semantic Segmentation

Presented by Daniela Massiceti
Reading Group - January 2016

v2 – 22 December 2015

David L Richmond¹, Dagmar Kainmueller¹, Michael Y Yang², Eugene W Myers¹, and Carsten Rother²
¹MPI-CBG, ²TU Dresden

Page 2:

23/01/16 Daniela Massiceti – RG Jan 2016 2

Random Forests vs Neural Networks

• RFs:
  – Easy to train; multi-class; training requires little data
  – Non-differentiable; each tree trained greedily

• NNs:
  – End-to-end learning of all parameters; feature learning
  – Training requires a lot of data and time

• Goal: bridge the gap between RFs and NNs

Page 3:

Sethi and Welbl’s mapping

Page 4:

Sethi and Welbl’s mapping

[Figure: a single decision tree mapped to a two-hidden-layer network; split-node units use weights ±1, leaf units use weights ±ω, with outputs y1 and y2]
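To make the construction concrete, here is a minimal NumPy sketch of the Sethi/Welbl mapping (the toy tree, weights, and all helper names are my own illustration, not the authors' code): each split node becomes an H1 unit that fires ±1 on its threshold test, and each leaf becomes an H2 unit whose ±ω weights along its root-to-leaf path let it fire only when every path condition holds.

```python
import numpy as np

def step(a):
    # Hard threshold activation, outputs -1 / +1 (ties go left, i.e. -1)
    return np.where(a > 0, 1.0, -1.0)

# Toy tree on 2 features:
#   split0: x[0] > 0.5 ? right : left
#   left  -> split1: x[1] > 0.3 ? leaf1 : leaf0
#   right -> split2: x[1] > 0.7 ? leaf3 : leaf2
splits = [(0, 0.5), (1, 0.3), (1, 0.7)]        # (feature, threshold)
leaf_dists = np.array([[0.9, 0.1],             # class distribution per leaf
                       [0.6, 0.4],
                       [0.3, 0.7],
                       [0.1, 0.9]])

# H1: one unit per split node; unit i reads feature f_i with bias -theta_i
W1 = np.zeros((3, 2)); b1 = np.zeros(3)
for i, (f, t) in enumerate(splits):
    W1[i, f] = 1.0
    b1[i] = -t

# H2: one unit per leaf; +w on "right" edges, -w on "left" edges of its path.
# Bias -(path_len - 1)*w makes the unit fire only if every condition matches.
w = 1.0
paths = [  # leaf -> [(split index, +1 if right branch else -1), ...]
    [(0, -1), (1, -1)],  # leaf0
    [(0, -1), (1, +1)],  # leaf1
    [(0, +1), (2, -1)],  # leaf2
    [(0, +1), (2, +1)],  # leaf3
]
W2 = np.zeros((4, 3)); b2 = np.zeros(4)
for l, path in enumerate(paths):
    for s, sign in path:
        W2[l, s] = sign * w
    b2[l] = -(len(path) - 1) * w

def tree_as_net(x):
    h1 = step(W1 @ x + b1)                   # split decisions, in {-1, +1}
    h2 = (W2 @ h1 + b2 > 0).astype(float)    # exactly one leaf unit is 1
    return leaf_dists.T @ h2                 # that leaf's class distribution

print(tree_as_net(np.array([0.2, 0.9])))     # -> [0.6 0.4], leaf1's distribution
```

Feeding the network any input reproduces a plain traversal of the tree: exactly one H2 unit activates, and the output layer copies out that leaf's distribution.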

Page 5:

Sethi and Welbl’s mapping

● Replicate T times for the T trees in the RF
● Fully connect the output layer
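In the same toy setting as above (helper names are illustrative assumptions, not the paper's code), the "replicate T times, fully connect the output" step amounts to averaging the T per-tree leaf predictions, which reproduces the usual RF ensemble average:

```python
import numpy as np

def forest_output(leaf_dists_per_tree, leaf_indicators):
    # leaf_dists_per_tree: per tree, an (n_leaves, n_classes) array
    # leaf_indicators: per tree, the one-hot H2 activation vector
    # A fully connected output layer with weights leaf_dists/T computes
    # exactly this mean of the per-tree leaf distributions.
    T = len(leaf_dists_per_tree)
    preds = [d.T @ h for d, h in zip(leaf_dists_per_tree, leaf_indicators)]
    return sum(preds) / T
```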

Page 6:

Relationship between RFs and CNNs

● RF with contextual features = special case of CNN with sparse convolutional kernels & no max pooling

● Difference:
  ● RFs: a pixel's feature vector is precomputed
  ● CNNs: features are learnt
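A small sketch of why the equivalence holds (the offsets, kernel size, and helper names are illustrative assumptions): a contextual feature of the form I(p + o1) - I(p + o2) is exactly cross-correlation with a kernel that is zero everywhere except +1 at offset o1 and -1 at offset o2, i.e. a sparse convolutional kernel.

```python
import numpy as np

def contextual_feature(img, o1, o2):
    # RF-style contextual feature: I(p + o1) - I(p + o2) at every pixel p
    return (np.roll(img, (-o1[0], -o1[1]), axis=(0, 1))
            - np.roll(img, (-o2[0], -o2[1]), axis=(0, 1)))

def sparse_kernel(o1, o2, size=5):
    # Equivalent kernel: all zeros except +1 at o1 and -1 at o2
    k = np.zeros((size, size)); c = size // 2
    k[c + o1[0], c + o1[1]] += 1.0
    k[c + o2[0], c + o2[1]] -= 1.0
    return k

def correlate(img, k):
    # Plain cross-correlation (what CNN "convolution" layers compute),
    # evaluated only where the kernel fits inside the image
    r = k.shape[0] // 2
    out = np.zeros_like(img)
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            out[y, x] = np.sum(img[y - r:y + r + 1, x - r:x + r + 1] * k)
    return out
```

Away from the image border (where boundary handling differs), the two computations agree pixel for pixel.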

Page 7:

Mapping forward: RF Stack to Deep CNN

Page 8:

Mapping back: Deep CNN to RF Stack

● Map back #1

– Keep weights and bias between H1 and H2 fixed, and re-train by backprop

– Trivial mapping back to the original RF with updated thresholds and leaf distributions

● Map back #2

– Distributed activation on H2 = many leaves contributing to the prediction

– New leaf distributions:
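For map back #1, reading the updated RF parameters out of the retrained network can be sketched as follows (variable names assume the two-hidden-layer encoding of a single tree and are illustrative, not the authors' code): because the H1-to-H2 wiring is kept fixed, the tree topology survives retraining, so the new thresholds fall out of the H1 biases and the new leaf distributions out of the output weights.

```python
import numpy as np

def read_back_tree(W1, b1, W_out):
    # W1, b1: retrained H1 weights/biases; each H1 unit reads one feature
    # W_out:  retrained output weights, shape (n_classes, n_leaves)
    splits = []
    for i in range(W1.shape[0]):
        f = int(np.argmax(np.abs(W1[i])))        # the feature this unit reads
        splits.append((f, -b1[i] / W1[i, f]))    # updated threshold
    leaf_dists = W_out.T                         # row l = scores for leaf l
    # Renormalise: backprop may leave the per-leaf scores unnormalised
    leaf_dists = leaf_dists / leaf_dists.sum(axis=1, keepdims=True)
    return splits, leaf_dists

# Hypothetical retrained parameters for a two-split, two-leaf-per-side tree
W1 = np.array([[1.0, 0.0], [0.0, 1.0]])      # split0 reads x0, split1 reads x1
b1 = np.array([-0.42, -0.61])                # retrained biases
W_out = np.array([[0.9, 0.3], [0.1, 0.7]])   # 2 classes x 2 leaves
splits, dists = read_back_tree(W1, b1, W_out)
```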

Page 9:

Results

● Kinect Body Part Classification

[Figure: depth image input, ground truth, stacked RF and retrained CNN segmentations]

Stacked RF: 82%    Retrained CNN: 91%

Page 10:

Results

● Zebrafish Somite Classification

[Figure: input, ground truth, stacked RF, retrained CNN, map back #1 and map back #2 predictions]

Method       RF     FCN    CNN    MB1    MB2
Dice score   0.60   0.18   0.66   0.59   0.63

Page 11:

Conclusions & comments

● Mapping from RF to CNN and back again

– Works with minimal training data

– Distributed activation on H2 – soft decision trees

– Hand-selected features – representation learning

– Weights between H1 and H2 fixed – Decision Jungles

● Methodology-focussed rather than results-focussed

● Map-backed RF – improved performance, fast evaluation at test time?