“Vanilla” Neural Network
Fei-Fei Li & Justin Johnson & Serena Yeung, Lecture 10, May 3, 2018


TRANSCRIPT

Page 1 (Slide 13)


Vanilla Neural Networks

“Vanilla” Neural Network

Page 2 (Slide 14)


Recurrent Neural Networks: Process Sequences

e.g. Image Captioning: image -> sequence of words

Page 3 (Slide 15)


Recurrent Neural Networks: Process Sequences

e.g. Sentiment Classification: sequence of words -> sentiment

Page 4 (Slide 16)


Recurrent Neural Networks: Process Sequences

e.g. Machine Translation: sequence of words -> sequence of words

Page 5 (Slide 17)


Recurrent Neural Networks: Process Sequences

e.g. Video classification at the frame level

Page 6 (Slide 22)


Recurrent Neural Network

[Diagram: input vector x → RNN (internal state) → output y]

We can process a sequence of vectors x by applying a recurrence formula at every time step:

h_t = f_W(h_{t-1}, x_t)

where h_t is the new state, h_{t-1} is the old state, x_t is the input vector at some time step, and f_W is some function with parameters W.

Page 7 (Slide 23)


Recurrent Neural Network

[Diagram: input vector x → RNN (internal state) → output y]

We can process a sequence of vectors x by applying a recurrence formula at every time step: h_t = f_W(h_{t-1}, x_t).

Notice: the same function and the same set of parameters are used at every time step.
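As a minimal sketch of this recurrence in code (the shapes, the toy f_W, and all names here are illustrative assumptions, not from the lecture):

```python
import numpy as np

def run_rnn(f_W, h0, xs):
    """Apply the same recurrence, with the same parameters,
    at every time step: h_t = f_W(h_{t-1}, x_t)."""
    h = h0
    for x_t in xs:
        h = f_W(h, x_t)
    return h

# A toy f_W: any fixed-parameter function of (h, x) fits the template.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4 + 3))                  # one fixed parameter matrix
f_W = lambda h, x: np.tanh(W @ np.concatenate([h, x]))
h_final = run_rnn(f_W, np.zeros(4), rng.normal(size=(5, 3)))
```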

Page 8 (Slide 24)


(Simple) Recurrent Neural Network

[Diagram: input vector x → RNN → output y]

The state consists of a single “hidden” vector h:

h_t = tanh(W_hh h_{t-1} + W_xh x_t)
y_t = W_hy h_t

Sometimes called a “Vanilla RNN” or an “Elman RNN” after Prof. Jeffrey Elman
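Transcribed into a sketch (weight names follow the slide's notation; the sizes and initialization scale are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
H, D, V = 4, 3, 2                        # hidden, input, output sizes (toy)
Whh = 0.01 * rng.normal(size=(H, H))     # hidden-to-hidden weights
Wxh = 0.01 * rng.normal(size=(H, D))     # input-to-hidden weights
Why = 0.01 * rng.normal(size=(V, H))     # hidden-to-output weights

def vanilla_rnn_step(h_prev, x):
    h = np.tanh(Whh @ h_prev + Wxh @ x)  # h_t = tanh(W_hh h_{t-1} + W_xh x_t)
    y = Why @ h                          # y_t = W_hy h_t
    return h, y

h, y = vanilla_rnn_step(np.zeros(H), rng.normal(size=D))
```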

Page 9 (Slide 25)


RNN: Computational Graph

[Diagram: h0 → f_W → h1, with input x1]

Page 10 (Slide 26)


RNN: Computational Graph

[Diagram: h0 → f_W → h1 → f_W → h2, with inputs x1, x2]

Page 11 (Slide 27)


RNN: Computational Graph

[Diagram: h0 → f_W → h1 → f_W → h2 → f_W → h3 → … → hT, with inputs x1, x2, x3, …]

Page 12 (Slide 28)


RNN: Computational Graph

Re-use the same weight matrix at every time-step.

[Diagram: a single shared W feeds every f_W node: h0 → f_W → h1 → f_W → h2 → f_W → h3 → … → hT]

Page 13 (Slide 29)


RNN: Computational Graph: Many to Many

[Diagram: same unrolled graph; each hidden state now produces an output: h1 → y1, h2 → y2, h3 → y3, …, hT → yT]

Page 14 (Slide 30)


RNN: Computational Graph: Many to Many

[Diagram: each output yt is scored against a target to give a per-step loss: y1 → L1, y2 → L2, y3 → L3, …, yT → LT]

Page 15 (Slide 31)


RNN: Computational Graph: Many to Many

[Diagram: the per-step losses are summed into a single total loss: L = L1 + L2 + L3 + … + LT]
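A sketch of this many-to-many setup with the per-step losses summed into one total L (the softmax cross-entropy choice and all shapes are assumptions for illustration; `step` stands in for any shared-weight cell such as the vanilla step above):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def total_loss(step, h0, xs, targets):
    """Unroll the shared-weight step over the sequence; L = L1 + L2 + ... + LT."""
    h, L = h0, 0.0
    for x_t, target_t in zip(xs, targets):
        h, y = step(h, x_t)               # same function, same W, every step
        p = softmax(y)                    # turn scores y_t into probabilities
        L += -np.log(p[target_t])         # per-step cross-entropy loss L_t
    return L
```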

Page 16 (Slide 32)


RNN: Computational Graph: Many to One

[Diagram: inputs x1, x2, x3, … are consumed step by step; only the final hidden state hT produces the single output y]

Page 17 (Slide 33)


RNN: Computational Graph: One to Many

[Diagram: a single input x initializes the recurrence; each unrolled step emits an output: y1, y2, y3, …, yT]

Page 18 (Slide 34)


Sequence to Sequence: Many-to-one + one-to-many

Many to one: Encode input sequence in a single vector.

[Diagram: an encoder with parameters W1 consumes x1, x2, x3 through f_W steps h1, h2, h3 and produces the summary vector hT]

Sutskever et al, “Sequence to Sequence Learning with Neural Networks”, NIPS 2014

Page 19 (Slide 35)


Sequence to Sequence: Many-to-one + one-to-many

Many to one: Encode input sequence in a single vector.

One to many: Produce output sequence from single input vector.

[Diagram: the encoder (parameters W1) compresses x1, x2, x3 into hT; the decoder (parameters W2) unrolls from hT, emitting y1, y2, … one step at a time]

Sutskever et al, “Sequence to Sequence Learning with Neural Networks”, NIPS 2014
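A compact sketch of the two halves (the W1/W2 split of encoder and decoder parameters follows the slide; the cell form, the output projection Wy, and all shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
H, D = 4, 3
W1 = 0.01 * rng.normal(size=(H, H + D))   # encoder parameters
W2 = 0.01 * rng.normal(size=(H, H))       # decoder parameters
Wy = 0.01 * rng.normal(size=(D, H))       # decoder output projection (assumed)

def encode(xs):
    """Many to one: compress the whole input sequence into one vector hT."""
    h = np.zeros(H)
    for x in xs:
        h = np.tanh(W1 @ np.concatenate([h, x]))
    return h

def decode(hT, steps):
    """One to many: unroll from the summary vector, emitting one y per step."""
    h, ys = hT, []
    for _ in range(steps):
        h = np.tanh(W2 @ h)
        ys.append(Wy @ h)
    return ys

ys = decode(encode(rng.normal(size=(3, D))), steps=2)   # x1..x3 -> y1, y2
```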

Page 20 (Slides 36-38)


Example: Character-level Language Model

Vocabulary: [h, e, l, o]

Example training sequence: “hello”
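With this vocabulary, each character enters the network as a one-hot vector; a sketch of how the training pairs for “hello” would be set up (the index assignment is an assumption):

```python
import numpy as np

vocab = ['h', 'e', 'l', 'o']
char_to_ix = {ch: i for i, ch in enumerate(vocab)}

def one_hot(ch):
    v = np.zeros(len(vocab))
    v[char_to_ix[ch]] = 1.0
    return v

# At each step the model sees one character and should assign
# high probability to the character that follows it.
seq = "hello"
inputs  = [one_hot(ch) for ch in seq[:-1]]     # h, e, l, l
targets = [char_to_ix[ch] for ch in seq[1:]]   # e, l, l, o
```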


Page 23 (Slides 39-42)


Example: Character-level Language Model: Sampling

Vocabulary: [h, e, l, o]

At test time, sample characters one at a time and feed them back to the model.

[Diagram: at each step the model’s scores are pushed through a softmax to give a distribution over the vocabulary (e.g. .03, .13, .00, .84 at the first step); a character is sampled from it (“e”, “l”, “l”, “o”) and fed in as the next input]
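The sampling loop as a sketch (model internals are elided: `rnn_step` and `Why` stand in for the trained network, and are assumptions of this example):

```python
import numpy as np

def sample_text(rnn_step, Why, h, first_ch, n, vocab, char_to_ix):
    """Sample n characters one at a time, feeding each back as the next input."""
    rng = np.random.default_rng()
    out, ch = [], first_ch
    for _ in range(n):
        x = np.zeros(len(vocab))
        x[char_to_ix[ch]] = 1.0                   # one-hot encode current char
        h = rnn_step(h, x)                        # advance the hidden state
        scores = Why @ h
        p = np.exp(scores - scores.max())
        p /= p.sum()                              # softmax over the vocabulary
        ch = vocab[rng.choice(len(vocab), p=p)]   # sample, rather than argmax
        out.append(ch)
    return "".join(out)
```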


Page 27 (Slide 43)


Backpropagation through time

[Diagram: the loss is computed over the entire unrolled sequence]

Forward through entire sequence to compute loss, then backward through entire sequence to compute gradient
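What that looks like concretely for the vanilla RNN, in the style of min-char-rnn (a sketch: biases are omitted, and the softmax loss and shapes are assumptions):

```python
import numpy as np

def bptt(xs, targets, h0, Wxh, Whh, Why):
    """Forward through the entire sequence to get the loss,
    then backward through the entire sequence for the gradients."""
    hs, ps, loss = {-1: h0}, {}, 0.0
    for t, x in enumerate(xs):                   # forward pass
        hs[t] = np.tanh(Wxh @ x + Whh @ hs[t - 1])
        s = Why @ hs[t]
        e = np.exp(s - s.max())
        ps[t] = e / e.sum()                      # softmax probabilities
        loss += -np.log(ps[t][targets[t]])       # cross-entropy at step t
    dWxh, dWhh, dWhy = map(np.zeros_like, (Wxh, Whh, Why))
    dh_next = np.zeros_like(h0)
    for t in reversed(range(len(xs))):           # backward pass
        dy = ps[t].copy()
        dy[targets[t]] -= 1.0                    # d loss / d scores
        dWhy += np.outer(dy, hs[t])
        dh = Why.T @ dy + dh_next                # from output and from the future
        da = (1.0 - hs[t] ** 2) * dh             # backprop through tanh
        dWxh += np.outer(da, xs[t])
        dWhh += np.outer(da, hs[t - 1])
        dh_next = Whh.T @ da                     # carry gradient back in time
    return loss, dWxh, dWhh, dWhy
```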

Page 28 (Slide 65)


Image Captioning

Explain Images with Multimodal Recurrent Neural Networks, Mao et al.
Deep Visual-Semantic Alignments for Generating Image Descriptions, Karpathy and Fei-Fei
Show and Tell: A Neural Image Caption Generator, Vinyals et al.
Long-term Recurrent Convolutional Networks for Visual Recognition and Description, Donahue et al.
Learning a Recurrent Visual Representation for Image Caption Generation, Chen and Zitnick

Figure from Karpathy et al, “Deep Visual-Semantic Alignments for Generating Image Descriptions”, CVPR 2015; figure copyright IEEE, 2015. Reproduced for educational purposes.

Page 29 (Slide 66)


[Diagram: a Convolutional Neural Network encodes the image; a Recurrent Neural Network generates the caption]

Pages 30-33

test image

[Figure: a test image, CC0 public domain. The image is fed through the CNN; the RNN’s first input x0 is the special <START> token]

Page 34

[Diagram: first RNN step on the test image: input x0 = <START>, hidden state h0, output y0]

before: h = tanh(Wxh * x + Whh * h)

now: h = tanh(Wxh * x + Whh * h + Wih * v), where v is the image feature vector from the CNN and Wih is a new weight matrix.
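The modified step in code: the CNN's image feature v enters the recurrence through the extra Wih term (the formula is the slide's; the function name and shapes are assumptions):

```python
import numpy as np

def captioning_rnn_step(h_prev, x, v, Wxh, Whh, Wih):
    # before: h = tanh(Wxh @ x + Whh @ h_prev)
    # now the image feature v from the CNN conditions every step:
    return np.tanh(Wxh @ x + Whh @ h_prev + Wih @ v)
```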

Pages 35-39

[Diagram: the caption is generated one word at a time, each sampled output fed back as the next input: h0 → y0 → sample “straw” → h1 → y1 → sample “hat” → h2 → y2 → …]

Sampling the <END> token => finish.
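The whole generation loop, sketched: start from <START>, sample a word each step, feed it back, and stop when <END> is sampled (all the callables here are hypothetical stand-ins for the trained model):

```python
def generate_caption(step, project, word_to_vec, sample_word, v, h0, max_len=20):
    """step(h, x, v) -> next hidden state; project(h) -> vocabulary scores;
    sample_word(scores) -> a word string. All assumed trained elsewhere."""
    h, word, caption = h0, "<START>", []
    for _ in range(max_len):                 # cap length in case <END> never comes
        h = step(h, word_to_vec(word), v)    # the image feature v enters each step
        word = sample_word(project(h))
        if word == "<END>":
            break                            # sampling <END> finishes the caption
        caption.append(word)
    return " ".join(caption)
```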

Page 40 (Slide 77)


Image Captioning: Example Results

A cat sitting on a suitcase on the floor
A cat is sitting on a tree branch
A dog is running in the grass with a frisbee
A white teddy bear sitting in the grass
Two people walking on the beach with surfboards
Two giraffes standing in a grassy field
A man riding a dirt bike on a dirt track
A tennis player in action on the court

Captions generated using neuraltalk2. All images are CC0 public domain: cat suitcase, cat tree, dog, bear, surfers, tennis, giraffe, motorcycle.

Page 41 (Slide 78)


Image Captioning: Failure Cases

A woman is holding a cat in her hand
A woman standing on a beach holding a surfboard
A person holding a computer mouse on a desk
A bird is perched on a tree branch
A man in a baseball uniform throwing a ball

Captions generated using neuraltalk2. All images are CC0 public domain: fur coat, handstand, spider web, baseball.

Page 42 (Slide 91)


Multilayer RNNs

[Diagram: RNN layers stacked in depth, each unrolled in time; the hidden states of one layer are the inputs to the layer above. The multilayer LSTM update formula was shown as a figure]

Page 43 (Slide 92)


Vanilla RNN Gradient Flow

[Diagram: ht-1 and xt are stacked, multiplied by W, and passed through tanh to produce ht]

Bengio et al, “Learning long-term dependencies with gradient descent is difficult”, IEEE Transactions on Neural Networks, 1994
Pascanu et al, “On the difficulty of training recurrent neural networks”, ICML 2013

Page 44 (Slide 93)


Vanilla RNN Gradient Flow

[Diagram: same cell as above, with the backward path highlighted]

Backpropagation from ht to ht-1 multiplies by W (actually Whh^T).

Bengio et al, “Learning long-term dependencies with gradient descent is difficult”, IEEE Transactions on Neural Networks, 1994
Pascanu et al, “On the difficulty of training recurrent neural networks”, ICML 2013

Page 45 (Slide 94)


Vanilla RNN Gradient Flow

[Diagram: the unrolled chain h0 → h1 → h2 → h3 → h4, with inputs x1…x4]

Computing the gradient of h0 involves many factors of W (and repeated tanh).

Bengio et al, “Learning long-term dependencies with gradient descent is difficult”, IEEE Transactions on Neural Networks, 1994
Pascanu et al, “On the difficulty of training recurrent neural networks”, ICML 2013

Page 46 (Slide 95)


Vanilla RNN Gradient Flow

[Diagram: the unrolled chain h0 → h1 → h2 → h3 → h4, with inputs x1…x4]

Computing the gradient of h0 involves many factors of W (and repeated tanh).

Largest singular value > 1: Exploding gradients
Largest singular value < 1: Vanishing gradients

Bengio et al, “Learning long-term dependencies with gradient descent is difficult”, IEEE Transactions on Neural Networks, 1994
Pascanu et al, “On the difficulty of training recurrent neural networks”, ICML 2013
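A quick numerical illustration of the two regimes (a toy diagonal Whh so the largest singular value is explicit; the repeated tanh derivatives, which are at most 1, would only make the vanishing case worse):

```python
import numpy as np

g = np.ones(4)                              # stand-in for an upstream gradient
for s in (0.9, 1.1):                        # largest singular value of Whh
    Whh = np.diag([s, 0.5, 0.3, 0.2])       # toy Whh with a known spectrum
    v = g.copy()
    for _ in range(50):                     # 50 factors of Whh^T in the chain
        v = Whh.T @ v
    print(s, np.linalg.norm(v))             # ~0.005 (vanishing) vs ~117 (exploding)
```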

Page 47 (Slide 96)


Vanilla RNN Gradient Flow

[Diagram: same unrolled chain as above]

Computing the gradient of h0 involves many factors of W (and repeated tanh).

Largest singular value > 1: Exploding gradients. Gradient clipping: scale the gradient if its norm is too big.
Largest singular value < 1: Vanishing gradients.

Bengio et al, “Learning long-term dependencies with gradient descent is difficult”, IEEE Transactions on Neural Networks, 1994
Pascanu et al, “On the difficulty of training recurrent neural networks”, ICML 2013
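The clipping heuristic in code (a sketch; the threshold of 5 is an arbitrary choice, not from the slides):

```python
import numpy as np

def clip_gradient(grad, max_norm=5.0):
    """Scale the gradient down if its norm is too big; otherwise leave it alone."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad
```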

Page 48 (Slide 97)


Vanilla RNN Gradient Flow

[Diagram: same unrolled chain as above]

Computing the gradient of h0 involves many factors of W (and repeated tanh).

Largest singular value > 1: Exploding gradients.
Largest singular value < 1: Vanishing gradients → change the RNN architecture.

Bengio et al, “Learning long-term dependencies with gradient descent is difficult”, IEEE Transactions on Neural Networks, 1994
Pascanu et al, “On the difficulty of training recurrent neural networks”, ICML 2013

Page 49 (Slide 98)


Long Short Term Memory (LSTM)

Hochreiter and Schmidhuber, “Long Short Term Memory”, Neural Computation 1997

[Figure: the vanilla RNN update shown side by side with the LSTM update:

Vanilla RNN: h_t = tanh(W [h_{t-1}; x_t])

LSTM: i = σ(·), f = σ(·), o = σ(·), g = tanh(·), all computed from W [h_{t-1}; x_t];
c_t = f ⊙ c_{t-1} + i ⊙ g
h_t = o ⊙ tanh(c_t)]

Page 50 (Slide 99)


Long Short Term Memory (LSTM) [Hochreiter et al., 1997]

[Diagram: the vector from below (x) and the vector from before (h) are stacked and multiplied by W (shape 4h × 2h) to produce a 4h vector, which is split into the four gate pre-activations; i, f, o pass through a sigmoid and g through a tanh]

i: Input gate, whether to write to cell
f: Forget gate, whether to erase cell
o: Output gate, how much to reveal cell
g: Gate gate (?), how much to write to cell
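The full LSTM step as a sketch, following the single-matrix picture above: one W of shape 4h x 2h acts on the stacked [h; x] (so x is assumed to have the same dimension h as the hidden state; biases are omitted as on the slide):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(c_prev, h_prev, x, W):
    """W has shape (4h, 2h); its rows split into the i, f, o, g blocks."""
    H = h_prev.shape[0]
    a = W @ np.concatenate([h_prev, x])   # one matrix multiply -> 4h pre-activations
    i = sigmoid(a[0 * H:1 * H])           # input gate: whether to write to cell
    f = sigmoid(a[1 * H:2 * H])           # forget gate: whether to erase cell
    o = sigmoid(a[2 * H:3 * H])           # output gate: how much to reveal cell
    g = np.tanh(a[3 * H:4 * H])           # gate gate: how much to write to cell
    c = f * c_prev + i * g                # additive cell-state update
    h = o * np.tanh(c)
    return c, h
```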

Page 51 (Slide 100)


Long Short Term Memory (LSTM) [Hochreiter et al., 1997]

[Diagram: ht-1 and xt are stacked and multiplied by W to form the gates f, i, g, o; the cell state updates as ct = f ⊙ ct-1 + i ⊙ g, and the hidden state as ht = o ⊙ tanh(ct)]

Page 52 (Slide 101)


Long Short Term Memory (LSTM): Gradient Flow [Hochreiter et al., 1997]

[Diagram: same cell, with the backward path from ct to ct-1 highlighted]

Backpropagation from ct to ct-1 is only an elementwise multiplication by f, with no matrix multiply by W.

Page 53 (Slide 102)


Long Short Term Memory (LSTM): Gradient Flow [Hochreiter et al., 1997]

[Diagram: the cell states c0 → c1 → c2 → c3 form a direct path through time]

Uninterrupted gradient flow!

Page 54 (Slide 103)


Long Short Term Memory (LSTM): Gradient Flow [Hochreiter et al., 1997]

[Diagram: the c0 → c1 → c2 → c3 path, shown next to a ResNet architecture (input → 7x7 conv, 64 / 2 → pool → stacks of 3x3 conv layers → pool → FC 1000 → softmax); both provide an uninterrupted gradient path]

Uninterrupted gradient flow!

Similar to ResNet!

Page 55 (Slide 104)


Long Short Term Memory (LSTM): Gradient Flow [Hochreiter et al., 1997]

[Diagram: same LSTM/ResNet comparison as the previous slide]

Uninterrupted gradient flow! Similar to ResNet!

In between: Highway Networks

Srivastava et al, “Highway Networks”, ICML DL Workshop 2015

Page 56 (Slide 105)


Other RNN Variants

[LSTM: A Search Space Odyssey, Greff et al., 2015]

[An Empirical Exploration of Recurrent Network Architectures, Jozefowicz et al., 2015]

GRU [Learning phrase representations using RNN encoder-decoder for statistical machine translation, Cho et al., 2014]
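For reference, a sketch of the GRU step from Cho et al. 2014 (biases omitted; conventions for the update-gate interpolation vary across papers, and the shapes here are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(h_prev, x, Wz, Wr, Wh):
    """Each W* has shape (h, h + d) and acts on a stacked [h; x]."""
    hx = np.concatenate([h_prev, x])
    z = sigmoid(Wz @ hx)                                     # update gate
    r = sigmoid(Wr @ hx)                                     # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r * h_prev, x]))  # candidate state
    return z * h_prev + (1.0 - z) * h_tilde                  # additive interpolation
```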

Page 57 (Slide 106)


Summary

- RNNs allow a lot of flexibility in architecture design
- Vanilla RNNs are simple but don’t work very well
- Common to use LSTM or GRU: their additive interactions improve gradient flow
- Backward flow of gradients in an RNN can explode or vanish. Exploding is controlled with gradient clipping; vanishing is controlled with additive interactions (LSTM)
- Better/simpler architectures are a hot topic of current research
- Better understanding (both theoretical and empirical) is needed