
Page 1

Multilayer Neural Networks

(sometimes called “Multilayer Perceptrons” or MLPs)

Page 2

Linear separability

(Figure: data points from two classes plotted against Feature 1 and Feature 2, separated by a line.)

Hyperplane in 2D:

    w1 x1 + w2 x2 + w0 = 0

    x2 = −(w1 / w2) x1 − (w0 / w2)

A perceptron can separate data that is linearly separable.
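A tiny illustration (mine, not from the slides): a perceptron separates the two classes by checking which side of the hyperplane a point falls on; the weight values below are arbitrary.

    # Classify a point by which side of the hyperplane w1*x1 + w2*x2 + w0 = 0 it falls on.
    w1, w2, w0 = 1.0, -1.0, 0.0       # made-up weights: the line x2 = x1

    def perceptron_classify(x1, x2):
        return 1 if (w1 * x1 + w2 * x2 + w0) > 0 else -1

    print(perceptron_classify(2.0, 1.0))   # +1, one side of the line
    print(perceptron_classify(1.0, 3.0))   # -1, the other side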

Page 3

A bit of history

•  1960s: Rosenblatt proved that the perceptron learning rule converges to correct weights in a finite number of steps, provided the training examples are linearly separable.

•  1969: Minsky and Papert proved that perceptrons cannot represent non-linearly separable target functions.

•  However, they showed that adding a fully connected hidden layer makes the network more powerful.

–  I.e., multi-layer neural networks can represent non-linear decision surfaces.

•  Later it was shown that by using continuous activation functions (rather than thresholds), a fully connected network with a single hidden layer can in principle represent any function.

Page 4

Multi-layer neural network example

Decision regions of a multilayer feedforward network. (From T. M. Mitchell, Machine Learning.) The network was trained to recognize 1 of 10 vowel sounds occurring in the context "h_d" (e.g., "had", "hid"). The network input consists of two parameters, F1 and F2, obtained from a spectral analysis of the sound. The 10 network outputs correspond to the 10 possible vowel sounds.

Page 5

•  Good news: Adding a hidden layer allows more target functions to be represented.

•  Bad news: At the time, there was no algorithm for learning in multi-layer networks, and no convergence theorem!

•  Quote from Minsky and Papert’s book, Perceptrons (1969):

“[The perceptron] has many features to attract attention: its linearity; its intriguing learning theorem; its clear paradigmatic simplicity as a kind of parallel computation. There is no reason to suppose that any of these virtues carry over to the many-layered version. Nevertheless, we consider it to be an important research problem to elucidate (or reject) our intuitive judgment that the extension is sterile.”

Page 6

•  Two major problems they saw were:

1.  How can the learning algorithm apportion credit (or blame) for incorrect classifications to individual weights, when the classification depends on a (sometimes) large number of weights?

2.  How can such a network learn useful higher-order features?

•  Good news: Successful credit-apportionment learning algorithms were developed soon afterwards (e.g., back-propagation).

•  Bad news: However, in multi-layer networks, there is no guarantee of convergence to minimal error weight vector.

But in practice, multi-layer networks often work very well.

Page 7

Summary

•  Perceptrons can be 100% accurate only on linearly separable problems.

•  Multi-layer networks (often called multi-layer perceptrons, or MLPs) can represent any target function.

•  However, in multi-layer networks, there is no guarantee of convergence to minimal error weight vector.

Page 8

A "two"-layer neural network

(Figure: input layer, hidden layer, and output layer. The input activations represent the feature vector for one training example, the hidden-layer activations form an internal representation, and the output activations represent the classification.)

Page 9

Example: ALVINN (Pomerleau, 1993)

•  ALVINN learns to drive an autonomous vehicle at normal speeds on public highways.

•  Input: 30 x 32 grid of pixel intensities from camera

Page 10

Each output unit corresponds to a particular steering direction. The most highly activated one gives the direction to steer.

(Note: bias units and weights not shown)

Page 11

Activation functions

•  Advantages of sigmoid function: nonlinear, differentiable, has real-valued outputs, and approximates a threshold function.

Page 12

Sigmoid activation function:

o = σ(w · x), where σ(z) = 1 / (1 + e^(−z))

    w · x     σ(w · x)
    −2         .12
    −1.5       .18
    −1         .27
    −.5        .38
     0         .50
     .5        .62
     1         .73
     1.5       .82
     2         .88
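These values are easy to reproduce in a few lines of Python (my sketch, not part of the slides):

    import math

    def sigmoid(z):
        # Logistic sigmoid: sigma(z) = 1 / (1 + exp(-z))
        return 1.0 / (1.0 + math.exp(-z))

    # Reproduces the table above: sigma(-2) = .12, sigma(0) = .50, sigma(2) = .88, etc.
    for z in [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2]:
        print(z, round(sigmoid(z), 2))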

Page 13

•  The derivative of the sigmoid activation function is easily expressed in terms of the function itself:

    dσ(z)/dz = σ(z) · (1 − σ(z))

This is useful in deriving the back-propagation algorithm.

Page 14

σ(z) = 1 / (1 + e^(−z)) = (1 + e^(−z))^(−1)

dσ/dz = −1 · (1 + e^(−z))^(−2) · d/dz (1 + e^(−z))

      = −1 / (1 + e^(−z))^2 · (−e^(−z))

      = e^(−z) / (1 + e^(−z))^2

σ(z) · (1 − σ(z)) = ( 1 / (1 + e^(−z)) ) · ( 1 − 1 / (1 + e^(−z)) )

                  = 1 / (1 + e^(−z)) − 1 / (1 + e^(−z))^2

                  = (1 + e^(−z)) / (1 + e^(−z))^2 − 1 / (1 + e^(−z))^2

                  = e^(−z) / (1 + e^(−z))^2

QED.
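A numerical sanity check of the identity (my own addition): compare a finite-difference estimate of dσ/dz with σ(z)(1 − σ(z)).

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    eps = 1e-6
    for z in [-2.0, -0.5, 0.0, 1.0, 3.0]:
        finite_diff = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
        identity = sigmoid(z) * (1 - sigmoid(z))
        print(z, round(finite_diff, 8), round(identity, 8))   # the two columns agree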

Page 15

Neural network notation

(Figure: the two-layer network from the earlier slide; output activations represent the classification, hidden activations form an internal representation, and input activations represent the feature vector for one training example.)

Page 16

(Same slide as the previous page, with the sigmoid function added to the figure.)

Page 17

Neural network notation

(Figure: the same two-layer network.)

xi : activation of input node i.
hj : activation of hidden node j.
ok : activation of output node k.
wji : weight from node i to node j.
σ : sigmoid function.

For each node j in the hidden layer:

    hj = σ( Σ_{i ∈ input layer} wji xi + wj0 )

For each node k in the output layer:

    ok = σ( Σ_{j ∈ hidden layer} wkj hj + wk0 )

Page 18

Classification with a two-layer neural network (“Forward propagation”)

Assume two-layer networks (i.e., one hidden layer):

1.  Present input to the input layer.

2.  Forward propagate the activations times the weights to each node in the hidden layer.

3.  Apply activation function (sigmoid) to sum of weights times inputs to each hidden unit.

4.  Forward propagate the activations times weights from the hidden layer to the output layer.

5.  Apply activation function (sigmoid) to sum of weights times inputs to each output unit.

6.  Interpret the output layer as a classification.
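A sketch of these six steps in plain Python (my own illustration; the layer sizes and weight values are invented, and index 0 of each weight row is used as the bias weight):

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(x, w_hidden, w_output):
        # Steps 2-3: weighted sum into each hidden unit, then the sigmoid.
        h = [sigmoid(row[0] + sum(w * xi for w, xi in zip(row[1:], x)))
             for row in w_hidden]
        # Steps 4-5: weighted sum of hidden activations into each output unit, then the sigmoid.
        o = [sigmoid(row[0] + sum(w * hj for w, hj in zip(row[1:], h)))
             for row in w_output]
        # Step 6: interpret the most active output unit as the predicted class.
        return h, o, o.index(max(o))

    # Hypothetical weights: 2 inputs, 2 hidden units, 2 output units (rows are [bias, ...]).
    w_hidden = [[0.1, 0.1, -0.2], [0.2, 0.3, 0.1]]
    w_output = [[-0.1, 0.1, -0.5], [0.1, -0.2, 0.2]]
    print(forward([0.4, 0.1], w_hidden, w_output))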

Page 19

Forward Propagation Example

(Figure: a two-layer network with inputs x1 = 0.4 and x2 = 0.1, bias inputs fixed at 1, hidden units h1 and h2, output units o1 and o2, and example weight values on each connection.)

Page 20

What kinds of problems are suitable for neural networks?

•  Have sufficient training data

•  Long training times are acceptable

•  Not necessary for humans to understand learned target function or hypothesis

Page 21

Advantages of neural networks

•  Designed to be parallelized

•  Robust to noise in the training data

•  Fast to evaluate new examples

Page 22

Training a multi-layer neural network

Repeat for a given number of epochs or until accuracy on training data is acceptable:

For each training example:

1.  Present input to the input layer.

2.  Forward propagate the activations times the weights to each node in the hidden layer.

3.  Forward propagate the activations times weights from the hidden layer to the output layer.

4.  At each output unit, determine the error.

5.  Run the back-propagation algorithm one layer at a time to update all weights in the network.

Page 23

Training a multilayer neural network with back-propagation (stochastic gradient descent)

•  Suppose training example has form (x, t) (i.e., both input and target are vectors).

•  Error (or “loss”) E is sum-squared error over all output units:

    E(w) = (1/2) Σ_{k ∈ output layer} (tk − ok)^2

•  Goal of learning is to minimize the mean sum-squared error over the training set.
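As a one-function sketch (the function name and the example vectors are mine):

    def sum_squared_error(targets, outputs):
        # E = 1/2 * sum over output units k of (t_k - o_k)^2
        return 0.5 * sum((t - o) ** 2 for t, o in zip(targets, outputs))

    print(sum_squared_error([0.9, 0.1], [0.6, 0.3]))  # made-up target and output vectors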

Page 24

I’m not going to derive the back-propagation equations here, but you can find derivations in the optional reading (or in many other places online). Here, I’ll just give the actual algorithm.

Page 25

Backpropagation algorithm (Stochastic Gradient Descent)

•  Initialize the network weights w to small random numbers (e.g., between −0.05 and 0.05).

•  Until the termination condition is met, Do:

–  For each (x, t) ∈ training set, Do:

1.  Propagate the input forward:

–  Input x to the network and compute the activation hj of each hidden unit j.

–  Compute the activation ok of each output unit k.

Page 26

2.  Calculate error terms:

For each output unit k, calculate error term δk:

    δk ← ok (1 − ok)(tk − ok)

For each hidden unit j, calculate error term δj:

    δj ← hj (1 − hj) ( Σ_{k ∈ output units} wkj δk )

Page 27

3.  Update weights:

Hidden-to-output layer: for each weight wkj,

    wkj ← wkj + Δwkj,  where  Δwkj = η δk hj

Input-to-hidden layer: for each weight wji,

    wji ← wji + Δwji,  where  Δwji = η δj xi
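Putting the three steps together, here is a compact sketch of the algorithm for a single hidden layer. The symbols (η as eta, δk, δj) follow the slides, but the code itself, the helper names, and the toy data at the end are my own, and no momentum term is included:

    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def train_sgd(examples, n_in, n_hidden, n_out, eta=0.2, epochs=1000):
        # Initialize weights to small random numbers; index 0 of each row is the bias weight.
        w_ji = [[random.uniform(-0.05, 0.05) for _ in range(n_in + 1)] for _ in range(n_hidden)]
        w_kj = [[random.uniform(-0.05, 0.05) for _ in range(n_hidden + 1)] for _ in range(n_out)]
        for _ in range(epochs):
            for x, t in examples:
                # 1. Propagate the input forward.
                h = [sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))) for w in w_ji]
                o = [sigmoid(w[0] + sum(wj * hj for wj, hj in zip(w[1:], h))) for w in w_kj]
                # 2. Calculate error terms.
                delta_k = [ok * (1 - ok) * (tk - ok) for ok, tk in zip(o, t)]
                delta_j = [hj * (1 - hj) * sum(w_kj[k][j + 1] * delta_k[k] for k in range(n_out))
                           for j, hj in enumerate(h)]
                # 3. Update weights: hidden-to-output, then input-to-hidden (bias input is 1).
                for k in range(n_out):
                    w_kj[k][0] += eta * delta_k[k]
                    for j in range(n_hidden):
                        w_kj[k][j + 1] += eta * delta_k[k] * h[j]
                for j in range(n_hidden):
                    w_ji[j][0] += eta * delta_j[j]
                    for i in range(n_in):
                        w_ji[j][i + 1] += eta * delta_j[j] * x[i]
        return w_ji, w_kj

    # Toy usage: XOR-style data, one output unit per class, targets .9 / .1.
    data = [([0, 0], [0.1, 0.9]), ([0, 1], [0.9, 0.1]),
            ([1, 0], [0.9, 0.1]), ([1, 1], [0.1, 0.9])]
    w_ji, w_kj = train_sgd(data, n_in=2, n_hidden=4, n_out=2, eta=0.5, epochs=5000)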

Page 28

Step-by-step back-propagation example (and other resources)

http://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/

Page 29

Batch Gradient Descent: change weights only after averaging gradients from all M training examples.

Weights from hidden units to output units:

    Δwkj = η (1/M) Σ_{m=1}^{M} δk^m hj^m

Weights from input units to hidden units:

    Δwji = η (1/M) Σ_{m=1}^{M} δj^m xi^m

Page 30

Mini-Batch Gradient Descent: change weights only after averaging gradients from a subset of B training examples. At each iteration t, get the next subset Bt of B training examples, until all examples have been processed.

Weights from hidden units to output units:

    Δwkj = η (1/B) Σ_{m ∈ Bt} δk^m hj^m

Weights from input units to hidden units:

    Δwji = η (1/B) Σ_{m ∈ Bt} δj^m xi^m
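A sketch of the hidden-to-output part of this update for one mini-batch (the function and argument names are mine; batch_deltas[m][k] and batch_hidden[m][j] are the δk and hj values saved from the forward/backward pass of example m):

    def minibatch_update_output_weights(w_kj, batch_deltas, batch_hidden, eta):
        B = len(batch_deltas)                     # number of examples in the mini-batch
        n_out, n_hidden = len(w_kj), len(w_kj[0]) - 1
        for k in range(n_out):
            # Average the gradient contributions over the B examples before moving the weight.
            w_kj[k][0] += eta * sum(d[k] for d in batch_deltas) / B          # bias (h0 = 1)
            for j in range(n_hidden):
                w_kj[k][j + 1] += eta * sum(batch_deltas[m][k] * batch_hidden[m][j]
                                            for m in range(B)) / B
        return w_kj

Batch gradient descent is then just the special case where the "mini-batch" is the whole training set (B = M).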

Page 31

Momentum

To avoid oscillations, introduce momentum (α), in which the change in a weight also depends on the previous weight change:

Hidden-to-output:

    Δwkj^t = η δk hj + α Δwkj^(t−1)

Input-to-hidden:

    Δwji^t = η δj xi + α Δwji^(t−1)

where t is the iteration through the main loop of back-propagation and α is a parameter between 0 and 1. The idea is to keep weight changes moving in the same direction.
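In code, momentum just means remembering the previous weight change; a minimal sketch for a single weight (names are mine):

    def momentum_step(eta, delta, activation, alpha, prev_dw):
        # dw_t = eta * delta * activation + alpha * dw_(t-1)
        return eta * delta * activation + alpha * prev_dw

    dw1 = momentum_step(0.2, 0.086, 0.55, 0.9, prev_dw=0.0)   # momentum term is 0 on the first update
    dw2 = momentum_step(0.2, 0.086, 0.55, 0.9, prev_dw=dw1)   # later updates reuse the stored change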

Page 32

Backprop Example

Page 33

Training set:

    (x1, x2) = (1, 0), label: positive
    (x1, x2) = (0, 1), label: negative

(Figure: network with inputs x1 and x2, bias inputs fixed at 1, hidden units h1 and h2, and output unit o1; every weight is initialized to .1.)

Page 34

Present the first training example, (x1, x2) = (1, 0), label: positive. Target output: .9.

(Figure: same network, all weights .1, with x1 = 1 and x2 = 0 at the inputs.)

Page 35

h1 = σ( (1)(.1) + (1)(.1) + (0)(.1) ) = σ(.2) = 1 / (1 + e^(−.2)) = .55

h2 = σ( (1)(.1) + (1)(.1) + (0)(.1) ) = σ(.2) = 1 / (1 + e^(−.2)) = .55

(Target: .9)

Page 36

(Same computation as the previous page; the figure now shows the hidden activations h1 = h2 = .55.)

Page 37

(Figure: network state with h1 = h2 = .55; target .9.)

Page 38

o1 = σ( (1)(.1) + (.55)(.1) + (.55)(.1) ) = σ(.21) = 1 / (1 + e^(−.21)) = .552

(Target: .9)

Page 39

(Same computation as the previous page; the figure now shows the output o1 = .552, target .9.)

Page 40

Here we interpret o1 > .5 as "positive", so the classification is correct. But we still update the weights.

(Figure: h1 = h2 = .55, o1 = .552, target .9.)

Page 41

Calculate error terms:

    δk=1 = (.552)(.448)(.9 − .552) = .086

    δj=1 = (.55)(.45)(.1)(.086) = .002
    δj=2 = (.55)(.45)(.1)(.086) = .002

Page 42

Update hidden-to-output weights (learning rate η = .2; momentum α = .9), using the error terms just computed.

Page 43

    Δwk=1,j=1 = (.2)(.086)(.55) + (.9)(0) = .0095
    Δwk=1,j=2 = (.2)(.086)(.55) + (.9)(0) = .0095
    Δwk=1,j=0 = (.2)(.086)(1) + (.9)(0) = .0172

(The momentum term is zero because this is the first weight update.)

Page 44

    wk=1,j=1 = .1 + .0095 = .1095
    wk=1,j=2 = .1 + .0095 = .1095
    wk=1,j=0 = .1 + .0172 = .1172

Page 45

(Figure: the network with the hidden-to-output weights updated to .1095, .1095, and .1172; all other weights are still .1.)

Page 46

Update input-to-hidden weights (learning rate .2; momentum .9):

    Δwj=1,i=1 = (.2)(.002)(1) + (.9)(0) = .0004      wj=1,i=1 = .1 + .0004 = .1004
    Δwj=1,i=0 = (.2)(.002)(1) + (.9)(0) = .0004      wj=1,i=0 = .1 + .0004 = .1004
    Δwj=1,i=2 = (.2)(.002)(0) + (.9)(0) = 0          wj=1,i=2 = .1

(The weights into hidden unit 2 update the same way, since δj=2 = δj=1; e.g., wj=2,i=2 = .1.)
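The arithmetic on the last several slides can be re-checked with a short script (my own re-computation; the expected values in the comments are the ones shown above):

    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    x1, x2, target = 1.0, 0.0, 0.9
    w = 0.1                        # every weight starts at .1
    eta, alpha = 0.2, 0.9          # learning rate and momentum

    # Forward pass
    h1 = sigmoid(w * 1 + w * x1 + w * x2)      # sigma(.2)  = .55
    h2 = sigmoid(w * 1 + w * x1 + w * x2)      # sigma(.2)  = .55
    o1 = sigmoid(w * 1 + w * h1 + w * h2)      # sigma(.21) = .552

    # Error terms
    delta_k1 = o1 * (1 - o1) * (target - o1)   # .086
    delta_j1 = h1 * (1 - h1) * w * delta_k1    # .002
    delta_j2 = h2 * (1 - h2) * w * delta_k1    # .002

    # Hidden-to-output updates (previous weight change is 0, so the momentum term vanishes)
    dw_k1_j1 = eta * delta_k1 * h1 + alpha * 0.0   # .0095
    dw_k1_j0 = eta * delta_k1 * 1 + alpha * 0.0    # .0172
    print(w + dw_k1_j1, w + dw_k1_j0)              # .1095, .1172

    # Input-to-hidden updates
    dw_j1_i1 = eta * delta_j1 * x1 + alpha * 0.0   # .0004
    dw_j1_i2 = eta * delta_j1 * x2 + alpha * 0.0   # 0
    print(w + dw_j1_i1, w + dw_j1_i2)              # .1004, .1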

Page 47

Hidden Units

•  Too few – can't represent the target function

•  Too many – leads to overfitting

Use “cross-validation” to decide number of hidden units. (We’ll go over cross-validation later on.)

Page 48

Underfitting, Overfitting and Regularization

Adapted from https://en.wikipedia.org/wiki/Overfitting

Page 49

Weight decay

•  Modify the error function to add a penalty for the magnitude of the weight vector, to decrease overfitting.

•  This modifies the weight update formula (with momentum) to:

    Δwji^t = η δj xi + α Δwji^(t−1) − λ wji^t

where λ is a parameter between 0 and 1. This kind of penalty is called "regularization".
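As a sketch of that update for a single weight (function and argument names are mine; lambda_ stands for λ):

    def weight_decay_step(w, eta, delta, activation, alpha, prev_dw, lambda_):
        # dw_t = eta * delta * x + alpha * dw_(t-1) - lambda * w
        dw = eta * delta * activation + alpha * prev_dw - lambda_ * w
        return w + dw, dw   # return the new weight and the change to remember for momentum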

Page 50

Dropout

•  For each example x (or minibatch B):

–  Randomly (and temporarily) delete half the hidden units in the network (!!)

–  Forward propagate through modified network

–  Backpropagate errors through modified network

–  Restore deleted hidden units

Repeat with new random set of units deleted
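A minimal sketch of the hidden-unit mask this describes (my own illustration; the 0.5 drop probability is the "half the hidden units" from the slide):

    import random

    def dropout_mask(n_hidden, p_drop=0.5):
        # 1 = unit kept, 0 = unit temporarily deleted for this example / mini-batch
        return [0 if random.random() < p_drop else 1 for _ in range(n_hidden)]

    # During the forward pass, multiply each hidden activation by its mask entry;
    # back-propagate only through the kept units, then restore all units and redraw the mask.
    mask = dropout_mask(10)
    h = [0.55] * 10                                # stand-in hidden activations
    h_dropped = [hj * m for hj, m in zip(h, mask)]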

Page 51

Patent on Dropout algorithm

https://patentimages.storage.googleapis.com/0f/5b/ea/f9866af722be1c/WO2014105866A1.pdf

Page 52

Artificially expanding (augmenting) the training data

•  Rotations
•  Scale transformations
•  Add noise, other distortions

Page 53

http://playground.tensorflow.org/

Page 54

Many other topics I’m not covering

E.g.,

•  Other methods for training the weights

•  Recurrent networks

•  Dynamically modifying network structure

Page 55

Neural Network Exercises

Page 56

Homework 2