
Page 1: Hebbian Learning

Next Assignment

Train a counter-propagation network to compute the 7-segment coder and its inverse.

You may use the code in /cs/cs152/book:

counter.c readme

Page 2: Hebbian Learning

ART1 Demo

Increasing the vigilance makes the network more selective: it introduces a new prototype whenever the fit to the existing prototypes is not good enough.

Try different patterns

Page 3: Hebbian Learning

Hebbian Learning

Page 4: Hebbian Learning

Hebb’s Postulate

“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”

D. O. Hebb, 1949

(Figure: neuron A connected to neuron B.)

In other words, when a weight contributes to firing a neuron, the weight is increased. (If the neuron doesn’t fire, the weight is not increased.)

Page 5: Hebbian Learning

(Figure: neurons A and B.)

Page 6: Hebbian Learning

(Figure: neurons A and B.)

Page 7: Hebbian Learning

Colloquial Corollaries

“Use it or lose it”.

Page 8: Hebbian Learning

Colloquial Corollaries?

“Absence makes the heart grow fonder”.

Page 9: Hebbian Learning

Generalized Hebb Rule

When a weight contributes to firing a neuron, the weight is increased.

When a weight acts to inhibit the firing of a neuron, the weight is decreased.

Page 10: Hebbian Learning

Flavors of Hebbian Learning

Unsupervised: weights are strengthened by the actual response to a stimulus.

Supervised: weights are strengthened by the desired response.

Page 11: Hebbian Learning

Unsupervised Hebbian Learning

(aka Associative Learning)

Page 12: Hebbian Learning

Simple Associative Network

$a = \mathrm{hardlim}(wp + b) = \mathrm{hardlim}(wp - 0.5)$

$p = \begin{cases} 1 & \text{stimulus} \\ 0 & \text{no stimulus} \end{cases} \qquad a = \begin{cases} 1 & \text{response} \\ 0 & \text{no response} \end{cases}$

Page 13: Hebbian Learning

Banana Associator

$p^0 = \begin{cases} 1 & \text{shape detected} \\ 0 & \text{shape not detected} \end{cases}$ (unconditioned stimulus)

$p = \begin{cases} 1 & \text{smell detected} \\ 0 & \text{smell not detected} \end{cases}$ (conditioned stimulus)

Didn’t Pavlov anticipate this?

Page 14: Hebbian Learning

Banana Associator Demo

(Demo: the stimuli can be toggled.)

Page 15: Hebbian Learning

Unsupervised Hebb Rule

$w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\, p_j(q)$

Vector form:

$\mathbf{W}(q) = \mathbf{W}(q-1) + \alpha\, \mathbf{a}(q)\, \mathbf{p}^T(q)$

where $\mathbf{a}(q)$ is the actual response to the input $\mathbf{p}(q)$.

Training sequence: $\mathbf{p}(1),\ \mathbf{p}(2),\ \ldots,\ \mathbf{p}(Q)$

Page 16: Hebbian Learning

Learning Banana Smell

Initial weights: $w^0 = 1$, $w(0) = 0$ (unconditioned weight for shape, conditioned weight for smell).

Training sequence: $\{p^0(1) = 0,\ p(1) = 1\},\ \{p^0(2) = 1,\ p(2) = 1\},\ \ldots$

Hebb rule (with $\alpha = 1$): $w(q) = w(q-1) + a(q)\, p(q)$

First iteration (sight fails, smell present):

$a(1) = \mathrm{hardlim}\bigl(w^0 p^0(1) + w(0)\, p(1) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 0 + 0 \cdot 1 - 0.5) = 0$ (no banana)

$w(1) = w(0) + a(1)\, p(1) = 0 + 0 \cdot 1 = 0$

Page 17: Hebbian Learning

Example

Second iteration (sight works, smell present):

$a(2) = \mathrm{hardlim}\bigl(w^0 p^0(2) + w(1)\, p(2) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 1 + 0 \cdot 1 - 0.5) = 1$ (banana)

$w(2) = w(1) + a(2)\, p(2) = 0 + 1 \cdot 1 = 1$

Third iteration (sight fails, smell present):

$a(3) = \mathrm{hardlim}\bigl(w^0 p^0(3) + w(2)\, p(3) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 0 + 1 \cdot 1 - 0.5) = 1$ (banana)

$w(3) = w(2) + a(3)\, p(3) = 1 + 1 \cdot 1 = 2$

Banana will now be detected if either sensor works.
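A minimal Python sketch of these three iterations (assuming $\alpha = 1$ and the bias of $-0.5$ from the slides; variable names are illustrative):

```python
def hardlim(n):
    # Hard-limit transfer function: 1 if n >= 0, else 0.
    return 1 if n >= 0 else 0

w0, w, b = 1.0, 0.0, -0.5            # fixed shape weight, learned smell weight, bias
sequence = [(0, 1), (1, 1), (0, 1)]  # (p0 shape, p smell): sight fails, works, fails

for q, (p0, p) in enumerate(sequence, start=1):
    a = hardlim(w0 * p0 + w * p + b)   # network response
    w = w + a * p                      # unsupervised Hebb rule with alpha = 1
    print(f"q={q}: a={a}, w={w}")
# Prints a=0 w=0.0, a=1 w=1.0, a=1 w=2.0, matching the iterations above.
```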

Page 18: Hebbian Learning

Problems with Hebb Rule

Weights can become arbitrarily large.

There is no mechanism for weights to decrease.

Page 19: Hebbian Learning

Hebb Rule with Decay

$\mathbf{W}(q) = \mathbf{W}(q-1) + \alpha\, \mathbf{a}(q)\, \mathbf{p}^T(q) - \gamma\, \mathbf{W}(q-1)$

$\mathbf{W}(q) = (1 - \gamma)\, \mathbf{W}(q-1) + \alpha\, \mathbf{a}(q)\, \mathbf{p}^T(q)$

This keeps the weight matrix from growing without bound, which can be demonstrated by setting both $a_i$ and $p_j$ to 1:

$w_{ij}^{max} = (1 - \gamma)\, w_{ij}^{max} + \alpha\, a_i\, p_j$

$w_{ij}^{max} = (1 - \gamma)\, w_{ij}^{max} + \alpha$

$w_{ij}^{max} = \dfrac{\alpha}{\gamma}$
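A quick numerical check of this bound (a sketch assuming the stimulus and response are both 1 at every step, with $\alpha = 1$ and $\gamma = 0.1$):

```python
alpha, gamma = 1.0, 0.1
w = 0.0
for q in range(200):
    w = (1 - gamma) * w + alpha * 1 * 1   # Hebb with decay, a_i = p_j = 1
print(w, alpha / gamma)                   # w has converged to w_max = alpha/gamma = 10
```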

Page 20: Hebbian Learning

Banana Associator with Decay

Page 21: Hebbian Learning

Example: Banana Associator

First iteration (sight fails, smell present):

$a(1) = \mathrm{hardlim}\bigl(w^0 p^0(1) + w(0)\, p(1) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 0 + 0 \cdot 1 - 0.5) = 0$ (no banana)

$w(1) = w(0) + a(1)\, p(1) - 0.1\, w(0) = 0 + 0 \cdot 1 - 0.1 \cdot 0 = 0$

Second iteration (sight works, smell present):

$a(2) = \mathrm{hardlim}\bigl(w^0 p^0(2) + w(1)\, p(2) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 1 + 0 \cdot 1 - 0.5) = 1$ (banana)

$w(2) = w(1) + a(2)\, p(2) - 0.1\, w(1) = 0 + 1 \cdot 1 - 0.1 \cdot 0 = 1$

(with $\alpha = 1$, $\gamma = 0.1$)

Page 22: Hebbian Learning

Example

Third iteration (sight fails, smell present):

$a(3) = \mathrm{hardlim}\bigl(w^0 p^0(3) + w(2)\, p(3) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 0 + 1 \cdot 1 - 0.5) = 1$ (banana)

$w(3) = w(2) + a(3)\, p(3) - 0.1\, w(2) = 1 + 1 \cdot 1 - 0.1 \cdot 1 = 1.9$

Page 23: Hebbian Learning

General Decay Demo

(Plot: weight growth with no decay versus a larger decay rate; with decay the weights saturate at $w_{ij}^{max} = \alpha / \gamma$.)

Page 24: Hebbian Learning

Problem of Hebb with Decay

Associations will be lost if stimuli are not occasionally presented.

If $a_i = 0$, then

$w_{ij}(q) = (1 - \gamma)\, w_{ij}(q-1)$

If $\gamma = 0.1$, this becomes

$w_{ij}(q) = 0.9\, w_{ij}(q-1)$

Therefore the weight decays by 10% at each iteration where there is no stimulus.

(Plot: the weight decaying toward zero over about 30 iterations with no stimulus.)

Page 25: Hebbian Learning

Solution to Hebb Decay Problem

Don’t decay weights when there is no stimulus

We have seen rules like this (Instar)

Page 26: Hebbian Learning

Instar (Recognition Network)

Page 27: Hebbian Learning

Instar Operation

$a = \mathrm{hardlim}(\mathbf{W}\mathbf{p} + b) = \mathrm{hardlim}({}_1\mathbf{w}^T \mathbf{p} + b)$

The instar will be active when

${}_1\mathbf{w}^T \mathbf{p} \ge -b$

or

${}_1\mathbf{w}^T \mathbf{p} = \|{}_1\mathbf{w}\|\, \|\mathbf{p}\|\, \cos\theta \ge -b$

For normalized vectors, the largest inner product occurs when the angle between the weight vector and the input vector is zero -- the input vector is equal to the weight vector.

The rows of a weight matrix represent patterns to be recognized.

Page 28: Hebbian Learning

Vector Recognition

If we set

$b = -\|{}_1\mathbf{w}\|\, \|\mathbf{p}\|$

the instar will only be active when $\theta = 0$.

If we set

$b > -\|{}_1\mathbf{w}\|\, \|\mathbf{p}\|$

the instar will be active for a range of angles.

As $b$ is increased, more patterns (over a wider range of $\theta$) will activate the instar.

(Figure: the weight vector ${}_1\mathbf{w}$ and the range of inputs that activate the instar.)

Page 29: Hebbian Learning

Instar Rule

Hebb with decay:

$w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\, p_j(q) - \gamma\, w_{ij}(q-1)$

Modify so that learning and forgetting will only occur when the neuron is active -- Instar Rule:

$w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\, p_j(q) - \gamma\, a_i(q)\, w_{ij}(q-1)$

or, setting the decay rate equal to the learning rate ($\gamma = \alpha$):

$w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\, \bigl(p_j(q) - w_{ij}(q-1)\bigr)$

Vector form:

${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1) + \alpha\, a_i(q)\, \bigl(\mathbf{p}(q) - {}_i\mathbf{w}(q-1)\bigr)$

Page 30: Hebbian Learning

Graphical Representation

For the case where the instar is active ($a_i = 1$):

${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1) + \alpha\, \bigl(\mathbf{p}(q) - {}_i\mathbf{w}(q-1)\bigr)$

or

${}_i\mathbf{w}(q) = (1 - \alpha)\, {}_i\mathbf{w}(q-1) + \alpha\, \mathbf{p}(q)$

For the case where the instar is inactive ($a_i = 0$):

${}_i\mathbf{w}(q) = {}_i\mathbf{w}(q-1)$
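A small sketch of this behaviour (illustrative values, not from the slides): when the instar is active, the weight vector moves a fraction $\alpha$ of the way toward the input vector on each step.

```python
import numpy as np

def instar_update(w_row, p, a_i, alpha=0.5):
    # Instar rule: move the weight row toward p only when the neuron is active.
    return w_row + alpha * a_i * (p - w_row)

w = np.array([1.0, 0.0])                  # initial weight vector
p = np.array([0.7071, 0.7071])            # normalized input vector
for _ in range(5):
    w = instar_update(w, p, a_i=1)        # assume the instar is active each step
    print(w)
# w converges toward p; with a_i = 0 the update would leave w unchanged.
```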

Page 31: Hebbian Learning

Instar Demo

(Demo: the weight vector of $\mathbf{W}$ moves toward the input vector.)

Page 32: Hebbian Learning

Outstar (Recall Network)

Page 33: Hebbian Learning

Outstar Operation

Suppose we want the outstar to recall a certain pattern $\mathbf{a}^*$ whenever the input $p = 1$ is presented to the network. Let

$\mathbf{W} = \mathbf{a}^*$

Then, when $p = 1$,

$\mathbf{a} = \mathrm{satlins}(\mathbf{W} p) = \mathrm{satlins}(\mathbf{a}^* \cdot 1) = \mathbf{a}^*$

and the pattern is correctly recalled.

The columns of a weight matrix represent patterns to be recalled.

Page 34: Hebbian Learning

Outstar Rule

For the instar rule we made the weight decay term of the Hebb rule proportional to the output of the network. For the outstar rule we make the weight decay term proportional to the input of the network:

$w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\, p_j(q) - \gamma\, p_j(q)\, w_{ij}(q-1)$

If we make the decay rate $\gamma$ equal to the learning rate $\alpha$,

$w_{ij}(q) = w_{ij}(q-1) + \alpha\, \bigl(a_i(q) - w_{ij}(q-1)\bigr)\, p_j(q)$

Vector form:

$\mathbf{w}_j(q) = \mathbf{w}_j(q-1) + \alpha\, \bigl(\mathbf{a}(q) - \mathbf{w}_j(q-1)\bigr)\, p_j(q)$
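A minimal sketch of one outstar update (illustrative names; $\alpha = 1$ and the pineapple measurement vector from the example that follows):

```python
import numpy as np

def outstar_update(w_col, a, p_j, alpha=1.0):
    # Outstar rule: move the weight column toward the output when its input is active.
    return w_col + alpha * (a - w_col) * p_j

w1 = np.zeros(3)                          # column of W driven by the "pineapple seen" input
a_star = np.array([-1.0, -1.0, 1.0])      # measurements to be recalled
w1 = outstar_update(w1, a_star, p_j=1)    # one presentation with p = 1
print(w1)                                 # -> [-1. -1.  1.]; later, p = 1 alone recalls a_star
```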

Page 35: Hebbian Learning

Example - Pineapple Recall

Page 36: Hebbian Learning

Definitions

$\mathbf{a} = \mathrm{satlins}(\mathbf{W}^0 \mathbf{p}^0 + \mathbf{W} p)$

$\mathbf{W}^0 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad \mathbf{p}^0 = \begin{bmatrix} \text{shape} \\ \text{texture} \\ \text{weight} \end{bmatrix}$

$p = \begin{cases} 1 & \text{if a pineapple can be seen} \\ 0 & \text{otherwise} \end{cases}$

$\mathbf{p}^{\text{pineapple}} = \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}$

Page 37: Hebbian Learning

Outstar Demo

Page 38: Hebbian Learning

Iteration 1

Training sequence:

$\mathbf{p}^0(1) = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},\ p(1) = 1 \qquad \mathbf{p}^0(2) = \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix},\ p(2) = 1$

$\mathbf{a}(1) = \mathrm{satlins}\left( \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \cdot 1 \right) = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$ (no response)

$\mathbf{w}_1(1) = \mathbf{w}_1(0) + \bigl(\mathbf{a}(1) - \mathbf{w}_1(0)\bigr)\, p(1) = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} + \left( \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right) \cdot 1 = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$

(with $\alpha = 1$)

Page 39: Hebbian Learning

Convergence

$\mathbf{a}(2) = \mathrm{satlins}\left( \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \cdot 1 \right) = \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}$ (measurements given)

$\mathbf{w}_1(2) = \mathbf{w}_1(1) + \bigl(\mathbf{a}(2) - \mathbf{w}_1(1)\bigr)\, p(2) = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} + \left( \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right) \cdot 1 = \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}$

$\mathbf{a}(3) = \mathrm{satlins}\left( \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix} \cdot 1 \right) = \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}$ (measurements recalled)

$\mathbf{w}_1(3) = \mathbf{w}_1(2) + \bigl(\mathbf{a}(3) - \mathbf{w}_1(2)\bigr)\, p(3) = \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix} + \left( \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix} - \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix} \right) \cdot 1 = \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}$

Page 40: Hebbian Learning

Supervised Hebbian Learning

Page 41: Hebbian Learning

Linear Associator

$\mathbf{a} = \mathbf{W}\mathbf{p}$

Training set: $\{\mathbf{p}_1, \mathbf{t}_1\},\ \{\mathbf{p}_2, \mathbf{t}_2\},\ \ldots,\ \{\mathbf{p}_Q, \mathbf{t}_Q\}$

$a_i = \sum_{j=1}^{R} w_{ij}\, p_j$

Page 42: Hebbian Learning

Hebb Rule

$w_{ij}^{new} = w_{ij}^{old} + \alpha\, f_i(a_{iq})\, g_j(p_{jq})$

where $f_i(a_{iq})$ is a function of the postsynaptic signal and $g_j(p_{jq})$ a function of the presynaptic signal.

Simplified form (actual output $a_{iq}$ times input pattern $p_{jq}$):

$w_{ij}^{new} = w_{ij}^{old} + a_{iq}\, p_{jq}$

Supervised form (desired output $t_{iq}$ replaces the actual output):

$w_{ij}^{new} = w_{ij}^{old} + t_{iq}\, p_{jq}$

Matrix form:

$\mathbf{W}^{new} = \mathbf{W}^{old} + \mathbf{t}_q \mathbf{p}_q^T$

Page 43: Hebbian Learning

Batch Operation

$\mathbf{W} = \mathbf{t}_1 \mathbf{p}_1^T + \mathbf{t}_2 \mathbf{p}_2^T + \cdots + \mathbf{t}_Q \mathbf{p}_Q^T = \sum_{q=1}^{Q} \mathbf{t}_q \mathbf{p}_q^T$ (zero initial weights)

Matrix form:

$\mathbf{W} = \begin{bmatrix} \mathbf{t}_1 & \mathbf{t}_2 & \cdots & \mathbf{t}_Q \end{bmatrix} \begin{bmatrix} \mathbf{p}_1^T \\ \mathbf{p}_2^T \\ \vdots \\ \mathbf{p}_Q^T \end{bmatrix} = \mathbf{T}\mathbf{P}^T$

where

$\mathbf{T} = \begin{bmatrix} \mathbf{t}_1 & \mathbf{t}_2 & \cdots & \mathbf{t}_Q \end{bmatrix} \qquad \mathbf{P} = \begin{bmatrix} \mathbf{p}_1 & \mathbf{p}_2 & \cdots & \mathbf{p}_Q \end{bmatrix}$
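A minimal sketch of the batch rule (the orthonormal inputs below are made up purely for illustration):

```python
import numpy as np

def hebb_weights(P, T):
    # Batch Hebb rule with zero initial weights: W = T P^T
    # (columns of P are the prototypes, columns of T the corresponding targets).
    return T @ P.T

P = np.eye(3)                        # orthonormal prototype patterns (standard basis)
T = np.array([[1.0, -1.0, 1.0]])     # one-dimensional targets
W = hebb_weights(P, T)
print(W @ P[:, 0])                   # -> [1.]: reproduces t_1 exactly for orthonormal inputs
```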

Page 44: Hebbian Learning

Performance Analysis

$\mathbf{a} = \mathbf{W}\mathbf{p}_k = \left( \sum_{q=1}^{Q} \mathbf{t}_q \mathbf{p}_q^T \right) \mathbf{p}_k = \sum_{q=1}^{Q} \mathbf{t}_q\, (\mathbf{p}_q^T \mathbf{p}_k)$

Case I: the input patterns are orthogonal (and normalized), so

$\mathbf{p}_q^T \mathbf{p}_k = \begin{cases} 1 & q = k \\ 0 & q \ne k \end{cases}$

Therefore the network output equals the target:

$\mathbf{a} = \mathbf{W}\mathbf{p}_k = \mathbf{t}_k$

Case II: the input patterns are normalized, but not orthogonal.

$\mathbf{a} = \mathbf{W}\mathbf{p}_k = \mathbf{t}_k + \sum_{q \ne k} \mathbf{t}_q\, (\mathbf{p}_q^T \mathbf{p}_k)$

The second term is the error term.

Page 45: Hebbian Learning

Example

Banana and apple prototype patterns:

$\mathbf{p}_1 = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} \text{ (banana)} \qquad \mathbf{p}_2 = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} \text{ (apple)}$

Normalized prototype patterns and targets:

$\left\{ \mathbf{p}_1 = \begin{bmatrix} -0.5774 \\ 0.5774 \\ -0.5774 \end{bmatrix},\ t_1 = -1 \right\} \qquad \left\{ \mathbf{p}_2 = \begin{bmatrix} 0.5774 \\ 0.5774 \\ -0.5774 \end{bmatrix},\ t_2 = 1 \right\}$

Weight matrix (Hebb rule):

$\mathbf{W} = \mathbf{T}\mathbf{P}^T = \begin{bmatrix} -1 & 1 \end{bmatrix} \begin{bmatrix} -0.5774 & 0.5774 & -0.5774 \\ 0.5774 & 0.5774 & -0.5774 \end{bmatrix} = \begin{bmatrix} 1.1548 & 0 & 0 \end{bmatrix}$

Tests:

Banana: $\mathbf{W}\mathbf{p}_1 = \begin{bmatrix} 1.1548 & 0 & 0 \end{bmatrix} \begin{bmatrix} -0.5774 \\ 0.5774 \\ -0.5774 \end{bmatrix} = -0.6668$

Apple: $\mathbf{W}\mathbf{p}_2 = \begin{bmatrix} 1.1548 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0.5774 \\ 0.5774 \\ -0.5774 \end{bmatrix} = 0.6668$
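The same numbers can be checked with a few lines of numpy (a sketch; the 0.5774 entries are the slide's values, i.e. $1/\sqrt{3}$):

```python
import numpy as np

P = np.array([[-0.5774,  0.5774],
              [ 0.5774,  0.5774],
              [-0.5774, -0.5774]])   # normalized banana and apple prototypes as columns
T = np.array([[-1.0, 1.0]])          # targets
W = T @ P.T                          # Hebb rule
print(W)                             # ~[[1.1548, 0, 0]]
print(W @ P[:, 0], W @ P[:, 1])      # ~[-0.6668] and [0.6668]: close to, but not exactly, the targets
```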

Page 46: Hebbian Learning

Pseudoinverse Rule - (1)

Performance index:

$F(\mathbf{W}) = \sum_{q=1}^{Q} \|\mathbf{t}_q - \mathbf{W}\mathbf{p}_q\|^2$

which is zero if

$\mathbf{W}\mathbf{p}_q = \mathbf{t}_q, \qquad q = 1, 2, \ldots, Q$

Matrix form:

$\mathbf{W}\mathbf{P} = \mathbf{T}$, where $\mathbf{T} = \begin{bmatrix} \mathbf{t}_1 & \mathbf{t}_2 & \cdots & \mathbf{t}_Q \end{bmatrix}$ and $\mathbf{P} = \begin{bmatrix} \mathbf{p}_1 & \mathbf{p}_2 & \cdots & \mathbf{p}_Q \end{bmatrix}$

$F(\mathbf{W}) = \|\mathbf{T} - \mathbf{W}\mathbf{P}\|^2 = \|\mathbf{E}\|^2$

$\|\mathbf{E}\|^2 = \sum_i \sum_j e_{ij}^2$ (mean-squared error)

Page 47: Hebbian Learning

Pseudoinverse Rule - (2)

Minimize:

$F(\mathbf{W}) = \|\mathbf{T} - \mathbf{W}\mathbf{P}\|^2 = \|\mathbf{E}\|^2$

If an inverse exists for $\mathbf{P}$, $F(\mathbf{W})$ can be made zero:

$\mathbf{W}\mathbf{P} = \mathbf{T} \quad\Rightarrow\quad \mathbf{W} = \mathbf{T}\mathbf{P}^{-1}$

When an inverse does not exist, $F(\mathbf{W})$ can be minimized using the pseudoinverse:

$\mathbf{W} = \mathbf{T}\mathbf{P}^{+}, \qquad \mathbf{P}^{+} = (\mathbf{P}^T \mathbf{P})^{-1} \mathbf{P}^T$

Page 48: Hebbian Learning

Relationship to the Hebb Rule

Pseudoinverse rule: $\mathbf{W} = \mathbf{T}\mathbf{P}^{+}, \qquad \mathbf{P}^{+} = (\mathbf{P}^T \mathbf{P})^{-1} \mathbf{P}^T$

Hebb rule: $\mathbf{W} = \mathbf{T}\mathbf{P}^T$

If the prototype patterns are orthonormal:

$\mathbf{P}^T \mathbf{P} = \mathbf{I}$

so that

$\mathbf{P}^{+} = (\mathbf{P}^T \mathbf{P})^{-1} \mathbf{P}^T = \mathbf{P}^T$

and the two rules coincide.

Page 49: Hebbian Learning

Example

$\left\{ \mathbf{p}_1 = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix},\ t_1 = -1 \right\} \qquad \left\{ \mathbf{p}_2 = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix},\ t_2 = 1 \right\}$

$\mathbf{W} = \mathbf{T}\mathbf{P}^{+} = \begin{bmatrix} -1 & 1 \end{bmatrix} \left( \begin{bmatrix} -1 & 1 \\ 1 & 1 \\ -1 & -1 \end{bmatrix} \right)^{+}$

$\mathbf{P}^{+} = (\mathbf{P}^T \mathbf{P})^{-1} \mathbf{P}^T = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix}^{-1} \begin{bmatrix} -1 & 1 & -1 \\ 1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} -0.5 & 0.25 & -0.25 \\ 0.5 & 0.25 & -0.25 \end{bmatrix}$

$\mathbf{W} = \mathbf{T}\mathbf{P}^{+} = \begin{bmatrix} -1 & 1 \end{bmatrix} \begin{bmatrix} -0.5 & 0.25 & -0.25 \\ 0.5 & 0.25 & -0.25 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$

$\mathbf{W}\mathbf{p}_1 = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} = -1 \qquad \mathbf{W}\mathbf{p}_2 = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} = 1$
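A sketch verifying this example with numpy, using np.linalg.pinv for $\mathbf{P}^{+}$ instead of forming $(\mathbf{P}^T\mathbf{P})^{-1}\mathbf{P}^T$ by hand:

```python
import numpy as np

P = np.array([[-1.0,  1.0],
              [ 1.0,  1.0],
              [-1.0, -1.0]])         # prototype patterns as columns
T = np.array([[-1.0, 1.0]])          # targets
W = T @ np.linalg.pinv(P)            # pseudoinverse rule
print(W)                             # -> [[1., 0., 0.]]
print(W @ P[:, 0], W @ P[:, 1])      # -> [-1.] and [1.]: the targets are reproduced exactly
```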

Page 50: Hebbian Learning

Autoassociative Memory

$\mathbf{p}_1 = \begin{bmatrix} -1 & 1 & 1 & 1 & 1 & -1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 & -1 & 1 & -1 \end{bmatrix}^T$

$\mathbf{W} = \mathbf{p}_1 \mathbf{p}_1^T + \mathbf{p}_2 \mathbf{p}_2^T + \mathbf{p}_3 \mathbf{p}_3^T$
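A minimal autoassociative sketch using only the 16-element $\mathbf{p}_1$ listed above (the stored slide patterns are larger images; $\mathbf{p}_2$ and $\mathbf{p}_3$ would be added the same way):

```python
import numpy as np

p1 = np.array([-1, 1, 1, 1, 1, -1, 1, -1, -1, -1, -1, 1, 1, -1, 1, -1], dtype=float)
W = np.outer(p1, p1)                 # W = p1 p1^T (add p2 p2^T + p3 p3^T for more patterns)

probe = p1.copy()
probe[:8] = 0                        # occlude the first half of the pattern
recalled = np.sign(W @ probe)        # threshold the linear response
print(np.array_equal(recalled, p1))  # -> True: the stored pattern is recovered
```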

Page 51: Hebbian Learning

Tests

50% Occluded

67% Occluded

Noisy Patterns (7 pixels)

Page 52: Hebbian Learning

Supervised Hebbian Demo

Page 53: Hebbian Learning

Spectrum of Hebbian Learning

Basic supervised rule:

$\mathbf{W}^{new} = \mathbf{W}^{old} + \mathbf{t}_q \mathbf{p}_q^T$

Supervised with learning rate:

$\mathbf{W}^{new} = \mathbf{W}^{old} + \alpha\, \mathbf{t}_q \mathbf{p}_q^T$

Smoothing:

$\mathbf{W}^{new} = \mathbf{W}^{old} + \alpha\, \mathbf{t}_q \mathbf{p}_q^T - \gamma\, \mathbf{W}^{old} = (1 - \gamma)\, \mathbf{W}^{old} + \alpha\, \mathbf{t}_q \mathbf{p}_q^T$

Delta rule (target minus actual output):

$\mathbf{W}^{new} = \mathbf{W}^{old} + \alpha\, (\mathbf{t}_q - \mathbf{a}_q)\, \mathbf{p}_q^T$

Unsupervised:

$\mathbf{W}^{new} = \mathbf{W}^{old} + \alpha\, \mathbf{a}_q \mathbf{p}_q^T$
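A side-by-side sketch of these update rules on one made-up training pair, mainly to show how they differ by a single term (all values are illustrative):

```python
import numpy as np

alpha, gamma = 0.5, 0.1
p = np.array([[1.0], [0.0]])                     # made-up input pattern (column vector)
t = np.array([[1.0]])                            # made-up target
W = np.array([[0.2, 0.4]])                       # made-up current weights
a = W @ p                                        # actual output

W_basic  = W + t @ p.T                           # basic supervised rule
W_rate   = W + alpha * (t @ p.T)                 # supervised with learning rate
W_smooth = (1 - gamma) * W + alpha * (t @ p.T)   # smoothing
W_delta  = W + alpha * (t - a) @ p.T             # delta rule
W_unsup  = W + alpha * (a @ p.T)                 # unsupervised
print(W_basic, W_rate, W_smooth, W_delta, W_unsup, sep="\n")
```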