
Page 1

Intelligent Control Methods

Lecture 14: Neuronal Nets (Part 2)

Slovak University of Technology, Faculty of Material Science and Technology in Trnava

Page 2

Decision (solution, active dynamics)

The input x^k = (x1^k, x2^k, ..., xn^k) is applied to the input layer.

The neurons in all layers work out their output signals according to their inputs, threshold and transfer function. The output signals are multiplied by the synaptic weights.

The neuron output signals feed the next layer. The output of the last layer is y^k = (y1^k, y2^k, ..., ym^k).

With the decision y^k, the net declares (indicates) what x^k is.
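A minimal sketch of this active dynamics in Python (the layer sizes, random weights, step transfer function and threshold below are illustrative assumptions, not values from the lecture):

```python
import numpy as np

def step(s, theta):
    """Unit-step transfer function with threshold theta."""
    return (s > theta).astype(float)

def forward(x, layers, theta=0.0):
    """Propagate input x through a list of weight matrices.

    Each layer computes s = W @ y_prev and applies the transfer
    function; the last layer's output is the decision vector y.
    """
    y = np.asarray(x, dtype=float)
    for W in layers:
        y = step(W @ y, theta)
    return y

# Hypothetical 15-input, 5-output net, as in the example on the next page:
rng = np.random.default_rng(0)
layers = [rng.uniform(-1, 1, (8, 15)), rng.uniform(-1, 1, (5, 8))]
x = rng.integers(0, 2, 15)
print(forward(x, layers))   # e.g. a 5-bit decision vector
```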

Page 3

Decision example: character classification

Inputs (pixels of the character raster):
x1 = 1, x2 = 1, x3 = 0, x4 = 1, ..., x15 = 1

Outputs (rounded to binary decisions):
y1 = 0.84 → 1
y2 = 0.04 → 0
y3 = 0.75 → 1
y4 = 0.92 → 1
y5 = 0.12 → 0

y = (1, 0, 1, 1, 0) corresponds to "R"
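The final decision step can be sketched as thresholding the raw outputs and looking the bit pattern up in a codebook (the 0.5 threshold and the codebook are assumptions for illustration):

```python
# Raw outputs from the example above:
y_raw = [0.84, 0.04, 0.75, 0.92, 0.12]

# Threshold at 0.5 (assumed): 0.84 -> 1, 0.04 -> 0, ...
bits = tuple(1 if v > 0.5 else 0 for v in y_raw)

# Hypothetical codebook mapping output patterns to characters:
codebook = {(1, 0, 1, 1, 0): "R"}
print(bits, "->", codebook.get(bits, "?"))   # (1, 0, 1, 1, 0) -> R
```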

Page 4

Learning (adaptive dynamics)

Theoretically possible ways of learning:
- construction of new connections
- removal of connections
- changing neuron threshold values
- changing transfer functions
- changing the number of layers or the number of neurons in layers
- changing synaptic weights (the only way used in practice)

Basic idea: the synaptic weights are adapted to values which guarantee the proper output vector y for each input vector x.

One way (not the only one) of weight adaptation: learning with a training set.

Page 5

Learning with a training set:

Training set: M = {(x^1, b^1), ..., (x^n, b^n)}
x^k – input vector
b^k – correct output vector (response)

The actual net response to the input x^k is y^k.
An adapted (trained) net satisfies y^k = b^k.

Page 6

Learning with a training set:

The vector x^k from the training set is connected to the net input. The signal spreads through the net and the net produces an output y^k, which is compared with b^k. The needed changes of the synaptic weights are computed according to the differences (the net error) between y^k and b^k. The biggest weight changes are made in the connections where the differences are greatest (delta rule). Because the difference is measured at the output, the first calculations are performed in the output layer; the calculation process then moves through the net from output to input (back propagation).

The global net error is calculated after a complete pass through the training set. If it is within the allowed range, the adaptation process ends; otherwise the training set must be used again (perhaps thousands of iterations).
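This iterative loop can be sketched in the simplest possible case, a single neuron trained on the logical AND function (the data, learning rate and threshold are illustrative assumptions; the multi-layer formulas follow on the next pages):

```python
import numpy as np

# Training set M = {(x^k, b^k)} for the logical AND function (illustrative):
X = np.array([[1, 1], [0, 1], [1, 0], [0, 0]], dtype=float)
B = np.array([1, 0, 0, 0], dtype=float)

eta, theta = 0.5, 1.5      # learning rate and fixed threshold (assumed)
w = np.zeros(2)            # initial synaptic weights

for epoch in range(100):
    for x, b in zip(X, B):                  # connect each pattern x^k to the input
        y = float(w @ x > theta)            # net response y^k
        w += eta * (b - y) * x              # adjust weights from the difference
    # global error after a complete pass over the training set
    E = sum((b - float(w @ x > theta)) ** 2 for x, b in zip(X, B))
    if E == 0:                              # error in the allowed range: stop
        break

print(epoch, w)   # converges after a few passes
```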

Page 7

Learning with a training set: the global net error (for weights w):

$$E(\mathbf{w}) = \sum_k E_k(\mathbf{w}), \qquad E_k(\mathbf{w}) = \sum_j e\bigl(y_j^k, b_j^k\bigr)$$

(e(y_j^k, b_j^k) – error of element j; E_k – error of pattern k; E – global error over the training set)

The sum of squared errors is used to estimate the error of pattern k, therefore:

$$E(\mathbf{w}) = \frac{1}{2}\sum_k \sum_j \bigl(y_j^k - b_j^k\bigr)^2, \qquad \mathbf{w}_{\mathrm{opt}} = \arg\min_{\mathbf{w}} E(\mathbf{w})$$
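As a sketch, the formula translates directly into code (`net` stands for any forward function, such as the one sketched earlier; both names are assumptions):

```python
import numpy as np

def global_error(net, M):
    """Global net error E(w) = 1/2 * sum_k sum_j (y_j^k - b_j^k)^2."""
    return 0.5 * sum(np.sum((net(x) - b) ** 2) for x, b in M)
```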

Page 8

Learning with a training set:

The changes Δw_ij are computed so that the error of pattern k is minimal (gradient descent):

$$\Delta w_{ij}(k) = -\eta\,\frac{\partial E(k,\mathbf{w})}{\partial w_{ij}}$$

η – the learning rate; it defines the speed of the learning process.

(Figure: a connection with weight w_ij carries the signal y_ij into neuron i.)

It is an iterative process; the initial synaptic weights must be set up first.
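Numerically, the same update can be sketched with a finite-difference estimate of the partial derivative (illustrative only; it assumes a differentiable net, since with unit-step transfer functions the derivative is zero almost everywhere, which is why the lecture's example drops it):

```python
def gradient_step(E, w, i, j, eta, h=1e-6):
    """One step dw_ij = -eta * dE/dw_ij, with dE/dw_ij estimated numerically."""
    w[i][j] += h
    E_plus = E(w)
    w[i][j] -= 2 * h
    E_minus = E(w)
    w[i][j] += h                          # restore w_ij
    grad = (E_plus - E_minus) / (2 * h)   # central-difference derivative
    w[i][j] -= eta * grad
    return w
```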

Page 9

Learning with a training set: the result of the derivation is the delta rule.

For the output layer:

$$\Delta w_{ij}(k) = \eta\,\bigl(b_i(k) - y_i(k)\bigr)\,y_{ij}(k)$$

(needed change of w_ij = learning rate × error of output i × contribution of neuron j to input i)

For a hidden layer:

$$\Delta w_{ij}(k) = \eta\left[\sum_m w_{mi}\,\bigl(b_m(k) - y_m(k)\bigr)\right] y_{ij}(k)$$

(the sum runs over all outputs m into the next layer)
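Both formulas can be sketched for a net with one hidden layer (a sketch under assumptions: numpy arrays, threshold 0 for brevity, and the transfer-function derivative dropped as in the lecture's unit-step example):

```python
import numpy as np

def delta_rule(x, b, W_hid, W_out, eta):
    """Weight changes for one pattern k via the delta rule."""
    y_hid = (W_hid @ x > 0).astype(float)        # hidden-layer outputs
    y_out = (W_out @ y_hid > 0).astype(float)    # net outputs y_i

    err = b - y_out                              # (b_i - y_i) for each output i
    # Output layer: dw_ij = eta * (b_i - y_i) * y_ij
    dW_out = eta * np.outer(err, y_hid)
    # Hidden layer: dw_ij = eta * [sum_m w_mi (b_m - y_m)] * y_ij
    dW_hid = eta * np.outer(W_out.T @ err, x)
    return dW_hid, dW_out
```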

Page 10

Learning with a training set:

The weights are adapted most in the places where the errors are greatest.

The calculations are performed in the direction from the output layer to the input layer (back propagation).

Page 11

Example of neuronal net learning:

(Figure: a 2-2-1 net. Inputs p and q feed neurons 1 and 2; neurons 3 and 4 form the hidden layer, reached through weights w31, w32, w41, w42; neuron 5 produces the output y through weights w53 and w54.)

The XOR truth table:

p q y
1 1 0
0 1 1
1 0 1
0 0 0

Thresholds θ = 0.01 (for all neurons), unit-step transfer functions, learning rate η = 1.

The initial weights for the 1st pattern (p = 1, q = 1) are:

w31(1) = -4.9, w41(1) = 4.6, w32(1) = 5.0, w42(1) = -5.1, w53(1) = 2.2, w54(1) = 2.5

Page 12

Example of neuronal net learning: b(1) = 0

For the 1st pattern: y53(1) = 1, y54(1) = 0, y5(1) = 1.

Δw53(1) = 1 · (b5(1) - y5(1)) · y53(1) = 1 · (0 - 1) · 1 = -1
Δw54(1) = 1 · (b5(1) - y5(1)) · y54(1) = 1 · (0 - 1) · 0 = 0

In the hidden layer:

Δw31(1) = 1 · [w53(1) · (b5(1) - y5(1))] · y31(1) = 1 · [2.2 · (0 - 1)] · 1 = -2.2
Δw41(1) = 1 · [w54(1) · (b5(1) - y5(1))] · y41(1) = 1 · [2.5 · (0 - 1)] · 1 = -2.5
Δw32(1) = 1 · [w53(1) · (b5(1) - y5(1))] · y32(1) = 1 · [2.2 · (0 - 1)] · 1 = -2.2
Δw42(1) = 1 · [w54(1) · (b5(1) - y5(1))] · y42(1) = 1 · [2.5 · (0 - 1)] · 1 = -2.5

Page 13

Example of neuronal net learning: new weights for pattern 2 (p = 0, q = 1): wij(2) = wij(1) + Δwij(1)

w31(2) = w31(1) + Δw31(1) = -4.9 + (-2.2) = -7.1
w41(2) = w41(1) + Δw41(1) = 4.6 - 2.5 = 2.1
w32(2) = w32(1) + Δw32(1) = 5.0 - 2.2 = 2.8
w42(2) = w42(1) + Δw42(1) = -5.1 - 2.5 = -7.6
w53(2) = w53(1) + Δw53(1) = 2.2 - 1 = 1.2
w54(2) = w54(1) + Δw54(1) = 2.5 + 0 = 2.5

When pattern 2 is applied, the net produces the correct output; since the expected and actual outputs are equal, the weights are not changed. The same holds for patterns 3 and 4. The net is trained.
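The result can be verified in a few lines (a sketch reproducing the slide's numbers, with θ = 0.01 and unit-step transfer functions):

```python
import numpy as np

theta = 0.01
W_hid = np.array([[-7.1, 2.8],      # w31, w32  (weights after learning)
                  [ 2.1, -7.6]])    # w41, w42
w_out = np.array([1.2, 2.5])        # w53, w54

def net(p, q):
    y_hid = (W_hid @ np.array([p, q]) > theta).astype(float)
    return float(w_out @ y_hid > theta)

for p, q in [(1, 1), (0, 1), (1, 0), (0, 0)]:
    print(p, q, "->", net(p, q))    # 0, 1, 1, 0: the XOR truth table
```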

Page 14

Applications of NNs:

Pattern recognition (classification) – example: the character recognition in a raster shown above.

Optimisation – the training set contains various input combinations and their optimal outputs. The trained net can then find the optimum for other inputs, too. Used in cases where an analytical input-output formulation is missing.

Page 15

Applications of NNs: data evaluation, state monitoring (system, organism, TP, ...) – example: a chemical column.

Inputs (linguistic variables):
- volume quantity of the input hydrocarbon mixture
- volume quantity of the reflux (backward) flow
- volume quantity of the heating steam
- volume quantity of the product output

Outputs:
- temperature
- pressure
- concentration of the distilled material in the product
- concentration of contamination at the column bottom

Page 16

Applications of NNs: process control

Example: abrasive cutting (material with abrasive circulates along a closed loop).

Inputs:
- flow speed
- hardness of the abrasive, hardness of the material
- size of the abrasive, size of the material
- number of cycles

Outputs:
- material decrease
- surface roughness

Regulation – the controller constants are estimated according to the combination of the input, state, output and desired values.

Page 17

Neuronal nets – concluding remarks: There are no rules for estimating the number of layers or the number of neurons in the layers. (Nets with 0-2 hidden layers are used. The number of input neurons depends on the number of inputs; the number of output neurons depends on the number n of needed outputs (example: 2^i > n for i output neurons). The number of hidden layers is 1 or 2; the number of neurons in the hidden layers is kept low.)

(Figure: global net error as a function of net size.)

Page 18

Neuronal nets – concluding remarks:

I have not found recommendations for the choice of the transfer function, the threshold values, or the initial synaptic weight setup (average values from the allowed range? random values?).

The learning rate η is selected from the interval (0, 1). (A small value needs more iterations but is more precise; a bigger one learns rapidly but can oscillate around the extremum.)

It is not defined when to stop the learning process. After some number of iterations, the net's global error on new, unseen inputs can start to grow (net overtraining).
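A common practical remedy (an assumption here, not stated in the lecture) is early stopping: monitor the global error on patterns held out of the training set and stop when it stops improving:

```python
def train_with_early_stopping(step_fn, val_error, max_epochs=10_000, patience=5):
    """Stop when the held-out error has not improved for `patience` epochs.

    step_fn()   - runs one pass over the training set (assumed given)
    val_error() - global error on held-out patterns (assumed given)
    """
    best, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        step_fn()
        E = val_error()
        if E < best:
            best, bad_epochs = E, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:   # error keeps growing: overtraining
                break
    return best
```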