Neural Networks
Kasin Prakobwaitayakit, Department of Electrical Engineering
Chiangmai University
EE459: Neural Networks
The Structure of Neurones
• A neurone has a cell body, a branching input structure (the dendrites) and a branching output structure (the axon).
– Axons connect to dendrites via synapses.
– Electro-chemical signals are propagated from the dendritic input, through the cell body, and down the axon to other neurons.
The Structure of Neurones
• A neurone only fires if its input signal exceeds a certain amount (the threshold) within a short time period.
• Synapses vary in strength:
– Good connections allow a large signal.
– Slight connections allow only a weak signal.
– Synapses can be either excitatory or inhibitory.
A Classic Artificial Neuron (1)
[Figure: inputs a0, a1, a2, …, an (plus a fixed bias input +1) are weighted by wj0, wj1, wj2, …, wjn and summed to give Sj, which is passed through the activation function f(Sj) to produce the output Xj.]
A Classic Artificial Neuron (2)
• All neurons contain an activation function, which determines whether the signal is strong enough to produce an output.
• Several different functions could be used as the activation function.
Learning
• When the output is calculated, the desired output is then given to the program to modify the weights.
• After the modifications are done, the same inputs will produce the desired outputs.
• Formula:
WeightN = WeightN + learning rate × (Desired Output − Actual Output) × InputN
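The update formula above (the delta rule) can be sketched on a single linear unit; the learning rate, inputs, and target value below are illustrative assumptions.

```python
# A sketch of the delta-rule update from the slide, applied to a simple
# linear unit. Learning rate, inputs, and desired output are illustrative.

def train_step(weights, inputs, desired, learning_rate=0.1):
    """Apply one delta-rule update to every weight and return the result."""
    actual = sum(w * x for w, x in zip(weights, inputs))  # current output
    error = desired - actual
    return [w + learning_rate * error * x
            for w, x in zip(weights, inputs)]

# Repeating the update drives the output toward the desired value.
weights = [0.0, 0.0]
for _ in range(100):
    weights = train_step(weights, inputs=[1.0, 2.0], desired=1.0)
output = sum(w * x for w, x in zip(weights, [1.0, 2.0]))  # ≈ 1.0
```

Each step shrinks the error by a constant factor here, so after 100 presentations of the same pattern the output essentially equals the target.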
Tractable Architectures
• Feedforward Neural Networks
– Connections in one direction only
– Partial biological justification
• Complex models with constraints (Hopfield and ART)
– Feedback loops included
– Complex behaviour, limited by the constraining architecture
Fig. 1: Multilayer Perceptron
[Figure: input signals (external stimuli) enter at the input layer, pass through adjustable weights, and emerge as output values at the output layer.]
Types of Layer
• The input layer
– Introduces input values into the network.
– No activation function or other processing.
• The hidden layer(s)
– Perform classification of features.
– Two hidden layers are sufficient to solve any problem.
– More complex features imply that more layers may be better.
Types of Layer (continued)
• The output layer
– Functionally just like the hidden layers.
– Outputs are passed on to the world outside the neural network.
A Simple Model of a Neuron
• Each neuron has a threshold value.
• Each neuron has weighted inputs from other neurons.
• The input signals form a weighted sum.
• If the activation level exceeds the threshold, the neuron "fires".
[Figure: inputs y1, y2, y3, …, yi with weights w1j, w2j, w3j, …, wij feed into the neuron, producing output O.]
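The simple model above can be sketched in a few lines; the weights, the threshold of 1.5, and the AND-gate example are illustrative assumptions.

```python
# A minimal sketch of the simple neuron model: the neuron sums its weighted
# inputs and "fires" only if that sum exceeds its threshold.

def fires(inputs, weights, threshold):
    """Return True when the weighted sum of the inputs exceeds the threshold."""
    activation = sum(w * y for w, y in zip(weights, inputs))
    return activation > threshold

# With unit weights and threshold 1.5, the neuron computes a logical AND:
# only the input (1, 1) produces a weighted sum (2.0) above the threshold.
weights = [1.0, 1.0]
and_gate = [fires([a, b], weights, threshold=1.5) for a in (0, 1) for b in (0, 1)]
# → [False, False, False, True]
```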
An Artificial Neuron
• Each hidden or output neuron has weighted input connections from each of the units in the preceding layer.
• The unit performs a weighted sum of its inputs, and subtracts its threshold value, to give its activation level.
• The activation level is passed through a sigmoid activation function to determine the output.
[Figure: inputs y1, y2, y3, …, yi with weights w1j, w2j, w3j, …, wij are summed and passed through f(x) to produce output O.]
Mathematical Definition
• Number all the neurons from 1 up to N.
• The output of the j'th neuron is oj.
• The threshold of the j'th neuron is θj.
• The weight of the connection from unit i to unit j is wij.
• The activation of the j'th unit is aj.
• The activation function is written as σ(x).
Mathematical Definition
• Since the activation aj is given by the sum of the weighted inputs minus the threshold, we can write:
oj = σ(aj)
aj = Σi (wij oi) − θj
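The two equations can be sketched directly; taking σ to be the logistic function is an assumption here, since the slides only require an S-shaped squashing function.

```python
import math

# A sketch of the definition: a_j = sum_i(w_ij * o_i) - theta_j,
# then o_j = sigma(a_j), with sigma assumed to be the logistic function.

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def unit_output(inputs, weights, theta):
    """Weighted sum of the inputs, minus the threshold, through sigma."""
    a_j = sum(w * o for w, o in zip(weights, inputs)) - theta
    return sigma(a_j)

# Example: the weighted sum 0.4*1.0 + 0.2*0.5 = 0.5 exactly cancels the
# threshold 0.5, so a_j = 0 and o_j = sigma(0) = 0.5.
o_j = unit_output(inputs=[1.0, 0.5], weights=[0.4, 0.2], theta=0.5)
```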
Activation functions
• Transform the neuron's input into an output.
• Features of activation functions:
– A squashing effect is required.
• Prevents accelerating growth of activation levels through the network.
– Simple and easy to calculate.
– Monotonically non-decreasing (order-preserving).
Standard activation functions
• The hard-limiting threshold function
– Corresponds to the biological paradigm: the neuron either fires or not.
• Sigmoid functions ('S'-shaped curves)
– The logistic function: σ(x) = 1 / (1 + e^(−ax))
– The hyperbolic tangent (symmetrical)
– Both functions have a simple derivative.
– Only the shape is important.
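The two sigmoids can be compared numerically; the slope value a = 1 and the sample points below are illustrative.

```python
import math

# A sketch comparing the two sigmoid functions: the logistic squashes into
# (0, 1), tanh (the symmetric variant) into (-1, 1), and both have the
# simple derivatives noted in the comments.

def logistic(x, a=1.0):
    return 1.0 / (1.0 + math.exp(-a * x))

def logistic_deriv(x, a=1.0):
    s = logistic(x, a)
    return a * s * (1.0 - s)        # simple derivative: a * s * (1 - s)

def tanh_deriv(x):
    return 1.0 - math.tanh(x) ** 2  # simple derivative: 1 - tanh(x)^2

# Squashing: even large inputs are compressed into a bounded range.
hi = logistic(50.0)          # just below 1
lo = logistic(-50.0)         # just above 0
slope = logistic_deriv(0.0)  # maximum slope of the logistic: 0.25
```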
Training Algorithms
• Adjust neural network weights to map inputs to outputs.
• Use a set of sample patterns where the desired output (given the inputs presented) is known.
• The purpose is to learn to generalize:
– Recognize features which are common to good and bad exemplars.
Back-Propagation
• A training procedure which allows multi-layer feedforward Neural Networks to be trained.
• Can theoretically perform "any" input-output mapping.
• Can learn to solve linearly inseparable problems.
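Back-propagation can be sketched on XOR, the classic linearly inseparable problem. The 2-2-1 network shape, logistic activations, learning rate, epoch count, and random seed below are all illustrative assumptions.

```python
import math
import random

# A sketch of back-propagation training a 2-2-1 feedforward network on XOR.

random.seed(0)

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hidden layer: 2 units, each with 2 input weights plus a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
# Output unit: 2 hidden weights plus a bias.
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigma(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigma(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5

def epoch_loss():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

loss_before = epoch_loss()
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        # Error signal at the output, propagated back via the chain rule.
        delta_o = (t - o) * o * (1 - o)
        for j in range(2):
            delta_h = delta_o * w_o[j] * h[j] * (1 - h[j])
            w_h[j][0] += lr * delta_h * x[0]
            w_h[j][1] += lr * delta_h * x[1]
            w_h[j][2] += lr * delta_h
        w_o[0] += lr * delta_o * h[0]
        w_o[1] += lr * delta_o * h[1]
        w_o[2] += lr * delta_o
loss_after = epoch_loss()
```

A single-layer network cannot separate XOR, but the hidden layer lets gradient descent carve the input space into the two required regions, so the squared error falls as training proceeds.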
Activation functions and training
• For feedforward networks:
– A continuous activation function can be differentiated, allowing gradient descent.
– Back-propagation is an example of a gradient-descent technique.
– This is the reason for the prevalence of sigmoid functions.
Training versus Analysis
• Understanding how the network is doing what it does
• Predicting behaviour under novel conditions