NEURAL NETWORKS Harshita Gupta (11508013) Bioprocess Engineering


TRANSCRIPT

Page 1: neural-networks (1)

NEURAL NETWORKS
Harshita Gupta (11508013)

Bioprocess Engineering

Page 2: neural-networks (1)

What are ANNs?

Conventional computers vs ANNs

How do they work?

Why them?

Applications

Page 3: neural-networks (1)

What Are Artificial Neural Networks?

An Artificial Neural Network (ANN) is an information processing system

It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.

ANNs, like people, learn by example; each network is configured for a specific application.

Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.

Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs or to find patterns in data.

Page 4: neural-networks (1)

ANNs vs Conventional Computers

Neural networks process information in a way similar to the human brain: they are composed of a large number of highly interconnected processing units.

Neural networks learn from examples, which must be selected very carefully; they cannot be programmed to perform a specific task.

Conventional computers use an algorithmic approach

The problem solving capability of conventional computers is restricted to problems that we already understand and know how to solve.


Page 5: neural-networks (1)

The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly.

The disadvantage is that, because the network finds out how to solve the problem by itself, its operation can be unpredictable.

Conventional computers use a cognitive approach to problem solving: the way the problem is to be solved must be known and stated in small, unambiguous instructions.

These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.

Page 6: neural-networks (1)

A Human Neuron

As we are all familiar, this is how a biological neuron looks.

Page 7: neural-networks (1)

In an artificial neural network, simple artificial nodes, variously called "neurons", "neurodes", "processing elements" (PEs) or "units", are connected together to form a network of nodes mimicking the biological neural networks — hence the term "artificial neural network".

A Network Neuron

[Figure: an artificial neuron, with parts labeled dendrites, cell body, summation, threshold and axon]

Page 8: neural-networks (1)

An Improved Neurode

[Figure: inputs X1, X2, ..., Xn with weights W1, W2, ..., Wn feeding the neuron]

A more sophisticated neuron is the McCulloch and Pitts model (MCP). Here the inputs are 'weighted': the effect that each input has on the decision depends on the weight of that particular input. In mathematical terms, the neuron fires if and only if

X1W1 + X2W2 + X3W3 + ... > T

In general form, the output of such a unit is f(x) = K(Σi wi gi(x)), where the gi(x) are the inputs, the wi are the weights and K is the activation (threshold) function.
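The MCP rule above can be sketched in a few lines of Python. This is an illustrative implementation, not code from the slides; the function name, weights and threshold values are my own choices.

# Minimal sketch of a McCulloch-Pitts style threshold neuron: the neuron
# fires (outputs 1) only when the weighted sum of its inputs exceeds T.
def mcp_neuron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Example: two inputs with weight 1.0 and threshold 1.5 behave like an AND gate.
print(mcp_neuron([1, 1], [1.0, 1.0], 1.5))  # -> 1
print(mcp_neuron([1, 0], [1.0, 1.0], 1.5))  # -> 0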

Page 9: neural-networks (1)

Network Model

The most common type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units.

The activity of the input units represents the raw information that is fed into the network.

The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.

The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units. (A numerical sketch of this structure follows below.)
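As a minimal sketch of this input/hidden/output structure, a forward pass can be written as below. The weights, input activities and the sigmoid squashing function are my own illustrative choices; the slides do not specify them.

import math

# Each unit's activity is a weighted sum of the previous layer's activities,
# passed through a squashing (sigmoid) function.
def layer(activities, weight_rows):
    return [
        1.0 / (1.0 + math.exp(-sum(a * w for a, w in zip(activities, row))))
        for row in weight_rows
    ]

inputs = [0.5, 0.9]                   # activities of the input units
w_input_hidden = [[0.4, -0.2],        # one row of weights per hidden unit
                  [0.7, 0.1],
                  [-0.5, 0.3]]
w_hidden_output = [[0.6, -0.4, 0.2]]  # one row of weights per output unit

hidden = layer(inputs, w_input_hidden)
output = layer(hidden, w_hidden_output)
print(output)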

Page 10: neural-networks (1)

Network Architectures

Feed-forward ANNs allow signals to travel one way only; from input to output. The output of any layer does not affect that same layer.

Feedback networks can have signals travelling in both directions by introducing loops in the network.

[Figures: a feed-forward network and a feedback network, each drawn with input, hidden and output layers]

Page 11: neural-networks (1)


Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs.

This type of organization is also referred to as bottom-up or top-down.

Feedback networks are dynamic; their 'state' is changing continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found.

Feedback networks are also known as interactive or recurrent networks.
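To illustrate the settling behavior described above, here is a toy single-unit feedback loop of my own (the weights, tanh activation and tolerance are illustrative assumptions, not from the slides): the state is updated from the fixed input and its own previous value until it stops changing.

import math

# Iterate the recurrent update until the state reaches an equilibrium point.
def settle(x_input, w_in=0.8, w_rec=0.5, tol=1e-6, max_steps=1000):
    state = 0.0
    for _ in range(max_steps):
        new_state = math.tanh(w_in * x_input + w_rec * state)
        if abs(new_state - state) < tol:  # equilibrium reached
            return new_state
        state = new_state
    return state

print(settle(1.0))   # equilibrium for one input
print(settle(-1.0))  # a new input: the network settles to a new equilibrium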

Page 12: neural-networks (1)

Firing rules

The firing rule is an important concept in neural networks and accounts for their high flexibility.

A firing rule determines how one calculates whether a neuron should fire for any input pattern. It relates to all the input patterns, not only the ones on which the node was trained.

Page 13: neural-networks (1)

Take a collection of training patterns for a node: some inputs cause it to fire (taught as 1), while others prevent it from firing (taught as 0). What about the patterns not in the collection?

Here the node fires when the pattern has more input elements in common with the 'nearest' pattern in the 1-taught set than with the 'nearest' pattern in the 0-taught set.

If there is a tie, then the pattern remains in the undefined state.

Page 14: neural-networks (1)

For Example

A 3-input neuron is taught to:

output 1 when the input (X1, X2, X3) is 111 or 101;

output 0 when the input is 000 or 001.

Page 15: neural-networks (1)

Before generalization (only the taught patterns 111, 101 = 1 and 000, 001 = 0 are defined):

X1:  0  0  0    0    1    1  1    1
X2:  0  0  1    1    0    0  1    1
X3:  0  1  0    1    0    1  0    1
OUT: 0  0  0/1  0/1  0/1  1  0/1  1

After generalization (firing rule applied):

X1:  0  0  0  0    1    1  1  1
X2:  0  0  1  1    0    0  1  1
X3:  0  1  0  1    0    1  0  1
OUT: 0  0  0  0/1  0/1  1  1  1
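The generalized outputs above can be reproduced with a short sketch of the firing rule. This is my own implementation of the nearest-pattern comparison described on the previous slides, using the taught patterns from the example.

from itertools import product

taught_1 = [(1, 1, 1), (1, 0, 1)]  # patterns taught to fire
taught_0 = [(0, 0, 0), (0, 0, 1)]  # patterns taught not to fire

def hamming(p, q):
    # number of input elements in which two patterns differ
    return sum(a != b for a, b in zip(p, q))

def fire(pattern):
    d1 = min(hamming(pattern, t) for t in taught_1)
    d0 = min(hamming(pattern, t) for t in taught_0)
    if d1 < d0:
        return "1"
    if d0 < d1:
        return "0"
    return "0/1"  # tie: the pattern stays undefined

for p in product((0, 1), repeat=3):
    print(p, fire(p))
# Prints the "after generalization" row: 0 0 0 0/1 0/1 1 1 1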

Page 16: neural-networks (1)

Pattern Recognition

[Figure: a 3x3 input pattern (X11 ... X33) feeding three neurons, TAN1, TAN2 and TAN3, whose outputs are F1, F2 and F3]

Page 17: neural-networks (1)

Top Neuron:
X11: 0  0  0  0  1  1  1  1
X12: 0  0  1  1  0  0  1  1
X13: 0  1  0  1  0  1  0  1
OUT: 0  0  1  1  0  0  1  1

Middle Neuron:
X21: 0    0    0  0    1    1  1    1
X22: 0    0    1  1    0    0  1    1
X23: 0    1    0  1    0    1  0    1
OUT: 1    0/1  1  0/1  0/1  0  0/1  0

Bottom Neuron:
X31: 0  0  0  0  1  1  1  1
X32: 0  0  1  1  0  0  1  1
X33: 0  1  0  1  0  1  0  1
OUT: 0  0  1  1  0  0  1  0

Page 18: neural-networks (1)
Page 19: neural-networks (1)

Learning methods

Page 20: neural-networks (1)

Based on the memorization of patterns and the subsequent response of the network, learning can be divided into two paradigms:

Associative mapping, in which the network learns to produce a particular pattern on the set of output units whenever another particular pattern is applied on the set of input units. Associative mapping can generally be broken down into two mechanisms:

auto-association: an input pattern is associated with itself, and the states of the input and output units coincide. This is used to provide pattern completion, i.e. to produce a pattern whenever a portion of it or a distorted pattern is presented.

hetero-association: the network stores pairs of patterns, building an association between two sets of patterns. It is related to two recall mechanisms: nearest-neighbor recall, where the output pattern produced corresponds to the stored input pattern closest to the pattern presented, and interpolative recall, where the output pattern is a similarity-dependent interpolation of the stored patterns corresponding to the pattern presented. Yet another paradigm, a variant of associative mapping, is classification, i.e. when there is a fixed set of categories into which the input patterns are to be classified.

Regularity detection, in which units learn to respond to particular properties of the input patterns. Whereas in associative mapping the network stores the relationships among patterns, in regularity detection the response of each unit has a particular 'meaning'. This type of learning mechanism is essential for feature discovery and knowledge representation.
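As a small illustration of nearest-neighbor recall, hetero-association can be sketched as a lookup of the closest stored input. The stored pattern pairs here are hypothetical examples of my own, not from the slides.

# Store pairs of patterns and, given a possibly distorted input, return the
# output associated with the closest stored input pattern.
stored_pairs = {
    (1, 1, 0, 0): "class A",
    (0, 0, 1, 1): "class B",
}

def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))

def recall(pattern):
    nearest = min(stored_pairs, key=lambda key: hamming(pattern, key))
    return stored_pairs[nearest]

print(recall((1, 0, 0, 0)))  # distorted version of (1, 1, 0, 0) -> "class A"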

Page 21: neural-networks (1)

Conclusion

The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm (or understand the inner mechanism) in order to perform a specific task. They are also very well suited to real-time systems because of their fast response and computational times, which are due to their parallel architecture.

Neural networks also contribute to other areas of research, such as neurology and psychology. They are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain.

Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.

Finally, I would like to state that even though neural networks have huge potential, we will only get the best out of them when they are integrated with conventional computing, AI, fuzzy logic and related subjects.
