22.10.2005 Prof. Pushpak Bhattacharyya, IIT Bombay
CS 621 Artificial Intelligence
Lecture 28 – 22/10/05
Prof. Pushpak Bhattacharyya
Generating Function, Clustering, SOM, K-Means
Self Organization
Biological motivation: the brain has designated areas for specific tasks.
[Figure: the brain's 3 layers: Cerebrum, Cerebellum, Higher brain]
Maslow's hierarchy (top to bottom):
Search for meaning
Contributing to humanity
Achievement, recognition
Food, rest (survival)
The 3 layers again, mapped to needs:
Cerebrum (crucial for survival)
Cerebellum
Higher brain (responsible for higher needs)
Mapping of the Brain
Side areas: auditory information processing
Back of the brain: vision
The brain is very resilient, with a lot of redundancy: the auditory area can take over a visual task if the visual part is damaged.
Left Brain and Right Brain Dichotomy
Left brain: logic, reasoning, verbal ability
Right brain: emotion, creativity
Music: the words engage the left brain, the tune the right brain.
Maps in the brain: the limbs are mapped to areas of the brain.
[Figure: SOM architecture: a grid of output neurons connected to the input neurons]
Self-organization (the Kohonen network) fires a group of neurons instead of a single one. The group "somehow" produces a "picture" of the cluster.
Fundamentally, SOM is competitive learning, but weight changes are applied over a neighborhood: find the winner neuron, then apply the weight change to the winner and its "neighbors".
[Figure: the winner neuron at the center; the neurons on the surrounding contour are the "neighborhood" neurons]
Weight change rule for SOM, applied to the winner neuron P and every neuron in its neighborhood P ± δ(n):
W(n+1) = W(n) + η(n) (I(n) − W(n))
The neighborhood δ(n) is a function of n, and so is the learning rate η(n).
δ(n) is a decreasing function of n.
The learning rate η(n) is also a decreasing function of n:
0 < η(n) < η(n − 1) ≤ 1
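The rule above can be sketched as follows. This is a minimal illustration, assuming a 1-D output grid, Euclidean winner selection, and simple linearly decaying schedules for η(n) and δ(n); the grid size, input dimension, and schedule shapes are illustrative choices, not from the lecture.

```python
import random

def train_som(inputs, grid_size=10, dim=2, epochs=20, eta0=0.5, delta0=3):
    """Train a 1-D SOM on `inputs` (a list of dim-dimensional tuples)."""
    random.seed(0)
    # one weight vector W_j per output neuron, randomly initialised in [0, 1)
    W = [[random.random() for _ in range(dim)] for _ in range(grid_size)]
    for n in range(epochs):
        eta = eta0 * (1 - n / epochs)             # eta(n): decreasing in n
        delta = round(delta0 * (1 - n / epochs))  # delta(n): shrinking radius
        for x in inputs:
            # winner P: the neuron whose weight vector is closest to the input
            P = min(range(grid_size),
                    key=lambda j: sum((x[k] - W[j][k]) ** 2 for k in range(dim)))
            # apply the weight change to the winner and its neighbours P +/- delta(n)
            for j in range(max(0, P - delta), min(grid_size, P + delta + 1)):
                for k in range(dim):
                    W[j][k] += eta * (x[k] - W[j][k])
    return W
```

Because both η(n) and δ(n) shrink with n, early epochs move large stretches of the grid together (forming the global "picture"), while later epochs fine-tune individual winners.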
[Figure: the winner on the output grid with its neighborhood of radius δ(n)]
Convergence of the Kohonen network has not been proved except in one dimension.
References for the "Neural Net" part:
BP: McClelland and Rumelhart, Parallel Distributed Processing, Vol. 1, MIT Press, 1986.
Textbooks on Neural Networks by: a) Zurada b) Haykin c) Hassoun d) Schalkoff
Theoretical foundations of learning:
Framework: Probably Approximately Correct (PAC) learning, due to L. G. Valiant
PAC defines learning in a formal way.
[Figure: concept c and hypothesis h as overlapping regions; points in c but not in h are false misses, points in h but not in c are false hits]
Error region: C Δ h, the symmetric difference of C and h.
The goal of learning is to minimize P(C Δ h).
Probabilistic notions to address the question of minimizing C Δ h.
Assume a universe U, with C ⊆ U and h ⊆ U.
Assume a probability distribution on U (if U is discrete, a probability mass function):
0 ≤ P(X ∈ U) ≤ 1
P is the probability that an oracle presents the point X from U and also gives it a label of + / − (or 1 / 0).
Learning performance is tied to the probability distribution that generates and labels the points.
[Figure: the concept C as a region inside the universe U]
The PAC definition of learning:
Pr( P(C Δ h) ≤ ε ) ≥ 1 − δ
where ε is the accuracy parameter, δ is the confidence parameter, and P is the underlying probability distribution.
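The quantity inside the definition, P(C Δ h), can be made concrete with a small sketch. Here C and h are intervals on U = [0, 1] and P is uniform; these choices, and the function name, are illustrative, not from the lecture.

```python
import random

def error_probability(c, h, samples=100_000, seed=0):
    """Monte Carlo estimate of P(C delta h) for two intervals c = (c1, c2)
    and h = (h1, h2) on U = [0, 1], under the uniform distribution."""
    rng = random.Random(seed)
    disagree = 0
    for _ in range(samples):
        x = rng.random()            # the oracle draws X from U
        in_c = c[0] <= x <= c[1]
        in_h = h[0] <= x <= h[1]
        if in_c != in_h:            # X falls in the symmetric difference C delta h
            disagree += 1
    return disagree / samples
```

For example, with c = (0.2, 0.6) and h = (0.25, 0.6), the true value of P(C Δ h) is the length of the strip [0.2, 0.25), i.e. 0.05, so this h is ε-accurate for any ε ≥ 0.05.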
Pr( P(C Δ h) ≤ ε ) ≥ 1 − δ
[Figure: concept C and hypothesis h inside the universe U; the error region is C Δ h]
Example: rectangle learning.
[Figure: an axis-parallel rectangle ABCD in the x-y plane; + points inside, − points outside]
+ examples come from within the rectangle, − examples from outside it.
Algorithm:
1. Ignore the −ve examples.
2. Advance h as the tightest rectangle enclosing the +ve examples.
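The two steps above can be sketched directly; the function name and the example representation (a point paired with a '+'/'−' label) are illustrative choices.

```python
def learn_rectangle(examples):
    """examples: list of ((x, y), label) pairs with label '+' or '-'.
    Returns the hypothesis h as (xmin, ymin, xmax, ymax), or None if
    no positive example has been seen yet."""
    # step 1: ignore the negative examples
    positives = [p for p, label in examples if label == '+']
    if not positives:
        return None
    xs = [x for x, _ in positives]
    ys = [y for _, y in positives]
    # step 2: h is the tightest axis-parallel rectangle enclosing the positives
    return (min(xs), min(ys), max(xs), max(ys))
```

Note that h always lies inside the target rectangle, so every error is a false miss: the error region is the strip between h and the target.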
[Figure: the target rectangle ABCD containing the + examples; h is the tightest rectangle around the + points, and the strip between h and the target region is the error region]