
Page 1: 2806 Neural Computation Stochastic Machines, Lecture 10

2806 Neural Computation
Stochastic Machines

Lecture 10

2005 Ari Visa

Page 2: 2806 Neural Computation Stochastic Machines, Lecture 10

Agenda

Some historical notes
Some theory
Metropolis Algorithm
Simulated Annealing
Gibbs Sampling
Boltzmann Machine
Conclusions

Page 3: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Historical Notes

Statistical mechanics encompasses the formal study of macroscopic equilibrium properties of large systems of elements.

The "canonical distribution" (= Gibbs distribution = Boltzmann distribution) (Willard Gibbs, 1902)

The use of statistical mechanics as a basis for the study of neural networks (Cragg and Temperley, 1954; Cowan, 1968)

Page 4: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Historical Notes

The idea of introducing temperature and simulated annealing into combinatorial optimization problems (Kirkpatrick, Gelatt, and Vecchi, 1983).

The Boltzmann machine is among the first multilayer learning machines inspired by statistical mechanics (Hinton & Sejnowski, 1983, 1986; Ackley et al., 1985).

Page 5: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Theory

Consider a physical system with many degrees of freedom that can reside in any one of a large number of possible states. Let pi denote the probability of occurrence of state i, with pi ≥ 0 for all i and Σi pi = 1. Let Ei denote the energy of the system when it is in state i. When the system is in thermal equilibrium with its surrounding environment, state i occurs with probability

pi = (1/Z) exp(−Ei / kBT)

where T is the absolute temperature (K) and kB is Boltzmann's constant (this is the canonical or Gibbs distribution).

The normalizing quantity Z is called the sum over states or the partition function:

Z = Σi exp(−Ei / kBT)

Note two points: 1) States of low energy have a higher probability of occurrence than states of high energy. 2) As the temperature T is reduced, the probability is concentrated on a smaller subset of low-energy states.
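
To make point 2 concrete, here is a small illustrative sketch (the energy levels and temperatures are made-up values, not from the lecture) that evaluates the Gibbs distribution of a toy system with kB = 1; lowering T visibly concentrates the probability mass on the low-energy states.

```python
import numpy as np

def gibbs_distribution(energies, T):
    """Return p_i = exp(-E_i / T) / Z for a toy system (k_B = 1)."""
    # Shift by the minimum energy for numerical stability; the shift cancels in Z.
    logits = -(energies - energies.min()) / T
    p = np.exp(logits)
    return p / p.sum()                      # divide by Z = sum over states

energies = np.array([0.0, 1.0, 2.0, 3.0])   # made-up energy levels
for T in (5.0, 1.0, 0.1):
    print(T, gibbs_distribution(energies, T).round(3))
# As T is reduced, the mass concentrates on the lowest-energy state.
```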

Page 6: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Theory

The Helmholtz free energy F of a physical system is defined in terms of the partition function Z:

F = −T log Z

The average energy is <E> = Σi pi Ei and the entropy is H = −Σi pi log pi. They are related by

<E> − F = TH, that is, F = <E> − TH

For a system interacting with its environment, the total entropy change satisfies ΔH + ΔH' ≥ 0, where ΔH' is the entropy change of the environment. The principle of minimal free energy: the minimum of the free energy of a stochastic system with respect to the variables of the system is achieved at thermal equilibrium, at which point the system is governed by the Gibbs distribution.
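
As a quick numerical check (a sketch with made-up energies and kB = 1), the identity F = <E> − TH holds exactly when pi is the Gibbs distribution:

```python
import numpy as np

energies = np.array([0.0, 1.0, 2.0])    # made-up energy levels
T = 0.7
Z = np.exp(-energies / T).sum()         # partition function
p = np.exp(-energies / T) / Z           # Gibbs distribution

F = -T * np.log(Z)                      # Helmholtz free energy F = -T log Z
E_avg = (p * energies).sum()            # average energy <E>
H = -(p * np.log(p)).sum()              # entropy H
print(np.isclose(F, E_avg - T * H))     # True: F = <E> - T H
```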

Page 7: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Theory

Consider a system whose evolution is described by a stochastic process {Xn, n=1,2,...}, consisting of a family of random variables.

The value xn assumed by the random variable Xn at discrete time n is called the state of the system.

The space of all possible values that the random variables can assume is called the state space of the system.

Page 8: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Theory

If the structure of the stochastic process {Xn, n = 1, 2, ...} is such that the conditional probability distribution of Xn+1 depends only on the value of Xn and is independent of all previous values, the process is a Markov chain.

Markov property: P(Xn+1 = xn+1 | Xn = xn, ..., X1 = x1) = P(Xn+1 = xn+1 | Xn = xn)

A sequence of random variables X1, X2, ..., Xn, Xn+1 forms a Markov chain if the probability that the system is in state xn+1 at time n+1 depends exclusively on the state of the system at time n.

Page 9: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Theory

In a Markov chain, the transition from one state to another is probabilistic: pij = P(Xn+1 = j | Xn = i) denotes the transition probability from state i at time n to state j at time n+1 (pij ≥ 0 for all (i, j) and Σj pij = 1 for all i).

A Markov chain is homogeneous in time if the transition probabilities are fixed and do not change with time.

Let pij(m) denote the m-step transition probability from state i to state j:

pij(m) = P(Xn+m = xj | Xn = xi), m = 1, 2, ...

We may view pij(m+1) as the sum over all intermediate states k through which the system passes in its transition from state i to state j:

pij(m+1) = Σk pik(m) pkj, m = 1, 2, ..., with pik(1) = pik

More generally, pij(m+n) = Σk pik(m) pkj(n), m, n = 1, 2, ... (the Chapman-Kolmogorov identity).

When a state of the chain can only recur at time intervals that are multiples of d (where d is the largest such number), we say the state has period d.
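
For a homogeneous chain, the Chapman-Kolmogorov identity is just matrix multiplication of the transition matrix P with itself; a minimal sketch with a made-up two-state chain:

```python
import numpy as np

# Made-up transition matrix of a two-state homogeneous Markov chain.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

P2 = P @ P        # two-step transition probabilities p_ij(2)
P3 = P @ P @ P    # three-step transition probabilities p_ij(3)

# Chapman-Kolmogorov: p_ij(m+n) = sum_k p_ik(m) p_kj(n)
print(np.allclose(P3, P2 @ P))   # True
```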

Page 10: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Theory

The state i is said to be a recurrent state if the Markov chain returns to state i with probability 1: fi = P(ever returning to state i) = 1. If the probability fi is less than 1, the state i is said to be a transient state.

The state j of a Markov chain is said to be accessible from state i if there is a finite sequence of transitions from i to j with positive probability.

If the states i and j are accessible to each other, the states i and j of the Markov chain are said to communicate with each other.

If two states of a Markov chain communicate with each other, they belong to the same class.

If all the states belong to a single class, the Markov chain is said to be indecomposable or irreducible.

Page 11: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Theory

The mean recurrence time of state i is defined as the expectation of Ti(k) over the returns k, where Ti(k) denotes the time that elapses between the (k−1)th and kth returns to state i.

The steady-state probability of state i, denoted by πi, is equal to the reciprocal of the mean recurrence time: πi = 1 / E[Ti(k)].

If E[Ti(k)] < ∞, that is πi > 0, the state i is said to be a positive recurrent state. If E[Ti(k)] = ∞, that is πi = 0, the state i is said to be a null recurrent state.

Page 12: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Theory

Ergodicity means that we may substitute time averages for ensemble averages. In the context of a Markov chain, the long-term proportion of time spent by the chain in state i corresponds to the steady-state probability πi.

The proportion of time spent in state i after k returns is vi(k) = k / Σl=1..k Ti(l).

Consider an ergodic Markov chain characterized by a stochastic matrix P. Let the row vector π(n−1) denote the state distribution vector of the chain at time n−1; the jth element of π(n−1) is the probability that the chain is in state xj at time n−1. Then

π(n) = π(n−1) P

The state distribution vector of the Markov chain at time n is the product of the initial state distribution vector π(0) and the nth power of the stochastic matrix P:

π(n) = π(0) P^n
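
A minimal sketch (reusing the made-up two-state chain from above) showing that π(n) = π(0) P^n converges to the steady-state distribution of an ergodic chain:

```python
import numpy as np

P = np.array([[0.9, 0.1],      # made-up ergodic transition matrix
              [0.4, 0.6]])
pi = np.array([1.0, 0.0])      # initial state distribution pi(0)

for _ in range(100):           # pi(n) = pi(n-1) P
    pi = pi @ P

print(pi)          # ~[0.8, 0.2]: the steady-state probabilities
print(pi @ P)      # unchanged, since pi = pi P at equilibrium
```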

Page 13: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Theory

The ergodicity theorem: see Eqs. 11.24-11.27.

Page 14: 2806 Neural Computation Stochastic Machines, Lecture 10

Some Theory

The principle of detailed balance states that at thermal equilibrium, the rate of occurrence of any transition equals the corresponding rate of occurrence of the inverse transition.

πi pij = πj pji

Page 15: 2806 Neural Computation Stochastic Machines, Lecture 10

Metropolis Algorithm

The Metropolis algorithm (Metropolis et al., 1953) is a modified Monte Carlo method for stochastic simulation of a collection of atoms in equilibrium at a given temperature.

Suppose the random variable Xn, representing an arbitrary Markov chain, is in state xi at time n. We randomly generate a new state xj, representing a realization of another random variable Yn. The generation of this new state satisfies the symmetry condition: P(Yn = xj | Xn = xi) = P(Yn = xi | Xn = xj)

Let ∆E = Ej - Ei denote the energy difference resulting from the transition of the system from state Xn= xi to state Yn = xj.

1) ∆E < 0: We find that πj/πi = exp(−∆E/T) > 1, so the transition to the new state is accepted. 2) ∆E > 0: We find that πj/πi = exp(−∆E/T) < 1, so the transition is accepted only with probability exp(−∆E/T); otherwise the chain remains in state xi. In either case, detailed balance πi pij = πj pji is satisfied with respect to the Gibbs distribution. The a priori transition probabilities τij are in fact the probabilistic model of the random step in the Metropolis algorithm.
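
A minimal sketch of the Metropolis acceptance rule (the energy function and proposal here are made-up illustrations, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    return x * x                           # made-up energy function

def metropolis_step(x, T, step=1.0):
    y = x + rng.uniform(-step, step)       # symmetric proposal: P(y|x) = P(x|y)
    dE = energy(y) - energy(x)             # energy difference Delta E
    if dE < 0 or rng.random() < np.exp(-dE / T):
        return y                           # accept the transition
    return x                               # reject: remain in the current state

x = 5.0
for _ in range(10000):
    x = metropolis_step(x, T=0.5)
# The visited states are asymptotically distributed as p(x) ∝ exp(-x²/T).
```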

Page 16: 2806 Neural Computation Stochastic Machines, Lecture 10

Simulated Annealing

Simulated annealing rests on two ingredients: 1) A schedule that determines the rate at which the temperature is lowered. 2) An algorithm that iteratively finds the equilibrium distribution at each new temperature in the schedule by using the final state of the system at the previous temperature as the starting point for the new temperature (Kirkpatrick et al., 1983).

The Metropolis algorithm is the basis for the simulated annealing process, with the temperature T playing the role of a control parameter. The simulated annealing process will converge to a configuration of minimal energy provided that the temperature is decreased no faster than logarithmically. This is too slow to be of practical use, so a finite-time approximation is used instead, which is no longer guaranteed to find a global minimum with probability one.

Page 17: 2806 Neural Computation Stochastic Machines, Lecture 10

Simulated Annealing

To implement a finite-time approximation of the simulated annealing algorithm, we must specify a set of parameters governing the convergence of the algorithm. These parameters are combined in a so-called annealing schedule or cooling schedule. The annealing schedule specifies a finite sequence of values of the temperature and a finite number of transitions attempted at each value of the temperature.

Page 18: 2806 Neural Computation Stochastic Machines, Lecture 10

Simulated Annealing

Initial Value of the Temperature. The initial value T0 of the temperature is chosen high enough to ensure that virtually all proposed transitions are accepted by the simulated annealing algorithm.

Decrement of the Temperature. Ordinarily, the cooling is performed exponentially, and the changes made in the value of the temperature are small. The decrement function is defined by Tk = αTk−1, k = 1, 2, ..., where α is a constant in [0.8, 0.99]. At each temperature, enough transitions are attempted so that there are 10 accepted transitions per experiment on average.

Final Value of the Temperature. The system is frozen and annealing stops if the desired number of acceptances is not achieved at three successive temperatures.
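
Putting the schedule together with the metropolis_step sketch from the previous page, a minimal annealing loop might look like this (T0, alpha, and the try counts are illustrative values consistent with the slide, not prescribed by it):

```python
def simulated_annealing(x0, T0=10.0, alpha=0.9, accepts_wanted=10,
                        max_tries=200, frozen_limit=3):
    """Sketch of a finite-time annealing schedule around metropolis_step."""
    x, T, frozen = x0, T0, 0
    while frozen < frozen_limit:            # freeze after 3 cold temperatures
        accepted = 0
        for _ in range(max_tries):          # attempt transitions at this T
            x_new = metropolis_step(x, T)
            if x_new != x:                  # proposal was accepted
                accepted += 1
            x = x_new
            if accepted >= accepts_wanted:  # ~10 accepted transitions suffice
                break
        frozen = frozen + 1 if accepted < accepts_wanted else 0
        T *= alpha                          # exponential cooling T_k = alpha * T_{k-1}
    return x

print(simulated_annealing(5.0))   # ends near the energy minimum at x = 0
```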

Page 19: 2806 Neural Computation Stochastic Machines, Lecture 10

Gibbs Sampling

The Gibbs sampler generates a Markov chain with the Gibbs distribution as its equilibrium distribution. The transition probabilities associated with the Gibbs sampler are nonstationary.

1) Each component of the random vector X is visited in the natural order, with the result that a total of K new variates are generated on each iteration.

2) The new value of component Xk−1 is used immediately when a new value of Xk is drawn, for k = 2, 3, ..., K.

The Gibbs sampler is an iterative adaptive scheme.
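
A minimal sketch of the component-wise visiting scheme, using a bivariate Gaussian with correlation rho as the target (a made-up example whose conditionals are known in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                       # made-up correlation of the target

def gibbs_sweep(x):
    # Visit the components in natural order; each draw immediately uses
    # the freshest value of the other component.
    x[0] = rng.normal(rho * x[1], np.sqrt(1 - rho**2))   # X1 | X2 = x2
    x[1] = rng.normal(rho * x[0], np.sqrt(1 - rho**2))   # X2 | X1 = x1
    return x

x = np.zeros(2)
samples = np.array([gibbs_sweep(x).copy() for _ in range(5000)])
print(np.corrcoef(samples[1000:].T))   # off-diagonal ≈ rho after burn-in
```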

Page 20: 2806 Neural Computation Stochastic Machines, Lecture 10

Gibbs Sampler

(See Eqs. 11.35, 11.36, and 11.37.)

Page 21: 2806 Neural Computation Stochastic Machines, Lecture 10

Boltzmann Machine

The primary goal of Boltzmann learning is to produce a neural network that correctly models input patterns according to a Boltzmann distribution.

The Boltzmann machine consists of stochastic neurons. A stochastic neuron resides in one of two possible states (± 1) in a probabilistic manner.

The machine uses symmetric synaptic connections between neurons.

The stochastic neurons partition into two functional groups: visible and hidden.

Page 22: 2806 Neural Computation Stochastic Machines, Lecture 10

Boltzmann Machine

During the training phase of the network, the visible neurons are all clamped onto specific states determined by the environment.

The hidden neurons always operate freely; they are used to explain underlying constraints contained in the environmental input vectors.

This is accomplished by capturing higher-order statistical correlations in the clamping vectors.

The network can perform pattern completion provided that it has learned the training distribution properly.

Page 23: 2806 Neural Computation Stochastic Machines, Lecture 10

Boltzmann Machine

Let x denote the state of the Boltzmann machine, with its component xi denoting the state of neuron i. The state x represents a realization of the random vector X. The synaptic connection from neuron i to neuron j is denoted by wji, with wji = wij for all (i, j) and wii = 0 for all i.

E(x) = −(1/2) Σi Σj wji xi xj, i ≠ j

P(X = x) = (1/Z) exp(−E(x)/T)

The probability that neuron j is in state xj, given the states of the other neurons, is φ((xj/T) Σi wji xi), where φ(·) is a sigmoid function of its argument. Gibbs sampling and simulated annealing are used to simulate the machine.
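
A minimal sketch of the energy function and the stochastic neuron update (random symmetric weights, illustrative only; the factor of 2 in the sigmoid is one common convention for ±1 units):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5
W = rng.normal(size=(K, K))
W = (W + W.T) / 2              # symmetric weights: w_ji = w_ij
np.fill_diagonal(W, 0.0)       # no self-connections: w_ii = 0

def energy(x):
    return -0.5 * x @ W @ x    # E(x) = -1/2 sum_i sum_j w_ji x_i x_j

def update_neuron(x, j, T=1.0):
    # P(x_j = +1 | all other neurons) for ±1 units.
    p = 1.0 / (1.0 + np.exp(-2.0 * (W[j] @ x) / T))
    x[j] = 1.0 if rng.random() < p else -1.0
    return x

x = rng.choice([-1.0, 1.0], size=K)
for j in range(K):             # one Gibbs sweep over the neurons
    x = update_neuron(x, j)
print(energy(x))
```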

Page 24: 2806 Neural Computation Stochastic Machines, Lecture 10

Boltzmann Machine

The goal of Boltzmann learning is to maximize the likelihood or log-likelihood function in accordance with the maximum-likelihood principle.

Positive phase. In this phase the network operates in its clamped condition. Negative phase. In this second phase, the network is allowed to run freely, and therefore with no environmental input.

Given a training set T of visible-state vectors, the log-likelihood function is

L(w) = log Π(xα ∈ T) P(Xα = xα) = Σ(xα ∈ T) (log Σxβ exp(−E(x)/T) − log Σx exp(−E(x)/T))

where the first inner sum runs over the hidden states xβ with the visible neurons clamped to xα, and the second runs over all states x. Differentiating L(w) with respect to wji introduces the mean correlations ρ+ji (clamped condition) and ρ−ji (free-running condition):

∆wji = ε ∂L(w)/∂wji = η(ρ+ji − ρ−ji)

where η is a learning-rate parameter, η = ε/T.

From a learning point of view, the two terms that constitute the Boltzmann learning rule have opposite meanings: ρ+ji, corresponding to the clamped condition of the network, is a Hebbian learning term; ρ−ji, corresponding to the free-running condition of the network, is an unlearning (forgetting) term.

We also have a primitive form of an attention mechanism. The two-phase approach and, specifically, the negative phase also mean increased computational time and sensitivity to statistical errors.
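
A minimal sketch of the resulting update rule, assuming the clamped and free-running correlations have already been estimated by Gibbs sampling (rho_plus and rho_minus are placeholders for those estimates):

```python
import numpy as np

def boltzmann_update(W, rho_plus, rho_minus, eps=0.1, T=1.0):
    """Delta w_ji = eta (rho+_ji - rho-_ji), with eta = eps / T."""
    eta = eps / T
    dW = eta * (rho_plus - rho_minus)   # Hebbian term minus unlearning term
    np.fill_diagonal(dW, 0.0)           # keep w_ii = 0
    return W + dW
```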

Page 25: 2806 Neural Computation Stochastic Machines, Lecture 10

Sigmoid Belief Networks

Sigmoid belief networks or logistic belief nets (Neal, 1992) were developed to find a stochastic machine that would share with the Boltzmann machine the capacity to learn arbitrary probability distributions over binary vectors, but would not need the negative phase of the Boltzmann machine learning procedure.

This objective was achieved by replacing the symmetric connections of the Boltzmann machine with directed connections that form an acyclic graph.

A sigmoid belief network consists of a multilayer architecture with binary stochastic neurons. The acyclic nature of the machine makes it easy to perform probabilistic calculations.


Page 26: 2806 Neural Computation Stochastic Machines, Lecture 10

Sigmoid Belief Networks

Let the vector X, consisting of two-valued random variables X1,X2,...,XN, define a sigmoid belief network composed of N stochastic neurons.

The parents of element Xj in X are denoted by pa(Xj) ⊆ {X1, X2, ..., Xj−1}.

pa(Xj) is the smallest subset of the random vector X for which

P(Xj = xj | X1 = x1, ..., Xj−1 = xj−1) = P(Xj = xj | pa(Xj)) = φ((xj/T) Σi<j wji xi)

Note that 1) wji = 0 for all Xi not belonging to pa(Xj), and 2) wji = 0 for all i ≥ j.
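
Because the graph is acyclic, drawing a sample is a single ordered pass over the neurons; a minimal sketch with random lower-triangular weights (±1 units and T = 1 assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 6, 1.0
W = np.tril(rng.normal(size=(N, N)), k=-1)    # w_ji = 0 for i >= j (acyclic)

def sample():
    x = np.zeros(N)
    for j in range(N):                         # natural (ancestral) order
        v = W[j] @ x                           # input from parents only
        p = 1.0 / (1.0 + np.exp(-2.0 * v / T)) # P(x_j = +1 | parents)
        x[j] = 1.0 if rng.random() < p else -1.0
    return x

print(sample())
```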

Page 27: 2806 Neural Computation Stochastic Machines, Lecture 10

Sigmoid Belief Networks

Learning: It is assumed that each sample is two-valued, representing certain attributes. Repetition of training examples is permitted, in proportion to how commonly a particular combination of attributes is known to occur.

1. Some size for a state vector x is decided for the network.

2. A subset of the state vector, say xα, is selected to represent the attributes in the training cases; that is, xα represents the state vector of the visible neurons.

3. The remaining part of the state vector x, denoted by xβ, defines the state vector of the hidden neurons.

Different arrangements of visible and hidden neurons may result in different configurations!

The log-likelihood function is L(w) = Σ(xα ∈ T) log P(Xα = xα). Differentiating L(w) with respect to wji gives

∆wji = ε ∂L(w)/∂wji = η ρji

where η is a learning-rate parameter, η = ε/T, and ρji = Σxα Σxβ P(X = x | Xα = xα) φ(−(xj/T) Σi<j wji xi) xj xi, which is an average correlation between the states of neurons j and i, weighted by the factor φ(−(xj/T) Σi<j wji xi).

Page 28: 2806 Neural Computation Stochastic Machines, Lecture 10

Sigmoid Belief Networks

See Table 11.2.

Page 29: 2806 Neural Computation Stochastic Machines, Lecture 10

Helmholtz Machine

The Helmholtz machine (Dayan et al., 1995; Hinton et al., 1995) uses two entirely different sets of synaptic connections.

The forward connections constitute the recognition model. The purpose of this model is to infer a probability distribution over the underlying causes of the input vector.

The backward connections constitute the generative model. The purpose of this second model is to reconstruct an approximation to the original input vector from the underlying representations captured by the hidden layers of the network, thereby enabling it to operate in a self-supervised manner.

Both the recognition and generative models operate in a strictly feedforward fashion, with no feedback; they interact with each other only via the learning procedure.

Page 30: 2806 Neural Computation Stochastic Machines, Lecture 10

Helmholtz Machine

The wake-sleep algorithm is used for calculating the recognition and generative weights.

Wake phase: the network is driven in the forward direction by the recognition weights. A representation of the input vector is thereby produced in the first hidden layer. This is followed by a second representation of that first representation, which is produced in the second hidden layer, and so on for the other hidden layers of the network.

In the sleep phase, the recognition weights are turned off. The network is driven in the backward direction by the generative weights, starting at the outermost hidden layer and working backwards layer by layer, all the way to the input layer. Because the neurons are stochastic, repeating this process typically gives rise to many different "fantasy" vectors on the input layer. These fantasies supply an unbiased sample of the network's generative model of the world.

The delta rule is used to adjust the recognition weights. The learning rule for the generative weights also uses the delta rule; however, this rule follows the gradient of a penalized log-likelihood function (a Kullback-Leibler divergence).
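
A minimal sketch of one wake-phase step for a single hidden layer (R and G are hypothetical recognition and generative weight matrices; binary units; a simplified illustration of the delta rule, not the full algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def wake_step(x, R, G, lr=0.05):
    # Drive the network forward with the recognition weights R.
    h = (rng.random(R.shape[0]) < sigmoid(R @ x)).astype(float)
    # Adjust the generative weights G to reconstruct x from h (delta rule).
    G += lr * np.outer(x - sigmoid(G @ h), h)
    return h, G

x = rng.choice([0.0, 1.0], size=8)        # a binary input vector
R = rng.normal(scale=0.1, size=(4, 8))    # recognition weights
G = rng.normal(scale=0.1, size=(8, 4))    # generative weights
h, G = wake_step(x, R, G)
```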

Page 31: 2806 Neural Computation Stochastic Machines, Lecture 10

Mean-Field Theory

The use of mean-field theory as the mathematical basis for deriving deterministic approximations to the stochastic machines, in order to speed up learning:

1. Correlations are replaced by their mean-field approximations.

2. An intractable model is replaced by a tractable model via a variational principle.

Page 32: 2806 Neural Computation Stochastic Machines, Lecture 10

Summary

Some ideas rooted in statistical mechanics have been presented. The Boltzmann machine uses hidden and visible neurons that are in the form of stochastic, binary-state units. It exploits the properties of the Gibbs distribution, thereby offering some appealing features:

Through training, the probability distribution exhibited by the neurons is matched to that of the environment.

The network offers a generalized approach that is applicable to the basic issues of search, representation, and learning.

The network is guaranteed to find the global minimum of the energy surface with respect to the states, provided that the annealing schedule in the learning process is performed slowly enough.