
Page 1: Artificial Spiking Neural Networks

Artificial Spiking Neural Networks

Sander M. Bohte

CWI Amsterdam

The Netherlands

Page 2: Artificial Spiking Neural Networks

Overview

• From neurones to neurons
• Artificial Spiking Neural Networks (ASNN)
  – Dynamic Feature Binding
  – Computing with spike-times
  – Neurons-to-neurones
  – Computing graphical models in ASNN

• Conclusion

Page 3: Artificial Spiking Neural Networks

Of neurones and neurons

• Artificial Neural Networks
  – (neuro)biology -> Artificial Intelligence (AI)
  – a model of how we think the brain processes information
• New data on how the brain works!
  – Artificial Spiking Neural Networks

Page 4: Artificial Spiking Neural Networks

Real Neurons

• Real cortical neurons communicate with spikes or action potentials

[Figure: incoming spike and the resulting current response, the 'EPSC']

Page 5: Artificial Spiking Neural Networks

Real Neurons

• The artificial sigmoidal neuron models the rate at which spikes are generated

• the artificial neuron computes a function of its weighted input:

$x_j = f\left(\sum_i w_{ij} x_i\right)$
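A minimal Python sketch of this rate-coded neuron (the logistic choice of f and the example numbers are illustrative assumptions, not from the slides):

```python
import numpy as np

def sigmoid(u):
    """A common choice for the squashing function f."""
    return 1.0 / (1.0 + np.exp(-u))

def sigmoidal_neuron(x, w):
    """Rate-coded artificial neuron: x_j = f(sum_i w_ij * x_i)."""
    return sigmoid(np.dot(w, x))

x = np.array([0.2, 0.9, 0.5])   # input "rates"
w = np.array([0.4, -0.7, 1.1])  # synaptic weights (illustrative values)
print(sigmoidal_neuron(x, w))   # output rate of neuron j
```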

Page 6: Artificial Spiking Neural Networks

Artificial Neural Networks

• Artificial Neural Networks can:
  – approximate any function (Multi-Layer Perceptrons)
  – act as associative memory (Hopfield networks, Sparse Distributed Memory)
  – learn temporal sequences (Recurrent Neural Networks)

Page 7: Artificial Spiking Neural Networks

ANNs

• BUT...
  – for AI, neural networks are not competitive (classification/clustering)
  – ... or not suitable: structured learning/representation (the "binding" problem, e.g. grammar)
  – and they scale poorly (networks of networks of networks...)
  – for understanding the brain, the neuron model is wrong: individual spikes are important, not just the rate

Page 8: Artificial Spiking Neural Networks

Dynamic Feature Binding

• “bind” local features into coherent percepts:

Page 9: Artificial Spiking Neural Networks

Binding

• representing multiple objects?

• like language without grammar! (i.e. no predicates)


Page 10: Artificial Spiking Neural Networks

Binding

• Conjunction coding:


Page 11: Artificial Spiking Neural Networks

Binding

• Synchronizing spikes?

Page 12: Artificial Spiking Neural Networks

New Data!

• neurons belonging to the same percept tend to synchronize (Gray & Singer, Nature 1989)

• timing of (single) spikes can be remarkably reproducible
  – fly: the same stimulus (movie) evokes the same spike to within ±1 ms

• spikes are rare: average brain activity < 1 Hz
  – "rates" are not energy efficient

Page 13: Artificial Spiking Neural Networks

Computing with Spikes

• Computing with precisely timed spikes is more powerful than computing with "rates" (VC dimension of spiking neuron models) [W. Maass and M. Schmitt, 1999]

• Artificial Spiking Neural Networks?? [W. Maass, Neural Networks, 10, 1997]

Page 14: Artificial Spiking Neural Networks

Artificial Spiking Neuron

• The "state" (= membrane potential) is a weighted sum of impinging spikes
  – a spike is generated when the potential crosses the threshold; the potential is then reset
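A toy threshold-and-reset loop makes the dynamics concrete (a minimal leaky integrate-and-fire sketch; all names and constants are illustrative assumptions, not the exact model from the slides):

```python
import numpy as np

def lif(input_current, dt=1.0, tau_m=10.0, threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: integrate input, spike at threshold, reset."""
    v, spike_times = 0.0, []
    for step, i_in in enumerate(input_current):
        v += (dt / tau_m) * (-v + i_in)    # leaky integration of the input
        if v >= threshold:                 # potential crosses the threshold...
            spike_times.append(step * dt)  # ...so emit a spike
            v = v_reset                    # ...and reset the potential
    return spike_times

print(lif(np.full(100, 1.5)))  # constant drive -> a regular spike train
```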

Page 15: Artificial Spiking Neural Networks

Artificial Spiking Neuron

• Spike-Response Model:

– where ε(t) is the kernel describing how a single spike changes the potential:

$\varepsilon(t) = \frac{t}{\tau}\, e^{\,1 - t/\tau}$

(the PSP: it rises to its maximum at t = τ, then decays)
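A small sketch of the Spike-Response Model built from this kernel (the threshold, time constant, and example spike times are illustrative assumptions):

```python
import numpy as np

TAU = 7.0        # PSP time constant (ms), illustrative
THRESHOLD = 1.0  # firing threshold, illustrative

def epsilon(t, tau=TAU):
    """PSP kernel eps(t) = (t/tau) * exp(1 - t/tau) for t > 0, else 0."""
    return np.where(t > 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def potential(t, spike_times, weights):
    """Membrane potential u(t) = sum_i w_i * eps(t - t_i)."""
    return sum(w * epsilon(t - ti) for w, ti in zip(weights, spike_times))

# the output spike is the first time u(t) crosses the threshold
ts = np.arange(0.0, 40.0, 0.1)
u = np.array([potential(t, [2.0, 5.0], [0.6, 0.8]) for t in ts])
print("output spike at t =", ts[np.argmax(u >= THRESHOLD)] if u.max() >= THRESHOLD else None)
```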

Page 16: Artificial Spiking Neural Networks

Artificial Spiking Neural Network

• Network of spiking neurons:

Page 17: Artificial Spiking Neural Networks

Error-backpropagation in ASNN

• Encode "XOR" in (relative) spike-times
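One illustrative encoding (the specific times are assumptions for this sketch, not necessarily the values used in the original experiments): a binary 0 becomes an early input spike, a 1 a late one, and the target is an early or late output spike time:

```python
# Illustrative spike-time encoding of XOR (all times in ms are assumed values)
EARLY, LATE = 0.0, 6.0            # input spike times encoding bits 0 / 1
OUT_TRUE, OUT_FALSE = 10.0, 16.0  # desired output spike times for 1 / 0

xor_patterns = {
    (EARLY, EARLY): OUT_FALSE,  # 0 XOR 0 = 0 -> late output spike
    (EARLY, LATE):  OUT_TRUE,   # 0 XOR 1 = 1 -> early output spike
    (LATE,  EARLY): OUT_TRUE,   # 1 XOR 0 = 1 -> early output spike
    (LATE,  LATE):  OUT_FALSE,  # 1 XOR 1 = 0 -> late output spike
}
```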

Page 18: Artificial Spiking Neural Networks

XOR in ASNN

• Change weights according to gradient descent using error-backpropagation (Bohte et al., Neurocomputing 2002)

• Also effective for unsupervised learning (Bohte et al., IEEE Trans. Neural Networks 2002)
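In sketch form (this states the least-squares-on-spike-times idea, not the paper's full derivation), the error is taken over actual vs. desired output spike times and the weights follow its gradient:

$E = \tfrac{1}{2}\sum_j \left(t_j^{\mathrm{actual}} - t_j^{\mathrm{desired}}\right)^2, \qquad \Delta w_{ij} = -\eta\, \frac{\partial E}{\partial w_{ij}}$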

Page 19: Artificial Spiking Neural Networks

Computing Graphical Models

• What kind of intelligent computing can we do?

• recent work: computing Hidden Markov Models in noisy recurrent ASNN (Rao, NIPS 2004; Zemel et al., NIPS 2004)

Page 20: Artificial Spiking Neural Networks

From Neurons to Neurones

• artificial spiking neurons are a fairly accurate model of real neurons

• learning rules -> predictions for real neuronal behavior

• example: reducing response variance in a stochastic spiking neuron yields a learning rule like the one found in biology (Bohte & Mozer, NIPS 2004)

Page 21: Artificial Spiking Neural Networks

STDP from variance reduction

• neurons fire stochastically as a function of membrane potential

• Good idea to minimize response variability:
  – response entropy:
  – gradient:
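The formulas on this slide were images; a plausible generic form of the quantities named (an assumption following the entropy-minimization idea, not the exact expressions from Bohte & Mozer):

$H(R \mid \mathbf{w}) = -\sum_{r} P(r \mid \mathbf{w}) \log P(r \mid \mathbf{w}), \qquad \Delta w_i \propto -\frac{\partial H}{\partial w_i}$

where $R$ ranges over the neuron's possible spike responses and $\mathbf{w}$ are the synaptic weights.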

Page 22: Artificial Spiking Neural Networks

STDP?

• Spike-timing dependent plasticity:
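The STDP figure did not survive extraction; a generic exponential STDP window of the kind usually plotted there (shape and constants are illustrative, not the measured curve):

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
    """Weight change vs. spike-timing difference dt = t_post - t_pre (ms):
    pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0) depresses."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

print(stdp_window(np.array([-20.0, -5.0, 5.0, 20.0])))
```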

Page 23: Artificial Spiking Neural Networks

Variance Reduction

• Simulate the STDP experiment (Bohte & Mozer, 2005):

• predicts how the shape of the STDP curve depends on the neuron's parameters

Page 24: Artificial Spiking Neural Networks

STDP -> ASNN

• Variance reduction replicates experimental results.

• Suggests: learning in ASNN based on
  – (mutual) information maximization
  – minimum description length (MDL)
  (both rest on similar entropy considerations)

• Suggests: new biological experiments

Page 25: Artificial Spiking Neural Networks

Hidden Markov Model

• Bayesian inference in a simple single-level model (Rao, NIPS 2004):

• hidden state of model at time t

Page 26: Artificial Spiking Neural Networks

• Let … be the observable output at time t
• probability:
• forward component of belief propagation:
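The equations here were slide images; a standard statement of the HMM forward recursion they describe, in hypothetical notation (hidden state θ_t, observation I(t), following the setup of Rao, NIPS 2004 but reconstructed here, not copied):

$P(\theta_t^i \mid I(t),\ldots,I(1)) \;\propto\; P(I(t) \mid \theta_t^i) \sum_j P(\theta_t^i \mid \theta_{t-1}^j)\, P(\theta_{t-1}^j \mid I(t-1),\ldots,I(1))$

And as runnable code, with toy numbers (illustrative only):

```python
import numpy as np

# One step of HMM belief propagation (the computation the recurrent ASNN
# is said to implement). T[i, j] = P(state i | previous state j);
# O[i, k] = P(observation k | state i). All numbers are toy values.
def forward_step(belief, obs, T, O):
    predicted = T @ belief              # sum_j P(i | j) * belief_j
    posterior = O[:, obs] * predicted   # weight by observation likelihood
    return posterior / posterior.sum()  # normalize to a distribution

T = np.array([[0.9, 0.2], [0.1, 0.8]])
O = np.array([[0.8, 0.2], [0.3, 0.7]])
belief = np.array([0.5, 0.5])
for obs in [0, 0, 1, 1]:
    belief = forward_step(belief, obs, T, O)
print(belief)  # posterior over hidden states given all observations so far
```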

Page 27: Artificial Spiking Neural Networks

Bayesian SNN

• Recurrent spiking neural network:

Page 28: Artificial Spiking Neural Networks

Bayesian SNN

• Current spike-rate:

• The probability of spiking is directly proportional to the posterior probability of the neuron's preferred state and the current input, given all past inputs

• Generalizes to Hierarchical Inference

Page 29: Artificial Spiking Neural Networks

Conclusion

• new neural networks: Artificial Spiking Neural Networks

• can do what traditional ANNs can
• we are researching how to use these networks in more interesting ways
• many open directions:
  – Bayesian inference / graphical models in ASNN
  – MDL / information-theory based learning
  – distributed coding for the binding problem in ASNN
  – applying agent-based reward-distribution ideas to scale learning in large neural nets