CRCNS conference 2016 - WG3 (crcns2016.anr.fr)
TRANSCRIPT
Omri Barak
October, 2016
Mixed selectivity and reservoir computing
Understanding the brain
Neural correlates of behavior
A model (more or less formal) that links neural activity to behavior
Neural correlates
Traditionally answered by considering single neurons.
Nice neurons for nice models
Clear (pure) selectivity inspires models
Serre, Oliva, Poggio 2007; Ben-Yishai et al 1995; Blumenfeld et al 2006; Machens et al 2005
Input/output specification
Low dimensional dynamics
High dimensional neural network
Behavior
Neural data
The conventional way of understanding
Formalize:
Understand:
Implement:
Mixed selectivity
Neurons change their tuning based on context.
Barak et al 2010; Rigotti et al 2013
Raposo et al 2014
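The contrast can be sketched numerically. In this toy example (an illustration of the concept only, not a model from the cited papers; all parameter values are assumptions), a "pure" unit keeps the same stimulus tuning in every context, while a "mixed" unit's tuning depends on the context:

```python
# Toy contrast between pure and mixed selectivity (illustrative assumption,
# not a reimplementation of the cited models).
import numpy as np

stimuli = [-1.0, 1.0]

def pure_unit(s, c):
    # Responds to the stimulus only: tuning is identical in every context.
    return np.tanh(s)

def mixed_unit(s, c):
    # Responds to a nonlinear combination of stimulus and context,
    # so its stimulus tuning changes with the context.
    return np.tanh(s * c + 0.5 * c)

for c in (-1.0, 1.0):
    pure = [pure_unit(s, c) for s in stimuli]
    mixed = [mixed_unit(s, c) for s in stimuli]
    print(f"context {c:+.0f}: pure prefers s={stimuli[int(np.argmax(pure))]:+.0f}, "
          f"mixed prefers s={stimuli[int(np.argmax(mixed))]:+.0f}")
```

Running this, the pure unit prefers the same stimulus in both contexts, whereas the mixed unit's preferred stimulus flips with the context.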
Input/output specification
Low dimensional dynamics
High dimensional neural network
Implementation
Behavior
Neural data
Algorithmic hypothesis
Machine learning
The conventional way of understanding
Recurrent neural networks (Echo state, Liquid state, Reservoir)
Dominey et al 1995; Buonomano and Merzenich 1995; Jaeger 2001; Maass et al 2002
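A minimal echo-state-style sketch of the reservoir idea (the network size, gain, and phase-shift task below are my assumptions for illustration): the recurrent weights stay fixed and random, and only a linear readout is trained.

```python
# Echo-state-network sketch: fixed random reservoir, trained linear readout.
# All parameter choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 500
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius ~0.9
w_in = rng.normal(0.0, 1.0, N)

u = np.sin(np.arange(T) * 0.1)                   # input signal
target = np.sin(np.arange(T) * 0.1 + 1.0)        # desired output: phase-shifted copy

# Run the reservoir; the recurrent weights are never modified.
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Train the readout by ridge regression - the only learned parameters.
lam = 1e-4
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)
pred = X @ w_out
mse = np.mean((pred[100:] - target[100:]) ** 2)  # skip the initial transient
```

The design point: because learning touches only `w_out`, training reduces to a single linear regression over recorded reservoir states.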
Comparison to experiment
[Figure: firing rate vs. time (s), Simulation and Experiment panels; Barak et al. 2013]
Mixed selectivity
[Figure: correlation coefficient vs. time (s), and scatter plots of a1 (stim) and a1 (mid delay) vs. a1 (end delay), for Engineered (A1-A3), Trained (1-3), and Data (D1-D3) networks; Barak et al. 2013]
The machine learning way (Recurrent neural networks)
Train
It works!
It has some stuff that looks like neurons!
We have no clue how it works…
But how does it work??
Input/output specification
Low dimensional dynamics
High dimensional neural network
Implementation
Behavior
Neural data
Algorithmic hypothesis
Machine learning
Reverse engineering
Opening the black box
We developed an algorithm to find fixed points in trained neural networks.
Sussillo & Barak 2013
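The core of the fixed-point search can be sketched as follows (a schematic reimplementation of the idea on an assumed toy network, not the authors' code): treat the squared one-step change q(x) = 0.5 * ||F(x) - x||^2 as a cost and minimize it from many initial states; minima with q near zero are fixed points.

```python
# Sketch of fixed-point finding in the spirit of Sussillo & Barak (2013):
# minimize q(x) = 0.5 * ||F(x) - x||^2 over the state x.
# The small random tanh network below is an illustrative assumption.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 20
g = 0.5                                      # sub-critical gain: origin is stable
W = rng.normal(0.0, g / np.sqrt(N), (N, N))

def step(x):
    """One step of the assumed discrete-time RNN: x -> tanh(W x)."""
    return np.tanh(W @ x)

def q_and_grad(x):
    """Cost q(x) = 0.5 * ||step(x) - x||^2 and its analytic gradient."""
    fx = step(x)
    d = fx - x
    grad = W.T @ ((1.0 - fx**2) * d) - d
    return 0.5 * d @ d, grad

fixed_points = []
for _ in range(10):                          # search from many random states
    x0 = rng.normal(0.0, 1.0, N)
    res = minimize(q_and_grad, x0, jac=True, method="L-BFGS-B",
                   options={"ftol": 1e-12, "gtol": 1e-8})
    if res.fun < 1e-6:                       # keep only (near-)exact fixed points
        fixed_points.append(res.x)
```

For this sub-critical gain the only fixed point is the origin, so every search should land near x = 0; run on a trained network, the same search exposes the task-relevant attractors, saddles, and slow points.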
RNNs explain data & mechanism
Context-dependent computation (Mante et al., Nature 2013)
Dynamics of dACC during a complex task (Enel et al., PLoS Comp Bio 2016)
Representing temporal expectations (Carnevale et al., Neuron 2015)
Sequence generation (Rajan et al., Neuron 2016)
Similar work is being done in deep (feedforward) neural networks (DiCarlo).
What is missing?
Trajectory vs. dynamics
Invariance of solutions
Limits of the approach
Design considerations
A good forward model helps reverse engineering
… Theory
Our approach
Thorough understanding of very simple tasks.
Analytical solutions
Building blocks for more complex tasks.
Rivkind & Barak, arXiv
Conclusions
Focusing on single neurons has its limits.
Understanding population dynamics is a hard task.
Combining machine learning and dynamical systems can lead to new insights.
Doing this properly requires theory.
Low-D dynamics can be the relevant quantity to look for in models & data.
Conceptual framework
Input/output specification
Low dimensional dynamics
High dimensional neural network
Behavior
Neural data
Algorithmic hypothesis
Machine learning
Reverse engineering
Implementation
Low D dynamics
Dimensionality reduction
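As a concrete toy illustration of the dimensionality-reduction step (the synthetic data are an assumption; real recordings would replace them): if the activity of N neurons is a noisy random embedding of a 2-D latent trajectory, PCA recovers the low-D structure.

```python
# PCA on synthetic "population recordings" that hide a 2-D latent trajectory.
# Data generation is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
T, N, D = 300, 80, 2
t = np.linspace(0.0, 2.0 * np.pi, T)
latent = np.stack([np.sin(t), np.cos(t)], axis=1)        # true 2-D trajectory
mixing = rng.normal(0.0, 1.0, (D, N))                    # random embedding
activity = latent @ mixing + 0.1 * rng.normal(0.0, 1.0, (T, N))

# PCA via SVD of the mean-centered activity matrix.
Xc = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
```

Here the first two principal components capture nearly all the variance, which is the signature of low-D dynamics embedded in a high-D population.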
Thank you
Lab members:
Oded Barzelay, Alexander Rivkind, Xu Tie
Funding:
ERC FP7 CIG 2013-618543; Fondation Adelis
Collaborators:
David Sussillo, Larry Abbott, Misha Tsodyks, Ranulfo Romo, Stefano Fusi, Mattia Rigotti, Melissa Warden, Earl Miller, Federico Carnevale, Nestor Parga
[Figure: memory trace (Hz) vs. time (s) and firing rate (Hz) vs. time (trials), Reward vs. No Reward conditions]