LETTER Communicated by Arvind Kumar

A Master Equation Formalism for Macroscopic Modeling of Asynchronous Irregular Activity States

Sami El Boustani [email protected]
Alain Destexhe [email protected]
Unité de Neurosciences Intégratives et Computationnelles, CNRS, 91198 Gif-sur-Yvette, France

Many efforts have been devoted to modeling asynchronous irregular (AI) activity states, which resemble the complex activity states seen in the cerebral cortex of awake animals. Most models have considered balanced networks of excitatory and inhibitory spiking neurons in which AI states are sustained through recurrent sparse connectivity, with or without external input. In this letter we propose a mesoscopic description of such AI states. Using master equation formalism, we derive a second-order mean-field set of ordinary differential equations describing the temporal evolution of randomly connected balanced networks. This formalism takes into account finite size effects and is applicable to any neuron model as long as its transfer function can be characterized. We compare the predictions of this approach with numerical simulations for different network configurations and parameter spaces. Considering the randomly connected network as a unit, this approach could be used to build large-scale networks of such connected units, with an aim to model activity states constrained by macroscopic measurements, such as voltage-sensitive dye imaging.

1 Introduction

Cortical activity in awake animals manifests highly complex behavior, often characterized by seemingly noisy activity. At the level of single neurons, the activity in awake animals is associated with considerable subthreshold fluctuations of the membrane potential and irregular firing (Matsumura, Cope, & Fetz, 1988; Steriade, Timofeev, & Grenier, 2001; Destexhe, Rudolph, & Paré, 2003). It is during this regime that the main computational tasks are performed, and understanding those network dynamics is a crucial step toward an analytical study of information processing in neural networks (Destexhe & Contreras, 2006).

Much effort has been devoted to the study of how such activity emerges. Balanced networks have been introduced as a possible model to generate dynamical states similar to the biological ones (see Figure 1). Two antagonistic states have been highlighted: synchronous regular (SR) states and asynchronous irregular (AI) states (Brunel, 2000). AI states are of particular interest because their dynamical characteristics are very similar to those observed in awake animals. For conductance-based integrate-and-fire neuron networks, they have even been observed without external stimulation (Vogels & Abbott, 2005; El Boustani, Pospischil, Rudolph-Lilith, & Destexhe, 2007; Kumar, Schrader, Aertsen, & Rotter, 2008). Large networks (over 10,000 neurons) are required to yield states consistent with experimental data (El Boustani et al., 2007; Kumar et al., 2008).

In parallel to these studies, population measures of neural activity have also been of great interest, in particular through the emergence of new imaging techniques such as voltage-sensitive dye imaging or two-photon imaging. Although the relation between these signals and single-cell properties is still not completely clear, these measurements reveal structures and correlations over large distances (millimeters or centimeters). No model currently is able to describe neuronal dynamics in large-scale networks at such distance scales, and there is a need for theoretical models specifically designed to handle the temporal and spatial scales of optical imaging.

The type of model that seems most appropriate for such scales is the mean-field approach. Self-consistent mean-field approaches have been proposed and gave predictions about network stability in a stationary regime (Amit & Brunel, 1997; Brunel, 2000; Latham, Richmond, Nelson, & Nirenberg, 2000; Hertz, Lerchner, & Ahmadi, 2004). However, a first-order mean-field approximation fails to fully describe these networks because of their inherent dynamics, which can rely dramatically on activity fluctuations. Moreover, the large network limit is usually performed for randomly connected networks despite the lack of biological relevance.

In this letter, our aim is to obtain a macroscopic description of distributed neuronal activity during AI states, where the unit is not the neuron but a small network of neurons. The difficulty, however, is to obtain a description that captures the statistics of network activity while being consistent with single-cell behavior. For this reason, we introduce a mesoscopic description of neuronal activity, in which finite size effects are explicitly taken into account. We consider networks of typical sizes of a few thousand neurons, far away from the large network limit.

To obtain such a mesoscopic model, we use a master equation formalism appropriate for a second-order mean-field description of network activity. The AI states, characterized by low firing rates and an exponential decrease of the activity autocorrelation, can be incorporated in such a framework. A complete description of the correlations and covariances can be extracted for timescales governed by the network time constants. Correlations and covariances in the neural dynamics convey crucial information and can be responsible for radical changes in network state according to the parameter regime. This question has already been addressed with a similar formalism for binary neural networks (Ginzburg & Sompolinsky, 1994; Ohira & Cowan, 1993). Here we intend to develop a more general formalism, applicable to spiking neural networks (El Boustani & Destexhe, 2007).

Figure 1: Example of a self-sustained asynchronous irregular (AI) state in a sparsely connected network of conductance-based neurons. The network contains 5000 neurons with a ratio of 4:1 between excitatory and inhibitory neurons and a connection probability p_conn = 0.01. Otherwise the neuron model is identical to Vogels and Abbott (2005), with Δg_exc = 7 nS and Δg_inh = 100 nS. (a) Top: raster plot for a subset of excitatory (black dots) and inhibitory (gray dots) neurons. Bottom: population activity with a bin size of 1 ms (black) and 5 ms (gray). The network has a mean activity of 23.68 Hz. (b) Autocorrelation of the total network activity. The dashed gray curve indicates the exponential envelope fit to the positive peaks of the autocorrelation; the dashed black curve indicates the slope at the origin. The decay time is 10 ms, and the activity has been computed with a bin size of 1 ms. (c) Characterization of the asynchronous irregular state using statistical measures. The population distribution of the interspike interval coefficient of variation (CV) has a mean value of 1.7933, higher than the value of 1 expected for a Poisson process. The inset shows the averaged pairwise cross-correlation computed with a bin size of 5 ms; the value at the origin (0.034) estimates the network synchrony, and this very low value indicates that there is no substantial synchrony among the neurons. (d) Stationary distribution of the network activity. The dashed gray curve indicates the gaussian fit with the same mean value (23.68 Hz) and standard deviation (3.43 Hz). The activity has been computed with a time bin of 5 ms.

A similar theory has been studied in parallel (Soula & Chow, 2007) with a discrete description of the network activity. Although a discrete description seems more natural for finite-size networks, several weaknesses appear when considering many populations, which is necessary for networks spontaneously displaying AI states. Indeed, the core of the theory is the transfer function, which maps the output firing rate of the neuron as a function of its synaptic input rates. Theoretical work has been done to obtain analytical transfer functions for a range of neuron models (Tuckwell, 1988; Brunel & Sergi, 1998; Brunel, 2000; Fourcaud & Brunel, 2002; Plesser & Gerstner, 2000). However, to take advantage of such results, we have to rely on a continuous description of network activity to link the neuron statistics to the network ones.

Those computations are not always possible, and a semianalytical approach has been used to tackle this problem (Kumar et al., 2008; Soula & Chow, 2007). In these studies, the neuron transfer function was determined numerically for every network state and used directly for a mean-field approach. This method becomes excessively time-consuming as soon as the network is slightly heterogeneous, which is the case when excitatory and inhibitory neurons have different intrinsic properties, for instance. Indeed, it is then necessary to characterize numerically the transfer function for each population, conditionally on every population state. A continuous description is thus necessary to describe heterogeneous networks while benefiting from the theoretical work that has been done at the single-neuron level.

We consider different neuron models, ranging from those for which a transfer function has been derived to those for which an approximate model is required. For the latter, we suggest empirical models that can account for the network dynamics over a broad range of parameters. In particular, for conductance-based neurons, a transfer function can be found that gives a good qualitative description of different network regimes. Those transfer functions are detailed in the next sections and compared with numerical simulations for different model configurations. Finally, to gain some insight into the stability of self-sustained AI states and their lifetime, we use second-order statistics to discuss stability in different dynamical regimes, as well as the limits of our approach. A preliminary version of this work has been published previously as a master's thesis (El Boustani, 2006) and a conference abstract (El Boustani & Destexhe, 2007).

2 Methods

All simulations were performed using the NEST simulator (http://www.nest-initiative.org) through the PyNN interface (http://neuralensemble.org/PyNN). We used different models of AI states in networks of spiking neurons, based on previous work (Brunel, 2000; Mehring, Hehl, Kubo, Diesmann, & Aertsen, 2003; Vogels & Abbott, 2005). The network size will be 5000 neurons with a ratio of 4:1 between excitatory and inhibitory neurons (γ = N_inh/N_tot = 0.2), unless stated otherwise. For current-based models, an event-based strategy (Brette et al., 2007) was used to solve the network equations, and in the conductance-based model, a clock-based strategy was used with a time step of 0.1 ms, which is well below every time constant present in the system. In numerical and analytical models, we consider networks without any time delay in synaptic interactions. Based on previous work (Brunel & Hakim, 1999; Brunel, 2000; Vogels & Abbott, 2005), we expect this model to provide a broad range of AI states. In particular, the absence of interaction delay should shrink the synchronous regions of parameter space, as shown in Brunel (2000). Those states will be characterized using the usual statistical measures. To estimate the firing regularity, we compute the mean interspike interval coefficient of variation (ISI CV). Neuron synchronization is estimated by computing the mean pairwise cross-correlation, at time lag 0, among a set of 500 disjoint pairs. This correlation coefficient is computed with a time bin of 5 ms, which gives a good estimation of synchrony among the neurons (Kumar et al., 2008). The measure is normalized so that it takes values between 0 (no synchrony) and 1 (complete synchrony). These statistical quantities are illustrated in Figure 1c.
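As an illustration of these two measures, the following Python sketch computes the mean ISI CV and the mean pairwise zero-lag correlation. It assumes spike data given as one array of spike times per neuron; the function names and arguments are ours, not part of the simulation tools cited above.

import numpy as np

def isi_cv(spike_trains):
    # Mean interspike-interval coefficient of variation over the population;
    # spike_trains is a list of 1D arrays of spike times (ms), one per neuron.
    cvs = []
    for times in spike_trains:
        isi = np.diff(np.sort(times))
        if isi.size > 1 and isi.mean() > 0:
            cvs.append(isi.std() / isi.mean())
    return float(np.mean(cvs))

def mean_pairwise_correlation(spike_trains, t_stop, bin_ms=5.0, n_pairs=500, seed=0):
    # Zero-lag correlation coefficient of binned spike counts (5 ms bins),
    # averaged over randomly chosen disjoint pairs of neurons.
    rng = np.random.default_rng(seed)
    bins = np.arange(0.0, t_stop + bin_ms, bin_ms)
    counts = np.array([np.histogram(t, bins)[0] for t in spike_trains], float)
    idx = rng.permutation(len(spike_trains))
    coeffs = []
    for i in range(0, min(2 * n_pairs, len(idx) - 1), 2):
        a, b = counts[idx[i]], counts[idx[i + 1]]
        if a.std() > 0 and b.std() > 0:
            coeffs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(coeffs))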

3 Results

We start by describing the general formalism in section 3.1, then consider different neuron models in sections 3.2 and 3.3, and end by illustrating the predictions of this formalism with numerical simulations in section 3.4. The symbol definitions can be found in Table 1 and the simulation parameters in Table 2.

3.1 The Master Equation Formalism. In this section, we develop the mathematical framework in which network dynamics will be described. The mean-field approach has proven to be a powerful way to describe networks of spiking neurons if pertinent approximations are made. In previous work, spontaneous activity and networks under stimulation have been studied with self-consistent equations for the neuron stationary mean firing rate (Amit & Brunel, 1997; Brunel, 2000). In these contexts, the individual neuron spike trains are considered as Poisson point processes, which allows one to take into account current fluctuations due to irregular firing. In particular, when a Fokker-Planck approach is used, the membrane potential distribution can be obtained and state diagrams drawn for the network activity. However, once the mean neuron firing rate has been determined, the self-consistent equations are studied without consideration of the population activity fluctuations and temporal dynamics. We propose a framework where a second-order description can be made at the population level.


Table 1: Table of Symbols.

Symbol          Definition                                                  Unit

Network
  N_µ           Number of neurons in population µ                           -
  γ             Inhibitory/excitatory neuron number ratio                   -
  C_αµ          Number of synaptic inputs from population α to µ            -
  p_conn^αµ     Proportion of synaptic connections from population α to µ   -
  m_µ           Network activity                                            Hz
  m_µ^ext       External network activity                                   Hz
  T             Network time constant                                       ms

Neuron
  ν_µ           Transfer function of a neuron in population µ               Hz
  V_µ           Membrane potential                                          mV
  V_µ^rest      Resting membrane potential                                  mV
  V_µ^reset     Firing reset membrane potential                             mV
  V_µ^threshold Firing threshold potential                                  mV
  R_µ           Input resistance at rest                                    MΩ
  G_µ^L         Leak conductance                                            nS
  C_µ^mem       Membrane capacitance                                        pF
  τ_µ^mem       Resting membrane time constant                              ms
  τ_µ^ref       Firing refractory period                                    ms

Synapse
  A_αµ          Current synaptic strength from population α to µ            pA
  J_αµ          Voltage synaptic strength from population α to µ            mV
  Δg_αµ         Conductance synaptic strength from population α to µ        nS
  E_α           Reversal potential for the α-type synapse                   mV
  τ_α           Synaptic time constant for the α-type synapse               ms

In addition to using a stochastic process for the membrane potential, we invoke the master equation to obtain a stochastic process for the network activity using the corresponding neuron transfer function. In AI states, the activity autocorrelation decreases exponentially with the time lag (see Figure 1b), and our main hypothesis is that the network can be modeled as a Markov process for time steps on the order of this decay time.

3.1.1 Main Hypothesis of the Phenomenological Model. Cortical tissue is made of a great variety of neurons, which can be classified according to their biophysical properties. More precisely, electrophysiological measurements can be used to categorize neurons by their contribution to network dynamics. Although those properties exhibit a great diversity, for modeling, we can adopt a stereotypic description where only a few homogeneous populations are considered. In this letter, we consider only two populations, excitatory and inhibitory neurons, but the formalism can accommodate more classes if needed. To keep the model general, we assume that the network contains K homogeneous populations of neurons, denoted by 1, 2, ..., K.


Table 2: Network Configurations in Numerical Simulations.

Parameter        Value                   Unit
  γ              0.2                     -
  T              5 (except Figure 4)     ms
  V_µ^rest       −60                     mV
  V_µ^reset      −60 (except Figure 5)   mV
  V_µ^threshold  −50                     mV
  R_µ            100                     MΩ
  τ_µ^mem        20                      ms
  τ_µ^ref        5                       ms
  E_exc          0                       mV
  E_inh          −80                     mV

Figure           N             p_conn       (τ_exc, τ_inh)
  2 and 4        5000          0.01         (1, 3) ms
  3a             15,000        0.01         (1, 3) ms
  3b and 3c      5000–25,000   0.01         (1, 3) ms
  1 and 5        5000          0.01         (5, 10) ms
  6a–6c          10,000        0.02         (5, 10) ms
  6d–6g          5000–10,000   0.01–0.02    (5, 10) ms
  7 and 10       10,000        0.01         (5, 10) ms
  11             10,000        0.02         (5, 10) ms
  12a and 12b    112,500       0.1          (0.3, 0.3) ms
  12d–12e        10,000        Variable     (5, 10) ms

We define m_γ(t) as the network activity at time t,

$$m_\gamma(t) = \lim_{\Delta t \to 0} \frac{n_\gamma(t - \Delta t, t)}{\Delta t\, N_\gamma}, \qquad (3.1)$$

where n_γ(t − Δt, t) is the number of spikes emitted by population γ during the time interval [t − Δt, t] and N_γ is the population size, for γ = 1, ..., K. In order to build the master equation and obtain the desired Markovian description of network dynamics, we are interested in the conditional probability distribution for a short time interval T,

$$P(\{m_\gamma(t)\} \mid \{m'_\gamma(t - T)\}), \qquad (3.2)$$

with γ = 1, ..., K. The system is assumed to be time invariant, so this probability depends only on the time constant T, and we write P_T({m_γ} | {m'_γ}). This model is eventually intended to describe balanced network dynamics, which are based on sparse connectivity. For large enough networks, this property guarantees the existence of AI states in which pairwise correlations are negligible (van Vreeswijk & Sompolinsky, 1996; Brunel, 2000).


We can thus assume that the population-conditional probabilities are independent from each other beyond the timescale T:

$$P_T(\{m_\gamma\} \mid \{m'_\gamma\}) = P_T(m_1 \mid \{m'_\gamma\}) \cdots P_T(m_K \mid \{m'_\gamma\}). \qquad (3.3)$$

Indeed, for a probabilistic system defined through a joint probability distribution, if the random variables are assumed to be independent, this distribution can be factorized into a product of marginal distributions describing each variable exclusively. As the network dynamics is assumed to be memoryless beyond the time interval T, we can then define a Markovian transition operator W({m_γ} | {m'_γ}) through the continuous master equation for population activities,

$$\partial_t P_t(\{m_\gamma\}) = \prod_{\alpha=1,\ldots,K} \int_0^{1/T} dm'_\alpha \,\Big( P_t(\{m'_\gamma\})\, W(\{m_\gamma\} \mid \{m'_\gamma\}) - P_t(\{m_\gamma\})\, W(\{m'_\gamma\} \mid \{m_\gamma\}) \Big), \qquad (3.4)$$

where m_γ ∈ [0, T^{-1}] and ∂_t P_t({m_γ}) is the time derivative of the probability density. In this context, the population activity must be understood as the proportion of neurons that have fired at least once during the last period T, divided by the duration T, and not as the instantaneous activity (see equation 3.1). The parameter T controls the temporal resolution of the model, and therefore the activity variables are bounded by T^{-1}. However, if T^{-1} is larger than or equal to the neuron maximal firing rate, then we can avoid underestimation of the activity. Indeed, the fastest behavior in the network is defined by the neurons' maximal firing rate. In the SR regime, where excitation prevails over inhibition, the network firing rate is close to the neuron maximal firing rate. However, the time constant T cannot be taken as 0 because significant correlations at this scale would not be captured by this Markovian approach. We aim to describe network states with activity below 50 Hz, which allows a rather large value for T without risk of underestimation. An equivalent discrete formalism has been studied by Soula and Chow (2007) with a Markovian approach. Here we insist on keeping a continuous description characterized by the timescale T in order to obtain a framework entirely consistent with the Fokker-Planck approach at the neuron level. The crucial quantity, which must be small to allow this continuous description, is the activity resolution $\Delta m = \frac{1}{NT}$. It describes the minimal change in network activity due to a supplementary spike. It is thus necessary to find a good compromise between network size and time resolution to keep Δm small. The transition operator W({m_γ} | {m'_γ}) provides the rate of transition from state {m'_γ} to state {m_γ}, giving the master equation its intuitive interpretation.


It can be defined using the conditional probability density, equation 3.3, by

$$W(\{m_\gamma\} \mid \{m'_\gamma\}) = \lim_{T \to 0} \frac{P_T(\{m_\gamma\} \mid \{m'_\gamma\})}{T} = \lim_{T \to 0} \frac{\prod_{\alpha=1,\ldots,K} P_T(m_\alpha \mid \{m'_\gamma\})}{T}. \qquad (3.5)$$

Therefore, we have to compute P_T(m_α | {m'_γ}) to fully specify the model. We will adopt a different definition that is more appropriate to our phenomenological model. We assume that the network follows a quasi-stationary evolution, which means that during time T, the system reaches a stationary state determined by the previous state a time T earlier. This corresponds to the adiabatic hypothesis used in physics. Of course, this approximation no longer holds if the system is stimulated by a signal that possesses frequencies larger than T^{-1}, which can bring the system far from equilibrium. Therefore, and to avoid divergences due to irrelevant high-order fluctuations, we define the transition function for finite T:

$$W(\{m_\gamma\} \mid \{m'_\gamma\}) = \frac{\prod_{\alpha=1,\ldots,K} P_T(m_\alpha \mid \{m'_\gamma\})}{T}. \qquad (3.6)$$

As mentioned before, equation 3.1 is reconsidered for finite T and is bounded by the maximal activity value. Hence, if T is small enough, the master equation formalism can be used. If we consider the limit T → 0, the activity fluctuations become too important, and the transition function diverges. This parameter is equivalent to the bin size used to sample experimental data, and it is well known that bin sizes that are too small give rise to irrelevant fluctuations in the population activity. Furthermore, for an infinitesimal bin size, those fluctuations become point-like, and the transition operator would reduce to a two-dimensional matrix between the states where the network spikes or does not. To avoid this problem, we require that T has a finite value in the range of the network time constants. The Markovian approach is intended to describe the network dynamics responsible for the activity autocorrelation envelope (see Figure 1b). The fine structure of the correlations in the AI regime is caused by residual global oscillations due to finite size effects. However, as the network size increases, the time constant T can take smaller values because the temporal finite-size effect in the autocorrelation vanishes thanks to the sparse connectivity (Brunel, 2000). Thus, we can define a mesoscopic scale for sparsely connected networks in which the network is large enough to avoid temporal correlation finite-size effects but small enough to require second-order statistics to describe its population dynamics. However, the activity autocorrelation decreases exponentially on a timescale of the same order as the system time constants.


A Markovian approach should not be acceptable below this timescale, and we have to choose T carefully according to the regime under consideration. A T that is too large could underestimate the firing rate in high-activity regimes, and T values that are too small will overestimate the second-order statistics. This point has also been discussed in Soula and Chow (2007), but no mention was made of the fact that the typical timescales of the autocorrelation fine structure can be smaller than any time constant in the system (Gerstner, 2000), as shown in Figure 1b. Eventually, for a large enough network, we can consider T = ν_max^{-1}, where ν_max is the neuron maximum firing rate, for almost the whole range of the AI regime.

3.1.2 Differential Equations for the Statistical Moments. Using equation 3.4, we can obtain a hierarchy of first-order differential equations for the statistical moments. Indeed, the master equation solution provides the time evolution of the activity probability density. It generally cannot be solved exactly, but one can extract a hierarchy of equations for the statistical moments directly from the differential equation. For balanced networks, this set of equations can be stopped at the second order, and we avoid the closure problem. Indeed, the stationary activity distribution during AI states is well described by a gaussian function when the bin size is not too small (see Figure 1d). This can be understood thanks to the central limit theorem, because we are averaging out higher-order fluctuations during time steps T in the asynchronous activity. Of course, this can hold only if N or T is large enough.

If we close the statistical moment hierarchy of the master equation at the second order, we get (see appendix A)

$$\partial_t \langle m_\mu \rangle = a_\mu(\{\langle m_\gamma \rangle\}) + \frac{1}{2}\, \partial_\lambda \partial_\eta a_\mu(\{\langle m_\gamma \rangle\})\, c_{\lambda\eta} \qquad (3.7)$$
$$\partial_t c_{\mu\nu} = a_{\mu\nu}(\{\langle m_\gamma \rangle\}) + \partial_\lambda a_\mu(\{\langle m_\gamma \rangle\})\, c_{\nu\lambda} + \partial_\lambda a_\nu(\{\langle m_\gamma \rangle\})\, c_{\mu\lambda},$$

where ⟨m_µ⟩ is the mean population activity and c_µν = ⟨(m_µ − ⟨m_µ⟩)(m_ν − ⟨m_ν⟩)⟩ is the activity covariance matrix. Here and in the following, we use the Einstein summation convention to avoid excessively awkward expressions: if an index is present on only one side of an equality, implicit summation over its whole range of values is understood.

The step moment functions a_µ({⟨m_γ⟩}) and a_µν({⟨m_γ⟩}) are defined as follows:

$$a_\mu(\{\langle m_\gamma \rangle\}) = \prod_{\alpha=1,\ldots,K} \int_0^{1/T} dm'_\alpha\, (m'_\mu - \langle m_\mu \rangle)\, W(\{m'_\gamma\} \mid \{\langle m_\gamma \rangle\}) = \int_0^{1/T} dm'_\mu\, (m'_\mu - \langle m_\mu \rangle)\, \frac{P(m'_\mu \mid \{\langle m_\gamma \rangle\})}{T}$$

$$a_{\mu\nu}(\{\langle m_\gamma \rangle\}) = \prod_{\alpha=1,\ldots,K} \int_0^{1/T} dm'_\alpha\, (m'_\mu - \langle m_\mu \rangle)(m'_\nu - \langle m_\nu \rangle)\, W(\{m'_\gamma\} \mid \{\langle m_\gamma \rangle\})$$
$$= \int_0^{1/T} dm'_\mu \int_0^{1/T} dm'_\nu\, (m'_\mu - \langle m_\mu \rangle)(m'_\nu - \langle m_\nu \rangle)\, \frac{P(m'_\mu \mid \{\langle m_\gamma \rangle\})\, P(m'_\nu \mid \{\langle m_\gamma \rangle\})}{T}. \qquad (3.8)$$

The second identity in each case has been obtained by using equation 3.5 together with the independence hypothesis. To complete the second-order development, we need to describe the correlation matrix of the network, Corr_µν(t, t + τ) = ⟨(m_µ(t) − ⟨m_µ(t)⟩)(m_ν(t + τ) − ⟨m_ν(t + τ)⟩)⟩. This is done in appendix B, and the resulting differential equation for a stationary state is given by

$$\partial_\tau \mathrm{Corr}_{\mu\nu}(\tau) = \partial_\lambda a_\nu(\{\langle m^{stat}_\gamma \rangle\})\, \mathrm{Corr}_{\mu\lambda}(\tau), \qquad (3.9)$$

where a_ν({⟨m_γ⟩}) is defined in equation 3.8 and {⟨m_γ^stat⟩} is a stationary solution of the set of equations 3.7.

3.1.3 A Model for the Transition Function. Finally, to complete this framework, we need to specify the transition functions W(m_α | {m'_γ}) for α = 1, ..., K. These functions depend directly on the neuron properties. When a neuron is part of network dynamics and is exposed to intense synaptic inputs, it can be described using a transfer function that gives the neuron output firing rate as a function of its synaptic input rates, regardless of their origin. In a recurrent neuron network, the synaptic input is provided by other neurons in the network and thus depends on the network activity. If we assume that we know the stationary transfer function ν_α of neurons in population α, then the probability p_α that a neuron in this population fires during time T, given the previous state of the network {m'_γ}, is

$$p_\alpha(\{m'_\gamma\}) \simeq \nu_\alpha(\{m'_\gamma\})\, T \le 1.$$

This is a direct consequence of the adiabatic hypothesis we made in section 3.1.1. Indeed, the network in its previous state is characterized by a stationary transfer function, so the network activity is assumed to evolve near stationary states. During time T, each neuron can fire only once, so that we can express the desired conditional probability P_T(m_α | {m'_γ}) for population α with a binomial distribution using the independence hypothesis,

$$P_T(m_\alpha \mid \{m'_\gamma\}) = \binom{N_\alpha}{m_\alpha N_\alpha T}\, p_\alpha(\{m'_\gamma\})^{m_\alpha N_\alpha T}\, \big(1 - p_\alpha(\{m'_\gamma\})\big)^{N_\alpha(1 - m_\alpha T)}, \qquad (3.10)$$

where it is implicitly understood that we take the integer part of m_α N_α T. However, as the population size N_α increases, the correction due to the noninteger part becomes negligible, and we can apply the gaussian approximation by using the Stirling formula $n! \sim \sqrt{2\pi}\, n^{n+1/2} e^{-n}$, leading to

$$P_T(m_\alpha \mid \{m'_\gamma\}) \simeq \sqrt{\frac{N_\alpha}{2\pi\, \nu_\alpha(\{m'_\gamma\})\big(1/T - \nu_\alpha(\{m'_\gamma\})\big)}}\; \exp\left[ -N_\alpha\, \frac{\big(m_\alpha - \nu_\alpha(\{m'_\gamma\})\big)^2}{2\, \nu_\alpha(\{m'_\gamma\})\big(1/T - \nu_\alpha(\{m'_\gamma\})\big)} \right]. \qquad (3.11)$$

We see that this population conditional probability density follows a normal law with a variance that decreases when the neuron firing rate is near saturation, ν_α = 1/T, or near a quiescent state, ν_α = 0. If symmetries are broken in the population α and the neurons are not identical, this law is no longer valid. It remains, however, a good first approximation: if the neuron firing rates in population α follow a particular distribution due to slight heterogeneities and N_α is big enough, then the central limit theorem implies that the activity distribution is well described by a normal law with the same mean value. In this case, this mean value is the transfer function of a neuron in population α, formally ν_α. Furthermore, the more N_α increases, the narrower the normal distribution becomes. Thus, the theory describes large networks but also accounts for finite-size effects and is therefore close to a "mesoscopic" description. Indeed, if {N_α} is taken to infinity, the normal law tends to a Dirac function in the sense of distribution theory. If T is kept constant while the network size is taken to infinity, the first-order mean field is recovered because fluctuations are completely averaged out and the second-order development is irrelevant. This phenomenological model stays in accordance with numerical models as long as T is kept finite. We finally get the transition function by dividing the conditional probability by T according to definition 3.6,

$$W(\{m_\gamma\} \mid \{m'_\gamma\}) = \frac{1}{T} \sqrt{\frac{\det(A)}{(2\pi)^K}}\; \exp\left[ -\frac{1}{2}\, \big(m_\mu - \nu_\mu(\{m'_\gamma\})\big)\, A_{\mu\nu}\, \big(m_\nu - \nu_\nu(\{m'_\gamma\})\big) \right], \qquad (3.12)$$


where $A_{\mu\nu} = \frac{\delta_{\mu\nu}\, N_\mu}{\nu_\mu(\{m'_\gamma\})\,(1/T - \nu_\mu(\{m'_\gamma\}))}$. The integral of this transition function over the entire state space trivially gives 1/T, which can now be roughly interpreted as the degree of fluctuation taken into account. Indeed, if T → ∞, the total integral goes to 0, which means that we are in the first-order mean-field approximation and the variance has no meaning, whereas if T → 0, the total integral goes to infinity, which means that infinitesimal fluctuations are taken into account. These fluctuations are not relevant in our framework, and we keep T ∼ ν_max^{-1}. If the interneuron correlations become substantial, the independence hypothesis no longer holds, and we have to consider another model to replace the binomial transition function. For large, sparsely connected networks, however, the independence hypothesis gives accurate results.
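Although transfer functions are only specified later, the Markov model defined by equation 3.11 can already be simulated directly: each step draws the new activities from the gaussian transition density. The following sketch is our illustration, assuming a user-supplied callable nu for the transfer functions; none of these names come from the original text.

import numpy as np

def simulate_markov_activity(nu, N, T, m0, n_steps, seed=0):
    # Sample a trajectory of population activities from the gaussian
    # transition density of eq. 3.11. 'nu' maps the activity vector m to
    # the vector of transfer functions nu_alpha({m}); N holds the
    # population sizes; T is the time constant (same time units as nu).
    rng = np.random.default_rng(seed)
    m = np.asarray(m0, dtype=float)
    trajectory = [m.copy()]
    for _ in range(n_steps):
        rate = np.clip(nu(m), 1e-12, 1.0 / T - 1e-12)  # keep nu in (0, 1/T)
        var = rate * (1.0 / T - rate) / np.asarray(N)  # variance of eq. 3.11
        m = np.clip(rng.normal(rate, np.sqrt(var)), 0.0, 1.0 / T)
        trajectory.append(m.copy())
    return np.array(trajectory)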

The whole formalism is completely described by the transition function, equation 3.12. The set of equations for the first- and second-order statistical moments can be further specified by injecting this transition function into the functions 3.8. We integrate over the complete real line because equation 3.12 is centered on 0 < ν_α < T^{-1} with a variance that vanishes at the boundaries, so that the corrective terms are of order O(e^{−N}) and can be ignored for large enough populations. This is a natural consequence of the gaussian approximation. We then obtain

$$a_\mu(\{\langle m_\gamma \rangle\}) = \frac{1}{T} \int_{-\infty}^{\infty} dm'_\mu\, (m'_\mu - \langle m_\mu \rangle)\, P(m'_\mu \mid \{\langle m_\gamma \rangle\}) = \frac{1}{T}\, (\nu_\mu - \langle m_\mu \rangle) \qquad (3.13)$$

$$a_{\mu\nu}(\{\langle m_\gamma \rangle\}) = \int_{-\infty}^{\infty} dm'_\mu \int_{-\infty}^{\infty} dm'_\nu\, (m'_\mu - \langle m_\mu \rangle)(m'_\nu - \langle m_\nu \rangle)\, W(\{m'_\gamma\} \mid \{\langle m_\gamma \rangle\})$$
$$= \frac{\delta_{\mu\nu}}{T} \int_{-\infty}^{\infty} dm'_\mu\, \big( m'^2_\mu - \nu^2_\mu + \nu^2_\mu - 2 m'_\mu \langle m_\mu \rangle + \langle m_\mu \rangle^2 \big)\, P(m'_\mu \mid \{\langle m_\gamma \rangle\}) + \frac{1 - \delta_{\mu\nu}}{T}\, (\nu_\mu - \langle m_\mu \rangle)(\nu_\nu - \langle m_\nu \rangle)$$
$$= \frac{1}{T} \left( \delta_{\mu\nu}\, \frac{\nu_\mu (1/T - \nu_\mu)}{N_\mu} + (\nu_\mu - \langle m_\mu \rangle)(\nu_\nu - \langle m_\nu \rangle) \right) = \frac{1}{T} \big( \delta_{\mu\nu} A^{-1}_{\mu\mu} + T^2 a_\mu a_\nu \big), \qquad (3.14)$$

where ν_µ = ν_µ({⟨m_γ⟩}) is the transfer function of a neuron in population µ, which depends on the mean activity of every population. To get every term in equation 3.7, we need to differentiate equation 3.13. The corresponding functions and derivatives give

$$a_\mu(\{\langle m_\gamma \rangle\}) = \frac{1}{T}\, (\nu_\mu - \langle m_\mu \rangle)$$
$$\partial_\lambda a_\mu(\{\langle m_\gamma \rangle\}) = \frac{1}{T}\, (\partial_\lambda \nu_\mu - \delta_{\mu\lambda}) \qquad (3.15)$$
$$\partial_\lambda \partial_\eta a_\mu(\{\langle m_\gamma \rangle\}) = \frac{1}{T}\, \partial_\lambda \partial_\eta \nu_\mu.$$

Equations 3.7, 3.15, 3.9, and 3.14 provide a complete description of this master equation formalism, and we can write the set of differential equations according to those functions:

$$T\, \partial_t \langle m_\mu \rangle = (\nu_\mu - \langle m_\mu \rangle) + \frac{1}{2}\, \partial_\lambda \partial_\eta \nu_\mu\, c_{\lambda\eta} \qquad (3.16)$$
$$T\, \partial_t c_{\mu\nu} = \delta_{\mu\nu} A^{-1}_{\mu\mu} + (\nu_\mu - \langle m_\mu \rangle)(\nu_\nu - \langle m_\nu \rangle) + \partial_\lambda \nu_\mu\, c_{\nu\lambda} + \partial_\lambda \nu_\nu\, c_{\mu\lambda} - 2 c_{\mu\nu}$$
$$T\, \partial_\tau \mathrm{Corr}_{\mu\nu}(\tau) = \big( \partial_\lambda \nu_\nu(\{\langle m^{stat}_\gamma \rangle\}) - \delta_{\lambda\nu} \big)\, \mathrm{Corr}_{\mu\lambda}(\tau). \qquad (3.17)$$

We see that the function determining the first-order differential equation is the transfer function itself, as expected from the usual first-order analysis. The second-order statistics (covariances and correlations) are driven mainly by the first derivative of the transfer function. We can expect higher-order statistical moments to depend on higher-order derivatives of this function in this framework. Thus, the set of transfer functions {ν_α} for α = 1, ..., K plays a crucial role and must be determined to complete the model. In the next section, we consider different models.
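The set 3.16 can also be integrated numerically once the transfer functions are known. The sketch below uses a simple Euler scheme with finite-difference estimates of the first and second derivatives of ν; the function names, step sizes, and the finite-difference approach are our choices under those assumptions, not prescribed by the text.

import numpy as np

def integrate_moments(nu, N, T, m0, c0, dt, n_steps, eps=1e-3):
    # Euler integration of the second-order mean-field set (eq. 3.16).
    # nu: callable m -> array of the K transfer functions; N: sizes.
    K = len(m0)
    m, c = np.array(m0, float), np.array(c0, float)
    def jacobian(m):
        J = np.empty((K, K))
        for l in range(K):
            d = np.zeros(K); d[l] = eps
            J[:, l] = (nu(m + d) - nu(m - d)) / (2 * eps)
        return J
    def hessian(m):
        H = np.empty((K, K, K))
        for l in range(K):
            for e in range(K):
                dl = np.zeros(K); dl[l] = eps
                de = np.zeros(K); de[e] = eps
                H[:, l, e] = (nu(m + dl + de) - nu(m + dl - de)
                              - nu(m - dl + de) + nu(m - dl - de)) / (4 * eps**2)
        return H
    for _ in range(n_steps):
        v, J, H = nu(m), jacobian(m), hessian(m)
        dm = (v - m) + 0.5 * np.einsum('mle,le->m', H, c)
        A_inv = np.diag(v * (1.0 / T - v) / np.asarray(N))
        dc = (A_inv + np.outer(v - m, v - m)
              + np.einsum('ml,nl->mn', J, c)
              + np.einsum('nl,ml->mn', J, c) - 2 * c)
        m, c = m + dt * dm / T, c + dt * dc / T
    return m, c

With K = 2, this covers the excitatory/inhibitory case treated throughout this letter.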

3.2 The Linear Model. In this section, we illustrate some aspects of this master equation formalism that would be harder to analyze once more realistic transfer functions are used. Following Soula and Chow (2007), we treat the simple case of a linear transfer function for a network of excitatory and inhibitory cells. Every neuron has the same transfer function (homogeneous intrinsic properties):

$$\nu(\{m_\gamma\}) = \nu_0 + k_{exc}\, m_{exc} + k_{inh}\, m_{inh}. \qquad (3.18)$$

This problem is similar to a balanced network where all neurons receive the same synaptic input and have the same transfer function. In this case, the differential equations for the mean activities do not depend on the second-order moments.


The set of second-order differential equations, 3.16, can thus be written with equation 3.15:

$$T\, \partial_t \langle m_\mu \rangle = \nu_0 + k_{exc} \langle m_{exc} \rangle + k_{inh} \langle m_{inh} \rangle - \langle m_\mu \rangle \qquad (3.19)$$
$$T\, \partial_t c_{\mu\nu} = \delta_{\mu\nu}\, \frac{\nu\, (T^{-1} - \nu)}{N_\mu} + (\nu - \langle m_\mu \rangle)(\nu - \langle m_\nu \rangle) + k_\lambda c_{\mu\lambda} + k_\lambda c_{\nu\lambda} - 2 c_{\mu\nu}. \qquad (3.20)$$

This set of equations is linear, and we can first look for the fixed point of the mean activity, equation 3.19. The solution of the linear system gives

$$m_0 = \langle m^{FP}_{exc} \rangle = \langle m^{FP}_{inh} \rangle = \frac{\nu_0}{1 - \Delta}, \qquad (3.21)$$

where Δ = k_exc + k_inh is the transfer function slope, which is modulated by the excitatory and inhibitory strengths. The stability of this fixed point is given by the eigenvalues of the linear system, and we get the following values:

$$\lambda_1 = -1, \qquad \lambda_2 = \Delta - 1,$$

and we see, as expected, that the two populations converge to a common fixed point, whereas the stability of this fixed point depends on the total slope Δ, which must be less than unity. We can now study the remaining equations for the second-order moments, 3.20. With the fixed point 3.21, the system reduces to a simple set of three linear equations, which gives the fixed-point value of the covariance matrix. The solution is given by the following form:

$$\sigma^2(m_{exc})^{FP} = \frac{m_0\, (1/T - m_0)}{2(\Delta - 2)(\Delta - 1)} \left( \frac{(\Delta - 2)(1 - \Delta) + k_{exc} k_{inh} (k_{inh} - 1)}{(k_{exc} - 1)\, N_{exc}} + \frac{k^2_{inh}}{N_{inh}} \right)$$

$$\sigma^2(m_{inh})^{FP} = \frac{m_0\, (1/T - m_0)}{2(\Delta - 2)(\Delta - 1)} \left( \frac{k^2_{exc}}{N_{exc}} + \frac{(\Delta - 2)(1 - \Delta) + k_{inh} k_{exc} (k_{exc} - 1)}{(k_{inh} - 1)\, N_{inh}} \right)$$

$$c^{FP}_{exc/inh} = -\frac{m_0\, (1/T - m_0)}{2(\Delta - 2)(\Delta - 1)} \left( \frac{k_{exc} (k_{inh} - 1)}{N_{exc}} + \frac{k_{inh} (k_{exc} - 1)}{N_{inh}} \right). \qquad (3.22)$$


Again we have to compute the eigenvalues of this linear system to study the stability of this fixed point. We get

$$\lambda_3 = -2, \qquad \lambda_4 = 2(\Delta - 1), \qquad \lambda_5 = \Delta - 2.$$

Those values guarantee the coherence of the covariance matrix. Nothing is learned about the network stability that was not already deduced from the mean activity set of equations. There is only a need for balanced activity such that Δ < 1 to keep the system from exploding. However, one has to remember that the fixed point is described together with its fluctuations, so transitions between states are not excluded in a more complex system. We see from the set 3.22 that the covariance matrix entries decrease with the network size, and eventually we recover the first-order mean field when the networks are large enough, which is not the case with a description in terms of the number of spiking neurons. This relation is not exactly observed in small networks (Soula & Chow, 2007) due to strong pairwise correlations, but if the network size is increased to biophysical scales, the relation is quite accurate (Kumar et al., 2008). For finite-size networks, excitatory and inhibitory variances differ when the population sizes differ, even though the mean activities have exactly the same value in the stationary regime. As in Soula and Chow (2007), the network displays large fluctuations when it operates near the critical point Δ = 1. Here again, we see that a description in terms of activity gives a unified description between neurons and networks in which the limit N → ∞ is well defined. To finish the example, we give the correlation matrix (see equation B.5):

$$T\, \partial_\tau \mathrm{Corr}_{\mu\nu}(\tau) = \big( \partial_\lambda \nu(\{m^{FP}_\gamma\}) - \delta_{\nu\lambda} \big)\, \mathrm{Corr}_{\mu\lambda}(\tau) \qquad (3.23)$$
$$= k_{exc}\, \mathrm{Corr}_{\mu/exc}(\tau) + k_{inh}\, \mathrm{Corr}_{\mu/inh}(\tau) - \mathrm{Corr}_{\mu\nu}(\tau), \qquad (3.24)$$

for which the eigenvalues are easily determined: we find the same values as for the mean activity, λ_1 and λ_2, each with algebraic multiplicity 2. They describe the decrease of the autocorrelations and cross-correlations. We see here again that Δ = 1 corresponds to a critical point where correlations are infinite spatially (between population activities) and temporally (within population activities).
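These closed-form results are easy to check numerically. The snippet below evaluates equations 3.21 and 3.22 for illustrative parameter values of our own choosing; they are not taken from the simulations of this letter.

import numpy as np

nu0, k_exc, k_inh = 5.0, 0.4, -0.2        # slope Delta = 0.2 < 1: stable
N_exc, N_inh, T = 4000, 1000, 5e-3
Delta = k_exc + k_inh
m0 = nu0 / (1.0 - Delta)                  # eq. 3.21
pref = m0 * (1.0 / T - m0) / (2 * (Delta - 2) * (Delta - 1))
var_exc = pref * (((Delta - 2) * (1 - Delta) + k_exc * k_inh * (k_inh - 1))
                  / ((k_exc - 1) * N_exc) + k_inh**2 / N_inh)
var_inh = pref * (k_exc**2 / N_exc
                  + ((Delta - 2) * (1 - Delta) + k_inh * k_exc * (k_exc - 1))
                  / ((k_inh - 1) * N_inh))
cov_ei = -pref * (k_exc * (k_inh - 1) / N_exc + k_inh * (k_exc - 1) / N_inh)
print(m0, var_exc, var_inh, cov_ei)      # eq. 3.22 fixed-point covariances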

3.3 Spiking Network Models.

3.3.1 Network Structure. Chaotic spontaneous activity as well as asynchronous irregular regimes have been observed in sparsely connected networks (Brunel, 2000; Brunel & Hakim, 1999; Kumar et al., 2008; van Vreeswijk & Sompolinsky, 1996, 1998; Mehring et al., 2003; Vogels & Abbott, 2005). Sparse connections have been shown to be crucial for the network's irregular behavior. However, the connections do not need to be purely random (Mehring et al., 2003). A degree of locality in the connections can be tolerated as long as the correlations between neurons do not become critically strong and destroy the chaotic activity. We will consider sparsely connected networks with random connectivity, and we will show in the numerical simulations that the model can still give good predictions when the neurons are locally sparsely connected. For the moment, every neuron of population µ receives randomly C_αµ synaptic inputs from population α, where C_αµ/N_α = p_conn^αµ < 1. Usually p_conn^αµ is taken between 1% and 10%.

3.3.2 Current-Based Integrate-and-Fire Neurons. We will consider current-based integrate-and-fire (IAF) neurons with the corresponding membrane potential equation,

$$\tau^{mem}_\mu \frac{d}{dt} V_{i_\mu}(t) = -\big( V_{i_\mu}(t) - V^{rest}_\mu \big) + R_\mu I_{i_\mu}(t), \qquad \mu \in [0, K] \text{ and } i_\mu \in [0, N_\mu], \qquad (3.25)$$

where V^rest_µ is the resting potential, and R_µ and τ^mem_µ are, respectively, the membrane resistance and time constant of neurons in population µ. If the threshold V^threshold_µ is crossed, the neuron emits a spike, and the membrane potential is clamped to the reset potential V^reset_µ during a refractory period τ^ref_µ. I_{i_µ}(t) is the external current coming from other neurons in the network, I^int_{i_µ}(t), or from an external source, I^ext_{i_µ}(t). Because each population is homogeneous, we write µ instead of i_µ to simplify the notation.

To build the transition function, we have used a binomial law based on the independence approximation on timescale T. However, nothing was specified about the temporal structure of the spike trains emitted by each population during this timescale. If we assume the classical Poisson model for each neuron, the entire population can be modeled as a Poisson process too. The internal contribution is then represented as the convolution between a Poisson spike train and a postsynaptic potential function PSP_αµ(t) from population α to population µ,

$$R_\mu I^{int}_\mu(t) = \sum_{\alpha=1,\ldots,K} \int_{\mathbb{R}} \mathrm{PSP}_{\alpha\mu}(t - s)\, N_{\alpha\mu}(ds), \qquad (3.26)$$

where N_αµ(ds) are Poisson point processes describing the incoming spike trains. We discuss different synapse functions in the next section. We also consider external stimulation currents as Poisson spike trains with rate m^ext_α.


One of the advantages of a description in terms of continuous activity is that we can benefit from the Fokker-Planck approach to compute different transfer functions for our neurons. Although neurons can have different firing rates during the dynamics, we consider that, independently for every population, the law of large numbers prevails after time T, and only the mean firing rate of the whole process need be considered. Therefore, if we have C_αµ incoming spike trains from population α to population µ with a mean firing rate m_α, then the total spike train can be considered a Poisson process of total rate C_αµ m_α. Using the diffusion approximation, we can deduce the free (without spike mechanism) membrane potential distribution. This can be used as a first approximation to estimate the neuron transfer function (Amit & Brunel, 1997). Once the Fokker-Planck solution P_µ(V) is found, we consider that the output firing rate is given by the tail of the distribution that lies above the threshold, divided by the membrane time constant:

$$\nu_\mu = \frac{1}{\tau^{mem}_\mu} \int_{V^{threshold}_\mu}^{\infty} dV\, P(V) = \frac{1}{2\tau^{mem}_\mu} \left( 1 + \mathrm{erf}\left( \frac{\langle V_\mu \rangle - V^{threshold}_\mu}{\sqrt{2}\, \sigma(V_\mu)} \right) \right). \qquad (3.27)$$

This approximation is valid as long as the membrane time constant leads the dynamics, more specifically in the AI regime. If the spike mechanism is included in the Fokker-Planck approach, an exact solution can be obtained for some synapse types for the IAF neuron transfer function. Brunel (2000) showed that for instantaneous synapses (Dirac functions), the solution is given by the inverse mean interspike interval of the first-passage problem with white noise (Tuckwell, 1988). More recently (Fourcaud & Brunel, 2002), a correction has been added to take into account the colored noise produced by exponential synapses, as long as the ratio $\sqrt{\tau_{syn}/\tau^{mem}_\mu}$ is small compared to unity. The transfer function for this first-order correction is similar to the one used in Brunel (2000) but with a corrective term $\Delta h_\mu \sim 1.03 \sqrt{\tau_{syn}/\tau^{mem}_\mu}$,

$$\nu_\mu = \left( \tau^{ref}_\mu + \tau^{mem}_\mu \sqrt{\pi} \int_{\frac{V^{reset}_\mu - \langle V_\mu \rangle}{\sqrt{2}\,\sigma(V_\mu)} + \Delta h_\mu}^{\frac{V^{threshold}_\mu - \langle V_\mu \rangle}{\sqrt{2}\,\sigma(V_\mu)} + \Delta h_\mu} du\; e^{u^2} \big( 1 + \mathrm{erf}(u) \big) \right)^{-1}, \qquad (3.28)$$

where the membrane potential statistics are given by the white noise model, and the new synaptic weights must be normalized to ensure the correspondence between exponential synapses and Dirac synapses when the synaptic time constants are taken to 0. We have the relation $J^{Dirac}_\alpha = J^{Exp}_\alpha \frac{\tau_\alpha}{\tau^{mem}_\mu}$, where J_α and τ_α are the synaptic strength and time constant corresponding to synapse α. If we consider a balanced network made of excitatory and inhibitory neurons, the largest synaptic time constant will mainly be responsible for the corrective term.


In all simulations presented here, the inhibitory synaptic time constant will be at least twice as large as the excitatory one, so that the corrective term must be of order $\Delta h_\mu \sim 1.03 \sqrt{\tau_{inh}/\tau^{mem}_\mu}$. However, in the following, we will consider network models where the synaptic time constant can be large compared to the membrane time constant (Vogels & Abbott, 2005), so we adopt a different model for the transfer function. Instead of using the membrane potential mean value and variance computed with white noise, we will use shot noise processes to deduce those values for different synapses and replace ⟨V_µ⟩ and σ²(V_µ) in the transfer function, equation 3.28, without the corrective term:

$$\nu_\mu = \left( \tau^{ref}_\mu + \tau^{mem}_\mu \sqrt{\pi} \int_{\frac{V^{reset}_\mu - \langle V_\mu \rangle}{\sqrt{2}\,\sigma(V_\mu)}}^{\frac{V^{threshold}_\mu - \langle V_\mu \rangle}{\sqrt{2}\,\sigma(V_\mu)}} du\; e^{u^2} \big( 1 + \mathrm{erf}(u) \big) \right)^{-1}. \qquad (3.29)$$
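Equation 3.29 can be evaluated with standard quadrature. The following sketch is our own; it assumes consistent units (e.g., volts and seconds) for all arguments, and the function name and signature are illustrative.

import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def transfer_rate(v_mean, v_std, v_thresh, v_reset, tau_mem, tau_ref):
    # Transfer function of eq. 3.29: mean firing rate of an IAF neuron
    # given the mean and SD of its free membrane potential.
    lo = (v_reset - v_mean) / (np.sqrt(2.0) * v_std)
    hi = (v_thresh - v_mean) / (np.sqrt(2.0) * v_std)
    integral, _ = quad(lambda u: np.exp(u**2) * (1.0 + erf(u)), lo, hi)
    return 1.0 / (tau_ref + tau_mem * np.sqrt(np.pi) * integral)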

The mean membrane potential ⟨V_µ⟩ and the variance σ²(V_µ) depend on the incoming activity and the chosen synapse. This point is discussed in the next section, where different types of synapses are considered. As long as the model allows, we can also estimate the coefficient of variation of the interspike interval by using the recurrence relation developed for the first-passage problem (Tuckwell, 1988). For the model introduced in Brunel (2000), this quantity is given at a stationary point by

$$CV^2_\mu = 2\pi \big( \tau^{mem}_\mu \langle m_\mu \rangle^{stat} \big)^2 \int_{\frac{V^{reset}_\mu - \langle V_\mu \rangle^{stat}}{\sigma(V_\mu)^{stat}}}^{\frac{V^{threshold}_\mu - \langle V_\mu \rangle^{stat}}{\sigma(V_\mu)^{stat}}} dx\, e^{x^2} \int_{-\infty}^{x} dy\, e^{y^2} \big( 1 + \mathrm{erf}(y) \big)^2. \qquad (3.30)$$
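Equation 3.30 translates into a double quadrature. As before, this is our own sketch with the same unit conventions as the previous snippet; 'rate' stands for the stationary activity ⟨m_µ⟩^stat.

import numpy as np
from scipy.integrate import dblquad
from scipy.special import erf

def isi_cv_squared(v_mean, v_std, v_thresh, v_reset, tau_mem, rate):
    # ISI CV^2 of eq. 3.30; the inner integral runs over y in (-inf, x].
    lo = (v_reset - v_mean) / v_std
    hi = (v_thresh - v_mean) / v_std
    inner, _ = dblquad(lambda y, x: np.exp(x**2 + y**2) * (1.0 + erf(y))**2,
                       lo, hi, lambda x: -np.inf, lambda x: x)
    return 2.0 * np.pi * (tau_mem * rate)**2 * inner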

It is thus possible to access second-order statistics at the single-neuron level as well as at the network level, as long as the neuron model is specified. This is a powerful aspect of this theory, and it will play a crucial role in applying this framework to the study of voltage-sensitive dye optical imaging data. Indeed, this type of signal is proportional to the subthreshold membrane potential, and we need a simple dynamic description that gives us access to the membrane potential distribution. This is done on timescales T with the network activity differential equation. Therefore, it is necessary to have a description at both levels. The formal equation for the interspike interval coefficient of variation (ISI CV) cannot always be determined; however, Kumar et al. (2008) noticed that the activity variance is very informative about the spiking regularity for self-sustained balanced networks. In some cases, the activity variance can thus describe the firing irregularity without the need for an exact equation for the ISI CV. The limitation in the analytical derivation of the transfer function and the ISI CV depends on the chosen synapse or, equivalently, on the nature of the correlations in the input current. In the next section, we present different kinds of synapses, some of which allow exact analytical results.


Synapse models. In the preceding section, we mentioned the Fokker-Planck approach. We need to determine the relation between the incoming spike train statistics and the membrane potential probability distribution. In particular, it is necessary to compute the mean and the variance according to the firing rate. Under the Poisson approximation, shot noise theory provides the required relations through Campbell's theorem,

$$\langle V_\mu \rangle = V^{rest}_\mu + \sum_{\alpha=1,\ldots,K} C_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha) \int_{\mathbb{R}} dt\, \mathrm{PSP}_{\alpha\mu}(t) \qquad (3.31)$$
$$\sigma^2(V_\mu) = \sum_{\alpha=1,\ldots,K} C_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha) \int_{\mathbb{R}} dt\, \mathrm{PSP}^2_{\alpha\mu}(t), \qquad (3.32)$$

where PSP_αµ(t), with α, µ ∈ {1, ..., K}, are the membrane potential time courses elicited by population α synapses on population µ. We will consider different synapse functions and compute these quantities using equation 3.25, which is exactly solvable.

• Dirac synapses (instantaneous). Our first model is the Dirac synapse current,

$$\mathrm{syn}_{\alpha\mu}(t) = A_{\alpha\mu}\, \tau^{mem}_\mu\, \delta(t),$$

with A_αµ the synaptic strength. Once integrated through equation 3.25, the membrane potential equation, this synapse gives the following postsynaptic potential,

$$\mathrm{PSP}_{\alpha\mu}(t) = R_\mu A_{\alpha\mu}\, e^{-t/\tau^{mem}_\mu}\, H(t),$$

where H(t) is the Heaviside function. Using equations 3.31 and 3.32, we finally get the desired statistics,

$$\langle V_\mu \rangle = V^{rest}_\mu + \tau^{mem}_\mu \sum_{\alpha=1,\ldots,K} J_{\alpha\mu} C_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha)$$
$$\sigma^2(V_\mu) = \frac{1}{2} \sum_{\alpha=1,\ldots,K} \tau^{mem}_\mu J^2_{\alpha\mu} C_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha), \qquad (3.33)$$

with J_αµ = R_µ A_αµ the potential increment.

• Exponential synapses. A more realistic model would include a decaying tail in the synaptic current, with a time constant specific to each synapse type. The next step in modeling realistic synapses is the exponential synapse model,

$$\mathrm{syn}_{\alpha\mu}(t) = A_{\alpha\mu}\, e^{-t/\tau_\alpha}\, H(t)$$
$$\mathrm{PSP}_{\alpha\mu}(t) = \frac{J_{\alpha\mu}\, \tau_\alpha}{\tau^{mem}_\mu - \tau_\alpha} \left( e^{-t/\tau^{mem}_\mu} - e^{-t/\tau_\alpha} \right) H(t),$$

where τ_α is the decay time constant for synapses coming from population α, so that we finally get

$$\langle V_\mu \rangle = V^{rest}_\mu + \sum_{\alpha=1,\ldots,K} J_{\alpha\mu}\, \tau_\alpha\, C_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha)$$
$$\sigma^2(V_\mu) = \frac{1}{2} \sum_{\alpha=1,\ldots,K} \frac{J^2_{\alpha\mu}\, \tau^2_\alpha}{\tau_\alpha + \tau^{mem}_\mu}\, C_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha). \qquad (3.34)$$

• α-synapses. To include a finite rise time of the synaptic current, we use alpha functions, which correspond to the following forms:

$$\mathrm{syn}_{\alpha\mu}(t) = A_{\alpha\mu}\, \frac{t}{\tau_\alpha}\, e^{1 - t/\tau_\alpha}\, H(t)$$
$$\mathrm{PSP}_{\alpha\mu}(t) = J_{\alpha\mu}\, e \left( \frac{-t\, e^{-t/\tau_\alpha}}{\tau^{mem}_\mu - \tau_\alpha} + \frac{\tau^{mem}_\mu \tau_\alpha}{(\tau^{mem}_\mu - \tau_\alpha)^2} \left( e^{-t/\tau^{mem}_\mu} - e^{-t/\tau_\alpha} \right) \right) H(t)$$

and

$$\langle V_\mu \rangle = V^{rest}_\mu + e \sum_{\alpha=1,\ldots,K} J_{\alpha\mu}\, \tau_\alpha\, C_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha)$$
$$\sigma^2(V_\mu) = \sum_{\alpha=1,\ldots,K} \big( 2\tau^{mem}_\mu + \tau_\alpha \big) \left( \frac{e\, J_{\alpha\mu}\, \tau_\alpha}{2(\tau^{mem}_\mu + \tau_\alpha)} \right)^2 C_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha). \qquad (3.35)$$
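For reference, equations 3.33 to 3.35 translate directly into code. The sketch below is ours, with arrays indexed by presynaptic population; 'rates' stands for m_α + m_α^ext, and the keyword names are illustrative.

import numpy as np

def membrane_stats(kind, v_rest, tau_mem, J, C, rates, tau_syn=None):
    # Mean and variance of the free membrane potential (eqs. 3.33-3.35).
    J, C, rates = map(np.asarray, (J, C, rates))
    if kind == 'dirac':
        mean = v_rest + tau_mem * np.sum(J * C * rates)
        var = 0.5 * tau_mem * np.sum(J**2 * C * rates)
    elif kind == 'exp':
        tau_syn = np.asarray(tau_syn)
        mean = v_rest + np.sum(J * tau_syn * C * rates)
        var = 0.5 * np.sum(J**2 * tau_syn**2 / (tau_syn + tau_mem) * C * rates)
    else:  # 'alpha'
        tau_syn = np.asarray(tau_syn)
        mean = v_rest + np.e * np.sum(J * tau_syn * C * rates)
        var = np.sum((2 * tau_mem + tau_syn)
                     * (np.e * J * tau_syn / (2 * (tau_mem + tau_syn)))**2
                     * C * rates)
    return mean, var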

Neuron transfer function in the master equation. Now that we have specified the membrane potential mean value and variance, we have to incorporate these functions into the neuron transfer function to derive the network dynamics equations. We compute the necessary functions using the free membrane potential transfer function, as well as the phenomenological transfer function defined with Campbell's theorem in the previous section. We thus use the definitions, equation 3.27 or 3.29, in equations 3.15 to get a complete description of the set of differential equations. We introduce some definitions to simplify the computation for equation 3.27,

$$Q_\mu = \langle V_\mu \rangle - V^{threshold}_\mu = \Theta_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha) + \big( V^{rest}_\mu - V^{threshold}_\mu \big), \qquad (3.36)$$

where Θ_αµ depends on the chosen synapses,

$$\Theta^{Dirac}_{\alpha\mu} = \tau^{mem}_\mu J_{\alpha\mu} C_{\alpha\mu} \qquad \Theta^{Exp}_{\alpha\mu} = \tau_\alpha J_{\alpha\mu} C_{\alpha\mu} \qquad \Theta^{\alpha syn}_{\alpha\mu} = e\, \tau_\alpha J_{\alpha\mu} C_{\alpha\mu}, \qquad (3.37)$$

and similarly

$$K_\mu = 2\sigma^2(V_\mu) = \Sigma_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha), \qquad (3.38)$$

with the corresponding synapse functions,

$$\Sigma^{Dirac}_{\alpha\mu} = \tau^{mem}_\mu J^2_{\alpha\mu} C_{\alpha\mu} \qquad \Sigma^{Exp}_{\alpha\mu} = \frac{J^2_{\alpha\mu}\, \tau^2_\alpha}{\tau_\alpha + \tau^{mem}_\mu}\, C_{\alpha\mu}$$
$$\Sigma^{\alpha syn}_{\alpha\mu} = \frac{2\tau^{mem}_\mu + \tau_\alpha}{2} \left( \frac{e\, J_{\alpha\mu}\, \tau_\alpha}{\tau^{mem}_\mu + \tau_\alpha} \right)^2 C_{\alpha\mu}, \qquad (3.39)$$

such that the transfer function, equation 3.27, can be written as

$$\nu_\mu = \frac{1}{2\tau^{mem}_\mu} \left( 1 + \mathrm{erf}\left( \frac{Q_\mu}{\sqrt{K_\mu}} \right) \right). \qquad (3.40)$$
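In code, equation 3.40 reads as follows for a single target population; theta and sigma hold the coefficients of equations 3.37 and 3.39 indexed by presynaptic population, and the function signature is our own.

import numpy as np
from scipy.special import erf

def nu_free(m, m_ext, theta, sigma, v_rest, v_thresh, tau_mem):
    # Free-membrane-potential transfer function of eq. 3.40.
    drive = np.asarray(m) + np.asarray(m_ext)
    Q = np.dot(theta, drive) + (v_rest - v_thresh)  # eq. 3.36
    K = np.dot(sigma, drive)                        # eq. 3.38
    return (1.0 + erf(Q / np.sqrt(K))) / (2.0 * tau_mem)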

To obtain the full set of differential equations, equation 3.16, we need to compute the step function derivatives, equation 3.15,

$$\partial_\lambda a_\mu(\{m_\gamma\}) = \frac{1}{T} \left( \frac{e^{-Q^2_\mu / K_\mu}}{\sqrt{\pi}\, \tau^{mem}_\mu}\, \frac{2 K_\mu \Theta_{\lambda\mu} - Q_\mu \Sigma_{\lambda\mu}}{2 K^{3/2}_\mu} - \delta_{\lambda\mu} \right)$$

and for the second derivative,

$$\partial_\lambda \partial_\eta a_\mu(\{m_\gamma\}) = \frac{e^{-Q^2_\mu / K_\mu}}{\sqrt{\pi}\, T\, \tau^{mem}_\mu} \cdot \frac{2 K_\mu (2 Q^2_\mu - K_\mu)(\Theta_{\lambda\mu} \Sigma_{\eta\mu} + \Theta_{\eta\mu} \Sigma_{\lambda\mu}) + Q_\mu (3 K_\mu - 2 Q^2_\mu)\, \Sigma_{\lambda\mu} \Sigma_{\eta\mu} - 8 K^2_\mu Q_\mu\, \Theta_{\lambda\mu} \Theta_{\eta\mu}}{4 K^{7/2}_\mu}.$$

If we consider the transfer function, equation 3.29, we have to define slightly different functions,

$$Q^{th}_\mu = V^{threshold}_\mu - \big( V^{rest}_\mu + \Theta_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha) \big)$$
$$Q^{re}_\mu = V^{reset}_\mu - \big( V^{rest}_\mu + \Theta_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha) \big) \qquad (3.41)$$
$$K_\mu = \Sigma_{\alpha\mu}\, (m_\alpha + m^{ext}_\alpha),$$

where Θ_αµ and Σ_αµ depend on the chosen synapses and are defined as previously. We also define

$$x^{th}_\mu = \frac{Q^{th}_\mu}{\sqrt{K_\mu}} \qquad x^{re}_\mu = \frac{Q^{re}_\mu}{\sqrt{K_\mu}}$$

such that equation 3.29 can be written in a shorter form:

$$\nu_\mu = \left( \tau^{ref}_\mu + \tau^{mem}_\mu \sqrt{\pi} \int_{x^{re}_\mu}^{x^{th}_\mu} du\, e^{u^2} \big( 1 + \mathrm{erf}(u) \big) \right)^{-1}.$$

As before, we compute the step function derivatives, equation 3.15,

$$\partial_\lambda a_\mu(\{m_\gamma\}) = -\frac{1}{T} \left( \nu^2_\mu \tau^{mem}_\mu \sqrt{\pi} \left( e^{(x^{th}_\mu)^2} \big(1 + \mathrm{erf}(x^{th}_\mu)\big)\, \partial_\lambda x^{th}_\mu - e^{(x^{re}_\mu)^2} \big(1 + \mathrm{erf}(x^{re}_\mu)\big)\, \partial_\lambda x^{re}_\mu \right) + \delta_{\lambda\mu} \right)$$

$$\partial_\lambda \partial_\eta a_\mu(\{m_\gamma\}) = \frac{2\, \partial_\lambda \nu_\mu\, \partial_\eta \nu_\mu}{T\, \nu_\mu} - \frac{\nu^2_\mu \tau^{mem}_\mu \sqrt{\pi}}{T} \Bigg( \left( 2 x^{th}_\mu e^{(x^{th}_\mu)^2} \big(1 + \mathrm{erf}(x^{th}_\mu)\big) + \frac{2}{\sqrt{\pi}} \right) \partial_\lambda x^{th}_\mu\, \partial_\eta x^{th}_\mu + e^{(x^{th}_\mu)^2} \big(1 + \mathrm{erf}(x^{th}_\mu)\big)\, \partial_\lambda \partial_\eta x^{th}_\mu - \left( 2 x^{re}_\mu e^{(x^{re}_\mu)^2} \big(1 + \mathrm{erf}(x^{re}_\mu)\big) + \frac{2}{\sqrt{\pi}} \right) \partial_\lambda x^{re}_\mu\, \partial_\eta x^{re}_\mu - e^{(x^{re}_\mu)^2} \big(1 + \mathrm{erf}(x^{re}_\mu)\big)\, \partial_\lambda \partial_\eta x^{re}_\mu \Bigg)$$

with

$$\partial_\lambda x^{th/re}_\mu = -\frac{2 K_\mu \Theta_{\lambda\mu} + Q^{th/re}_\mu \Sigma_{\lambda\mu}}{2 K^{3/2}_\mu}$$
$$\partial_\lambda \partial_\eta x^{th/re}_\mu = \frac{2 K_\mu (\Theta_{\lambda\mu} \Sigma_{\eta\mu} + \Theta_{\eta\mu} \Sigma_{\lambda\mu}) + 3 Q^{th/re}_\mu\, \Sigma_{\lambda\mu} \Sigma_{\eta\mu}}{4 K^{5/2}_\mu},$$

which follow from differentiating equations 3.41 with ∂_λ Q^{th/re}_µ = −Θ_λµ and ∂_λ K_µ = Σ_λµ.

From now on, the current-based model can be used to perform a parameter space exploration. The stationary solutions can be numerically computed for any set of parameters and compared with network simulations. This is done in section 3.4.
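A minimal sketch of such an exploration: to first order, a stationary solution solves ν({m}) = m (equation 3.16 with the time derivative and the covariance correction set to zero), which a standard root finder can handle. The helper below is ours, again taking any of the transfer functions built above as the callable nu.

import numpy as np
from scipy.optimize import fsolve

def stationary_activity(nu, m_guess):
    # First-order stationary state: solve nu({m}) = m.
    sol, _, ok, _ = fsolve(lambda m: nu(m) - np.asarray(m),
                           m_guess, full_output=True)
    return sol if ok == 1 else None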

3.3.3 Conductance-Based Models. For the conductance-based model, everysynaptic event results in an increase in the corresponding conductance, andthe integrate-and-fire equation can be written as

$$C^{mem}_{\mu}\frac{d}{dt}V_{\mu}(t) = G^{L}_{\mu}\left(V^{rest}_{\mu} - V_{\mu}(t)\right) + G_{\alpha\mu}(t)\left(E_{\alpha} - V_{\mu}(t)\right), \tag{3.42}$$


where $C^{mem}_{\mu}$ and $G^{L}_{\mu}$ are the membrane capacitance and leak conductance, respectively, such that $C^{mem}_{\mu}/G^{L}_{\mu} = \tau^{mem}_{\mu}$ is the resting membrane time constant. $G_{\alpha\mu}$ is the total conductance of the synaptic set $\alpha$ and $E_{\alpha}$ the corresponding reversal potential. As in the current-based model, the synaptic input can be modeled by Poisson processes, and the total conductance of the synapses $\alpha$ is

$$G_{\alpha\mu}(t) = \int_{\mathbb{R}} g_{\alpha\mu}(t - s)\,N_{\alpha\mu}(ds), \tag{3.43}$$

where $g_{\alpha\mu}(t)$ is the conductance time course elicited by an incoming spike from population $\alpha$. We can write equation 3.42 in a form analogous to the current-based model with an effective membrane time constant $\tau^{eff}_{\mu}(t)$,

$$\tau^{eff}_{\mu}(t)\frac{d}{dt}V_{\mu}(t) = -V_{\mu}(t) + \frac{G^{L}_{\mu}V^{rest}_{\mu} + G_{\alpha\mu}(t)E_{\alpha}}{G^{tot}_{\mu}(t)}, \tag{3.44}$$

where the total conductance and the effective time constant are defined asfollows:

$$G^{tot}_{\mu}(t) = G^{L}_{\mu} + \sum_{\alpha=1,\ldots,K} G_{\alpha\mu}(t) \qquad \tau^{eff}_{\mu}(t) = \frac{C^{mem}_{\mu}}{G^{tot}_{\mu}(t)}.$$

Even when the synaptic input is considered as white noise, equation 3.42 cannot be solved, and there is no exact solution for the transfer function. We can use equation 3.27 as a first approximation, but we need to compute the mean membrane potential $\langle V_{\mu}\rangle$ and the variance $\sigma^{2}(V_{\mu})$ in the conductance-based model. Equation 3.44 can be approximated by the following effective current-based equation (Kuhn, Aertsen, & Rotter, 2004),

$$\langle\tau^{eff}_{\mu}\rangle\frac{d}{dt}V_{\mu}(t) = -V_{\mu}(t) + \frac{G^{L}_{\mu}V^{rest}_{\mu} + \langle G_{\alpha\mu}\rangle E_{\alpha}}{\langle G^{tot}_{\mu}\rangle}, \tag{3.45}$$

with

$$\langle G_{\alpha\mu}\rangle = C_{\alpha\mu}\,(m_{\alpha} + m^{ext}_{\alpha})\int_{\mathbb{R}} ds\, g_{\alpha\mu}(s)$$
$$\langle G^{tot}_{\mu}\rangle = G^{L}_{\mu} + \sum_{\alpha=1,\ldots,K}\langle G_{\alpha\mu}\rangle \qquad \langle\tau^{eff}_{\mu}\rangle = \frac{C^{mem}_{\mu}}{\langle G^{tot}_{\mu}\rangle}.$$


Synapse model. Following Kuhn et al. (2004), we can deduce from equation 3.45 a good approximation for the mean membrane potential and the variance. The mean membrane potential is given by the following form,

$$\langle V_{\mu}\rangle = \langle\tau^{eff}_{\mu}\rangle\left(\frac{V^{rest}_{\mu}}{\tau^{mem}_{\mu}} + \frac{E_{\alpha}}{C^{mem}_{\mu}}\langle G_{\alpha\mu}\rangle\right) \tag{3.46}$$
$$= \langle\tau^{eff}_{\mu}\rangle\left(\frac{V^{rest}_{\mu}}{\tau^{mem}_{\mu}} + \Psi_{\alpha\mu}(m_{\alpha} + m^{ext}_{\alpha})\right), \tag{3.47}$$

where

$$\Psi_{\alpha\mu} = \frac{C_{\alpha\mu}E_{\alpha}}{C^{mem}_{\mu}}\int_{\mathbb{R}} ds\, g_{\alpha\mu}(s)$$

depends on the chosen synapse. Using this mean membrane potential, equation 3.45 can be integrated for a synaptic event while the neuron is clamped around the mean value $\langle V_{\mu}\rangle$. The corresponding $\mathrm{PSP}_{\alpha\mu}(t)$ allows us to compute the variance:

$$\sigma^{2}(V_{\mu}) = C_{\alpha\mu}\,(m_{\alpha} + m^{ext}_{\alpha})\int_{\mathbb{R}} ds\,\mathrm{PSP}^{2}_{\alpha\mu}(s) \tag{3.48}$$
$$= \frac{\Sigma_{\alpha\mu}}{2}\,(m_{\alpha} + m^{ext}_{\alpha}). \tag{3.49}$$

We have to specify the form of $\Psi_{\alpha\mu}$ and $\Sigma_{\alpha\mu}$ for different synapses. For the conductance-based model, we consider only exponential synapses and α-synapses. Following the same computation as in the current-based model, we can determine the desired functions,

$$\Psi^{Exp}_{\alpha\mu} = \frac{\tau_{\alpha}\,\Delta g_{\alpha\mu}\,C_{\alpha\mu}\,E_{\alpha}}{C^{mem}_{\mu}} \qquad \Psi^{\alpha syn}_{\alpha\mu} = \frac{e\,\tau_{\alpha}\,\Delta g_{\alpha\mu}\,C_{\alpha\mu}\,E_{\alpha}}{C^{mem}_{\mu}} \tag{3.50}$$

and

$$\Sigma^{Exp}_{\alpha\mu} = \frac{C_{\alpha\mu}}{\tau_{\alpha} + \langle\tau^{eff}_{\mu}\rangle}\left(\frac{\tau_{\alpha}\,\Delta g_{\alpha\mu}\,\langle\tau^{eff}_{\mu}\rangle\,(E_{\alpha} - \langle V_{\mu}\rangle)}{C^{mem}_{\mu}}\right)^{2} \tag{3.51}$$
$$\Sigma^{\alpha syn}_{\alpha\mu} = \frac{C_{\alpha\mu}\left(2\langle\tau^{eff}_{\mu}\rangle + \tau_{\alpha}\right)}{2}\left(\frac{e\,\tau_{\alpha}\,\Delta g_{\alpha\mu}\,\langle\tau^{eff}_{\mu}\rangle\,(E_{\alpha} - \langle V_{\mu}\rangle)}{C^{mem}_{\mu}\left(\tau_{\alpha} + \langle\tau^{eff}_{\mu}\rangle\right)}\right)^{2}, \tag{3.52}$$

where $\Delta g_{\alpha\mu}$ is the synaptic strength.

Neuron transfer function in the master equation. The main difference with the current-based model is that the effective time constant is now activity dependent, and $\Sigma_{\alpha\mu}$ depends on the mean membrane potential. We consider


that the leading time constant in the neuron dynamics is the effective time constant, whatever the regime. The corresponding approximated transfer function can be written as

$$\nu_{\mu} = \frac{1}{2\langle\tau^{eff}_{\mu}\rangle}\left(1 + \mathrm{erf}\left(\frac{\langle V_{\mu}\rangle - V^{threshold}_{\mu}}{\sqrt{2}\,\sigma(V_{\mu})}\right)\right). \tag{3.53}$$
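The chain from equation 3.45 to equation 3.53 can be condensed into a short numerical sketch for exponential conductance synapses (our own illustration of the Kuhn et al., 2004, approximation; units are chosen so that nF/µS gives ms, and all names are hypothetical):

```python
import numpy as np
from math import erf, sqrt

def conductance_transfer_rate(m, m_ext, dg, E_syn, tau_syn, C,
                              g_L, C_mem, V_rest, V_th):
    """Approximate transfer function (equation 3.53), exponential synapses.

    Rates in 1/ms, conductances in uS, capacitance in nF, voltages in mV.
    """
    m, m_ext, dg, E_syn, tau_syn, C = map(np.asarray,
                                          (m, m_ext, dg, E_syn, tau_syn, C))
    rates = m + m_ext
    G_syn = C * rates * dg * tau_syn    # <G_alpha mu>; integral of g is dg*tau
    G_tot = g_L + np.sum(G_syn)         # <G_tot mu>
    tau_eff = C_mem / G_tot             # <tau_eff mu>
    tau_mem = C_mem / g_L
    # equation 3.46: <V> = <tau_eff> (V_rest/tau_mem + sum E <G> / C_mem)
    mean_V = tau_eff * (V_rest / tau_mem + np.sum(E_syn * G_syn) / C_mem)
    # sigma^2 = (Sigma/2) * rate with the exponential-synapse Sigma (eq. 3.51)
    var_V = np.sum(C * rates
                   * (tau_syn * dg * tau_eff * (E_syn - mean_V) / C_mem) ** 2
                   / (2.0 * (tau_syn + tau_eff)))
    # equation 3.53
    rate = 1.0 + erf((mean_V - V_th) / (sqrt(2.0) * sqrt(var_V)))
    return rate / (2.0 * tau_eff)
```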

The computation is thus more complicated but still straightforward. Wefirst define as previously the functions

$$Q_{\mu} = \langle V_{\mu}\rangle - V^{threshold}_{\mu} \tag{3.54}$$
$$K_{\mu} = 2\sigma^{2}(V_{\mu}) \tag{3.55}$$
$$x_{\mu} = \frac{Q_{\mu}}{\sqrt{K_{\mu}}}. \tag{3.56}$$

Because those functions now depend on mα in a more intricate way, the stepfunction derivatives, equation 3.15, must be written as follows:

$$\partial_{\lambda}a_{\mu}(\{m_{\gamma}\}) = \frac{1}{T}\left(\frac{e^{-x^{2}_{\mu}}}{\sqrt{\pi}\,\langle\tau^{eff}_{\mu}\rangle}\left(\frac{2K_{\mu}\,\partial_{\lambda}Q_{\mu} - Q_{\mu}\,\partial_{\lambda}K_{\mu}}{2K^{3/2}_{\mu}}\right) - \nu_{\mu}\frac{\partial_{\lambda}\langle\tau^{eff}_{\mu}\rangle}{\langle\tau^{eff}_{\mu}\rangle} - \delta_{\lambda\mu}\right), \tag{3.57}$$

and for the second derivative,

$$\begin{aligned}\partial_{\lambda}\partial_{\eta}a_{\mu}(\{m_{\gamma}\}) = \frac{e^{-x^{2}_{\mu}}}{\sqrt{\pi}\,T\,\langle\tau^{eff}_{\mu}\rangle}\Bigg(&\frac{1}{4K^{7/2}_{\mu}}\Big(2K_{\mu}(2Q^{2}_{\mu} - K_{\mu})\left(\partial_{\lambda}Q_{\mu}\,\partial_{\eta}K_{\mu} + \partial_{\eta}Q_{\mu}\,\partial_{\lambda}K_{\mu}\right)\\ &\quad + 4K^{3}_{\mu}\,\partial_{\eta}\partial_{\lambda}Q_{\mu} - 8K^{2}_{\mu}Q_{\mu}\,\partial_{\lambda}Q_{\mu}\,\partial_{\eta}Q_{\mu}\\ &\quad + Q_{\mu}(3K_{\mu} - 2Q^{2}_{\mu})\,\partial_{\lambda}K_{\mu}\,\partial_{\eta}K_{\mu} - 2K^{2}_{\mu}Q_{\mu}\,\partial_{\eta}\partial_{\lambda}K_{\mu}\Big)\\ &- \left(\frac{2K_{\mu}\,\partial_{\lambda}Q_{\mu} - Q_{\mu}\,\partial_{\lambda}K_{\mu}}{2K^{3/2}_{\mu}}\right)\frac{\partial_{\eta}\langle\tau^{eff}_{\mu}\rangle}{\langle\tau^{eff}_{\mu}\rangle} - \left(\frac{2K_{\mu}\,\partial_{\eta}Q_{\mu} - Q_{\mu}\,\partial_{\eta}K_{\mu}}{2K^{3/2}_{\mu}}\right)\frac{\partial_{\lambda}\langle\tau^{eff}_{\mu}\rangle}{\langle\tau^{eff}_{\mu}\rangle}\Bigg)\\ &+ \nu_{\mu}\left(\frac{2\,\partial_{\lambda}\langle\tau^{eff}_{\mu}\rangle\,\partial_{\eta}\langle\tau^{eff}_{\mu}\rangle}{T\,\langle\tau^{eff}_{\mu}\rangle^{2}} - \frac{\partial_{\lambda}\partial_{\eta}\langle\tau^{eff}_{\mu}\rangle}{T\,\langle\tau^{eff}_{\mu}\rangle}\right),\end{aligned}$$


with

$$Q_{\mu} = \langle\tau^{eff}_{\mu}\rangle\left(\frac{V^{rest}_{\mu}}{\tau^{mem}_{\mu}} + \Psi_{\alpha\mu}(m_{\alpha} + m^{ext}_{\alpha})\right) - V^{threshold}_{\mu}$$
$$\partial_{\lambda}Q_{\mu} = \partial_{\lambda}\langle\tau^{eff}_{\mu}\rangle\left(\frac{V^{rest}_{\mu}}{\tau^{mem}_{\mu}} + \Psi_{\alpha\mu}(m_{\alpha} + m^{ext}_{\alpha})\right) + \langle\tau^{eff}_{\mu}\rangle\,\Psi_{\lambda\mu}$$
$$\partial_{\eta}\partial_{\lambda}Q_{\mu} = \partial_{\eta}\partial_{\lambda}\langle\tau^{eff}_{\mu}\rangle\left(\frac{V^{rest}_{\mu}}{\tau^{mem}_{\mu}} + \Psi_{\alpha\mu}(m_{\alpha} + m^{ext}_{\alpha})\right) + \partial_{\eta}\langle\tau^{eff}_{\mu}\rangle\,\Psi_{\lambda\mu} + \partial_{\lambda}\langle\tau^{eff}_{\mu}\rangle\,\Psi_{\eta\mu}$$

and

$$K_{\mu} = \Sigma_{\alpha\mu}(m_{\alpha} + m^{ext}_{\alpha})$$
$$\partial_{\lambda}K_{\mu} = \partial_{\lambda}\Sigma_{\alpha\mu}\,(m_{\alpha} + m^{ext}_{\alpha}) + \Sigma_{\lambda\mu}$$
$$\partial_{\eta}\partial_{\lambda}K_{\mu} = \partial_{\lambda}\partial_{\eta}\Sigma_{\alpha\mu}\,(m_{\alpha} + m^{ext}_{\alpha}) + \partial_{\eta}\Sigma_{\lambda\mu} + \partial_{\lambda}\Sigma_{\eta\mu}.$$

The derivatives of the functions $\langle\tau^{eff}_{\mu}\rangle$ and $\Sigma_{\alpha\mu}$ are given in appendix C. The conductance-based model is now completely specified and can be used for a parameter space exploration as well.

3.4 Numerical Results. Simulations were done to compare the predicted stationary states given by the master equation and the corresponding neuron networks. In section 3.4.1, we explore different parameter spaces for the current-based neuron model, and the conductance-based neuron network is studied in section 3.4.2. Section 3.4.3 is devoted to more general models where a more realistic connectivity scheme is used.

3.4.1 Current-Based Models. We consider here two types of state diagrams based on the literature. The three parameters concerned are the external excitatory firing rate stimulation $m^{ext}_{exc}$ and the excitatory/inhibitory synaptic strengths $A_{\mu}$ with $\mu \in \{exc, inh\}$. With those parameters, we were interested in the $(m^{ext}_{exc}, g)$ space, where $g = A_{inh}/A_{exc}$, as in Brunel (2000) and Mehring et al. (2003). The second type of state diagram is generated by varying independently the excitatory and inhibitory synaptic strengths $(A_{exc}, A_{inh})$ while feeding every neuron with a constant current to sustain the activity (Vogels & Abbott, 2005). For each simulation, we want to investigate whether the model can provide a good prediction for the first and second statistical moments. Therefore, we systematically compare the mean activity and the standard deviation. First-order predictions have already been studied for similar network models (Brunel, 2000), and the corrective term


due to the second-order development does not contribute much. However, because we suggest using another neuron transfer function in the following, it is still interesting to perform the first-order analysis. For those simulations, N = 5000 neurons with a connection probability $p_{conn} = 0.01$. Neurons' intrinsic properties are homogeneous, with a membrane time constant $\tau^{mem}_{\mu} = 20$ ms, a refractory period $\tau^{ref}_{\mu} = 5$ ms, resting and reset potentials $V^{rest}_{\mu} = V^{reset}_{\mu} = -60$ mV, a threshold potential $V^{threshold}_{\mu} = -50$ mV, and a membrane resistance $R^{mem}_{\mu} = 100\ \mathrm{M}\Omega$ for $\mu \in \{exc, inh\}$. Synaptic properties and external stimulations depend on the parameter space under consideration.

The $(m^{ext}_{exc}, g)$ parameter space. For this state diagram we took exponential synapses with $\tau_{exc} = 1$ ms and $\tau_{inh} = 3$ ms for the excitatory and inhibitory synaptic time constants. The external stimulation is expressed in units of $\nu_{th}$, the frequency needed to bring the mean membrane potential to threshold with an excitatory Poisson input. As we are using exponential synapses, we have, based on equation 3.34,

$$\nu_{th} = \frac{V^{threshold} - V^{rest}}{J_{exc}\,C_{exc}\,\tau_{exc}},$$

where $J_{exc} = R^{mem}A_{exc}$; we have omitted the second index because of network homogeneity. We chose $A_{exc} = 0.02$ nA such that the resulting EPSP peak is $J_{exc} = 2$ mV. The inhibitory synaptic strength is defined by $A_{inh} = g\,A_{exc}$, where $g$ is the second parameter. We represent in Figure 2a the network mean cross-correlation and mean interspike interval CV to characterize the asynchronous irregular states.

Three regions can be outlined: a broad AI domain, an intermediate asynchronous regular (AR) region where neurons begin to fire periodically, and saturated synchronous regular (SR) states. We were interested in the AI regime, in which the Markovian approximation is assumed to apply. Therefore, we limited our analysis of the parameter regime to firing rates below $1/\tau^{mem} = 50$ Hz. Above this frequency, long-range correlations appear in each neuron's firing, and the analytical framework is not well suited to predict the macroscopic quantities. This is manifested by the emergence of a new peak near 0 in the ISI CV distribution (not shown), whereas interneuron correlations reach very small values. The absence of synchronous irregular (SI) states for high firing rates is essentially due to the absence of interaction delays between the neurons (Brunel, 2000). When a finite homogeneous delay is present, SI states can occur in the transitory region (Mehring et al., 2003); however, if the delays are drawn randomly from a broad distribution, this SI transition region is replaced by AR states. In the low-activity region, there is an increase of synchrony concomitant with a decrease in the mean ISI CV, typical of slow SI states. For the whole region, as long as the network


Figure 2: Characterization of the AI states and the first- and second-order statistics of the excitatory population activity in the $(m^{ext}_{exc}, g)$ parameter space, similar to Brunel (2000). The network contains N = 5000 neurons randomly connected with probability $p_{conn} = 0.01$ and current-based synaptic interactions. Every statistical quantity has been computed with a time bin of T = 5 ms, and the analytical model was solved with the same parameters. (a) The AI region is delimited using the mean ISI CV and the mean pairwise cross-correlation. In the right panel, we compute the activity CV to evaluate the validity of the independence hypothesis. (b) Top: Mean activity estimated from numerical simulations (left) and computed using the master equation formalism (middle). In the right panel, the relative difference between measured and predicted values. Bottom: The activity standard deviation is estimated from numerical simulations and compared as well with the mean-field predictions.

is homogeneous and the neurons are independent, the synchrony phase diagram should be well predicted by the activity second-order statistics. More precisely, it should be directly related to the activity coefficient of variation (CV). Therefore, in the last panel of Figure 2a, we computed the activity CV, which can be compared with the synchrony panel. Both diagrams exhibit the same tendency according to the parameter regime. Thus, this quantity could be estimated in the framework of the independence


hypothesis as long as the first- and second-order statistics are wellpredicted.

In Figure 2b, we computed and compared the mean excitatory activityand the standard deviation of excitatory activity for the parameter space.For each point, a simulation was run for 10 s in order to have an accept-able evaluation of the second-order statistics. The corresponding stationaryquantities were numerically computed using the master equation formal-ism, equation 3.16, with the transfer function, equation 3.29. To compareour prediction with simulations, we computed the relative difference,

$$\Delta(m^{ext}_{exc}, g) = \frac{O^{Simulation}(m^{ext}_{exc}, g) - O^{Prediction}(m^{ext}_{exc}, g)}{\left|O^{Simulation}(m^{ext}_{exc}, g)\right| + \left|O^{Prediction}(m^{ext}_{exc}, g)\right|},$$

where O can be either the mean activity or the standard deviation. First-order and second-order statistics are in good agreement over almost the entire AI region, with a relative error smaller than 0.1. However, the relative error is larger in the low-activity regime, as can be seen in Figure 2b. The reasons are twofold. On the one hand, for small networks exhibiting low activity, the synaptic input impinging on each neuron is equally low, and the diffusion approximation at the membrane potential level is no longer a good approximation. On the other hand, the increase of pairwise correlations among neurons could partially invalidate the model, as can be seen from the discrepancies between the synchrony and the activity CV. For very large networks, a mean-field approach can be recovered even in the low-activity regime, and it provides accurate predictions (Kumar et al., 2008).

As the in vivo activity in awake animals usually displays low firing rates, we further investigated whether larger networks could generate more realistic cortical activity regimes. In Figure 3a, we show the results of a simulation of a larger network (N = 15,000 neurons) while keeping other parameters exactly the same as used in the simulation of a smaller network described in Figure 2. It is apparent that the region between 6 Hz and 10 Hz, which was not well predicted previously, can now be better described by the mean-field approach (with relative errors smaller than 0.1). This region is delimited by black curves in Figure 3a.

To pursue this analysis, we chose a row of parameters in the phase diagram and computed the mean firing rate numerically and with the master equation for various network sizes. We found that the relative error for each network size increased very slightly for smaller firing rates, as expected (see Figure 3b). However, the relative error still stays below 0.1, even for firing rates around 5 Hz. The corresponding predicted firing rates are given in Figure 3c for the different network sizes. The error bars are computed with the relative error using $O^{Prediction}(m^{ext}_{exc}, g)\,\left|\Delta(m^{ext}_{exc}, g)\right|$. In the right panel, we show for a particular point of the phase diagram, $(m^{ext}_{exc}, g) = (6\cdot\nu_{th}, 141)$, a decrease in firing rate as well as an increase in the relative error. The


Figure 3: Master equation predictions for low firing rates depend on network size. (a) A large network of N = 15,000 neurons has been used to draw a phase diagram similar to Figure 2. The region between the two black curves exhibits firing rates smaller than 10 Hz (higher curve) and relative errors smaller than 0.1 (lower curve). The predictions are improved for a broad range of frequencies compared to a smaller network. (b) The relative error between numerical simulations and predictions for a row of parameters in the phase diagram (indicated by a dashed rectangle in a) for different network sizes. (c) The relative error increases slightly for larger networks, but the firing rate decreases at the same time. Left: Predicted firing rate in the row for each network size. The error bars indicate the relative error in units of the corresponding predicted value. The firing rates decrease quickly compared to the error. Right: The firing rate decreases as a function of network size for a particular point in the phase diagram, $(m^{ext}_{exc}, g) = (6\cdot\nu_{th}, 141)$. Inset: Decay of the standard deviation with the network size. All networks had identical parameters as in Figure 2.

predicted firing rate seems to decrease exponentially compared to the rela-tive error, which does not change much. A similar tendency has been foundfor the standard deviation. The inset of the right panel of Figure 3c showsthat the error remains almost constant while the activity standard deviationdecreases for larger networks. The activity coefficient of variation indicatesthat the averaged pairwise correlations also decrease. Therefore, based onthis result, it should be possible to find large enough networks to cover arealistic range of firing frequencies.


Figure 4: Correlation functions for the specific point $(m^{ext}_{exc}, g) = (8\cdot\nu_{th}, 41)$ of Figure 2. (a) Cross-correlation between inhibitory and excitatory population activity for the master equation (thick line) and the numerical simulations (thin line). (b) Autocorrelation functions computed from the master equation for the inhibitory and excitatory activity (thick lines) and with T = 10 ms. The same functions have been estimated from the numerical simulations (thin lines). The activity traces have been computed with a bin size of 1 ms and then filtered with a gaussian function to get a smoother curve. The gaussian standard deviation was chosen to be 1.5T to match the corresponding function without filtering.

Covariance and inhibitory population statistics are equally well predicted. To compare the activity correlations, we chose a point in the parameter space for which we estimated the inhibitory and excitatory autocorrelations and the cross-correlation. The corresponding functions were computed from the master equation using the last equation of equation 3.16 and the stationary mean activity computed for that particular state (see Figure 4). To keep a good resolution of the numerical functions, we took a time bin T = 10 ms and filtered the signal with a gaussian function with a standard deviation equal to 1.5T. This gives a good prediction except for some parts of the functions, which can be due to a temporal finite-size effect. Indeed, the network is homogeneous, so the only difference between the two populations is their size. If those populations are not large enough, residual oscillations that cannot be predicted in our framework occur in the correlation functions. These oscillations also appear when the bin size T is too small (see Figure 1b). According to equation 3.16, we know that the interplay between network size and the considered timescale must be carefully taken into account to validate the model. For a very large network size or time constant T, the fast oscillations completely vanish. Moreover, we know from equation 3.16 that the correlation matrix depends directly on the first derivative of the neuron transfer function. More precisely, the predicted exponential decrease of the correlations, as well as the activity standard deviation, is governed by this function. Therefore, regions where the autocorrelation and


the cross-correlation functions can be correctly described are regions wherethe second-order statistics match the numerical simulations.

The $(A_{exc}, A_{inh})$ parameter space. To reproduce results obtained in Vogels and Abbott (2005), we chose a similar model to probe the $(A_{exc}, A_{inh})$ parameter space. In the AI regime, current-based neurons cannot sustain activity without external stimulation, and one needs to inject a constant input into each neuron by bringing the resting potential to $V^{rest} = -49$ mV, as in the original study. We also took exponential synapses with excitatory and inhibitory time constants $\tau_{exc} = 5$ ms and $\tau_{inh} = 10$ ms, respectively. The only free parameters remaining are the current quantal increments $(A_{exc}, A_{inh})$. We first computed the mean ISI CV, the mean pairwise cross-correlation, and the activity CV to find the boundaries of the AI region. As in the previous section, the state diagram was drawn in the AI regime for the excitatory population with the master equation prediction using equation 3.29 and the neuron network simulations (see Figure 5). The mean activity prediction is in good agreement with the numerical simulations, with a relative error smaller than 0.1, as shown in the first row of Figure 5b. The relative error is larger for the low-activity regime (<8 Hz), where it can reach 0.2. This can be understood as previously: in this region, the synaptic input is not strong enough to fulfill the conditions required for the diffusion approximation. This error can be reduced for larger networks (see Figure 3). The prediction for the standard deviation matches quite well except for the regions of high and low activity, where the difference is more substantial. The latter can be expected from the first-order comparison. Concerning the former, the region is at the boundary of the AR regime, and the Markov hypothesis begins to fail because of the emergence of regularities in the neurons' individual firing.

Similar simulations were done using the transfer function, equation 3.28, and the resulting predictions were worse compared to the previous results (not shown). This transfer function was obtained as a first-order approximation for exponential synaptic noise when the ratio $\sqrt{\tau_{syn}/\tau^{mem}}$ is small. In the numerical simulations here, the inhibitory synaptic time constant is half the membrane time constant, and the assumption underlying equation 3.28 begins to fail. The phenomenological transfer function, equation 3.29, therefore seems more appropriate for these neuron properties.

3.4.2 Conductance-Based Models. Conductance-based neuron networks have been shown to display self-sustained activity for specific regions of parameter space (Vogels & Abbott, 2005). For the $(\Delta g_{exc}, \Delta g_{inh})$ parameter space, two regions have been identified, exhibiting, respectively, AI and SR states (see Figures 6a–6c).

In this section, we study a similar model with a resting membrane time constant $\tau^{mem} = 20$ ms, a membrane resistance $R^{mem} = 100\ \mathrm{M}\Omega$, the


Figure 5: Characterization of AI states and the first- and second-order statistics of the excitatory population activity in the $(A_{exc}, A_{inh})$ parameter space, in a Vogels-Abbott-type current-based network. The network contains N = 5000 neurons randomly connected with probability $p_{conn} = 0.01$. Every statistical quantity has been computed with a time bin of T = 5 ms, and the analytical model was solved with the same parameters. (a) The AI region is delimited using the mean ISI CV and the mean pairwise cross-correlation. In the right panel, we compute the activity CV to evaluate the validity of the independence hypothesis. (b) Top: Mean activity estimated from numerical simulations (left) and computed using the master equation formalism (middle). In the right panel, the relative difference between measured and predicted values. Bottom: The activity standard deviation is estimated from numerical simulations and compared as well with the mean-field predictions.

resting and reset membrane potentials $V^{rest} = V^{reset} = -60$ mV, a threshold $V^{threshold} = -50$ mV, and a refractory period $\tau^{ref} = 5$ ms. Synaptic time constants and reversal potentials are taken to be $\tau_{exc} = 5$ ms, $E_{exc} = 0$ mV, $\tau_{inh} = 10$ ms, and $E_{inh} = -80$ mV for excitatory and inhibitory synapses, respectively. Given the state diagram (see Figures 6a–6c), we see that the AI region is surrounded by unstable states, which are not present in the current-based counterpart (compare with Figure 5). This is a specificity of the conductance-based model and raises an interesting issue


Figure 6: Sources of instabilities in Vogels-Abbott-type networks. Study of the stability (a), the mean activity (b), and the mean ISI CV (c) in the $(\Delta g_{exc}, \Delta g_{inh})$ parameter space for a self-sustained conductance-based neuron network. The network in panels a to c contains N = 10,000 neurons, each randomly connected to a proportion $p_{conn} = 0.02$ of the population. The network is considered stable if its activity lasts longer than 1 s. (a–c modified, with authorization, from Vogels and Abbott, 2005.) (d–f) Excitatory population mean activity for different network structures. For these numerical simulations, the network is considered stable if its activity lasts longer than 3 s. The transitory region between the AI and the SR domains in the $(\Delta g_{exc}, \Delta g_{inh})$ parameter space is sensitive to synaptic input fluctuations. (d) Network of N = 5000 neurons with $p_{conn} = 0.01$. The AR domain is fully stable. (e) Network of N = 5000 neurons with $p_{conn} = 0.02$. Some disparate points in the AR domain lose their stability. (f) Network of N = 10,000 neurons with $p_{conn} = 0.01$. The AR domain is almost completely unstable. (g) Network lifetime in log scale computed in the $(\Delta g_{exc}, \Delta g_{inh})$ parameter space for a network of N = 10,000 neurons and $p_{conn} = 0.01$. We took $m_{crit} = 1$ Hz for the critical activity because no self-sustained activity of 1 Hz could be produced with this network model. Furthermore, this prediction has been computed using the effective transfer function introduced below. The stability criterion matches qualitatively with f for the low-activity AI region.


concerning network stability in numerical simulations and the corresponding master equation prediction.

Stability in self-sustained networks. According to the stationary solution of the master equation, the active state can be unstable or stable in the usual sense. However, for stable points, there is an important consideration to take into account to match the numerical simulations. The framework presented here is a mean-field solution that does not take into account every finite-size effect that can be encountered in numerical models. For instance, stable states in the model with very weak activity are not sustained in network simulations. In some cases, the network activity fluctuations are too strong and can bring the dynamics to a quiescent state after a transient period. It has been shown that for multi-unit systems such as neural networks, a quasi-stationary state can survive for a period that grows exponentially with the system size (Crutchfield & Kaneko, 1988). More particularly, in self-sustained conductance-based networks, the state loses its stability when the network activity exhibits fluctuations that are too strong (Kumar et al., 2008). Therefore, we can adopt a criterion to keep those states with large fluctuations according to the timescale we are interested in. For instance, in Vogels and Abbott (2005), a network is said to be stable if it can sustain its activity longer than a second. Those networks will eventually fall into a quiescent state after a long transient. In our framework, once the equations are solved, we can compute the total mean activity and the total variance. For a balanced network, we have

$$\langle m_{tot}\rangle = \langle(1-\gamma)m_{exc} + \gamma m_{inh}\rangle = (1-\gamma)\langle m_{exc}\rangle + \gamma\langle m_{inh}\rangle$$
$$\sigma^{2}(m_{tot}) = \langle\left((1-\gamma)m_{exc} + \gamma m_{inh}\right)^{2}\rangle - \langle(1-\gamma)m_{exc} + \gamma m_{inh}\rangle^{2} = (1-\gamma)^{2}\sigma^{2}(m_{exc}) + \gamma^{2}\sigma^{2}(m_{inh}) + 2(1-\gamma)\gamma\,c_{exc/inh}.$$

As $m_{exc}$ and $m_{inh}$ are described by normal laws defined by only the first two statistical moments, the total activity also follows a normal law with the following characteristics:

$$m_{tot} \sim \mathcal{N}\left((1-\gamma)\langle m_{exc}\rangle + \gamma\langle m_{inh}\rangle,\; (1-\gamma)^{2}\sigma^{2}(m_{exc}) + \gamma^{2}\sigma^{2}(m_{inh}) + 2(1-\gamma)\gamma\,c_{exc/inh}\right). \tag{3.58}$$

Therefore, we can use a simple stability criterion by saying that the survival time is inversely proportional to the probability for the activity to be below a critical value $m_{crit}$ near 0. Indeed, the spontaneous AI state is said to be quasi-stationary, such that its activity is described by a stationary distribution. Eventually the dynamics will fall into the quiescent state, which is represented in mean-field theory by the probability mass below $m_{crit}$


normalized by the minimal time step,

$$T_{survival} = \frac{T}{P(m_{tot} < m_{crit})} \tag{3.59}$$
$$= \frac{T}{F(m_{tot} = m_{crit})}, \tag{3.60}$$

where $F(m_{tot} = m)$ is the repartition (cumulative distribution) function. A similar criterion has been proposed in Kumar et al. (2008), but based on a first-order semianalytical model. In their argument, they mention a critical time window that corresponds to our network effective time constant T. In numerical simulations, they could estimate its value to be around 1 ms, which is very short compared to the other time constants in the network model. This is encouraging for applying the master equation formalism well beyond the regime of large time constants. We can easily see from equations 3.16 and 3.22 that the covariance matrix is inversely proportional to the number of neurons in each population, so that $T_{survival}$ grows exponentially with the number of neurons. Discrepancies from this result can be caused by residual correlations between neurons that could invalidate the binomial model. We can further estimate the dependency of the network lifetime on other parameters. We show in Figure 6g, in log scale, the network lifetime evaluated with equation 3.59 in the $(\Delta g_{exc}, \Delta g_{inh})$ parameter space for a network of N = 10,000 neurons with $p_{conn} = 0.01$. We see that the AI region considered unstable in Figure 6f matches the region where the predicted lifetime is of the order of a second or less. Therefore, the state diagram for self-sustained networks is sensitive to the timescale under consideration.
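The lifetime criterion of equations 3.58 to 3.60 reduces to a gaussian tail probability; a minimal sketch (ours, with hypothetical names, rates in 1/ms, and T in ms):

```python
from math import erf, sqrt

def total_moments(gamma, mean_exc, mean_inh, var_exc, var_inh, cov):
    """First two moments of m_tot = (1-gamma) m_exc + gamma m_inh (eq. 3.58)."""
    mean_tot = (1.0 - gamma) * mean_exc + gamma * mean_inh
    var_tot = ((1.0 - gamma) ** 2 * var_exc + gamma ** 2 * var_inh
               + 2.0 * (1.0 - gamma) * gamma * cov)
    return mean_tot, var_tot

def survival_time(T, mean_tot, var_tot, m_crit):
    """Equation 3.59: T divided by the gaussian probability P(m_tot < m_crit)."""
    p_below = 0.5 * (1.0 + erf((m_crit - mean_tot) / sqrt(2.0 * var_tot)))
    return T / p_below
```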

Beyond this stability issue, which is due to the limited lifetime of the network activity, there is one more important source of instability that can also cause the network to reach the quiescent state. We can see numerically in Figures 6d to 6f that if the subthreshold membrane potential fluctuations are too strong, they induce supplementary fluctuations at the network level, which can destroy the activity spontaneously. These strong fluctuations can be caused by high levels of connectivity. Therefore, the upper AI region, which is predicted to be stable, can be numerically unstable because of those strong fluctuations. In Kumar et al. (2008), this region is not systematically unstable, and the lack of evidence from a second-order theory seems to reveal either an artifact of the simulation or a breakdown of the model prediction. The sensitive region lies between the AI and the SR regimes, in which correlations can become critical. Indeed, the AR region displays correlated spike trains, which is not taken into account within the Poisson hypothesis, and this could go beyond the Markovian prediction. Kumar et al. (2008) have also suggested that the initial stimulation could contribute to this instability. Indeed, the initial stimulation must be adequate to allow the network to reach a self-sustained state. Increasing the network size will diminish the


firing rate, as predicted by the numerical and analytical transfer functions, making the network more likely to shut down given the fluctuation state. This instability is reduced for sparser connectivities.

The $(\Delta g_{exc}, \Delta g_{inh})$ parameter space. We focus on the last configuration shown in Figure 6 to check the validity of the theoretical prediction. For each stable point of the state diagram, the set of differential equations 3.16 is solved with the transfer function, equation 3.53. The resulting analysis (see Figure 7) shows that the predictions are qualitatively correct, but the relative difference increases rapidly for low activity. There is a narrow region in the AI regime where the prediction is good for the first and second statistical moments. This important discrepancy is due to the approximation made for the transfer function, which is not derived for a complete conductance-based neuron with threshold. Similar results have already been reported in Kuhn et al. (2004) using the same transfer function.

To circumvent this problem, a semianalytical method has been proposed (Soula & Chow, 2007; Kumar et al., 2008) in which the conductance-based neuron transfer function is determined numerically and then used directly in the model for a first-order or second-order prediction. Although this approach can give good predictions in the AI regime, it requires computing the transfer function numerically for each point of the state diagram, which can be time-consuming for heterogeneous networks. Indeed, the transfer function is determined by computing the neuron firing rate for a given population and for every state of each population in the network. Therefore, it is necessary to find an effective analytical function that can provide a better approximation. Although we cannot have an exact expression, it is still possible to fit a phenomenological model to the numerical simulations. Based on previous observations, two parameters seem critical in the transfer function: the time constant $\tau$ in the denominator and a corrective term $\Delta h$, which takes into account colored noise in the synaptic input (Brunel & Sergi, 1998; Fourcaud & Brunel, 2002). The phenomenological function based on equation 3.27 can thus be written

$$\nu = \frac{1}{2\tau}\left(1 + \mathrm{erf}\left(\frac{\langle V\rangle - V^{threshold}}{\sqrt{2}\,\sigma(V)} + \Delta h\right)\right). \tag{3.61}$$

Considering the limited region of the whole $(\Delta g_{exc}, \Delta g_{inh})$ parameter space in Figure 7, we can estimate the total error made by the set of transfer functions, equation 3.61, according to the free parameters $(\tau, \Delta h)$. The total error is estimated by taking the sum of the relative differences, in absolute value, between the simulation mean activity and the mean activity given by equation 3.61, normalized by the number of stable points. For this optimization problem, there is a global minimum for which the error is small (see Figure 8). This computation depends on the network configuration and


Figure 7: AI states characterization and the first- and second-order statistics of the excitatory population activity in the $(\Delta g_{exc}, \Delta g_{inh})$ parameter space in a Vogels-Abbott-type conductance-based network. The network contains N = 10,000 neurons randomly connected with probability $p_{conn} = 0.01$. Every statistical quantity has been computed with a time bin of T = 5 ms, and the analytical model was solved with the same parameters. (a) The AI region is delimited using the mean ISI CV and the mean pairwise cross-correlation. In the right panel, we compute the activity CV to evaluate the validity of the independence hypothesis. (b) Top: Mean activity estimated from numerical simulations (left) and computed using the master equation formalism with the neuron transfer function, equation 3.53 (middle). In the right panel, the relative difference between measured and predicted values. Bottom: The activity standard deviation is estimated from numerical simulations and compared as well with the mean-field predictions.

the portion of the state diagram used to estimate the error. Therefore, this procedure gives a good local approximation that is acceptable as long as the network parameters (here the synaptic strengths) are kept in the fitted region.
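The fitting procedure itself is a brute-force scan of the $(\tau, \Delta h)$ plane, as in Figure 8. A sketch of this optimization (ours; `predict_mean` and `sim_mean` are hypothetical stand-ins for the master equation solution with equation 3.61 and for the simulated mean activities):

```python
import numpy as np

def fit_effective_transfer(taus, dhs, stable_points, predict_mean, sim_mean):
    """Grid search minimizing the mean absolute relative difference between
    predicted and simulated activities over the stable points of the diagram."""
    best, best_err = None, np.inf
    for tau in taus:
        for dh in dhs:
            total = 0.0
            for p in stable_points:
                pred = predict_mean(tau, dh, p)
                total += abs(pred - sim_mean[p]) / (abs(pred) + abs(sim_mean[p]))
            err = total / len(stable_points)
            if err < best_err:
                best, best_err = (tau, dh), err
    return best, best_err
```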

The second-order statistics depend on the transfer function behavior around the stationary point through the first and second derivatives of this


Figure 8: Error landscape for the optimization problem that determines the best parameter set for the effective transfer function, equation 3.61, in the $(\Delta h, \tau)$ plane (corrective term $\Delta h$ from 0.06 to 2.76; time constant $\tau$ from 0.3 to 13.8 ms). It has been computed for a network of N = 10,000 neurons and $p_{conn} = 0.01$. The minimal solution (circle) gives $\tau = 5.3$ ms and $\Delta h = 1.5$.

function. In particular, the covariance and correlation matrices are strongly affected by the slope of the transfer function at the stationary point. Therefore, we compare, for several points of the parameter space, the effective transfer function with the semianalytical approach (see Figure 9). For each network configuration, the transfer function of a neuron is computed numerically and compared with the optimized transfer function. The local behavior around the stationary point is in good agreement between the two.

Using the optimized transfer function, we can compare the prediction to the numerical simulations (see Figure 10). The error is reasonable compared to Figure 7, especially for the standard deviation, for which the prediction is much improved. Therefore, for a given network configuration, there is an effective transfer function that can provide a good description of the network dynamics in a large part of the parameter space. This allows us to avoid the semianalytical approach but requires solving an optimization problem based on numerical simulations.

This method could be useful when considering several network units described by the master equation. Indeed, we would eventually like to apply the master equation formalism to large-scale cortical recordings. In order to do so, it will be necessary to acquire a more realistic transfer function than


Figure 9: Transfer function estimated with the semianalytical approach compared with the optimized effective transfer function for several points of the $(\Delta g_{exc}, \Delta g_{inh})$ parameter space. For each figure, the excitatory synaptic strength is fixed, and the inhibitory synaptic strength takes the values $\Delta g_{inh}$ = 1, 11, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 41, 51, 61, 71, 81, and 91 nS from the topmost curve to the lowest, monotonically. In the first-order mean-field approximation, the network stationary activity is found by imposing that the input and output rates must be equal. The intersection with the diagonal line marks this first-order solution. (a) Effective transfer function and (b) numerical transfer function for $\Delta g_{exc} = 7$ nS. (c) Effective transfer function and (d) numerical transfer function for $\Delta g_{exc} = 10$ nS.

the one obtained from integrate-and-fire neurons. Using the optimization strategy, we could consider a large family of functions based on the usual theoretical results and obtain a representative transfer function adapted to high-conductance states. This could be done with dynamic-clamp in vitro experiments by finding self-consistent solutions for a broad range of stimulation regimes. The resulting transfer function would then be used as a kernel in the master equation formalism to represent the dynamical properties of the corresponding neuron population in the network unit. In


Figure 10: First- and second-order statistics of the excitatory population activity in the $(\Delta g_{exc}, \Delta g_{inh})$ parameter space. A Vogels-Abbott network was simulated with N = 10,000 conductance-based neurons randomly connected with probability $p_{conn} = 0.01$. The moments are computed with a time bin of T = 5 ms, and the analytical model was solved with the same parameters and an effective transfer function (see equation 3.61). (Top) Mean activity and standard deviation estimated from numerical simulations. (Middle) Mean activity and standard deviation computed from the master equation formalism. (Bottom) Relative difference between measured and predicted values.

the context of voltage-sensitive dye optical imaging, this unit would be associated with a small set of neighboring pixels. Once effective couplings between these units are extracted from data, it should be possible to obtain a comparative theoretical model with which to study activity propagation within large-scale cortical areas, in particular dynamical phase transitions under different network-controlled conditions.

3.4.3 Effect of Topology and Heterogeneity. In this section, we discuss thegenerality of the master equation formalism and possible sources of devi-ations from the predictions. Heterogeneity in the synaptic input across thenetwork can create a bias in the mean activity of the network that is nottaken into account in a mean-field model. Indeed, if neurons in the networkdo not receive exactly the same number of synapses, the firing rate distribu-tion in the network will tend to be skewed compared to the sharp gaussian


Figure 11: The neuron firing rate distribution in the AI state according to the degree of heterogeneity in the number of incoming synapses per neuron. The network contains N = 10,000 neurons with a mean connectivity proportion $p_{conn} = 0.02$. The synaptic strengths are given by $(\Delta g_{exc}, \Delta g_{inh}) = (6, 67)$ nS. The number of incoming synapses is drawn from a gaussian distribution. From the back curve to the front curve, the standard deviation increases, resulting in increasingly skewed curves.

distribution for a homogeneous system. If the incoming connection num-ber is gaussian distributed, the skewness of the firing rate distribution willincrease with the standard deviation of the gaussian (see Figure 11), re-sulting in a shift of the mean activity. Similar results were reported by vanVreeswijk and Sompolinsky (1998) for heterogeneous thresholds. Therefore,predictions given by the master equation could lose some accuracy if thenetwork is not perfectly homogeneous. However, this kind of heterogeneitydoes not create dramatic changes in the mean firing rate, so the model stillprovides good predictions.

Another interesting aspect of the theory concerns the connectivity schemes of the network. When the theoretical framework was built, two important hypotheses were made that directly concern the network circuitry. We assumed that the connectivity is sparse and that the spiking probability is independent from one neuron to another at each time step. Any connectivity scheme that satisfies those two hypotheses should be describable by this master equation formalism. Although connections in the cortex are highly specific, they are known to exhibit the sparseness property. As


we are describing macroscopic quantities of the network dynamics, high-order structure in the connectivity should not affect the analytical validity. However, the independence hypothesis can be broken if substantial correlations appear between neurons. In our simulations, we used a random connectivity scheme to avoid this situation. This choice is reasonable for very small networks, but for larger networks, it is necessary to consider a more realistic connectivity scheme. In previous work (Mehring et al., 2003), a numerical study of the $(m^{ext}_{exc}, g)$ parameter space was made for networks of locally randomly connected neurons with periodic boundary conditions. In this model, each neuron was connected to a fixed proportion $p_{conn} = 0.1$ of its neighbors according to a gaussian probability law (see Figure 12c). The standard deviation is taken to be 0.3 mm for a square network with a boundary length of 2 mm. This network contains N = 112,500 current-based neurons with a ratio of 4:1 between excitatory and inhibitory neurons. Neurons interact with a fixed delay of 1.5 ms. The network is homogeneous, with membrane time constant $\tau^{mem} = 10$ ms, refractory period $\tau^{ref} = 2$ ms, resting and reset potentials $V^{reset} = V^{rest} = -70$ mV, and threshold $V^{threshold} = -50$ mV. α-synapses were used with $\tau_{exc} = \tau_{inh} = 0.3$ ms, and the excitatory synaptic strength was chosen such that the EPSP peak equals 0.14 mV. We implemented the same network in the formalism with the transfer function (see equation 3.29) and computed the excitatory mean activity to compare with their results (see Figures 12a and 12b). For this macroscopic quantity, it seems that correlations due to local connections do not dramatically alter the first-order mean-field predictions. This is very encouraging, and we hope to get qualitatively good descriptions of large-scale cortical networks based on this generic behavior of balanced networks. Of course, those who are interested in higher-order statistics of the dynamics have to rely on a more specific model of the network connectivity.

We investigated this question for a network that integrates some realistic features. Based on anatomical data for the rat (DeFelipe, Alonso-Nanclares, & Arellano, 2002), we modeled a portion of the cortex as a layer with periodic boundary conditions respecting the superficial neuron density. This is defined as $\rho_{V}\,e = \rho_{S} \approx 28{,}183$ neurons/mm², with $\rho_{V} \approx 61{,}670$ neurons/mm³ the volume density of layers 2–3 and $e \approx 0.457$ mm the depth of these layers. We also took distance-dependent delays with a homogeneous propagation speed of $v_{prop} = 5$ mm/ms. Intrinsic neuron properties are identical to the model used in Vogels and Abbott (2005), and the network contains N = 10,000 neurons. Connections for a neuron follow a uniform law on a disc centered on the neuron. We considered as free parameters the radius of the connectivity disc and the proportion $p_{conn}$ of connected neurons inside the disc (see Figure 12f). For a given synaptic strength set $(\Delta g_{exc}, \Delta g_{inh}) = (6, 67)$ nS, we computed the mean excitatory activity as well as the mean interspike interval coefficient of variation to probe the first- and second-order properties of the dynamics (see Figures 12d and 12e). We notice the existence of isostatistics lines where


Figure 12: Effect of local network connectivity on macroscopic statistics. (a–c) Comparison of mean excitatory activity for a locally randomly connected current-based network (Mehring et al., 2003) in the $(m^{ext}_{exc}, g)$ parameter space. (a) Numerical simulations with gaussian-distributed connectivity (panel reproduced, with permission, from Mehring et al., 2003). (b) First-order mean-field predictions. (c) The gaussian-distributed connectivity scheme. (d–f) Effect of local uniformly random connectivity. The parameter space is described by the connectivity disc radius and the connection probability inside the disc. (d) Mean excitatory activity. (e) Mean excitatory interspike interval coefficient of variation. (f) The uniform local random connectivity scheme.

the network displays the same macroscopic behavior. Those lines lie on network configurations where the number of inputs per neuron is constant. Indeed, if the disc radius is increased, the number of incoming synapses will also increase, and it is necessary to decrease the connection probability to recover the same statistical properties. This confirms the previous results shown in Figures 12a and 12b. Interneuron correlations due to more local connections do not seem to drastically invalidate the mean-field predictions as long as the number of connections per neuron is kept fixed and homogeneous. Note that in Figure 12, the randomly connected network does not appear as a stable state because of the distance-dependent delays. Indeed, self-sustained activity cannot occur if the interactions within the network are too slow, and we see a large domain of the state diagram that is unstable.

Following the discussion started at the end of the previous section,this result provides an encouraging foundation for large-scale modeling.Indeed, at first sight, it seems intractable to obtain a realistic model of


mesoscopic cortical dynamics when considering the high specificity of the wiring. However, this specificity should not be relevant if population quantities are considered. Extrinsic optical imaging data provide such a quantity, which could be related to the master equation variables. Therefore, all high-order patterns in the connectivity should be translated, by statistical principles, into effective macroscopic couplings available from large-scale data. Further investigations should be devoted to this question to ensure that specific high-order characteristics would not have a dramatic impact on global network dynamics.

4 Discussion

In this letter, we have proposed a mean-field description of AI cortical activity states in balanced networks. We have considered a master equation formalism to describe the activity of networks of size N for timescales larger than a characteristic time T. These numbers were kept finite in order to account for finite-size effects and thus obtain a "mesoscopic" level of description. The resulting phenomenological theory provides a dynamical description of spiking neuron networks that seeks to predict state diagrams but could also be used beyond stability analysis. Furthermore, this framework can be used for any type of neuron as long as its transfer function is known. We obtained a closed set of equations for the mean and variance of the activity and showed that the state diagrams predicted by the mean-field model matched well the diagrams obtained numerically for different types of networks proposed previously (Brunel, 2000; Mehring et al., 2003; Vogels & Abbott, 2005).

Studies of network dynamics have shown that higher-order statistics are crucial in balanced networks. Indeed, chaotic behavior in those systems is produced by the balanced dynamics at the membrane potential level (van Vreeswijk & Sompolinsky, 1996, 1998; Brunel, 2000). Once the mean membrane potential is "clamped" at its subthreshold value, only fluctuations can bring the neuron to fire, thereby producing irregular firing. However, although firing irregularity can be estimated from the stationary interspike interval coefficient of variation (Tuckwell, 1988), no previous model describes the dynamics of second-order statistics at the network level. The Markovian approach is directly constrained by the fine structure of the population activity correlations during AI states and by the minimal bin size needed to capture the network dynamics. According to the numerical model, those values seem to be of the same order, which considerably simplifies the choice of the parameter T.

The master equation model directly relies on the neuron transfer func-tion, which can be chosen according to the desired network model. Thistransfer function can be exactly determined in current-based networksprovided that the input spike trains follow Poisson processes with Diracsynapses (Tuckwell, 1988; Brunel, 2000). However, to cover a broad range ofsynapse models, we adopt a phenomenological function that can account


for first- and second-order network statistics. For conductance-based models, no exact solution of the equation can be found. We also have to seek approximations to obtain the transfer function. Two main solutions have been proposed: estimating the transfer function numerically (Kumar et al., 2008; Soula & Chow, 2007) or using an approximation with the mean membrane potential of the conductance-based model and the variance taken from an effective current-based model (Kuhn et al., 2004). To stay within an analytical framework, we decided to use the second approach, which can easily be implemented in our model.

In current-based models, the predicted state diagrams are in good agreement with the numerical simulations. In this case, the discrepancies are likely due to residual correlations caused by finite-size effects or by the phenomenological transfer function. Indeed, for very low activity in small networks, the diffusion approximation is no longer legitimate, and the corresponding transfer function leads to incorrect predictions. We showed that those frequencies can be better described for larger networks, providing a good model for cortical dynamical regimes. In conductance-based models, the predicted diagrams match the numerical simulations qualitatively but give poor quantitative predictions. It is, however, possible to find an optimized transfer function from a set of functions described by two free parameters. This significantly improves the predictions. In section 3.4.2, we discussed the possibility of adapting this method to dynamic-clamp recordings in order to obtain biophysically more realistic transfer functions directly estimated from real neurons. At a large-scale level, this model could faithfully represent the behavior of conductance-based balanced networks and is therefore a good candidate for building macroscopic models of local field potentials or optical imaging data.

Preliminary results have shown that considering more local connectivities, instead of random schemes, does not significantly alter the master equation predictions as long as the sparseness is strong enough. Therefore, first- and second-order activity statistics do not require an exact description of the network structure. However, heterogeneity among the neurons can be responsible for slight discrepancies between simulations and predictions. Although our model does not take into account specific delays between neurons, for random delays of the order of T, the numerical simulations are even closer to the predictions (data not shown). Indeed, global oscillations are destroyed by heterogeneous delays, and the AI region is larger. We are currently working on a systematic study of phase diagrams and their dependence on various parameters.

In conclusion, we have proposed here a mean-field approach to describe the activity of large networks while still staying in the finite-size regime. Such a "mesoscopic" description constitutes a first step toward obtaining a large-scale model of cerebral cortex tissue. The typical size of the networks considered here (N ≈ 5000 neurons) can be thought of as representing the population of cortical neurons seen under one or several pixels of optical

Page 48: A Master Equation Formalism for Macroscopic Modeling of ...cns.iaf.cnrs-gif.fr/files/Master2008.pdfbe 5000 neurons with a ratio of 1:4 between excitatory and inhibitory (γ = N inh

A Master Equation for Spiking Neuron Networks 93

imaging data. Typical values are 100×100 pixels, covering from about3 × 3 mm to 3 × 3 cm of cortical tissue, which gives about 5 to 5000 neuronsper pixel in superficial layers according to neuronal densities publishedpreviously (Braitenberg & Schuz, 1998). Thus, constructing a 100 × 100 net-work of such populations, each described by a master equation analogousto the model presented here, would be possible if the connectivity betweenadjacent and distant populations could be incorporated in the formalism.This important addition will require studying interconnected networks ofneurons in AI states, which constitutes a natural extension of the modelingeffort examined in this letter.

Appendix A: Mean Activity and Covariance Matrix Differential Equations

In this appendix, we compute the set of differential equations for the mean activity and the covariance matrix from the master equation. For binned activity defined on the time interval T, the central limit theorem allows one to stop the statistical moment hierarchy at the second order. In other words, we consider a gaussian approximation of the stochastic process. We denote the mean activity $\langle m_\mu \rangle$, so that

$$
\begin{aligned}
\partial_t \langle m_\mu \rangle &= \partial_t \prod_{\alpha=1,\dots,K} \int_0^{1/T} dm_\alpha \, m_\mu \, P_t(\{m_\gamma\}) \\
&= \prod_{\alpha=1,\dots,K} \int_0^{1/T} dm_\alpha \, m_\mu \, \partial_t P_t(\{m_\gamma\}) \\
&= \prod_{\alpha=1,\dots,K} \int_0^{1/T} dm_\alpha \prod_{\beta=1,\dots,K} \int_0^{1/T} dm'_\beta \, \big( m_\mu P_t(\{m'_\gamma\}) W(\{m_\gamma\} \,|\, \{m'_\gamma\}) - m_\mu P_t(\{m_\gamma\}) W(\{m'_\gamma\} \,|\, \{m_\gamma\}) \big) \\
&= \prod_{\alpha=1,\dots,K} \int_0^{1/T} dm_\alpha \prod_{\beta=1,\dots,K} \int_0^{1/T} dm'_\beta \, (m'_\mu - m_\mu) \, W(\{m'_\gamma\} \,|\, \{m_\gamma\}) \, P_t(\{m_\gamma\}) \\
&= \prod_{\alpha=1,\dots,K} \int_0^{1/T} dm_\alpha \, a_\mu(\{m_\gamma\}) \, P_t(\{m_\gamma\}) \\
&= \langle a_\mu(\{m_\gamma\}) \rangle,
\end{aligned} \tag{A.1}
$$

with

$$
a_\mu(\{m_\gamma\}) = \prod_{\beta=1,\dots,K} \int_0^{1/T} dm'_\beta \, (m'_\mu - m_\mu) \, W(\{m'_\gamma\} \,|\, \{m_\gamma\}).
$$


To obtain the first-order equation, we expand this function around each population's mean activity value to the second order,

$$
a_\mu(\{m_\gamma\}) = a_\mu(\{\langle m_\gamma \rangle\}) + \partial_\lambda a_\mu(\{\langle m_\gamma \rangle\}) \, (m_\lambda - \langle m_\lambda \rangle) + \tfrac{1}{2} \, \partial_\lambda \partial_\eta a_\mu(\{\langle m_\gamma \rangle\}) \, (m_\lambda - \langle m_\lambda \rangle)(m_\eta - \langle m_\eta \rangle) + O(\delta m^3),
$$

which becomes, after averaging,

$$
\langle a_\mu(\{m_\gamma\}) \rangle = a_\mu(\{\langle m_\gamma \rangle\}) + \tfrac{1}{2} \, \partial_\lambda \partial_\eta a_\mu(\{\langle m_\gamma \rangle\}) \, c_{\lambda\eta},
$$

where we have introduced the covariance matrix $c_{\lambda\eta} = \langle (m_\lambda - \langle m_\lambda \rangle)(m_\eta - \langle m_\eta \rangle) \rangle$ containing the second-order moments. This gives the first-order set of equations:

$$
\begin{aligned}
\partial_t \langle m_\mu \rangle &= a_\mu(\{\langle m_\gamma \rangle\}) + \tfrac{1}{2} \, \partial_\lambda \partial_\eta a_\mu(\{\langle m_\gamma \rangle\}) \, c_{\lambda\eta} \\
a_\mu(\{m_\gamma\}) &= \prod_{\alpha=1,\dots,K} \int_0^{1/T} dm'_\alpha \, (m'_\mu - m_\mu) \, W(\{m'_\gamma\} \,|\, \{m_\gamma\}).
\end{aligned} \tag{A.2}
$$

To compute the second-order equations, we proceed as before:

$$
\begin{aligned}
\partial_t c_{\mu\nu} &= \partial_t \langle (m_\mu - \langle m_\mu \rangle)(m_\nu - \langle m_\nu \rangle) \rangle = \partial_t \langle m_\mu m_\nu \rangle - \partial_t \big( \langle m_\mu \rangle \langle m_\nu \rangle \big) \\
&= \partial_t \langle m_\mu m_\nu \rangle - \langle m_\nu \rangle \, \partial_t \langle m_\mu \rangle - \langle m_\mu \rangle \, \partial_t \langle m_\nu \rangle.
\end{aligned} \tag{A.3}
$$

Following the previous equations, we can write,

$$
\partial_t \langle m_\mu m_\nu \rangle = \prod_{\alpha=1,\dots,K} \int_0^{1/T} dm_\alpha \prod_{\beta=1,\dots,K} \int_0^{1/T} dm'_\beta \, (m'_\mu m'_\nu - m_\mu m_\nu) \, W(\{m'_\gamma\} \,|\, \{m_\gamma\}) \, P_t(\{m_\gamma\}).
$$

Using the relation

$$
m'_\mu m'_\nu - m_\mu m_\nu = (m'_\mu - m_\mu)(m'_\nu - m_\nu) + m_\nu (m'_\mu - m_\mu) + m_\mu (m'_\nu - m_\nu),
$$

we have

$$
\partial_t \langle m_\mu m_\nu \rangle = \langle a_{\mu\nu}(\{m_\gamma\}) \rangle + \langle m_\nu \, a_\mu(\{m_\gamma\}) \rangle + \langle m_\mu \, a_\nu(\{m_\gamma\}) \rangle \tag{A.4}
$$


with

$$
a_{\mu\nu}(\{m_\gamma\}) = \prod_{\beta=1,\dots,K} \int_0^{1/T} dm'_\beta \, (m'_\mu - m_\mu)(m'_\nu - m_\nu) \, W(\{m'_\gamma\} \,|\, \{m_\gamma\}).
$$

Inserting equation A.4 into A.3, and using A.1, we get

$$
\begin{aligned}
\partial_t c_{\mu\nu} &= \langle a_{\mu\nu}(\{m_\gamma\}) \rangle + \langle m_\nu \, a_\mu(\{m_\gamma\}) \rangle + \langle m_\mu \, a_\nu(\{m_\gamma\}) \rangle - \langle m_\nu \rangle \, \partial_t \langle m_\mu \rangle - \langle m_\mu \rangle \, \partial_t \langle m_\nu \rangle \\
&= \langle a_{\mu\nu}(\{m_\gamma\}) \rangle + \langle a_\mu(\{m_\gamma\}) \cdot (m_\nu - \langle m_\nu \rangle) \rangle + \langle a_\nu(\{m_\gamma\}) \cdot (m_\mu - \langle m_\mu \rangle) \rangle.
\end{aligned}
$$

The first term can be expanded to the second order as we did for the first-order set of equations. Concerning the other two terms, we have

$$
\begin{aligned}
\langle a_\mu(\{m_\gamma\}) \cdot (m_\nu - \langle m_\nu \rangle) \rangle &= \big\langle (m_\nu - \langle m_\nu \rangle) \cdot \big( a_\mu(\{\langle m_\gamma \rangle\}) + \partial_\lambda a_\mu(\{\langle m_\gamma \rangle\}) \, (m_\lambda - \langle m_\lambda \rangle) \big) \big\rangle + O(\delta m^3) \\
&= \partial_\lambda a_\mu(\{\langle m_\gamma \rangle\}) \, \langle (m_\nu - \langle m_\nu \rangle)(m_\lambda - \langle m_\lambda \rangle) \rangle + O(\delta m^3) \\
&= \partial_\lambda a_\mu(\{\langle m_\gamma \rangle\}) \, c_{\nu\lambda} + O(\delta m^3),
\end{aligned}
$$

and the same holds for the third term with $\mu$ and $\nu$ exchanged. The second-order set of equations can finally be written (van Kampen, 2003):

$$
\begin{aligned}
\partial_t c_{\mu\nu} &= a_{\mu\nu}(\{\langle m_\gamma \rangle\}) + \partial_\lambda a_\mu(\{\langle m_\gamma \rangle\}) \, c_{\nu\lambda} + \partial_\lambda a_\nu(\{\langle m_\gamma \rangle\}) \, c_{\mu\lambda} \\
a_{\mu\nu}(\{m_\gamma\}) &= \prod_{\alpha=1,\dots,K} \int_0^{1/T} dm'_\alpha \, (m'_\mu - m_\mu)(m'_\nu - m_\nu) \, W(\{m'_\gamma\} \,|\, \{m_\gamma\}).
\end{aligned} \tag{A.5}
$$
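To make the closed system concrete, the following sketch (an outline under our own naming conventions, not the authors' implementation) advances equations A.2 and A.5 by one forward Euler step, once the drift $a_\mu$, the diffusion $a_{\mu\nu}$, and their derivatives with respect to the activities have been built from the chosen transfer function.

import numpy as np

def step(m, c, a, a2, da, d2a, dt):
    """One Euler step of equations A.2 (means) and A.5 (covariances).

    m   : (K,) mean activities <m_mu>
    c   : (K, K) covariance matrix c_{mu nu} (kept symmetric)
    a   : callable, a(m) -> (K,) drift a_mu
    a2  : callable, a2(m) -> (K, K) diffusion a_{mu nu}
    da  : callable, da(m) -> (K, K) Jacobian da[mu, lam] = d a_mu / d m_lam
    d2a : callable, d2a(m) -> (K, K, K) Hessian d2a[mu, lam, eta]
    """
    J = da(m)
    # Eq. A.2: d<m_mu>/dt = a_mu + (1/2) sum_{lam,eta} d2a_mu c_{lam eta}
    dm = a(m) + 0.5 * np.einsum('ule,le->u', d2a(m), c)
    # Eq. A.5: dc_{mu nu}/dt = a_{mu nu} + d_lam a_mu c_{nu lam}
    #                                    + d_lam a_nu c_{mu lam}
    Jc = J @ c                      # uses the symmetry of c
    dc = a2(m) + Jc + Jc.T
    return m + dt * dm, c + dt * dc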

Appendix B: Correlation Matrix Differential Equation

In this appendix, we compute the set of differential equations that describes the correlation matrix in a stationary state. The derivation is slightly different from the computation in appendix A. A similar computation has been done in Ginzburg and Sompolinsky (1994). The derivative can be written as:

$$
\begin{aligned}
\partial_\tau \mathrm{Corr}_{\mu\nu}(\tau) &= \partial_\tau \langle (m_\mu(t) - \langle m_\mu(t) \rangle)(m_\nu(t+\tau) - \langle m_\nu(t+\tau) \rangle) \rangle \\
&= \partial_\tau \big( \langle m_\mu(t) \, m_\nu(t+\tau) \rangle - \langle m_\mu(t) \rangle \langle m_\nu(t+\tau) \rangle \big).
\end{aligned} \tag{B.1}
$$


If we consider the first term of equation B.1,

$$
\begin{aligned}
\partial_\tau \langle m_\mu(t) \, m_\nu(t+\tau) \rangle &= \partial_\tau \prod_{\alpha=1,\dots,K} \int_0^{1/T} dm_\alpha \prod_{\beta=1,\dots,K} \int_0^{1/T} dm'_\beta \, m_\mu m'_\nu \, P(\{m'_\gamma\}, t+\tau \,|\, \{m_\gamma\}, t) \, P_t(\{m_\gamma\}) \\
&= \prod_{\alpha=1,\dots,K} \int_0^{1/T} dm_\alpha \, m_\mu \left[ \prod_{\beta=1,\dots,K} \int_0^{1/T} dm'_\beta \, m'_\nu \, \partial_\tau P(\{m'_\gamma\}, t+\tau \,|\, \{m_\gamma\}, t) \right] P_t(\{m_\gamma\}).
\end{aligned}
$$

The conditional probability $P(\{m'_\gamma\}, t+\tau \,|\, \{m_\gamma\}, t)$ is also a solution of the master equation, so that the term in brackets is similar to equation A.1, and we can write

$$
\partial_\tau \langle m_\mu(t) \, m_\nu(t+\tau) \rangle = \langle m_\mu(t) \, a_\nu(\{m_\gamma(t+\tau)\}) \rangle. \tag{B.2}
$$

The second term of equation B.1 is simply, since $\langle m_\mu(t) \rangle$ does not depend on $\tau$,

$$
\partial_\tau \big( \langle m_\mu(t) \rangle \langle m_\nu(t+\tau) \rangle \big) = \langle m_\mu(t) \rangle \, \partial_\tau \langle m_\nu(t+\tau) \rangle = \langle m_\mu(t) \rangle \, \langle a_\nu(\{m_\gamma(t+\tau)\}) \rangle. \tag{B.3}
$$

Combining equations B.2 and B.3 into B.1, we get

$$
\partial_\tau \mathrm{Corr}_{\mu\nu}(\tau) = \langle (m_\mu(t) - \langle m_\mu(t) \rangle) \, a_\nu(\{m_\gamma(t+\tau)\}) \rangle. \tag{B.4}
$$

As we are considering timescales beyond the decay time of the network activity correlations, we can expand $a_\nu(\{m_\gamma(t+\tau)\})$ around the mean values to first order,

$$
a_\nu(\{m_\gamma(t+\tau)\}) = a_\nu(\{\langle m_\gamma(t+\tau) \rangle\}) + \partial_\lambda a_\nu(\{\langle m_\gamma(t+\tau) \rangle\}) \, (m_\lambda(t+\tau) - \langle m_\lambda(t+\tau) \rangle) + O(\delta m^2),
$$

so that equation B.4 becomes, to second order,

$$
\partial_\tau \mathrm{Corr}_{\mu\nu}(\tau) = \partial_\lambda a_\nu(\{\langle m_\gamma(t+\tau) \rangle\}) \, \mathrm{Corr}_{\mu\lambda}(\tau). \tag{B.5}
$$

We are interested in the activity correlations in the stationary state; therefore, the mean activities $\{\langle m_\gamma \rangle\}$ no longer depend on $\tau$. Moreover, the initial conditions for those differential equations are given by the stationary values of the covariance matrix.
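In matrix form, equation B.5 is then a linear ODE with constant coefficients, so the correlation function follows in closed form. A minimal sketch, where J_stat (the Jacobian $\partial_\lambda a_\nu$ at the stationary means) and c_stat (the stationary covariance from appendix A) are placeholders computed upstream:

from scipy.linalg import expm

def correlation(tau, J_stat, c_stat):
    """Corr_{mu nu}(tau), tau >= 0, from equation B.5.

    d Corr / d tau = Corr @ J_stat.T, with Corr(0) = c_stat.
    """
    return c_stat @ expm(J_stat.T * tau)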

Appendix C: Conductance-Based Synaptic Functions for the Master Equation

In this appendix, we compute the remaining functions necessary to use the conductance-based model. The effective time constant is activity dependent in the conductance-based model, and it is given by

$$
\langle \tau^{\mathrm{eff}}_\mu \rangle = \left( \frac{1}{\tau^{\mathrm{mem}}_\mu} + \sum_{\alpha=1,\dots,K} \frac{\langle G_{\alpha\mu} \rangle}{C_\mu} \right)^{-1}.
$$

We need to compute the first and second derivatives:

$$
\partial_\lambda \langle \tau^{\mathrm{eff}}_\mu \rangle = - \langle \tau^{\mathrm{eff}}_\mu \rangle^2 \, \frac{\partial_\lambda \langle G_{\lambda\mu} \rangle}{C_\mu}, \qquad
\partial_\eta \partial_\lambda \langle \tau^{\mathrm{eff}}_\mu \rangle = 2 \, \langle \tau^{\mathrm{eff}}_\mu \rangle^3 \, \frac{\partial_\eta \langle G_{\eta\mu} \rangle}{C_\mu} \, \frac{\partial_\lambda \langle G_{\lambda\mu} \rangle}{C_\mu}.
$$

The function $\Phi_{\alpha\mu}$ depends on the chosen synapse, and we compute the corresponding derivatives for exponential synapses and α-synapses:

• Exponential synapses. The function is given by

$$
\Phi^{\mathrm{Exp}}_{\alpha\mu} = \frac{C_{\alpha\mu}}{\tau_\alpha + \langle \tau^{\mathrm{eff}}_\mu \rangle} \left( \frac{\tau_\alpha \, \Delta g_{\alpha\mu} \, \langle \tau^{\mathrm{eff}}_\mu \rangle \, (E_\alpha - \langle V_\mu \rangle)}{C_\mu} \right)^2,
$$

so that

$$
\partial_\lambda \Phi^{\mathrm{Exp}}_{\alpha\mu} = \Phi^{\mathrm{Exp}}_{\alpha\mu} \left( \partial_\lambda \langle \tau^{\mathrm{eff}}_\mu \rangle \left( \frac{2}{\langle \tau^{\mathrm{eff}}_\mu \rangle} - \frac{1}{\langle \tau^{\mathrm{eff}}_\mu \rangle + \tau_\alpha} \right) - 2 \, \frac{\partial_\lambda Q_\mu}{E_\alpha - \langle V_\mu \rangle} \right)
$$

$$
\begin{aligned}
\partial_\eta \partial_\lambda \Phi^{\mathrm{Exp}}_{\alpha\mu} = \frac{\partial_\eta \Phi_{\alpha\mu} \, \partial_\lambda \Phi_{\alpha\mu}}{\Phi_{\alpha\mu}} + \Phi_{\alpha\mu} \Bigg( &\partial_\eta \partial_\lambda \langle \tau^{\mathrm{eff}}_\mu \rangle \left( \frac{2}{\langle \tau^{\mathrm{eff}}_\mu \rangle} - \frac{1}{\langle \tau^{\mathrm{eff}}_\mu \rangle + \tau_\alpha} \right) \\
&+ \partial_\eta \langle \tau^{\mathrm{eff}}_\mu \rangle \, \partial_\lambda \langle \tau^{\mathrm{eff}}_\mu \rangle \left( \frac{1}{(\langle \tau^{\mathrm{eff}}_\mu \rangle + \tau_\alpha)^2} - \frac{2}{\langle \tau^{\mathrm{eff}}_\mu \rangle^2} \right) \\
&- 2 \, \frac{\partial_\lambda Q_\mu \, \partial_\eta Q_\mu}{(E_\alpha - \langle V_\mu \rangle)^2} - 2 \, \frac{\partial_\lambda \partial_\eta Q_\mu}{E_\alpha - \langle V_\mu \rangle} \Bigg).
\end{aligned}
$$


• α-synapses. Similarly,

$$
\Phi^{\alpha\mathrm{syn}}_{\alpha\mu} = \frac{1}{2} \, C_{\alpha\mu} \big( 2 \langle \tau^{\mathrm{eff}}_\mu \rangle + \tau_\alpha \big) \left( \frac{e \, \tau_\alpha \, \Delta g_{\alpha\mu} \, \langle \tau^{\mathrm{eff}}_\mu \rangle \, (E_\alpha - \langle V_\mu \rangle)}{C_\mu \, (\tau_\alpha + \langle \tau^{\mathrm{eff}}_\mu \rangle)} \right)^2,
$$

so that

$$
\partial_\lambda \Phi_{\gamma\mu} = 2 \, \Phi_{\gamma\mu} \left( \partial_\lambda \langle \tau^{\mathrm{eff}}_\mu \rangle \left( \frac{1}{2 \langle \tau^{\mathrm{eff}}_\mu \rangle + \tau_\gamma} + \frac{1}{\langle \tau^{\mathrm{eff}}_\mu \rangle} - \frac{1}{\langle \tau^{\mathrm{eff}}_\mu \rangle + \tau_\gamma} \right) - \frac{\partial_\lambda Q_\mu}{E_\gamma - \langle V_\mu \rangle} \right)
$$

$$
\begin{aligned}
\partial_\eta \partial_\lambda \Phi_{\gamma\mu} = \frac{\partial_\lambda \Phi_{\gamma\mu} \, \partial_\eta \Phi_{\gamma\mu}}{\Phi_{\gamma\mu}} + 2 \, \Phi_{\gamma\mu} \Bigg( &\partial_\eta \partial_\lambda \langle \tau^{\mathrm{eff}}_\mu \rangle \left( \frac{1}{2 \langle \tau^{\mathrm{eff}}_\mu \rangle + \tau_\gamma} + \frac{1}{\langle \tau^{\mathrm{eff}}_\mu \rangle} - \frac{1}{\langle \tau^{\mathrm{eff}}_\mu \rangle + \tau_\gamma} \right) \\
&+ \partial_\eta \langle \tau^{\mathrm{eff}}_\mu \rangle \, \partial_\lambda \langle \tau^{\mathrm{eff}}_\mu \rangle \left( \frac{1}{(\langle \tau^{\mathrm{eff}}_\mu \rangle + \tau_\gamma)^2} - \frac{2}{(2 \langle \tau^{\mathrm{eff}}_\mu \rangle + \tau_\gamma)^2} - \frac{1}{\langle \tau^{\mathrm{eff}}_\mu \rangle^2} \right) \\
&- \frac{\partial_\eta \partial_\lambda Q_\mu}{E_\gamma - \langle V_\mu \rangle} - \frac{\partial_\eta Q_\mu \, \partial_\lambda Q_\mu}{(E_\gamma - \langle V_\mu \rangle)^2} \Bigg).
\end{aligned}
$$
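Because such closed-form derivatives are easy to mistype, a finite-difference check is a useful safeguard. The sketch below (illustrative stand-in values; $\langle \tau^{\mathrm{eff}}_\mu \rangle$ is treated as the free variable with the voltage held fixed, so the $Q_\mu$ contributions drop out) verifies the first derivative of $\Phi^{\mathrm{Exp}}$ against a centered difference:

import numpy as np

def phi_exp(tau_eff, dg, tau_s, E, V, C_conn, Cm):
    # Reconstructed exponential-synapse function, V held fixed.
    return C_conn / (tau_s + tau_eff) * (tau_s * dg * tau_eff * (E - V) / Cm) ** 2

def dphi_dtau(tau_eff, dg, tau_s, E, V, C_conn, Cm):
    # Analytic derivative: Phi * (2/tau_eff - 1/(tau_eff + tau_s)).
    return phi_exp(tau_eff, dg, tau_s, E, V, C_conn, Cm) \
        * (2.0 / tau_eff - 1.0 / (tau_eff + tau_s))

args = (6e-9, 5e-3, 0.0, -60e-3, 400, 0.2e-9)   # dg, tau_s, E, V, C_conn, Cm
t, h = 8e-3, 1e-7
numeric = (phi_exp(t + h, *args) - phi_exp(t - h, *args)) / (2.0 * h)
assert np.isclose(numeric, dphi_dtau(t, *args), rtol=1e-4)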

Acknowledgments

The research was supported by the CNRS, ANR, and the European Community (FACETS grant FP6 15879). We thank Olivier Marre, Pierre Yger, Nicolas Brunel, and Romain Brette for fruitful discussions and comments on the manuscript.


References

Amit, D. J., & Brunel, N. (1997). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb. Cortex, 7, 237–252.

Braitenberg, V., & Schuz, A. (1998). Cortex: Statistics and geometry of neuronal connectivity (2nd ed.). Berlin: Springer-Verlag.

Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, D., Bower, J. M., et al. (2007). Simulation of networks of spiking neurons: A review of tools and strategies. J. Comput. Neurosci., 23, 349–398.

Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci., 8, 183–208.

Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput., 11, 1621–1671.

Brunel, N., & Sergi, S. (1998). Firing frequency of leaky integrate-and-fire neurons with synaptic current dynamics. J. Theor. Biol., 195, 87–95.

Crutchfield, J. P., & Kaneko, K. (1988). Are attractors relevant to turbulence? Phys. Rev. Lett., 60, 2715–2718.

DeFelipe, J., Alonso-Nanclares, L., & Arellano, J. I. (2002). Microstructure of the neocortex: Comparative aspects. J. Neurocytology, 31, 299–316.

Destexhe, A., & Contreras, D. (2006). Neuronal computations with stochastic network states. Science, 314, 85–90.

Destexhe, A., Rudolph, M., & Pare, D. (2003). The high-conductance state of neocortical neurons in vivo. Nature Reviews Neurosci., 4, 739–751.

El Boustani, S. (2006). Information transport in networks during irregular activity states (in French). Master's thesis, Ecole Normale Superieure, France.

El Boustani, S., & Destexhe, A. (2007). Mesoscopic model of balanced neuron networks using a master equation formalism. Abstract in Computation and Neural Systems 2007 Conference. Available online at http://www.cnsorg.org.

El Boustani, S., Pospischil, M., Rudolph-Lilith, M., & Destexhe, A. (2007). Activated cortical states: Experiments, analyses and models. J. Physiol. Paris, 101, 99–109.

Fourcaud, N., & Brunel, N. (2002). Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Comput., 14, 2057–2110.

Gerstner, W. (2000). Population dynamics of spiking neurons: Fast transients, asynchronous states, and locking. Neural Comput., 12, 43–89.

Ginzburg, I., & Sompolinsky, H. (1994). Theory of correlations in stochastic neural networks. Phys. Rev. E, 50, 3171–3191.

Hertz, J., Lerchner, A., & Ahmadi, M. (2004). Mean field methods for cortical network dynamics. Comput. Neurosci., 3146, 71–89.

Kuhn, A., Aertsen, A., & Rotter, S. (2004). Neuronal integration of synaptic input in the fluctuation-driven regime. J. Neurosci., 24, 2345–2356.

Kumar, A., Schrader, S., Aertsen, A., & Rotter, S. (2008). The high-conductance state of cortical networks. Neural Comput., 20, 1–43.

Latham, P. E., Richmond, B. J., Nelson, P. G., & Nirenberg, S. (2000). Intrinsic dynamics in neuronal networks. I. Theory. J. Neurophysiol., 83, 808–827.


Matsumura, M., Cope, T., & Fetz, E. E. (1988). Sustained excitatory synaptic input to motor cortex neurons in awake animals revealed by intracellular recording of membrane potentials. Exp. Brain Res., 70, 463–469.

Mehring, C., Hehl, U., Kubo, M., Diesmann, M., & Aertsen, A. (2003). Activity dynamics and propagation of synchronous spiking in locally connected random networks. Biol. Cybern., 88, 395–408.

Ohira, T., & Cowan, J. D. (1993). Master-equation approach to stochastic neurodynamics. Phys. Rev. E, 48, 2259–2266.

Plesser, H. E., & Gerstner, W. (2000). Noise in integrate-and-fire neurons: From stochastic input to escape rates. Neural Comput., 12, 367–384.

Soula, H., & Chow, C. C. (2007). Stochastic dynamics of a finite-size spiking neural network. Neural Comput., 19, 3262–3292.

Steriade, M., Timofeev, I., & Grenier, F. (2001). Natural waking and sleep states: A view from inside neocortical neurons. J. Neurophysiol., 85, 1969–1985.

Tuckwell, H. C. (1988). Introduction to theoretical neurobiology. Cambridge: Cambridge University Press.

van Kampen, N. G. (2003). Stochastic processes in physics and chemistry. Amsterdam: North-Holland.

van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274, 1724–1726.

van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Comput., 10, 1321–1371.

Vogels, T. P., & Abbott, L. F. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. J. Neurosci., 25, 10786–10795.

Received February 12, 2008; accepted May 16, 2008.