

Chapter 8

An Introduction to the Principles of Neuronal Modelling

K. A. Lindsay,∗ J. M. Ogden,∗ D. M. Halliday,∗∗ J. R. Rosenberg∗∗

Introduction

Neuronal modelling is the process by which a biological neuron is represented by a mathematical structure that incorporates its biophysical and geometrical characteristics. This structure is referred to as the mathematical model or the model of the neuron. The behavior of this representation may serve a number of purposes: for example, it may be used as the basis for estimating the biophysical parameters of real neurons or it may be used to define the computational and information processing properties of a neuron. Neuronal modelling requires not only an understanding of mathematical and computational techniques, but also an understanding of what the process of modelling entails. A general treatment of models, however, would necessarily lead to the examination of a number of philosophical questions. Here we simply discuss some aspects of modelling that in our experience have proved to be useful in the construction and application of models. These topics are not usually considered in the neurophysiological modelling literature, but an understanding of the basic assumptions of modelling, and the presumed relation between model and reality, is essential for constructive work in computational neuroscience.

The main themes of this chapter concern:

– the mathematical formulation of a model of a dendritic tree based on elementary conservation laws, leading to a new generalisation of the Rall equivalent cylinder;

– the numerical treatment of this model using traditional finite difference schemes, highlighting some previously unrecognised shortcomings of these schemes;

– the use of the finite difference representation to generate fully equivalent cables for passive dendrites of arbitrary geometry;

– procedures for generating stochastic spike trains of known statistical characteristics as well as an arbitrarily large number of stochastic spike trains with any desired correlation structure (e.g. weakly to strongly correlated);

– the introduction of a generalised model of a neuron based on the selection of designated points as opposed to compartments, and for which the Rall compartmental model, and those related to it, appear as special cases;

– an introduction to the spectral methodology for solving partial differential equations with applications to simple and branched dendritic structures, including a comparison of the numerical predictions based on the spectral technique with the results derived from the analytic solutions.

Much contemporary modelling work, leading to an improved understanding of the importance of dendrites and their functional role in shaping the behavior of neurons, developed from the pioneering studies of Wilfrid Rall (Segev, Rinzel and Shepherd, 1995). An important aspect of this work has been directed toward the estimation of membrane parameters and how these parameters might be influenced by neuron geometry (Segev et al., 1995; Rall, Burke, Holmes, Jack, Redman and Segev, 1992). This work has received a number of

∗Corresponding author: K. A. Lindsay, Department of Mathematics, University of Glasgow, Glasgow G12 8QQ

∗∗Division of Neuroscience and Biomedical Systems, University of Glasgow, Glasgow G12 8QQ


excellent and extensive reviews and is not reviewed here (Segev et al., 1995). There is also an extensive literature on the derivation of neuronal models exhibiting particular types of spike train output behavior such as bursting or periodic spike trains. The reader is referred to specific reviews of this material (e.g. Rinzel and Ermentrout, 1989; Getting, 1989; Cohen, Rossignol and Grillner, 1988). The discussion of this chapter will concentrate on the fundamental issues of modelling that are important for any application. As always, linear models of neuronal behavior play a prominent role in any discussion of neuronal behavior since they form the baseline against which nonlinear behavior is measured (Rall et al., 1992). In addition, these models have distinctive features, such as exact solutions and equivalent cables, each providing insights into neuronal function that may not be apparent in nonlinear models (e.g. Evans and Kember, 1998; Evans, Kember and Major, 1992; Evans, Major and Kember, 1995; Kember, 1995; Major, 1993; Major, Evans and Jack, 1993a; Major, Evans and Jack, 1993b; Major and Evans, 1994; Ogden, Rosenberg and Whitehead, 1999; Whitehead and Rosenberg, 1993). The numerical techniques described in this chapter, and applied to linear dendritic models, extend naturally to the nonlinear case simply because the nonlinearities in dendritic modelling occur through the dependence of current inputs on membrane potential, while the differential operator remains unchanged1. Fully nonlinear problems have nonlinear differential operators.

It is assumed that readers are familiar with elementary linear algebra and basic calculus. Indeed, readers competent in linear algebra and knowledgeable in programming and the numerical solution of systems of linear equations should, on the basis of this chapter, be able to construct their own neuronal representation of complex branching dendritic structures in a straightforward algorithmic way and also obtain solutions. In addition, an understanding of the material of this chapter should prove invaluable in making effective use of the high quality computer packages (De Schutter, 1992; Hines and Carnevale, 1997) for neuronal modelling now freely available over the web. These packages are comprehensive and cover both the linear and non-linear behavior of neurons.

A Philosophy of Modelling

Modelling is the process by which a complex phenomenon or concrete object is replaced by another entity called “the model” whose environment and operational characteristics are defined by prescribed rules, and whose behavior is taken to represent that of the concrete object or phenomenon (Regnier, 1966). The rules describing the behavior of the model are almost invariably couched in the language of mathematics. Indeed, the term “modelling” is often used interchangeably with “Mathematical Modelling”, although the underlying philosophy and procedures of modelling are not inherently mathematical.

Note that experimental data or empirical descriptions of the concrete object that result from experimental measurement are not by themselves logically coherent. The work of the theoretician is to create, beyond the empirical descriptions of the data, the theoretical representation that constitutes the model, and at the same time provides a logical structure for the data. The model may also be seen as a substitute for the real phenomenon that translates difficult questions concerning this phenomenon into easier questions concerning the model. In this sense the model facilitates one’s intuition concerning the real phenomenon. The primary purpose of modelling then becomes that of describing reality in terms of models that are inherently coherent, and thereby order and predict the behavior of the real entity based on that of the model. In order to achieve this aim, the interaction between the real entity and its model is quantified by experiment. The greater the variety of experimental conditions in which the predictions of the model are in acceptable agreement with

1 Nonlinearities in neuronal modelling merely add technical difficulties, but do not change the fundamental and conceptual issues of modelling.


reality, the stronger the conviction that the predictions of the model and the experience of reality will be in agreement in uncharted territory, although this remains to be tested. In effect, the model is used to “interpolate” reality much in the same way as a finite or discrete set of observations is used to represent a continuous process. Over the domain of experiment where the model is valid, reality is understood by the successful agreement of experimentation and model predictions. Implicit in this strategy of experimentation and model construction is the belief that reality consents to organise itself according to the same logical structure as that of the model (Regnier, 1966, 1974).

Without doubt the success of contemporary science and technology is founded on the quality with which the existing models and paradigms explain observations. Some models are so effective that the boundary between model characteristics and reality becomes sufficiently blurred that the user perceives the model to be reality. For example, our experience suggests that a surprisingly large fraction of those versed in mechanics actually believe that the motion of an automobile is controlled by Newton’s Laws, albeit with the embellishments afforded by air resistance and the like!

The Modelling Cycle

Modelling is often perceived as a vague or woolly process shrouded in some kind of mystique. In fact, modelling is a cyclical process with well defined stages that must be pursued in a prescribed order, irrespective of the complexity of the entity to be modelled. The initial stage of model development must specify aims and quantifiable criteria against which the predictions of the model are to be tested. Once done, the next stage of the modelling procedure requires the specification of model parameters and variables, including assumptions and the limitations that are incumbent on the model as a result of these assumptions. For example, a discretised model of a dendritic tree based on cylinders cannot be associated with a unique dendritic tree, but rather is associated with a continuum of trees that are geometrically/electrically “close”: a “fuzzy” dendrite. Since a number of different trees are represented by the same model, decisions on discretisation are not independent of the specification of aims and objective criteria against which the model is to be validated.

Model variables and parameters, once defined with their associated limiting assumptions, are now connected by rules/principles that have proved to be successful in similar applications. These either express conservation of model properties (conservation laws, e.g. conservation of electrical charge) or specify relationships between model variables (constitutive laws, e.g. Ohm’s law connecting current through a resistive medium and potential difference). The electrical behavior of a dendritic tree is fundamentally controlled by the conservation of charge. This is identified as a conservation law because its validity is independent of circumstance: the principle of conservation of charge is universally valid. By contrast, constitutive laws define the properties of the dendritic material by postulating relations connecting measurable variables such as current and potential difference. For example, dendritic current flow is typically assumed to obey Ohm’s law, that is, the dendritic current is taken to be a linear function of potential gradient2. Similarly, membrane leakage currents obey Ohm’s law.

Once the conservation and constitutive laws are satisfied, the model is defined and the implications of these laws can now be pursued using mathematical methods. However, answers must always be interpreted in the light of the modelling assumptions and limitations that are implicit in the construction of the model. In particular, manifestations of the model should not be confused with properties of the real entity. For example, it is known that passive multi-cylinder models of dendritic trees can be transformed into equivalent cables which may contain disconnected sections (Ogden et al., 1999; Whitehead and Rosenberg,

2 Not all dendritic current flow is Ohmic. For example, Hodgkin-Huxley currents are nonlinear functions of voltage.


1993). Are we therefore to believe that real dendritic trees have regions that are physically disconnected from the soma? Of course not; this would be an inappropriate interpretation of the mathematical result. Rather, the model suggests that there are configurations of inputs to the dendritic tree that exert no net electrical effect at the soma. Essentially the model solution must always be appraised in the light of the modelling assumptions and limitations, and not taken at face value.

Finally, the predictions of the model must always be compared with real measurement whenever possible. If model predictions are at odds with measurement, the modelling cycle should be re-entered, entailing possible adjustments to the choice of variables and parameters, revision of the aims and objective criteria through which the model is to be accredited, and a potential re-appraisal of the underlying conservation and constitutive laws on which the model is based.

Formulation of the Dendritic Model

Within the context of neuronal modelling, the term “cable theory” refers to the collection of linear and non-linear models that have been used to describe the time dependent electrical activity in arbitrarily branched dendritic trees, axons and branched axonal terminals with varying degrees of biophysical realism. A historical overview of cable theory can be found in Rall (1977). The general one dimensional non-linear cable equation for a cylindrical dendritic limb of arbitrary, but “small”, taper is now formulated, the derivation being based on the conservation of electrostatic charge together with a variety of assumptions concerning the electrical properties of the biological material from which neurons are built.

A dendrite is typically modelled as an intracellular Ohmic medium enclosed by a thin highly resistive membrane which is itself immersed in a perfectly conducting fluid. The configuration is illustrated in Fig. 1. The cross-sectional dimensions of the intracellular region are sufficiently small compared with its length so that the electrostatic potential is effectively constant over the dendritic cross-section to a first approximation, and exactly constant in the idealised model. As a result, the corresponding current flow within the intracellular region is predominantly axial with negligible flow perpendicular to the axial direction, and in the idealised model is entirely axial. Importantly, this means that axial length along the dendritic core and axial length along the dendritic membrane surface are identical to the level of resolution afforded by the one dimensional model, unlike a full three dimensional model where surface arc length must be distinguished from axial distance. This is the criterion for a “small” taper. In a full three dimensional model, surface current densities are applied over the membrane surface, which may be parameterised in terms of the axial coordinate x, but is regarded as locally conical and not locally cylindrical. This departure from locally cylindrical geometry now inevitably generates significant non-axial currents3.

The mathematical formulation of the cable equation presented in this article will ignore potential variations and current flow across the dendritic cross-section. This model of current flow in dendrites may therefore be regarded as but the leading term in the asymptotic expansion of the full dendritic potential. In fact, a simplistic nondimensionalisation of the full three dimensional cable equation suggests that the fine structure in the membrane potential arising from non-axial current flow is of the order of the ratio of the cross-sectional area of the dendrite to its length squared. This may be as small as 10−6. Formal analyses of three dimensional current flow in the intracellular region for dendrites of right circular cross-section have been given by Rall (1969), for striated muscle fibers by Falk and

3 The distinction between dx, the differential of axial length, and ds, the differential of axial length along the dendritic membrane surface, is essential in, for example, the theory of elasticity.


Fig. 1. Diagrammatic representation of a dendritic segment of length (x2 − x1) illustrating axial current flow J(x1, t)A(x1) into the segment and axial current flow J(x2, t)A(x2) out of the segment, where J(x, t) is the axial current density (rate of flow of charge per unit area) and A(x) is the cross-sectional area. The perimeter of the segment is P(x). Note that both A and P are generally functions of the axial coordinate x. Here JIC represents injected current (exogenous), JSC represents synaptic current and JIVDC represents the sum of intrinsic and voltage dependent currents. Both JSC and JIVDC are given by constitutive formulae. Coordinate x measures length along the dendrite, increasing away from the soma.

Fatt (1964) and considered in general by Eisenberg and Johnson (1970). Their conclusions reinforced the validity of the one dimensional model of a dendrite.
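The order-of-magnitude estimate quoted above can be checked with a few lines of arithmetic. The sketch below uses illustrative dendritic dimensions (a 1 μm radius and a 1 mm length, both assumptions rather than values from this chapter) to evaluate the ratio of cross-sectional area to length squared:

```python
import math

# Smallness parameter behind the one dimensional cable model: the ratio of
# the dendritic cross-sectional area to the square of the dendritic length.
# Both dimensions below are illustrative assumptions.
radius = 1e-6    # dendritic radius in metres (assumed: 1 micron)
length = 1e-3    # dendritic length in metres (assumed: 1 mm)

area = math.pi * radius ** 2    # cross-sectional area A
ratio = area / length ** 2      # A / L^2

print(f"A / L^2 = {ratio:.2e}")   # -> A / L^2 = 3.14e-06
```

For these (assumed) dimensions the ratio is a few times 10−6, consistent with the estimate in the text.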

Model of a Dendritic Cylinder

The construction of the model dendrite requires a mathematical representation of the intracellular dendritic core, the dendritic membrane and the surrounding extracellular medium, together with descriptions of the soma of the cell, synaptic inputs onto dendrites and boundary conditions at dendritic terminals. At internal branch points, the membrane potential is continuous and current is conserved. The objective of the modelling procedure is to provide a quantitative description of the behavior of the membrane potential at the soma (and by default, elsewhere) as it is shaped by the time and spatial characteristics of injected currents as well as synaptic inputs. The model therefore provides the basis for estimating membrane parameters and describes how different distributions of synaptic inputs and different dendritic geometries affect the potential at the soma or elsewhere. The behavior of the potential at the soma will determine the temporal properties of the output spike train from the neuron4. The mathematical representation of each component part of the model is now presented.

Definitions and Conventions

Let coordinate x be axial distance along a dendritic limb of length L and make the conventional decision that x = 0 always denotes the end of the limb closest to the soma so that x ∈ [0, L]. Charge can be associated with the highly resistive dendritic membrane (a capacitance effect). Redistribution of this charge through the membrane itself and the axial diffusion of this charge along the resistive intracellular core of the dendrite establishes a

4 Much theoretical work has been devoted to modelling spike train outputs from neurons by an integrate-to-threshold-and-fire model where the threshold crossing times of the membrane potential depend on the characteristics of the simulated spike train inputs (see Tuckwell, 1988a; Holden, 1976).


transmembrane potential difference V(x, t) between the dendrite’s core and the extracellular medium (see Fig. 1). Note that the assumption that the extracellular material is perfectly conducting means that it is at a uniform potential, which may be arbitrarily fixed at zero, and that charge distributions in the extracellular region are equilibrated instantly. Associated with the potential V(x, t) is the axial current density J(x, t), measured per unit area of the cross-section of the dendritic core. The fundamental relationship between V and J is phenomenological and defines the electrical properties of the dendritic core material. In practice, the core is widely assumed to be an ohmic conductor so that

\[
J(x,t) = -g_A \frac{\partial V(x,t)}{\partial x} \tag{8.1}
\]

where gA is the axial conductivity of the intracellular material. By convention, J is the current density in the direction of increasing x and therefore J(L, t) is current flow out of the dendritic core at x = L whereas J(0, t) is current flow from the soma or a contingent dendritic limb into the dendritic core at x = 0. By implication the current flow into the soma from a contingent dendrite is −J(0, t). In general, J is a vector although in this one dimensional application it behaves like a signed algebraic quantity. The consistent application of these conventions is essential in more complex dendritic structures, particularly when synaptic inputs and injected currents are included in the model.

For readers who are more familiar with the formulation of Ohm’s law in circuit theory, it is useful to digress for a moment and demonstrate that equation (8.1) is a more fundamental description of Ohm’s law embodying V = IR as a special case. Consider an ohmic conductor formed into a slab of thickness d and cross-sectional area A (see Fig. 2). If current I flows through this slab when potential difference V = V0 − Vd is applied across its faces as illustrated in Fig. 2, then the corresponding current density is J = I/A and the associated potential gradient satisfies

\[
J = \frac{I}{A} = -g \frac{dV}{dx}, \qquad x \in (0, d), \qquad V(0) = V_0
\]

where g (assumed constant) is the electrical conductivity of the material from which the slab is constructed. This equation has solution V(x) = −Ix/gA + V0 and the requirement that V(d) = Vd leads to the conclusion that V = (V0 − Vd) = Id/gA = IR where R is the ohmic resistance of the slab. Hence the slab behaves like an ohmic conductor of resistance5

\[
R = \frac{d}{gA} = \frac{\rho d}{A}
\]

where ρ (Ω m) is the resistivity corresponding to conductivity g.
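As a numerical sanity check of the slab result, the short sketch below (with entirely illustrative values for g, d, A, I and V0) confirms that the potential drop predicted by the solution V(x) = −Ix/gA + V0 equals IR with R = ρd/A:

```python
g = 1.0       # conductivity of the slab material in S/m (assumed)
d = 0.01      # slab thickness in metres (assumed)
A = 2e-4      # cross-sectional area in m^2 (assumed)
I = 0.01      # current through the slab in amperes (assumed)
V0 = 1.0      # potential on the face x = 0 in volts (assumed)

rho = 1.0 / g                 # resistivity corresponding to conductivity g
R = rho * d / A               # ohmic resistance R = d/gA of the slab
Vd = V0 - I * d / (g * A)     # V(d) from the solution V(x) = -Ix/gA + V0

# The potential drop across the slab obeys V0 - Vd = IR.
assert abs((V0 - Vd) - I * R) < 1e-12
print(f"R = {R:.1f} ohm, potential drop = {V0 - Vd:.2f} V")
```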

Derivation of the Cable Equation

The intracellular core of a dendrite is enclosed by a thin layer or membrane of highly resistive tissue. Although this membrane is largely impermeable, it contains a distribution of ion channels which can allow charge to move between the intracellular and extracellular regions. Following the convention of Getting (1989), the cable equation will incorporate three distinct types of current density6, namely, injected current −JIC, intrinsic and voltage-dependent currents −JIVDC and synaptic current −JSC, all measured per unit area of the dendritic membrane. In reality, intrinsic or pumping currents arise from active transport processes, while the voltage-dependent currents (of which Hodgkin-Huxley type current

5 If the slab has cross-sectional area A(x), then the resistance has the general form R = ρ ∫_0^d ds/A(s). If it is unreasonable to model current flow using formula (8.1), it is common to postulate that J = J(V − V0), in which case conductance is defined to be ∂J/∂V. Note that such conductances are dimensionally different from that arising in (8.1) but are dimensionally consistent.

6 The positive direction of radial current flow is in the direction of increasing radial coordinate, that is, outwards. Therefore current inputs from the domain exterior to the dendritic core are conventionally negative.


Fig. 2. The diagram illustrates a slab of ohmic material with thickness d and surface area A through which a uniform current I flows under a constant potential difference V = V0 − Vd.

is an example) are due to various ion channels. If JIVDC is a nonlinear function of membrane potential (i.e. non-ohmic) or synaptic currents are present, the dendritic membrane is said to be active; otherwise it is passive. Linear cable theory deals with arbitrary dendritic geometry and passive membranes.

The equation describing the evolution of the membrane potential V(x, t) is constructed from a consideration of charge conservation in the section of dendritic limb x1 ≤ x ≤ x2. Suppose that this limb has cross-sectional area A(x) with associated perimeter P(x), then the net current into the dendritic section is

\[
A(x_1)J(x_1,t) - A(x_2)J(x_2,t) - \int_{x_1}^{x_2} J_{in}(x,t)\,P(x)\,dx \tag{8.2}
\]

where

\[
J_{in}(x,t) = J_{IC}(x,t) + J_{IVDC}(x,t) + J_{SC}(x,t). \tag{8.3}
\]

By definition, Jin is just the effective current density applied per unit area of the dendritic membrane, being the sum of the injected current JIC, the synaptic current JSC and the intrinsic and voltage dependent currents JIVDC (all measured per unit area of the dendritic membrane). Notice again that equations (8.2) and (8.3) make no distinction between axial coordinate x and axial length s along the dendritic surface, despite the presence of dendritic taper. To distinguish between x and s contradicts a premise of the model since it is tantamount to assuming that current flow in the dendritic core is not axial. During the time interval [t1, t2], this net influx of current increases the charge stored in this dendritic segment by

\[
\int_{t_1}^{t_2} \Big[ A(x_1)J(x_1,t) - A(x_2)J(x_2,t) - \int_{x_1}^{x_2} J_{in}(x,t)\,P(x)\,dx \Big]\,dt. \tag{8.4}
\]

That additional charge is associated with the dendritic membrane, which is assumed to have capacitance CM per unit area. By definition, the change in stored charge over the time interval [t1, t2] is

\[
\int_{x_1}^{x_2} C_M P(x)\,V(x,t_2)\,dx - \int_{x_1}^{x_2} C_M P(x)\,V(x,t_1)\,dx. \tag{8.5}
\]


In the absence of externally injected charge during the time interval [t1, t2], conservation of charge requires that expressions (8.4) and (8.5) are equal, that is,

\[
\int_{x_1}^{x_2} C_M P(x)\big[V(x,t_2) - V(x,t_1)\big]\,dx = \int_{t_1}^{t_2} \Big[ A(x_1)J(x_1,t) - A(x_2)J(x_2,t) - \int_{x_1}^{x_2} J_{in}(x,t)\,P(x)\,dx \Big]\,dt. \tag{8.6}
\]

This fundamental equation describes the temporal and spatial evolution of the membrane potential V(x, t). It can be manipulated into a more familiar form provided V(x, t) is a sufficiently differentiable function of space and time. On dividing equation (8.6) by (t2 − t1) and taking limits as t1, t2 → t, it follows by elementary calculus that

\[
\int_{x_1}^{x_2} C_M P(x) \frac{\partial V(x,t)}{\partial t}\,dx = A(x_1)J(x_1,t) - A(x_2)J(x_2,t) - \int_{x_1}^{x_2} J_{in}(x,t)\,P(x)\,dx. \tag{8.7}
\]

A similar operation applied to the spatial components of equation (8.7) reveals that V(x, t) and J(x, t) satisfy the partial differential equation

\[
C_M \frac{\partial V(x,t)}{\partial t} = -\frac{1}{P(x)} \frac{\partial\big(A(x)J(x,t)\big)}{\partial x} - \big(J_{IC}(x,t) + J_{IVDC}(x,t) + J_{SC}(x,t)\big) \tag{8.8}
\]

where Jin has been replaced by its definition (8.3). Equation (8.8) is a consequence of charge conservation only and makes no assumptions whatsoever about the constitutive properties of the dendrite. In practice, it is almost universally assumed that the dendritic core is well modelled as an Ohmic conductor, in which case V and J are connected through Ohm’s law (8.1). In this case, the membrane potential V is seen to satisfy the partial differential equation (cable equation)

\[
C_M \frac{\partial V(x,t)}{\partial t} = \frac{1}{P(x)} \frac{\partial}{\partial x}\Big(g_A A(x) \frac{\partial V}{\partial x}\Big) - \big(J_{IC}(x,t) + J_{IVDC}(x,t) + J_{SC}(x,t)\big). \tag{8.9}
\]

To complete the specification of the problem, it is necessary to quantify the input current densities JIC, JIVDC and JSC, the initial membrane potential and the conditions to be applied at the soma, terminal boundaries and at dendritic branches.
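As a preview of the finite difference treatment named among the chapter themes, the sketch below integrates a passive form of the cable equation (8.9) on a uniform cylinder with an explicit scheme, taking J_IC = J_SC = 0 and J_IVDC = gM(V − EL). The sealed-end boundary conditions, the initial bump of potential and every parameter value are assumptions made purely for illustration, not a scheme prescribed by this chapter:

```python
import numpy as np

# Explicit finite difference sketch of the passive cable equation on a
# uniform cylinder.  All parameter values are illustrative assumptions.
CM, gA, gM, EL = 1e-2, 1.0, 1.0, 0.0   # membrane capacitance, axial and
                                       # membrane conductances, rest potential
radius, L = 1e-3, 1.0                  # cylinder radius and length (assumed)
A = np.pi * radius ** 2                # cross-sectional area (constant here)
P = 2 * np.pi * radius                 # perimeter (constant here)

N = 101
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.2 * CM * P * dx ** 2 / (gA * A)   # conservative explicit time step

V = EL + np.exp(-((x - L / 2) / 0.1) ** 2)   # initial bump of potential

for _ in range(2000):
    lap = np.zeros_like(V)
    lap[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx ** 2
    # sealed ends: zero axial current, i.e. dV/dx = 0 at x = 0 and x = L
    lap[0] = 2 * (V[1] - V[0]) / dx ** 2
    lap[-1] = 2 * (V[-2] - V[-1]) / dx ** 2
    V = V + dt * ((gA * A / (CM * P)) * lap - (gM / CM) * (V - EL))

print(f"peak deviation from rest after decay: {np.abs(V - EL).max():.2e}")
```

With no input currents the bump spreads axially and leaks through the membrane, so the potential relaxes toward the resting value EL everywhere, as the passive theory predicts.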

Specification of Currents

As already seen in expression (8.3), the input current density Jin(x, t) consists of three philosophically different contributions. The first component JIC is exogenous and describes current injected into the dendritic core from outside. The second and third components of Jin are constitutive in nature and quantify respectively the properties of the ion channels within the dendritic membrane in terms of their reaction to membrane potential and the presence of synaptic inputs from other neurons.

Typically JIVDC, the second component of Jin, is a constitutive function of membrane potential V with the property that JIVDC = 0 when V = EL, the resting membrane potential. We assume the existence of EL, the potential at which pumping and ion channel currents are balanced. Without loss of generality, the existence of the resting potential indicates that JIVDC = C(V) − C(EL) where C is a constitutive function of membrane potential. Assuming that C is a suitably differentiable function of V, then the mean value theorem states that

\[
J_{IVDC} = \frac{dC(V^*)}{dV}(V - E_L) = g(V^*)(V - E_L) \tag{8.10}
\]


where V* = V*(V, EL) is a potential between V and EL and g(V*) may be considered to be a nonlinear conductance. Similarly, the mean value theorem applied to g(V*) yields

\[
g(V^*) = g(E_L) + \frac{dg(V^{**})}{dV^*}(V^* - E_L) = g_M + g_{NL}(V) \tag{8.11}
\]

where gM = g(EL) is the passive membrane conductance, V** = V**(V*, EL) is a potential between V* and EL (and consequently between V and EL) and gNL defines a voltage dependent conductance from equation (8.11). Note that gNL is implicitly a function of V and is zero when V = EL. Assembling results (8.10, 8.11) together gives

\[
J_{IVDC} = g(E_L)(V - E_L) + g_{NL}(V - E_L) = g_M(V - E_L) + g_{NL}(V - E_L). \tag{8.12}
\]

Therefore, for small deviations in potential from the resting potential, $J_{IVDC}$ will be dominated by the linear term $g_M(V - E_L)$ and the dendrite is said to be passive. As the membrane potential increasingly deviates from its resting value, $g_{NL}$ may assume larger and larger values and may eventually dominate the membrane behavior. Once $g_{NL}$ is considered to be significantly different from zero, the membrane is said to be active. The form of $g_{NL}$ is usually determined experimentally7.
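The decomposition $J_{IVDC} = (g_M + g_{NL}(V))(V - E_L)$ is easy to check numerically. The sketch below assumes a hypothetical cubic constitutive function C(V) — it is purely illustrative and not taken from the chapter — and recovers $g_M$ as dC/dV at rest, with $g_{NL}$ as the nonlinear residual:

```python
E_L = -65.0e-3   # resting potential (V)

def C(V):
    # Hypothetical cubic constitutive function (illustrative only, not from
    # the chapter): linear part with slope 5 S/m^2 plus a cubic nonlinearity.
    return 5.0 * (V - E_L) + 2.0e3 * (V - E_L) ** 3

def J_IVDC(V):
    # Intrinsic voltage-dependent current density, J_IVDC = C(V) - C(E_L).
    return C(V) - C(E_L)

# Passive conductance g_M = dC/dV at V = E_L (central difference estimate).
h = 1.0e-9
g_M = (C(E_L + h) - C(E_L - h)) / (2.0 * h)

def g_NL(V):
    # Nonlinear residual so that J_IVDC = (g_M + g_NL(V)) * (V - E_L).
    return J_IVDC(V) / (V - E_L) - g_M

print(g_M)                      # ~5.0: the linear term dominates near rest
print(g_NL(E_L + 1.0e-3))       # tiny for a 1 mV deviation
print(g_NL(E_L + 50.0e-3))      # comparable to g_M for a 50 mV deviation
```

For small deviations the residual conductance is negligible (the membrane looks passive); for large ones it rivals the passive term, which is exactly the passive/active distinction drawn above.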

The third component of $J_{in}$ is due to synaptic current input to the dendritic membrane. Synaptic inputs temporarily open ion channels across the dendritic membrane, allowing movement of charge in sympathy with the prevailing potential difference. The opening and closing of these channels is commonly modelled by a time-varying conductivity, while the actual current flow is assumed to be ohmic. Suppose that the channels are located at sites8 $x = x_k$, $k = 1, 2, \ldots, N$; then the synaptic current density $J_{SC}(x,t)$ has the general form

$$J_{SC}(x,t) = \sum_{j=1}^{\infty}\sum_{k=1}^{N}\sum_{\alpha} g^{\alpha}_{syn}(t - t^{\alpha}_{kj})\bigl(V(x_k,t) - E_{\alpha}\bigr)\,\delta(x - x_k) \qquad (8.13)$$

where $t^{\alpha}_{k1}, t^{\alpha}_{k2}, \ldots$ are the (stochastic) times at which synapse k associated with ionic species α becomes active, $g^{\alpha}_{syn}(t)$ models the conductance, $E_{\alpha}$ is the reversal potential associated with this species, and $\delta(x - x_k)$ is the Dirac delta function at $x = x_k$. The function $g^{\alpha}_{syn}$ is typically modelled by the so-called "alpha" function

$$g^{\alpha}_{syn}(t) = \begin{cases} 0 & t \le 0 \\[4pt] \dfrac{G_{\alpha}\,t}{T_{\alpha}}\,e^{(1 - t/T_{\alpha})} & t > 0 \end{cases} \qquad (8.14)$$

where $G_{\alpha}$ is the maximum conductance and $T_{\alpha}$ is the time constant associated with species α (Getting, 1989).
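The alpha function of equation (8.14) is straightforward to implement; a minimal sketch, with illustrative (not chapter-specified) peak conductance and time constant:

```python
import math

def alpha_conductance(t, G, T):
    # Alpha-function synaptic conductance of equation (8.14):
    # zero for t <= 0, (G * t / T) * exp(1 - t / T) for t > 0.
    if t <= 0.0:
        return 0.0
    return (G * t / T) * math.exp(1.0 - t / T)

# Illustrative values (assumed, not from the chapter): 1.5 nS peak, 2 ms rise.
G_max, T_alpha = 1.5e-9, 2.0e-3
print(alpha_conductance(T_alpha, G_max, T_alpha))  # peak value equals G_max
```

The conductance is zero for t ≤ 0, rises to its maximum $G_{\alpha}$ exactly at $t = T_{\alpha}$, and decays exponentially thereafter.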

In conclusion, the cable equation for a dendritic limb has the final form

$$C_M\,\frac{\partial V(x,t)}{\partial t} + g_M(V - E_L) = \frac{1}{P(x)}\,\frac{\partial}{\partial x}\left(g_A A(x)\,\frac{\partial V}{\partial x}\right) - J_{extra} \qquad (8.15)$$

where $J_{extra}$ is a collective current density which includes exogenous currents, synaptic currents and the nonlinear components of the intrinsic voltage-dependent currents, that is,

$$J_{extra} = J_{IC}(x,t) + J_{SC}(x,t) + g_{NL}(V - E_L) \qquad (8.16)$$

in which $g_{NL}$ is a voltage-dependent conductance and $J_{SC}$, $J_{IC}$ have their defined meanings. In the absence of exogenous input current, synaptic current activity and nonlinear membrane current flow, $J_{extra} = 0$.

7 For example, the Hodgkin-Huxley equations provide a particular form for a nonlinear conductance.

8 Channels associated with different ionic species may be located at the same axial coordinate $x_k$, but clearly not on the same area of the membrane. The synaptic current at coordinate $x_k$ is just the sum of the synaptic currents due to all the species at $x_k$.


Initial and Boundary Conditions

To obtain specific solutions of the cable equation (8.15), it is necessary to supply V(x, 0), the initial value of the membrane potential, and the boundary conditions to be satisfied at dendritic terminals, at the soma and at dendritic branch points (where appropriate).

Consider a dendritic terminal at membrane potential V(L, t) leaking charge to the extracellular environment at potential $V_{ex}$ (already taken to be zero) and into which current $-I_{end}$ is injected. Since the axial coordinate x is conventionally chosen to increase away from the soma, the net outflow of current at this terminal is $A(L)J(L,t) - I_{end}$. This outflow is due to the potential difference $V(L,t) - V_{ex}$ between the dendritic tip and the extracellular region. Assuming that the flow is ohmic, the terminal boundary condition has the form

$$A(L)\,J(L,t) - I_{end} = g_L\,A(L)\bigl(V(L,t) - V_{ex}\bigr) \qquad (8.17)$$

where $g_L$ is the leakage conductance at the dendritic terminal. When J is replaced by expression (8.1), the membrane potential V(L, t) is seen to satisfy the Robin condition

$$g_A\,\frac{\partial V(L,t)}{\partial x} + g_L\bigl(V(L,t) - V_{ex}\bigr) = -\frac{I_{end}}{A(L)}. \qquad (8.18)$$

Several common boundary conditions are now considered in detail.

Current injection condition. When $g_L = 0$, boundary condition (8.18) reduces to the current injection condition

$$A(L)\,g_A\,\frac{\partial V(L,t)}{\partial x} = -I_{end}.$$

A sealed end is the special case of this condition in which $I_{end} = 0$; that is, there is no injected current at x = L and the region between the dendrite and the extracellular material is perfectly insulating. The sealed end boundary condition is therefore

$$\frac{\partial V(L,t)}{\partial x} = 0. \qquad (8.19)$$

This condition is often taken as the natural terminal condition for dendritic tips.

General voltage condition. A general voltage condition may be applied to a dendritic terminal whenever the region between the dendritic terminal and the extracellular material is perfectly conducting. Essentially $g_L$ is infinite in equation (8.18), so that

$$V(L,t) = V_{ex}(t). \qquad (8.20)$$

A dendritic tip is said to be a cut end whenever $V_{ex}(t) = 0$.

Dendritic branch points. At a dendritic branch point, the membrane potential on each limb must be continuous and the sum of currents into the branch point must total zero. Suppose that current $-I_{BP}$ is injected into a branch point; then the net flow of charge into the branch point is

$$-I_{BP} + A^{(p)}(L^{(p)})\,J^{(p)}(L^{(p)},t) - \sum_{k=1}^{n} A^{(c_k)}(0)\,J^{(c_k)}(0,t)$$

where $\varphi^{(p)}$ is the value of $\varphi$ on the parent limb and $\varphi^{(c_k)}$ ($1 \le k \le n$) is the value of $\varphi$ on the kth child dendrite. Voltage continuity and current conservation at the branch point now require that

$$V^{(p)}(L^{(p)},t) = V^{(c_1)}(0,t) = \cdots = V^{(c_n)}(0,t), \qquad (8.21)$$

$$-I_{BP} - A^{(p)}(L^{(p)})\,g_A^{(p)}\,\frac{\partial V^{(p)}(L^{(p)},t)}{\partial x} + \sum_{k=1}^{n} A^{(c_k)}(0)\,g_A^{(c_k)}\,\frac{\partial V^{(c_k)}(0,t)}{\partial x} = 0, \qquad (8.22)$$


where the current densities $J^{(p)}$ and $J^{(c_k)}$ have been replaced by potential gradients from Ohm's law.

Somatic boundary condition. Somatic and terminal boundary conditions differ in the respect that charge can reside on the soma membrane surface; that is, the soma exhibits capacitance by virtue of its lumped geometrical construction. Suppose that m dendrites are connected to the soma and that $-I_S$ is the current entering the soma across its membrane surface; then the total rate of supply of charge to the soma is

$$-I_S - \sum_{i=1}^{m} A_i(0)\,J_i(0,t) = -A_S J_{soma} + \sum_{i=1}^{m} A_i(0)\,g_A\,\frac{\partial V_i(0,t)}{\partial x},$$

where $A_i(0)$ is the cross-sectional area of limb i, and it is assumed without significant loss of generality that $g_A$ is the same for each dendrite. Continuity of potential at the soma-to-tree connection requires that $V_S(t) = V_1(0,t) = V_2(0,t) = \cdots = V_m(0,t)$. Since the rate of increase in somal charge is just $C_S\,dV_S(t)/dt$, where $C_S$ is the capacitance of the soma, conservation of charge requires that $V_S(t)$ satisfies the ordinary differential equation

$$C_S\,\frac{dV_S(t)}{dt} = -I_S + \sum_{i=1}^{m} A_i(0)\,g_A\,\frac{\partial V_i(0,t)}{\partial x}. \qquad (8.23)$$

In common with dendritic membranes, the input current $I_S$ is the sum of exogenous current injection $I_{SIC}$, possible synaptic current activity $I_{SSC}$ (which may be instrumental in discharging the soma) and intrinsic voltage-dependent currents $I_{SIVDC}$, modelled as the sum of an ohmic leakage current $A_S g_S (V_S(t) - E_L)$ and a nonlinear voltage-dependent current $A_S g_{SNL}(V_S)(V_S - E_L)$, where $A_S$ is the surface area of the soma. The synaptic and intrinsic voltage-dependent currents in this case model charge movement across the somal membrane due to the presence of ion channels. After minimal algebra, the somal boundary condition has the final form

$$C_S\,\frac{dV_S(t)}{dt} + A_S g_S (V_S(t) - E_L) = -I_{extra} + \sum_{i=1}^{m} A_i(0)\,g_A\,\frac{\partial V_i(0,t)}{\partial x}. \qquad (8.24)$$

Here $I_{extra}$ is the sum of the exogenous input current $I_{SIC}$, the synaptic current activity $I_{SSC}$ and the nonlinear voltage-dependent current $A_S g_{SNL}(V_S)(V_S - E_L)$ across the somal membrane. Note that $g_A$ has dimension $\Omega^{-1}\mathrm{m}^{-1}$ but $g_S$ has dimension $\Omega^{-1}\mathrm{m}^{-2}$.
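For an isolated soma with no dendrites attached and $I_{extra} = 0$, equation (8.24) reduces to a linear ODE whose solution relaxes exponentially to $E_L$ with time constant $C_S/(A_S g_S)$. A sketch with illustrative (assumed) parameter values, comparing the exact solution against a forward-Euler integration:

```python
import math

# Isolated soma: with no dendrites attached and I_extra = 0, equation (8.24)
# reduces to C_S dV_S/dt + A_S g_S (V_S - E_L) = 0.  All values illustrative.
C_S = 100.0e-12     # somal capacitance (F)
A_S = 1.0e-8        # somal surface area (m^2)
g_S = 10.0          # somal membrane conductance (S/m^2)
E_L = -65.0e-3      # resting potential (V)

tau_S = C_S / (A_S * g_S)    # somal time constant (1 ms for these values)

def V_soma(t, V0):
    # Exact solution: exponential relaxation from V0 to E_L.
    return E_L + (V0 - E_L) * math.exp(-t / tau_S)

# Forward-Euler integration of the same ODE over one time constant.
V, dt = -55.0e-3, 1.0e-6
for _ in range(1000):
    V += dt * (-(A_S * g_S) * (V - E_L)) / C_S
print(abs(V - V_soma(1.0e-3, -55.0e-3)))   # small discretisation error
```

The lumped soma behaves as a simple RC circuit; the dendritic gradient terms in (8.24) couple this circuit to the tree.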

Cable Equation for Uniform Dendrites

For non-tapering dendrites, the cross-sectional area A(x) and the perimeter P(x) are constant. The simplified form of the cable equation (8.15) is now

$$\frac{C_M}{g_M}\,\frac{\partial V}{\partial t} + (V - E_L) = \frac{g_A A}{g_M P}\,\frac{\partial^2 V}{\partial x^2} - \frac{J_{extra}}{g_M} \qquad (8.25)$$

where $J_{extra}$ is defined by equation (8.16). Let time τ and length λ be defined by

$$\tau = \frac{C_M}{g_M}, \qquad \lambda^2 = \frac{g_A A}{g_M P}; \qquad (8.26)$$

then equation (8.25) becomes

$$\tau\,\frac{\partial V}{\partial t} + (V - E_L) = \lambda^2\,\frac{\partial^2 V}{\partial x^2} - \frac{J_{extra}}{g_M}. \qquad (8.27)$$


The homogeneous form of this equation corresponds to the usual cable equation, for example, Rall (1989). This dimensional equation may be non-dimensionalised by rescaling time and length by τ and λ respectively using the coordinate transformations $t^* = t/\tau$ and $x^* = x/\lambda$. In this process, the length of the dendrite is rescaled to the non-dimensional length $l = L/\lambda$ and the current density $J_{extra}$ is rescaled to a non-dimensional linear current density $J^* = \lambda P J_{extra}$. By convention, τ is the time constant and λ is the length constant of a dendritic limb. Typically τ is assumed to be constant throughout a dendritic tree (determined by electrical parameters only) while λ varies from limb to limb. The resulting nondimensional cable equation is

$$\frac{\partial V}{\partial t} + (V - E_L) = \frac{\partial^2 V}{\partial x^2} - \frac{J(x,t)}{\lambda g_M P} = \frac{\partial^2 V}{\partial x^2} - \frac{J(x,t)}{\sqrt{g_A g_M A P}}, \qquad x \in (0, l) \qquad (8.28)$$

where x, t and J are nondimensional in this equation, although the superscript * has been suppressed and will be suppressed in all future calculations for representational convenience. Non-dimensional length measurements are commonly referred to as electrotonic units, and J in equation (8.28) is the electrotonic current density of input currents. The total axial current $I_A$ in a dendritic limb is now

$$I_A = A J_A = -\frac{A g_A}{\lambda}\,\frac{\partial V}{\partial x} = -\sqrt{g_M g_A A P}\;\frac{\partial V}{\partial x}. \qquad (8.29)$$

Since $(g_M g_A A P)^{1/2}$ has dimension $\Omega^{-1}$ (the dimension of conductance), it will henceforth be called the g-value of the uniform dendrite. The final non-dimensional forms of the cable equation and axial dendritic current are respectively

$$\frac{\partial V}{\partial t} + (V - E_L) = \frac{\partial^2 V}{\partial x^2} - \frac{J}{g}, \qquad I_A = -g\,\frac{\partial V}{\partial x}, \qquad x \in (0, l) \qquad (8.30)$$

where g, the g-value of the cable, is given by the expression

$$g = \sqrt{g_A g_M A P}. \qquad (8.31)$$

For example, a uniform dendrite with circular cross-section of diameter d has electrotonic scaling factor $\lambda = \sqrt{g_A d / 4 g_M}$ in equation (8.26). The expression for axial current is

$$I_A = -\frac{\pi}{2}\,\sqrt{g_M g_A}\;d^{3/2}\,\frac{\partial V}{\partial x} = -\frac{\pi}{2}\,c\,\sqrt{g_M g_A}\,\frac{\partial V}{\partial x} \qquad (8.32)$$

where $c = d^{3/2}$ is usually called the c-value of the cable. As a general remark, the ratio of g-value to c-value (i.e. g/c) for any limb of a dendrite is universally constant whenever dendritic electrical properties are uniform everywhere.
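The constancy of the g-to-c ratio for uniform electrical properties follows directly from (8.31) and (8.32), and can be sketched as follows (the conductivity values are illustrative, not taken from the chapter):

```python
import math

def g_value(gA, gM, d):
    # g-value of a uniform circular cable, g = sqrt(gA*gM*A*P) (eq. 8.31)
    # with A = pi d^2/4 and P = pi d, i.e. g = (pi/2) sqrt(gM gA) d^(3/2).
    A = math.pi * d ** 2 / 4.0
    P = math.pi * d
    return math.sqrt(gA * gM * A * P)

def c_value(d):
    # c-value of the cable, c = d^(3/2).
    return d ** 1.5

# Illustrative conductivities (assumed): gA = 1 S/m, gM = 10 S/m^2.
gA, gM = 1.0, 10.0
for d in (1.0e-6, 2.0e-6, 4.0e-6):
    print(g_value(gA, gM, d) / c_value(d))   # same value for every diameter
```

Every diameter yields the same ratio, $(\pi/2)\sqrt{g_M g_A}$, confirming the closing remark above.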

Rall Equivalent Cylinder

The considerations of the previous sections lead naturally to the development of the Rall equivalent cylinder (Rall, 1962a). Consider a dendritic branch point comprising a parent dendrite and N child dendrites, all of which have constant values for $g_M$, $g_A$ and the same time constant τ. Suppose further that each child dendrite has uniform cross-section (not all necessarily equal), identical electrotonic length l (each limb will likely have a different physical length) and identical terminal boundary condition. The membrane potential for branch k ($1 \le k \le N$) satisfies the cable equation

$$\frac{\partial V_k}{\partial t} + (V_k - E_L) = \frac{\partial^2 V_k}{\partial x^2} - \frac{J_k}{g^{(k)}}, \qquad x \in (0, l), \qquad (8.33)$$


where it is now assumed that $J_k$ is independent of membrane potential (i.e. the dendrite is passive, so that $J_k$ is only injected current). Continuity of potential among the parent and child branches requires that

$$V_1(0,t) = V_2(0,t) = \cdots = V_N(0,t) = V^{(p)}(l^{(p)}, t) \qquad (8.34)$$

in which $V^{(p)}(l^{(p)}, t)$ is the parent membrane potential at the branch point. Equation (8.29) gives the form for the total axial current in each limb. Conservation of current requires that the total currents into the branch point sum to zero, and therefore

$$\sum_{k=1}^{N} g^{(k)}\,\frac{\partial V_k(0,t)}{\partial x} - g^{(p)}\,\frac{\partial V^{(p)}(l^{(p)},t)}{\partial x} = 0. \qquad (8.35)$$

Let the potential function ψ(x, t) and conductance $G_S$ be defined by the formulae

$$\psi(x,t) = \frac{1}{G_S}\sum_{k=1}^{N} g^{(k)} V_k(x,t), \qquad G_S = \sum_{k=1}^{N} g^{(k)}. \qquad (8.36)$$

Clearly ψ(x, t) is just a weighted sum of the membrane potentials in the child limbs. Furthermore, the continuity of potential at the branch point embodied in equation (8.34) ensures that $\psi(0,t) = V^{(p)}(l^{(p)}, t)$. The superposition property of the cable equation also guarantees that ψ satisfies the cable equation

$$\frac{\partial \psi}{\partial t} + (\psi - E_L) = \frac{\partial^2 \psi}{\partial x^2} - \frac{C(x,t)}{G_S}, \qquad C(x,t) = \sum_{k=1}^{N} J_k, \qquad x \in (0, l). \qquad (8.37)$$

Thus the current C(x, t) is simply a weighted sum of input current densities on the child limbs. The superposition principle indicates that the terminal boundary condition for the ψ potential is determined by the terminal boundary conditions for the child potentials. In the special cases in which each child limb is cut, then so is ψ, while if each child limb is sealed, then ψ is likewise sealed.

To complete the construction of the Rall equivalent cylinder, it remains to show that ψ conserves current at the branch point. This can be achieved with a suitable choice for the electrical and geometrical properties associated with the ψ cable. By differentiating ψ with respect to x and then using the current balance condition (8.35), it is immediately obvious that $I_A^{(p)}$, the axial current in the parent branch, satisfies

$$-I_A^{(p)} = g^{(p)}\,\frac{\partial V^{(p)}(l^{(p)},t)}{\partial x} = G_S\,\frac{\partial \psi(0,t)}{\partial x}. \qquad (8.38)$$

Expression (8.29) defines the axial current in the ψ cable to be

$$I_A^{(\psi)} = -g^{(\psi)}\,\frac{\partial \psi(x,t)}{\partial x} = -\sqrt{g_M^{(\psi)} g_A^{(\psi)}}\,(AP)^{1/2}_{\psi}\,\frac{\partial \psi(x,t)}{\partial x}$$

at coordinate x, and so the current at x = 0 on the ψ cable is

$$I_A^{(\psi)} = -g^{(\psi)}\,\frac{\partial \psi(0,t)}{\partial x} = \frac{g^{(\psi)}}{G_S}\,I_A^{(p)}$$

when $\partial\psi(0,t)/\partial x$ is replaced from (8.38). Therefore, to conserve current between the parent and the ψ cable, it is necessary that $I_A^{(\psi)} = I_A^{(p)}$, and this is ensured by the choice

$$g^{(\psi)} = G_S = \sum_{k=1}^{N} g^{(k)} \equiv \sqrt{g_M^{(\psi)} g_A^{(\psi)}}\,(AP)^{1/2}_{\psi} = \sum_{k=1}^{N} \sqrt{g_M^{(k)} g_A^{(k)}}\,(AP)^{1/2}_{k}. \qquad (8.39)$$


This is the condition to be satisfied for a generalised Rall cable. The definition mixes electrical and geometrical characteristics of the child dendrites. Several simplifications of the generalised Rall cylinder are possible. For example, if the electrical properties of the dendrites are all equal, and equal to that of the parent, then condition (8.39) becomes the purely geometrical condition

$$(AP)^{1/2}_{\psi} = \sum_{k=1}^{N} (AP)^{1/2}_{k}. \qquad (8.40)$$

Finally, if dendrites are assumed to have circular cross-section, then $A = \pi d^2/4$ and $P = \pi d$, so that $(AP)^{1/2} = \pi d^{3/2}/2$. In this situation condition (8.40) simplifies to the well-known Rall three-halves rule

$$d^{3/2}_{\psi} = \sum_{k=1}^{N} d^{3/2}_{k}. \qquad (8.41)$$

If the parent dendrite also has diameter $d_{\psi}$, then the new cable seamlessly extends the length of the parent dendrite by l and therefore opens up the possibility of a further Rall reduction.
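The three-halves rule (8.41) gives the diameter of the Rall equivalent cylinder directly; a minimal sketch:

```python
def rall_equivalent_diameter(child_diameters):
    # Rall three-halves rule (8.41): d_psi^(3/2) = sum of d_k^(3/2), so the
    # equivalent-cylinder diameter is the 2/3 power of that sum.
    return sum(d ** 1.5 for d in child_diameters) ** (2.0 / 3.0)

# Two equal children of unit diameter collapse to 2^(2/3), about 1.587.
print(rall_equivalent_diameter([1.0, 1.0]))
```

A single child is, of course, its own equivalent cylinder, and the rule applies recursively when the parent diameter matches, enabling the further reduction just mentioned.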

The notion of full equivalence is discussed in detail in a later section, but full equivalence must allow for preservation of information, and an essential condition for full equivalence is that electrotonic length is preserved. The Rall equivalent cylinder is necessarily incomplete, since the original branched structure consisted of N child limbs, each of length l, by contrast with the equivalent Rall cylinder, which has length l, the length of only one branch. In fact, the form of ψ indicates that the Rall equivalent cylinder is an average of the individual child cables. The characteristics of these individual cables cannot be retrieved from the Rall equivalent cylinder. Clearly, a truly equivalent structure enables the properties of the original child dendrites to be reconstructed from it. In this sense, the Rall equivalent cylinder is incomplete, but it can be made complete by the inclusion of a further N − 1 disconnected cables formed from the N − 1 pair-wise differences of the potentials on the child limbs. Each of these potentials describes a cable disjoint from every other cable, with one end cut and the other end satisfying the terminal condition of the original child branches.

The Discrete Tree Equations

One popular way to solve the model equations for a dendritic tree is to use finite difference methods to reformulate them as a discrete system of ordinary differential equations to be integrated with respect to time, the dependent variables in these equations being the membrane potential at a predetermined set of points (or nodes) distributed uniformly over the dendritic tree (Mascagni, 1989, and references therein). Since the underlying differential operator in the cable equation (8.15) is linear, the discretised equations have the matrix representation

$$\frac{dV}{dt} = AV + B \qquad (8.42)$$

where V is a vector of membrane potentials, B is a vector of electrotonic input currents and A is a square matrix embodying the electrical and geometrical features of the tree, including the connections between limbs, the terminal boundary conditions and a soma condition. For this reason, A is often called the structure matrix of the dendrite. If the dendrite is passive, this matrix will be independent of time. The formulation of the dendritic structure matrix9 presented here follows the procedure described in detail by Ogden et al. (1999). At

9 For other matrix representations see Tuckwell (1988a), Perkel and Mulloney (1978a, 1978b), Perkel, Mulloney and Budelli (1981) and Mascagni (1989).


this stage, it should be noted that equation (8.42) may be nonlinear, since B is the sum of the exogenous currents and the nonlinear components of the intrinsic voltage-dependent currents.
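As an illustration of the matrix form (8.42), the following sketch assembles the structure matrix A for a single unbranched passive cable (nondimensional equation (8.30) with J = 0), using the central difference (8.54) at interior nodes and reflected fictitious nodes at the two sealed ends. It is an assumption-laden toy: a real dendritic tree would add branch-point and soma rows, and the node count, spacing and resting potential are illustrative.

```python
import math

# Unbranched passive cable with sealed ends, discretised on n nodes spaced
# h apart in electrotonic units: dv/dt = A v + B (equation 8.42).
n, h, E_L = 11, 0.1, -65.0

A = [[0.0] * n for _ in range(n)]
B = [E_L] * n                        # forcing from the -(v - E_L) leakage term
for j in range(n):
    A[j][j] = -1.0 - 2.0 / h ** 2    # leakage plus central-difference diagonal
    if j > 0:
        A[j][j - 1] = 1.0 / h ** 2
    if j < n - 1:
        A[j][j + 1] = 1.0 / h ** 2
A[0][1] = 2.0 / h ** 2               # sealed end at x = 0 (reflection v_-1 = v_1)
A[n - 1][n - 2] = 2.0 / h ** 2       # sealed end at the far terminal

# Forward-Euler integration from a perturbed initial state.
v = [E_L + 10.0 * math.sin(math.pi * j / (n - 1)) for j in range(n)]
dt = 1.0e-3
for _ in range(10000):               # integrate for 10 membrane time constants
    v = [v[j] + dt * (sum(A[j][k] * v[k]
                          for k in (j - 1, j, j + 1) if 0 <= k < n) + B[j])
         for j in range(n)]
print(max(abs(x - E_L) for x in v))  # the perturbation has almost fully decayed
```

With no input, the passive cable relaxes to the resting potential: every row of A sums to −1, so the uniform state v = E_L is the steady state of (8.42).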

The continuous dendrite is first associated with a continuous "model dendrite" of multi-cylinder construction in a way that recognises that individual limb lengths can, in reality, be measured only to a prescribed experimental accuracy. Through a variety of steps involving the assignment of nodes, the representation of spatial derivatives, the treatment of branch points, points of discontinuity in cable cross-sectional area, terminal boundary conditions and the soma boundary condition, the equations and boundary conditions implicit in the specification of the model dendrite are manipulated into the finite set of ordinary differential equations (8.42). Each component of the discretisation procedure is now described in detail.

Canonical Cable and Electrotonic Length

A dendrite is modelled by a family of interacting cable equations, one for each limb, such that membrane current is everywhere conserved and membrane potential is continuous at dendritic branch points and the soma. Each cable equation describes how the effects of membrane capacitance (as measured by $C_M\,\partial V/\partial t$), membrane resistance (as measured by $g_M(V - E_L)$) and the axial diffusion of membrane potential (as measured by $(1/P(x))(\partial/\partial x)(g_A A(x)\,\partial V/\partial x)$) combine to determine the spatial-temporal behavior of the membrane potential on that limb. The first step in the construction of the model dendrite requires a transformation of each cable equation to a canonical form. This canonical form is formulated so as to give equal importance to each of the three interacting processes that shape the membrane potential. This procedure also serves to simplify and nondimensionalise the cable equation so as to make it more amenable to analytical and numerical10 methods.

Recall that the membrane potential V(x, t) in a limb of a dendrite of cross-sectional area A(x) and perimeter P(x) satisfies the partial differential equation

$$C_M\,\frac{\partial V(x,t)}{\partial t} + g_M(V - E_L) = \frac{1}{P(x)}\,\frac{\partial}{\partial x}\left(g_A A(x)\,\frac{\partial V}{\partial x}\right) - J_{extra}, \qquad (8.43)$$

$$J_{extra} = J_{IC}(x,t) + J_{SC}(x,t) + g_{NL}(V)(V - E_L). \qquad (8.44)$$

Classical analysis of dendritic structure usually begins by partitioning the physical dendrite into uniform cylinders (see Rall, 1962a). This discretisation process is based on subjective criteria11 to delimit cylinder boundaries. The subsequent rescaling of length from x → z and time from t → t* = t/τ (i.e. non-dimensionalisation) cannot be achieved until these cylinders have been defined. The rescaled length of each cylinder is defined as its electrotonic length, while its associated non-dimensionalised cable equation has the form

$$\frac{\partial V}{\partial t^*} + (V - E_L) = \frac{\partial^2 V}{\partial z^2} - \frac{J^*}{g} \qquad (8.45)$$

where $J^*$ is the electrotonic current corresponding to $J_{extra}$ defined in equation (8.44) and g is the g-value of the cable. The consequence of this non-dimensionalisation procedure is that the coefficients of the membrane potential and its spatial and temporal derivatives are all unity, and therefore equation (8.45) is the canonical form for a uniform cylinder, since each term carries equal weight. Indeed, the original scaling was motivated by this very requirement; it is the simplified form of equation (8.45) that recommends its acceptance

10 Strictly speaking, numerical methods can only be applied to nondimensionalised equations, since they deal with arithmetical quantities that are implicitly nondimensional.

11A new cylinder is started after a diameter change of at least 0.2 µm (e.g. Segev et al. 1989).


as a canonical form for the equation. However, the restriction to uniform cylinders implies that equation (8.45) is likely to be a special case of a more general canonical form.

Ideally, a non-dimensionalisation procedure is required that does not depend on the a priori definition of cylinders, but which reduces to equation (8.45) for a uniform cylinder. The more general canonical equation for non-uniform dendrites carries with it a generalised definition of electrotonic length, which is not subjective, but gives the well recognised electrotonic length for uniform cylinders and also leads to an objective procedure for discretising the dendritic tree.

This aim is achieved by rescaling dendritic geometry in a nonlinear way. Let the nondimensional axial coordinate z and time t* be defined by the transformations

$$z = \int_0^x \sqrt{\frac{g_M P(s)}{g_A A(s)}}\;ds, \qquad t^* = t/\tau, \qquad \tau = C_M/g_M. \qquad (8.46)$$

These nondimensional coordinates lead to the generalised canonical form of the cable equation for non-uniform dendrites. With these coordinate transformations,

$$\frac{\partial V}{\partial t} = \frac{g_M}{C_M}\,\frac{\partial V}{\partial t^*}, \qquad \frac{\partial V}{\partial x} = \sqrt{\frac{g_M P(x)}{g_A A(x)}}\;\frac{\partial V}{\partial z}.$$

After further straightforward analysis, the corresponding generalised nondimensionalised form of the cable equation (8.43), (8.44) may be shown to be

$$\frac{\partial V}{\partial t^*} + (V - E_L) = \frac{1}{g(z)}\,\frac{\partial}{\partial z}\left(g(z)\,\frac{\partial V}{\partial z}\right) - \frac{J}{g(z)}, \qquad (8.47)$$

in which

$$g(z) = \sqrt{g_A g_M A(x) P(x)}, \qquad z = \int_0^x \sqrt{\frac{g_M P(s)}{g_A A(s)}}\;ds. \qquad (8.48)$$

In equation (8.47), J is the electrotonic linear current density and is related to $J_{extra}$ by the requirement that, for arbitrary 0 < a < b < L,

$$\int_a^b P(x)\,J_{extra}\,dx = \int_{z(a)}^{z(b)} J\,dz \quad\equiv\quad J_{extra} = J\,\sqrt{\frac{g_M}{g_A A(x) P(x)}}. \qquad (8.49)$$

Equation (8.47) is the generalised canonical equation of a dendrite. For uniform dendritic cylinders, A(x) and P(x) are constant functions of x. In this event, g(z) is also constant and the canonical equation (8.47) simply reduces to the traditional form (8.45). Furthermore, z is simply a constant multiple of x; that is, the electrotonic length of a dendritic cylinder is a fixed multiple of its physical length. The familiar Rall definition of electrotonic length applies this result to each of the uniform cylinders comprising a dendritic limb. However, the electrotonic length l and physical length L of a dendrite are generally connected through the nonlinear relationship

$$l = \int_0^L \sqrt{\frac{g_M P(s)}{g_A A(s)}}\;ds, \qquad (8.50)$$

which becomes linear only when the integrand is constant.
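The generalised electrotonic length (8.50) can be evaluated for arbitrary profiles A(x) and P(x) by numerical quadrature; the sketch below (trapezoidal rule, illustrative parameter values) checks that a uniform circular cylinder reduces to the familiar L/λ with λ as in equation (8.26):

```python
import math

def electrotonic_length(L, gM, gA, P, A, n=10000):
    # Trapezoidal quadrature of equation (8.50):
    # l = integral_0^L sqrt(gM P(s) / (gA A(s))) ds, for arbitrary profiles.
    h = L / n
    f = lambda s: math.sqrt(gM * P(s) / (gA * A(s)))
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(L))

# Uniform circular cylinder: the result must reduce to L / lambda with
# lambda = sqrt(gA d / (4 gM)) as in equation (8.26).  Values illustrative.
gM, gA, d, L = 10.0, 1.0, 2.0e-6, 500.0e-6
P = lambda s: math.pi * d
A = lambda s: math.pi * d ** 2 / 4.0
lam = math.sqrt(gA * d / (4.0 * gM))
print(electrotonic_length(L, gM, gA, P, A))   # equals L / lam
```

For non-constant A and P the same routine applies unchanged, which is precisely the objectivity the generalised definition buys.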


Uniformly Tapered Dendrite

It is of practical interest to determine the electrotonic length of a uniformly tapered dendrite whose physical length is L, and whose left-hand and right-hand cross-sectional areas and perimeters are $A_L$, $P_L$ and $A_R$, $P_R$ respectively. It is shown in the section on generalised compartmental models (p. 270) that

$$A(x) = (1-\lambda)^2 A_L, \qquad P(x) = (1-\lambda)\,P_L, \qquad x = \lambda H, \quad \lambda \in [0, L/H],$$

where H is the theoretical length of the dendrite if it were tapered to extinction. Assuming that $g_A$ and $g_M$ are constant, then from equation (8.50) the electrotonic length of this dendrite is

$$\begin{aligned}
l &= \int_0^L \sqrt{\frac{g_M}{g_A}}\,\sqrt{\frac{(1-\lambda)P_L}{(1-\lambda)^2 A_L}}\;dx, \qquad x = \lambda H \\[4pt]
  &= H\,\sqrt{\frac{g_M}{g_A}}\,\sqrt{\frac{P_L}{A_L}}\,\int_0^{\lambda_R} (1-\lambda)^{-1/2}\,d\lambda \\[4pt]
  &= 2H\,\sqrt{\frac{g_M}{g_A}}\,\sqrt{\frac{P_L}{A_L}}\,\Bigl[1 - \sqrt{1-\lambda_R}\Bigr] \\[4pt]
  &= \sqrt{\frac{g_M}{g_A}}\,\sqrt{\frac{P_L}{A_L}}\,\frac{2H\lambda_R}{1 + \sqrt{1-\lambda_R}} \\[4pt]
  &= \sqrt{\frac{g_M}{g_A}}\,\frac{P_L}{\sqrt{A_L}}\,\frac{2L}{\sqrt{P_L} + \sqrt{P_R}}.
\end{aligned}$$

For a tapering dendrite, it is obvious that $P(x)/\sqrt{A(x)} = P_L/\sqrt{A_L}$ for all x. In effect, the constancy of the ratio $P(x)/\sqrt{A(x)}$ for all x is a test for uniform taper. The actual value of this ratio depends on the geometry of the dendritic cross-sectional area. For circular dendrites, the constant is $2\sqrt{\pi}$. Since the circle encloses maximum area for a given perimeter (the isoperimetric inequality of the calculus of variations), $P_L/\sqrt{A_L} \ge 2\sqrt{\pi}$ under all circumstances. Therefore, for dendrites of uniform taper,

$$l = \sqrt{\frac{g_M}{g_A}}\,\frac{P_L}{\sqrt{A_L}}\,\frac{2L}{\sqrt{P_L}+\sqrt{P_R}} \;\ge\; \sqrt{\frac{\pi g_M}{g_A}}\,\frac{4L}{\sqrt{P_L}+\sqrt{P_R}}$$

with equality if and only if the dendrite has circular cross-section.
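The closed-form result above can be cross-checked by integrating (8.50) directly with the taper profiles $A(x) = (1-\lambda)^2 A_L$, $P(x) = (1-\lambda)P_L$. A sketch with an illustrative circular taper (all parameter values assumed):

```python
import math

def tapered_electrotonic_length(L, gM, gA, A_L, P_L, P_R):
    # Closed-form electrotonic length of a uniformly tapered dendrite:
    # l = sqrt(gM/gA) * (P_L/sqrt(A_L)) * 2L / (sqrt(P_L) + sqrt(P_R)).
    return (math.sqrt(gM / gA) * (P_L / math.sqrt(A_L))
            * 2.0 * L / (math.sqrt(P_L) + math.sqrt(P_R)))

# Circular taper from 2 um down to 1 um diameter over 100 um; illustrative.
gM, gA = 10.0, 1.0
dL, dR, L = 2.0e-6, 1.0e-6, 100.0e-6
A_L, P_L, P_R = math.pi * dL ** 2 / 4.0, math.pi * dL, math.pi * dR
H = L / (1.0 - P_R / P_L)        # length at which the taper reaches extinction

# Midpoint-rule quadrature of (8.50) with the taper profiles.
n = 100000
dx = L / n
s = sum(math.sqrt(gM * (1.0 - (i + 0.5) * dx / H) * P_L
                  / (gA * (1.0 - (i + 0.5) * dx / H) ** 2 * A_L)) * dx
        for i in range(n))
print(abs(s - tapered_electrotonic_length(L, gM, gA, A_L, P_L, P_R)))  # tiny
```

The quadrature and the closed form agree to high precision, as they must.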

Construction of the Model Dendrite

The non-dimensionalisation procedure described in the previous subsections, when applied to a dendritic tree with n limbs of physical lengths $L_1, \ldots, L_n$, generates a morphologically equivalent structure with limbs of electrotonic lengths $l_1, \ldots, l_n$ respectively. Although the notion of a greatest common divisor is not generally sensible for arbitrary lengths $l_1, \ldots, l_n$, it can be replaced by the idea that, given a set of n arbitrary positive constants $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n$, there exist non-negative integers $m_1, \ldots, m_n$ and a length l such that

$$|l_k - m_k l| \le \varepsilon_k, \qquad 1 \le k \le n. \qquad (8.51)$$

Notionally, $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n$ are the non-dimensional measurement errors inherent in the determination of the electrotonic lengths $l_1, \ldots, l_n$ respectively, and represent the scaled errors in $L_1, \ldots, L_n$. They may also be interpreted as a user-specified acceptable error. Once this error is specified, the discretisation procedure is automatic. Although the choice of l and the accompanying integers $m_1, \ldots, m_n$ is not unique, for a prescribed choice of $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n$,


there is a largest value of l, say $l_{max}$, that is unique by definition. This largest value will henceforth be called the quantum length and be unambiguously denoted by l. Even for the quantum length, the values of $m_1, \ldots, m_n$ may still be non-unique, but now acceptable choices for any integer $m_k$ ($1 \le k \le n$) differ by at most one. For the purpose of dendritic modelling, the morphology of the original electrotonic dendrite is now approximated by a new "model dendrite" formed by replacing limb k (exact length $l_k$) by a limb of length $m_k l$. Once a quantum length and a set of integers $m_1, \ldots, m_n$ are specified, there are infinitely many electrotonic dendrites that are represented by the same model dendrite. The collection of these "equivalent morphologies" can be thought of as a fuzzy zone around the model dendrite, the size of this zone being dependent on the values of $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n$. Conspicuous by its absence in this discussion is the thickness of the original dendrite; it has no direct bearing on the choice of model dendrite, although it indirectly controls the specification of the ε values by influencing the choice of l, the quantum length. Clearly there is a relationship between l, the maximum length of model dendrite over which no changes in cross-sectional area and perimeter are permitted, and the sensitivity with which changes in dendrite thickness can be specified through the predetermined choice of $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n$. Therefore (8.50) and (8.51), together with the choice of an acceptable error, completely define the discretisation procedure.
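Condition (8.51) lends itself to a brute-force search for the quantum length. The sketch below scans a grid of candidate values and keeps the largest l for which every limb length lies within its tolerance of an integer multiple; it is only an illustration of the definition, not the chapter's own procedure, and the limb lengths and tolerances are invented.

```python
def quantum_length(lengths, eps, candidates=None):
    # Largest grid value l such that every electrotonic length l_k is within
    # eps_k of some non-negative integer multiple m_k * l (equation 8.51).
    if candidates is None:
        candidates = [i * 1.0e-4 for i in range(1, 20001)]
    best = None
    for l in candidates:
        if all(abs(lk - round(lk / l) * l) <= ek
               for lk, ek in zip(lengths, eps)):
            best = l       # candidates scanned in increasing order
    return best

# Three limbs with a common "near divisor" of roughly 0.15 electrotonic units.
lengths = [0.30, 0.45, 0.61]
eps = [0.02, 0.02, 0.02]
l = quantum_length(lengths, eps)
print(l)   # largest grid value satisfying (8.51) for these limbs
```

Note that very small l always satisfies (8.51) (the error can never exceed l/2), which is why the definition singles out the largest admissible value.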

Once the quantum length is chosen and the model dendrite constructed, a series of nodes, uniformly spaced a distance h apart, is distributed throughout the dendrite so that one node lies on each cylinder boundary and there is at least one (internal) node within each dendritic cylinder. Of course, smaller values of h provide superior refinement of the membrane potential, but at the cost of reduced computational speed and increased memory storage. The nodes, once placed, are numbered individually so that node $z_0$ corresponds to the tree-to-soma connection point. In a branched structure, it is impossible to guarantee consecutive numbering of nodes on all cylinders, since there are nodes that lie on more than one cylinder, but it is possible to guarantee that node numbers increase away from the soma. The enumeration scheme illustrated in figure 3 is implemented in this article because it simplifies the matrix representation of a dendritic tree and the construction of the equivalent cable. A key point in the enumeration algorithm is that nodes at which the membrane potential is known, for example, clamped ends (membrane potential externally fixed) or cut ends (membrane potential is zero), are left un-numbered. For a dendritic tree with N terminals, it is evident that consecutive numbering of nodes is broken on exactly (N − 1) occasions.

Finite Difference Formulae

As a preamble to the discretisation of the cable equations and the discussion of related issues, Taylor's theorem is used to derive some familiar and less familiar finite difference formulae for a function of a single variable. For sufficiently differentiable functions f of the single variable x, Taylor's theorem states that

$$f(x+h) = f(x) + h f'(x) + \frac{h^2}{2}\,f''(x) + \frac{h^3}{6}\,f'''(x) + O(h^4) \qquad (8.52)$$

provided h is suitably small. Similarly, it follows from Taylor's theorem that

$$f(x-h) = f(x) - h f'(x) + \frac{h^2}{2}\,f''(x) - \frac{h^3}{6}\,f'''(x) + O(h^4),$$

$$f(x+2h) = f(x) + 2h f'(x) + 2h^2 f''(x) + \frac{4h^3}{3}\,f'''(x) + O(h^4),$$

$$f(x-2h) = f(x) - 2h f'(x) + 2h^2 f''(x) - \frac{4h^3}{3}\,f'''(x) + O(h^4).$$


[Figure 3: schematic of a branched dendritic tree with the soma at node 0 and terminals labelled CUT and SEALED; numbered nodes increase away from the soma.]

Fig. 3. Discretisation of a dendritic tree. All nodes are spaced h apart in electrotonic space. Node numbering starts from z0 at the soma.

Based on these expansions, it is a matter of elementary algebra to demonstrate that f'(x) and f''(x) can be approximated to second order in h by the familiar central difference formulae

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2), \qquad (8.53)$$

$$f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2). \qquad (8.54)$$

Equally important, but less familiar, are the right-hand and left-hand finite difference formulae

$$f'(x) = \frac{3f(x) + f(x-2h) - 4f(x-h)}{2h} + O(h^2), \qquad (8.55)$$

$$f'(x) = \frac{4f(x+h) - 3f(x) - f(x+2h)}{2h} + O(h^2). \qquad (8.56)$$

Results (8.55) and (8.56) are used in the finite difference treatment of boundary conditionsinvolving gradients, for example, sealed ends, branch points etc.
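The second-order accuracy of the one-sided formulae (8.55) and (8.56) is easy to verify numerically: halving h should reduce the error by roughly a factor of four. A sketch:

```python
import math

def backward_diff(f, x, h):
    # One-sided formula (8.55): f'(x) ~ (3f(x) + f(x-2h) - 4f(x-h)) / (2h).
    return (3.0 * f(x) + f(x - 2.0 * h) - 4.0 * f(x - h)) / (2.0 * h)

def forward_diff(f, x, h):
    # One-sided formula (8.56): f'(x) ~ (4f(x+h) - 3f(x) - f(x+2h)) / (2h).
    return (4.0 * f(x + h) - 3.0 * f(x) - f(x + 2.0 * h)) / (2.0 * h)

f, x, exact = math.sin, 0.7, math.cos(0.7)
e1 = abs(backward_diff(f, x, 0.01) - exact)
e2 = abs(backward_diff(f, x, 0.005) - exact)
e3 = abs(forward_diff(f, x, 0.01) - exact)
e4 = abs(forward_diff(f, x, 0.005) - exact)
print(e1 / e2, e3 / e4)   # both ratios close to 4, confirming O(h^2)
```

These are exactly the stencils used below when a gradient is needed at a terminal, soma or shared node where only nodes on one side are available.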

The Discrete Formulation

Suppose that the dendritic tree has n nodes, denoted symbolically by $z_0, z_1, \ldots, z_{n-1}$, where $z_0$ is chosen to be the soma. The soma is treated separately from the tree itself. For tree nodes (i.e. j > 0), let $v_j$ denote the membrane potential at node $z_j$, let $g_j$ denote the g-value of the dendritic segment (a mathematical cylinder of length h) which has $z_j$ as its distal end, and let $J_j$ be the electrotonic current density applied to the dendrite at node $z_j$. The construction of the discrete equations requires the consideration of three different classes of nodes:

Terminal nodes. Given a distribution of nodes on a model dendritic tree, a node is classified as a terminal node if it is either the node at the soma-to-tree connection or is the last enumerated node on any pathway from the soma to a dendritic tip. For leaky or sealed ends, the terminal node will be located at the dendritic tip, but for a cut or clamped end, the terminal node will be the node connected to the dendritic tip, that is, the penultimate node on that limb.


Shared nodes Shared nodes correspond to branch points and points of discontinuity in cable geometry.

Internal nodes Any node that is not a shared node or a terminal node is, by definition, an internal node.

For any node zj, let rj⁻¹ be defined as the sum of the g-values of all the segments of the finite difference representation of the dendrite that contain zj. For example, r0⁻¹ is the sum of the g-values of all the segments containing the soma z0 while if zj is a branch point at which N child dendrites meet a parent dendrite, then rj⁻¹ = gj + ∑_{r=1}^{N} gkr where gkr is the g-value of the r th child dendrite. If node zj is a dendritic terminal then rj⁻¹ = gj; if zj is an internal node then rj⁻¹ = 2gj and, finally, if zj is a point of discontinuous cable geometry then rj⁻¹ = gj + gj+1.

Finite difference formulae are used to approximate the first and second spatial derivatives of membrane potential. Let zi, zj and zk be three nodes, physically in sequence along a dendritic limb but not necessarily sequentially numbered and such that i < j < k; then it follows from finite difference formulae (8.53) and (8.54) that

∂v(zj, t)/∂x = (vk − vi) / 2h + O(h²) ,   (8.57)

∂²v(zj, t)/∂x² = (vk − 2vj + vi) / h² + O(h²) .   (8.58)

One sided gradient calculation Suppose that zj is a shared node, the somal node or a node located at a tip of a dendritic tree. The computation of the potential gradient at zj is now non-trivial since zj is either flanked by a single node (soma or dendritic tip) or, alternatively, a different model equation applies to each of its flanking nodes (shared node). This impasse is often resolved using an idea borrowed from the numerical treatment of boundary value problems for ordinary differential equations, namely the notion of extending the differential equation to zj itself by creating a suitably positioned fictitious node zi or zk. The potential at the fictitious node is then manipulated to fix the gradient value at zj. To be specific, suppose that zi and zj are physically contiguous nodes and that zk is a fictitious node more distal from the soma than zj. A rearrangement of the finite difference formulae for the potential gradient and cable equation at zj yields

vk = h² dvj/dt − vi + (h² + 2)vj + O(h⁴) ,   vk = vi + 2h ∂v/∂x |node j + O(h³) .

Taken together, these two equations enable the fictitious potential vk at zk to be eliminated to obtain

(2/h) ∂v(zj(−), t)/∂x = dvj/dt + βvj − 2αvi + O(h)   (8.59)

where the symbolism zj(−) indicates that the derivative has been computed by introducing a fictitious node to the right of zj. Equation (8.59) therefore provides a formula for the potential gradient at zj in terms of known potentials less distal from the soma. Similarly, the formula

(2/h) ∂v(zj(+), t)/∂x = −dvj/dt + 2αvk − βvj + O(h)   (8.60)

expresses the potential gradient at zj in terms of potentials more distal from the soma. Results (8.59) and (8.60) are needed in the discussion of terminal and shared nodes but are not required for internal nodes, which are now examined.


Internal nodes Suppose that zj is an internal node flanked by nodes zi and zk, with zk most distal from the soma. The cable equation at zj, namely,

∂V(zj, t)/∂t + V(zj, t) = ∂²V(zj, t)/∂x² − Jj/gj ,

is replaced by a finite difference scheme centred on zj. The resultant equation is

dvj/dt = αvi − βvj + αvk − Jj/gj   (8.61)

to O(h²) where α = 1/h² and β = (1 + 2α). If V and J are the vectors of membrane potentials and electrotonic input currents respectively then, in the notation of linear algebra, this equation has the form

Ij dV/dt = Aj V − 2Rj J

where Ij is the j th row of the n × n identity matrix In, Rj is the j th row of the n × n diagonal matrix R where Rj,j = rj and Aj is the j th row of the n × n matrix A whose entries in the j th row are all zero except aj,i = aj,k = α and aj,j = −β.

Terminal nodes Terminal nodes are more difficult to handle than internal nodes since one of zk or zi is either fictitious, as happens with the somal node or at a dendritic tip, or one is not a variable of the model as happens with cut and voltage clamped ends. The treatment of terminal nodes reduces to three different possibilities.
Cut end boundary condition: If zj is a terminal node adjacent to a cut end then zj essentially behaves like an internal node with vk = 0. It follows immediately from (8.61) that

dvj/dt = αvi − βvj − Jj/gj .   (8.62)

In the notation of linear algebra, equation (8.62) is expressed in the form

Ij dV/dt = Aj V − 2Rj J ,

where Ij is the j th row of the n × n identity matrix In, Rj is the j th row of the n × n diagonal matrix R where Rj,j = rj and Aj is the j th row of the n × n matrix A whose entries in the j th row are all zero except aj,j = −β and aj,i = α. The voltage clamped boundary condition is formulated in a similar way.
Current injected boundary condition: Suppose that zj is a dendritic terminal into which current Ij(i) is injected but which leaks charge to the extracellular region at a rate proportional to the potential difference between the potential vj of the dendritic terminal and the potential of the extracellular material, taken to be zero (Ohmic leakage). If gL is the lumped leakage conductance of the dendritic tip, then the current injected boundary condition is

gj ∂v(zj(−), t)/∂x + gL vj = −Ij(i) .   (8.63)

When formula (8.59) is used to replace the right handed gradient in (8.63) and it is recognised that i = j − 1, the final leakage boundary condition at node zj takes the form

dvj/dt = 2αvj−1 − βvj − 2(gL vj + Ij(i)) / (gj h)   (8.64)


to O(h). In the notation of linear algebra, equation (8.64) has matrix representation

Ij dV/dt = Aj V − 2Rj J

where Ij is again the j th row of the n × n identity matrix In, Rj is the j th row of the n × n diagonal matrix R where Rj,j = rj, the j th entry of the vector J is Ij(i)/h and Aj is the j th row of the n × n matrix A whose entries in the j th row are all zero except aj,j−1 = 2α and aj,j = −β − 2gL/(gj h). In particular, a sealed terminal corresponds to gL = 0 and Ij(i) = 0.
Tree-to-soma condition: Finally, the soma (or local origin when transforming a sub-tree) behaves like a terminal node in the sense that the soma must contribute a boundary condition in order to complete the system of nodal equations. Suppose that N dendritic limbs emanate from the soma and that limb r, (1 ≤ r ≤ N), starts with z0 and has second node zkr. If total electrotonic current J0 is applied to the soma, then the somal potential vS = v0 satisfies

g (ε dvS/dt + vS) = −J0 + ∑_{r=1}^{N} gkr ∂vkr(z0(+), t)/∂x ,   (8.65)

in which g, the total somal conductance, and ε are defined by the expressions

g = AS gS ,   ε = gM CS / (gS CM) .

The finite difference formulation of the somal condition (8.65) is derived by using formula (8.60) to replace potential gradients in equation (8.65). The result is

g (ε dv0/dt + v0) = −J0 + (h/2) ∑_{r=1}^{N} gkr [−dv0/dt + 2αvkr − βv0]

to O(h²). After some algebraic manipulation, it can be shown that the somal condition (8.65) finally simplifies to

dv0/dt = −[(2g + βhg0)/(2εg + hg0)] v0 + [2αh/(2εg + hg0)] ∑_{r=1}^{N} gkr vkr − 2J0/(2εg + hg0) .   (8.66)

In the notation of linear algebra, the somal condition (8.66) has representation

I0 dV/dt = A0 V − 2R0 J

where I0 is the 0 th row of the n × n identity matrix In, R0 is the 0 th row of the n × n diagonal matrix R where R0,0 = 1/(2εg + hg0) and A0 is the 0 th row of the n × n matrix A whose entries in the 0 th row are all zero except a0,kr = 2αhgkr/(2εg + hg0) and a0,0 = −(2g + βhg0)/(2εg + hg0).

Shared nodes Shared nodes occur at branch points and at points of discontinuity in limb geometry (change in cross-sectional area of limb). Both are characterised by the need to conserve axial current.
Discontinuous cable geometry: Suppose that zj is a node at which limb geometry changes discontinuously. If electrotonic current Jj is injected at zj, then conservation of current requires that

−gj ∂v(zj(−), t)/∂x − Jj + gj+1 ∂v(zj(+), t)/∂x = 0 .

Page 23: An Introduction to the Principles of Neuronal Modelling

8 An Introduction to the Principles of Neuronal Modelling 235

Each partial derivative is replaced by its finite difference approximation, (8.59) and (8.60) respectively. After some elementary algebra, it follows that

dvj/dt = −βvj + [2αgj/(gj + gj+1)] vi + [2αgj+1/(gj + gj+1)] vj+1 − 2Jj/[h(gj + gj+1)]   (8.67)

to O(h). The definition of rj allows equation (8.67) to be expressed in the more elegant form

dvj/dt = −βvj + 2αrj gj vi + 2αrj gj+1 vj+1 − 2rj Jj/h   (8.68)

with matrix representation

Ij dV/dt = Aj V − 2Rj J

where Ij is the j th row of the n × n identity matrix In, the vector J has j th component Jj/h, Rj is the j th row of the n × n diagonal matrix R where Rj,j = rj and Aj is the j th row of the n × n matrix A whose entries in the j th row are all zero except aj,i = 2αrj gj, aj,j+1 = 2αrj gj+1 and aj,j = −β.
Branch point condition: Suppose that zj is a branch point at which a parent branch with penultimate node zj−1 meets N child branches each beginning with zj but such that the second node on child r is zkr, (1 ≤ r ≤ N). If electrotonic current Jj is injected at zj, then conservation of current at this branch point requires that

−gj ∂v(zj(−), t)/∂x − Jj + ∑_{r=1}^{N} gkr ∂vchild-r(zj(+), t)/∂x = 0 .

The partial derivatives are now replaced by their respective expressions using results (8.59) and (8.60) to obtain

−gj [dvj/dt + βvj − 2αvj−1] + ∑_{r=1}^{N} gkr [−dvj/dt + 2αvkr − βvj] − 2Jj/h = 0

to O(h). Bearing in mind that r j is the sum of the g-values of all limbs impinging on z j,this condition simplifies to

dvjdt

= −βvj + 2αr jg jvj−1 + 2αr j

N∑r=1

gkrvkr − 2r jJjh. (8.69)

In the parlance of linear algebra, equation (8.69) has representation

Ij dV/dt = Aj V − 2Rj J

where Ij is the j th row of the n × n identity matrix In, the vector J has j th component Jj/h, Rj is the j th row of the n × n diagonal matrix R where Rj,j = rj and Aj is the j th row of the n × n matrix A whose entries in the j th row are all zero except aj,j−1 = 2αrj gj, aj,k1 = 2αrj gk1, . . . , aj,kN = 2αrj gkN and aj,j = −β.


Matrix Representation of a Dendritic Tree

The previous analysis indicates how the discrete formulation of the dendritic tree produces one equation for each node. These equations can be assembled to form the n × n system of differential equations

dV/dt = AV − 2RJ .   (8.70)

In these equations, A is commonly called the tree matrix or structure matrix of the dendrite and is determined entirely by the dendritic geometry and terminal boundary conditions. The tree matrix is independent of tree potentials although it may be dependent on time if synaptic inputs are active. On the other hand, the input currents J may be dependent on both time and potential. The enumeration scheme, together with the use of second order central differences to approximate derivatives, ensures that A is largely tri-diagonal although it has unavoidable off-tri-diagonal entries. Branch points always generate off-tri-diagonal entries since nodes can be numbered consecutively only on one cylinder. Some obvious properties of A are now described.

(a) A branch point consisting of one parent and N child branches has (N − 1) pairs of off-tri-diagonal elements; a binary branch point has a pair of off-tri-diagonal entries. The examples in Figs 4 and 5b illustrate this structure. Unbranched structures (cables) always have completely tri-diagonal tree matrices (Figure 5a). Indeed, several unbranched cables may be placed together in the same matrix as illustrated in Figure 5c.

(b) Although A is not a symmetric matrix, it is guaranteed to be structurally symmetric in the sense that aij ≠ 0 ⇐⇒ aji ≠ 0. This property of A arises from the symmetric nature of connectivity; if node i is connected to node j then node j is connected to node i. In particular, A has zero entries on its sub- and super-diagonals wherever consecutively numbered nodes are not connected.

Tree matrix example The representation of dendritic structures using second order central differences is ideally suited to numerical methods. However, it is instructive to demonstrate the above ideas with reference to a nontrivial dendritic tree whose technical details are algebraically feasible. A Y-junction with a parent limb of length l and g-value gP connected to a soma at one end and connected to two sealed limbs of length l and g-values gL and gR at its other end provides the simplest branched structure with a disconnected section. The configuration and designation of nodes are illustrated in Fig 4.

The tree matrix for this simple but non-trivial tree may be expressed in the algebraically convenient form

A =

    −βS   αk²   0     0     0     0     0
    α     −β    α     0     0     0     0
    0     αp²   −β    αq²   0     αr²   0
    0     0     α     −β    α     0     0
    0     0     0     2α    −β    0     0
    0     0     α     0     0     −β    α
    0     0     0     0     0     2α    −β        (8.71)

where gS = gP + gR + gL = r2⁻¹ and

k² = 2hgP/(2εg + hgP) ,   p² = 2gP/gS ,   q² = 2gR/gS ,   r² = 2gL/gS .   (8.72)



Fig. 4. The figure represents a dendritic tree consisting of a parent limb of length l and g-value gP connected to a soma at z0 and branching into two sealed limbs each of length l and with g-values gR and gL at z2. Each limb is described by three nodes giving a total of seven nodes over the entire tree.

Nodes z4 and z6 are sealed terminals, z0 is the soma-to-tree connection and z2 is a binary branch point. Nodes z1, z3 and z5 are all internal. This tree will be analysed in detail later. In particular, subsequent analysis will use the result p² + q² + r² = 2.

Figures 5a–c illustrate other tree matrices schematically. Note the similarity between the tree matrices in Figures 5b and 5c. The off-tri-diagonal elements indicate that two unbranched segments are connected at the junction.
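The structural claims above can be checked directly by assembling the matrix (8.71). The following sketch uses hypothetical parameter values (h, ε, g and the three limb g-values are our own choices, not taken from the text) and verifies the identity p² + q² + r² = 2, the count of 3n − 2 non-zero entries, and the single off-tri-diagonal pair produced by the binary branch point.

```python
# Sketch: assemble the 7x7 tree matrix (8.71) for the Y-junction of Fig. 4
# using the definitions (8.72), with illustrative (assumed) parameter values.
import numpy as np

h, eps, g, gP, gR, gL = 0.1, 1.0, 5.0, 3.0, 2.0, 1.0   # hypothetical values
gS = gP + gR + gL                                       # = 1/r2
a = 1.0 / h**2                                          # alpha
beta = 1.0 + 2.0 * a
betaS = (2 * g + beta * h * gP) / (2 * eps * g + h * gP)
k2 = 2 * h * gP / (2 * eps * g + h * gP)                # equation (8.72)
p2, q2, r2 = 2 * gP / gS, 2 * gR / gS, 2 * gL / gS

A = np.array([
    [-betaS, a * k2, 0,      0,      0,     0,      0],
    [a,      -beta,  a,      0,      0,     0,      0],
    [0,      a * p2, -beta,  a * q2, 0,     a * r2, 0],
    [0,      0,      a,      -beta,  a,     0,      0],
    [0,      0,      0,      2 * a,  -beta, 0,      0],
    [0,      0,      a,      0,      0,     -beta,  a],
    [0,      0,      0,      0,      0,     2 * a,  -beta],
])

assert abs(p2 + q2 + r2 - 2.0) < 1e-12      # identity quoted in the text
assert np.count_nonzero(A) == 3 * 7 - 2     # 3n - 2 non-zero entries
assert np.all((A != 0) == (A.T != 0))       # structural symmetry
assert A[2, 5] != 0 and A[5, 2] != 0        # the one off-tri-diagonal pair
```

The two zeros on the sub- and super-diagonal at positions (5, 4) and (4, 5) mark the start of the second path (z5, z6), exactly as the path-counting argument of the next section predicts.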

Formal Solution of Matrix Equations

This section seeks to establish two non-trivial properties of the tree matrix A, namely

(i) that there exists an n × n diagonal matrix S such that S⁻¹AS is symmetric, that is, A is similar to a symmetric matrix;

(ii) that the eigenvalues of A are always real and negative.

Although the first of these properties appears theoretical, it is a key ingredient of the argument used to establish the second result, and also leads eventually to the concept of the equivalent cable. The second of these results essentially states that this model of a dendritic tree predicts exponential dissipation of input currents without the oscillations that occur in weakly damped systems. In this respect, the dendritic model is consistent with observed dendritic behavior. Such a property of the model needs to be justified and cannot simply be taken for granted. Perhaps the best illustration of this idea is in mechanics/thermodynamics where it is possible to construct environments in which heat flows from cold bodies to hot bodies without the expenditure of energy. This process is entirely consistent with the undisputed principles of conservation of mass, conservation of energy, conservation of linear momentum and conservation of angular momentum, but has never been observed to occur spontaneously in reality nor is it ever likely to be observed in the future because it implies the existence of machines that convert heat into work with perfect efficiency. The concept of entropy is introduced with the specific purpose of prohibiting spontaneous heat flow from cold to hot bodies. The motto is that although the model appears to encapsulate all the reasonable features of the real phenomenon, it can never be assumed that the model does not also admit undesirable effects that are not enjoyed by the real phenomenon.


[Figure 5 schematic: sparsity patterns of tree matrices. Key: node; zero element on upper/lower diagonals; non-zero off-tri-diagonal element.]

Fig. 5. Branched trees, unbranched cables and their tree matrix representation. In (a) any single unbranched structure has a tree matrix that is tri-diagonal; in (b) a singly branched tree has an almost tri-diagonal tree matrix; in (c) placing two unbranched cables in the same matrix representation gives a tri-diagonal matrix, but the elements representing each cable are now separated by two zero elements along the diagonal.

Structure of the Tree Matrix

The structural symmetry of the tree matrix A already alluded to in the previous sections is now described in detail. It will be seen that a tree described by n nodes has a tree matrix comprising n non-zero diagonal entries and 2(n − 1) non-zero off-diagonal entries distributed symmetrically; a total of 3n − 2 non-zero entries. While the number of diagonal entries is self-evident, the successful structural analysis of A is critically dependent on the distribution and number of its off-diagonal entries. Two contrasting discussions are presented here, both reaching the same conclusion but using different perspectives.


Local argument Any tree node can be classified as either an internal node (a node connected to a node on either side) or a node corresponding to the soma, a branch point or a dendritic terminal. Internal nodes generate two off-diagonal entries arising from their connection to their two neighbours, while nodes corresponding to dendritic terminals generate a single off-diagonal entry in A since such nodes have only one neighbour. With this information in mind, the idea is to start at the tips of a dendritic tree and proceed towards the soma, counting the surplus/deficit in off-diagonal entries with respect to two entries per node. Suppose that N terminal branches and a parent limb meet at a branch point; then one off-diagonal entry is generated for each cylinder incident on the branch point, (N + 1) off-diagonal entries in total. Thus the branch point generates a surplus of (N − 1) off-diagonal entries to compensate for the deficit of N off-diagonal entries, one from each terminal limb. Thus there is a deficit of exactly one off-diagonal entry in A arising from the complete description of that branch point. In effect, the penultimate node of the parent limb of that branch point now behaves like a terminal node. This argument may be repeated until the soma is reached, each repetition revealing a deficit of exactly one off-diagonal element in A. However, the counting at the soma has presupposed that the soma is connected to a parent limb, which it is not, and therefore has over counted off-diagonal entries by one. Thus there is a deficit of two off-diagonal entries in A, that is, the tree matrix A consists of exactly (n − 1) pairs of non-zero off-diagonal entries.

Global argument Any tree with N terminals can be divided into a set of N paths, each consisting of consecutively numbered nodes. There is one path, the soma path, where numbering starts from “0” and ends on a dendritic terminal, plus N − 1 additional paths each starting with a node connected to a branch point and ending on a dendritic terminal (or a node connected to a terminal in the case of a voltage boundary condition). Observe that these paths are uniquely determined by the numbering scheme and have the property that every tree node lies on exactly one path. Each path must contribute a tri-diagonal portion to the tree matrix, plus additional off-tri-diagonal elements arising from the connection of that path to the rest of the tree.

Now consider any path starting with node p and connected to another path at the branch point described by node j; then p > j + 1. Tree connectivity ensures that elements ajp and apj of the matrix A are non-zero and off-tri-diagonal, while elements ap(p−1) = a(p−1)p = 0 since node p − 1 must be a dendritic terminal and therefore cannot be connected to node p. Since the soma path does not connect to another path, each non-somal path contributes a pair of non-zero off-tri-diagonal entries and a pair of zero elements on the sub- and super-diagonals of A, in total N − 1 pairs of non-zero off-tri-diagonal elements and N − 1 pairs of zero elements on the sub- and super-diagonals of A.

If the tree is represented by n nodes then there are n − 2N nodes that lie within paths. Each such node, j say, must connect to nodes j − 1 and j + 1 and so contributes two off-diagonal elements to the j th row of A, giving a total of 2(n − 2N) off-diagonal elements. The off-tri-diagonal elements due to connections between paths have already been determined as 2(N − 1). Since the starting node, p say, of each path (including the soma path) must also be connected to node p + 1, there are a further N off-diagonal elements, one each in row p. Finally, each of the N terminal nodes must connect to just one other node, yielding a total of N off-diagonal elements. In total, then, there are 2(n − 2N) + 2(N − 1) + N + N = 2(n − 1) off-diagonal elements.

Symmetrising the Tree Matrix

The symmetrising argument consists of three observations. The first is that the structuralsymmetry of A guarantees that it has an even number of non-zero off-diagonal entries,


while the properties of the finite difference algorithm guarantee that these entries are all positive.

Now let S be a nonsingular n × n real diagonal matrix with entry si in the i th row and column; then the matrix S⁻¹AS has (i, j)th entry (sj/si)ai,j and (j, i)th entry (si/sj)aj,i. Self evidently, the diagonal entries of S⁻¹AS are just the diagonal entries of A. For each pair of non-zero off-diagonal entries ai,j and aj,i, the values of si and sj (j ≠ i) are chosen to satisfy

(si/sj) aj,i = (sj/si) ai,j   →   sj/si = √(aj,i/ai,j)   (8.73)

where it should be recognised that the argument of this square root is always positive. Since each row of A must contain at least one off-diagonal entry and there are exactly (n − 1) different pairs of off-diagonal elements in A, then equations (8.73) are a set of (n − 1) simultaneous equations in the n unknowns s0, . . . , sn−1. Their general solution involves an arbitrary constant which may be taken to be s0 without loss of generality. Typically s0 is set to unity. Consequently there exists a diagonal matrix S such that S⁻¹AS is a symmetric matrix, that is, the tree matrix A is similar to a symmetric matrix.

Indeed, the non-zero off-diagonal entry of S⁻¹AS in the (i, j) location is √(aij aji) while the symmetrising matrix has diagonal entry si = √(r0/ri) in the i th row and column. Therefore, there is no practical need to construct A in a numerical procedure since S⁻¹AS and S can be constructed directly.

Example of the symmetrisation process As an example of the symmetrisation procedure, the tree matrix (8.71) may be shown to have symmetric form

S⁻¹AS =

    −βS   αk    0     0     0     0     0
    αk    −β    αp    0     0     0     0
    0     αp    −β    αq    0     αr    0
    0     0     αq    −β    √2α   0     0
    0     0     0     √2α   −β    0     0
    0     0     αr    0     0     −β    √2α
    0     0     0     0     0     √2α   −β        (8.74)

where S is the diagonal matrix

S = diag (1, 1/k, p/k, p/(kq), √2 p/(kq), p/(kr), √2 p/(kr)) .   (8.75)

Eigenvalues and Eigenvectors of Matrices

A non-zero vector X is an eigenvector of a square matrix A provided there is a scalar µ, called an eigenvalue, such that

AX = µX .

Since (A − µI)X = 0 with X ≠ 0, the eigenvalues of A are just the solutions of the polynomial equation det(A − µI) = 0. Let S be a nonsingular square matrix of the same type as A; then

det(S⁻¹AS − µI) = det(S⁻¹(A − µI)S) = det S⁻¹ det(A − µI) det S = det(A − µI) .   (8.76)


Hence µ is an eigenvalue of A if and only if it is an eigenvalue of S⁻¹AS. When A is a real matrix, the polynomial equation det(A − µI) = 0 is real and so its solutions can be either real or complex conjugate pairs. Hence real matrices need not have real eigenvalues. However, it is well known that real symmetric matrices have real eigenvalues. To appreciate this fact, let A be a real symmetric matrix with eigenvalue µ and corresponding eigenvector X so that AX = µX. Bearing in mind that A is a real matrix, the complex conjugate of AX = µX yields AX* = µ*X* where z* denotes the complex conjugate of z. The transpose of AX* = µ*X* gives

µ*(X*)ᵀ = (AX*)ᵀ = (X*)ᵀAᵀ = (X*)ᵀA ,

and when this equation is post multiplied by X, the result is

µ*(X*)ᵀX = (X*)ᵀAX = (X*)ᵀ(µX)   →   (µ − µ*)(X*)ᵀX = 0 .

Since X ≠ 0 then µ = µ* and so µ is real. This is a general property of real symmetric matrices.

Recall now that every tree matrix A has an associated non-singular diagonal matrix S such that S⁻¹AS is symmetric. Result (8.76) indicates that A and S⁻¹AS have the same eigenvalues while the result on real symmetric matrices indicates that the eigenvalues of S⁻¹AS are real. Hence the eigenvalues of A are real although A is not symmetric. Furthermore, if AX = µX then for all 0 ≤ i < n,

∑_{j=0}^{n−1} aij xj = µxi   →   (aii − µ) xi = − ∑_{j≠i} aij xj

and if xi is the largest component of X then

|aii − µ| ≤ ∑_{j≠i} |aij| |xj|/|xi| ≤ ∑_{j≠i} |aij| .

This simple result is commonly known as Gershgorin’s “circle theorem”. It reveals an important property of tree matrices.

Let A be a tree matrix. It is an algebraic fact that tree matrices are diagonally dominant, that is, the modulus of each main diagonal entry exceeds the sum of the moduli of all the off-diagonal entries in the row containing that entry. The comparisons are

Type of node                        Diagonal entry               Sum of moduli of off-diagonal entries

Soma                                −(2g + hβg0)/(2εg + hg0)     2hαg0/(2εg + hg0)
Internal nodes and branch points    −(1 + 2α)                    2α
Terminal nodes                      −(1 + 2α)                    α

Since the diagonal entries of tree matrices A are negative without exception and dominate their rows, the family of Gershgorin circles is contained entirely within the left half-plane. No circle contains the origin and so µ = 0 is not an eigenvalue of A. But A is guaranteed to have real eigenvalues and so all the eigenvalues of A are real and negative. This is an important result since it will shortly be clear that this model of a dendritic tree predicts the attenuation of current input as is observed in real physical dendrites.
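These two facts, diagonal dominance and real negative eigenvalues, can be illustrated numerically. The sketch below builds a hypothetical sealed cable matrix (the parameter values and cable length are our own choices) and checks that every Gershgorin disc lies strictly in the left half-plane and that the computed eigenvalues are real and negative.

```python
# Sketch: Gershgorin discs of a (hypothetical) sealed cable matrix lie in the
# left half-plane, so its eigenvalues are real and negative.
import numpy as np

h = 0.25
alpha = 1 / h**2
beta = 1 + 2 * alpha
n = 8
A = np.diag(-beta * np.ones(n)) \
    + np.diag(alpha * np.ones(n - 1), 1) \
    + np.diag(alpha * np.ones(n - 1), -1)
A[0, 1] = A[-1, -2] = 2 * alpha     # sealed-end rows (fictitious node form)

# diagonal dominance: each disc centre plus its radius remains negative
radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
assert np.all(np.diag(A) + radii < 0)

mu = np.linalg.eigvals(A)
assert np.allclose(np.imag(mu), 0)  # real: A is similar to a symmetric matrix
assert np.all(np.real(mu) < 0)      # and strictly negative
```

For this matrix every disc is centred at −(1 + 2α) with radius 2α, so every eigenvalue satisfies µ ≤ −1: each mode decays at least as fast as e^{−t}.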


Solution of the Discretised Cable Equations

It has already been demonstrated that the tree matrix A has real eigenvalues, all of which are negative. It can be shown that there is a real nonsingular matrix P (the columns of P are some ordering of the eigenvectors of A) such that P⁻¹AP = D where D is a real diagonal matrix containing the eigenvalues of A. Alternatively, A = PDP⁻¹. Using this representation of A, the original system of differential equations (8.70) becomes

dV/dt = PDP⁻¹V − 2RJ .   (8.77)

Let Y = P⁻¹V; then Y satisfies

dY/dt = DY − 2P⁻¹RJ .   (8.78)

Given any constant square matrix M, it can be proved formally that d(e^{Mt})/dt = M e^{Mt}. This idea, when applied to equation (8.78), yields

d(e^{−Dt} Y)/dt = −2 e^{−Dt} P⁻¹RJ

which now integrates to

Y(t) = e^{Dt} Y(0) − 2 ∫₀ᵗ e^{D(t−s)} P⁻¹R J(s) ds .   (8.79)

In conclusion V = PY satisfies

V(t) = P e^{Dt} Y(0) − 2 ∫₀ᵗ P e^{D(t−s)} P⁻¹R J(s) ds = P e^{Dt} Y(0) − 2 ∫₀ᵗ e^{A(t−s)} R J(s) ds .   (8.80)

Since D = diag (µ0, µ1, . . . , µn−1) is a diagonal matrix, it is self evident that e^{Dt} is the diagonal matrix diag (e^{µ0 t}, e^{µ1 t}, . . . , e^{µn−1 t}). The corresponding time constants for this representation of the dendritic tree are therefore |µk|⁻¹, 0 ≤ k < n. Of course, the real dendritic tree has a countably infinite set of time constants. It is only the leading eigenvalues (the least negative in this case) that are adequately approximated by the finite difference representation of the tree. A finer discretisation of the dendritic tree can improve the accuracy of the matrix determination of the leading time constants, both numerically and in number, but still only captures a finite number of eigenvalues by contrast with the analytical analysis of a dendritic tree. Of course, it is only the leading eigenvalues that are important practically since the others decay too quickly to have any serious impact on the solution over long periods of time.

In practice, the finite difference representation of a dendritic tree provides neither a practical algorithm for finding tree time constants nor a sensible way to integrate the tree equations with respect to time, particularly if accuracy is required. The reason lies in the algebraic (powers of h or reciprocal powers of n) convergence of finite difference schemes, made worse by the reduced accuracy inherent in the treatment of branch points, points of discontinuity and terminal boundary conditions. The strength of the finite difference representation of a dendritic tree lies in the ability of the matrix A to capture the connectivity and geometrical structure of the passive dendrite, leading in turn to the construction of the equivalent cable representation of that dendrite, conditioned on the discretisation of the original dendrite. The following example illustrates the numerical deterioration as a direct result of the formulation of boundary conditions at external nodes.


Comparison of Gradient Boundary Conditions

To appreciate the difference between the algebraic and analytical implementation of boundary conditions involving gradients, it is sufficient to consider the finite difference solution of an initial boundary value problem for the one dimensional diffusion equation with a gradient boundary condition. All the essential features of a full-blown solution of a dendritic tree are embodied, in microcosm, in this simple problem. To be specific, suppose that v(x, t) is the solution of the initial boundary value problem

∂v/∂t = ∂²v/∂x² + g(t, x) ,   (t, x) ∈ (0, ∞) × (0, π/2)   (8.81)

with initial condition v(x, 0) = v0(x) and boundary conditions

v(0, t) = 0 ,   ∂v(π/2, t)/∂x = 0 .

Let [0, π/2] be subdivided by (n + 1) uniformly spaced nodes x0, x1, . . . , xn where xk = kh and h = π/(2n), and suppose that vk(t) = v(xk, t) and gk(t) = g(xk, t). Clearly v0(t) = 0 for all time since this is just the boundary condition on x = 0. At the internal node xk, 0 < k < n, the second spatial derivative appearing in equation (8.81) may be written

∂²v/∂x² |x=xk = (vk+1 − 2vk + vk−1)/h² + O(h²)

so that the differential equation at the internal nodes is approximated to order h² by the system of (n − 1) ordinary differential equations

dvk/dt = (vk+1 − 2vk + vk−1)/h² + gk ,   vk(0) given ,   0 < k < n .   (8.82)

The specification of the problem is completed by implementing the boundary condition at x = π/2. There are two quite different ways to do this.

Algebraic treatment of gradient boundary condition One way to enforce the zero gradient condition at x = xn = π/2 is to recognise from (8.55) that

∂v/∂x |x=xn = (3vn + vn−2 − 4vn−1)/2h + O(h²) = 0 .

Hence the zero gradient boundary condition is satisfied to O(h²) by requiring that

vn = (4vn−1 − vn−2)/3 .

The disadvantage of this approach is that it apparently destroys the tri-diagonal structure of the finite difference scheme when applied to branch points in a dendritic tree. In fact, this is untrue; it is just that an extra step (involving the solution of a system of linear equations) is required to calculate the membrane potentials at all dendritic branch points and terminals.

Analytical treatment of gradient boundary condition
The analytical treatment of the gradient boundary condition at xn = π/2 is based on a fictitious node x_{n+1} = x_n + h exterior to the region of solution. As already described in this section, the fictitious potential v_{n+1} at this node satisfies

$$\frac{dv_n}{dt} = \frac{v_{n+1} - 2v_n + v_{n-1}}{h^2} + g_n, \qquad \left.\frac{\partial v}{\partial x}\right|_{x = x_n} = \frac{v_{n+1} - v_{n-1}}{2h} + O(h^2).$$


When v_{n+1} is eliminated between these two equations and the gradient of v is set to zero at x = π/2, it follows easily that v_n satisfies the ordinary differential equation

$$\frac{dv_n}{dt} = \frac{2(v_{n-1} - v_n)}{h^2} + g_n,$$

the equation being accurate to O(h). The initial condition v_n(0) is calculated from the initial data. The crucial point here is that the tri-diagonal form of the equations is preserved, but at the cost of a boundary condition whose accuracy is an order of magnitude less than that at internal nodes. In practice, this inaccurate description of the boundary condition infects the whole numerical scheme.

Illustration
These theoretical considerations are all very well, but there is no substitute for a concrete application to appreciate the numerical properties of the two different implementations of the gradient condition.

It is demonstrated easily that the exact solution of equation (8.81) for initial condition v0(x) = sin x and input g(t, x) = 2bte^{αt} sin x is

$$v(t, x) = \begin{cases} (1 + bt^2)e^{\alpha t}\sin x, & \alpha = -1, \\[4pt] e^{-t}\sin x + \dfrac{2b\sin x}{(1+\alpha)^2}\,e^{-t}\left((1+\alpha)te^{(1+\alpha)t} + 1 - e^{(1+\alpha)t}\right), & \alpha \neq -1. \end{cases}$$

The exact solutions at t = 4, α = −1 and t = 4, α = 1 are used to check the numerical accuracy of the approximate solutions derived using the finite difference algorithm based on the central difference formulae (8.53). The two competing schemes are integrated to high temporal accuracy so that the resulting errors are due purely to spatial truncation of the second derivative, while the variation between the two sets of errors (algebraic and analytic) is due entirely to the different implementations of the boundary conditions; otherwise both systems of equations are identical.

Table 8.1. Comparison of errors in the finite difference solution of u_t = u_xx + g for algebraic and analytical forms of the gradient boundary condition when α = −1.

    x value     Errors for 10 nodes (h ≈ 0.1571)    Errors for 20 nodes (h ≈ 0.0785)
    at t = 4    Algebraic     Analytical            Algebraic     Analytical
                bdry cond     bdry cond             bdry cond     bdry cond
    0.000       0.00000       0.00000               0.00000       0.00000
    0.157       0.00065       0.00128               0.00024       0.00032
    0.314       0.00126       0.00253               0.00047       0.00063
    0.471       0.00182       0.00372               0.00069       0.00093
    0.628       0.00231       0.00482               0.00089       0.00120
    0.785       0.00268       0.00579               0.00105       0.00145
    0.942       0.00293       0.00663               0.00119       0.00166
    1.100       0.00304       0.00730               0.00129       0.00182
    1.257       0.00300       0.00779               0.00135       0.00195
    1.414       0.00280       0.00809               0.00136       0.00202
    1.571       0.00243       0.00820               0.00132       0.00205

By way of contrast, the same calculations are done for α = 1. In this case, the driving force g(t, x) grows exponentially in time.


Table 8.2. Comparison of errors in the finite difference solution of u_t = u_xx + g for algebraic and analytical forms of the gradient boundary condition when α = 1.

    x value     Errors for 10 nodes (h ≈ 0.1571)    Errors for 20 nodes (h ≈ 0.0785)
    at t = 4    Algebraic     Analytical            Algebraic     Analytical
                bdry cond     bdry cond             bdry cond     bdry cond
    0.000       0.00000       0.00000               0.00000       0.00000
    0.157       0.17010       0.26361               0.05429       0.06590
    0.314       0.33059       0.52072               0.10656       0.13018
    0.471       0.47190       0.76502               0.15484       0.19125
    0.628       0.58459       0.99047               0.19719       0.24761
    0.785       0.65939       1.19155               0.23177       0.29788
    0.942       0.68720       1.36327               0.25681       0.34081
    1.100       0.65908       1.50144               0.27068       0.37535
    1.257       0.56626       1.60262               0.27185       0.40065
    1.414       0.39998       1.66436               0.25891       0.41608
    1.571       0.15144       1.68509               0.23060       0.42127

In all circumstances, the algebraic form of the boundary condition is superior to the analytical form, although this superiority diminishes as spatial resolution improves. Recall that the two algorithms are identical except in their treatment of the gradient boundary condition. It is clear that even for fine spatial resolution, the reduced accuracy of the analytical treatment rapidly infects the entire solution. Of course, the largest errors are associated with estimates of the slowest time constants of the cable equation u_t + u = u_xx + g. Note that this equation is formally identical to v_t = v_xx + G in which v = e^t u and G = e^t g.

Generating Independent and Correlated Stochastic Spike Trains

To investigate the behavior of model neurons under conditions that approach those of real neurons, it is necessary to generate large numbers of spike train inputs in which the statistical characteristics of the individual spike trains can be specified and the correlation between the spike trains set at any desired strength. Large numbers of inputs are required to match the features of real neurons which receive, for example, in the case of a cortical pyramidal cell, as many as 10^4–10^5 synaptic inputs (Bernander, Koch and Usher, 1994). Large scale synaptic background activity is known to influence the spatio-temporal integration of the effects of synaptic inputs within individual neurons (Bernander, Douglas, Martin and Koch, 1991; Bernander et al., 1994; Murthy and Fetz, 1994; Rapp, Yarom and Segev, 1992). The ability to generate spike trains with known statistical properties provides a necessary tool for investigating different hypotheses on how local signal processing may occur within individual neurons. The structure of the correlation between spike trains is also an important factor in determining how features of particular signals may be extracted by individual neurons or by populations of neurons (Halliday, 1998a).

Exponentially Distributed Inter-spike Intervals

Real neuronal spike trains are frequently characterised by the distribution of their inter-spike intervals, which may be correlated and may also vary widely (i.e. not necessarily periodic). However, many naturally occurring spike trains are realistically modelled by a


point process in which the successive intervals are totally random, i.e. a Poisson process (Holden, 1976). The interval lengths of a Poisson process are characterised by, and can be generated from, an exponential distribution. To appreciate this result, let N(u, v) denote the number of spikes in (u, v]; then the Poisson process with mean inter-spike interval M (and therefore mean rate 1/M spikes per unit time) and history H_t satisfies

$$\mathrm{Prob}\,(N(t, t+\tau) = 1 \mid H_t) = \tau/M + o(\tau), \qquad \mathrm{Prob}\,(N(t, t+\tau) > 1 \mid H_t) = o(\tau) \qquad (8.83)$$

for positive τ and all times t. Let F(t) = Prob(T ≤ t) where T is the elapsed time since the last spike; then

$$F(t+\tau) = F(t) + \left(1 - F(t)\right)\frac{\tau}{M} + o(\tau), \qquad F(0) = 0.$$

Consequently,

$$\frac{F(t+\tau) - F(t)}{\tau} = \frac{1 - F(t)}{M} + \frac{o(\tau)}{\tau}$$

and by taking the limit of this equation as τ → 0+, it follows that F satisfies the differential equation F′ = (1 − F)/M with initial condition F(0) = 0. It can be shown easily that F(t) and its related probability density function f(t) are given by

$$F(t) = 1 - e^{-t/M}, \qquad f(t) = \frac{dF}{dt} = \frac{1}{M}e^{-t/M}. \qquad (8.84)$$

Furthermore, realisations of T can be obtained from T = T(F), the inverse mapping of F = F(t), by treating F (or equivalently, 1 − F) as a uniformly distributed random variable in (0, 1). Thus for any choice of M, deviates T are constructed from uniform deviates U by the formula

$$T = -M\log(1 - F) = -M\log U, \qquad U \in U(0, 1). \qquad (8.85)$$

Correlated point processes may be described by an analysis similar to that given for the Poisson process by providing an appropriate form for equations (8.83). If spike times occur at instantaneous rate 1/M(s) at elapsed time s after the occurrence of the previous spike, then the related point process has cumulative distribution function F and probability density f where

$$F(t) = 1 - \exp\left(-\int_0^t M^{-1}(s)\,ds\right), \qquad f(t) = M^{-1}(t)\exp\left(-\int_0^t M^{-1}(s)\,ds\right).$$

Moreover, M(s) itself can have a stochastic origin, in which case the entire process is said to be doubly stochastic (see Cox and Isham, 1980).

Normally Distributed Inter-spike Intervals

Weakly periodic spike trains in which the inter-spike intervals tend to be clustered about some mean value are often modelled by the normal distribution. Normally distributed inter-spike intervals are characterised by their mean and variance, by contrast with inter-spike intervals generated as Poisson deviates, the latter being completely characterised by the single parameter M, their mean inter-spike interval.

Normal deviates are usually generated using a variant of the Box-Muller (Box and Muller, 1958) algorithm commonly called Polar-Marsaglia (Marsaglia and Bray, 1964). The algorithm consists of two stages.


Stage I
If u1 and u2 are two independent uniform random deviates in (0, 1), then

$$\nu_1 = 2u_1 - 1, \qquad \nu_2 = 2u_2 - 1 \qquad (8.86)$$

are two independent uniformly distributed deviates in (−1, 1). The pair (ν1, ν2) is accepted as seed for the second stage of the algorithm provided w = ν1² + ν2² < 1; otherwise it is rejected and a new pair constructed by two further drawings from the uniform random number generator U(0, 1). Clearly the pair (ν1, ν2) is accepted with probability π/4 (≈ 78.5%). However, this apparently inefficient use of the uniform random number generator is generously rewarded in the second stage of the algorithm, in that no computationally expensive trigonometric calculations are needed, by contrast with the Box-Muller method which requires the computation of sine and cosine functions.

Stage II
Given a pair (ν1, ν2) of random deviates such that 0 < w = ν1² + ν2² < 1, then

$$x_1 = \mu + \sigma\nu_1\sqrt{-2\log(w)/w}, \qquad x_2 = \mu + \sigma\nu_2\sqrt{-2\log(w)/w} \qquad (8.87)$$

are a pair of independent normal deviates with mean µ and standard deviation σ. Code for this algorithm is contained in the example of Appendix I.

To appreciate this result, it is enough to recognise that the Jacobian of the mapping (ν1, ν2) → (x1, x2) defined by equations (8.87) satisfies

$$\left|\frac{\partial(\nu_1, \nu_2)}{\partial(x_1, x_2)}\right| = \left(\frac{1}{\sigma\sqrt{2}}\exp\left[-\frac{(x_1-\mu)^2}{2\sigma^2}\right]\right)\left(\frac{1}{\sigma\sqrt{2}}\exp\left[-\frac{(x_2-\mu)^2}{2\sigma^2}\right]\right). \qquad (8.88)$$

Since (ν1, ν2) is uniformly distributed in the unit circle, it has joint probability density function 1/π and therefore (x1, x2) has joint probability density function f(x1, x2) where

$$f(x_1, x_2) = \left(\frac{1}{\sigma\sqrt{2\pi}}\exp\left[-\frac{(x_1-\mu)^2}{2\sigma^2}\right]\right)\left(\frac{1}{\sigma\sqrt{2\pi}}\exp\left[-\frac{(x_2-\mu)^2}{2\sigma^2}\right]\right). \qquad (8.89)$$

Since f(x1, x2) is separable in (x1, x2) and has sample space R², x1 and x2 are two independent normally distributed deviates with mean µ and standard deviation σ.

Uniform Deviates

The construction of exponential and normal deviates relies fundamentally on the ability to generate uniform random numbers drawn from (0, 1). The vast majority of such generators rely on congruence relations of the form

$$X_{n+1} = aX_n + b \pmod{c}$$

where a, c are suitably chosen positive integers and b is a non-negative integer. The generator is "seeded" by a choice of the starting integer X0 (the seed), but thereafter the algorithm generates a series of integers ranging from 0 to c − 1. The rational number Un = Xn/c is then a good approximation of a uniform deviate in [0, 1).

Modern computers have a voracious appetite for uniform deviates as simulations become ever more ambitious. Primitive random number generators such as rand() in the C programming language are useful for limited simulations only. For serious simulation work, professional uniform random number generators are best, but it is often convenient to have access to a portable but high quality uniform random number generator. The generator advocated by Wichmann and Hill (1982) is implemented by the pseudo-code:

– choose three integers i, j and k, for example, by using a primitive random number generator such as srand and rand;


– recompute i, j and k according to the formulae

i = mod (171i, 30269) j = mod (172 j, 30307) k = mod (170k, 30323) .

– now compute the rational number

z = i/30269 + j/30307 + k/30323

then mod (z, 1.0) may be regarded as a pseudo-random number in (0, 1).
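A direct transcription of this pseudo-code into C might read as follows; the initial seeds are illustrative (in practice they would be chosen with a primitive generator as described above).

```c
/* Sketch of the Wichmann and Hill (1982) uniform generator. Each of the
   three congruential streams stays within int range (171 * 30268 < 2^31). */
static int wh_i = 7, wh_j = 23, wh_k = 5;

/* reseed the three streams (each seed should be positive and in range) */
void wh_seed(int i, int j, int k) { wh_i = i; wh_j = j; wh_k = k; }

/* one uniform deviate in (0,1) */
double wh_uniform(void)
{
    wh_i = (171 * wh_i) % 30269;
    wh_j = (172 * wh_j) % 30307;
    wh_k = (170 * wh_k) % 30323;
    double z = wh_i / 30269.0 + wh_j / 30307.0 + wh_k / 30323.0;
    return z - (long)z;               /* mod(z, 1.0): fractional part */
}
```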

This surprisingly simple algorithm (coded in the example in Appendix I) has a portfolio of approximately 2.7 × 10^13 deviates and relies for its effectiveness on the fact that 30269, 30307 and 30323 are three large prime numbers. Recent research on random number generators has focussed on "lagged-Fibonacci" generators defined by the recurrence sequence

$$X_n = X_{n-r}\ (\text{binary operation})\ X_{n-s}$$

where 0 < s < r are the lagging parameters (integers) with n ≥ r, and the binary operation referred to is addition/subtraction (mod c) (see Kloeden, Platen and Schurz, 1994; Knuth, 1997). Such recursive sequences can generate very long periods. For example, with r = 17, s = 5 and the operation of addition (mod 2^31), the period of the lagged-Fibonacci generator is (2^17 − 1)2^31 (approximately 2.8 × 10^14), provided at least one of the starting seeds (17 in total) is odd, say X0 = 1.
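A sketch of such a generator, using a circular buffer of the r = 17 most recent values, might read as follows. The seeding scheme is an assumption beyond the text's requirement that at least one of the 17 starting values be odd.

```c
/* Sketch of an additive lagged-Fibonacci generator with r = 17, s = 5:
   X_n = X_{n-17} + X_{n-5} (mod 2^31). */
#include <stdint.h>

static uint32_t lf_x[17];
static int lf_pos;

/* illustrative seeding; the text only requires at least one odd seed */
void lf_seed(void)
{
    for (int i = 0; i < 17; i++)
        lf_x[i] = (69069u * (uint32_t)(i + 1) + 1u) & 0x7FFFFFFFu;
    lf_x[0] |= 1u;                    /* guarantee an odd seed */
    lf_pos = 0;
}

uint32_t lf_next(void)
{
    /* lf_x[lf_pos] holds X_{n-17}; X_{n-5} sits 12 places further on */
    uint32_t x = (lf_x[lf_pos] + lf_x[(lf_pos + 12) % 17]) & 0x7FFFFFFFu;
    lf_x[lf_pos] = x;                 /* overwrite the oldest value */
    lf_pos = (lf_pos + 1) % 17;
    return x;
}

/* scale to a uniform deviate strictly inside (0,1) */
double lf_uniform(void) { return (lf_next() + 0.5) / 2147483648.0; }
```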

Correlated Spike Trains

The algorithms described so far in this section have concentrated on the generation of single spike trains of known statistical characteristics, although clearly they can be extended to the generation of arbitrary numbers of independent spike trains. However, to understand the effects of correlated spike trains on the behavior of neurons, it is necessary to be able to generate correlated spike trains whose strength of correlation can be controlled. Halliday (1998a) introduces and describes a novel procedure for generating correlated spike trains using integrate-to-threshold-and-fire type encoders.

The design of these encoders is based on a leaky integrator circuit with an incorporated threshold. Spike times are taken as the times of threshold crossing. In its general form, the integrate-to-threshold-and-fire encoder can be expressed in the differential form

$$\tau\,d\nu = G(n + y)\,dt - \nu\,dt \qquad (8.90)$$

where ν is the output of the encoder¹², n is a noise process, G (fixed at unity) is the gain of the encoder, τ (2.5 × 10^−2 sec) is its time constant and

$$y(t) = A\sum_i \left(H(t - t_i) - H(t - t_i - a)\right). \qquad (8.91)$$

The function y(t), referred to as the correlating signal, is expressed in terms of the Heaviside function H and describes a train of pulses of amplitude A and duration a initiated at times t_i, which may be generated deterministically or stochastically. Encoders of this type form the basic building block for generating temporally correlated spike trains. For a given noise process, the strength of correlation is governed largely by the product Aa. In practice, when generating large numbers of inputs to a dendritic tree, some of which are correlated, many encoders are required. A subset of correlated inputs is generated by feeding a selected group of these encoders with the common signal y.

¹²Although not explicit, encoders based on leaky integrators incorporate exponentially decaying memory. Other neurophysiological applications may require different memory functions for the encoder. For example, encoders based on a Weibull density βt^{β−1} lead to super-exponential decay typified by exp(−t^β).


There are two cases to consider depending on the nature of the noise input to each encoder. A pseudo-white noise input stochastically fixes the level of the noise but maintains that level for a fixed interval of time. Thus pseudo-white noise is a piecewise differentiable function. By contrast, white noise is nowhere differentiable. For a given choice of mean and variance, the behavior of encoders is markedly different for the two types of input.

Pseudo-white noise input
A pseudo-white noise process is one in which Gaussian noise is generated at discrete points in time and held uniform over the time interval between these points. White noise produced in this way essentially behaves like a function of bounded variation. For pseudo-white noise, equation (8.90) may be represented meaningfully in the more familiar form

$$\tau\frac{d\nu}{dt} = G(n + y) - \nu = Gx - \nu, \qquad \nu(0) = 0, \qquad (8.92)$$

and can be integrated numerically using the conventional backward Euler algorithm

$$\nu_{k+1} = \frac{\tau\nu_k + Gx_{k+1}h}{\tau + h}, \qquad x_k = n_k + y_k. \qquad (8.93)$$

In this scheme, nk is the noise, yk is the correlating signal and νk is the solution of (8.92) at time tk. The projected solution at the next time step tk+1 = tk + h is νk+1, where h (10^−3 sec, Halliday, 1998a) is the duration of the time step. After each time step, the value of νk+1 at time tk+1 is compared with the constant threshold νth (set at unity). If νk+1 exceeds the threshold, an output spike is generated at time

$$t_k + h\,\frac{\nu_{th} - \nu_k}{\nu_{k+1} - \nu_k} \qquad (8.94)$$

and the encoder value is reset to νk+1 − νth at time tk+1. Of course, this new initial value depends on h, the step length, but since the purpose of the encoder is to fire spikes, subtleties of this nature are ignored since they have a negligible effect on the statistics of these spike trains. Fundamentally, this encoder presupposes that ν is a differentiable function of t except possibly at the points t1, t2, . . .. In this sense, the operating characteristics of the encoder are inextricably linked to the choice of h used in the numerical integration of the algorithm. The efficacy of this class of encoder lies in its ease of implementation and the fact that it generates spike trains with good numerical efficiency and realistic neurobiological properties.
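The complete cycle of backward Euler update (8.93), threshold test, interpolated spike time (8.94) and reset can be sketched as follows. The drive is supplied as a precomputed array standing in for x_{k+1} = n_{k+1} + y_{k+1}; all names are illustrative.

```c
/* Sketch of the integrate-to-threshold-and-fire encoder with G and the
   threshold fixed at unity, as in the text. */

/* run the encoder for nsteps steps of length h with per-step drive x[];
   returns the number of spikes, writing their times into spike_time[] */
int encode(double tau, double h, int nsteps, const double x[],
           double spike_time[], int max_spikes)
{
    const double G = 1.0, vth = 1.0;  /* gain and threshold fixed at unity */
    double v = 0.0;                   /* nu(0) = 0 */
    int count = 0;
    for (int k = 0; k < nsteps; k++) {
        double vn = (tau * v + G * x[k] * h) / (tau + h);   /* (8.93) */
        if (vn > vth && count < max_spikes) {
            /* linear interpolation of the crossing time, cf. (8.94) */
            spike_time[count++] = k * h + h * (vth - v) / (vn - v);
            vn -= vth;                                      /* reset */
        }
        v = vn;
    }
    return count;
}
```

Driven by a constant suprathreshold input, the encoder fires periodically, which makes its behaviour easy to check by hand.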

In summary, correlated dendritic inputs may be created from a family of identical encoders whose individual members supply input at a selected location (synapse) on the model dendrite. Each encoder receives two types of input: one is a pseudo-white noise process that is independent of all the other encoders, and the second is a correlating signal common to some encoders, but not necessarily all encoders. In the absence of the correlating signal, each encoder will generate a spike train whose characteristics are determined by the properties of the pseudo-white noise input. Table 8.3, taken from Halliday (1998a), gives some guidance as to the selection of parameters for the pseudo-white noise component of encoder input and the frequency and coefficient of variation of the corresponding spike train output.

By means of a framework of common inputs to an ensemble of encoders, it is possible to produce temporally correlated outputs from these encoders. These outputs can be used to provide correlated inputs at selected synapses on a dendrite. Encoders that are fed by that signal and are close to threshold can be selectively triggered by adjusting the strength of the correlating signal. Figure 6 illustrates a family of n encoders fed with the common input shown as a sequence of pulses. Each pulse increments the independent inputs, causing some encoders to fire synchronously. By adjusting the properties of the


Table 8.3. An example of some choices for the parameters of the pseudo-white noise input and the corresponding properties of the output spike train with encoder gain G = 1, encoder threshold νth = 1 and encoder time constant τ = 0.025.

    Pseudo-white    Mean         1.020    1.105    1.269    0.892
    noise input     Std. Dev.    0.065    0.150    0.307    6.200
    Output spike    Spikes/sec   10       10       25       32
    train           CoV          0.1      0.2      0.1      1.0

Fig. 6. Diagrammatic representation of n encoders, governed by τ dν = G(n_i + y) dt − ν dt for i = 1, . . . , n, each receiving noise that is uncorrelated with itself and with that of the other encoders. In this example, each encoder also receives a sequence of pulses common to all encoders.

common and independent components comprising the total input to each encoder, and the number of encoders that are to receive common inputs of a prescribed class, correlated spike trains with a desired correlation structure can be generated. Further details on choices of parameters are given in Halliday (1998a).

White noise input
To appreciate the difference between pseudo-white noise and white noise, it is convenient to recognise that equation (8.90) has symbolic solution

$$\nu(t) = \nu(0)e^{-t/\tau} + \frac{G}{\tau}\int_0^t n(s)e^{-(t-s)/\tau}\,ds + \frac{G}{\tau}\int_0^t y(s)e^{-(t-s)/\tau}\,ds. \qquad (8.95)$$

When n and y are functions of bounded variation, the integrals in (8.95) may be interpreted in the sense of Riemann integration. Specifically, the value of the integral is independent of the limiting procedure. On the other hand, when n is white noise, i.e. is not of bounded variation, the value of the first integral depends critically on the limiting procedure. For example, the limiting procedure based on the midpoint of intervals defines the Stratonovich integral, while that based on the left hand endpoint of intervals defines the Ito integral (see Kloeden and Platen, 1995). The latter is often preferred simply because it is not predictive. When n is white noise, ν is continuous everywhere and differentiable nowhere. Consequently, equation (8.92) makes no sense and the backward Euler scheme (8.93) is invalid because it implicitly assumes that ν is a differentiable function.


Suppose that the encoder is driven by white noise of mean a and standard deviation b; then n dt = a dt + b dW where dW is the differential of the Wiener process W(t), defined as a continuous standard Gaussian process with independent increments such that

$$W(0) = 0 \text{ with probability one,} \qquad E[W(t)] = 0, \qquad Var[W(t) - W(s)] = t - s, \quad 0 \le s \le t,$$

where E[X] and Var[X] denote respectively the expected value and variance of the random variable X. Thus ν(t), the state of the encoder, satisfies the stochastic differential equation

$$\tau\,d\nu = G(a\,dt + b\,dW + y\,dt) - \nu\,dt \qquad (8.96)$$

and may be determined by numerical integration of (8.96) using the stochastic backward Euler algorithm

$$\nu_{k+1} = \frac{\tau\nu_k + Gh(a + y_{k+1}) + bG\,dW_k}{\tau + h}. \qquad (8.97)$$

Table 8.4 provides a comparison¹³ of the output spike rate and coefficient of variation when the input to the encoder is a white noise process, as opposed to the pseudo-white noise process of Table 8.3. The specification of the noise is now independent of the step size h. Consequently the characteristics of the output spike train are fashioned by the mean and variance of the white noise input, by contrast with the three parameters required for pseudo-white noise.

Table 8.4. An example of some choices for the parameters of the white noise input and the corresponding properties of the output spike train when encoder gain G = 1, encoder threshold νth = 1 and encoder time constant τ = 0.025. This table, when compared to Table 8.3, demonstrates the marked difference between the effects of pseudo-white and white noise of given mean and variance.

    White           Mean         1.020    1.105    1.269    0.892
    noise input     Std. Dev.    0.065    0.150    0.307    6.200
    Output spike    Spikes/sec   12       33       62       790
    train           CoV          0.6      0.8      1.2      5.4

In any event, the solution of (8.96) is a continuous random variable in time. Without evaluating ν(t), some of its important statistical characteristics can be extracted.

Features of the Encoder Response to White Noise Input

It has already been observed that the stochastic equation τ dν = Gx(t) dt − ν dt has symbolic solution (see 8.95)

$$\nu(t) = \nu(0)e^{-t/\tau} + \frac{G}{\tau}\int_0^t x(s)e^{-(t-s)/\tau}\,ds \qquad (8.98)$$

where x = n + y. Suppose that ν(0), the initial value of ν, and x(s), the random input at time s, are uncorrelated random variables for all times s. By taking the expected value of equation (8.98), it follows immediately that E[ν(t)], E[ν(0)] and E[x(s)] satisfy

$$E[\nu(t)] = E[\nu(0)]e^{-t/\tau} + \frac{G}{\tau}\int_0^t E[x(s)]e^{-(t-s)/\tau}\,ds. \qquad (8.99)$$

¹³Appendix I gives a C program to generate pseudo-white noise and white noise driven spike trains. Incorporated in the program is code to generate uniform and normal deviates.


The linearity of expression (8.98) now ensures that

$$\nu(t) - E[\nu(t)] = \left(\nu(0) - E[\nu(0)]\right)e^{-t/\tau} + \frac{G}{\tau}\int_0^t \left(x(s) - E[x(s)]\right)e^{-(t-s)/\tau}\,ds. \qquad (8.100)$$

The variance of ν therefore satisfies

$$\begin{aligned}
Var[\nu(t)] = {} & Var[\nu(0)]e^{-2t/\tau} \\
& + \frac{2G}{\tau}\int_0^t E\!\left[\left(x(s) - E[x(s)]\right)\left(\nu(0) - E[\nu(0)]\right)\right]e^{-(2t-s)/\tau}\,ds \\
& + \frac{G^2}{\tau^2}\int_0^t\!\!\int_0^t E\!\left[\left(x(s) - E[x(s)]\right)\left(x(u) - E[x(u)]\right)\right]e^{-(2t-s-u)/\tau}\,ds\,du.
\end{aligned} \qquad (8.101)$$

Since ν(0) and x(s) are uncorrelated,

$$E\!\left[\left(x(s) - E[x(s)]\right)\left(\nu(0) - E[\nu(0)]\right)\right] = 0, \qquad \forall s > 0. \qquad (8.102)$$

If the covariance between x(s) and x(u) is denoted by Cov[x(s), x(u)] and defined by

$$Cov[x(s), x(u)] = E\!\left[\left(x(s) - E[x(s)]\right)\left(x(u) - E[x(u)]\right)\right],$$

then in view of result (8.102), equation (8.101) simplifies to

$$Var[\nu(t)] = Var[\nu(0)]e^{-2t/\tau} + \frac{G^2}{\tau^2}\int_0^t\!\!\int_0^t Cov[x(s), x(u)]\,e^{-(2t-s-u)/\tau}\,ds\,du. \qquad (8.103)$$

In practice, x(s) is the sum of a white noise input of mean a (constant) and standard deviation b (constant) and the correlating signal y(s). Hence

$$E[x(s)] = a + E[y(s)], \qquad Cov[x(s), x(u)] = b^2\delta(s-u) + Cov[y(s), y(u)]. \qquad (8.104)$$

Thus equations (8.99) and (8.103) yield expressions for the mean and variance of the random variable ν at any time t. Therefore, given sufficient information to specify E[x(s)] and Cov[x(s), x(u)] in equation (8.104), the first two moments of the encoder output at any time prior to firing are determined. Since Gaussian deviates are completely specified by their mean and variance, one perception of ν(t) is that it is approximately a normal deviate whose mean and variance at time t are determined by the solution of equations (8.99) and (8.103). For example, in the absence of the correlating signal, the mean and variance of ν(t) are respectively

$$E[\nu(t)] = Ga\left(1 - e^{-t/\tau}\right) + E[\nu(0)]e^{-t/\tau},$$
$$Var[\nu(t)] = \frac{G^2 b^2}{2\tau}\left(1 - e^{-2t/\tau}\right) + Var[\nu(0)]e^{-2t/\tau}. \qquad (8.105)$$

Examples

The following examples illustrate (1) the procedure used to estimate the strength of correlation of a sample of weakly correlated spike trains, and (2) the powerful effect that a weakly correlated small percentage of the total synaptic input to a model neuron has on the timing of output spikes from the neuron.

Example 1
The sample of weakly correlated spike trains used in the first example was generated by the procedure introduced in Halliday (1998a), described above, and illustrated in Fig. 6. The independent noise inputs to 100 encoders were adjusted so that each encoder generated a


spike train with a mean rate centred on 2 Hz. The correlating signal, y(t), common to the encoders consisted of a pulse sequence with a mean rate centred on 25 Hz. Since the dominant frequency component of the common input to the sample of encoders is centred on 25 Hz, on theoretical grounds one would expect that the coherence between any pair of spike trains generated by the encoders would have a peak centred about the frequency of this common input (see Rosenberg, Halliday, Breeze and Conway, 1998). A sample of 100 spike trains with the characteristics determined by the combined noise and correlating inputs to the encoders, each of 100 s duration, was generated.

Fig. 7a shows the estimated coherence between an arbitrarily selected pair of spike trains generated by the encoders. Clearly a sample of duration 100 s is not sufficient to detect any correlation between these signals. However, the pooled coherence (refer to Chapter 8, and Amjad, Halliday, Rosenberg and Conway, 1997), shown in Fig. 7b, estimated from twenty pairs of encoder outputs gives small but significant values centred on the known dominant frequency of the correlating signal. The absence of a detectable coherence from individual pairs of spike trains (Fig. 7a), coupled with the small peak value of the pooled coherence (0.009 in Fig. 7b), indicates that the strength of correlation between the encoder generated spike trains is extremely weak. Details of the choice of parameters for generating correlated spike trains of any desired strength of correlation and their analysis by a pooled coherence estimate are given in Halliday (1998a).

Fig. 7. (a) Estimated coherence |R(λ)|² between two arbitrarily selected encoder-generated spike trains from a sample of 100 weakly correlated processes, and (b) pooled coherence estimated from 20 pairs of spike trains from the sample of 100. The mean and standard deviation of the independent pseudo-white noise input were µ = 1.247 and σ = 0.306, whereas the correlating signal y(t) had amplitude A = 0.425 with a = 0.002 and period 40 ms. The dashed horizontal line represents the upper level of an approximate 95% confidence interval assuming that the two processes are independent. (Frequency axes span 0–70 Hz.)

Example 2
The second example demonstrates that, although the correlation between spike trains may be weak, the effect that this correlation has on the timing of spike outputs may, nevertheless, be profound. A two-cell model consisting of identical compartmental models of motoneurons that share a percentage of their synaptic input is used to illustrate the effect of weakly correlated signals on the timing of output spikes from these neurons. The coherence between the output spike trains from the two neurons is used to provide an indirect measure of the effect of correlated inputs on the timing of spike outputs. It has been demonstrated both theoretically (Rosenberg et al., 1998) and in practice (Farmer, Bremner,


Halliday, Rosenberg and Stephens, 1993) that the coherence between two spike trains from neurons known to receive common inputs reflects the frequency content of the common inputs. Since the two model neurons are identical, the effect of correlating the input spike trains on the timing of output spikes from one neuron will mirror that from the other. The coherence between the output spike trains can then be used to assess how changes in the correlation between common inputs will affect the timing of output spikes, through the identification of the frequency content of the common inputs.

Two cases are examined in which 5% of the total synaptic input is common to the two model neurons. In the first case the common inputs are uncorrelated, whereas in the second they are weakly correlated.

Each model neuron receives 996 inputs, distributed uniformly over the cell, giving rise to a total synaptic input of 3 872 EPSPs/sec. In the absence of common inputs, each cell will discharge at approximately 2 spikes/s. When 5% of the inputs to the cells are made common, where each common input is driven by a 25 Hz signal, the overall rate of synaptic input to each cell is adjusted to remain at 3 872 EPSPs/sec. The autospectrum of the output from one model neuron, shown in Fig. 8a, has a dominant peak at approximately 2 Hz corresponding to the mean rate of discharge of the cell.

When the common inputs are uncorrelated, the estimated coherence between the two output spike trains (Fig. 8b), based on a 100 s sample, is not significant, suggesting that an uncorrelated 5% common input may not influence the timing of output spikes from these neurons. In the second case the 25 Hz common inputs are weakly correlated at a peak value equal to that of the pooled coherence shown in Fig. 7b. The coherence between the output spike trains, shown in Fig. 8d, now has a significant peak centred about 25 Hz, the frequency of the common inputs; the mean rate of the neuron, however, remains unchanged as indicated by its autospectrum (Fig. 8c). Although the peak value of the pooled coherence for the common inputs is only 0.009, indicating weakly correlated common inputs, the coherence between the output spike trains is approximately 20 times greater. Simply by weakly correlating 5% of the synaptic input to a neuron, a significant effect on the timing of its output spikes is produced. The effects of weakly correlated synaptic inputs have been examined more fully in Halliday (1998b).

Equivalent Cable Construction

The idea of “equivalent structure” is intuitively understood but often not precisely defined. The object of this section is to give a definition of equivalence in the context of neuronal modelling that is both consistent with common usage and mathematically precise.

Recall that any model of a concrete object, in this case a dendritic tree, is an abstract object whose description is taken by us to be a description of the concrete object. Importantly, the abstract object is entirely determined by its definition, whereas the concrete object is never susceptible to an exhaustive description. One says that the abstract object is a model of the concrete object when the definition of the former is taken as a representation of the latter.

In the context of neuronal modelling, the concrete structure is a dendritic tree and soma, whereas the abstract object is simply a collection of connected cylinders. Clearly these two objects can never be truly equivalent in the dictionary sense of equivalence. Even as the precise geometry of the original dendrite is approached in the limit, the electrical properties may still not be matched between the dendrite and its model representation.

Various degrees of equivalence can be associated with the preservation of a range of features of the concrete object in the abstract object. Examples of such features are total dendritic length (as opposed to maximum soma-to-tip length), total dendritic membrane area, the values of electrical parameters, etc. Therefore equivalence is about levels of information preservation between the concrete and abstract objects, while mathematical equivalence is


8 An Introduction to the Principles of Neuronal Modelling 255


Fig. 8. (a) Estimated autospectrum, f00(λ), of a spike train generated by one of two model neurons when their common inputs are uncorrelated, while (c) is the same autospectrum for weakly correlated common inputs. (b) Estimated coherence, |R01(λ)|2, between the output spike trains of the two neurons for uncorrelated common inputs to the neurons, while (d) is the same coherence for weakly correlated common inputs. The horizontal dashed lines in (b) and (d) represent the upper level of an approximate 95% confidence interval under the assumption that the two processes are independent. The dashed and solid horizontal lines respectively in (a) and (c) represent the P/2π level, where P is the mean rate of the process, and the approximate 95% confidence interval under the assumption that the spike train was generated by a Poisson process.

about precise information preservation between respective models. For example, an arbitrarily branched multi-cylinder model can be replaced by an unbranched multi-cylinder model that is absolutely equivalent to it in the mathematical (and dictionary) sense.

A Brief History of Equivalent Models

All cable models are inspired by the success of Rall’s original equivalent cylinder (Rall, 1962a, 1962b), which gave insight into the role of passive dendrites in neuron function, and


256 K. A. Lindsay, J. M. Ogden, D. M. Halliday, J. R. Rosenberg

Fig. 9. Diagrammatic representation of some equivalent cables. (a) A simple non-degenerate Y-junction with one short limb and one long limb, and both terminals sealed. (b) A tree with two orders of branching and all terminals sealed. The equivalent cable is generated after three Y-junctions are transformed. (c) A Y-junction with one sealed and one cut terminal. The cut terminal induces the large jump in diameters indicated by dotted lines.

allowed the estimation of specific electrical parameters. The restricted tree geometries for which this result is valid have prompted several efforts to extend Rall’s result.

Rall (1962a) extended the cylinder model to include tapering tree geometry by introducing the idea of fractional orders of branching. However, the concept is clearly mathematical and divorced from physiological reality. On the other hand, Burke (1997), Clements and Redman (1989) and Fleshman, Segev and Burke (1988) have derived empirical equivalent cables which do not impose restrictions on tree geometry and have proven more successful in use. For these cables, equivalence is measured by their ability to approximate somatic voltage transients when a current pulse is injected at the soma. Analysis of such transients enables improved estimates of passive electrical parameters such as axial resistivity, membrane resistivity, and effective electrotonic length of the dendritic tree to be made (Rall et al., 1992; Burke, Fyffe and Moschovakis, 1994). These cables have also been used to dynamically reduce sections of complex trees to improve the efficiency of computer modelling of neurons (e.g., see Manor, Gonczarowski and Segev, 1991).

The two most familiar models are Rall’s Equivalent Cylinder and the Lambda Cable. Significantly, however, none of these previous models is fully equivalent (in a mathematical sense) to the original representation of the dendritic structure, simply because each fails to preserve total dendritic electrotonic length. Configurations of inputs on the original representation cannot be reconstructed from those on the Rall Equivalent Cylinder or the Lambda Cable. Different configurations of inputs on the original representation can give rise to the same configuration of inputs on these equivalent models, which therefore do not contain the information necessary for the construction of a unique configuration of inputs on the original representation that gives the same effect at the soma. Only by


preserving total dendritic electrotonic length can a unique relationship between tree and cable be established.

Overview of Equivalent Cables

Two model representations are said to be mathematically equivalent, or just equivalent, if any configuration of inputs on one structure can be associated uniquely with a configuration of inputs on the second structure and vice versa, such that the responses at the soma in both cases are identical. In mathematical terms, this association is called a mapping. The uniqueness requirement ensures that the mapping is injective, while the second requirement guarantees that the mapping is surjective. Mappings that are both injective and surjective are called bijective mappings. Under this definition, model properties describing geometrical structure, boundary conditions and electrical activity will be preserved; all characterisable phenomena on one model are reproducible exactly in any equivalent model.

Any dendritic tree model formed from multiple uniform segments, each described by the linear cable equation, and each with electrotonic length a multiple of some quantum length l, may be transformed to its equivalent cable provided (1) the membrane time constant τ = CM/gM is a universal constant over the entire tree, and (2) each terminal satisfies either a current injection or a cut end boundary condition. The soma, which is the point with respect to which the cable is generated (the origin), does not influence the equivalent cable structure and may take any boundary condition.

The equivalent cable preserves total electrotonic length and may consist of many distinct sections, only one of which (called the connected section) is attached to the soma of the original tree. The remaining sections are all disconnected sections, are not attached to the soma, and therefore define electrical activity over the tree that will not influence the soma.

The basic geometrical unit of dendritic construction is the simple Y-junction, comprising two daughter branches arising from a single parent branch.

The equivalent cable for a Y-junction contains a connected section plus at most one disconnected section. A Y-junction is classified as degenerate if its equivalent cable contains a disconnected section, otherwise it is non-degenerate. This degeneracy can be associated with repeated eigenvalues in the tree matrix representation.

Any multi-branched dendritic structure can be constructed from Y-junctions. The geometrical complexity of any dendrite can therefore be associated with the number of basic geometric units, i.e. Y-junctions, required to build the dendrite. An equivalent model can be generated by collapsing any terminal Y-junction. This new model is equivalent to the original dendrite but will have reduced geometrical complexity since one branch point has been removed. However, this reduction is achieved at the expense of a more complex input structure. Fig. 10 illustrates the reduction in geometrical complexity and the corresponding increase in the complexity of the input current structure. By continuing the process of selectively reducing terminal Y-junctions, a hierarchy of equivalent models can be generated. The process terminates with a final unbranched equivalent model or equivalent cable.

Examples of an Equivalent Cable

Fig. 9 illustrates three simple dendritic trees and their equivalent cables.

It is instructive to examine in detail the properties of the equivalent representation of a simple Y-junction with limbs of equal length (Fig. 10), partly because of its native simplicity and partly because such junctions turn out to be commonplace in the reduction of general dendritic trees to their equivalent cable.


In case (a), both terminals are cut (C) or both sealed (S). The equivalent cable then comprises a connected section with c1 = cL + cS and a disconnected section labelled c2 = c1 in the figure, with the bijective mapping of input currents

IS(x, t) = (cS/c1)[ I1(x, t) + (cL/c1) I2(l − x, t) ] ,
IL(x, t) = (cL/c1)[ I1(x, t) − (cS/c1) I2(l − x, t) ] ,
I1(x, t) = IS(x, t) + IL(x, t) ,
I2(x, t) = (c1/cS) IS(l − x, t) − (c1/cL) IL(l − x, t) .

In case (b), one terminal is cut (C) and the other sealed (S). The equivalent cable is a single connected cable with sections c1 = cL + cS and c2 = cScL/(cS + cL), with the mapping

IS(x, t) = (cS/c1)[ I1(x, t) + I2(l − x, t) ] ,
IL(x, t) = (cL/c1) I1(x, t) − (cS/c1) I2(l − x, t) ,
I1(x, t) = IS(x, t) + IL(x, t) ,
I2(x, t) = (cL/cS) IS(l − x, t) − IL(l − x, t) .

Fig. 10. Construction of an equivalent cable for simple uniform Y-junctions with limbs of equal length. The notation “C/S” refers to cut/sealed boundary conditions respectively. In (a) both boundary conditions are the same whereas in (b) they are different. The equations in (a) and (b) give the bijective mappings between the tree and equivalent cylinder. Arrows directed into limbs are excitatory while those directed away from limbs are inhibitory.

In all circumstances, the equivalent cable consists of an unbranched piecewise uniform cable with sections of known diameter and with total length that of the original dendrite, together with a bijective mapping between the dendritic tree and the unbranched equivalent cable. In the example of Fig. 10a,b, the equivalent cable consists of two sections, one of which is connected to the parent dendrite while the other may either be connected to this section or be entirely disconnected from it, depending on the nature of the terminal boundary conditions.

Let VS(x, t) and VL(x, t) be the membrane potentials in the respective limbs of the Y-junction illustrated in Fig. 10, and let IS(x, t) and IL(x, t) be the corresponding current inputs. Continuity of membrane potential and conservation of current at the junction demand respectively that VS(x, t) and VL(x, t) satisfy the boundary conditions

VS(0, t) = VL(0, t) = VP(L, t) ,   (continuity of potential)

cS ∂VS(0, t)/∂x + cL ∂VL(0, t)/∂x = cP ∂VP(L, t)/∂x ,   (conservation of current)

(8.106)

where VP(x, t) is the potential in the parent branch and L is its length. By an analysis similar to that in the section on the Rall equivalent cylinder (p. 224), it can be demonstrated that the potentials

ψ1(x, t) = [cS VS(x, t) + cL VL(x, t)]/(cS + cL) ,

ψ2(x, t) = VS(x, t) − VL(x, t) ,

0 < x < l ,  t > 0 ,   (8.107)
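The change of variables (8.107) is bijective: solving for the limb potentials gives VS = ψ1 + (cL/c1)ψ2 and VL = ψ1 − (cS/c1)ψ2 with c1 = cS + cL. A quick numerical sanity check of this inversion, with arbitrary illustrative c-values and potentials:

```python
import numpy as np

# Check that the map (VS, VL) -> (psi1, psi2) of equation (8.107) inverts as
# VS = psi1 + (cL/c1)*psi2, VL = psi1 - (cS/c1)*psi2, c1 = cS + cL.
# The c-values and sample potentials are arbitrary illustrative numbers.
rng = np.random.default_rng(1)
cS, cL = 2.0, 3.0
c1 = cS + cL

VS, VL = rng.standard_normal(50), rng.standard_normal(50)  # sample potentials
psi1 = (cS * VS + cL * VL) / c1
psi2 = VS - VL

VS_back = psi1 + (cL / c1) * psi2   # recover the limb potentials
VL_back = psi1 - (cS / c1) * psi2
```

That the original potentials are recovered exactly is what makes the equivalent cable fully equivalent: no information about the limb potentials is lost.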


are cable solutions. The continuity of potential in boundary condition (8.106a) implies that ψ1(0, t) = VP(L, t), whereas conservation of current in (8.106b) yields

(cS + cL) ∂ψ1(0, t)/∂x = cS ∂VS(0, t)/∂x + cL ∂VL(0, t)/∂x = cP ∂VP(L, t)/∂x .

Hence ψ1(x, t) is the potential in a cable that connects to the parent branch in the sense that it preserves both continuity of membrane potential and conservation of current, provided this cable is chosen to have c1 = cS + cL. Of course, this is just the familiar Rall result in another guise. Thus ψ1(x, t), 0 ≤ x ≤ l, is the potential of Rall’s equivalent cylinder for this Y-junction. Moreover, it should be observed that ψ2(x, t) is also a solution of a cable equation satisfying the cut boundary condition ψ2(0, t) = 0, in view of the continuity of VS and VL at x = 0. The previous remarks are completely general for this dendrite but still insufficient to specify the equivalent cable. To obtain the fully equivalent cable, it is necessary to incorporate the terminal boundary conditions into the construction process. Two possibilities exist here: either both tips of the Y-junction satisfy different terminal boundary conditions as in Fig. 10b, or they share the same terminal boundary condition as in Fig. 10a.

Whenever the dendritic terminals are both cut or both sealed, ψ1 and ψ2 are respectively both cut or both sealed at x = l. Hence ψ1 and ψ2 are two complete and independent cable solutions and therefore represent two separate cables. However, only ψ1 is connected to the parent branch because it alone satisfies continuity of potential and conservation of current with the parent branch. In this case, ψ2 is a disconnected section. Clearly the equivalent cable requires both the connected and disconnected sections to resolve potentials everywhere on the original Y-junction.

Alternatively, when the dendritic terminals satisfy different boundary conditions, ψ1 and ψ2 cannot individually terminate at x = l, but ψ1(x, t) (at x = l) can be connected to ψ2(l − x, t) (at x = 0) in a way that preserves continuity of potential and conservation of current. Consequently the equivalent cable has no disconnected section in this instance (Fig. 10b) and terminates on a cut end.

This analytical construction serves to illustrate all the properties of equivalent cables and the associated mapping. In fact, the entire process can be performed numerically using a matrix representation of the dendritic structure. The following section provides a methodology for constructing equivalent cables using this representation while at the same time providing a framework for the numerical integration of the cable equations expressed within a finite difference scheme.

Equivalent Cable Construction

The construction of equivalent cables relies on the fact that tree matrices corresponding to dendrites with combinations of cut and sealed terminals can be transformed into tri-diagonal matrices that have a natural interpretation as an unbranched dendrite (or equivalent cable). The procedure consists of three stages, the first of which has already been described in the section on symmetrising the tree matrix (p. 239) and involves the construction of the symmetric tree matrix S−1AS, where S is a real diagonal matrix. The second and third stages involve respectively the generation of the symmetric cable matrix C from S−1AS and its de-symmetrisation into the matrix E, which has an interpretation as a cable, called the equivalent cable. Householder reflections play an important role in the construction of the equivalent cable and are now discussed.

Given any vector V with components vi, the entries hij of the Householder reflection matrix H (see Golub and Van Loan, 1990) are defined by

hij = δij − 2 vi vj/(vr vr) ,   (8.108)


where δij is Kronecker’s delta and a repetition of indices implies summation. By construction, H is both symmetric and involutory (its own inverse). The former is obvious since hij = hji, while the latter follows from the calculation

hik hkj = (δik − 2 vi vk/(vr vr)) (δkj − 2 vk vj/(vs vs))

        = δik δkj − 2 δik vk vj/(vs vs) − 2 δkj vi vk/(vr vr) + 4 vi vk vk vj/((vr vr)(vs vs))

        = δij − 4 vi vj/(vr vr) + 4 vi vj/(vr vr) = δij .

Thus H is a symmetric orthogonal matrix and is also its own inverse, that is, HT = H−1.
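These two properties are easy to confirm numerically; the vector below is an arbitrary test vector, and the construction follows (8.108) directly:

```python
import numpy as np

# Check of (8.108): H = I - 2 v v^T / (v^T v) is symmetric and its own
# inverse (H H = I, hence H^T = H^{-1}).  v is an arbitrary test vector.
rng = np.random.default_rng(2)
v = rng.standard_normal(6)
H = np.eye(6) - 2 * np.outer(v, v) / (v @ v)
```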

Householder reflections are traditionally used as the first stage in the numerical computation of matrix eigenvalues. By skilful choices of the vector V, it is possible to construct a sequence of Householder reflections that reduce any matrix to upper Hessenberg form14. In particular, Householder reflections reduce symmetric matrices to tri-diagonal form (see Golub and Van Loan, 1990). However, when the Householder algorithm is applied in the traditional way to S−1AS, the resulting tri-diagonal form has no immediate interpretation as a cable or unbranched dendrite, and the method appears to fail. The difficulty stems from the fact that conventional applications of Householder’s algorithm always start with the last column and bottom row of a matrix and progressively sweep through the matrix structure, finally arriving at its top left-hand corner. The resulting tri-diagonal matrix has no direct interpretation as a cable.

Instead another strategy is required. In overview, it can be shown that repeated pre- and post-multiplication of S−1AS by a series of suitably chosen Householder matrices eventually reduces S−1AS to a symmetric cable matrix C, which may then be associated with a de-symmetrised cable matrix E. Each pre- and post-multiplication by a Householder matrix is called a Householder operation and has the property that it zeroes a single pair of elements in the reduction of S−1AS to C. In this respect the algorithm is similar to a Givens rotation (Golub and Van Loan, 1990), except that the latter destroys the matrix structure essential for cable formation whereas the former sequentially generates elements of the symmetrised cable matrix as off-tri-diagonal entries are progressively cleared from successive rows of the partially tri-diagonalised symmetric tree matrix.

Let A be an n × n symmetric matrix. The Householder reduction algorithm that preserves cable structure consists of two complementary operations, now described in detail. Suppose that p and q are respectively the minimum row index and the minimum column index in row p such that ap,q and aq,p (q > p + 1) are non-zero off-tri-diagonal elements. If p and q do not exist then A is tri-diagonal; otherwise choose vi in matrix (8.108) by the formula

vi = √(1 − α) δi,(p+1) − √(1 + α) δi,q ,

α = ap,(p+1)/√(a²p,(p+1) + a²p,q) ,   β = ap,q/√(a²p,(p+1) + a²p,q) .   (8.109)

14An upper Hessenberg matrix has all its entries below the principal sub-diagonal zero and is the required starting form for a QZ or QL eigenvalue reduction.
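The zeroing property of the choice (8.109) can be checked numerically. The sketch below builds the reflection directly from (8.108) for an illustrative symmetric matrix with a single off-tri-diagonal pair; the function name and matrix values are invented for the demonstration:

```python
import numpy as np

# One Householder operation in the sense of (8.108)-(8.110): the choice
# (8.109) zeroes the off-tri-diagonal pair (p, q), (q, p) and replaces the
# (p, p+1) entry by w = sqrt(a_{p,p+1}^2 + a_{p,q}^2).  (beta = a_{p,q}/w
# is the off-diagonal entry of the block form (8.110).)
def householder_pq(A, p, q):
    n = A.shape[0]
    w = np.hypot(A[p, p + 1], A[p, q])
    alpha = A[p, p + 1] / w
    v = np.zeros(n)
    v[p + 1], v[q] = np.sqrt(1 - alpha), -np.sqrt(1 + alpha)
    H = np.eye(n) - 2 * np.outer(v, v) / (v @ v)
    return H @ A @ H, w

# Symmetric tree-like test matrix with one off-tri-diagonal pair at (2, 5).
A = np.diag([-1.0] * 6)
for i, s in enumerate([0.4, 0.3, 0.6, 0.5, 0.7]):
    A[i, i + 1] = A[i + 1, i] = s
A[2, 5] = A[5, 2] = 0.2          # the entry to be chased out

B, w = householder_pq(A, 2, 5)
```

After the operation, B[2, 5] and B[5, 2] vanish, B[2, 3] equals w, and symmetry is preserved, exactly as asserted below equation (8.110).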


The Householder reflection defined by this choice of V is denoted by H(p,q) and has block matrix form

H(p,q) = | Ip  0   0        0    0    |
         | 0   α   0        β    0    |
         | 0   0   Iq−p−2   0    0    |
         | 0   β   0        −α   0    |
         | 0   0   0        0    In−q |   (8.110)

where Ij denotes the j × j identity matrix and α2 + β2 = 1. It can be demonstrated that the (p, q)th and (q, p)th entries of H(p,q)AH(p,q) are zero; the effect of the Householder operation defined by H(p,q) is to distribute the (p, q)th and (q, p)th elements of A around the (p, p + 1)th and (p + 1, p)th entries of H(p,q)AH(p,q) and other off-tri-diagonal elements lying below row p in the new matrix H(p,q)AH(p,q). Repeated application of this idea with a carefully chosen sequence of Householder operations enables all the off-tri-diagonal elements of S−1AS to be “chased” out of the original matrix. In conclusion, there is a series of Householder reflections H1, H2, . . . , Hk such that

T = (Hk Hk−1 · · · H2 H1)(S−1AS)(H1 H2 · · · Hk−1 Hk)   (8.111)

is a symmetric tri-diagonal matrix. Unfortunately, pairs of negative entries can arise on the sub/super diagonals of T, so T itself is not the symmetric cable matrix C. However, T can be transformed into C by a sequence of elementary matrix operations.

Suppose that tp,(p+1) and t(p+1),p are a pair of negative elements in the sub/super diagonal of T and that all the off-tri-diagonal elements in the first (p − 1) rows of T are zero; then the algebraic signs of the elements in the (p, p + 1)th and (p + 1, p)th entries of R(p)TR(p) are reversed by the action of the symmetric orthogonal matrix

R(p) = | Ip  0    0       |
       | 0   −1   0       |
       | 0   0    In−p−1  |   (8.112)

In fact, R(p) changes the algebraic sign of all the entries in the pth row and column of T except the (p, p) entry (which actually has its algebraic sign changed twice). Thus there is a sequence of reflections R1, R2, . . . , Rm such that T is transformed into the symmetric cable matrix C,

C = (Rm Rm−1 · · · R2 R1) T (R1 R2 · · · Rm−1 Rm) ,   (8.113)

whose sub and super diagonals contain non-negative elements only. In conclusion, there is a series of Householder reflections H1, . . . , Hk and a series of correction matrices R1, . . . , Rm such that

C = Q−1(S−1AS)Q ,   Q = H1 H2 · · · Hk R1 R2 · · · Rm ,   QTQ = QQT = I .

Fig. 11 illustrates the procedure schematically for a general Y-junction symmetric tree matrix.

The matrix C is now regarded as the symmetrised form of a tree matrix E corresponding to an unbranched tree or equivalent cable. It therefore remains to extract E from C together with the symmetrising n × n diagonal matrix X = diag (x0, . . . , xn−1), whose first entry is x0 = 1 without loss of generality, and for which C = X−1EX. Clearly

C = X−1EX  ⟺  cij = xj eij/xi .



Fig. 11. Schematic of the Householder tri-diagonalisation procedure applied to the tree matrix for a general Y-junction. (a) The symmetric tree matrix has one pair of off-tri-diagonal elements. (b) Zeroing this pair produces a new off-tri-diagonal pair further towards the lower-right corner of the matrix. (c)–(d) Repeat until tri-diagonality is achieved. The resulting cable matrix will represent either (e) two sections (one connected, one disconnected) or (f) one section (connected).

The connection between X and E mirrors the connection between the original tree matrix A and its symmetrising matrix S. By construction, E and C have the same main diagonal. The sub and super diagonal entries of C and E satisfy

ci,i+1 = xi+1 ei,i+1/xi ,   ci+1,i = xi ei+1,i/xi+1 ,   0 ≤ i < n − 1 .

From the symmetry of C it now follows that

ei+1,i = c²i,i+1/ei,i+1 ,   0 ≤ i < n − 1 ,   (8.114)

xi+1 = xi √(ei+1,i/ei,i+1) ,   0 ≤ i < n − 1 .   (8.115)

Since the diagonal entries of E and C are identical, the task is therefore to construct the sub and super diagonals of E from C. The technical construction of E (the equivalent cable) is best understood through an appreciation of the overall strategy. The procedure by which C is de-symmetrised to get E may be usefully decomposed into three distinct phases, namely stage I, starting a cable; stage II, building a cable body; and stage III, terminating a cable and, if necessary, restarting a new cable. Cables terminate either naturally, because the de-symmetrisation process exhausts all the entries of C, or prematurely, because C has a non-trivial block tri-diagonal structure15 that imposes a non-trivial block tri-diagonal structure on E. Of course, each tri-diagonal block of E corresponds to a cable, but only the block of E occupying the top left-hand corner is connected to the soma (somal cable): each

15The matrix C is block tri-diagonal whenever zero elements occur in its sub/super diagonals.


remaining block can be identified as a cable that is disconnected from the soma. Therefore configurations of inputs on the model dendrite that map to inputs on this disconnected cable have no impact on the somal potential.

Each block tri-diagonal matrix of E defines a distinct cable. The terminal boundary conditions satisfied by this cable are embedded in the first and last rows of the block, while all other rows are identified with internal nodes. In particular, the finite difference representation of derivatives guarantees that the sum of the off-diagonal entries of these non-boundary rows is 2α. Sealed terminals correspond to a boundary row whose off-diagonal entry is also 2α, while boundary rows corresponding to cut terminals have α as off-diagonal entry and require the cable to be extended by one internodal distance to the actual cut terminal. Once a starting boundary condition is identified for any cable, the body of the cable and the nature of its other terminal boundary are determined algorithmically by alternate use of result (8.114) and the fact that the sum of all off-diagonal entries in each non-boundary row is 2α.

Stage I The value of e01, the off-diagonal entry in the first row of E, is found by recognising that the first section of the equivalent cable is the “Rall” sum of the limbs contingent on the soma. Recall from (8.66) that the somal condition for the original tree is

dv0/dt = −[(2g + βhg0)/(2εg + hg0)] v0 + [2αh/(2εg + hg0)] Σr=1..N gkr vkr − 2J0/(2εg + hg0) .

Therefore the first row of E contains the pair of entries

e00 = −(2g + βhg0)/(2εg + hg0) ,   e01 = [2αh/(2εg + hg0)] Σr=1..N gkr = 2αhg0/(2εg + hg0) .

Stage II Once e01 is known, equation (8.114) asserts that e10 = c²01/e01 and consequently e12 = 2α − e10. This procedure is repeated, that is, ei+1,i is calculated from ei,i+1 and then ei+1,i+2 is calculated from ei+1,i according to the prescriptions

ei+1,i = c²i,i+1/ei,i+1 ,   ei+1,i+2 = 2α − ei+1,i ,   (8.116)

provided ei,i+1 ≠ 0. Once started, this algorithmic procedure generates the body of a cable and concludes the second phase of cable construction.
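The Stage II recursion can be condensed into a few lines. The sketch below assumes the superdiagonal entries c[i] of the symmetric cable matrix C and the Stage I value e01 are already known; α and the c values are illustrative numbers, not taken from the text:

```python
import numpy as np

# Sketch of the Stage II recursion (8.116): given the superdiagonal entries
# c[i] = C[i, i+1] and the Stage I value e01, generate the off-diagonal
# entries of E row by row.  alpha and c below are illustrative values.
def desymmetrise(c, e01, alpha):
    n = len(c) + 1
    E = np.zeros((n, n))
    E[0, 1] = e01
    for i in range(n - 2):
        E[i + 1, i] = c[i] ** 2 / E[i, i + 1]        # equation (8.114)
        E[i + 1, i + 2] = 2 * alpha - E[i + 1, i]    # internal row sum 2*alpha
    E[n - 1, n - 2] = c[-1] ** 2 / E[n - 2, n - 1]   # last sub-diagonal entry
    return E

alpha = 0.5
c = np.array([0.45, 0.5, 0.48, 0.5])   # sub/superdiagonal of C (illustrative)
E = desymmetrise(c, e01=0.6, alpha=alpha)

# each internal row of E sums (off-diagonal) to 2*alpha, as required
internal = [E[i, i - 1] + E[i, i + 1] for i in range(1, len(c))]
```

A production version would also test for ei,i+1 = 0 at each step (the premature termination of Stage III) and restart a new cable block; that branch is omitted here for brevity.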

Stage III Sooner or later the algorithm described in Stage II fails, because either i + 1 = n − 1 (i.e. E is completely determined) or ei,i+1 ei+1,i = 0, so that the equivalent cable now contains disconnected sections. In the latter case, ei,i+1 = ei+1,i = 0. In both cases, the nature of the boundary condition at cable termination is determined from the numerical value of ei,i−1, the last non-zero off-diagonal entry of E to be determined prior to disconnection/termination. If ei,i−1 = 2α then the cable ends on a sealed end; otherwise ei,i−1 = α and the cable ends on a cut terminal after one further internodal distance. There are no other possible values for ei,i−1.

Self-evidently, the diagonal entries of the symmetrising matrix X are calculated directly from equation (8.115) until a natural or premature cable termination. Whenever a cable terminates prematurely and C still has elements to be de-symmetrised, the de-symmetrisation procedure must be restarted at either a sealed or cut terminal, and the remaining elements of X initialised with xi+1 = 1. The difficulty stems from the absence of a somal boundary condition, and is resolved by inspecting the properties of the mapping between the original dendrite and the symmetrised cable matrix C. If a cable is to be initiated at


node (i + 1), then it will restart with a sealed end if the membrane potential at node Zi+1 features in the (i + 1)th mapping vector; otherwise it will restart with a cut end.

The description of the equivalent cable is completed by the specification of the g-value of each cable section. If Gi is the g-value of a cable section that has node Zi at its distal end, then the finite difference representation of the cable equation at node Zi is

dVi/dt = −βVi + [2αGi/(Gi + Gi+1)] Vi−1 + [2αGi+1/(Gi + Gi+1)] Vi+1 = −βVi + ei,i−1 Vi−1 + ei,i+1 Vi+1 .

By inspection, it is clear that

ei,i−1 + ei,i+1 = 2α ,   Gi+1 = Gi ei,i+1/ei,i−1 .

The first result has already been used in the construction of E, while the last result determines the g-values of all the sections of the component cables of the equivalent cable from the g-value of their first section. For the somal cable, the g-value of the first section is simply the sum of the g-values of the limbs connected to the soma. For a disconnected section, the g-value of its first section may be arbitrarily fixed at unity without loss of generality.

The pathway from original dendrite to equivalent cable is now seen to consist of three distinct steps, namely, symmetrisation of the original dendritic representation, reduction of that symmetrised representation to symmetric cable form and, finally, de-symmetrisation of the symmetric cable to get the equivalent cable. In the formalism of matrix algebra, these operations are described respectively by the similarity transformations

C = H−1(S−1AS)H ,   C = X−1EX ,

so that the original tree matrix A and its equivalent form E are now connected by the similarity transformation

E = M−1AM ,   M = SHX−1 .   (8.117)

Furthermore, the original cable equation now becomes

d(M−1V)/dt = E(M−1V) − 2M−1RJ ,   that is,   dVE/dt = EVE − 2M−1RJ ,

where VE = M−1V is seen to be the vector of membrane potentials on the equivalent cable. Thus the matrix M may be interpreted as encoding the electrical mapping between the original tree with membrane potentials V and its equivalent cable with membrane potentials VE = M−1V. The matrix M will be called the Electro-Geometric-Projection matrix (EGP).

Computational Considerations

The tree and cable matrices can be efficiently stored since they are sparse and nearly tri-diagonal. It has already been explained that a tree matrix based on n numbered nodes contains 3n − 2 non-zero entries. The elements of the main diagonal can be stored as two elements, one associated with the soma node and one corresponding to all the remaining nodes. The 2n − 2 off-diagonal elements can be stored as triplets whose first two elements are respectively the row and column of the element while the last is the element value.

The symmetrising matrix S is stored as a vector of length n, while the symmetric tree matrix S−1AS is stored as (n − 1) triplets. The form for S is constructed directly from A as described in equations (8.73) and defines the form for S−1AS.


The simple structure of the Householder reflection H(p,q) guarantees that only rows p + 1 and q and columns p + 1 and q of the current intermediate matrix are modified by this Householder operation. Moreover, it is only elements in rows/columns p or greater that are actually altered. Therefore, the Householder operation H(p,q) modifies a maximum of 2(n − p − 1) elements. The Householder operations by which the symmetric tree matrix is manipulated into the symmetric cable matrix maintain a high level of sparsity in intermediate matrices. The temporary off-tri-diagonal elements are small in number and may be stored in triplet form.

The electrical mapping between tree and equivalent cable can be stored in terms of the sequence of individual Householder reflections, each of which may be stored as a triplet. For example, H(p,q) is the triplet (p, q, β), since α and the structure of H(p,q) follow from equations (8.109) and (8.110). Of course, to appreciate the connection between dendritic tree and equivalent cable, the EGP matrix is required.

The structure of the equivalent cable is stored with similar efficiency to the original dendritic tree, as is the symmetrising matrix X.

In conclusion, the passage from dendritic tree to equivalent cable can be achieved with high speed and efficient memory utilisation in view of the sparse nature of dendritic structure matrices. However, the electrical mapping, being an association between points on the dendritic tree and its equivalent cable, requires calculations on full matrices for a complete specification and therefore is inevitably slow and memory intensive.

Full Example of the Householder Procedure

It has already been observed that the diagonal matrix S defined by equation (8.75), when applied to the tree matrix A defined in equation (8.7), gives the symmetric tree matrix

\[
S^{-1}AS = \begin{pmatrix}
-\beta_S & \alpha k & 0 & 0 & 0 & 0 & 0 \\
\alpha k & -\beta & \alpha p & 0 & 0 & 0 & 0 \\
0 & \alpha p & -\beta & \alpha q & 0 & \alpha r & 0 \\
0 & 0 & \alpha q & -\beta & \sqrt{2}\,\alpha & 0 & 0 \\
0 & 0 & 0 & \sqrt{2}\,\alpha & -\beta & 0 & 0 \\
0 & 0 & \alpha r & 0 & 0 & -\beta & \sqrt{2}\,\alpha \\
0 & 0 & 0 & 0 & 0 & \sqrt{2}\,\alpha & -\beta
\end{pmatrix}.
\]

The Householder reduction of S^{-1}AS to tri-diagonal form is now illustrated for this matrix and is achieved by two Householder operations.

Step I. The first Householder operation is designed to zero the entry αr in the third row and sixth column of S^{-1}AS. With this intention in mind, let the symmetrised tree matrix S^{-1}AS and the Householder reflection H1 have block diagonal forms

\[
S^{-1}AS = \begin{pmatrix} T & U \\ U^T & B \end{pmatrix}, \qquad
H_1 = \begin{pmatrix} I_3 & 0_{34} \\ 0_{43} & Q \end{pmatrix}
\]


266 K. A. Lindsay, J. M. Ogden, D. M. Halliday, J. R. Rosenberg

in which the forms for T (a 3 × 3 matrix), U (a 3 × 4 matrix) and B (a 4 × 4 matrix) are evident from the expression for S^{-1}AS. Specifically,

\[
Q = \begin{pmatrix}
\gamma & 0 & \delta & 0 \\
0 & 1 & 0 & 0 \\
\delta & 0 & -\gamma & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}, \qquad
\gamma = \frac{q}{\sqrt{r^2+q^2}}\,, \quad \delta = \frac{r}{\sqrt{r^2+q^2}}\,.
\]

Using matrix block multiplication, it follows that
\[
H_1(S^{-1}AS)H_1 = \begin{pmatrix} T & UQ \\ (UQ)^T & QBQ \end{pmatrix}
\]
in which UQ and QBQ are respectively 3 × 4 and 4 × 4 matrices. Let w = \sqrt{r^2+q^2}; then it is simply a matter of matrix algebra to verify that

\[
H_1(S^{-1}AS)H_1 = \begin{pmatrix}
-\beta_S & \alpha k & 0 & 0 & 0 & 0 & 0 \\
\alpha k & -\beta & \alpha p & 0 & 0 & 0 & 0 \\
0 & \alpha p & -\beta & \alpha w & 0 & 0 & 0 \\
0 & 0 & \alpha w & -\beta & \sqrt{2}\,\alpha\gamma & 0 & \sqrt{2}\,\alpha\delta \\
0 & 0 & 0 & \sqrt{2}\,\alpha\gamma & -\beta & \sqrt{2}\,\alpha\delta & 0 \\
0 & 0 & 0 & 0 & \sqrt{2}\,\alpha\delta & -\beta & -\sqrt{2}\,\alpha\gamma \\
0 & 0 & 0 & \sqrt{2}\,\alpha\delta & 0 & -\sqrt{2}\,\alpha\gamma & -\beta
\end{pmatrix}. \tag{8.118}
\]

Step II. The second Householder operation is designed to zero the entry \sqrt{2}\,\alpha\delta in the fourth row and seventh column of H_1(S^{-1}AS)H_1. In this case, let H_1(S^{-1}AS)H_1 and the Householder reflection H_2 have block diagonal forms

\[
H_1(S^{-1}AS)H_1 = \begin{pmatrix} T & U \\ U^T & B \end{pmatrix}, \qquad
H_2 = \begin{pmatrix} I_4 & 0_{43} \\ 0_{34} & Q \end{pmatrix}
\]

in which the forms for T (a 4 × 4 matrix), U (a 4 × 3 matrix) and B (a 3 × 3 matrix) are evident from expression (8.118) for H_1(S^{-1}AS)H_1. Specifically,

\[
Q = \begin{pmatrix}
\gamma & 0 & \delta \\
0 & 1 & 0 \\
\delta & 0 & -\gamma
\end{pmatrix}.
\]


Let H = H_2H_1; then it is again a matter of algebra to demonstrate that

\[
H(S^{-1}AS)H = \begin{pmatrix}
-\beta_S & \alpha k & 0 & 0 & 0 & 0 & 0 \\
\alpha k & -\beta & \alpha p & 0 & 0 & 0 & 0 \\
0 & \alpha p & -\beta & \alpha w & 0 & 0 & 0 \\
0 & 0 & \alpha w & -\beta & \sqrt{2}\,\alpha & 0 & 0 \\
0 & 0 & 0 & \sqrt{2}\,\alpha & -\beta & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -\beta & \sqrt{2}\,\alpha \\
0 & 0 & 0 & 0 & 0 & \sqrt{2}\,\alpha & -\beta
\end{pmatrix} \tag{8.119}
\]

where the final Householder operations are embodied in the orthogonal matrix

\[
H = H_2H_1 = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \gamma & 0 & \delta & 0 \\
0 & 0 & 0 & 0 & \gamma & 0 & \delta \\
0 & 0 & 0 & \delta & 0 & -\gamma & 0 \\
0 & 0 & 0 & 0 & \delta & 0 & -\gamma
\end{pmatrix}. \tag{8.120}
\]
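The two-step reduction above can be checked numerically. The sketch below is not from the chapter: the limb weights k, p, q, r and the values of α, β are invented, and the reflections H1, H2 are embedded in a 7 × 7 identity exactly as in the block forms given in the text. The final product should be tri-diagonal, with a zero (5, 6)/(6, 5) pair marking the disconnected section.

```python
# Numerical sanity check of the two-step Householder reduction (assumed values).
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

k, p, q, r = 1.5, 0.8, 1.2, 0.7          # assumed limb weights
alpha, beta, beta_s = 1.0, 3.0, 2.5      # assumed matrix parameters
w = math.sqrt(q * q + r * r)
g, d = q / w, r / w                       # gamma and delta of the text
s2 = math.sqrt(2.0)

A = [[-beta_s, alpha*k, 0, 0, 0, 0, 0],
     [alpha*k, -beta, alpha*p, 0, 0, 0, 0],
     [0, alpha*p, -beta, alpha*q, 0, alpha*r, 0],
     [0, 0, alpha*q, -beta, s2*alpha, 0, 0],
     [0, 0, 0, s2*alpha, -beta, 0, 0],
     [0, 0, alpha*r, 0, 0, -beta, s2*alpha],
     [0, 0, 0, 0, 0, s2*alpha, -beta]]

def embed(Q, offset):
    """Place the block Q on the diagonal of a 7 x 7 identity matrix."""
    H = [[float(i == j) for j in range(7)] for i in range(7)]
    for i in range(len(Q)):
        for j in range(len(Q)):
            H[offset + i][offset + j] = Q[i][j]
    return H

H1 = embed([[g, 0, d, 0], [0, 1, 0, 0], [d, 0, -g, 0], [0, 0, 0, 1]], 3)
H2 = embed([[g, 0, d], [0, 1, 0], [d, 0, -g]], 4)

M = matmul(H2, matmul(H1, matmul(A, matmul(H1, H2))))
# Tri-diagonal: every entry with |i - j| > 1 vanishes ...
assert all(abs(M[i][j]) < 1e-12 for i in range(7) for j in range(7)
           if abs(i - j) > 1)
# ... and the zero (5,6)/(6,5) pair signals the disconnected cable section.
assert abs(M[4][5]) < 1e-12 and abs(M[5][4]) < 1e-12
assert abs(M[2][3] - alpha * w) < 1e-12   # the new super-diagonal entry is alpha*w
```

Because each H is orthogonal and symmetric, the transformation preserves the spectrum of the symmetrised tree matrix, which is the property the equivalent cable construction relies on.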

Fully equivalent cable. The equivalent cable matrix E is obtained from (8.119) by the de-symmetrisation algorithm described previously. The presence of a pair of zero elements in the (5, 6) and (6, 5) entries of the symmetrised cable matrix H(S^{-1}AS)H indicates that the equivalent cable in this instance has a connected section of length 2l and a disconnected section of length l. The de-symmetrisation procedure begins by recognising that the first row of the de-symmetrised matrix E has second entry αk². Thereafter, the process is mechanical until the fifth row of E is complete. At this point

\[
E = \begin{pmatrix}
-\beta_S & \alpha k^2 & 0 & 0 & 0 & 0 & 0 \\
\alpha & -\beta & \alpha & 0 & 0 & 0 & 0 \\
0 & \alpha p^2 & -\beta & \alpha w^2 & 0 & 0 & 0 \\
0 & 0 & \alpha & -\beta & \alpha & 0 & 0 \\
0 & 0 & 0 & 2\alpha & -\beta & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \cdots & \cdots \\
0 & 0 & 0 & 0 & 0 & \cdots & \cdots
\end{pmatrix}, \tag{8.121}
\]


while the de-symmetrising diagonal matrix is

\[
X = \mathrm{diag}\left(1,\ \frac{1}{k},\ \frac{p}{k},\ \frac{p}{kw},\ \frac{\sqrt{2}\,p}{kw},\ \cdots,\ \cdots\right). \tag{8.122}
\]

Furthermore, the form of the fifth row of E indicates that the connected section of the fully equivalent cable ends on a sealed terminal. The g-values of the first four sections of the equivalent cable are

\[
g_1 = g_2 = g_P\,, \qquad g_3 = g_4 = g_2\left(\frac{w^2}{p^2}\right) = g_L + g_R\,. \tag{8.123}
\]

After four sections, the equivalent cable must be restarted. The restarting condition is determined by properties of the sixth row of H displayed in (8.120). The terminal nodes z_4 and z_6 of the original dendritic tree are not present in the electrical mapping associated with the sixth row of H, and therefore the equivalent cable must be restarted on a cut terminal. The completed components of E in (8.121), X in (8.122) and the cable g-values in (8.123) are respectively
\[
\begin{pmatrix} -\beta & \alpha \\ 2\alpha & -\beta \end{pmatrix}, \qquad
\left(\cdots,\ 1,\ \sqrt{2}\,\right), \qquad g_5 = g_6 = g_R + g_L\,.
\]

It is now evident that the disconnected section of the equivalent cable begins on a cut terminal, ends on a sealed terminal and is of uniform thickness. Bringing together the symmetrising matrix S given in (8.75), the Householder operations H given in (8.120) and the de-symmetrising matrix X given in (8.122), the complete electrical mapping from dendritic tree to equivalent cable is now seen to be

\[
M = SHX^{-1} = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & \xi & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & \xi \\
0 & 0 & 0 & 1 & 0 & \eta & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & \eta
\end{pmatrix}, \qquad
\xi = \frac{rp}{kwq}\,, \quad \eta = -\frac{qp}{kwr}\,.
\]

Generalised Compartmental Models

Rall’s equivalent cylinder was used as a mathematical model to simplify the “exploration of the physiological implications of dendritic branching” (see Rall, 1964). However, Rall recognised both the limited utility of this model when applied to the spatiotemporal analysis of branching and tapering dendritic structures with complex patterns of synaptic activity, and the difficulty of solving the partial differential equations (cable equations) characterising these complex structures. To simplify the analytical and computational problems associated with the direct application of the cable equation to dendritic systems, Rall introduced a compartmental model of a neuron whose underlying mathematical formulation was expressed in terms of ordinary differential equations as opposed to the partial differential equations that appear in neuronal cable theory (Rall, 1964). This procedure has the advantage that well understood numerical algorithms are readily available to solve these systems of ordinary differential equations.

The Rall compartments are a collection of contiguous sections of the neuron, each one of which is considered to be spatially uniform. The compartments themselves were modelled by the usual equivalent circuit for the electrical behavior of a nerve membrane and


their interaction with neighbouring compartments is governed by Kirchhoff’s circuit laws. The practical implementation of the Rall compartmental model associates a distinct point (typically its midpoint) with each compartment, whose primary role is then to determine the biophysical characteristics of the equivalent circuit at that point. The Rall compartmental model of a neuron is often described as a series of isopotential regions, each coupled by resistances to its immediate neighbours (see Rall et al., 1992; Perkel and Mulloney, 1978a).

Since the equations governing the Rall compartmental model arise from a consideration of electrical circuits, these equations are obliged to have a matrix representation that is tri-diagonal except for rows pertaining to dendritic branch points. It must be emphasised that this tri-diagonal structure is inherent in Rall’s compartmental model and should not be confused with numerical schemes to integrate the cable equation specification of the dendrite. Any tri-diagonal structure possessed by the latter arises through the choice of numerical scheme (e.g. second order central differences) and is not an obligatory feature of the model. For example, fourth order central difference schemes give penta-diagonal matrices and spectral methods give full matrices. In summary, there is a subtle but important distinction between the structure of Rall’s compartmental model, which is exact, and numerical schemes that have a similar mathematical structure but are approximate.

In Rall’s compartmental model of a neuron, statements about the interactions between compartments (distinct physical regions of a dendrite) become equivalent to statements describing the interaction between points representing these regions. The equivalence between compartment and point provides the motivation for a more general description of compartmental models through the mathematical notion of duality: compartments and points are defined as dual elements so that results for compartments may be regarded as results for points and vice-versa. However, although both sets of results are mathematically similar, the procedure for formulating a description based on compartments is different from that used for points. In the latter, points form a set of designated sites on the dendritic tree, and the mathematical model is formulated for the membrane potential at these points under the modelling assumption that current can flow across the dendritic membrane only at these designated points. It will be seen that Rall’s compartmental model is a special case of this class of compartmental model.

The question now arises as to the connection, if any, between a compartmental representation of a neuron and one based on interconnected limbs described by cable equations. Rall partly answers this question by demonstrating that the limit of his compartmental model is the cable equation. The inference of this result is that sufficiently refined compartmental models give neuronal behavior that is close to that predicted from analytical solutions of the cable equations. Specific comparisons (see Segev, Fleshman, Miller and Bunow, 1985) do indeed bear out this presumption. However, it is important to recognise that each compartmentalisation of a neuron is a different model, and that comparisons between compartmental and cable models have necessarily been limited to comparisons for finite times. To prove that the cable and compartmental models are identical, it is necessary to show that, for each location on a real dendrite, the maximum difference in the membrane potential calculated (without numerical error) for the two models can be made smaller than some closeness criterion for all time, and not just for a finite time. That is, the convergence between models is uniform in time. If this can be proved, then agreement between solutions to dendritic models based on compartments and based on the cable equation is entirely expected and confirms Rall’s insight that compartmental models do indeed capture the behavior of neurons (as defined by the cable equation model).

The development of the generalised model brings with it several benefits to compartmental modelling that are not available for the special models. Once a desired spatial resolution for dendritic potentials is specified, the values for axial resistance connecting compartments and the membrane capacitance for each compartment follow from the application of several simple rules, which for tapering dendrites give exact expressions. Taking


the general model as a reference, errors arising in the choice of compartment parameters for particular models may be estimated. In particular compartmental models, synaptic inputs are assigned to the compartments in which they happen to fall, irrespective of their position within this compartment. The general model provides a means for partitioning synaptic input between neighbouring compartments in a way that is natural and consistent with dendritic physiology.

Formulation of a General Compartmental Model

Building on the notion of duality, a compartmental model based on points (as opposed to compartments) is now developed. Let z_{j−1}, z_j and z_{j+1} be three physically sequential points on a dendritic limb at which charge flow between the intracellular and extracellular media is possible. Recall that current flow across the membrane is restricted to these points only. The usual ladder network (Fig. 12) used in the formulation of Rall’s compartmental model is a useful aid in motivating the development of the general compartmental model. Each rung of the ladder now models the membrane properties of the dendrite in the neighbourhood of the designated point z_j. Each rung is constructed from the usual equivalent circuit in which a capacitor is connected in parallel with individual batteries and resistors, as illustrated in figure 12. The backbone of the ladder describes the resistive (axoplasmic) coupling between adjacent points.

The general compartmental equations are now constructed using Kirchhoff’s circuit laws and the properties of standard circuit components. The first law requires conservation of current at z_j, that is

\[
I^{(m)}_j = I_{j-1,j} - I_{j,j+1}\,. \tag{8.124}
\]

Fig. 12. A diagrammatic representation of the ladder network used to represent the general compartmental model. Membrane current flowing between intracellular and extracellular media in the neighbourhood of designated point z_j is modelled by the usual equivalent electrical circuit consisting of a capacitor in parallel with a battery and one or more resistances representing ionic currents. Axial currents I_{j−1,j} and I_{j,j+1} are ohmic and governed by the resistances r_{j−1,j} between z_{j−1} and z_j and by r_{j,j+1} between z_j and z_{j+1}.


The current flow from z_{j−1} to z_j is based on Ohm’s law and, in the first instance, assumes no current leakage across the membrane connecting these designated nodes. In order to model realistic synaptic activity at any location on the dendrite, the model will be extended to include discrete current inputs between z_{j−1} and z_j. This extension will allow synaptic input to be partitioned in a natural way between nodes.

Let I_{j−1,j}(t) be the total axial current flowing along a dendritic limb between nodes z_j and z_{j−1} in the absence of current input. Suppose also that the dendritic limb has cross-sectional area A(x), where x is an axial coordinate along the limb; then I_{j−1,j} satisfies
\[
I_{j-1,j}(t) = -g_A A(x)\,\frac{\partial V(t,x)}{\partial x}
\]

where V(t, x) is the dendritic membrane potential at x. The potential difference between points z_{j−1} and z_j is therefore
\[
V(t,z_j) - V(t,z_{j-1}) = V_j - V_{j-1} = \int_{z_{j-1}}^{z_j} \frac{\partial V}{\partial x}\,dx
= -\frac{I_{j-1,j}(t)}{g_A}\int_{z_{j-1}}^{z_j} \frac{ds}{A(s)}\,.
\]

In conclusion,
\[
I_{j-1,j} = g_A\,\frac{V_{j-1}-V_j}{r_{j-1,j}}\,, \qquad
r_{j-1,j} = \int_{z_{j-1}}^{z_j} \frac{ds}{A(s)}\,. \tag{8.125}
\]

The characteristics of the equivalent circuit indicate that
\[
I^{(m)}_j = I^{(\mathrm{ionic})}_j + c^{(m)}_j \frac{dV_j}{dt} \tag{8.126}
\]

where c^{(m)}_j is a lumped capacitance at z_j (see Hines and Carnevale, 1997), that is,
\[
c^{(m)}_j = C_M \int_{(z_{j-1}+z_j)/2}^{(z_j+z_{j+1})/2} P(x)\,dx\,. \tag{8.127}
\]

Technically, this integral should be taken over the dendritic surface and not along the dendritic axis. However, if surface and axial measures of length are significantly different¹⁶, that is, the dendritic taper is severe, then non-axial current flow is almost certainly important and the validity of dendritic compartment models themselves is doubtful. Thus the equation governing the evolution of the membrane potential at z_j is obtained by combining the component equations (8.124–8.126) and is
\[
c^{(m)}_j \frac{dV_j}{dt} + I^{(\mathrm{ionic})}_j
= g_A\,\frac{V_{j-1}-V_j}{r_{j-1,j}} + g_A\,\frac{V_{j+1}-V_j}{r_{j,j+1}}\,. \tag{8.128}
\]

This equation is formally identical to those for the Rall compartmental model (see Segev, Fleshman and Burke, 1989) of a neuron. Indeed, all compartmental models of neurons will necessarily take the form of equation (8.128) for suitable choices of c^{(m)}_j, r_{j−1,j} etc., with different choices giving different models. Equations for terminal boundaries, branch points and the tree-to-soma connection are treated in a similar way. A branch point is modelled by the membrane potential at the junction, but the associated region of dendrite is star-like and not cable-like. For example, Rall’s compartmental model corresponds to the choice

\[
r_{j-1,j} = \frac{(z_j - z_{j-1})}{2}\left(\frac{1}{A_{j-1}} + \frac{1}{A_j}\right), \tag{8.129}
\]
\[
c^{(m)}_j = C_M\,\frac{(z_{j+1} - z_{j-1})}{2}\,P\!\left(\frac{z_{j-1} + 2z_j + z_{j+1}}{4}\right). \tag{8.130}
\]

¹⁶Recall that ds = dx\sqrt{1 + (dy/dx)^2} for a plane curve, so that ds and dx differ to second order in gradient/taper.


Formulae (8.129) and (8.130) are immediately recognisable as numerical quadratures for r_{j−1,j} in equation (8.125) and c^{(m)}_j in equation (8.127). Rall’s expression for r_{j−1,j} is based on the trapezoidal rule
\[
\int_{z_{j-1}}^{z_j} f(x)\,dx = \frac{(z_j - z_{j-1})}{2}\bigl(f(z_{j-1}) + f(z_j)\bigr)
- \frac{(z_j - z_{j-1})^3}{12}\,f''(\xi_j) \tag{8.131}
\]

while that for c^{(m)}_j uses the midpoint rule
\[
\int_{z_{j-1}}^{z_j} f(x)\,dx = (z_j - z_{j-1})\, f\!\left(\frac{z_{j-1} + z_j}{2}\right)
+ \frac{(z_j - z_{j-1})^3}{24}\,f''(\eta_j)\,. \tag{8.132}
\]

Rall’s compartmental model is now seen to be an approximation of a more general compartmental model based on points and not isopotential segments of a dendrite. In particular, the difference between the Rall compartmental model and the general compartmental model may be attributed to errors in approximating the quadratures for r_{j−1,j} and c^{(m)}_j.

Another popular model of dendritic structure assumes contiguous cylinders of uniform cross-section so that A(x) and P(x) are piecewise constant functions of x. In this event, the quadratures for r_{j−1,j} and c^{(m)}_j may be evaluated exactly.

Finally, it should be noted that if information regarding the geometric structure of a dendrite is available at selected locations, then the quadratures for r_{j−1,j} and c^{(m)}_j may be estimated numerically using the trapezoidal rule. If geometrical data is available at uniformly spaced nodes, then Simpson’s rule can be used for improved numerical accuracy.

Uniformly Tapering Dendrites

The general compartmental model of a dendrite raises the possibility of quantifying the errors incurred in representing tapering dendritic geometry by a stepped sequence of uniform contiguous cylinders. Prior to this discussion, some elegant and exact results for inter-nodal resistances are developed for tapering dendrites. The concept of taper, although primitive, involves an element of subtlety that is not immediately apparent. Let A_L be the cross-sectional area of a tapering dendritic section and let the point (a, b, 0) be interior to A_L. A section of a uniformly tapering dendrite may be viewed as a frustum¹⁷ of the cone formed by a pencil of lines drawn from the point V(a, b, H) (H > 0) to ∂A_L, the boundary of A_L. Without loss of generality, suppose that ∂A_L has length P_L and is the parametric curve x = (x_0(u), y_0(u), 0) where u ∈ I, an interval of the real line. Since any point on the line joining V(a, b, H) to the point with coordinates (x_0(u), y_0(u), 0) on ∂A_L has position
\[
\mathbf{x} = \lambda\,(a, b, H) + (1-\lambda)\,(x_0(u), y_0(u), 0)\,, \qquad \lambda \in [0,1]\,,
\]
then the surface of the cone with vertex V(a, b, H) has parametric equations
\[
x = a\lambda + (1-\lambda)x_0(u)\,, \quad
y = b\lambda + (1-\lambda)y_0(u)\,, \quad
z = H\lambda\,, \qquad (u,\lambda) \in I \times [0,1]\,. \tag{8.133}
\]

It follows directly from Green’s theorem that the cross-sectional area of this cone exposed by the plane λ = constant is given by the line integral
\[
A(\lambda) = \frac{1}{2}\oint (x\,dy - y\,dx)
\]

¹⁷Subtle tapered dendritic geometries can be constructed by repositioning the vertex of the generating cone after each section is generated.


\[
= \frac{1}{2}\oint\Bigl[\bigl(a\lambda + (1-\lambda)x_0\bigr)(1-\lambda)\,dy_0
- \bigl(b\lambda + (1-\lambda)y_0\bigr)(1-\lambda)\,dx_0\Bigr]
= \frac{1-\lambda}{2}\oint (a\lambda\,dy_0 - b\lambda\,dx_0)
+ \frac{(1-\lambda)^2}{2}\oint (x_0\,dy_0 - y_0\,dx_0)
\]
where the curve of integration is the perimeter of the cone when λ is constant. Since
\[
\oint dy_0 = 0\,, \qquad \oint dx_0 = 0\,, \qquad
\frac{1}{2}\oint (x_0\,dy_0 - y_0\,dx_0) = A_L\,,
\]
it follows immediately that
\[
A(\lambda) = (1-\lambda)^2 A_L\,, \qquad \lambda \in [0,1]\,, \tag{8.134}
\]

for all tapered dendrites. Similarly P(λ), the perimeter of A(λ), has value
\[
P(\lambda) = \oint ds
= \int_I \sqrt{\left(\frac{\partial x}{\partial u}\right)^2 + \left(\frac{\partial y}{\partial u}\right)^2}\,du
= \int_I (1-\lambda)\sqrt{\left(\frac{\partial x_0}{\partial u}\right)^2 + \left(\frac{\partial y_0}{\partial u}\right)^2}\,du
= (1-\lambda)P_L\,.
\]

Suppose that a tapering section of a dendrite is modelled by the frustum of this cone defined by λ ∈ [0, λ₁] (λ₁ < 1). If the length of the section (the height of the frustum) is L then L = Hλ₁ and the cross-sectional areas of the left and right hand faces of the frustum are A_L and A_R = (1 − λ₁)²A_L respectively from formula (8.134). Since z = Hλ then
\[
\int_0^L \frac{dz}{A(z)} = \int_0^{\lambda_1} \frac{H\,d\lambda}{(1-\lambda)^2 A_L}
= \frac{H\lambda_1}{(1-\lambda_1)A_L} = \frac{L}{\sqrt{A_L A_R}}\,. \tag{8.135}
\]

By contrast, Rall’s compartmental model replaces this integral by its trapezoidal estimate. Using the well known result that the arithmetic mean of two positive numbers is never less than their geometric mean, it follows that
\[
\frac{L}{2}\left[\frac{1}{A_L} + \frac{1}{A_R}\right] \ \geq\ \frac{L}{\sqrt{A_L A_R}}
\]
where the left hand side of this inequality denotes the Rall approximation and the right hand side is the exact result. Thus the classical association between dendritic axial resistance and geometry tends to overestimate axial resistance for dendrites with a pure taper. The suggestion is therefore that the expression (8.135) for dendritic axial resistance captures more accurately the electrical properties of the dendrite and enjoys the advantage that it is exact for dendrites with a pure taper. In particular, the procedures by which dendritic limbs are partitioned into cylinders are redundant for the purpose of assigning axial resistance. Similarly, the lumped capacitance associated with each node of a tapering dendrite is
\[
\int_0^L C_M P(x)\,dx = C_M H \int_0^{\lambda_1} (1-\lambda)P_L\,d\lambda
= C_M\,\frac{(P_L + P_R)L}{2}\,.
\]

The general compartmental model also provides insight as to how real dendritic cross-sectional areas might taper to zero. To interpret the integral expression for r_{j−1,j} at a dendritic tip z_j where A(z_j) = 0, the function A^{-1}(x) must have an integrable singularity. This condition requires that A(x) = O((z_j − x)^k) where 0 < k < 1, which is consistent with bull-nosed dendritic terminals and is inconsistent with dendritic terminals that end on a taper. The clear suggestion is that dendritic limbs may be well modelled by tapering sections except near terminals, where a bull-nosed cross-section should be matched to the tapered section to achieve termination.


Discrete Internodal Input

Discrete internodal current inputs, such as might arise in the consideration of synaptic activity, are now discussed for the general compartmental model. The usual approach is simply to assign a synaptic input to the compartment on which it naturally falls. This procedure ignores the exact location of the synaptic input and the effects that this input may have on neighbouring compartments. The generalised model suggests a procedure for partitioning the effects of synaptic activity between compartments.

Suppose that a synapse is active at site z_s between the nodes z_j and z_{j+1} and let V^{(s)} be the membrane potential at z_s. If I_{j,j+1} is the current leaving z_j in the direction of z_{j+1}, then the balance of currents at the site of the synaptic input requires that I_{j,j+1} + g_s(t)(V^{(s)} − E_α) is the current entering z_{j+1}. The potentials V_j, V^{(s)} and V_{j+1} are therefore connected by the equations
\[
V^{(s)} - V_j = -\frac{I_{j,j+1}}{g_A}\int_{z_j}^{z_s} \frac{ds}{A(s)}\,, \tag{8.136}
\]
\[
V_{j+1} - V^{(s)} = -\frac{I_{j,j+1} + g_s(t)\bigl(V^{(s)} - E_\alpha\bigr)}{g_A}
\int_{z_s}^{z_{j+1}} \frac{ds}{A(s)}\,. \tag{8.137}
\]

By elimination of V^{(s)} between these equations, I_{j,j+1} is determined in terms of V_j and V_{j+1}. In this way, the effect of the synaptic activity at z_s is incorporated into the system of compartmental equations for the membrane potential at the nodes z_j and z_{j+1}. This idea for one synapse can be extended to many synapses and gives rise to a system of linear equations comparable to (8.136, 8.137). However, since synaptic activity occurs stochastically and evolves in time, the system matrix itself is dynamic and the solution process is excessively time consuming.

An approximate way to partition synaptic activity that is both numerically efficient and responsive to variations in the location of synaptic inputs is based on the assumption that synaptic currents are small enough to ensure that the potential distribution between nodes is not significantly different from that based on zero internodal current input. In this event,

\[
V(z) = V_j - \frac{I_{j,j+1}(t)}{g_A}\int_{z_j}^{z} \frac{ds}{A(s)}\,, \qquad
V_{j+1} - V_j = -\frac{I_{j,j+1}(t)}{g_A}\int_{z_j}^{z_{j+1}} \frac{ds}{A(s)}\,.
\]

Eliminating the current I_{j,j+1} between these equations yields
\[
V(z) - E_\alpha = \frac{\bigl(V_j - E_\alpha\bigr)\displaystyle\int_z^{z_{j+1}} \frac{ds}{A(s)}
+ \bigl(V_{j+1} - E_\alpha\bigr)\displaystyle\int_{z_j}^{z} \frac{ds}{A(s)}}
{\displaystyle\int_{z_j}^{z_{j+1}} \frac{ds}{A(s)}}\,. \tag{8.138}
\]

Suppose now that a synapse is active at z_s; then the related input current is modelled by g_s(t)(V(z_s) − E_α). In view of formula (8.138) for V(z), it is clear that synaptic input at z = z_s may be approximately redistributed as f_s g_s(t) at z_j and (1 − f_s)g_s(t) at z_{j+1} where
\[
f_s = \frac{\displaystyle\int_{z_s}^{z_{j+1}} \frac{ds}{A(s)}}
{\displaystyle\int_{z_j}^{z_{j+1}} \frac{ds}{A(s)}}\,.
\]

This approximate result is derivable from the general procedure outlined at the start of this subsection by expanding the solutions for I_{j,j+1} to order O(g_s^2).
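The partition rule can be sketched directly from its definition. The function name, the area profile and the synapse location below are assumptions; on a uniform cylinder the weighting reduces to linear interpolation in distance, which provides a simple check.

```python
# A hedged sketch of the partitioning rule: f_s * gs(t) assigned to z_j and
# (1 - f_s) * gs(t) to z_{j+1}.
def partition_fraction(A, zj, zj1, zs, n=1000):
    """f_s = (integral of ds/A from zs to zj1) / (integral from zj to zj1)."""
    def integral(a, b):
        h = (b - a) / n
        tot = 0.5 * (1.0 / A(a) + 1.0 / A(b))
        for i in range(1, n):
            tot += 1.0 / A(a + i * h)
        return h * tot
    return integral(zs, zj1) / integral(zj, zj1)

# Uniform cylinder: a synapse one quarter of the way along the internode
# sends three quarters of its conductance to the nearer node z_j.
fs = partition_fraction(lambda x: 1.0, 0.0, 1.0, 0.25)
assert abs(fs - 0.75) < 1e-9
```

Note that the weight assigned to z_j is the fraction of the internodal axial resistance lying on the far side of the synapse, so the nearer node always receives the larger share, consistent with the physiology the text describes.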


For uniformly tapering dendrites, it can be shown that
\[
f_s = \frac{A_R^{-1/2} - A_S^{-1/2}}{A_R^{-1/2} - A_L^{-1/2}}
= \frac{P_R^{-1} - P_S^{-1}}{P_R^{-1} - P_L^{-1}}\,.
\]

Time Integration

By restricting attention to the formulation of compartmental models of neurons, this section has so far avoided any confusion that may arise between compartmental models and apparently similar numerical schemes used to treat partial differential equation models of dendrites. The discussion has indicated that compartmental models of neuronal behavior are formulated typically as a system of ordinary differential equations for the dendritic membrane potential together with a set of initial conditions. Although these equations have many linear terms, they are generally nonlinear due to the presence of intrinsic voltage dependent currents.

All subsequent discussion is directed towards the numerical solution of the compartmental equations. The presence of transient solutions in the model equations presents the primary difficulty in their numerical integration. Although these transients are short lived, they impose a global limitation on the size of time step for which numerical schemes based on forward integration are feasible. Such algorithms may require absurdly small integration time steps at all times despite the fact that the true solution may be very well behaved once the transients have decayed. Differential equations with two widely different time scales (transient time and total observation time, for example) are said to be stiff. By way of illustration, consider the numerical solution of the differential equation

\[
\tau \frac{dy}{dt} = -(y-1)\,, \qquad y(0) = A > 1 \tag{8.139}
\]
using the standard Euler scheme
\[
y_{n+1} = y_n - \frac{h}{\tau}(y_n - 1)\,, \qquad y_0 = A
\]
with fixed time step h and where y_n = y(nh). By inspection, the exact solution is y(t) = 1 + (A − 1)e^{−t/τ} and the numerical solution is y_n = 1 + (A − 1)(1 − h/τ)^n. For a given τ, the behavior of the numerical solution can be classified into the three distinct regions h ∈ (0, τ), h ∈ (τ, 2τ) and h ∈ (2τ, ∞).

Stable solution: When 0 < h < τ then 0 < (1 − h/τ) < 1 and the numerical scheme and analytical solution are in agreement. This is the only situation in which the forward scheme reflects accurately the analytical solution.

Oscillatory and bounded: When τ < h < 2τ then −1 < (1 − h/τ) < 0 and the numerical scheme oscillates boundedly, although y_n → 1 as n → ∞; that is, the limits of the numerical and analytical solutions are identical. However, this is the only time at which the analytical and numerical solutions are uniformly close. At finite times, the analytical and numerical solutions differ markedly.

Oscillatory and unbounded: When h > 2τ then (1 − h/τ) < −1 and the numerical scheme oscillates unboundedly. The numerical and analytical solutions are nowhere identical other than at the initial point. Clearly the numerical scheme is unstable in this instance.

In particular, these results apply to the numerical solution at all times, since any re-adjustment of h after time T is equivalent to a new initial value problem with starting value


y(T) at time t = T. This simple example illustrates the archetypal numerical behavior of stiff equations.
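The three regimes can be reproduced in a few lines. The step sizes and initial value below are assumptions chosen to land in each of the intervals (0, τ), (τ, 2τ) and (2τ, ∞) described above.

```python
# The three regimes of forward Euler applied to tau * dy/dt = -(y - 1).
def euler(h, tau=1.0, A=2.0, steps=50):
    y = A
    for _ in range(steps):
        y -= (h / tau) * (y - 1.0)   # y_{n+1} = y_n - (h/tau)(y_n - 1)
    return y

assert abs(euler(0.5) - 1.0) < 1e-6   # 0 < h < tau: monotone decay to 1
assert abs(euler(1.5) - 1.0) < 1e-6   # tau < h < 2tau: bounded oscillation, still -> 1
assert abs(euler(2.5) - 1.0) > 1e6    # h > 2tau: unbounded oscillation
```

The multiplier (1 − h/τ) takes the values 0.5, −0.5 and −1.5 in the three cases, which is exactly the classification given in the text.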

Consider now the numerical scheme
\[
y_{n+1} = y_n - \frac{h}{\tau}(1-\alpha)(y_n - 1) - \frac{h}{\tau}\,\alpha\,(y_{n+1} - 1) \tag{8.140}
\]
for the solution of equation (8.139) in which α ∈ [0, 1]. Clearly α = 0 is the (forward) Euler scheme just discussed, while α = 1 is a fully backward Euler scheme to solve (8.139). Again, it is straightforward to verify that scheme (8.140) has solution
\[
y_n = 1 + (A-1)\left(\frac{\tau + h\alpha - h}{\tau + h\alpha}\right)^{\!n}.
\]

By inspection, this scheme may oscillate unboundedly if α < 1/2 and may oscillate boundedly if 1/2 ≤ α < 1. However, if α = 1, that is, the scheme is fully backward, then
\[
y_n = 1 + (A-1)\left(\frac{\tau}{\tau + h}\right)^{\!n}
\]
and the scheme is unconditionally stable irrespective of the choice of h. The key feature of algorithm (8.140) is that errors in y_n are reduced in y_{n+1} because of the contraction property of the multiplier τ/(τ + h). It is the unconditional stability of backward integration schemes that renders them most suitable for the integration of dendritic compartmental equations. Suppose that n steps of size h take the solution to time t; then t = nh and
\[
y_n = y_n(t) = 1 + (A-1)\left(\frac{\tau}{\tau + h}\right)^{\!n}
= 1 + (A-1)\left(1 + \frac{t}{\tau n}\right)^{\!-n}
\]
is now the estimate of y(t) using the numerical scheme. Using the standard result that (1 + x/n)^{−n} → e^{−x} as n → ∞, it follows that y_n(t) → y(t), the exact solution at time t, as n → ∞.
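Scheme (8.140) is a sketch away from code. Because the test equation is linear, the implicit step can be solved for y_{n+1} in closed form; the step size and initial value below are assumptions chosen to be far too large for the forward scheme.

```python
# Hedged implementation of the alpha-weighted scheme (8.140), rearranged so that
# (1 + h*alpha/tau) * y_{n+1} = y_n - (h/tau)(1-alpha)(y_n - 1) + h*alpha/tau.
def theta_step(y, h, alpha, tau=1.0):
    return (y - (h / tau) * (1 - alpha) * (y - 1.0) + (h * alpha / tau)) \
           / (1.0 + h * alpha / tau)

def run(h, alpha, A=5.0, steps=40, tau=1.0):
    y = A
    for _ in range(steps):
        y = theta_step(y, h, alpha, tau)
    return y

# Fully backward (alpha = 1) is stable even for a step ten times tau ...
assert abs(run(10.0, 1.0) - 1.0) < 1e-6
# ... while the forward scheme (alpha = 0) with the same step diverges.
assert abs(run(10.0, 0.0) - 1.0) > 1e6
```

With α = 1 the per-step multiplier is τ/(τ + h) = 1/11, so forty steps crush the initial transient, exactly the contraction property the text identifies.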

In traditional numerical work involving the integration of stiff equations, it is common practice to use commercial software libraries such as NAG or IMSL simply because they provide high quality adaptive bootstrapping schemes (variable order), the best known of which is undoubtedly due to Gear (1971). However, it would be misleading to suggest that the compartmental equations arising in dendritic modelling can be classified as traditional, for two straightforward reasons. Firstly, the equations are sparse since compartments interact with nearest neighbours only and, secondly, the synaptic activity on a dendrite is stochastic and therefore integration must proceed in small time steps to appreciate the statistics of this activity. It is primarily for these two reasons that numerical methods have evolved in tandem with the dendritic compartmental models themselves (see Mascagni, 1989; Hines and Carnevale, 1997), whilst commercial implementations of stiff integrators have been effectively sidelined. Suppose that the compartmentalised equations for a dendrite have the form

\[
\frac{dV}{dt} = AV + F(V)\,, \qquad V(0) = V_0\,, \tag{8.141}
\]
where V = (v_0(t), \ldots, v_n(t))^T are the membrane potentials at nodes z_0, \ldots, z_n respectively and F is an (n + 1)-vector whose components are non-linear functions of the components of V, that is, v_0, \ldots, v_n (typically arising from intrinsic and voltage dependent currents). In particular, A is an (n + 1) × (n + 1) matrix whose entries may be dependent on time but are independent of v_0, \ldots, v_n. Equation (8.141) is typically integrated using the backward integration scheme
\[
V_{k+1} = V_k + h\bigl(A_{k+1}V_{k+1} + F(V_{k+1})\bigr)
\]


which may be re-arranged to form

(I − hAk+1)Vk+1 = Vk + hF (Vk+1) . (8.42)

Compartmental models enjoy the property that A_{k+1} has real and negative eigenvalues, so that (I − hA_{k+1}) has real eigenvalues all exceeding unity. Thus the inverse of (I − hA_{k+1}) exists and is a contraction mapping, being a generalisation of the scalar contraction τ/(τ + h). The algorithm (8.42) is now implemented in the form

V_{k+1} = (I − hA_{k+1})^{−1} (V_k + h F(V_{k+1})) .   (8.43)

Given V_k, values of V_{k+1} are first predicted and then corrected, while the contraction property of (I − hA_{k+1})^{−1} ensures numerical stability. In practice, the tri-diagonal predominance is such that no matrix inverse is formally computed in the execution of the iterative scheme (8.43).
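A minimal sketch of the predict-and-correct iteration (8.43) might look as follows; the tridiagonal matrix A, the cubic nonlinearity F, the step size h and the correction count are all illustrative assumptions, and a production code would exploit the tridiagonal structure (e.g. the Thomas algorithm) rather than call a dense solver.

```python
import numpy as np

def backward_step(V, A, F, h, n_corrections=3):
    """One step of scheme (8.43): solve (I - h A) W = V + h F(W),
    with the W inside F supplied by a predict/correct (fixed-point) sweep."""
    n = len(V)
    M = np.eye(n) - h * A              # (I - h A); tridiagonal in practice
    W = V.copy()                       # predictor: start from V_k
    for _ in range(n_corrections):
        W = np.linalg.solve(M, V + h * F(W))   # corrector sweep
    return W

# Illustrative passive coupling: tridiagonal A with real negative eigenvalues
n, h = 20, 0.01
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
F = lambda V: -0.1 * V**3              # hypothetical intrinsic (dissipative) current
V = np.ones(n)
for _ in range(100):
    V = backward_step(V, A, F, h)
```

Because the eigenvalues of A are negative, (I − hA)^{−1} contracts each mode and the iteration remains stable for any h, in line with the discussion above.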

The Spectral Methodology

There are a number of neurophysiology packages (see De Schutter, 1992) that can be used to investigate dendritic trees modelled by equation (8.5). These packages almost universally resolve the spatial dependence of the tree potential using the method of finite differences. However, the ease with which finite difference schemes can be programmed for dendritic trees should now be counterbalanced against the poor numerical resolution inherent in such algorithms and the need to embrace the much larger and more complex dendritic structures required in contemporary neurophysiology. Nowadays finite difference schemes in applied mathematics have been largely superseded by finite element and spectral algorithms for intensive calculations. The latter method is particularly suitable for the description of dendritic trees. Furthermore, spectral techniques usually enjoy exponential convergence to the analytical solution, unlike finite difference methods, which converge only algebraically. Canuto, Hussaini, Quarteroni and Zang (1988) make this comparison for V(x, t), the solution of the initial boundary value problem

∂V/∂t = ∂²V/∂x² ,   (x, t) ∈ (0, 1) × (0, ∞)   (8.44)

with boundary and initial data

V(0, t) = V(1, t) = 0 ,   V(x, 0) = sin πx .   (8.45)

The exact solution is V(x, t) = e^{−π²t} sin πx, while the spectral solution based on Chebyshev polynomials can be shown to be

V(x, t) = 2 ∑_{k=1}^{∞} sin(kπ/2) J_k(π) e^{−π²t} T_k(x)   (8.46)

where T_k(x) is the Chebyshev polynomial of order k (to be defined shortly) and J_k(x) is the Bessel function of the first kind of order k. It can be demonstrated that the truncated series (8.46) converges to V(x, t) exponentially. Canuto et al. (1988) quote maximum errors for the Chebyshev collocation algorithm varying from 4.58 × 10⁻⁴ with 8 polynomials to 2.09 × 10⁻¹¹ with 16 polynomials. Second order finite differences with 16 degrees of freedom achieves an accuracy of 0.135.
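The accuracy of the truncated series (8.46) can be checked directly. The sketch below evaluates J_k(π) through the standard integral representation J_k(x) = (1/2π)∫₀^{2π} cos(kt − x sin t) dt (for which uniform trapezoidal sampling of the periodic integrand is spectrally accurate) and compares a 16-term series at t = 0 against the initial data sin πx; the grid size and term count are illustrative choices.

```python
import numpy as np

def bessel_j(k, x, m=512):
    # J_k(x) = (1/(2*pi)) * integral over one period of cos(k*t - x*sin(t))
    t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    return np.mean(np.cos(k * t - x * np.sin(t)))

def v_series(x, t, n_terms=16):
    # truncated series (8.46); T_k(x) evaluated as cos(k*arccos(x))
    v = np.zeros_like(x)
    for k in range(1, n_terms + 1):
        v += np.sin(k * np.pi / 2.0) * bessel_j(k, np.pi) * np.cos(k * np.arccos(x))
    return 2.0 * np.exp(-np.pi**2 * t) * v

x = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(v_series(x, 0.0) - np.sin(np.pi * x)))
```

With 16 terms the maximum deviation from sin πx is already far below single precision, consistent with the exponential convergence quoted above.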

Typically g_L/g_A, the ratio of membrane leakage conductance to core conductance, and a/l², the ratio of dendritic radius to length squared, are both order one. Hence the operational conditions of the cable equation (8.5) are very similar to those quoted here for the heat equation (8.44). In practice, the vast majority of C and Fortran compilers in the


marketplace cannot take advantage of the exponential convergence inherent in a spectral algorithm, although there is now increasing pressure to remedy this situation. In the current environment, we believe that spectral algorithms in dendritic modelling are probably best deployed to reduce the number of variables needed to capture the essence of each limb of a dendritic tree, thereby freeing resources to be used either to treat larger dendrites or alternatively to do faster computations on a given dendrite.

Mathematical Preliminaries

The family of Chebyshev polynomials T_0(z), T_1(z), …, T_n(z), … is defined by the identity

T_n(cos ϑ) = cos(nϑ) ,   n = 0, 1, … .   (8.47)

For example, T_0(z) = 1, T_1(z) = z and T_2(z) = 2z² − 1 are the first three Chebyshev polynomials. By replacing ϑ by π − ϑ in (8.47), and by substituting ϑ = 0 and ϑ = π in definition (8.47), it follows that

T_n(−z) = (−1)^n T_n(z) ,   z ∈ [−1, 1] ,
T_n(1) = 1 ,   T_n(−1) = (−1)^n ,   n = 0, 1, … .   (8.48)

Furthermore T′_n(−1) and T′_n(1) can be found by differentiating definition (8.47) with respect to ϑ and then recognising that

T′_n(−1) = lim_{ϑ→π} [n sin nϑ / sin ϑ] = (−1)^{n+1} n² ,   T′_n(1) = lim_{ϑ→0} [n sin nϑ / sin ϑ] = n² .   (8.49)

Results (8.48) and (8.49) are used in the treatment of boundary conditions involving potentials and currents respectively. From definition (8.47), it can also be demonstrated in a straightforward way that Chebyshev polynomials possess the properties

2 T_n(z) T_m(z) = T_{m+n}(z) + T_{|n−m|}(z) ,   (8.50)

2 T_n(z) = T′_{n+1}(z)/(n + 1) − T′_{n−1}(z)/(n − 1) ,   n > 1 ,   (8.51)

together with the orthogonality condition

∫_{−1}^{1} T_n(z) T_m(z) / √(1 − z²) dz = { 0 (n ≠ m) ;  π (m = n = 0) ;  π/2 (n = m ≥ 1) } .   (8.52)

A large class of important functions have Chebyshev series expansions. The situation is analogous to Fourier series, and just as with Fourier series, the orthogonality condition is used to calculate the coefficients of a Chebyshev series expansion from its parent function. Suppose that f is a function defined on [−1, 1] and is square integrable with respect to the weight function w(z) = (1 − z²)^{−1/2}, that is,

∫_{−1}^{1} f²(z) / √(1 − z²) dz < ∞ ,

then f has the Chebyshev series expansion

f(z) = ∑_{k=0}^{∞} f_k T_k(z) ,   (8.53)


whose coefficients f_k are formed from (8.53) by multiplying both sides of this equation by T_n(z)/√(1 − z²) and then integrating over [−1, 1]. It follows immediately that

f_0 = (1/π) ∫_{−1}^{1} f(z) / √(1 − z²) dz ,   f_n = (2/π) ∫_{−1}^{1} f(z) T_n(z) / √(1 − z²) dz ,   n ≥ 1 .   (8.54)

The non-periodic nature of Chebyshev series makes them an ideal way to represent solutions of partial differential equations whose boundary conditions are non-periodic.

Suppose now that the representation (8.53) involves only (N + 1) polynomials, so that f_{N+1} = f_{N+2} = ⋯ = 0. In this case f(z) has the Chebyshev series approximation

f(z) = ∑_{k=0}^{N} f_k T_k(z) .   (8.55)

Superficially it would appear that the series (8.55) is just a truncated version of (8.53) in which each f_k approximates the corresponding coefficient in the infinite expansion. In fact, the truncated series (8.55) is more accurately viewed as a compromise description of f(z) in which the coefficients f_0, …, f_N also contain the aliased effects of the residual series f_{N+1} T_{N+1} + ⋯ in expansion (8.53). For the finite Chebyshev expansion, the expression (8.54) for the coefficient f_n is replaced by

f_n = (1/c_n) ∫_{−1}^{1} f(z) T_n(z) / √(1 − z²) dz ,   c_n = { π (n = 0, N) ;  π/2 (0 < n < N) } .

Discrete Chebyshev Transform

The computation of f_0, f_1, …, f_N from f(z) is now addressed. Given a set of N + 1 points (z_0, F_0), (z_1, F_1), …, (z_N, F_N) where F_k = f(z_k), the coefficients f_0, f_1, …, f_N may be evaluated as the solution of the system of N + 1 linear equations

F_k = ∑_{j=0}^{N} f_j T_j(z_k) ,   k = 0, 1, …, N .

The dissection z_k = cos(kπ/N) (0 ≤ k ≤ N), given by the nodes of a Gauß–Lobatto quadrature of order N (see Davis and Rabinowitz, 1983), is often favoured and leads to the discrete Chebyshev transform. This states that if z_0, z_1, …, z_N are points in [−1, 1] defined by z_k = cos(πk/N) and if F_k = f(z_k), then

F_j = ∑_{k=0}^{N} f_k cos(πjk/N) ,   j = 0, 1, …, N ,   (8.56)

f_k = (2/(N c_k)) ∑_{j=0}^{N} (1/c_j) F_j cos(πjk/N) ,   k = 0, 1, …, N ,   (8.57)

where

c_0 = c_N = 2 ,   c_1 = c_2 = ⋯ = c_{N−1} = 1 .   (8.58)

Of course, other families of orthogonal polynomials possess similar discrete transforms. However, the discrete Chebyshev transform is singled out by the fact that, with minor modification, it can be recast in the framework of the Fast Fourier Transform (FFT), so that the transition from coefficient (spectral) space to value (physical) space and vice versa is performed with high precision and speed. This is one of the major ingredients of the Chebyshev magic.
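As a sketch, the transform pair (8.56)–(8.57) can be written with direct O(N²) sums (rather than the FFT formulation) and round-tripped; the choice N = 8 and the test function z³ = (3T_1(z) + T_3(z))/4 are illustrative.

```python
import numpy as np

def cheb_coeffs(F):
    """Forward transform (8.57): nodal values F_j at z_j = cos(pi*j/N) -> coefficients f_k."""
    N = len(F) - 1
    c = np.ones(N + 1); c[0] = c[N] = 2.0          # weights (8.58)
    j = np.arange(N + 1)
    return np.array([2.0 / (N * c[k]) * np.sum(F / c * np.cos(np.pi * j * k / N))
                     for k in range(N + 1)])

def cheb_values(f):
    """Inverse transform (8.56): coefficients f_k -> nodal values F_j."""
    N = len(f) - 1
    k = np.arange(N + 1)
    return np.array([np.sum(f * np.cos(np.pi * j * k / N)) for j in range(N + 1)])

N = 8
z = np.cos(np.pi * np.arange(N + 1) / N)           # Gauss-Lobatto nodes
f = cheb_coeffs(z**3)                              # z^3 = (3*T_1 + T_3)/4
```

Since z³ has degree 3 ≤ N, there is no aliasing and the transform returns the exact coefficients 3/4 and 1/4.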


Function Computation

The discrete Chebyshev transform, when implemented by means of the fast Fourier transform, provides an efficient way to move from coefficient space to value space provided function values at z_0, z_1, …, z_N are needed. However, it is often necessary to compute function values at points lying between the Gauß–Lobatto nodes. These can be computed easily from (8.55) by first determining the angle ϑ ∈ [0, π] corresponding to z = cos ϑ and then recognising that

f(z) = f(cos ϑ) = ∑_{k=0}^{N} f_k cos kϑ .   (8.59)

Alternatively, if such function values are required frequently (e.g. the treatment of spatially distributed synaptic inputs), another more sophisticated (and also more efficient) algorithm is appropriate.

Let y_0, y_1, …, y_{N+2} be a sequence defined by the downward iteration

y_k = 2z y_{k+1} − y_{k+2} + f_k ,   k = N, N−1, …, 0 ,   y_{N+1} = y_{N+2} = 0 ,   (8.60)

where f_k is the coefficient of T_k(z) in the Chebyshev series approximation of f(z). From property (8.50), it follows that 2z T_k(z) = T_{k+1}(z) + T_{k−1}(z). This result, together with the definition of f_k in terms of y_k, when used in expression (8.55) for f(z), yields

f(z) = ∑_{k=0}^{N} f_k T_k(z) = ∑_{k=0}^{N} (y_k − 2z y_{k+1} + y_{k+2}) T_k(z)

     = ∑_{k=0}^{N} y_k T_k(z) − 2z y_1 − ∑_{k=1}^{N} y_{k+1} (2z T_k(z)) + ∑_{k=0}^{N} y_{k+2} T_k(z)

     = ∑_{k=0}^{N} y_k T_k(z) − 2z y_1 − ∑_{k=1}^{N} y_{k+1} (T_{k+1}(z) + T_{k−1}(z)) + ∑_{k=0}^{N} y_{k+2} T_k(z)

     = ∑_{k=0}^{N} y_k T_k(z) − 2z y_1 − ∑_{k=2}^{N+1} y_k T_k(z) − ∑_{k=0}^{N−1} y_{k+2} T_k(z) + ∑_{k=0}^{N} y_{k+2} T_k(z)

     = y_0 − z y_1 ,

bearing in mind that y_{N+1} = y_{N+2} = 0. This iterative computation of f(z) is based on Clenshaw's algorithm (see Clenshaw, 1962). Although more difficult to implement than formula (8.59) for f(z), it never requires the determination of T_k(z), nor does it require the evaluation of the cosine function. Furthermore, the iterative scheme (8.60) is stable provided |z| ≤ 1 (always true!). Thus if f needs to be evaluated frequently at points that are not Gauß–Lobatto nodes, then the iterative scheme (8.60) is recommended and leads to the final result f(z) = y_0 − z y_1.
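A minimal Python rendering of the recurrence (8.60); the coefficient lists and evaluation points below are illustrative checks, f(z) = T_0 + 2T_1 + 3T_2 = 6z² + 2z − 2 and f(z) = T_3(z) = 4z³ − 3z.

```python
def clenshaw(coeffs, z):
    """Evaluate f(z) = sum_k coeffs[k]*T_k(z) via the downward recurrence (8.60)."""
    y1 = y2 = 0.0                      # y_{N+1} = y_{N+2} = 0
    for fk in reversed(coeffs):        # k = N, N-1, ..., 0
        y0 = 2.0 * z * y1 - y2 + fk
        y2, y1 = y1, y0
    return y1 - z * y2                 # f(z) = y_0 - z*y_1

value = clenshaw([1.0, 2.0, 3.0], 0.5)     # 6*(0.25) + 2*(0.5) - 2 = 0.5
```

Note that, exactly as claimed above, no Chebyshev polynomial and no cosine is ever evaluated: the whole computation is multiply–add arithmetic.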

Spectral Differentiation

Suppose that f is a differentiable function of z on (−1, 1); then f′(z) can be estimated from the representation (8.55). Indeed, it is obvious from property (8.51) that

df/dz = ∑_{k=0}^{N} f_k dT_k(z)/dz = ∑_{k=0}^{N} f_k^{(1)} T_k(z) ,   (8.61)

where the coefficients f_k^{(1)} are generated from the f_k by the iterative formula

f_k^{(1)} = f_{k+2}^{(1)} + 2(k + 1) f_{k+1} ,   1 ≤ k ≤ N − 1 ,
2 f_0^{(1)} = f_2^{(1)} + 2 f_1 ,   f_N^{(1)} = f_{N+1}^{(1)} = 0 .   (8.62)


This result is central to the numerical solution of partial differential equations using Chebyshev polynomials. Its justification relies on the fact that T_k(z) can be replaced by derivatives of Chebyshev polynomials from property (8.51). Using this idea together with the trivial observation that T′_0(z) = 0, the manipulation of formula (8.61) gives

∑_{k=1}^{N} 2 f_k T′_k(z) = 2 f_0^{(1)} T_0(z) + 2 f_1^{(1)} T_1(z) + ∑_{k=2}^{N} f_k^{(1)} [T′_{k+1}(z)/(k + 1) − T′_{k−1}(z)/(k − 1)]

  = 2 f_0^{(1)} T′_1(z) + (f_1^{(1)}/2) T′_2(z) + ∑_{k=3}^{N+1} f_{k−1}^{(1)} T′_k(z)/k − ∑_{k=1}^{N−1} f_{k+1}^{(1)} T′_k(z)/k

  = f_0^{(1)} T′_1(z) + ∑_{k=1}^{N} (f_{k−1}^{(1)} − f_{k+1}^{(1)}) T′_k(z)/k ,

bearing in mind that f_N^{(1)} = f_{N+1}^{(1)} = 0. Equating the coefficients of T′_k(z) on both sides of this identity yields the result

f_{k−1}^{(1)} − f_{k+1}^{(1)} = 2k f_k ,   2 ≤ k ≤ N ,   2 f_0^{(1)} − f_2^{(1)} = 2 f_1 .

The iterative relationship (8.62) is a practical way to calculate f_0^{(1)}, …, f_{N−1}^{(1)}, the coefficients of the Chebyshev series for f′(z). Furthermore, the algorithm can be repeated to compute f_0^{(2)}, …, f_{N−2}^{(2)}, the Chebyshev coefficients of f″(z).
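The recurrence (8.62) translates directly into a few lines of Python; the test case differentiates f(z) = T_3(z), whose derivative is T′_3 = 3T_0 + 6T_2 (an illustrative check).

```python
import numpy as np

def cheb_deriv(f):
    """Coefficients of f'(z) from coefficients of f(z) via recurrence (8.62)."""
    N = len(f) - 1
    d = np.zeros(N + 2)                    # d[N] = d[N+1] = 0
    for k in range(N - 1, 0, -1):          # k = N-1, ..., 1
        d[k] = d[k + 2] + 2.0 * (k + 1) * f[k + 1]
    d[0] = 0.5 * (d[2] + 2.0 * f[1])       # 2 f_0^(1) = f_2^(1) + 2 f_1
    return d[:N + 1]

d = cheb_deriv(np.array([0.0, 0.0, 0.0, 1.0]))   # f = T_3, so f' = 3*T_0 + 6*T_2
```

Applying `cheb_deriv` twice yields the coefficients of f″(z), exactly as stated above.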

Boundary Conditions

Of course, once the Chebyshev coefficients of f′(z) are known, then f′(z) can be evaluated for every z ∈ [−1, 1] by summation of the series, and can be computed very efficiently at the Chebyshev nodes using the inverse discrete Chebyshev transform. The endpoints z = 1 and z = −1 are particularly important since f(1), f(−1), f′(1) and f′(−1) are ubiquitous ingredients of boundary conditions. It is a piece of tedious algebra to verify that

f′(1) = [(2N² + 1)/6] F_0 + ∑_{j=1}^{N−1} (−1)^j F_j / sin²(πj/2N) + F_N/2 ,   (8.63)

f′(−1) = −F_0/2 − ∑_{j=1}^{N−1} (−1)^j F_j / cos²(πj/2N) − [(2N² + 1)/6] F_N ,   (8.64)

in which F_k = f(z_k). Thus boundary conditions involving derivatives can be used to extract information concerning boundary values of the function f itself. For example, suppose that a cable is sealed at z = 1 so that the potential V(z, t) has zero gradient at z = 1. In view of identity (8.63), the sealed end condition is arranged by ensuring that V_0(t), the potential at z = 1, satisfies

∂V(1, t)/∂z = [(2N² + 1)/6] V_0(t) + ∑_{j=1}^{N−1} (−1)^j V_j(t) / sin²(πj/2N) + V_N(t)/2 = 0 ,

from which it follows that

V_0(t) = −[6/(2N² + 1)] ∑_{j=1}^{N−1} (−1)^j V_j(t) / sin²(πj/2N) − [3/(2N² + 1)] V_N(t) .   (8.65)
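A quick numerical check of (8.63) and (8.64): the formulas are exact for polynomials of degree at most N, so applying them to the nodal values of f(z) = z² must return f′(±1) = ±2. (The endpoint signs as printed assume N even, the usual choice in FFT-based implementations; N = 8 below is an illustrative value.)

```python
import numpy as np

def endpoint_derivatives(F):
    """f'(1) and f'(-1) from nodal values F_j = f(cos(pi*j/N)) via (8.63)-(8.64)."""
    N = len(F) - 1
    j = np.arange(1, N)
    sgn = (-1.0) ** j
    d_plus = ((2 * N**2 + 1) / 6.0) * F[0] \
        + np.sum(sgn * F[1:N] / np.sin(np.pi * j / (2 * N)) ** 2) + 0.5 * F[N]
    d_minus = -0.5 * F[0] \
        - np.sum(sgn * F[1:N] / np.cos(np.pi * j / (2 * N)) ** 2) \
        - ((2 * N**2 + 1) / 6.0) * F[N]
    return d_plus, d_minus

N = 8
z = np.cos(np.pi * np.arange(N + 1) / N)
dp, dm = endpoint_derivatives(z**2)        # f(z) = z^2, so f'(z) = 2z
```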


Representation of Synaptic Input

It has already been recognised that synaptic current inputs into a dendrite are commonly described by a time dependent conductance whose spatial behavior is delta-like. For example, the activation of a synaptic input at time τ and location x = X on a dendrite of length L involving species S_α is modelled by the current density g_syn(t − τ)(V(X, t) − E_α) δ(x − X) in the cable equation for that limb, where typically

g_syn(t) = { 0 (t ≤ 0) ;  G^{(α)} (t/T) e^{(1−t/T)} (t > 0) .

Suppose that

g_syn(t) δ(x − X)(V(x, t) − E_S) = g(z, t) = g_syn(t) ∑_{k=0}^{N} g_k T_k(z) ;   (8.66)

then the coefficients g_0, g_1, …, g_N are given by the formulae

g_k = (1/c_k) ∫_{−1}^{1} g(z) T_k(z) / √(1 − z²) dz ,   c_0 = c_N = π ,   c_k = π/2 (0 < k < N) .   (8.67)

However, the computation of g(z) requires some formal recognition of the properties of delta functions. In all subsequent analyses the mapping z = −1 + 2x/L will be used to associate the point x on a dendritic segment of length L (i.e. x ∈ [0, L]) with the point z ∈ [−1, 1]. Under this mapping, X ∈ [0, L] on the real dendrite maps to Z ∈ [−1, 1] where Z = −1 + 2X/L. Thus

δ(x − X) = δ(L(1 + z)/2 − L(1 + Z)/2) = δ(L(z − Z)/2) = (2/L) δ(z − Z) .

Suppose that a synaptic input becomes active at time τ; then it is clear from (8.66) that

g(z) = (2 g_syn(t − τ)/L) δ(z − Z)(V(z, t) − E_S) ,   (8.68)

where V(z, t) is used unambiguously to mean V(x(z), t). It now follows from (8.67) that

g_k = (2 g_syn(t − τ)/(L c_k)) (V(Z, t) − E_α) T_k(Z)/√(1 − Z²)
    = (2 g_syn(t − τ)/(L c_k)) (V(cos Θ, t) − E_α) cos kΘ / sin Θ ,   (8.69)

where Z = cos Θ. These expressions therefore give the coefficients of the Chebyshev expansion of a synaptic input occurring at x = X.
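The two forms in (8.69) are the same quantity written in z and in Θ; a small sketch confirms this and evaluates an illustrative alpha-function conductance (the values of G, T, L, Z and the voltage deviation below are arbitrary choices for the check, with activation time τ taken as zero).

```python
import numpy as np

G, T, L = 1.0, 2.0, 100.0           # illustrative peak conductance, time scale, length

def g_syn(t):
    # alpha-function conductance: zero for t <= 0, peak value G at t = T
    return np.where(t > 0.0, G * (t / T) * np.exp(1.0 - t / T), 0.0)

def coeff_theta_form(k, Z, t, v_minus_e):
    """Coefficient g_k of (8.69) in the Theta form, Z = cos(Theta), 0 < k < N."""
    ck = np.pi / 2.0
    theta = np.arccos(Z)
    return 2.0 * g_syn(t) / (L * ck) * v_minus_e * np.cos(k * theta) / np.sin(theta)

Z, t, dv, k = 0.3, 1.5, -10.0, 4
# z-form of (8.69): T_k(Z)/sqrt(1 - Z^2), with T_k(Z) = cos(k*arccos(Z))
z_form = 2.0 * g_syn(t) / (L * (np.pi / 2.0)) * dv \
    * np.cos(k * np.arccos(Z)) / np.sqrt(1.0 - Z**2)
```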

Solution Procedure

The cable equation (8.5) may be represented in the general format

∂V/∂t = L(V)   (8.70)

where L is an operator involving t and x but only spatial derivatives. The dendrite x ∈ [0, L] is first mapped into [−1, 1] by the formula z = −1 + 2x/L. Similarly, each occurrence


of the differential operator ∂/∂x in L is replaced by (2/L) ∂/∂z. Suppose now that the modified potential V(z, t) is approximated by the discrete Chebyshev series

V(z, t) = ∑_{k=0}^{N} v_k(t) T_k(z) ,   (z, t) ∈ [−1, 1] × (0, ∞) ;   (8.71)

then

∂V(z, t)/∂t = ∑_{k=0}^{N} (dv_k/dt) T_k(z) .   (8.72)

Two distinct solution strategies are now evident: either solve for v_0(t), …, v_N(t), the coefficients of the spectral representation of V(z, t), or solve for V(z_0, t), …, V(z_N, t), the values of the potential at the Chebyshev nodes z_0, …, z_N. The former is commonly called a "spectral" or "pure spectral" approach because it seeks to find the coefficients of the spectral series for V rather than function values. The latter is often called a "pseudo-spectral" or "collocation" approach because it seeks to solve for function values at the nodes of a dissection. Briefly, the former is often harder to implement and arguably less physical in its representation of the boundary conditions. On the other hand, it executes faster and is more powerful in the respect that it can deal more effectively with diverging solutions. However, the well behaved nature of dendritic potentials, allied with the number and complexity of the boundary conditions for the interconnected limbs of a dendritic tree, suggests that the collocation approach is probably the better strategy in this instance.

Spectral Algorithm

For completeness, the basic construction of the spectral algorithm is now described for equation (8.70). The potential and its time derivative are first represented in the form described in equations (8.71) and (8.72). The latter equation is now multiplied by (1 − z²)^{−1/2} T_j(z) and integrated over [−1, 1] to obtain

dv_j/dt = (2/(π c_j)) ∫_{−1}^{1} (∂V/∂t) T_j(z) / √(1 − z²) dz = (2/(π c_j)) ∫_{−1}^{1} L(V) T_j(z) / √(1 − z²) dz ,   j = 0, 1, …, N .   (8.73)

Each integer value of j from j = 0 to j = N − 2 generates an ordinary differential equation connecting the coefficients v_0, v_1, …, v_N. However, the equations in (8.73) arising from j = N − 1 and j = N misrepresent v_{N−1} and v_N due to missing contributions from Chebyshev polynomials of order higher than N. These two equations are therefore replaced by boundary conditions. In fact, two boundary conditions are used to extract algebraic expressions for v_{N−1} and v_N in terms of the other spectral coefficients v_0, v_1, …, v_{N−2}.

Collocation Algorithm

The collocation solution of equation (8.70) requires that

∂V(z_j, t)/∂t + J(z_j, t) = L(V)|_{z=z_j}   (8.74)

be true arithmetically at all internal points z_1, …, z_{N−1} of the dissection of (−1, 1). Boundary conditions are solved algebraically for V(z_0, t) and V(z_N, t). It should be stated here that theoretical analyses leading to the formulation of equations (8.73) and (8.74) often present these derivations in terms of trial functions (Chebyshev polynomials in this case) and test functions (delta functions here). Of course, such analyses are essential in an appreciation of the errors inherent in the numerical procedures but have negligible impact on the practical implementation of the algorithms.


Irrespective of implementation methodology (pure spectral or pseudo-spectral), boundary conditions are applied first, before the computation of L(V) is performed. Furthermore, the treatment of the boundary conditions is nearly always stabilizing in the sense that computational errors in v_{N−1}, v_N, or alternatively V(z_0, t), V(z_N, t), are multiplied by a factor less than unity and therefore are controlled.

Two passive non-tapering prototype dendritic trees are now used to illustrate the method. Each dendritic configuration has a closed form analytical solution that will be extracted and compared with that arising from the numerical solution of the underlying partial differential equations. Of course, the numerical strategy is equally implementable for non-uniform active dendrites, whereas the analytical methodology breaks down for tapered dendrites.¹⁸

Spectral and Exact Solution of an Unbranched Tree

Consider a simple dendrite consisting of a uniform single limb of radius a and length l attached to a soma of area A_S. The dendrite is assumed to be passive, and therefore described by the linear cable equation. In the absence of active synaptic input current J_S, synaptic current activity is simulated by the time dependent alpha function input current J_in(x, t) = G(t/T)e^{(1−t/T)} δ(x − f l), acting at location x = f l, f ∈ (0, 1), and turned on at t = 0. The configuration is illustrated in figure 13.

Fig. 13. A passive dendritic limb, attached to a soma at x = 0 and sealed at x = l (∂V(l, t)/∂x = 0), is stimulated by an injected "alpha" function pulse of current (t/T)G e^{(1−t/T)} δ(x − f l) at x = f l, where 0 < f < 1.

The deviation from the resting potential of the passive dendrite in figure 13 is modelled by the partial differential equation

τ ∂V/∂t + V + J_in/g_L = (a g_A/(2 g_L)) ∂²V/∂x² ,   x ∈ (0, l) .   (8.75)

Equation (8.75) is solved with boundary conditions

∂V(l, t)/∂x = 0 ,   τ_S ∂V(0, t)/∂t + V(0, t) = (g_A/g_S)(πa²/A_S) ∂V(0, t)/∂x   (8.76)

and initial condition

V(x, 0) = 0 .   (8.77)

In order to simultaneously simplify the representation of the problem for analytical purposes and also develop a formulation of the problem suitable for numerical solution using

18. This is perhaps an overstatement of reality for uniformly tapered dendrites. In fact, the trigonometric functions appearing in the analytical solution need to be replaced by Bessel functions. The entire operation becomes more intractable and requires sophisticated mathematics.


spectral methods, it is convenient to non-dimensionalise time, length and voltage in equations (8.75)–(8.77) by the coordinate transformations t* = t/τ, z = −1 + 2x/l and ϕ = V/V_0, where V_0 = 2Gσe/(l g_L) and σ = τ/T. The corresponding non-dimensionalised equation, boundary conditions and initial condition now become

∂ϕ/∂t + ϕ + i(z, t) = β ∂²ϕ/∂z² ,   z ∈ (−1, 1) ,   (8.78)

∂ϕ(1, t)/∂z = 0 ,   (8.79)

χ (ε ∂ϕ(−1, t)/∂t + ϕ(−1, t)) = ∂ϕ(−1, t)/∂z ,   (8.80)

ϕ(z, 0) = 0 ,   (8.81)

in which superscript stars have been suppressed but it is understood that all variables are non-dimensional. Although i(z, t) = t e^{−σt} δ(z − (2f − 1)) in this application, the analysis is pursued generally without explicit reference to this particularly simple form. The non-dimensional parameters σ, β, χ and ε are defined in terms of the biophysical parameters of the problem by

σ = τ/T ,   β = (2a/l²)(g_A/g_L) ,   χ = (l/2)(g_S/g_A)(A_S/(πa²)) ,   ε = τ_S/τ .

The parameter σ is the ratio of the time constant of the dendritic membrane to that of the input current, while ε is the ratio of the somal and dendritic time constants. In the final specification of the problem it will be assumed that ε = 1 but, for the time being, the analysis proceeds on the basis that ε is an arbitrary positive constant.
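As a sanity check on these definitions, the identity 2γ = βχ = (g_S/g_L)(A_S/(πal)) quoted later in (8.88) follows directly; the biophysical values below are arbitrary illustrative numbers.

```python
import numpy as np

# illustrative biophysical values (arbitrary consistent units)
a, l, AS = 2e-4, 0.1, 1e-5          # dendritic radius, length, somal area
gA, gL, gS = 1.0, 5e-5, 8e-5        # axial, leakage, somal conductances
tau, tauS, T = 10.0, 10.0, 2.0      # dendritic, somal, input time constants

sigma = tau / T
beta = (2.0 * a / l**2) * gA / gL
chi = (l / 2.0) * (gS / gA) * AS / (np.pi * a**2)
eps = tauS / tau
gamma = 0.5 * beta * chi            # from the definition 2*gamma = beta*chi in (8.88)
```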

Finite Transforms

For real λ let ϕ_λ and i_λ be defined by

ϕ_λ = ∫_{−1}^{1} ϕ(z, t) cos λ(1 − z) dz ,   i_λ = ∫_{−1}^{1} i(z, t) cos λ(1 − z) dz .   (8.82)

Equation (8.78) is now multiplied by cos λ(1 − z) and the resulting equation integrated over (−1, 1). Bearing in mind that ϕ satisfies the boundary condition (8.79), after two integrations by parts it follows easily that

dϕ_λ/dt + (1 + βλ²) ϕ_λ + i_λ = −β cos 2λ ∂ϕ(−1, t)/∂z + βλ sin 2λ ϕ(−1, t) .   (8.83)

The somal boundary condition (8.80) is now used to remove the potential gradient at x = 0. After some straightforward algebra, equation (8.83) can be recast in the form

dϕ_λ/dt + (1 + βλ²) ϕ_λ + εβχ cos 2λ [dϕ(−1, t)/dt + ((χ cos 2λ − λ sin 2λ)/(εχ cos 2λ)) ϕ(−1, t)] = −i_λ .   (8.84)

The key idea is now to choose λ (as yet unspecified) to satisfy the transcendental equation

1 + βλ² = (χ cos 2λ − λ sin 2λ)/(εχ cos 2λ) ,   or equivalently   tan 2λ = χ ((1 − ε)/λ − βελ) .   (8.85)

With this choice of λ, the function

ψ_λ(t) = ϕ_λ + βεχ cos(2λ) ϕ(−1, t)   (8.86)


satisfies the ordinary differential equation

dψ_λ/dt + (1 + βλ²) ψ_λ = −i_λ

with general solution

ψ_λ(t) = ψ_λ(0) e^{−(1+βλ²)t} − ∫_0^t i_λ(s) e^{−(1+βλ²)(t−s)} ds .

In view of the initial condition (8.81), ψ_λ(0) = 0 in this application, so that

ψ_λ(t) = −∫_0^t i_λ(s) e^{−(1+βλ²)(t−s)} ds .   (8.87)

The primary purpose of this analysis is to provide closed form solutions for the potential in some trial dendrites so that direct comparisons can be made with solutions extracted by the numerical procedures described in the previous sections. With this aim in mind, it makes sense to ease the technical complications arising in the construction of the exact solution by assuming now that ε = 1. Thus λ satisfies the transcendental equation

tan 2λ + 2γλ = 0 ,   2γ = βχ = (g_S/g_L)(A_S/(πal)) .   (8.88)

In order to appreciate the role of the ψ's in the formation of the analytical cable solution, it is first necessary to develop an orthogonality condition.

In order to appreciate the role of the ψ’s in the formation of the analytical cable solution,it is first necessary to develop an orthogonality condition.

Orthogonality of Eigenfunctions

Let λ and η be two distinct solutions of the transcendental equation (8.88); then it is verified easily that

∫_{−1}^{1} sin λ(1 − z) sin η(1 − z) dz = { 0 (λ ≠ η) ;  1 + γ cos² 2λ (λ = η) .   (8.89)

Thus sin λ(1 − z) and sin η(1 − z) are orthogonal functions over [−1, 1] whenever λ and η are distinct solutions of (8.88). In fact, the eigenfunctions cos λ(1 − z) corresponding to each solution λ of (8.88) also form a complete space, so that ϕ(z, t) has the representation

ϕ(z, t) = A_0 + ∑_λ A_λ cos λ(1 − z) .   (8.90)

The task is now to compute A_0 and A_λ. In order to determine A_0, observe first that

∫_{−1}^{1} ϕ(z, t) dz = 2A_0 + ∑_λ A_λ ∫_{−1}^{1} cos λ(1 − z) dz = 2A_0 + ∑_λ A_λ sin 2λ/λ .

Assuming that the series for ϕ(z, t) converges to ϕ(−1, t) when z = −1 (as indeed it does), the somal potential ϕ_S(t) = ϕ(−1, t) satisfies

ϕ_S(t) = A_0 + ∑_λ A_λ cos 2λ .

The previous two results are now combined according to definition (8.86) with λ = 0 and ε = 1, namely ψ_0 = ∫ϕ dz + βχ ϕ_S(t), to obtain

ψ_0 = A_0 (2 + βχ) + ∑_λ A_λ (sin 2λ/λ + βχ cos 2λ) = 2(1 + γ) A_0 ,   (8.91)

since sin 2λ/λ = −2γ cos 2λ for each solution λ of (8.88) and 2γ = βχ.


In conclusion,

A_0 = ψ_0 / (2(1 + γ)) = −(1/(2(1 + γ))) ∫_0^t i_0(s) e^{−(t−s)} ds .   (8.92)

Furthermore, since ∂ϕ/∂z = ∑_α α A_α sin α(1 − z), the orthogonality property (8.89) leads to the derivation

(1/λ) ∫_{−1}^{1} (∂ϕ/∂z) sin λ(1 − z) dz = ∑_α (α/λ) A_α ∫_{−1}^{1} sin α(1 − z) sin λ(1 − z) dz = (1 + γ cos² 2λ) A_λ .

Using integration by parts, it follows immediately that

(1/λ) ∫_{−1}^{1} (∂ϕ/∂z) sin λ(1 − z) dz = ϕ_λ − (sin 2λ/λ) ϕ_S(t) = ψ_λ ,

which in turn leads to the conclusion that

A_λ = ψ_λ / (1 + γ cos² 2λ) .   (8.93)

Hence the coefficients A_λ in the series expansion of ϕ(z, t) are determined from ψ_λ, and consequently the deviation of the dendritic membrane potential from its resting state has the closed form expression

ϕ(z, t) = ψ_0(t)/(2(1 + γ)) + ∑_λ ψ_λ(t) cos λ(1 − z)/(1 + γ cos² 2λ) ,   (8.94)

where the λ are the solutions of the transcendental equation tan 2λ + 2γλ = 0.

Convergence of Analytical Solution

There are two quite separate issues underlying the construction of the series solution (8.94) for ϕ(z, t). Arguably the more important of these concerns the question of completeness of the space of eigenfunctions: is every solution representable as a weighted sum of eigenfunctions (c.f. Fourier series)? Allied to this issue are complementary questions relating to the quality of series convergence. On the other hand, boundary and initial conditions are satisfied through the choice of λ's and the time dependence of the ψ's. In fact the eigenvalue problem here is complete, although not all apparently well posed problems enjoy this property. The following cautionary example illustrates this point. The functions χ_λ(z) = sin λ(1 − z) demonstrably satisfy the orthogonality property

∫_{−1}^{1} sin λ(1 − z) sin η(1 − z) dz = { 0 (λ ≠ η) ;  1 − γ cos² 2λ (λ = η) ,

where λ and η are solutions of tan 2λ = 2γλ with γ > 0. In this instance, it can be shown by counterexample that the corresponding eigenspace formed by the χ_λ is incomplete when 0 < γ < 1, but is partially complete (complete for a restricted class of functions) when γ = 1 and complete when γ > 1. It is unclear whether or not these deficiencies can be remedied when γ ≤ 1; they appear to originate from the fact that the equation tan x = γx has only the trivial solution x = 0 on the principal branch of the tangent function in this case. When γ > 1, tan x = γx also has a non-trivial solution on the principal branch of the tangent function.

To test the quality of convergence, the previous discussion can be used to deduce the identity

1 + z = 1/(1 + γ) + 2 ∑_λ (sin λ/λ)² cos λ(1 − z)/(1 + γ cos² 2λ) ,   z ∈ [−1, 1] ,   (8.95)


Table 8.5. The table illustrates the accuracy that can be expected from an exact representation of the solution to the cable equation for various numbers of eigenvalues. In fact, 5 000 eigenvalues is only marginally superior to 500.

            Error at      Error at      Error at      Error at      Error at
            z = −1.0      z = −0.5      z = 0.0       z = 0.5       z = 1.0

Accuracy based on 10 eigenfunctions
γ = 0.1     1.83 × 10−3   2.78 × 10−3   6.22 × 10−4   4.96 × 10−3   4.07 × 10−2
γ = 1.0     7.69 × 10−5   2.22 × 10−3   3.14 × 10−4   4.90 × 10−3   4.05 × 10−2
γ = 10.0    6.49 × 10−6   2.18 × 10−3   2.77 × 10−4   4.90 × 10−3   4.05 × 10−2

Accuracy based on 50 eigenfunctions
γ = 0.1     1.60 × 10−5   8.91 × 10−5   5.89 × 10−6   2.11 × 10−4   8.11 × 10−3
γ = 1.0     6.25 × 10−7   8.78 × 10−5   2.65 × 10−6   2.11 × 10−4   8.11 × 10−3
γ = 10.0    5.27 × 10−8   8.77 × 10−5   2.32 × 10−6   2.11 × 10−4   8.11 × 10−3

Accuracy based on 500 eigenfunctions
γ = 0.1     1.63 × 10−8   9.99 × 10−9   6.19 × 10−9   1.24 × 10−8   8.11 × 10−4
γ = 1.0     1.82 × 10−10  7.49 × 10−10  3.71 × 10−9   9.37 × 10−9   8.11 × 10−4
γ = 10.0    1.25 × 10−10  9.16 × 10−10  2.22 × 10−9   1.04 × 10−8   8.11 × 10−4

where tan 2λ + 2γλ = 0 and γ > 0 is arbitrary. Table 8.5 provides some indication of the expected error in representing 1 + z by a truncated series of eigenfunctions.
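Identity (8.95), and hence the behaviour reported in Table 8.5, is easy to reproduce: find the roots of tan 2λ + 2γλ = 0 by bisection (writing u = 2λ, there is one root of tan u + γu = 0 in each interval (mπ − π/2, mπ)) and sum the truncated series. The values γ = 1 and 50 eigenfunctions below are illustrative.

```python
import numpy as np

def eigenvalues(gamma, n_roots):
    # positive roots of tan(2*lam) + 2*gamma*lam = 0, via bisection in u = 2*lam
    lams = []
    for m in range(1, n_roots + 1):
        lo, hi = (m - 0.5) * np.pi + 1e-12, m * np.pi
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if np.tan(mid) + gamma * mid < 0.0:
                lo = mid
            else:
                hi = mid
        lams.append(0.25 * (lo + hi))
    return np.array(lams)

def series_one_plus_z(z, gamma, n_terms):
    # truncated right-hand side of identity (8.95)
    total = np.full_like(z, 1.0 / (1.0 + gamma))
    for lam in eigenvalues(gamma, n_terms):
        total += 2.0 * (np.sin(lam) / lam) ** 2 \
            * np.cos(lam * (1.0 - z)) / (1.0 + gamma * np.cos(2.0 * lam) ** 2)
    return total

gamma = 1.0
z = np.array([-1.0, -0.5, 0.0, 0.5])
err = np.abs(series_one_plus_z(z, gamma, 50) - (1.0 + z))
```

As Table 8.5 indicates, the interior error is already small with 50 eigenfunctions, while the error near z = 1 decays only algebraically.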

In this example, 50 eigenfunctions provide adequate accuracy except near z = 1. Series using more eigenfunctions clearly provide better accuracy although, in fact, hardly any improvement in accuracy is achieved with more than 500 eigenfunctions, primarily due to losses in arithmetical significance and the poor convergence characteristics of the truncated series. A critical inspection of the convergence properties of the orthogonal series reveals that the largest inaccuracies occur near z = 1 and that the convergence error there is roughly inversely proportional to the number of eigenfunctions used in the expansion. The non-uniform convergence of series solutions suggests the likelihood of serious practical difficulties when analytical solutions are used to study the characteristics of large, heavily branched passive dendritic structures. Of course, the analytical solution will always produce numbers, but their relevance must be appraised carefully.

Solution for Alpha Current Injection

The specific potential for a dendrite excited by an injected alpha function current input at x = f l is now considered. It has already been shown that this current input is described by i(z, t) = t e^{−σt} δ(z − (2f − 1)), so that i_λ = cos 2λ(1 − f) t e^{−σt}. Thus

ψ_λ = −cos 2λ(1 − f) ∫_0^t s e^{−σs} e^{−(1+βλ²)(t−s)} ds .

After some further integration by parts, it can be shown that

ψ_λ = [cos 2λ(1 − f)/(σ − 1 − βλ²)²] [(σ − 1 − βλ²) t e^{−σt} + e^{−σt} − e^{−(1+βλ²)t}] .   (8.96)
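The closed form (8.96) can be checked against direct quadrature of the integral defining ψ_λ; the parameter values below are arbitrary illustrative choices.

```python
import numpy as np

sigma, beta, lam, f, t = 5.0, 1.0, 2.3, 0.4, 3.0
mu = 1.0 + beta * lam**2
q = sigma - mu                            # q = sigma - 1 - beta*lam^2
amp = np.cos(2.0 * lam * (1.0 - f))

# closed form (8.96)
psi_exact = amp / q**2 * (q * t * np.exp(-sigma * t)
                          + np.exp(-sigma * t) - np.exp(-mu * t))

# direct trapezoidal quadrature of psi = -amp * int_0^t s e^{-sigma*s} e^{-mu*(t-s)} ds
s, h = np.linspace(0.0, t, 200001, retstep=True)
y = s * np.exp(-sigma * s) * np.exp(-mu * (t - s))
psi_quad = -amp * h * (np.sum(y) - 0.5 * (y[0] + y[-1]))
```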


Numerical Solution

Let ϕ_0(t), ϕ_1(t), …, ϕ_N(t) denote the potential in the cable at the nodes z_0, z_1, …, z_N and let i_0, i_1, …, i_N be a distribution of currents at the nodes that is equivalent to the input current i(z, t) = t e^{−σt} δ(z − (2f − 1)), modulo the precision available (only N + 1 Chebyshev polynomials are in use). It follows directly from (8.67) that if

i(z, t) = ∑_{j=0}^{N} i_j T_j(z)

then

i_j = (1/c_j) t e^{−σt} cos(2j cos⁻¹ √f) / (2√(f(1 − f))) ,   c_j = { π (j = 0, N) ;  π/2 (0 < j < N) } .

The values ϕ_0, …, ϕ_N are now determined as follows. First choose ϕ_0 by the prescription

ϕ_0(t) = −[6/(2N² + 1)] ∑_{j=1}^{N−1} (−1)^j ϕ_j(t) / sin²(πj/2N) − [3/(2N² + 1)] ϕ_N(t) ;   (8.97)

then ϕ(z, t) satisfies the sealed condition at z = 1. Now ϕ_1, …, ϕ_{N−1} are obtained as the solutions of the ordinary differential equations

dϕ_k/dt = −ϕ_k + β ϕ″(z_k, t) − i_k(t) ,   k = 1, 2, …, N − 1 .   (8.98)

The second spatial derivative in (8.98) is computed by first using the Chebyshev cosine transform to get the Chebyshev coefficients of ϕ(z, t), followed by two differentiations in spectral space, followed lastly by the inverse Chebyshev cosine transform back into physical space. Finally, ϕ_N is determined from the somal boundary condition

dϕ_N/dt = −ϕ_N − χ⁻¹ [ϕ_0/2 + ∑_{j=1}^{N−1} (−1)^j ϕ_j(t) / cos²(πj/2N) + ((2N² + 1)/6) ϕ_N] .
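Prescription (8.97) is simply (8.63) solved for the endpoint value, so setting ϕ_0 this way must drive the discrete derivative at z = 1 to zero whatever the other nodal values are. A short check (N = 8 and random nodal values are illustrative):

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
phi = rng.standard_normal(N + 1)          # arbitrary nodal values phi_1 .. phi_N
j = np.arange(1, N)
sgn = (-1.0) ** j

# prescription (8.97) for the sealed-end value phi_0
phi[0] = -6.0 / (2 * N**2 + 1) * np.sum(sgn * phi[1:N] / np.sin(np.pi * j / (2 * N))**2) \
         - 3.0 / (2 * N**2 + 1) * phi[N]

# discrete derivative at z = 1 according to (8.63)
dphi_at_1 = (2 * N**2 + 1) / 6.0 * phi[0] \
    + np.sum(sgn * phi[1:N] / np.sin(np.pi * j / (2 * N))**2) + 0.5 * phi[N]
```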

Fig. 14 illustrates the agreement between the exact and numerical solutions (based on 8 and 16 polynomials) for the potential at the soma in this unbranched dendrite. Differences between the exact and numerical solutions are entirely due to the difficulty any numerical solution has in treating delta-like inputs at points other than nodes.


Fig. 14. Comparison of true potential and spectral estimate of the potential at the soma. The spectral estimate is based on 8 (highest peak) and 16 (lowest curve) polynomials.


Spectral and Exact Solution of a Branched Tree

Major et al. (1993a) describe another analytical treatment of this problem and its various generalisations using separation of variables. This approach may present the following problems:

1. Fourier series cannot be freely evaluated and differentiated at endpoints;
2. Spatially and temporally distributed inputs can be difficult to incorporate into these schemes, since the method of separation of variables intrinsically applies to homogeneous equations.

These difficulties can often be overcome once an orthogonality condition is established. However, the transform methodology illustrated here is quite general and automatically allows non-homogeneous terms to be included in the equations for the membrane potential in a natural way.

Consider a dendrite consisting of a uniform limb of radius a_1 and length l_1 attached at one end to a soma of area A_S and branched at its other end into two uniform dendrites of radii a_2 and a_3 and lengths l_2 and l_3 respectively. The branched dendrite is assumed to be passive, and therefore described by the linear cable equation. In the absence of active synaptic input current J_S, synaptic current activity is simulated at t = 0 by the time dependent alpha function input currents J^{(k)}(x, t) = G^{(k)}(t/T^{(k)})\,e^{1 - t/T^{(k)}}\,\delta(x - f_k l_k) acting at location x = f_k l_k, f_k ∈ (0, 1), on limb k. Figure 15 illustrates the configuration.

The deviation from the resting potential of the passive dendrite in figure 15 is modelled by the partial differential equations

$$\tau\frac{\partial V^{(k)}}{\partial t} + V^{(k)} + \frac{J^{(k)}_{\mathrm{in}}}{g_M} = \frac{a_k g_A}{2 g_M}\frac{\partial^2 V^{(k)}}{\partial x^2}, \qquad x \in (0, l_k), \quad k = 1, 2, 3. \tag{8.199}$$

The crucial assumption in this analysis is that each limb has an identical time constant τ; this is consistent with the assumption that the passive electrical properties of dendritic material are site independent. In mathematical terms, this uniformity admits a "separable solution" to the complete set of cable equations. If time constants vary from limb to limb, an exact

Fig. 15. A passive branched dendrite with sealed ends is stimulated on limb k by injected "alpha" function pulses of current J^{(k)} at x = f_k l_k, where 0 < f_k < 1.


mathematical solution can be found by Fourier series methods, but it is not a separable solution and it does not lead to an elementary transcendental equation for the modes of decay. Here the statement of the cable equations assumes a priori that the axial conductance g_A and the membrane conductance g_M are universally constant. This assumption is made for algebraic convenience and may be relaxed. Equation (8.199) is solved with boundary conditions

$$\begin{aligned}
&\frac{\partial V^{(2)}(l_2,t)}{\partial x} = \frac{\partial V^{(3)}(l_3,t)}{\partial x} = 0, \\
&V^{(1)}(l_1,t) = V^{(2)}(0,t) = V^{(3)}(0,t), \\
&\pi a_2^2 g_A\frac{\partial V^{(2)}(0,t)}{\partial x} + \pi a_3^2 g_A\frac{\partial V^{(3)}(0,t)}{\partial x} = \pi a_1^2 g_A\frac{\partial V^{(1)}(l_1,t)}{\partial x}, \\
&\tau_S\frac{\partial V^{(1)}(0,t)}{\partial t} + V^{(1)}(0,t) = \frac{g_A}{g_S}\,\frac{\pi a_1^2}{A_S}\,\frac{\partial V^{(1)}(0,t)}{\partial x}
\end{aligned} \tag{8.200}$$

and initial condition

$$V^{(1)}(x,0) = V^{(2)}(x,0) = V^{(3)}(x,0) = 0. \tag{8.201}$$

As in illustration I, each limb is mapped into [−1, 1] so that the analytical representation of the problem is also a suitable basis for the development of the numerical solution using spectral methods. Let biophysical parameters χ, γ_k, α_k and σ_k be defined by

$$\chi = \frac{g_S A_S l_1}{2\pi a_1^2 g_A}, \qquad \gamma_k = \left(\frac{l_k^2 g_M}{2 a_k g_A}\right)^{1/2}, \qquad \alpha_k = \frac{2 e\,\sigma_k G^{(k)}}{g_M l_k}, \qquad \sigma_k = \frac{\tau}{T^{(k)}}, \qquad k = 1, 2, 3. \tag{8.202}$$

Parameter σ_k is the ratio of the time constant of the dendritic membrane to that of the input current on limb k. When time and length in equations (8.199) are non-dimensionalised by the coordinate transformations t* = t/τ and z = −1 + 2x/l_k respectively, the cable equations (8.199) become

$$\frac{\partial V^{(k)}}{\partial t} + V^{(k)} + i^{(k)}(z,t) = \frac{1}{\gamma_k^2}\frac{\partial^2 V^{(k)}}{\partial z^2}, \qquad z \in (-1,1), \quad k = 1, 2, 3, \tag{8.203}$$

in which superscript stars have been suppressed, but it is understood that all variables are non-dimensional, and where

$$i^{(k)}(z,t) = \alpha_k t e^{-\sigma_k t}\,\delta(z - (2 f_k - 1)), \qquad k = 1, 2, 3. \tag{8.204}$$

As in illustration I, the analytical development uses the specific expression for i^{(k)} in the final derivation of the potentials but is otherwise completely general. The boundary and initial conditions accompanying equations (8.203) have non-dimensional form

$$\frac{\partial V^{(2)}(1,t)}{\partial z} = \frac{\partial V^{(3)}(1,t)}{\partial z} = 0, \tag{8.205}$$

$$V^{(1)}(1,t) = V^{(2)}(-1,t) = V^{(3)}(-1,t), \tag{8.206}$$

$$\frac{A_2}{\gamma_2^2}\frac{\partial V^{(2)}(-1,t)}{\partial z} + \frac{A_3}{\gamma_3^2}\frac{\partial V^{(3)}(-1,t)}{\partial z} = \frac{A_1}{\gamma_1^2}\frac{\partial V^{(1)}(1,t)}{\partial z}, \tag{8.207}$$

$$2A_S\frac{g_S}{g_M}\left(\varepsilon\frac{\partial V^{(1)}(-1,t)}{\partial t} + V^{(1)}(-1,t)\right) = \frac{A_1}{\gamma_1^2}\frac{\partial V^{(1)}(-1,t)}{\partial z}, \tag{8.208}$$


where A_k (k = 1, 2, 3) is the curved surface area of dendrite k. In condition (8.208), ε is again the ratio of the somal to dendritic time constant. For the time being, the analysis proceeds on the basis that ε is an arbitrary positive constant, although it will eventually take the value unity.

Finite Transforms

For arbitrary real λ let V^{(1)}_λ, V^{(2)}_λ and V^{(3)}_λ be defined by

$$\begin{aligned}
V^{(1)}_\lambda &= \int_{-1}^{1} V^{(1)}(z,t)\big(\cos\lambda\gamma_1(1-z) + \mu_1\sin\lambda\gamma_1(1-z)\big)\,dz, \\
V^{(2)}_\lambda &= \int_{-1}^{1} V^{(2)}(z,t)\,\frac{\cos\lambda\gamma_2(1-z)}{\cos 2\lambda\gamma_2}\,dz, \\
V^{(3)}_\lambda &= \int_{-1}^{1} V^{(3)}(z,t)\,\frac{\cos\lambda\gamma_3(1-z)}{\cos 2\lambda\gamma_3}\,dz,
\end{aligned} \tag{8.209}$$

in which μ_1 is an arbitrary parameter, and let i^{(1)}_λ, i^{(2)}_λ and i^{(3)}_λ be similarly defined by

$$\begin{aligned}
i^{(1)}_\lambda &= \int_{-1}^{1} i^{(1)}(z,t)\big(\cos\lambda\gamma_1(1-z) + \mu_1\sin\lambda\gamma_1(1-z)\big)\,dz, \\
i^{(2)}_\lambda &= \int_{-1}^{1} i^{(2)}(z,t)\,\frac{\cos\lambda\gamma_2(1-z)}{\cos 2\lambda\gamma_2}\,dz, \qquad
i^{(3)}_\lambda = \int_{-1}^{1} i^{(3)}(z,t)\,\frac{\cos\lambda\gamma_3(1-z)}{\cos 2\lambda\gamma_3}\,dz.
\end{aligned} \tag{8.210}$$

The cable equations for limbs two and three are now multiplied by cos λγ_2(1−z)/cos 2λγ_2 and cos λγ_3(1−z)/cos 2λγ_3 respectively and then integrated over (−1, 1). Bearing in mind that V^{(2)} and V^{(3)} satisfy the boundary conditions (8.205), it can be shown that

$$\frac{dV^{(2)}_\lambda}{dt} + (1+\lambda^2)V^{(2)}_\lambda + i^{(2)}_\lambda = -\frac{1}{\gamma_2^2}\left[\frac{\partial V^{(2)}(-1,t)}{\partial z} - \lambda\gamma_2\tan 2\lambda\gamma_2\,V^{(2)}(-1,t)\right], \tag{8.211}$$

$$\frac{dV^{(3)}_\lambda}{dt} + (1+\lambda^2)V^{(3)}_\lambda + i^{(3)}_\lambda = -\frac{1}{\gamma_3^2}\left[\frac{\partial V^{(3)}(-1,t)}{\partial z} - \lambda\gamma_3\tan 2\lambda\gamma_3\,V^{(3)}(-1,t)\right]. \tag{8.212}$$

Similarly, the cable equation for k = 1 is multiplied by cos λγ_1(1−z) + μ_1 sin λγ_1(1−z) and the resulting partial differential equation integrated over (−1, 1). In this instance, the result is

$$\begin{aligned}
\frac{dV^{(1)}_\lambda}{dt} + (1+\lambda^2)V^{(1)}_\lambda + i^{(1)}_\lambda = -\frac{1}{\gamma_1^2}\Big[&(\cos 2\lambda\gamma_1 + \mu_1\sin 2\lambda\gamma_1)\frac{\partial V^{(1)}(-1,t)}{\partial z} - \frac{\partial V^{(1)}(1,t)}{\partial z} \\
&- \lambda\gamma_1\mu_1 V^{(1)}(1,t) - \lambda\gamma_1(\sin 2\lambda\gamma_1 - \mu_1\cos 2\lambda\gamma_1)V^{(1)}(-1,t)\Big].
\end{aligned} \tag{8.213}$$

Let ξ_λ and i_λ be defined by the formulae

$$\xi_\lambda(t) = A_1 V^{(1)}_\lambda + A_2 V^{(2)}_\lambda + A_3 V^{(3)}_\lambda, \qquad i_\lambda(t) = A_1 i^{(1)}_\lambda + A_2 i^{(2)}_\lambda + A_3 i^{(3)}_\lambda. \tag{8.214}$$

In view of the current balance condition (8.207), it is straightforward algebra to verify that

$$\begin{aligned}
\frac{d\xi_\lambda}{dt} + (1+\lambda^2)\xi_\lambda + i_\lambda = &-\frac{A_1}{\gamma_1^2}\big(\cos 2\lambda\gamma_1 + \mu_1\sin 2\lambda\gamma_1\big)\frac{\partial V^{(1)}(-1,t)}{\partial z} \\
&+ \frac{A_1\lambda}{\gamma_1}\big(\sin 2\lambda\gamma_1 - \mu_1\cos 2\lambda\gamma_1\big)V^{(1)}(-1,t) \\
&+ \lambda\left[\frac{A_1\mu_1}{\gamma_1} + \frac{A_2}{\gamma_2}\tan 2\lambda\gamma_2 + \frac{A_3}{\gamma_3}\tan 2\lambda\gamma_3\right]V^{(1)}(1,t).
\end{aligned} \tag{8.215}$$

It is now beneficial to choose the arbitrary parameter μ_1 such that

$$\frac{A_1\mu_1}{\gamma_1} + \frac{A_2}{\gamma_2}\tan 2\lambda\gamma_2 + \frac{A_3}{\gamma_3}\tan 2\lambda\gamma_3 = 0. \tag{8.216}$$

Since A_k γ_k^{-1} = 2(2g_A/g_M)^{1/2} c_k, where c_k = d_k^{3/2}, it follows that whenever the electrical properties of the dendrite are uniform, the computation of μ_1 depends only on the c-values of each limb. To digress further, if limbs two and three have the same electrotonic length, that is, γ_2 = γ_3, then clearly

$$\mu_1 = -\frac{c_2 + c_3}{c_1}\tan 2\lambda\gamma_2 \tag{8.217}$$

and if, in addition, the parent limb satisfies c_1 = c_2 + c_3, so that the child limbs can be collapsed and connected seamlessly to the parent (the Rall cylinder), then

$$\mu_1 = -\tan 2\lambda\gamma_2. \tag{8.218}$$

Assuming now that μ_1 is chosen according to the prescription of formula (8.216), then clearly ξ_λ satisfies the simplified equation

$$\begin{aligned}
\frac{d\xi_\lambda}{dt} + (1+\lambda^2)\xi_\lambda + i_\lambda = &-\frac{A_1}{\gamma_1^2}\big(\cos 2\lambda\gamma_1 + \mu_1\sin 2\lambda\gamma_1\big)\frac{\partial V^{(1)}(-1,t)}{\partial z} \\
&+ \frac{A_1\lambda}{\gamma_1}\big(\sin 2\lambda\gamma_1 - \mu_1\cos 2\lambda\gamma_1\big)V^{(1)}(-1,t).
\end{aligned} \tag{8.219}$$

The somal boundary condition (8.208) is now used to remove the potential gradient at z = −1. After some straightforward algebra, the right hand side of equation (8.219) can be recast in the form

$$-\beta_\lambda\left[\frac{dV^{(1)}(-1,t)}{dt} + \left(\frac{1}{\varepsilon} - \frac{A_1\lambda}{2\gamma_1 A_S}\,\frac{C_M}{C_S}\,\frac{\tan 2\lambda\gamma_1 - \mu_1}{1 + \mu_1\tan 2\lambda\gamma_1}\right)V^{(1)}(-1,t)\right] \tag{8.220}$$

where

$$\beta_\lambda = 2A_S\,\frac{C_S}{C_M}\big(\cos 2\lambda\gamma_1 + \mu_1\sin 2\lambda\gamma_1\big). \tag{8.221}$$

Just as in illustration I, the key idea is now to choose λ (as yet unspecified) to satisfy the transcendental equation

$$1 + \lambda^2 = \frac{1}{\varepsilon} - \frac{A_1\lambda}{2\gamma_1 A_S}\,\frac{C_M}{C_S}\,\frac{\tan 2\lambda\gamma_1 - \mu_1}{1 + \mu_1\tan 2\lambda\gamma_1}. \tag{8.222}$$

Again, in the particular case of a branched network with a Rall equivalent cylinder representation, recall from (8.218) that μ_1 = −tan 2λγ_2. Consequently it is a matter of simple trigonometry to recognise that, in the case of a Rall Y-junction, λ is a solution of

$$1 + \lambda^2 = \frac{1}{\varepsilon} - \frac{A_1\lambda}{2\gamma_1 A_S}\,\frac{C_M}{C_S}\tan 2\lambda(\gamma_1 + \gamma_2). \tag{8.223}$$

Thus the secular condition for λ reduces to that of a soma with a single unbranched dendrite of electrotonic length (γ_1 + γ_2). In effect, this is just a restatement of the fact that the branched structure is now equivalent to a Rall cylinder of electrotonic length (γ_1 + γ_2) connected to the soma. Now let ψ_λ be defined by the expression

$$\psi_\lambda(t) = \xi_\lambda(t) + \beta_\lambda V^{(1)}(-1,t) \tag{8.224}$$

and return to the original analysis. When λ is determined by the secular condition (8.222), ψ_λ satisfies the ordinary differential equation

$$\frac{d\psi_\lambda}{dt} + (1+\lambda^2)\psi_\lambda = -i_\lambda$$

with general solution

$$\psi_\lambda(t) = \psi_\lambda(0)e^{-(1+\lambda^2)t} - \int_0^t i_\lambda(s)\,e^{-(1+\lambda^2)(t-s)}\,ds. \tag{8.225}$$

In view of the initial condition (8.201), ψ_λ(0) = 0 in this application and so

$$\psi_\lambda(t) = -\int_0^t i_\lambda(s)\,e^{-(1+\lambda^2)(t-s)}\,ds. \tag{8.226}$$

Since the primary purpose of this analysis is to obtain exact solutions for comparison with numerical solutions, it is now convenient to set ε = 1 and C_M = C_S. In this case, λ satisfies the transcendental equation

$$\frac{2\lambda\gamma_1 A_S}{A_1} + \frac{\tan 2\lambda\gamma_1 - \mu_1}{1 + \mu_1\tan 2\lambda\gamma_1} = 0 \tag{8.227}$$

in which μ_1 is specified by formula (8.216). At first sight, it would appear that the equation for λ involves the tangent function and all the attendant bookkeeping that its asymptotes necessitate. In fact, the secular condition (8.227) has an elegant structure that does not involve asymptotes. Suppose f(λ; p, q) is defined by the formula

$$f(\lambda; p, q) = \eta\big(2A_S\gamma_1\lambda\cos 2\lambda\vartheta + A_1\sin 2\lambda\vartheta\big), \qquad \eta = \frac{A_1}{\gamma_1} + p\frac{A_2}{\gamma_2} + q\frac{A_3}{\gamma_3}, \qquad \vartheta = \gamma_1 + p\gamma_2 + q\gamma_3, \tag{8.228}$$

then equation (8.227) can be rewritten

$$f(\lambda; 1, 1) + f(\lambda; 1, -1) + f(\lambda; -1, 1) + f(\lambda; -1, -1) = \sum f(\lambda; p, q) = 0 \tag{8.229}$$

where the sum is taken over all possible combinations of the integers p and q satisfying |p| = |q| = 1.

Orthogonality of Eigenfunctions

As in illustration I, the complete solution of the problem ultimately requires the derivation of an orthogonality condition. Here the process is complicated by the fact that the eigenfunctions contain components in each limb of the branched dendrite. Let λ and η be two distinct solutions of the transcendental equation (8.222); then it is straightforward but tedious to verify that

$$\int_{-1}^{1}\frac{\sin\lambda\gamma(1-z)}{\cos 2\lambda\gamma}\,\frac{\sin\eta\gamma(1-z)}{\cos 2\eta\gamma}\,dz =
\begin{cases}
\dfrac{\eta\tan 2\lambda\gamma - \lambda\tan 2\eta\gamma}{\gamma(\lambda^2 - \eta^2)} & \lambda \ne \eta \\[2ex]
\sec^2 2\lambda\gamma - \dfrac{\tan 2\lambda\gamma}{2\lambda\gamma} & \lambda = \eta.
\end{cases} \tag{8.230}$$

Similarly it can be shown that

$$\int_{-1}^{1}\big(\sin\lambda\gamma(1-z) - \mu(\lambda)\cos\lambda\gamma(1-z)\big)\big(\sin\eta\gamma(1-z) - \mu(\eta)\cos\eta\gamma(1-z)\big)\,dz \tag{8.231}$$

has value

$$\begin{cases}
\dfrac{\cos 2\lambda\gamma\cos 2\eta\gamma}{\gamma(\lambda^2-\eta^2)}\Big[\eta\big(\tan 2\gamma\lambda - \mu(\lambda)\big)\big(1 + \mu(\eta)\tan 2\gamma\eta\big) - \lambda\big(\tan 2\gamma\eta - \mu(\eta)\big)\big(1 + \mu(\lambda)\tan 2\gamma\lambda\big)\Big] + \dfrac{\eta\mu(\lambda) - \lambda\mu(\eta)}{\gamma(\lambda^2-\eta^2)} & \lambda \ne \eta \\[2ex]
\dfrac{\sin 2\lambda\gamma\cos 2\lambda\gamma}{2\gamma\lambda}\big(\mu(\lambda) - \tan 2\gamma\lambda\big)^2 + \big(1 + \mu(\lambda)^2\big) - \dfrac{\tan 2\lambda\gamma}{2\lambda\gamma} & \lambda = \eta.
\end{cases} \tag{8.232}$$

The key idea is now to combine results (8.230), (8.231) and (8.232) by calculating

$$\begin{aligned}
&A_1\int_{-1}^{1}\big(\sin\lambda\gamma_1(1-z) - \mu_1^{(\lambda)}\cos\lambda\gamma_1(1-z)\big)\big(\sin\eta\gamma_1(1-z) - \mu_1^{(\eta)}\cos\eta\gamma_1(1-z)\big)\,dz \\
&\quad + A_2\int_{-1}^{1}\frac{\sin\lambda\gamma_2(1-z)}{\cos 2\lambda\gamma_2}\,\frac{\sin\eta\gamma_2(1-z)}{\cos 2\eta\gamma_2}\,dz + A_3\int_{-1}^{1}\frac{\sin\lambda\gamma_3(1-z)}{\cos 2\lambda\gamma_3}\,\frac{\sin\eta\gamma_3(1-z)}{\cos 2\eta\gamma_3}\,dz
\end{aligned} \tag{8.233}$$

when λ = η and when λ ≠ η. The calculations are again straightforward but tedious. It can be shown that expression (8.233) evaluates to

$$\begin{cases}
\dfrac{A_1\cos 2\lambda\gamma_1\cos 2\eta\gamma_1}{\gamma_1(\lambda^2-\eta^2)}\Big[\eta\big(\tan 2\lambda\gamma_1 - \mu^{(\lambda)}\big)\big(1 + \mu^{(\eta)}\tan 2\eta\gamma_1\big) - \lambda\big(\tan 2\eta\gamma_1 - \mu^{(\eta)}\big)\big(1 + \mu^{(\lambda)}\tan 2\lambda\gamma_1\big)\Big] & \lambda \ne \eta, \\[2ex]
A_1\Big(1 + \big(\mu_1^{(\lambda)}\big)^2\Big) + A_2\sec^2 2\lambda\gamma_2 + A_3\sec^2 2\lambda\gamma_3 + A_S\cos^2 2\lambda\gamma_1\Big(1 + \mu_1^{(\lambda)}\tan 2\lambda\gamma_1\Big)^2 & \lambda = \eta.
\end{cases} \tag{8.234}$$

Whenever λ ≠ η, the secular condition (8.227) ensures that expression (8.233) is identically zero. Define the vector v_λ(z) and the diagonal 3 × 3 matrix D by

$$v_\lambda(z) = \left(\sin\lambda\gamma_1(1-z) - \mu_1^{(\lambda)}\cos\lambda\gamma_1(1-z),\;\frac{\sin\lambda\gamma_2(1-z)}{\cos 2\lambda\gamma_2},\;\frac{\sin\lambda\gamma_3(1-z)}{\cos 2\lambda\gamma_3}\right), \qquad D = \mathrm{diag}\,(A_1, A_2, A_3), \tag{8.235}$$

then it now follows from (8.227), (8.233) and (8.234) that

$$\langle v_\lambda, v_\eta\rangle = \int_{-1}^{1} v_\lambda(z)^T D\,v_\eta(z)\,dz =
\begin{cases}
0 & \lambda \ne \eta, \\
A(\lambda) & \lambda = \eta
\end{cases} \tag{8.236}$$

where

$$A(\lambda) = A_1\Big(1 + \big(\mu_1^{(\lambda)}\big)^2\Big) + A_2\sec^2 2\lambda\gamma_2 + A_3\sec^2 2\lambda\gamma_3 + A_S\cos^2 2\lambda\gamma_1\Big(1 + \mu_1^{(\lambda)}\tan 2\lambda\gamma_1\Big)^2. \tag{8.237}$$

Thus the vectors v_λ(z) are mutually orthogonal with respect to the inner product ⟨u, v⟩ defined in (8.236). This important result is now used to construct the full analytical solution to the initial boundary value problem.


Solution Representation

Within this eigenspace, the potentials V^{(1)}(z,t), V^{(2)}(z,t) and V^{(3)}(z,t) have representation

$$V^{(1)}(z,t) = V_0(t) + \sum_\lambda V_\lambda(t)\big(\cos\lambda\gamma_1(1-z) + \mu_1^{(\lambda)}\sin\lambda\gamma_1(1-z)\big) \tag{8.238}$$

$$V^{(2)}(z,t) = V_0(t) + \sum_\lambda V_\lambda(t)\,\frac{\cos\lambda\gamma_2(1-z)}{\cos 2\lambda\gamma_2} \tag{8.239}$$

$$V^{(3)}(z,t) = V_0(t) + \sum_\lambda V_\lambda(t)\,\frac{\cos\lambda\gamma_3(1-z)}{\cos 2\lambda\gamma_3} \tag{8.240}$$

where λ ranges over the solutions of the secular equation (8.227). The forms for V^{(1)}(z,t), V^{(2)}(z,t) and V^{(3)}(z,t) satisfy continuity of membrane potential at the branch point by construction and also preserve current balance through the choice of μ_1^{(λ)}. The task is now to compute V_0(t) and V_λ(t). Each equation in (8.238)-(8.240) is differentiated partially with respect to z and the result used to show that V_λ(t) satisfies

$$\left\langle\left(\frac{A_1}{\gamma_1}\frac{\partial V^{(1)}}{\partial z},\;\frac{A_2}{\gamma_2}\frac{\partial V^{(2)}}{\partial z},\;\frac{A_3}{\gamma_3}\frac{\partial V^{(3)}}{\partial z}\right), v_\eta\right\rangle = \sum_\lambda \lambda V_\lambda(t)\langle v_\lambda, v_\eta\rangle = \eta A(\eta) V_\eta(t) \tag{8.241}$$

where ⟨u, v⟩ is defined in (8.236) and A(η) is given by expression (8.237). Every term in the summation appearing in equation (8.241) is eliminated using the orthogonality condition (8.236) except the term that arises when λ = η. The left hand side of equation (8.241) is now integrated by parts to deduce that

$$\begin{aligned}
\eta A(\eta) V_\eta(t) &= \left[\left\langle\left(\frac{A_1}{\gamma_1}V^{(1)},\;\frac{A_2}{\gamma_2}V^{(2)},\;\frac{A_3}{\gamma_3}V^{(3)}\right), v_\eta\right\rangle\right]_{z=-1}^{z=1} - \left\langle\left(\frac{A_1}{\gamma_1}V^{(1)},\;\frac{A_2}{\gamma_2}V^{(2)},\;\frac{A_3}{\gamma_3}V^{(3)}\right), \frac{dv_\eta}{dz}\right\rangle \\
&= -V^{(1)}(1,t)\left(\mu_1^{(\eta)}\frac{A_1}{\gamma_1} + \frac{A_2}{\gamma_2}\tan 2\eta\gamma_2 + \frac{A_3}{\gamma_3}\tan 2\eta\gamma_3\right) \\
&\quad - \frac{A_1}{\gamma_1}V^{(1)}(-1,t)\big(\sin 2\eta\gamma_1 - \mu_1^{(\eta)}\cos 2\eta\gamma_1\big) + \eta\big(A_1 V^{(1)}_\eta + A_2 V^{(2)}_\eta + A_3 V^{(3)}_\eta\big).
\end{aligned} \tag{8.242}$$

The definition of μ_1^{(η)} ensures that the first of these brackets is zero. The third bracket is immediately identified as ξ_η(t) from definition (8.214), and hence it now follows from (8.242) that

$$\begin{aligned}
\eta A(\eta) V_\eta(t) &= \eta\xi_\eta(t) - \frac{A_1}{\gamma_1}V_S(t)\big(\sin 2\eta\gamma_1 - \mu_1^{(\eta)}\cos 2\eta\gamma_1\big) \\
&= \eta\xi_\eta(t) - \frac{A_1}{\gamma_1}V_S(t)\left(-2\eta\gamma_1\frac{A_S}{A_1}\right)\big(\cos 2\eta\gamma_1 + \mu_1^{(\eta)}\sin 2\eta\gamma_1\big) \\
&= \eta\xi_\eta(t) + 2\eta A_S V_S(t)\big(\cos 2\eta\gamma_1 + \mu_1^{(\eta)}\sin 2\eta\gamma_1\big) = \eta\xi_\eta(t) + \eta\beta_\eta V_S(t),
\end{aligned} \tag{8.243}$$

where β_η is defined previously in (8.221) and C_M = C_S in this instance. In view of the definition of ψ_η(t) in (8.224) and its value in (8.226), this analysis leads directly to the crucial result

$$V_\eta(t) = \frac{\psi_\eta(t)}{A(\eta)} = -\frac{1}{A(\eta)}\int_0^t i_\eta(s)\,e^{-(1+\eta^2)(t-s)}\,ds. \tag{8.244}$$


It remains to find V_0(t). Assuming that the series for V^{(1)}(z,t) converges to V^{(1)}(−1,t) when z = −1 (as indeed it does), then

$$V_S(t) = V^{(1)}(-1,t) = V_0(t) + \sum_\lambda V_\lambda(t)\big(\cos 2\lambda\gamma_1 + \mu_1^{(\lambda)}\sin 2\lambda\gamma_1\big). \tag{8.245}$$

It follows directly from the expressions for V^{(1)}(z,t), V^{(2)}(z,t) and V^{(3)}(z,t) in (8.238)-(8.240) that

$$\begin{aligned}
\int_{-1}^{1}\big(A_1 V^{(1)}(z,t) &+ A_2 V^{(2)}(z,t) + A_3 V^{(3)}(z,t)\big)\,dz = 2(A_1 + A_2 + A_3)V_0(t) \\
&+ \sum_\lambda \frac{V_\lambda(t)}{\lambda}\left[\frac{A_1\mu_1^{(\lambda)}}{\gamma_1} + \frac{A_2}{\gamma_2}\tan 2\lambda\gamma_2 + \frac{A_3}{\gamma_3}\tan 2\lambda\gamma_3 + \frac{A_1}{\gamma_1}\big(\sin 2\lambda\gamma_1 - \mu_1^{(\lambda)}\cos 2\lambda\gamma_1\big)\right].
\end{aligned} \tag{8.246}$$

Taking account of the definition of μ_1^{(λ)} and the secular equation (8.227) satisfied by the eigenvalues, it follows from (8.246) that

$$\xi_0(t) = \int_{-1}^{1}\big(A_1 V^{(1)}(z,t) + A_2 V^{(2)}(z,t) + A_3 V^{(3)}(z,t)\big)\,dz = 2(A_1 + A_2 + A_3)V_0(t) - 2A_S\sum_\lambda V_\lambda(t)\big(\cos 2\lambda\gamma_1 + \mu_1^{(\lambda)}\sin 2\lambda\gamma_1\big). \tag{8.247}$$

Equations (8.245) and (8.247) are now combined so as to eliminate V_λ(t). The final result is

$$\xi_0(t) + 2A_S V_S(t) = 2(A_S + A_1 + A_2 + A_3)V_0(t) = \psi_0(t) \tag{8.248}$$

so that

$$V_0(t) = -\frac{1}{2(A_S + A_1 + A_2 + A_3)}\int_0^t i_0(s)\,e^{-(t-s)}\,ds. \tag{8.249}$$

Hence the coefficients V_0(t) and V_λ(t) in the series expansions of V^{(1)}(z,t), V^{(2)}(z,t) and V^{(3)}(z,t) are generally determined from ψ_0(t) and ψ_λ(t), and in this instance are specified by formulae (8.249) and (8.244), in which λ is a solution of the secular equation (8.227).

Specific Solution

The specific potential for a dendritic structure excited by injected alpha function current inputs on limbs 1, 2 and 3 at x_1 = f_1 l_1, x_2 = f_2 l_2 and x_3 = f_3 l_3 is now considered. It has already been shown in formula (8.204) that

$$i^{(1)}(z,t) = \alpha_1 t e^{-\sigma_1 t}\,\delta(z - (2 f_1 - 1)), \tag{8.250}$$
$$i^{(2)}(z,t) = \alpha_2 t e^{-\sigma_2 t}\,\delta(z - (2 f_2 - 1)), \tag{8.251}$$
$$i^{(3)}(z,t) = \alpha_3 t e^{-\sigma_3 t}\,\delta(z - (2 f_3 - 1)), \tag{8.252}$$

where the meanings of α_k and σ_k are defined in (8.202). The transformed current densities i^{(1)}_λ, i^{(2)}_λ and i^{(3)}_λ are computed from expressions (8.210) and yield

$$i^{(1)}_\lambda = \alpha_1 t e^{-\sigma_1 t}\big(\cos 2\lambda\gamma_1(1-f_1) + \mu_1^{(\lambda)}\sin 2\lambda\gamma_1(1-f_1)\big), \qquad i^{(2)}_\lambda = \alpha_2 t e^{-\sigma_2 t}\,\frac{\cos 2\lambda\gamma_2(1-f_2)}{\cos 2\lambda\gamma_2}, \qquad i^{(3)}_\lambda = \alpha_3 t e^{-\sigma_3 t}\,\frac{\cos 2\lambda\gamma_3(1-f_3)}{\cos 2\lambda\gamma_3}. \tag{8.253}$$

The current density i_λ(s) is now formed from i^{(1)}_λ, i^{(2)}_λ and i^{(3)}_λ according to recipe (8.214), and this in turn enables ψ_λ(t) to be computed from its integral value in formula (8.226). Hence all the coefficients of the series expansions of the membrane potential in each limb are now determined, that is, the analytical solution is found.


Numerical Solution

Unlike the unbranched dendrite, the implementation of boundary conditions at dendritic terminals and the soma, and of continuity conditions at the branch point, needs detailed explanation. Let ϕ^{(k)}_0(t), ϕ^{(k)}_1(t), …, ϕ^{(k)}_N(t) denote the potential at nodes z_0, z_1, …, z_N on branch k of the dendritic structure and let i^{(k)}_0, i^{(k)}_1, …, i^{(k)}_N be a distribution of currents at these nodes that is equivalent to the input current i^{(k)}(z,t) = α_k t e^{-σ_k t} δ(z − (2f_k − 1)), modulo the precision available (only N + 1 Chebyshev polynomials are in use). It follows directly from (8.6) and the formula for i^{(k)}(z,t) that if

$$i^{(k)}(z,t) = \sum_{j=0}^{N} i^{(k)}_j T_j(z)$$

then

$$i^{(k)}_j = \frac{1}{c_j}\,\frac{\alpha_k t e^{-\sigma_k t}\cos\big(2j\cos^{-1}\sqrt{f_k}\,\big)}{2\sqrt{f_k(1-f_k)}}, \qquad c_j = \begin{cases} \pi & j = 0, N \\ \pi/2 & 0 < j < N. \end{cases}$$

The values ϕ^{(k)}_0, …, ϕ^{(k)}_N are now determined so as to satisfy terminal boundary conditions, the soma condition and the continuity conditions at the branch point. Continuity of membrane potential at the branch point requires that

$$V_B(t) = \phi^{(1)}_0(t) = \phi^{(2)}_N(t) = \phi^{(3)}_N(t). \tag{8.254}$$

The sealed conditions at z = 1 on limbs 2 and 3 are satisfied by the requirements

$$\phi^{(2)}_0(t) + \frac{\omega_N}{2}\phi^{(2)}_N(t) = \phi^{(2)}_0(t) + \frac{\omega_N}{2}V_B = -\omega_N\sum_{j=1}^{N-1}\frac{\phi^{(2)}_j(t)(-1)^j}{\sin^2(\pi j/2N)} \tag{8.255}$$

$$\phi^{(3)}_0(t) + \frac{\omega_N}{2}\phi^{(3)}_N(t) = \phi^{(3)}_0(t) + \frac{\omega_N}{2}V_B = -\omega_N\sum_{j=1}^{N-1}\frac{\phi^{(3)}_j(t)(-1)^j}{\sin^2(\pi j/2N)} \tag{8.256}$$

where

$$\omega_N = \frac{6}{2N^2 + 1}. \tag{8.257}$$

Conservation of current at the branch point is expressed in condition (8.207) and requires that

$$\begin{aligned}
\frac{A_2}{\gamma_2^2}&\left[\frac{\phi^{(2)}_0(t)}{2} + \sum_{j=1}^{N-1}\frac{\phi^{(2)}_j(t)(-1)^j}{\cos^2(\pi j/2N)} + \frac{\phi^{(2)}_N(t)}{\omega_N}\right] + \frac{A_3}{\gamma_3^2}\left[\frac{\phi^{(3)}_0(t)}{2} + \sum_{j=1}^{N-1}\frac{\phi^{(3)}_j(t)(-1)^j}{\cos^2(\pi j/2N)} + \frac{\phi^{(3)}_N(t)}{\omega_N}\right] \\
&= \frac{A_1}{\gamma_1^2}\left[\frac{\phi^{(1)}_0(t)}{\omega_N} + \sum_{j=1}^{N-1}\frac{\phi^{(1)}_j(t)(-1)^j}{\sin^2(\pi j/2N)} + \frac{\phi^{(1)}_N(t)}{2}\right]. \end{aligned} \tag{8.258}$$

On each limb, the second spatial derivative in (8.198) is computed by first using the Chebyshev cosine transform to obtain the Chebyshev coefficients of ϕ^{(k)}(z,t), then performing two differentiations in spectral space, and lastly applying the inverse Chebyshev cosine transform back into physical space. Finally, V_S = ϕ^{(1)}_N(t) is determined from the somal boundary condition

$$\frac{dV_S}{dt} = -V_S - \chi^{-1}\left[\frac{V_B}{2} + \sum_{j=1}^{N-1}\frac{\phi^{(1)}_j(t)(-1)^j}{\cos^2(\pi j/2N)} + \frac{2N^2+1}{6}V_S\right].$$

A comparison between the analytical and numerical solutions for the branched dendrite is given in figure 16.


Fig. 16. Comparison of true potential and spectral estimate of the potential at the soma. The spectral estimate based on 16 polynomials is almost indistinguishable from the exact solution, while that based on 8 polynomials has a slightly lower peak.

References

Amjad AM, Rosenberg JR, Halliday DM, Conway BA (1997) An extended difference of coherence test for comparing and combining several independent coherence estimates: theory and application to the study of motor units and physiological tremor. J Neurosci Meth 73: 69-79

Bernander Ö, Douglas RJ, Martin KAC, Koch C (1991) Synaptic background activity influences spatio-temporal integration in single pyramidal cells. Proc Natl Acad Sci USA 88: 11569-11573

Bernander Ö, Koch C, Usher M (1994) The effect of synchronised inputs at the single neuron level. Neural Comput 6: 622-641

Box BD, Muller M (1958) A note on the generation of random normal variables. Ann Math Stat 29: 610-611

Burke RE (1997) Equivalent cable representations of dendritic trees: variations on a theme. Soc Neurosci Abstr 23: 654

Burke RE, Fyffe REW, Moschovakis AK (1994) Electrotonic architecture of cat gamma-motoneurons. J Neurophysiol 72(5): 2302-2316

Canuto C, Hussaini MY, Quarteroni A, Zang TA (1988) Spectral methods in fluid mechanics. Springer series in computational physics. Springer, Berlin Heidelberg New York

Clements JD, Redman SJ (1989) Cable properties of cat spinal motoneurones measured by combining voltage clamp, current clamp and intra-cellular staining. J Physiol Lond 409: 63-87

Clenshaw CW (1962) Mathematical tables. Vol 5, National Physical Laboratory, London, HM Stationery Office

Cohen AH, Rossignol S, Grillner S (1988) Neural control of rhythmic movements in vertebrates. John Wiley and Sons, New York

Cox DR, Isham V (1980) Point processes. Monographs in applied probability and statistics, Chapman and Hall, New York

Davis PJ, Rabinowitz P (1983) Methods of numerical integration (2nd edition). Academic Press, Harcourt Brace Jovanovich Publishers, San Diego, New York, London, Sydney, Tokyo, Toronto

De Schutter E (1992) A consumer guide to neuronal modelling software. TINS 15(11): 462-464

Eisenberg RS, Johnson EA (1970) Three dimensional electric field problems in physiology. Prog Biophys Mol Biol 20: 1-65

Evans JD, Kember GC (1998) Analytical solutions to a tapering multi-cylinder somatic shunt cable model for passive neurons. Math Biosci 49(2): 37-65

Evans JD, Kember GC, Major G (1992) Techniques for obtaining analytical solutions to the multi-cylinder somatic shunt cable model for passive neurons. Biophys J 63: 350-365


Evans JD, Major G, Kember GC (1995) Techniques for the application of the analytical solution to the multicylinder somatic shunt cable model for passive neurons. Math Biosci 125(1): 1-50

Falk G, Fatt P (1964) Linear electrical properties of striated muscle fibres observed with intracellular electrodes. Proc Roy Soc Lond B 160: 69-123

Farmer SF, Bremner ER, Halliday DM, Rosenberg JR, Stephens JA (1993) The frequency content of common synaptic inputs to motoneurons studied during voluntary isometric contractions in man. J Physiol 470: 27-55

Fleshman JW, Segev I, Burke RE (1988) Electrotonic architecture of type-identified α-motoneurons in the cat spinal cord. J Neurophysiol 60(1): 60-85

Gear CW (1971) Numerical initial value problems in ordinary differential equations. Prentice Hall, Englewood Cliffs, NJ

Getting PA Reconstruction of small neural networks. In: Koch C, Segev I (eds) Methods in neuronal modelling: from synapses to networks. 1st edn. MIT Press, Cambridge, MA, pp 7-94

Golub GH, Van Loan CF (1990) Matrix computations (2nd edition). Johns Hopkins University Press, Baltimore, Maryland, USA

Halliday DM (1998a) Generation and characterisation of correlated spike trains. Comp Biol Med 28: 43-52

Halliday DM (1998b) Weak stochastic temporal correlation of large scale synaptic input is a major determinant of neuronal bandwidth. Neural Comput (in press)

Hines ML, Carnevale NT (1997) The NEURON simulation environment. Neural Comp 9(6): 1179-1209

Holden AV (1976) Models of the stochastic activity of neurons. Lecture notes in biomathematics 12. Springer, Berlin Heidelberg New York

Kember GC, Evans JD (1995) Analytical solutions to a multicylinder somatic shunt cable model for passive neurons with spines. IMA J Math Applied in Medicine and Biology 2(2): 37-57

Kloeden PE, Platen E (1995) Numerical solution of stochastic differential equations. Springer, Berlin Heidelberg New York

Kloeden PE, Platen E, Schurz H (1994) Numerical solution of SDE through computer experiments. Springer, Berlin Heidelberg New York

Knuth DE (1997) The art of computer programming, Vol II: Seminumerical algorithms. Addison Wesley, Reading MA, Harlow England, Don Mills Ontario, Amsterdam, Tokyo

Major G (1993) Solutions for transients in arbitrarily branching cables: III. Voltage clamp problems. Biophys J 65: 469-491

Major G, Evans D (1994) Solutions for transients in arbitrarily branching cables: IV. Non-uniform electrical parameters. Biophys J 66: 615-634

Major G, Evans D, Jack JJB (1993a) Solutions for transients in arbitrarily branching cables: I. Voltage recording with a somatic shunt. Biophys J 65: 423-449

Major G, Evans D, Jack JJB (1993b) Solutions for transients in arbitrarily branching cables: II. Voltage clamp theory. Biophys J 65: 450-468

Manor Y, Gonczarowski J, Segev I (1991) Propagation of action potentials along complex axonal trees: model and implementation. Biophys J 60: 1411-1423

Marsaglia G, Bray TA (1964) A convenient method for generating normal variables. SIAM Review 6: 260-264

Mascagni MV (1989) Numerical methods for neuronal modelling. In: Koch C, Segev I (eds) Methods in neuronal modelling: from synapses to networks. MIT Press, Cambridge, MA, pp 255-282

Murthy VN, Fetz EE (1994) Effects of input synchrony on the firing rate of a 3-conductance cortical neuron model. Neural Comp 6: 1111-1126

Ogden JM, Rosenberg JR, Whitehead RR (1999) The Lanczos procedure for generating equivalent cables. In: Poznanski RR (ed) Modelling in the neurosciences: from ion channels to neural networks. Harwood Academic, pp 77-229

Perkel DH, Mulloney B (1978a) Calibrating compartmental models of neurons. Amer J Physiol 235: R93-R98

Perkel DH, Mulloney B (1978b) Electrotonic properties of neurons: steady-state compartmental model. J Neurophysiol 41: 621-639

Perkel DH, Mulloney B, Budelli RW (1981) Quantitative methods for predicting neuronal behaviour. Neuroscience 6: 823-837


Rall W (1962) Electrophysiology of a dendritic model. Biophys J 2(2): 145-167

Rall W (1962) Theory of physiological properties of dendrites. Ann NY Acad Sci 96: 1071-1092

Rall W (1964) Theoretical significance of dendritic trees for neuronal input-output relations. In: Reiss R (ed) Neural theory and modelling. Stanford University Press, Stanford, California, pp 73-97

Rall W (1969) Distributions of potential in cylindrical coordinates for a membrane cylinder. Biophys J 9: 1509-1541

Rall W (1977) Core conductor theory and cable properties of neurons. In: Kandel ER, Brookhardt JM, Mountcastle VB (eds) Handbook of Physiology: the Nervous System, Vol I. Williams and Wilkinson, Baltimore, Maryland, pp 39-98

Rall W (1989) Cable theory. In: Koch C, Segev I (eds) Methods in neuronal modelling: from synapses to networks. 1st edn. MIT Press, Cambridge, MA, pp 9-62

Rall W, Burke RE, Holmes WR, Jack JJB, Redman SJ, Segev I (1992) Matching dendritic neuron models to experimental data. Physiol Rev suppl 72(4): S159-S186

Rapp M, Yarom Y, Segev I (1992) The impact of parallel fibre background activity on the cable properties of cerebellar Purkinje cells. Neural Comp 4: 518-533

Regnier A (1966) Les infortunes de la raison. Collection science ouverte aux Editions du Seuil, Paris

Regnier A (1974) La crise du langage scientifique. Editions anthropos, Paris

Rinzel J, Ermentrout GB (1989) Analysis of neural excitability and oscillations. In: Koch C, Segev I (eds) Methods in neuronal modelling: from synapses to networks. MIT Press, Cambridge, MA, pp 35-70

Rosenberg JR, Halliday DM, Breeze P, Conway BA (1998) Identification of patterns of neuronal connectivity: partial spectra, partial coherence, and neuronal interactions. J Neurosci Meth 83: 57-72

Segev I, Fleshman JW, Burke RE (1989) Compartmental models of complex neurons. In: Koch C, Segev I (eds) Methods in neuronal modelling: from synapses to networks. 1st edn. MIT Press, Cambridge, MA, pp 63-96

Segev I, Fleshman JW, Miller JP, Bunow B (1985) Modelling the electrical properties of anatomically complex neurons using a network analysis program: passive membrane. Biol Cybern 53: 27-40

Segev I, Rinzel J, Shepherd G (eds) (1995) The theoretical foundations of dendritic function: selected papers of Wilfrid Rall with commentaries. MIT Press, Cambridge, MA

Tuckwell HC (1988a) Introduction to theoretical neurobiology, Vol I: Linear cable theory and dendritic structure. Cambridge University Press, Cambridge

Tuckwell HC (1988b) Introduction to theoretical neurobiology, Vol II: Nonlinear and stochastic theories. Cambridge University Press, Cambridge

Whitehead RR, Rosenberg JR (1993) On trees as equivalent cables. Proc Roy Soc Lond B 252: 103-108

Wichmann BA, Hill ID (1982) Appl Statistics 3(2): 88-90


Notation and definitions

a ∈ A : a is a member of set A
[a, b] : closed interval a ≤ x ≤ b
(a, b) : open interval a < x < b
(a, b] : semi-open interval a < x ≤ b
O(f) : expression divided by f remains bounded as f → 0
o(f) : expression divided by f tends to zero as f → 0
A × B : set of pairs (a, b) where a ∈ A and b ∈ B
V(x, t) : transmembrane potential at time t and position x
J(x, t) : axial current density (current per unit area of the dendritic cross-section) at time t and position x
g_A : axial conductivity of intercellular material
J_IC : injected current per unit length of dendrite
J_IVDC : intrinsic voltage-dependent current per unit area of dendritic membrane
J_SC : synaptic current per unit area of dendritic membrane
A(x) : dendritic cross-sectional area at position x
P(x) : dendritic perimeter at position x
A_S : surface area of soma
−J_soma : current density into soma across membrane surface
ϱ : resistivity (Ohm·m)
δ(x − a) : Dirac delta function; δ(x − a) = 0 if x ≠ a, and ∫_{−∞}^{∞} δ(x − a) dx = 1, ∫_{−∞}^{∞} δ(x − a) f(x) dx = f(a)
C_S : capacitance per unit area of soma
V_S(t) : transmembrane potential of soma
E_L : resting membrane potential
E_α : equilibrium potential for ionic species α
g_S : conductance per unit area of soma (Ohm⁻¹ m⁻²)
J_SI : current injected into soma
g_M : passive membrane conductance
g_NL : non-linear (active) membrane conductance
g_L : leakage conductance
τ : time constant of a dendritic limb, τ = C_M/g_M
λ : length constant of a dendritic limb, λ² = g_A A/(g_M P)
L : length of a dendritic limb
l : nondimensional or electrotonic length of a dendritic limb, l = ∫_0^L √(g_M P(s)/(g_A A(s))) ds
g-value : g = √(g_M g_A A P)
d : diameter of dendrite
c-value : c = d^{3/2}
z_i : designated node on a dendritic tree
v_i : membrane potential at node z_i
g_i : g-value of a dendritic cylinder
r_j^{−1} : sum of the g-values of all the segments of the finite difference representation that contain z_j
ε : soma dendritic conductance ratio, ε = g_M C_S/(g_S C_M) = τ_S/τ
A⁻¹ : inverse of matrix A
Aᵀ : transpose of matrix A
Ā : complex conjugate of matrix A
A(p, q) : (p, q)th entry of matrix A
H(t) : Heaviside unit step function, H(t) = 1 if t > 0 and 0 if t < 0
⟨u, v⟩ : inner product of vectors u and v
E[x] : expectation of random variable x
Var[x] : variance of random variable x
Cov[x(s), x(u)] : covariance of the values of the random variable x at s and u
δ_ij : Kronecker delta, defined by δ_ij = 1 if i = j and 0 otherwise


Appendix I

This appendix gives the C computer program that was used to determine the properties of an encoder in the absence of correlated inputs. The last two functions in the program generate normal deviates and uniform random numbers in (0, 1) respectively, and are implementations of the algorithms described in the sections on pages 247 and 248.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/*
**  Builds SPIKE TRAINS using an integrate-to-threshold-and-fire
**  methodology for the ENCODER \tau dz_t = G(A dt + B dW_t) - z_t dt.
*/

#define ZMAX 1.0    /* Threshold at which ENCODER fires */
#define GAIN 1.0    /* Gain of the ENCODER */
#define X0   0.0    /* Starting state for ENCODER */
#define H    0.001  /* Step size for numerical integration */
#define TAU  0.025  /* ENCODER time constant */
#define A    1.269  /* Mean white noise input */
#define B    0.307  /* STD DEV of white noise input */
#define NSIM 10000  /* Number of simulations to be done */
#define FLAG 0      /* Set FLAG=0 for pseudo-white noise
                       Set FLAG=1 for white noise */

int main( void )
{
    int i, nspike=0;
    double spike_time[NSIM], mu, sd, tnow=0.0, zold=X0, znow=X0,
           tmp, sigma, coeff01, coeff02, coeff11, coeff12,
           coeff13, normal(double,double);

    /* Step 1. - Initialise counters, times and ENCODER state */
    sigma = sqrt(H);
    coeff01 = coeff11 = TAU/(H+TAU);
    coeff02 = GAIN*H/(H+TAU);
    coeff12 = GAIN*H*A/(H+TAU);
    coeff13 = GAIN*B/(H+TAU);

    /* Step 2. - Simulate ENCODER operation for pseudo-white noise */
    do {
        if ( znow < ZMAX ) {            /* ENCODER below threshold */
            zold = znow;
            znow = coeff01*znow+coeff02*normal(A,B);
            tnow += H;
        } else if ( znow == ZMAX ) {    /* ENCODER on threshold */
            spike_time[nspike] = tnow;
            tnow = znow = zold = 0.0;
            nspike++;
        } else {                        /* ENCODER above threshold */
            tmp = H*(znow-ZMAX)/(znow-zold);
            spike_time[nspike] = tnow-tmp;
            nspike++;
            znow = fmod(znow,ZMAX);
            zold = znow;
            tnow = tmp;
        }
    } while ( nspike<NSIM && FLAG==0 );

    /* Step 3. - Simulate ENCODER operation for white noise */
    do {
        if ( znow < ZMAX ) {            /* ENCODER below threshold */
            zold = znow;
            znow = coeff11*znow+coeff12+coeff13*normal(0.0,sigma);
            tnow += H;
        } else if ( znow == ZMAX ) {    /* ENCODER on threshold */
            spike_time[nspike] = tnow;
            tnow = znow = zold = 0.0;
            nspike++;
        } else {                        /* ENCODER above threshold */
            tmp = H*(znow-ZMAX)/(znow-zold);
            spike_time[nspike] = tnow-tmp;
            nspike++;
            znow = fmod(znow,ZMAX);
            zold = znow;
            tnow = tmp;
        }
    } while ( nspike<NSIM && FLAG==1 );

    /* Step 4. - Compute MEAN (mu) and STD DEV (sd) of spike train */
    for ( mu=0.0,i=0 ; i<NSIM ; i++ ) mu += spike_time[i];
    mu /= ((double) NSIM);
    for ( sd=0.0,tmp=0.0,i=0 ; i<NSIM ; i++ ) {
        sigma = spike_time[i]-mu;
        tmp += sigma;
        sd += sigma*sigma;
    }
    sigma = ((double) NSIM);
    sd = (sd-tmp*tmp/sigma)/(sigma-1.0);
    sd = sqrt(sd);
    printf("\n Mean spike rate     %6.1lf",1.0/mu);
    printf("\n CoV of spike_train  %6.1lf",sd/mu);
    exit(0);
}
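The update coefficients computed in Step 1 follow from an implicit Euler discretisation of the encoder equation; the derivation below is our reconstruction, not taken from the chapter. Starting from

$$\tau\,dz_t = G\left(A\,dt + B\,dW_t\right) - z_t\,dt,$$

take a step of length $H$, evaluate the decay term at the new time level, and replace $dW_t$ by an increment $\Delta W_n \sim N(0,\sqrt{H})$:

$$\tau\,(z_{n+1}-z_n) = G A H + G B\,\Delta W_n - H z_{n+1},$$

which rearranges to

$$z_{n+1} = \frac{\tau}{H+\tau}\,z_n + \frac{G H A}{H+\tau} + \frac{G B}{H+\tau}\,\Delta W_n .$$

The three coefficients are exactly coeff11, coeff12 and coeff13, with normal(0.0, sigma) supplying $\Delta W_n$. In the pseudo-white-noise loop (FLAG=0) the whole forcing $G(A\,dt + B\,dW_t)$ is instead replaced by $G x_n\,dt$ with $x_n \sim N(A,B)$, giving $z_{n+1} = \mathrm{coeff01}\;z_n + \mathrm{coeff02}\;x_n$.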

/************************************************************
    Function returns Gaussian deviate.
 ************************************************************/
double normal( double mean, double sigma)
{
    static int start=1;
    static double g1, g2;
    double v1, v2, w, ran( int);

    if ( start ) {
        do {
            v1 = 2.0*ran(1)-1.0;
            v2 = 2.0*ran(1)-1.0;
            w = v1*v1+v2*v2;
        } while ( w==0.0 || w>=1.0 );
        w = log(w)/w;
        w = sqrt(-w-w);
        g1 = v1*w;
        g2 = v2*w;
        start = !start;
        return (mean+sigma*g1);
    } else {
        start = !start;
        return (mean+sigma*g2);
    }
}

/*************************************************************
    Function returns primitive uniform random number using
    algorithm AS183 by Wichmann and Hill Appl. Stat. (1982)
 *************************************************************/
double ran( int n)
{
    static int start=1;
    void srand( unsigned int);
    static unsigned long int ix, iy, iz;
    double temp;

    if ( start ) {
        srand( ((unsigned int) abs(n)) );
        ix = rand( );
        iy = rand( );
        iz = rand( );
        start = 0;
    }

    /* 1st item of modular arithmetic */
    ix = (171*ix)%30269;

    /* 2nd item of modular arithmetic */
    iy = (172*iy)%30307;

    /* 3rd item of modular arithmetic */
    iz = (170*iz)%30323;

    /* Generate random number in (0,1) */
    temp = ((double) ix)/30269.0+((double) iy)/30307.0
          +((double) iz)/30323.0;
    return fmod(temp,1.0);
}