Abstract
Randomly connected populations of spiking neurons display a rich variety of dynamics. However, much of the current modeling and theoretical work has focused on two dynamical extremes: on one hand homogeneous dynamics characterized by weak correlations between neurons, and on the other hand total synchrony characterized by large populations firing in unison. In this paper we address the conceptual issue of how to mathematically characterize the partially synchronous “multiple firing events” (MFEs) which manifest in between these two dynamical extremes. We further develop a geometric method for obtaining the distribution of magnitudes of these MFEs by recasting the cascading firing event process as a first-passage time problem, and deriving an analytical approximation of the first passage time density valid for large neuron populations. Thus, we establish a direct link between the voltage distributions of excitatory and inhibitory neurons and the number of neurons firing in an MFE that can be easily integrated into population–based computational methods, thereby bridging the gap between homogeneous firing regimes and total synchrony.
Notes
Donsker’s theorem states that the fluctuations of an empirical CDF about its theoretical CDF converge to Gaussian random variables with zero mean and a certain variance. This sequence of Gaussian random variables can be formulated in terms of a standard Brownian bridge, a continuous-time stochastic process on the unit interval conditioned to begin and end at zero.
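The statement in this note is easy to illustrate numerically; the following minimal sketch (our own illustrative choices of sample size, replication count, and Uniform(0,1) samples) compares the scaled empirical-CDF fluctuation at one point with the mean and variance of the Brownian bridge value there.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000        # sample size behind each empirical CDF
trials = 4000   # independent replications
tau = 0.3       # evaluation point on the unit interval

# Scaled fluctuation of the empirical CDF of Uniform(0,1) samples about
# the theoretical CDF F(tau) = tau, as in Donsker's theorem.
samples = rng.random((trials, N))
fluct = np.sqrt(N) * ((samples < tau).mean(axis=1) - tau)

# Donsker's theorem: fluct converges in law to B(tau), the value of a
# standard Brownian bridge at tau, i.e. Gaussian with mean 0 and
# variance tau * (1 - tau).
print(fluct.mean(), fluct.var(), tau * (1 - tau))
```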
References
Amari, S. (1974). A method of statistical neurodynamics. Kybernetik, 14, 201–215.
Amit, D., & Brunel, N. (1997). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex, 7, 237–252.
Anderson, J., Carandini, M., Ferster, D. (2000). Orientation tuning of input conductance, excitation, and inhibition in cat primary visual cortex. Journal of Neurophysiology, 84, 909–926.
Battaglia, D., & Hansel, D. (2011). Synchronous chaos and broad band gamma rhythm in a minimal multi-layer model of primary visual cortex. PLoS Computational Biology, 7(10), e1002176.
Benayoun, M., Cowan, J.D., van Drongelen, W., Wallace, E. (2010). Avalanches in a stochastic model of spiking neurons. PLoS Computational Biology, 6(7), e1000846.
Brette, R., Rudolph, M., Carnevale, T., Hines, M., Beeman, J., Bower, J., Diesmann, M., Morrison, A., Goodman, P., Harris Jr., F., et al. (2007). Simulation of networks of spiking neurons: a review of tools and strategies. Journal of Computational Neuroscience, 23(3), 349–398.
Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8, 183–208.
Brunel, N., & Hakim, V. (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11, 1621–1671.
Buzsáki, G., & Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science, 304, 1926–1929.
Cai, D., Rangan, A., McLaughlin, D. (2005). Architectural and synaptic mechanisms underlying coherent spontaneous activity in V1. Proceedings of the National Academy of Sciences, 102(16), 5868–5873.
Cai, D., Tao, L., Rangan, A. (2006). Kinetic theory for neuronal network dynamics. Communications Mathematical Sciences, 4, 97–127.
Cai, D., Tao, L., Shelley, M., McLaughlin, D. (2004). An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex. Proceedings of the National Academy of Sciences, 101(20), 7757–7762.
Cardanobile, S., & Rotter, S. (2010). Multiplicatively interacting point processes and applications to neural modeling. Journal of Computational Neuroscience, 28, 267–284.
Churchland, M.M., et al. (2010). Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature Neuroscience, 13(3), 369–378.
DeVille, R., & Peskin, C. (2008). Synchrony and asynchrony in a fully stochastic neural network. Bulletin of Mathematical Biology, 70(6), 1608–1633.
DeWeese, M., & Zador, A. (2006). Non-Gaussian membrane potential dynamics imply sparse, synchronous activity in auditory cortex. Journal of Neuroscience, 26(47), 12206–12218.
Donsker, M. (1952). Justification and extension of Doob's heuristic approach to the Kolmogorov-Smirnov theorems. Annals of Mathematical Statistics, 23(2), 277–281.
Durbin, J. (1985). The first passage density of a continuous Gaussian process to a general boundary. Journal of Applied Probability, 22, 99–122.
Durbin, J., & Williams, D. (1992). The first passage density of the Brownian motion process to a curved boundary. Journal of Applied Probability, 29, 291–304.
Eggert, J., & Hemmen, J. (2001). Modeling neuronal assemblies: theory and implementation. Neural Computation, 13, 1923–1974.
Fusi, S., & Mattia, M. (1999). Collective behavior of networks with linear integrate and fire neurons. Neural Computation, 11, 633–652.
Gerstner, W. (1995). Time structure of the activity in neural network models. Physical Review E, 51, 738–758.
Gerstner, W. (2000). Population dynamics of spiking neurons: fast transients, asynchronous states and locking. Neural Computation, 12, 43–89.
Hansel, D., & Sompolinsky, H. (1996). Chaos and synchrony in a model of a hypercolumn in visual cortex. Journal of Computational Neuroscience, 3, 7–34.
Knight, B. (1972). Dynamics of encoding in a population of neurons. Journal of General Physiology, 59, 734–766.
Kriener, B., Tetzlaff, T., Aertsen, A., Diesmann, M., Rotter, S. (2008). Correlations and population dynamics in cortical networks. Neural Computation, 20, 2185–2226.
Krukowski, A., & Miller, K. (2000). Thalamocortical NMDA conductances and intracortical inhibition can explain cortical temporal tuning. Nature Neuroscience, 4, 424–430.
Lampl, I., Reichova, I., Ferster, D. (1999). Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron, 22, 361–374.
Lei, H., Riffell, J., Gage, S., Hildebrand, J. (2009). Contrast enhancement of stimulus intermittency in a primary olfactory network and its behavioral significance. Journal of Biology, 8, 21.
Mazzoni, A., Broccard, F., Garcia-Perez, E., Bonifazi, P., Ruaro, M., Torre, V. (2007). On the dynamics of the spontaneous activity in neuronal networks. PLoS One, 2(5), e439.
Murthy, A., & Humphrey, A. (1999). Inhibitory contributions to spatiotemporal receptive-field structure and direction selectivity in simple cells of cat area 17. Journal of Neurophysiology, 81, 1212–1224.
Newhall, K., Kovačič, G., Kramer, P., Cai, D. (2010). Cascade-induced synchrony in stochastically driven neuronal networks. Physical Review E, 82, 041903.
Nykamp, D., & Tranchina, D. (2000). A population density approach that facilitates large scale modeling of neural networks: analysis and application to orientation tuning. Journal of Computational Neuroscience, 8, 19–50.
Omurtag, A., Knight, B., Sirovich, L. (2000). On the simulation of a large population of neurons. Journal of Computational Neuroscience, 8, 51–63.
Petermann, T., Thiagarajan, T., Lebedev, M., Nicolelis, M., Chialvo, D., Plenz, D. (2009). Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proceedings of the National Academy of Sciences, 106, 15921–15926.
Rangan, A., & Cai, D. (2007). Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 22(1), 81–100.
Rangan, A., & Young, L. (2012). A network model of V1 with collaborative activity. PNAS Submitted.
Rangan, A., & Young, L. (2013a). Dynamics of spiking neurons: between homogeneity and synchrony. Journal of Computational Neuroscience, 34(3), 433–460. doi:10.1007/s10827-012-0429-1.
Rangan, A., & Young, L. (2013b). Emergent dynamics in a model of visual cortex. Journal of Computational Neuroscience. doi:10.1007/s10827-013-0445-9.
Renart, A., Brunel, N., Wang, X. (2004). Mean field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. Computational Neuroscience: A comprehensive approach.
Riffell, J., Lei, H., Hildebrand, J. (2009). Neural correlates of behavior in the moth Manduca sexta in response to complex odors. Proceedings of the National Academy of Sciences, 106, 19219–19226.
Riffell, J., Lei, H., Christensen, T., Hildebrand, J. (2009). Characterization and coding of behaviorally significant odor mixtures. Current Biology, 19, 335–340.
Samonds, J., Zhou, Z., Bernard, M., Bonds, A. (2005). Synchronous activity in cat visual cortex encodes collinear and cocircular contours. Journal of Neurophysiology, 95, 2602–2616.
Sillito, A. (1975). The contribution of inhibitory mechanisms to the receptive field properties of neurons in the striate cortex of the cat. Journal of Physiology, 250, 305–329.
Singer, W. (1999). Neuronal synchrony: a versatile code for the definition of relations? Neuron, 24, 49–65.
Sompolinsky, H., & Shapley, R. (1997). New perspectives on the mechanisms for orientation selectivity. Current Opinion in Neurobiology, 7, 514–522.
Sun, Y., Zhou, D., Rangan, A., Cai, D. (2010). Pseudo-Lyapunov exponents and predictability of Hodgkin-Huxley neuronal network dynamics. Journal of Computational Neuroscience, 28, 247–266.
Treves, A. (1993). Mean-field analysis of neuronal spike dynamics. Network, 4, 259–284.
Wilson, H., & Cowan, J. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12, 1–24.
Wilson, H., & Cowan, J. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13, 55–80.
Worgotter, F., & Koch, C. (1991). A detailed model of the primary visual pathway in the cat: comparison of afferent excitatory and intracortical inhibitory connection schemes for orientation selectivity. Journal of Neuroscience, 11, 1959–1979.
Yu, Y., & Ferster, D. (2010). Membrane potential synchrony in primary visual cortex during sensory stimulation. Neuron, 68, 1187–1201.
Yu, S., Yang, H., Nakahara, H., Santos, G., Nikolic, D., Plenz, D. (2011). Higher-order interactions characterized in cortical activity. Journal of Neuroscience, 31, 17514–17526.
Zhang, J., Rangan, A., Cai, D., et al. (In preparation). A coarse-grained framework for spiking neuronal networks: between homogeneity and synchrony.
Zhou, D., Sun, Y., Rangan, A., Cai, D. (2008). Network induced chaos in integrate-and-fire neuronal ensembles. Physical Review E, 80(3), 031918.
Acknowledgments
The authors would like to thank David Cai for useful discussions. J.Z. is partially supported by NSF grant DMS-1009575; K.N. is supported by the Courant Institute; D.Z. is supported by the Shanghai Pujiang Program (Grant No. 10PJ1406300), NSFC (Grants No. 11101275 and No. 91230202), and New York University Abu Dhabi Research Grant G1301; A.R. is supported by NSF grant DMS-0914827.
Action Editor: Gaute T. Einevoll
Appendixes
1.1 Appendix A: Firing events in I&F networks
In order to accurately simulate the dynamics of Eq. (1a), we need to resolve the neurons’ firing sequence correctly, even though all neurons in a single MFE fire at the same instant of time. The algorithm used must produce an MFE magnitude consistent with the cascade condition (6) in Sec. 3.1. We present one such algorithm here, in which the neuron with the highest voltage fires first and the voltages of the remaining neurons are then updated. We also emphasize that a neuron that has already fired in an MFE does not receive input from neurons firing after it within the same MFE: the neurons have a short refractory period, so no neuron can fire more than once in an MFE.
To resolve the firing sequence of an MFE, we start from the sets \(\left \{v_{j}\right \}_{j=1}^{N_{E}}\) and \(\left \{w_{j}\right \}_{j=1}^{N_{I}}\) of the excitatory and inhibitory neuronal voltages at the time one excitatory voltage is above threshold, indicating that this neuron is about to fire. Then, the following algorithm is used:
1. Find the \(k_E\) excitatory and \(k_I\) inhibitory voltages within the sets \(\{v_j\}\) and \(\{w_j\}\) that are above the threshold, \(V_T\).
2. If both \(k_E\) and \(k_I\) are zero, stop; the MFE has ended. Otherwise, find the neuron with the largest voltage, \(V_{\max}\), among the \(k_E\) excitatory and \(k_I\) inhibitory voltages above threshold.
3. The neuron with voltage \(V_{\max}\) fires and is reset to \(V_R\). If it is excitatory (inhibitory), it is removed from the set \(\{v_j\}\) (\(\{w_j\}\)), and the voltages of the neurons that have not yet fired are updated: each voltage in \(\{v_j\}\) is increased by \(S^{EE}\) (decreased by \(S^{EI}\)), and each voltage in \(\{w_j\}\) is increased by \(S^{IE}\) (decreased by \(S^{II}\)).
4. If either set \(\{v_j\}\) or \(\{w_j\}\) is non-empty, return to Step 1. Otherwise, stop.
The MFE magnitudes for the excitatory and inhibitory populations are obtained by subtracting the number of neurons that did not fire in the MFE from the total numbers of neurons in the two populations,
where # denotes the number of elements in the set. Note that the sets in Eq. (36) only include the voltages of non-firing neurons, as detailed in Step 3 of the algorithm.
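The cascade above can be sketched in a few lines of Python; this is a minimal illustration, not the authors' production code, and the function name, coupling constants, and test voltages are our own illustrative choices.

```python
import numpy as np

def resolve_mfe(v, w, VT, VR, SEE, SEI, SIE, SII):
    """Resolve one MFE cascade; return the excitatory and inhibitory
    MFE magnitudes (numbers of E and I neurons that fire)."""
    v, w = np.asarray(v, float).copy(), np.asarray(w, float).copy()
    fired_E = np.zeros(len(v), bool)
    fired_I = np.zeros(len(w), bool)
    while True:
        # Step 1: not-yet-fired neurons above threshold
        above_E = np.where(~fired_E & (v >= VT))[0]
        above_I = np.where(~fired_I & (w >= VT))[0]
        # Step 2: stop when no neuron remains above threshold
        if len(above_E) == 0 and len(above_I) == 0:
            break
        # Largest voltage among all above-threshold neurons
        cand = [(v[i], 'E', i) for i in above_E] + \
               [(w[i], 'I', i) for i in above_I]
        _, typ, idx = max(cand)
        # Step 3: that neuron fires, is reset, and updates the others;
        # fired neurons receive no further input within the MFE
        if typ == 'E':
            fired_E[idx] = True
            v[idx] = VR
            v[~fired_E] += SEE
            w[~fired_I] += SIE
        else:
            fired_I[idx] = True
            w[idx] = VR
            v[~fired_E] -= SEI
            w[~fired_I] -= SII
    return int(fired_E.sum()), int(fired_I.sum())

# Illustrative cascade: one E neuron above threshold recruits a second E
# neuron and an I neuron; the I spike then curtails the event.
LE, LI = resolve_mfe([1.1, 0.8], [0.9], VT=1.0, VR=0.0,
                     SEE=0.5, SEI=0.4, SIE=0.7, SII=0.3)
```

Note that resetting a fired neuron to \(V_R\) and masking it out of the updates implements the refractory rule stated above: a neuron that has fired receives no further input within the same MFE.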
1.2 Appendix B: Derivation of the geometric method
Starting from condition (6) in the main text, we describe here how to obtain a single condition, and therefore the MFE magnitude described by the single intersection point of G(v) defined in Eq. (14) and the line l(v) in Eq. (5). From the last points v < V T and w < V T where condition (6) is satisfied,
we have
Subtracting Eq. (40) from Eq. (39) we have
which can be written as
where \(\delta = S^{II}S^{EE}/S^{IE} - S^{EI}\). The first two terms on the RHS suggest the transformation of the inhibitory voltages defined in Eq. (7) in the main text,
This, together with the fact that one can show the empirical CDF
allows Eq. (41) to be written as
Next, we must consider the two cases of δ > 0 and δ < 0 separately.
Case 1
If δ > 0, we further transform \(\hat {w}\) by
appearing as Eq. (9) in the main text. Then, Eq. (44) is simply
and we substitute this into Eq. (39), together with Eq. (43) to obtain
Next, we replace the function of \(\hat {w}\) by a function of \(\bar {w}\). Suppose we can invert the transformation \(\bar {w}= \hat {w} - \delta N_{I} \left (1-\hat {F}_{I}(\hat {w})\right )\) to obtain
(Note that since \(h^{-1}(x) = x - \delta N_{I}(1-\hat {F}_{I}(x))\), we have \(\left [h^{-1}(x)\right ]' = 1+\delta N_{I}\hat {F}'_{I}(x)>0\) for \(\delta > 0\). Therefore, \(h^{-1}\) is a monotonically increasing map, and h is well defined and single-valued.) Then we have
Changing the integration to the variable y defined by y = h − 1(z), we have
Taylor expanding h(y) and keeping only the linear term, we have \(h(y)-h(\bar {w}) = C_{0}(y-\bar {w})\) with \(h'(y) = C_{0}\), resulting in
Substituting Eq. (49) into Eq. (47), we have
corresponding to Eq. (12) in the main text, where we defined \(\bar {v}=v\) so that \(\bar {F}_{E}(v) = F_{E}(v)\). Solving this for \(\bar {w}\) is equivalent to finding the intersection point between the line \(l(v) = 1+\frac {1}{N_ES^{EE}}(v-V_T)\) and the function
Case 2
If δ < 0, then \(\bar {w}\) defined in Eq. (45) may be larger than V T , which we would like to avoid. Rather, we solve Eq. (44) for \(\hat {w}\),
and transform v to (Eq. (10) in the main text)
so that Eq. (51) becomes
Using Eq. (53) to rewrite Eq. (52) in terms of \(\bar {v}\) and solving for v gives
Using Eq. (54) and that \(F_{I}(w) = \hat {F}_{I}(\hat {w})\), Eq. (40) becomes
corresponding to Eq. (12) in the main text, where we defined \(\bar {w}=\hat {w}\), so that \(\bar {F}_{I}(v) = \hat {F}_{I}(v)\).
Now, we similarly replace the function of v by a function of \(\bar {v}\). If we denote Eq. (54) by
then change the integration variable in
to y using z = g(y) and \(v_{j} = g(\bar {v}_j)\), we have that
following the same argument used to derive Eq. (49). Now, Eq. (55) becomes
Solving this for \(\bar {v}\) is equivalent to finding the intersection point between the line \(l(v) = 1 + \frac {1}{N_ES^{EE}} (v-V_T)\) and the function
Combining these two cases, we arrive at the MFE magnitude defined via the intersection of the function G(v) defined in Eq. (14) in the main text, and the line l(v) defined in Eq. (5) in the main text.
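Once G(v) and l(v) are in hand, the intersection can be found by any one-dimensional root finder; below is a minimal bisection sketch. The smooth logistic stand-in for G(v) and all constants are our own illustrative choices, not the G of Eq. (14).

```python
import numpy as np

def intersect(G, l, v_lo, v_hi, tol=1e-10):
    """Bisection for a root of G(v) - l(v) on [v_lo, v_hi], assuming
    the difference changes sign on the bracket."""
    f = lambda u: G(u) - l(u)
    a, b = v_lo, v_hi
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Illustrative constants and a smooth stand-in for G(v)
VT, NE, SEE = 1.0, 100, 0.02
l = lambda u: 1 + (u - VT) / (NE * SEE)        # the line l(v) of Eq. (5)
G = lambda u: 1 / (1 + np.exp(-8 * (u - 0.5)))  # stand-in curve
v_star = intersect(G, l, -1.0, VT)
```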
1.3 Appendix C: Analytical formula of first passage time
In this appendix, we explain how to recast the passage of a generic two-dimensional anisotropic Brownian motion across a moving boundary as a one-dimensional Brownian motion problem, in the spirit of Durbin’s papers (Durbin 1985; Durbin and Williams 1992).
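As a quick numerical sanity check of the rescaling used below, assuming a diagonal diffusion matrix \(\beta = \mathrm{diag}(\beta_x, \beta_y)\) (the values are illustrative), dividing each increment by its diffusion coefficient yields unit diffusion in both directions:

```python
import numpy as np

rng = np.random.default_rng(3)
beta_x, beta_y = 2.0, 0.5
dt, nsteps = 1e-3, 200000

# Increments of anisotropic Brownian motion (drift omitted; it does
# not affect the diffusion scaling)
dx = beta_x * rng.normal(0.0, np.sqrt(dt), nsteps)
dy = beta_y * rng.normal(0.0, np.sqrt(dt), nsteps)

# Rescaled increments of z = beta^{-1} x have unit diffusion
# coefficient in both directions, so z is isotropic Brownian motion
dzx, dzy = dx / beta_x, dy / beta_y
```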
We define \(\mathbf{x} = (x, y)\) to be two-dimensional anisotropic Brownian motion with linear drift, given by the solution to
with initial starting point x(0) = 0 for simplicity. If \(\beta_x \neq \beta_y\), then we define the isotropic version of this process via \(\mathbf {z}=( \hat {x},\hat {y})\), with \(\hat {x}=x/\beta _{x}\), \(\hat {y}=y/\beta _{y}\), or \(\mathbf{z} = \beta^{-1}\mathbf{x}\), with
We will also define
We consider the first crossing density of x to a linear boundary \( \mathcal {A}\) described by the normal constraint
where \(\bar {\eta }=\left [\bar {\eta }_{x},\bar {\eta }_{y},\bar {\eta }_{t}\right ]^{\intercal }\) is the unit normal to \(\mathcal {A}\) in (x, y, t) space and k is the constant ‘offset’ of the linear plane (i.e., the distance between \(\mathcal {A}\) and the origin (0, 0, 0)). We also define \(\eta _{\perp }=\left [ \bar {\eta }_{x},\bar {\eta }_{y} \right ]^{\intercal }\) (not a unit vector) to be the projection of \(\bar {\eta }\) onto the x-y plane. So we know the process \((x(t), y(t))\) crosses \(\mathcal {A}\) when
or where
or when
That is, x crosses \(\mathcal {A}\) when z crosses \(\bar {\beta } ^{-1}\mathcal {A}\) (i.e., the boundary perpendicular to \(\bar {\beta }\bar {\eta } \)). Now z is isotropic Brownian motion (plus a linear drift), so z can be described in terms of \(z_{\perp}\) and \(z_{\parallel}\), its components in the directions parallel and perpendicular to \(\eta_{\perp}\), respectively. Each of the processes \(z_{\perp}\) and \(z_{\parallel}\) is an independent Brownian motion (plus a linear drift). So the probability of z hitting the rescaled boundary \(\bar {\beta }^{-1}\mathcal {A}\) at time t is just the probability of \(z_{\perp}\) reaching the moving (scalar) boundary \( \left [ \bar {\beta }^{-1}\mathcal {A}\right ]_{\perp }=\left ( k-\bar {\eta } _{t}t\right ) /\left \vert \beta \eta _{\perp }\right \vert \) at time t. That is to say
According to Durbin (1985), this probability is simply
Now if z crosses at time t at a particular point \(\bar {\beta } ^{-1}a( t) \), then two things must happen. First, \(z_{\perp}\) must equal \(\left [ \bar {\beta }^{-1}a( t) \right ]_{\perp }\), which is also \(\left ( k-\bar {\eta }_{t}t\right ) /\left \vert \beta \eta _{\perp }\right \vert \) at time t. Second, \(z_{\parallel}\) must equal \(\left [ \bar {\beta } ^{-1}a( t) \right ]_{\parallel }\) at time t, where \(\left [ \bar {\beta }^{-1}a( t) \right ]_{\parallel}\) is the component of \(\bar {\beta }^{-1}a(t)\) perpendicular to \(\eta_{\perp}\). Thus, the probability that the process z crosses at time t at a particular point \(\bar {\beta }^{-1}a( t) \) on \(\bar {\beta }^{-1} \mathcal {A}\) is given by
Now, as we discussed above,
For the second term, we have
and together, since z ⊥ and z ∥ are independent,
As we discussed above, if z crosses at time t at a point \(\bar {\beta } ^{-1}a( t) \), then x crosses at time t at a(t). However, the density \( p\left ( t\text {, }\mathbf {z}\text { crosses at }\bar {\beta } ^{-1}a( t) \right )\) is not equivalent to the density \(p\left(t\text{, }\mathbf{x}\text{ crosses at }a(t)\right)\). Rather, we have
where \(\bar {\eta }_{\parallel }=\left [ -\bar {\eta }_{y},\bar {\eta }_{x},0 \right ]^{\intercal }\), and \(\eta _{\parallel }=\left [ -\bar {\eta }_{y},\bar {\eta }_{x}\right ]^{\intercal }\) (not a unit vector) lies along \(\mathcal {A}\) within the x-y plane and is perpendicular to \(\eta_{\perp}\). Comparing the lengths of these line segments, we see that the length \(\left | \bar {\beta }^{-1}a(t),\; \bar {\beta }^{-1}\left (a(t)+\delta \cdot \bar {\eta }_{\parallel }\right ) \right | =\delta \cdot \left \vert \beta ^{-1}\eta _{\parallel }\right \vert \) and the length \(\left | a(t),\; a(t)+\delta \cdot \bar {\eta }_{\parallel }\right |=\delta \cdot \left \vert \eta _{\parallel }\right \vert \). Using this relationship, we conclude that the density
This in turn implies that the probability that the process x crosses \(\mathcal {A}\) at time t at a particular point a(t) on \( \mathcal {A}\) is given by the density
Finally, since
where β is the determinant of the matrix \(\bar{\beta}\). The density
Using the language described in Section 3.3, the above can be written as
leading to
where \(f(t, a\,|\,0, x_{0})\) is an (unconditioned) density for the process x at (t, a); see details in Appendix D.
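The reduction to a scalar crossing problem can be checked numerically. Below is a minimal Monte Carlo sketch (our own illustrative parameters, not the paper's) for a standard Brownian motion and a moving linear boundary, compared against the classical closed-form crossing probability for a drifted Brownian motion reaching a fixed level.

```python
import numpy as np
from math import erf, exp, sqrt

rng = np.random.default_rng(1)

# First passage of standard Brownian motion to the moving linear boundary
# c(t) = a - b*t, equivalent to Brownian motion with drift b reaching a.
a, b = 1.0, 0.5
dt, T, paths = 5e-3, 3.0, 10000

steps = rng.normal(0.0, sqrt(dt), (paths, int(T / dt)))
x = np.cumsum(steps, axis=1)
t = dt * np.arange(1, x.shape[1] + 1)
crossed = (x >= a - b * t).any(axis=1)
p_mc = crossed.mean()

# Closed-form probability of crossing by time T:
# P = Phi((bT - a)/sqrt(T)) + exp(2ab) * Phi((-a - bT)/sqrt(T))
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
p_exact = (Phi((b * T - a) / sqrt(T))
           + exp(2 * a * b) * Phi((-a - b * T) / sqrt(T)))
```

The discrete-time simulation slightly undercounts crossings (the path is only monitored on the grid), so the Monte Carlo estimate sits a little below the closed form.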
We can now fill in the geometry for the specific problem at hand. The points (x, y) on the surface \(\mathcal {A}\) at time t were given in Eq. (23) and come from the constraint \(\bar {G}(t) = \bar {l}(t)\). The vector
is normal to the surface in 3D space at the point (x, y, t). The 2-component normal to \(\mathcal {A}\), restricted to the x-y plane, is
and the 2-component vector perpendicular to n ⊥ is
where | η | is the length of the vector η. One can also think of \(\mathcal {A}\) as a line parallel to \(-\gamma x + \alpha y\) in the x-y plane, with a time-dependent offset of \(k/N_{E}+\gamma \bar {f}_{E}( t) -\alpha \bar {f}_{I}( t) -\frac {1}{ N_{E}S^{EE}}t\). So, around any point \((a(t), t) = (a_{x}, a_{y}, t)\) on \(\mathcal {A}\), the point
is on the linearized boundary tangent to \(\mathcal {A}\) at the point a(t). The factor
can be thought of as the speed at which the boundary propagates in the normal direction as s (and hence t − s) changes. The unit vector in the x-y plane \( \mathbf {u} = \frac {[-\gamma ,\alpha ] }{\sqrt {\gamma ^{2}+\alpha ^{2}}} \) is perpendicular to \(\widehat {\mathcal {A}}\) and therefore parallel to \(\mathbf{n}_{\perp}\). Therefore,
We now change variables to the \(\mathbf{n}_{\perp}\) and \(\mathbf{n}_{\parallel}\) directions by first changing to variables in which the stochastic process \((\phi_{E}(t), \phi_{I}(t))\) obeys isotropic Brownian motion in 2D. This is done with the matrix
and requires the change of variables term given by
where \(\zeta =\sqrt {1+1/\nu _{n}^{2}}\).
Using the definition \( l( s,\mathbf {x}|t,\mathbf {a}) =\left ( \widehat {\mathcal {A}}_{a\left ( t\right ) }|_{s}-\mathbf {x}\right ) \cdot \mathbf {u}\; \zeta \; , \) we can calculate the density of crossing at time t and position a on \(\mathcal {A}\) via
In the above expression, the term f(t, a) refers to the unconditioned density of the process \((\phi_{E}, \phi_{I})\) at time t and position a (presumed to lie on \(\mathcal {A}\)). The first term in the sum is straightforward; the second term is a little trickier. Note that the conditional first-passage density at time r to point b, given that the process also crosses the point a at time t, can be written as \(q(r, b)f(r, b\,|\,t, a) = q(r, b)f(r, b, t, a)/f(t, a)\), where q(r, b) is defined via p(r, b) = q(r, b)f(r, b). Following Durbin’s method, we obtain the analytical formula (32) by taking the first two terms of Eq. (59) in the main text.
1.4 Appendix D: Probability density of stochastic process
In this appendix, we derive the distributions f(t, a) in Eq. (33) and f(r, b | t, a) in Eq. (34) in the main text. Recall that \(\bar {f}_{E}(t)\) and \(\bar {f}_{I}(t)\) are the smooth CDFs for the transformed voltage variables, defined in Eq. (18). These define the stochastic processes ϕ E (t) and ϕ I (t) in Eq. (20).
The distribution f(t, a) refers to the probability of observing the unconstrained process \((\phi_{E}, \phi_{I})\) at a(t) at time t. In other words, f(t, a) is the probability that
at time t, without assuming anything about how ϕ crosses \(\mathcal {A}.\) This probability is given by:
where each component of ϕ evolves independently. Now of course the probability that \(\phi_{Q}(t) = a_{Q}(t)\) at the time t is the same as the probability that \(\bar {\phi }_{Q}\left ( \tau \right ) =a_{Q}( t) \) at time \(\tau = f_{Q}(t)\). This in turn is the same as \(\sqrt {N_{Q}}\) times the probability \(P\left ( B_{\tau }=\sqrt {N_{Q}}a_{Q}\right ) \), the probability that a standard Brownian bridge (connecting (0, 0) to (1, 0)) reaches the value \(\sqrt {N_{Q}}a_{Q}( t) \) at time \(\tau = f_{Q}(t)\). According to Durbin and Williams (1992), this probability is given by
therefore f(t, a) is precisely Eq. (33) in the main text.
In a similar vein, the distribution f(r, b | t, a) is the conditional distribution of the process ϕ at (r, b), given that the point (t, a) is reached. Using logic similar to the above, this is given by
Note that \(B_{\tau}\) conditioned to hit \(\sqrt {N_{Q}}a_{Q}\) at time \(f_{Q}(t)\) is just a Brownian bridge connecting (0, 0) to \(\left( f_{Q}(t), \sqrt {N_{Q}}a_{Q}\right) \). One can think of this new Brownian bridge as a rescaled version of a standard Brownian bridge:
Then
and formula (34) for f(r, b | t, a) in the main text follows. In the computation of the MFE magnitude, we must simply account for not selecting the k neurons that initiate the MFE, so we replace \(N_E\) by \(N_E - k\).
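The conditional bridge law underlying f(r, b | t, a) can be checked by simulation; here is a minimal sketch for the standard Brownian bridge (the grid, evaluation points, and path count are our illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(2)
n, paths = 200, 20000

# Standard Brownian bridge on [0, 1] via B(s) = W(s) - s*W(1);
# the grid marginals of this construction are exact.
dW = rng.normal(0.0, np.sqrt(1.0 / n), (paths, n))
W = np.cumsum(dW, axis=1)
s = np.arange(1, n + 1) / n
B = W - s[None, :] * W[:, [-1]]

# Conditional law of B(r) given B(t), for r < t:
# mean (r/t) * B(t), variance r(1-r) - r^2 (1-t)/t
r_i, t_i = 60, 140                    # r = 0.3, t = 0.7
r, t = s[r_i - 1], s[t_i - 1]
resid = B[:, r_i - 1] - (r / t) * B[:, t_i - 1]
cond_var = r * (1 - r) - r**2 * (1 - t) / t
```

The residual about the conditional mean should be centered at zero with variance matching the closed-form expression.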
Zhang, J., Newhall, K., Zhou, D. et al. Distribution of correlated spiking events in a population-based approach for Integrate-and-Fire networks. J Comput Neurosci 36, 279–295 (2014). https://doi.org/10.1007/s10827-013-0472-6