Abstract
Negative correlations in the sequential evolution of interspike intervals (ISIs) are a signature of memory in neuronal spike-trains. They provide coding benefits including firing-rate stabilization, improved detectability of weak sensory signals, and enhanced transmission of information by improving signal-to-noise ratio. Primary electrosensory afferent spike-trains in weakly electric fish fall into two categories based on the pattern of ISI correlations: non-bursting units have negative correlations which remain negative but decay to zero with increasing lags (Type I ISI correlations), and bursting units have oscillatory (alternating-sign) correlations which damp to zero with increasing lags (Type II ISI correlations). Here, we predict and match observed ISI correlations in these afferents using a stochastic dynamic threshold model. We determine the ISI correlation function in terms of an arbitrary discrete noise correlation function \({{\,\mathrm{\mathbf {R}}\,}}_k\), where k is a multiple of the mean ISI. The expression permits both forward calculation of ISI correlations from \({{\,\mathrm{\mathbf {R}}\,}}_k\) and inverse calculation of \({{\,\mathrm{\mathbf {R}}\,}}_k\) from observed correlations. Both types of correlation functions can be generated by adding colored noise to the spike threshold, with Type I correlations generated by slow noise and Type II correlations by fast noise. A first-order autoregressive (AR) process with a single parameter is sufficient to predict and accurately match both types of afferent ISI correlation functions, with the type being determined by the sign of the AR parameter. The predicted and experimentally observed correlations are in geometric progression. The theory predicts that the limiting sum of ISI correlations is \(-0.5\), yielding a perfect DC-block in the power spectrum of the spike train. Observed ISI correlations from afferents have a limiting sum that is slightly larger at \(-0.475 \pm 0.04\) (\(\text {mean} \pm \text {s.d.}\)).
We conclude that the underlying process for generating ISIs may be a simple combination of low-order AR and moving average processes and discuss the results from the perspective of optimal coding.
1 Introduction
The spiking activity of many neurons exhibits memory, which stabilizes the neurons’ firing rate and makes it less variable than a renewal process. In spontaneously active neurons, a signature of these memory effects can be found in the serial correlations of interspike intervals (ISIs), which display a prominent negative correlation between adjacent ISIs. This is a result of long intervals following short intervals so that fluctuations from the mean ISI are damped over long time-scales, thereby stabilizing the firing rate (Ratnam and Nelson 2000; Goense and Ratnam 2003). Negative correlation between adjacent ISIs, which is the first serial correlation coefficient (\(\rho _1\)), can assume a range of values (Farkhooi et al. 2009) from near-zero [close to a renewal spike train, e.g., Lowen and Teich (1992); Fisch et al. (2012)] to values close to \(-0.9\) (Ratnam and Nelson 2000). While more negative values may suggest a stronger memory effect, the relationship between the extent of memory in the spike train and their ISI correlations is by no means clear, in part due to the difficulty in determining joint interval distributions of arbitrarily high orders (van der Heyden et al. 1998; Ratnam and Nelson 2000).
Negative correlations in spontaneous spike trains, or in spike trains obtained under quiescent conditions, have been known for many years. Early reports came from lateral line units of Japanese eel (Katsuki et al. 1950), the retina of the cat (Kuffler et al. 1957), and subsequently from several neuronal systems (Farkhooi et al. 2009; Neiman and Russell 2001; Johnson et al. 1986; Lowen and Teich 1992; Hagiwara and Morita 1963; Amassian et al. 1964; Bullock and Chichibu 1965; Calvin and Stevens 1968; Nawrot et al. 2007; Fisch et al. 2012) (see Avila-Akerberg and Chacron 2011; Farkhooi et al. 2009, for tabulations of negative correlations). In the active sense of some wave-type weakly electric fish (Hagiwara and Morita 1963; Bullock and Chichibu 1965), primary electrosensory afferents exhibit the strongest known correlations in their adjacent ISIs (Ratnam and Nelson 2000). These electrosensory neurons are an excellent model system for studying memory effects and regularity in firing due to their high spontaneous spike-rates (Chacron et al. 2001; Ratnam and Nelson 2000; van der Heyden et al. 1998). It is likely that spike encoders which demonstrate negative ISI correlations have adaptive value because they can facilitate optimal detection of weak sensory signals (Goense and Ratnam 2003; Goense et al. 2003; Ratnam and Nelson 2000; Ratnam and Goense 2004; Chacron et al. 2001; Hagiwara and Morita 1963; Nesse et al. 2010; Farkhooi et al. 2011) and enhance information transmission (Chacron et al. 2005, 2004b, a).
Negative ISI correlations are a characteristic feature of spike frequency adaptation where a constant DC-valued input to a neuron is gradually forgotten, possibly due to an intrinsic high-pass filtering mechanism in the encoder (Liu and Wang 2001; Benda and Herz 2003; Benda et al. 2010). Two commonly used models of spike frequency adaptation are based on: (1) a dynamic threshold, and (2) an adaptation current, both of which cause increased refractoriness following an action potential. In the case of a dynamic threshold, the increased refractoriness is due to an elevation in firing threshold, historically referred to as “accommodation” (Hill 1936). This is usually modeled as an abrupt increase in spike threshold immediately after a spike is output, followed by a relaxation of the threshold to its normative value without reset (Buller 1965; Hagiwara 1954; Goldberg et al. 1964; Geisler and Goldberg 1966; Ten Hoopen 1964; Brandman and Nelson 2002; Chacron et al. 2000, 2001; Schwalger et al. 2010; Schwalger and Lindner 2013). Thus, if two spikes are output in quick succession with an interval smaller than the mean ISI, the upward threshold shift will be cumulative, making the membrane more refractory, and so a third spike in the sequence will occur with an ISI that is likely to be longer than the mean. In this way, a dynamic time-varying threshold serves as a moving barrier, carrying with it a memory of prior spiking activity. In the case of an adaptation current, outward potassium currents, including voltage gated potassium currents and calcium-dependent or AHP currents, can give rise to increased refractoriness and lead to spike frequency adaptation (e.g., Benda and Herz 2003; Prescott and Sejnowski 2008; Liu and Wang 2001; Benda et al. 2010; Jolivet et al. 2004; Schwalger et al. 2010; Fisch et al. 2012). 
Adaptation currents are usually modeled by explicitly introducing a hyperpolarizing outward current with a time-varying conductance in the equation for the membrane potential. The conductance (or a gating variable) will be elevated following a spike and, as with a dynamic threshold, it will decay back to its normative value. Both models have been successful in reproducing nearly identical spike frequency adaptation behavior for a step input current, and normalized first ISI correlation coefficient (\(\rho _1\)), although they differ in other details (Benda et al. 2010).
In this report, we focus on a dynamic threshold model. A simple dynamic threshold model can accurately predict spike-times in cortical neurons (Kobayashi et al. 2009; Gerstner and Naud 2009; Jones et al. 2015) and peripheral sensory neurons (Jones et al. 2015). Further, we had recently proposed that spike trains encoded with a dynamic threshold are optimal, in the sense that they provide an optimal estimate of the input by minimizing coding error (estimation error) for a fixed long-term spike-rate (energy consumption) (Jones et al. 2015; Johnson et al. 2015, 2016). These results did not incorporate noise in the model, and so here, we extend our earlier model by incorporating noise to model spike timing variability and serial ISI correlations.
In previous work, colored noise or Gaussian noise (or both) has been added to a dynamic threshold, or to an adaptation current, or to the input signal so that negative correlations can be observed (Brandman and Nelson 2002; Chacron et al. 2001, 2003; Prescott and Sejnowski 2008). In these models the first ISI correlation coefficient \(\rho _1\) (between adjacent ISIs) is close to or equal to \(-0.5\), and all remaining correlation coefficients \(\rho _i\), \(i \ge 2\), are identically zero. In another report (Benda et al. 2010), \(\rho _1\) is parameterized, and can assume values between 0 and \(-0.5\). Experimental spike trains demonstrate broader trends, where \(\rho _1\) can assume values smaller or greater than \(-0.5\), and the remaining coefficients can be nonzero for several lags, sometimes with damped oscillations and sometimes monotonically increasing to zero (Ratnam and Nelson 2000). In several types of integrate-and-fire models with adaptation currents (Schwalger et al. 2010; Schwalger and Lindner 2013), colored and Gaussian noise fluctuations of different time-scales (slow and fast, respectively) determine the various patterns of ISI correlations, including positive correlations. All of these patterns had a geometric structure (i.e., \(\rho _k/\rho _{k-1} = \text {constant}\)). Urdapilleta (2011) also obtained a geometric structure with a monotonically decaying correlation pattern with \(\rho _k < 0\). These latter studies show that the role of noise fluctuations, in particular the time-scale of fluctuations, is important in determining patterns of ISI correlations. So far, adaptation models with an adaptation current have successfully predicted patterns of ISI correlations, but models with dynamic thresholds have been investigated much less, and it is not known whether they can accurately predict observed ISI patterns.
Models that can accurately predict experimentally observed correlation coefficients for all lags have the potential to isolate mechanisms responsible for ISI fluctuations and negative correlations and provide insights into neural coding. This is the goal of the current work.
Serial interspike interval correlations observed in primary P-type afferents of weakly electric fish Apteronotus leptorhynchus are modeled using a dynamic threshold with noise. The model is analytically tractable and permits an explicit closed-form expression for ISI correlations in terms of an arbitrary correlation function \({{\,\mathrm{\mathbf {R}}\,}}_k\), where k is a multiple of the mean ISI. This allows us to solve the inverse problem where we can determine \({{\,\mathrm{\mathbf {R}}\,}}_k\) given a sequence of observed correlation coefficients. Theoretically, the limiting sum of ISI correlation coefficients is \(-0.5\) (a perfect DC-block), and experimental correlation coefficients are close to this sum. This model is parsimonious, and in addition to predicting spike-times as shown earlier, it reproduces observed ISI correlations. Finally, the model provides a fast method for generating surrogate spike trains that match a mean firing rate with prescribed ISI distribution, joint-ISI distribution, and ISI correlations.
2 Methods
2.1 Experimental procedures
Spike data from P-type primary electrosensory afferents were collected in a previous in vivo study in Apteronotus leptorhynchus, a wave-type weakly electric fish (Ratnam and Nelson 2000). All animal procedures including animal handling, anesthesia, surgery, and euthanasia received institutional ethics (IACUC) approval from the University of Illinois at Urbana-Champaign and were carried out as previously reported (Ratnam and Nelson 2000). No new animal procedures were performed during the course of this work. Of relevance here are some details on the electrophysiological recordings. Briefly, action potentials (spikes) were isolated quasi-intracellularly from P-type afferent nerve fibers in quiescent conditions under ongoing electric organ discharge (EOD) activity (this is the so-called baseline spiking activity of P-type units). An artificial threshold was applied to determine spike onset times, and reported at the ADC sampling rate of \(16.67~\text {kHz}\) (sampling period of \(60~\mu \text {s}\)). The fish’s ongoing quasi-sinusoidal EOD was captured whenever possible to determine the EOD frequency from the power spectrum, or the EOD frequency was estimated from the power spectrum of the baseline spike train. Both methods reported almost identical EOD frequencies. EOD frequencies typically ranged from 750 to 1000 Hz (see Ratnam and Nelson 2000, for more details).
2.2 Data analysis
P-type units fire at most once every EOD cycle, and this forms a convenient time-base to resample the spike train (Ratnam and Nelson 2000). Spike times resampled at the EOD rate are reported as increasing integers. Resampling removes the phase jitter in spike-timing but retains long-term correlations due to memory effects in the spike train. Spike-times were converted to a sequence of interspike intervals, \(X_1,\,X_2,\,\ldots ,\,X_k,\ldots , X_N\).
2.2.1 Autocorrelation function
The normalized autocorrelation function for the sequence of ISIs is the sequence of normalized serial correlation coefficients or SCCs, \(\rho _1,\,\rho _2,\,\rho _3,\,\ldots \), with \(\rho _0 = 1\) by definition. These were estimated from time-stamps at the original ADC rate of \(16.67~\text {kHz}\) and at the resampled EOD frequency (Ratnam and Nelson 2000). In some afferents, there is a small difference between the two estimates, particularly in the estimates of \(\rho _1\), but this has negligible effect on the results. The normalized ISI correlation function (i.e., the correlation coefficients) was estimated from the resampled spike trains as follows:
where \(T_1\) is the mean ISI, and M is the number of spikes in a block, typically ranging from 1000 to 3000 spikes. Correlation coefficients were estimated in non-overlapping blocks and then averaged over \(\lfloor N/M \rfloor \) blocks. There was no drift in the mean ISI within a block. We usually had about \(2.5 \times 10^5\) spikes per afferent.
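Since Eq. (1) renders elsewhere, the estimator can be sketched as follows for a single block (the block averaging described above is omitted); the function name and the toy ISI sequence are illustrative, not from the afferent data:

```python
import numpy as np

def serial_correlation_coefficients(isi, max_lag=15):
    """Normalized SCCs rho_0..rho_max_lag for one block of ISIs
    (single-block sketch of the Eq. (1) estimator)."""
    x = np.asarray(isi, dtype=float) - np.mean(isi)  # deviations from T_1
    denom = np.dot(x, x)                             # N * sample variance
    rho = [1.0]                                      # rho_0 = 1 by definition
    for k in range(1, max_lag + 1):
        rho.append(np.dot(x[:-k], x[k:]) / denom)
    return np.array(rho)
```

For a strictly alternating short-long ISI sequence, this returns \(\rho _1\) near \(-1\) and \(\rho _2\) near \(+1\), the expected signature of anticorrelated adjacent intervals.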
Throughout this work, we will use the term covariance to refer to the mean-subtracted cross-correlation. If the two random variables in question are identical, then we will simply refer to covariance as variance, and to correlation as autocorrelation. If the correlation is normalized by the variance, then we will refer to it as the correlation coefficient. The abbreviation SCC stands for ISI serial correlation coefficient; the SCCs are the same as the normalized ISI autocorrelation function (ACF).
2.2.2 Partial autocorrelation function
In addition to the normalized autocorrelation (SCCs), we compute the normalized partial autocorrelation \(\phi _{k,\,k}\) between \(X_i\) and \(X_{i+k}\) by removing the linear influence of the intervening variables \(X_{i+1},\,\ldots ,\,X_{i+k-1}\) (Box and Jenkins 1970). The notation \(\phi _{k,\,j}\) means that the process is purely autoregressive (AR) of order k, and \(\phi _{k,\,j}\) is the jth coefficient in the AR model. Partial autocorrelations provide a convenient way to identify an AR process, just as the autocorrelation function, Eq. (1), provides a way to identify a moving average (MA) process. When the ISIs are AR of order-p, then the partial correlation function (PACF) is finite with \(\phi _{k,\,k} = 0\), for \(k > p\), however, the autocorrelation function will be infinite. Conversely, when the process is MA of order-m, the autocorrelation function is finite with \(\rho _k = 0\), for \(k > m\), however, the partial autocorrelation function will be infinite. When the partial autocorrelation and autocorrelation functions are both infinite, the underlying process is neither purely AR nor purely MA, but is an autoregressive moving average (ARMA) process of some unknown order \((p,\,m)\). The partial autocorrelations \(\phi _{1,\,1},\,\phi _{2,\,2},\,\phi _{3,\,3},\,\ldots \), can be obtained for \(k = 1,\,2,\,3,\,\ldots \), by solving the Yule-Walker equations. A more efficient method is to solve Durbin’s recursive equations. Durbin’s formula is (Box and Jenkins 1970),
where the \(\rho _j\) are SCCs obtained from Eq. (1). The \(\phi _{k,\,j}\) in the formula are estimates with some mean and standard deviation over the population. To reduce the estimation error, we can follow the same procedure as for serial correlations by averaging over blocks.
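Because the formula itself renders elsewhere, the recursion can be sketched in the standard Durbin-Levinson form of Box and Jenkins (1970); `pacf_durbin` and its argument convention (index `j` of `rho` holds \(\rho _{j+1}\)) are our own:

```python
import numpy as np

def pacf_durbin(rho, max_lag):
    """Partial autocorrelations phi_{k,k} from SCCs via the
    Durbin-Levinson recursion (Box and Jenkins 1970).
    rho[j] holds rho_{j+1}, i.e., rho[0] is the lag-1 SCC."""
    phi = np.zeros((max_lag + 1, max_lag + 1))
    phi[1, 1] = rho[0]
    for k in range(2, max_lag + 1):
        num = rho[k - 1] - sum(phi[k - 1, j] * rho[k - 1 - j] for j in range(1, k))
        den = 1.0 - sum(phi[k - 1, j] * rho[j - 1] for j in range(1, k))
        phi[k, k] = num / den
        for j in range(1, k):  # update the lower-order coefficients
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
    return np.array([phi[k, k] for k in range(1, max_lag + 1)])
```

As a check on the identification logic described above: an AR(1) process with parameter a has SCCs \(\rho _k = a^k\), and the recursion returns a PACF that cuts off after lag 1.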
3 Results
3.1 Experimental results
ISI serial correlation coefficients (SCCs) were estimated from the baseline activity of 52 P-type primary electrosensory afferents in the weakly electric fish Apteronotus leptorhynchus (see Materials and Methods). SCCs from two example afferents (Fig. 1) demonstrate the patterns of negative SCCs observed in spike trains, and motivate this work. Statistical properties of these and other spike trains were reported earlier (Ratnam and Nelson 2000), with a qualitative description of SCCs and some descriptive statistics. A complete analytical treatment is undertaken here. Two types of serial interspike interval statistics can be identified according to the value taken by \(\rho _1\) (the first SCC, between adjacent ISIs).
1. Type I: \(-0.5< \rho _1 < 0\). Subsequent \(\rho _k\) are negative and diminish to 0 with increasing k (Fig. 1A). The ISIs of these afferents are unimodal (shown later) and their spike trains do not exhibit bursting activity.
2. Type II: \(\rho _1 < -0.5\). Subsequent \(\rho _k\) alternate between positive (\(\rho _{2k}\)) and negative (\(\rho _{2k+1}\)) values, are progressively damped, and diminish to zero (Fig. 1C). The ISIs of these spike trains are bimodal with a prominent peak at an ISI equal to about one period of the electric organ discharge (EOD) (shown later). These spike trains exhibit strong bursting.
Afferent fibers sampled from individual fish were a mix of Types I and II. Additionally we identify a third type that has not been observed in experimental data (at least by these authors) but is fairly common in some models with adaptation (e.g., Brandman and Nelson 2002; Chacron et al. 2001, see also Discussion). We call this Type III, and for this pattern of SCCs \(\rho _1 = -0.5\) and subsequent \(\rho _k\) are identically zero (Fig. 1). The Type III pattern is a singleton (i.e., there is only one SCC pattern in this class).
The baseline spike-train statistics of 52 P-type afferents reported earlier (Ratnam and Nelson 2000) were analyzed in detail in this work. All observed units showed a negative correlation between succeeding ISIs (\(\rho _1 < 0\)) (Fig. 2A, panels A1 and A3). Experimental spike-trains demonstrated an average \(\bar{\rho }_1 = -0.52 \pm 0.14\) (\(\text {mean} \pm \text {s.d.}\)) (\(N = 52\)). Nearly half of these fibers had \(\rho _1 > -0.5\) (\(N = 24\)) while the remaining fibers had \(\rho _1 < -0.5\) (\(N = 28\)). The second SCC (\(\rho _2\)), between every other ISI (Fig. 2A, panels A2 and A3) assumed positive (\(N = 36\)) or negative (\(N = 16\)) values with an average of \(\bar{\rho }_2 = 0.10 \pm 0.18\). The correlation coefficient at the second lag (\(\rho _2\)) is linearly dependent on \(\rho _1\) with a positive \(\rho _2\) more likely if \(\rho _1 < - 0.5\) (Fig. 2A, panel A3). The correlation between \(\rho _1\) and \(\rho _2\) is \(-0.94\). The linear relationship is described by the equation \(\rho _2 = -1.18 \rho _1 - 0.51\), with a standard error (SE) of 0.06 (slope) and 0.03 (intercept), and significance \(p = 1.11 \times 10^{-25}\) (slope) and \(p = 1.42 \times 10^{-21}\) (intercept). This is close to the line \(\rho _2 = -\rho _1 - 0.5\), the significance of which is discussed further below. For all afferents across the population, Fig. 2B depicts \(\rho _1\) as a function of the average firing rate of the neuron, and Fig. 2C depicts \(\rho _1\) as a function of the coefficient of variation (CV) of the ISIs. Finally, the sum of the SCCs for each fiber taken over the first fifteen lags (excluding zeroth lag) has a population mean \(\sum _k\,\rho _k = -0.475 \pm 0.042\) (\(N = 52\)) which is just short of \(-0.5\) (Fig. 2D). Fifty out of 52 afferents had sums larger than \(-0.5\), whereas the remaining two afferents had a sum smaller than \(-0.5\) (\(-0.504\) and \(-0.502\), respectively). 
However, for these two fibers, the difference from \(-0.5\) is small. We are not able to state with confidence that the trailing digits of these two estimates are significant.
3.2 Deterministic dynamic threshold model
In the simplest form of the dynamic threshold model (Jones et al. 2015), spike initiation is governed by two time-varying functions: the sub-threshold membrane potential v(t) and a time-varying or dynamic threshold \(r\left( t\right) \) (Fig. 3A). In the sub-threshold regime, \(v(t) < r\left( t\right) \). Most models (e.g., Kobayashi et al. 2009) assume that a spike is emitted when v(t) exceeds \(r\left( t\right) \) from below. To induce memory, the dynamic threshold is never reset (Fig. 3A) but suffers a jump in value whenever a spike is generated and then gradually relaxes to its quiescent value. This “jump and decay” is a fixed function which is called the dynamic threshold, and it usually takes the form \(h\left( t\right) = A\exp \left( -t/\tau \right) \) where A is the instantaneous jump in threshold following a spike at \(t = 0\), and \(\tau \) is the time-constant of relaxation. Between two spikes \(t_k < t \le t_{k+1}\) the dynamic threshold is \(r(t_k^-) + h(t - t_k)\) where \(r(t_k^-)\) is the value assumed immediately before the spike at \(t = t_k\). It captures the sum over the history of spiking activity.
Most neuron models in the literature, including models incorporating a dynamic threshold, typically integrate the input using a low-pass filter (i.e., pass it through a leaky integrator) or a perfect integrator and then reset the membrane voltage to a constant \(v_r\) following a spike (e.g., Chacron et al. 2001; Schwalger and Lindner 2013). Some leaky integrators have been assumed to be non-resetting (e.g., Kobayashi et al. 2009). In the form of the model considered in this work and earlier (Jones et al. 2015; Johnson et al. 2015, 2016), the voltage v(t) is not integrated (i.e., there is no filtering), and it is not reset following a spike. These assumptions remove the effects of filtering in the soma and dendrites and remove all the nonlinearities except for the spike-generator, which we assume generates a sequence of Dirac-delta functions.
For modeling spontaneous activity, we consider a steady DC-level bias voltage \(v > 0\). In its most general form, a spike is fired whenever \(v - r\left( t\right) = \gamma \), where \(\gamma \) is a constant spike-threshold (Fig. 3A). Historically, and in much of the literature, \(\gamma = 0\) (e.g., Kobayashi et al. 2009; Chacron et al. 2001; Schwalger and Lindner 2013). We use the more general form with a constant, nonzero \(\gamma \). The specific value assumed by \(\gamma \) plays a role in optimal estimation of the stimulus (Jones et al. 2015; Johnson et al. 2016) but does not influence serial correlation coefficients. This is addressed in the discussion. The major advantage of these simplifications is that they allow us to focus on the role of the dynamic threshold element \(h\left( t\right) \) in generating SCCs.
To make explicit the presence of memory, we note that the condition for firing the kth spike at \(t_k\) is met when
It is evident that memory builds up because of a summation over history of spiking activity. The spike encoder with dynamic threshold implicitly incorporates a feedback loop (Fig. 3B) and so a different view of the above model is to think of the dynamic threshold \(r\left( t\right) \) as an ongoing estimate of the membrane voltage \(v\left( t\right) \). Here, the dynamic threshold \(h\left( t\right) = A\exp \left( -t/\tau \right) \) acts as a linear low-pass filter. It filters the spike train to form an ongoing estimate \(r\left( t\right) \) of the voltage \(v\left( t\right) \). The instantaneous error in the estimate (the coding error) is then \(e\left( t\right) = v\left( t\right) - r\left( t\right) \) (Fig. 3B). When the error exceeds \(\gamma \), a spike is output and the threshold increases by A to reduce the error. The time-varying dynamic threshold tracks \(v\left( t\right) \) much like a home thermostat tracks a temperature set-point (Fig. 3A). Viewed in this way, it is the estimation error, and not the signal \(v\left( t\right) \), which directly drives the spike generator and determines when the next spike should be generated (Fig. 3B).
From Fig. 3A, we can approximate the exponential with a piece-wise linear equation when the ISI is small. If \(t_{i-1}\) and \(t_{i}\), \(i \ge 1\), are successive spike-times (Fig. 3B), then the time evolution of the dynamic threshold \(r\left( t\right) \) is given by
Note that v, \(\gamma \), A, \(\tau \) are constant, and so we can define \(m = \left( v-\gamma +A\right) /\tau \) so that the slope of the decaying dynamic threshold is \(-m\). The ISI can be obtained directly as (see Appendix for details)
which is the deterministic firing rule for a spike generator with a constant, DC-level input voltage.
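The firing rule can be illustrated with a minimal time-stepped sketch of the piecewise-linear model described above; the parameter values and the function name are illustrative, not fitted to afferent data:

```python
# Piecewise-linear deterministic model: between spikes the dynamic
# threshold r(t) decays with slope -m; a spike fires when v - r(t) = gamma,
# and r(t) then jumps by A without reset.  Values are illustrative only.
v, gamma, A, tau = 1.0, 0.2, 0.5, 10.0
m = (v - gamma + A) / tau          # slope of the decaying threshold

def simulate_isis(n_spikes, dt=1e-3):
    r = v - gamma + A              # threshold just after a spike at t = 0
    t, t_last, isis = 0.0, 0.0, []
    while len(isis) < n_spikes:
        t += dt
        r -= m * dt                # linear decay between spikes
        if v - r >= gamma:         # firing condition
            isis.append(t - t_last)
            t_last = t
            r += A                 # jump by A at the spike
    return isis
```

Every interval returned equals \(A/m\) to within the integration step, consistent with the deterministic firing rule for a constant DC input.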
3.3 Stochastic extension of the dynamic threshold model
In the schematic shown in Fig. 3B, noise injected in the body of the feedback loop will reverberate around the loop and generate memory. In the literature, a stochastic extension of this model is usually \(v -r\left( t\right) + w\left( t\right) = 0\), where w is independent Gaussian noise. That is, noise is continuous. Here, we consider a discrete, noisy threshold \(\gamma \). Subsequently, we will provide some results for a noisy time-constant \(\tau \) in the dynamic threshold element.
Figure 4 depicts the stochastic dynamic threshold model where the spike threshold (\(\gamma \)) is a stochastic process. All other aspects of the model are unchanged from the deterministic model (Fig. 3). Let \(\gamma \) be a discrete wide-sense stationary process with mean \({{\,\mathrm{\mathbf {E}}\,}}[\gamma ]\), discrete autocorrelation function \({{\,\mathrm{\mathbf {R}}\,}}_k\) and power \({{\,\mathrm{\mathbf {E}}\,}}\left[ \gamma ^2\right] = {{\,\mathrm{\mathbf {R}}\,}}_0\). The spike threshold with additive noise assumes the random value \(\gamma _i\), \(i \ge 1\) immediately after the \((i-1)\)th spike and remains constant in the time interval \(t_{i-1} < t \le t_{i}\) (Fig 4A) (Chacron et al. 2004a; Gestri et al. 1980). Thus, the ith spike is emitted when the error satisfies the condition
Subsequently the instantaneous value of the dynamic threshold jumps to a higher value specified by \(v - \gamma _i + A\), and the noisy spike threshold assumes a new value \(\gamma _{i+1}\). From Fig. 4A proceeding as before
The mean ISI is therefore \({{\,\mathrm{\mathbf {E}}\,}}\left[ t_{i}-t_{i-1}\right] = A/m\) as in the deterministic case given by Eq. (7). From the assumption of wide-sense stationarity, the autocorrelation function \({{\,\mathrm{\mathbf {E}}\,}}\left[ \gamma _i\, \gamma _j\right] \) can be written as \({{\,\mathrm{\mathbf {R}}\,}}\left( t_i - t_j\right) \). This is a discrete autocorrelation function whose discrete nature is made clear in the following way (see also Appendix). Denote the mean of the kth-order interval by \(T_k = {{\,\mathrm{\mathbf {E}}\,}}[t_{i+k} - t_i]\), then the mean ISI is \(T_1\) \((=A/m)\), and further \(T_k = kT_1\). A realization of the random variable \(\gamma \) is generated once every ISI and thus, \({{\,\mathrm{\mathbf {R}}\,}}\) takes discrete values at successive multiples of the mean ISI, i.e., \({{\,\mathrm{\mathbf {R}}\,}}\left( T_1\right) \), \({{\,\mathrm{\mathbf {R}}\,}}\left( T_2\right) \), etc., and will be denoted by \({{\,\mathrm{\mathbf {R}}\,}}_1\), \({{\,\mathrm{\mathbf {R}}\,}}_2\), etc., respectively. As noted before, \({{\,\mathrm{\mathbf {R}}\,}}_0\) is noise power. That is, we can write
where k is the number of intervening ISIs. The serial correlation coefficients at lag k are defined as (Cox and Lewis 1966)
For a wide-sense stationary process, the covariances are constant, so the subscript i can be dropped. From the relations and definitions in Eqs. (9), (10), and (11), and after some routine algebra (see Appendix), we obtain
Further below we determine the \({{\,\mathrm{\mathbf {R}}\,}}_k\) from experimental data. The serial-correlation coefficients given by Eqs. (12) and (13) are independent of the slope m of the decay rate of the dynamic threshold, and its gain A. Thus, for a constant input the observed correlation structure of the spike-train is determined solely by the noise added to the deterministic firing threshold \(\gamma \).
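Since Eqs. (12) and (13) render elsewhere, the following sketch uses one reconstruction consistent with the firing geometry, \(X_i = \left( A + \gamma _i - \gamma _{i-1}\right) /m\), which gives \(\rho _k = \left( 2{{\,\mathrm{\mathbf {R}}\,}}_k - {{\,\mathrm{\mathbf {R}}\,}}_{k-1} - {{\,\mathrm{\mathbf {R}}\,}}_{k+1}\right) / \left( 2\left( {{\,\mathrm{\mathbf {R}}\,}}_0 - {{\,\mathrm{\mathbf {R}}\,}}_1\right) \right) \). This form reproduces the independence from m and A noted above and the geometric Type I pattern and limiting sum derived in the following sections, but the published equations should be taken as authoritative:

```python
def scc_from_noise_acf(R, max_lag):
    """SCCs rho_1..rho_max_lag from the discrete threshold-noise
    autocorrelation sequence R_0, R_1, ...  Uses the reconstructed form
    rho_k = (2 R_k - R_{k-1} - R_{k+1}) / (2 (R_0 - R_1)); the exact
    published Eq. (13) is not reproduced here."""
    return [(2 * R[k] - R[k - 1] - R[k + 1]) / (2.0 * (R[0] - R[1]))
            for k in range(1, max_lag + 1)]
```

For a geometrically decaying noise autocorrelation \({{\,\mathrm{\mathbf {R}}\,}}_k \propto a^k\) this yields \(\rho _1 = -(1-a)/2\) and the geometric ratio \(\rho _{k+1}/\rho _k = a\), as stated later in the text for the AR(1) ansatz.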
3.3.1 Limiting sum of ISI correlation coefficients and power spectra
We make the assumption that the process \(\gamma \) is aperiodic and the autocorrelation function \({{\,\mathrm{\mathbf {R}}\,}}_N \rightarrow 0\) when \(N \rightarrow \infty \). That is, the noise process decorrelates over long time-scales. From this assumption and Eq. (13), it follows that (see Appendix)
This is the limiting sum of ISI serial correlation coefficients. Let the mean and variance of ISIs be denoted by \(T_1\) and \(V_1\), respectively, and the coefficient of variation of ISIs as \(C = \sqrt{V_1}/T_1\). If \(P\left( \omega \right) \) is the power spectrum of the spike train, then the DC-component of the power is given by (Cox and Lewis 1966)
Introducing Eq. (14) into Eq. (15), we obtain
yielding a perfect DC block.
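Assuming the standard Cox and Lewis (1966) low-frequency form, in which \(P(0)\) is proportional to \(\left( 1 + 2\sum _k \rho _k\right) \) (Eqs. 14-16 render elsewhere), the DC block can be checked numerically for a geometric SCC sequence with limiting sum \(-0.5\); `dc_factor` is our own name for the bracketed spectral factor:

```python
# Spectral factor (1 + 2 * sum_k rho_k) for the geometric SCC sequence
# rho_k = -(1 - a)/2 * a**(k - 1).  Its limiting sum is -0.5, so the
# factor (and hence the DC power) vanishes: a perfect DC block.
def dc_factor(a, n_terms=10_000):
    s = sum(-(1 - a) / 2 * a ** (k - 1) for k in range(1, n_terms + 1))
    return 1.0 + 2.0 * s
```

Truncating the sum at 15 lags, as done for the experimental estimates, leaves a small residual of order \(a^{15}\), which is one reason observed sums can fall slightly short of \(-0.5\).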
The limiting sum of SCCs from experimental spike trains, with the sum calculated up to 15 lags, is depicted in Fig. 2D. The population of afferents demonstrated a range of limiting sums with mean sum of \(-0.475 \pm 0.04\). As noted earlier, the limiting sum was less than \(-0.5\) in only two afferents, where the sums were estimated as \(-0.505\) and \(-0.502\).
3.3.2 Predicting Type I and Type II serial correlation coefficients
In the relationships for the SCCs given by Eqs. (12) and (13), the noise process that drives the spike generator has an unknown correlation function \({{\,\mathrm{\mathbf {R}}\,}}\) which must be determined so as to predict experimentally observed SCCs. Here, we are interested in identifying the simplest process that can satisfactorily capture observed SCCs.
Consider a Gauss–Markov, i.e., Ornstein–Uhlenbeck (OU) process which relaxes as \(\exp \left( -t/\tau _\gamma \right) \) with relaxation time \(\tau _\gamma \), where \(\gamma \) signifies the spike threshold. We are interested in a discrete-time realization of the process. We first convert the decaying exponential to a geometric series where the time-base is in integer multiples of the mean ISI (see Sect. 3.3 and Eq. 10). Thus, the mean ISI becomes the “sampling period” and the exponential can be sampled at multiples of \(T_1\), i.e., \(t = kT_1\), with \(k = 1,\,2,\,3,\ldots \). Following this procedure, the continuous-time relaxation process \(\exp \left( -kT_1/\tau _\gamma \right) \) transforms to a discrete-time (sampled) geometric series with \(a^{-k} = \exp \left( -kT_1/\tau _\gamma \right) \), where the parameter \(a = \exp \left( -T_1/\tau _\gamma \right) < 1\). In the discrete formalism, the continuous-time first-order Gauss–Markov process (OU process) with relaxation time \(\tau _\gamma \), transforms to a first-order autoregressive (AR) process with parameter a. We show this below, where we define two first-order AR processes which will generate Type I and Type II SCCs, respectively.
Type I serial correlation coefficients Type I afferent spiking activity demonstrates serial correlations where \(-0.5< \rho _1 < 0\), and subsequent \(\rho _k\) are negative and diminish to 0 with increasing k (Fig. 1A). These spike trains have a unimodal ISI and do not display bursting activity. They can be generated from the ansatz, a first-order AR process
where \(\gamma _k\) is the noise added to the threshold at the kth spike (Fig. 4), a is the AR parameter set by the relaxation time of the discrete noise process (see above), and \(w_k\) is white noise input to the AR process. The output \(\gamma \) is wide-sense stationary, and its discrete autocorrelation function \({{\,\mathrm{\mathbf {R}}\,}}_k\) is
Equations (13) and (18) yield the geometric series
From Eq. (19), and noting that \(0< a < 1\), we conclude that \(\rho _1 > -0.5\), \(\rho _k < 0\) for all k, \(\mid \rho _{k-1}\mid > \mid \rho _k\mid \), and \(\rho _k \rightarrow 0\) as \(k \rightarrow \infty \). This is the observed Type I pattern of SCCs. Further, summing the geometric series Eq. (19) yields \(\sum _{k \ge 1}\,\rho _k = -0.5\) as stated in Eq. (14). The AR parameter a can be estimated from experimentally determined SCCs. From Eq. (19) this is
In practice, a better estimate is often obtained from the ratio \(a = \rho _2/\rho _1\), where \(\rho _k\) are available from experimental data.
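A simulation sketch of the Type I prediction. It assumes only the structure of Eq. (9), i.e., that each ISI varies as \(\gamma _i - \gamma _{i-1}\) plus constants, which drop out of the SCCs; the AR parameter \(a = 0.6\) is an arbitrary illustrative value:

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.6        # illustrative Type I AR parameter, 0 < a < 1
n = 500_000

# First-order AR threshold noise: gamma_k = a*gamma_{k-1} + w_k (Eq. 17).
w = rng.standard_normal(n)
gamma = np.empty(n)
gamma[0] = w[0]
for i in range(1, n):
    gamma[i] = a * gamma[i - 1] + w[i]

# Per Eq. (9) the ISI varies as gamma_i - gamma_{i-1} plus constants,
# which do not affect the serial correlation coefficients.
isi = np.diff(gamma)

def scc(x, k):
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

# Geometric series of Eq. (19): rho_k = -(1 - a)/2 * a**(k-1),
# so rho_1 > -0.5 and all rho_k are negative, decaying to zero.
for k in range(1, 5):
    predicted = -(1 - a) / 2 * a ** (k - 1)
    assert abs(scc(isi, k) - predicted) < 0.01

# The AR parameter recovered from the ratio of the first two SCCs.
a_hat = scc(isi, 2) / scc(isi, 1)
assert abs(a_hat - a) < 0.05
```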
We model a Type I P-type primary electrosensory afferent depicted in Fig. 1A using a noisy threshold with correlation function \({{\,\mathrm{\mathbf {R}}\,}}_k\) specified by Eq. (18) and shown in Fig. 5A. The dynamic threshold parameters were determined from the experimental data, and tuned so that they matched the afferent SCCs, ISI, and joint distributions (Fig. 6). By design, the noise process is first-order autoregressive, and this is reflected in the partial autocorrelation function calculated from the noise samples (Fig. 5D). The top row (Fig. 6A–C) depicts the ISI distribution, joint ISI distribution, and the serial correlation coefficients, respectively. Type I spike trains do not display bursting activity and their ISI distribution is unimodal. The bottom row (Fig. 6D–F) shows data from a matched model using the noise correlation function given by Eq. (18). SCCs of adjacent ISIs \(\rho _1\) are \(-0.39\) (data) and \(-0.40\) (model). The sums of SCCs are \(-0.48\) (data) and \(-0.5\) (model). Thus, the observed pattern of Type I SCCs is reproduced.
Type II serial correlation coefficients Type II afferent spiking activity demonstrates serial correlations where \(\rho _1 < -0.5\) and successive \(\rho _k\) alternate between positive (\(\rho _{2k}\)) and negative (\(\rho _{2k+1}\)) values, are progressively damped, and diminish to zero (Fig. 1B). These spike trains have bimodal ISI distributions and display bursting activity. Proceeding as before, they can be generated from the ansatz, a first-order AR process
with discrete autocorrelation function
Equations (13) and (22) yield the geometric series
From Eq. (23), and noting that \(0< a < 1\), we conclude that \(\rho _1 < -0.5\), \(\rho _{2k} > 0\), \(\rho _{2k+1} < 0\), \(\mid \rho _{2k}\mid > \mid \rho _{2k+1}\mid \), and \(\rho _k \rightarrow 0\) as \(k \rightarrow \infty \). This is the observed Type II pattern of SCCs. Further, summing the geometric series (Eq. 23) yields \(\sum _{k \ge 1}\,\rho _k = -0.5\) as stated in Eq. (14). The AR parameter a can be estimated from experimentally determined SCCs. From Eq. (23) this is
As noted for Type I SCCs, a better estimate is often obtained from the ratio \(a = -\rho _2/\rho _1\). Finally, we note that the Type I formalism carries over directly to the Type II formalism, the only difference being that the coefficient of the first-order process (a) becomes negative. This simple substitution in Eqs. (18), (19), and (20) yields Eqs. (22), (23), and (24), respectively.
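The corresponding simulation sketch for Type II, under the same assumption that constants in Eq. (9) drop out of the SCCs; only the sign of the AR coefficient changes:

```python
import numpy as np

rng = np.random.default_rng(2)
a = 0.6        # illustrative magnitude of the (negative) AR parameter
n = 500_000

# AR(1) threshold noise with negative coefficient:
# gamma_k = -a*gamma_{k-1} + w_k (Eq. 21).
w = rng.standard_normal(n)
gamma = np.empty(n)
gamma[0] = w[0]
for i in range(1, n):
    gamma[i] = -a * gamma[i - 1] + w[i]

isi = np.diff(gamma)   # constants in Eq. (9) do not affect the SCCs

def scc(x, k):
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

# Alternating geometric series of Eq. (23):
# rho_k = -(1 + a)/2 * (-a)**(k-1), so rho_1 < -0.5 and signs alternate.
for k in range(1, 5):
    predicted = -(1 + a) / 2 * (-a) ** (k - 1)
    assert abs(scc(isi, k) - predicted) < 0.01

a_hat = -scc(isi, 2) / scc(isi, 1)
assert abs(a_hat - a) < 0.05
```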
We model Type II P-type primary electrosensory afferents using a noisy threshold with correlation function \({{\,\mathrm{\mathbf {R}}\,}}_k\) specified by Eq. (22). The noise correlation function and dynamic threshold parameters were determined from the experimental data, and tuned so that they matched the afferent SCCs, ISI, and joint distributions. We demonstrate with two examples: (i) moderate bursting activity and (ii) strong bursting activity. The moderately bursting neuron has a broad bimodal ISI distribution (Fig. 7A–C), and its ISI and joint ISI distributions, and SCCs are captured by the matched model (Fig. 7D–F). The noise correlation function for generating the model spike-train is depicted in Fig. 5B, and the noise partial correlation function in Fig. 5E. SCCs of adjacent ISIs \(\rho _1\) are \(-0.59\) (data) and \(-0.62\) (model). The sums of SCCs are \(-0.49\) (data) and \(-0.5\) (model). The strongly bursting neuron has a well-defined bimodal distribution (Fig. 8A–C), and its ISI and joint ISI distributions, and SCCs are captured by the matched model (Fig. 8D–F). The noise correlation function for generating the model spike-train is depicted in Fig. 5C, and the noise partial correlation function in Fig. 5F. SCCs of adjacent ISIs \(\rho _1\) are \(-0.7\) (data) and \(-0.7\) (model). The sums of SCCs are \(-0.49\) (data) and \(-0.5\) (model). Thus, in both cases (Figs. 7 and 8), the observed patterns of Type II SCCs are reproduced. All observed afferent spike trains were either Type I or Type II.
We mention in passing that the noise correlation functions depicted in Fig. 5A–C should not be confused with the SCCs depicted in panels C and F in Figs. 6, 7, and 8.
Type III serial correlation coefficients Type III afferent spiking activity demonstrates serial correlations where \(\rho _1 = -0.5\) and all \(\rho _k = 0\) for \(k \ge 2\). This is a degenerate case, resulting in a singleton with a unique set of SCCs. Such spike trains can be generated by a process where spike threshold noise is uncorrelated, and hence white. In this case, \({{\,\mathrm{\mathbf {R}}\,}}_0 = \sigma _\gamma ^2\) is noise power, and \({{\,\mathrm{\mathbf {R}}\,}}_k = 0\) for \(k \ge 1\). We see immediately from Eqs. (12) and (13) that the \(\rho _k\) have the prescribed form. Further, we note that trivially \(\sum _{k \ge 1}\,\rho _k = -0.5\) as stated in Eq. (14). We mention in passing, and for later discussion, that white noise added to the spike threshold is equivalent to setting \(a = 0\) (\(\tau _\gamma = 0\)) in the first-order AR process defined in Eq. (17). A spike-train with exactly Type III SCCs has not been observed in the experimental data presented here.
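The degenerate Type III pattern follows from differencing uncorrelated noise; a minimal sketch (the noise power is immaterial to the SCCs):

```python
import numpy as np

rng = np.random.default_rng(3)
# White (uncorrelated) threshold noise: R_0 = sigma^2, R_k = 0 for k >= 1.
gamma = rng.standard_normal(500_000)
isi = np.diff(gamma)   # constants in Eq. (9) drop out of the SCCs

def scc(x, k):
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

# Type III: rho_1 = -0.5 and rho_k = 0 for k >= 2.
assert abs(scc(isi, 1) - (-0.5)) < 0.01
assert all(abs(scc(isi, k)) < 0.01 for k in range(2, 6))
```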
3.3.3 Partial autocorrelation functions
The autocorrelation function (ACF) at lag k includes the influence of the variable \(X_{i+k}\) on \(X_i\), and the influence of all the intervening lags \(r < k\). The direct effect of \(X_{i+k}\) on \(X_i\) is not explicitly known. On the other hand, the partial autocorrelation (PAC) between two variables in a sequence \(X_i\) and \(X_{i+k}\) removes the linear influence of the intervening variables \(X_{i+1},\,\ldots ,\,X_{i+k-1}\) (Box and Jenkins 1970). The PAC as a function of k is the PAC function or PACF. Section 2.2.2 in Methods discusses the differences between the ISI PACF and the ISI ACF with regard to AR and MA processes. We estimated the PACF from experimental data and the modeled spike trains using Eqs. (2), (3), and (4). These are the same model spike trains used in the calculation of the ACFs. The PACFs are shown in Fig. 9A–C for the afferents shown in Figs. 6, 7, and 8, respectively. By definition the first partial autocorrelation \(\phi _{1,\,1} = \rho _1\). In contrast to the SCCs where correlation coefficients \(\rho _k\) could be positive or negative for \(k \ge 2\), all partial autocorrelations are negative irrespective of the type of afferent.
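The PACF can be computed from an ACF sequence with the standard Durbin–Levinson recursion (a generic sketch, not the authors' estimators of Eqs. 2–4). Applied to the Type III ISI ACF (\(\rho _1 = -0.5\), \(\rho _k = 0\) otherwise), it illustrates the claim that all partial autocorrelations are negative:

```python
import numpy as np

def pacf_from_acf(rho):
    """Partial autocorrelations phi_{k,k} from an ACF sequence
    (rho[0] = 1) via the Durbin-Levinson recursion."""
    kmax = len(rho) - 1
    phi = np.zeros((kmax + 1, kmax + 1))
    phi[1, 1] = rho[1]
    for k in range(2, kmax + 1):
        num = rho[k] - sum(phi[k - 1, j] * rho[k - j] for j in range(1, k))
        den = 1.0 - sum(phi[k - 1, j] * rho[j] for j in range(1, k))
        phi[k, k] = num / den
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
    return np.array([phi[k, k] for k in range(1, kmax + 1)])

# ISI ACF of the degenerate Type III case: rho_1 = -0.5, rho_k = 0 otherwise.
rho = np.array([1.0, -0.5] + [0.0] * 8)
pac = pacf_from_acf(rho)

# phi_{1,1} = rho_1, and every partial autocorrelation is negative,
# here following -1/(k+1).
assert pac[0] == -0.5
assert np.all(pac < 0)
assert np.allclose(pac, [-1 / (k + 1) for k in range(1, 10)])
```

Even though the ACF here is zero beyond lag 1, the PACF has an infinite negative tail, consistent with the ACF/PACF duality of MA and AR processes discussed in Sect. 2.2.2.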
3.3.4 Dynamic threshold with a random time-constant
The dynamic threshold has three parameters A, \(\gamma \), and \(\tau \). We have so far described a stochastic model based on a noisy \(\gamma \) which is formally equivalent to a noisy A. We now consider a stochastic dynamic threshold model based on a noisy time-constant \(\tau \). We can transform the adaptive threshold model with a random spike threshold from the previous section and Fig. 4 so that the time-constant of the adaptive threshold filter \(h\left( t\right) \) is a random variable with mean \(\tau \) (Fig. 10). From Eq. (9) the random variate \(\gamma _i - \gamma _{i-1}\) which appears in the time-base can be transformed into the random variate \(m_i\) which is the slope of the linearized adaptive threshold (green line, Fig. 10). From
we obtain
and so, \({{\,\mathrm{\mathbf {E}}\,}}\left[ \tau _i\right] = \tau \). As noted, this is the mean value of the dynamic threshold filter time-constant. It is immediately apparent from Eqs. (9) and (26) that the covariance of the sequence \(\tau _i\) (sampled at the ISIs) is the same as the covariance of the ISI sequence up to a scale-factor, and therefore the serial correlations of \(\tau \) (\(\rho _{\tau ,\,k}\), \(k \ge 0\)) are the same as the serial correlations of the ISIs. Therefore (see Appendix for details)
The right sides of the above equations, Eqs. (27) and (28), are the same as those of Eqs. (12) and (13), the expressions for the SCCs of a spike train generated with a noisy adaptive threshold. Thus, the random filter time-constants have the same serial correlation coefficients as the interspike intervals, \(\rho _{\tau ,\,k} = \rho _k\). This is a “pass through” effect where the correlations observed in the time-constant are directly reflected in the ISI correlations.
In summary, a noisy threshold \(\gamma \) or a noisy filter time-constant \(\tau \) can be used to generate spike trains which have prescribed ISI SCCs. We have generated spike trains using a noisy threshold and will not duplicate the results for a noisy time-constant.
4 Discussion
4.1 Experimental observations
All experimentally observed P-type spike-trains from Apteronotus leptorhynchus demonstrated negative correlations between adjacent ISIs (Figs. 1, 2A). Thus, negative SCCs between adjacent ISIs may be an obligatory feature of neural coding, at least in this species. The implications for coding are discussed further below. A broad experimental observation is the roughly equal division of spike-trains into units with \(\rho _1 > -0.5\) (non-bursting or Type I units, Fig. 1A, \(N = 24\)) and units with \(\rho _1 < -0.5\) (bursting or Type II units, Fig. 1C, \(N = 28\)). Using a different method of classification, Xu et al. (1996) reported 31% of 117 units as bursting.
As a function of the firing rate, the first SCC, \(\rho _1\) (Fig. 2C) shows a minimum at a firing rate of about 190 spikes/s with a V-shaped envelope which appears to set a lower bound on \(\rho _1\) at low and high firing rates. At high firing rates, the mean ISI is small, and there is limited variability below the mean ISI because an ISI cannot be less than zero. This possibly increases \(\rho _1\) (toward zero) as firing rate increases, and creates the effect shown in (Fig. 2C). Supporting this finding is the inverse relationship between \(\rho _1\) and the coefficient of variation (CV) of ISI (Fig. 2C). Although the correlation is weak (\(r = -0.67\)) a large \(\rho _1\) is more likely when the ISI CV is small. An explanation for the shape of the envelope on the low-firing rate flank in Fig. 2C is not readily apparent.
For Type I units \(\rho _2 < 0\), whereas for Type II units \(\rho _2 > 0\) (Fig. 2C). The former give rise to over-damped SCCs which remain negative and diminish to zero, while the latter give rise to under-damped or oscillatory SCCs which also diminish to zero. The dependence of \(\rho _2\) on \(\rho _1\) is linear for the most part and follows the equation \(\rho _2 = -1.18\rho _1 - 0.51\) (Fig. 2C). The limiting sum of SCCs \(\sum _{k \ge 1}\rho _k\) is close to \(-0.5\) irrespective of the type of SCC pattern (Fig. 2D) (Ratnam and Goense 2004). With the exception of two afferent fibers, the sum of SCCs was never smaller than \(-0.5\), i.e., it was almost always true that \(\sum _{k \ge 1}\rho _k \ge -0.5\). In those two cases the sums were less than the limit (\(-0.505\) and \(-0.502\)), possibly due to estimation error. Although the dominant SCCs are \(\rho _1\) and \(\rho _2\), their sum \(\rho _1 + \rho _2\) is not close to \(-0.5\) (i.e., \(\rho _2 \ne -\rho _1 - 0.5\)) and deviates, as stated above, with a linear relationship which follows \(\rho _2 = -1.18\rho _1 - 0.51\). Thus, more terms (lags) are needed to bring the sum of SCCs close to the limiting value, and this results in significant correlations extending over multiple lags (time-scales). As discussed in an earlier work (Ratnam and Nelson 2000), long-short (short-long) ISIs create memory in the spike-train and keep track of the deviations of successive ISIs from the mean ISI (\(T_1\)). These deviations are a series of “credits” and “debits” which may not balance over adjacent ISIs, but will eventually balance so that a given observation of k successive ISIs returns to the mean with \(t_k - t_0 \approx kT_1\). Such a process will exhibit long-range dependencies that may not be captured by SCCs.
That the dependencies may extend over multiple ISIs is confirmed from an analysis of joint dependencies of intervals extending to high orders (van der Heyden et al. 1998; Ratnam and Nelson 2000). All 52 units in Apteronotus leptorhynchus were at least second-order Markov processes, with about half (\(N = 24\)) being fifth-order or higher (Ratnam and Nelson 2000). Further, SCCs were not correctly predicted when only the adjacent ISI dependencies were preserved, i.e., were considered to be first-order Markov (Poggio and Viernstein 1964; Rodieck 1967; Ratnam and Nelson 2000). Indeed, an examination of the sequence of SCCs provides no indication of the extent of memory. For instance, short-duration correlations do not necessarily imply that ISI dependencies are limited to fewer adjacent intervals. Long-duration dependencies may be present even when the correlation time is short (van der Heyden et al. 1998). Conversely, a first-order Markov process produces a ringing in the serial correlogram (\(\rho _k = \rho _1^k\)) that can continue for ISIs much longer than two adjacent ISIs (Cox and Lewis 1966; Nakahama et al. 1972). In fact, for some P-type electrosensory afferent spike trains, the observed ISIs exhibited SCCs whose magnitudes were smaller than the SCCs for the matched first-order Markov model even though the experimental data were at least second-order or higher (see Fig. 8, Ratnam and Nelson 2000).
The stochastic process which generates ISIs may be more complex than a simple autoregressive (AR) or moving average (MA) process because afferent ISI correlation functions (Figs. 6F, 7F, and 8F) and partial autocorrelation functions (Fig. 9) are infinite in duration (see Methods for a distinction between the two functions, Box and Jenkins (1970)). This suggests that a more general ARMA process may be responsible for the generation of ISIs. The source of the mix of AR and MA processes is discussed further below when we consider the generating model.
4.2 The dynamic threshold model
The experimental observations and dependency analyses motivated us to ask whether we could develop a stochastic model to reproduce a prescribed sequence of SCCs \(\rho _1\,,\rho _2\,,\ldots \,,\rho _k\). We adopt a widely used and physiologically plausible model with a time-varying, i.e., dynamic, threshold. It is a simple model with few parameters. The model allows us to probe patterns of SCCs as a function of these parameters. Further, it tests the extent to which we can describe experimental data with a simple model. Dynamic threshold models typically have three components: (1) dynamics of membrane voltage v(t) in response to an input signal, (2) a spike or impulse generator, and (3) a time-varying dynamic threshold r(t) which is elevated following a spike and subsequently relaxes monotonically. A spike is fired when the membrane voltage meets the (relaxing) threshold, thus forming a feedback loop (Fig. 3B). Models of adaptation with an adaptation current are broadly similar but instead of altering the threshold directly, they use an outward current to induce refractoriness (see, e.g., Benda et al. 2010; Benda and Herz 2003; Schwalger and Lindner 2013). In many models, the feedback loop is implicitly defined (as in conductance-based models), but it may also be explicitly defined as is done here so that \(v(t) - r(t)\) reaches a fixed spike-initiation threshold. This threshold is usually taken to be zero, i.e., \(v(t) = r(t)\) (Kobayashi et al. 2009), but here we assume it to be a nonzero value (\(\gamma = A/2\)) following Jones et al. (2015). This choice does not alter model behavior or the analyses presented here (see Methods) but has relevance to optimal coding and is discussed further below.
Models that do not utilize a dynamic threshold or an adaptation current are not discussed here because they are outside the scope of this work. The reader may consult Longtin et al. (2003), Lindner and Schwalger (2007), Farkhooi et al. (2009), Urdapilleta (2011) for more details and references to such models. Most dynamic threshold models that address negative correlations assume some form of perfect or leaky integrate-and-fire dynamics for the first two components listed above (Chacron et al. 2000, 2001, 2004b; Brandman and Nelson 2002; Benda et al. 2010), and an exponentially decaying dynamic threshold without spike reset. Noise is added to the time-varying threshold or to the input current and this results in negative serial correlations. Model complexity has been a major drawback in determining the precise role of noise in shaping ISI correlations (see, for instance, observations made in Chacron et al. 2004b; Lindner et al. 2005). Resets, hard refractory periods, sub-threshold dynamics due to synaptic filtering, and sometimes multiple sources of noise obscure the effects of signal propagation through the system and obscure signal dependencies. Thus, with few exceptions (see below), dynamic threshold models and models with adaptation currents, have been qualitative. They demonstrate some features of experimentally observed ISI distributions, and at best correlations between adjacent ISIs (i.e., \(\rho _1\)) (Geisler and Goldberg 1966; Chacron et al. 2000, 2003). On the other hand, reduced model complexity can result in a lack of biophysical plausibility. Thus a judicious choice of models should expose desired mechanisms while retaining enough important features of the phenomena.
4.3 Model results
In recent years, deterministic dynamic threshold models with an exponential kernel have been used to predict spike-times from cortical and peripheral neurons (Kobayashi et al. 2009; Fontaine et al. 2014; Jones et al. 2015) (see Gerstner and Naud 2009, for an early review) and predict peri-stimulus time histograms (Jones et al. 2015; Johnson et al. 2016) with good accuracy. Capturing spike-times accurately is perhaps the first requirement in our analysis, and this gives confidence that the model may tell us something about ISI correlations. We eliminated sub-threshold dynamics and resets so that there is only one nonlinear element, the spike generator (Jones et al. 2015; Johnson et al. 2016). These are not serious restrictions, and they make the analysis tractable. We follow the usual practice of representing the dynamic threshold element with an exponential decay with time-constant \(\tau \). The absence of reset implies that the time-varying dynamic threshold which carries memory is a simple convolution of the spike train with an exponential kernel \(Ae^{-t/\tau }\) (Fig. 3B). We inject noise precisely in one of two places, either to perturb the spike threshold \(\gamma \) (Fig. 4) or perturb the time-constant \(\tau \) (Fig. 10). The two forms of perturbation are formally equivalent (see Appendix). We linearize the exponential so that we can obtain analytical solutions of SCCs. This is applicable at asymptotically high spike-rates (Jones et al. 2015; Johnson et al. 2015) and applies well to P-type afferent spike trains because of their high baseline firing rates, about 250-300 spikes/s (Bastian 1981; Xu et al. 1996; Ratnam and Nelson 2000). To fit ISI and joint-ISI distributions of individual P-type afferents, the parameters of the dynamic threshold element \(h\left( t\right) \) (A and \(\tau \)) are obtained from the afferent spike-train (Jones et al. 2015). A single noise parameter, a, independent of the dynamic threshold, is obtained from the observed SCCs. 
These model elements and procedures allow us to determine the shaping of ISI correlations. The major results are:
1. ISI correlations are determined by the autocorrelation function, \({{\,\mathrm{\mathbf {R}}\,}}\), of the noise process (Eqs. 12–13).
2. Non-bursting units and bursting units are described by the same functional relationship between ISI SCCs and \({{\,\mathrm{\mathbf {R}}\,}}\) (Eqs. 12–13).
3. Non-bursting spike trains (with unimodal ISI distributions) are generated by slow noise with a decaying (positive) correlation function \({{\,\mathrm{\mathbf {R}}\,}}\), which in its simplest form is given by Eq. (18) (e.g., Fig. 6).
4. Bursting spike trains (with bimodal ISI distributions) are generated by fast noise with an oscillating correlation function \({{\,\mathrm{\mathbf {R}}\,}}\), which in its simplest form is given by Eq. (22) (e.g., Figs. 7 and 8).
5. The two types of correlation functions are distinguished by the sign of a single parameter, the coefficient a of a first-order autoregressive process (Eqs. 17 and 21). The AR parameter is directly related to the noise bandwidth, i.e., the low or high cut-off frequency, and can be uniquely determined from the correlation, \(\rho _1\), between adjacent ISIs (Eqs. 20 and 24). More robust estimates are obtained from the ratio \(\rho _2/\rho _1\). SCCs at subsequent lags are related to a as terms in a geometric progression (Eqs. 19 and 23).
6. While more complex patterns of SCCs can be produced by other types of noise correlation functions, only Type I and Type II SCC patterns are observed in P-type afferent spike-trains. Type III SCC patterns are mentioned here because they are commonly reported in modeling studies and are discussed further below.
7. The expression for ISI SCCs is independent of the adaptive threshold parameters (A and \(\tau \)), the signal (v), and the firing threshold (\(\gamma \)). It depends only on the noise correlation function, including the noise power \({{\,\mathrm{\mathbf {R}}\,}}_0\).
8. The model fits ISI and joint-ISI distributions.
9. For both non-bursting and bursting units (slow and fast noise, respectively) the theoretical prediction of the sum of ISI SCCs is exactly \(-0.5\). The sum of SCCs over all afferent spike trains is close to this limit: \(-0.475 \pm 0.04\) (\(N = 52\)).
10. SNR (\(10\,\log _{10}(v^2/{{\,\mathrm{\mathbf {R}}\,}}_0)\)) is generally larger than \(20~\text {dB}\), i.e., fluctuations in threshold (noise) are small compared to the input signal. This is in keeping with the hypothesis that spike-time jitter is small in comparison with the mean ISI.
There are two components to ISI serial correlations as is apparent from Eq. (9), where the ith ISI is given by \(t_{i} - t_{i-1} = \left( \gamma _{i} - \gamma _{i-1} + A\right) /m\). The first component is due to the difference \(\gamma _{i} - \gamma _{i-1}\) which is coupled to the next (adjacent) interval \(t_{i+1} - t_{i}\) by the common term \(\gamma _i\). This term appears with opposing signs in adjacent ISIs and hence results in a negative correlation which does not extend beyond these ISIs. If the \(\gamma _i\) are uncorrelated then it can be shown that the adjacent ISI correlation \(\rho _1 = -0.5\) and all other \(\rho _k = 0\), for \(k \ge 2\). Thus, for independent random variables, this result follows from the property of a differencing operation and it is not indicative of memory beyond adjacent ISIs. The second component of ISI correlations is due to long-range correlations \({{\,\mathrm{\mathbf {R}}\,}}\) in the random process \(\gamma \) which extend beyond adjacent ISIs. These correlations are endogenous, possibly biophysical in origin, and could be shaped by coding requirements. The two components to ISI correlations are made clear by restating Eq. (13) for \(\rho _1\) as
Thus, they are separable. For a wide-sense stationary process, \({{\,\mathrm{\mathbf {R}}\,}}_0 > {{\,\mathrm{\mathbf {R}}\,}}_k\) for all \(k \ge 1\), and so the denominator of the second term is always positive. Thus, the deviation of \(\rho _1\) from \(-0.5\) is determined by the sign of \({{\,\mathrm{\mathbf {R}}\,}}_1-{{\,\mathrm{\mathbf {R}}\,}}_2\). This term is positive for non-bursting units (Type I SCCs), and it is negative for bursting units (Type II SCCs). The singleton case (Type III SCCs) results because noise is uncorrelated and so the term vanishes. In this case only the first component is present. For an Ornstein–Uhlenbeck or Gauss–Markov process, i.e., first-order AR process with coefficient \(a > 0\), \({{\,\mathrm{\mathbf {R}}\,}}_1 > {{\,\mathrm{\mathbf {R}}\,}}_2\) (Fig. 5A), and this produces a non-bursting Type I pattern. When the AR parameter is negative, i.e., the coefficient is \(-a\), \({{\,\mathrm{\mathbf {R}}\,}}_1 < {{\,\mathrm{\mathbf {R}}\,}}_2\) (Fig. 5B, C), and this produces a bursting Type II pattern with a bimodal ISI distribution. Thus, a single parameter (the sign of the first-order AR parameter) can create the observed patterns of negatively correlated SCCs. Type III SCCs where \(\rho _1 = -0.5\), and \(\rho _k = 0\) for \(k \ge 2\) are trivially generated by perturbing the spike threshold with uncorrelated white noise, i.e., by setting \(a = 0\) in the first-order AR process. We have not observed Type III neurons experimentally although some spike trains have \(\rho _1\) values close to \(-0.5\).
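A numerical check of this separation, assuming Eq. (13) takes the differenced form \(\rho _k = (2{{\,\mathrm{\mathbf {R}}\,}}_k - {{\,\mathrm{\mathbf {R}}\,}}_{k-1} - {{\,\mathrm{\mathbf {R}}\,}}_{k+1})/(2({{\,\mathrm{\mathbf {R}}\,}}_0 - {{\,\mathrm{\mathbf {R}}\,}}_1))\), which is consistent with the geometric series of Eqs. (19) and (23):

```python
import numpy as np

def rho1(R):
    """First ISI SCC from a noise ACF (R[0], R[1], R[2]), assuming
    Eq. (13) takes the differenced form
    rho_k = (2 R_k - R_{k-1} - R_{k+1}) / (2 (R_0 - R_1))."""
    return (2 * R[1] - R[0] - R[2]) / (2 * (R[0] - R[1]))

a = 0.6                             # illustrative AR parameter magnitude
k = np.arange(3)
type1 = a ** k                      # slow noise: R_1 > R_2
type2 = (-a) ** k                   # fast noise: R_1 < R_2
type3 = np.array([1.0, 0.0, 0.0])   # white noise: the term vanishes

assert rho1(type1) > -0.5           # equals -(1 - a)/2 = -0.2, Type I
assert rho1(type2) < -0.5           # equals -(1 + a)/2 = -0.8, Type II
assert rho1(type3) == -0.5          # Type III
```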
The sign of the a parameter in the AR process determines the time-scale of noise fluctuations in the spike threshold, and determines the patterns of SCCs. Recall that \(\tau _\gamma = -T_1/\ln \left( a\right) \). For Type I SCCs, the AR process produces slow noise with noise bandwidth dominated by frequencies \(\omega < 2\pi \tau _\gamma ^{-1}\) (i.e., low-pass). For Type II SCCs, the generating noise is dominated by frequencies \(\omega > 2\pi \tau _\gamma ^{-1}\) (i.e., high-pass). In the latter case, the characteristic ringing of the correlation function is distinctive, with the degree of damping being controlled by the value of \(\tau _\gamma \).
From the above observations, we suggest that the ARMA process leading to generation of ISIs is the result of two processes: 1) An MA component which is due to a differencing operation. This contributes a value of \(-1/2\) to \(\rho _1\), i.e., to the finite portion of the autocorrelation function (and the infinite tail of the partial autocorrelation function). 2) An AR component which is due to a feedback loop (Fig. 4). This contributes a value of \(-1/2\) to \(\phi _{1,\,1}\), i.e., to the finite portion of the partial autocorrelation function (and the infinite tail of the autocorrelation function). In the noise process used here, \(\rho _1 = -1/2 \pm a/2\) (Eqs. 20 and 24 for non-bursting and bursting afferents, respectively), and so the residual contribution of the autoregressive component to \(\rho _1\) is \(\pm a/2\). The goodness of fit to the experimental data with a single parameter \(\pm a\) leads us to believe that the underlying ARMA model is first order in both the AR and MA components.
Farkhooi et al. (2009) reported that in the extrinsic neurons of the mushroom body in the honeybee, the ISI partial autocorrelation function is zero for all lags \(k > 1\). Thus, the ISI process is first-order AR, i.e., AR(1). As we remarked above, we cannot conclude the order of the AR or ARMA process from our data although we think that it is possibly ARMA\((1,\,1)\). The threshold noise process (whose autocorrelation and partial autocorrelation are depicted in Fig. 5) is first-order AR (Fig. 5D–F). However, the threshold noise correlations (Eq. 18 for Type I, and Eq. 22 for Type II) influence the ISI correlations according to Eq. 13. This is illustrated in the closed form expressions for ISI SCCs using a first-order noise process (Eqs. 19 and 23). The ISI SCC \(\rho _k\) is a function of k and \(k-1\). For a first-order noise process at least, our data suggest that the ISI process is a mix of AR and MA processes and not a simple AR(1) process.
4.4 Comparison with other adaptation models
These results can be directly compared with results from an earlier study which modeled adaptation currents (Schwalger et al. 2010). In that study, patterns of SCCs were generated by a perfect integrate-and-fire neuron under two conditions: (i) a deterministic adaptation current with fast, white Gaussian noise input, and (ii) a stochastic adaptation current with slow, exponentially correlated (channel) noise. A deterministic adaptation current with fast noise produced patterns of SCCs that we report here as Types I and II. These patterns were characterized by a parameter \(\vartheta \) which is analogous to the AR parameter a used here, with the Type I pattern (positive coefficient, a) corresponding to \(\vartheta >0\), and the Type II pattern (negative coefficient, \(-a\)) corresponding to \(\vartheta < 0\). A pattern similar to the Type III SCC pattern reported here resulted when \(\vartheta = 0\). This is the same as \(a = 0\) although we did not observe these neurons experimentally. While we define Type III SCCs to have only one pattern (\(\rho _1 = -0.5\), and \(\rho _k = 0\) for \( k \ge 2\)), Schwalger et al. (2010) report that \(\rho _k = 0\) when \(k \ge 2\), but the value of \(\rho _1\) is governed by an additional parameter and could take a range of values, with \(\rho _1 = -0.5\) appearing as a limiting case. Stochastic adaptation currents produced only positive ISI correlations which we do not observe or model here. In a subsequent report Schwalger and Lindner (2013) extended the model to more general integrate-and-fire models and obtained a relationship between ISI SCCs and the phase-response curve (PRC). The patterns of SCCs reported here most likely correspond to their Type I PRC.
The model presented here for first-order noise processes, given by Eqs. (17) and (21), does not generate positive correlations. Note that the parameter a in these equations is bounded such that \(0< a = \exp (-T_1/\tau _\gamma ) < 1\). Thus, the AR(1) coefficients a in Eq. (17) and \(-a\) in Eq. (21) are restricted to the open interval \((-1,\,1)\); otherwise the noise process is unstable (i.e., unbounded). Suppose we require the first ISI SCC \(\rho _1 > 0\). Then from Eq. (19) we must have \(a > 1\) (Type I neuron), and from Eq. (23) we must have \(-a < -1\) (Type II neuron). Thus, a is outside the stable range of the AR(1) coefficients and the model cannot generate positive ISI SCCs. For second-order or higher-order noise processes \(\text {AR}(p)\) with \(p > 1\), positive ISI SCCs are possible if the noise correlation \({{\,\mathrm{\mathbf {R}}\,}}_k\) satisfies \({{\,\mathrm{\mathbf {R}}\,}}_0 < 2{{\,\mathrm{\mathbf {R}}\,}}_1 - {{\,\mathrm{\mathbf {R}}\,}}_2\) (from Eq. 13). It should be possible to construct arbitrary correlation sequences using Eq. (13) and satisfying the inequality, to create prescribed sequences of ISI correlations. We have not explored these ideas.
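For illustration, an undamped cosine correlation function (a limiting oscillatory AR(2) case, chosen here for convenience) satisfies \({{\,\mathrm{\mathbf {R}}\,}}_0 < 2{{\,\mathrm{\mathbf {R}}\,}}_1 - {{\,\mathrm{\mathbf {R}}\,}}_2\) and, under the assumed differenced form of Eq. (13), yields a positive \(\rho _1\):

```python
import numpy as np

# Undamped oscillatory noise correlation R_k = cos(theta*k), an
# illustrative limiting AR(2) case; for 0 < theta < pi/2 it satisfies
# R_0 < 2 R_1 - R_2 since 2cos(theta) - cos(2theta) - 1 = 2cos(theta)(1 - cos(theta)) > 0.
theta = np.pi / 6
R = np.cos(theta * np.arange(3))
assert R[0] < 2 * R[1] - R[2]

# Assumed differenced form of Eq. (13) for the first SCC.
rho1 = (2 * R[1] - R[0] - R[2]) / (2 * (R[0] - R[1]))
assert rho1 > 0
# For this correlation function rho_1 simplifies to cos(theta).
assert np.isclose(rho1, np.cos(theta))
```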
The SCCs reported here follow a geometric progression with the filter-pole a being the ratio parameter (see Eqs. 19 and 23). Schwalger et al. (2010) and Schwalger and Lindner (2013) reported that the patterns of negative correlations follow a geometric progression with the ratio parameter being \(\vartheta \). Similarly, a geometric progression was also reported by Urdapilleta (2011). Note that a first-order Markov process also follows a geometric progression with ratio parameter \(\rho _1\) (Cox and Lewis 1966; Nakahama et al. 1972). A geometric progression of SCCs is not surprising given that the ISI sequence is discrete, and thus, feedback will result in a (discrete) recurrence relation. For example, all the first-order AR processes used to predict SCCs have correlation functions which are in geometric progression (Eqs. 18 and 22). Taken together with the analysis of partial correlation coefficients, and as noted above, it is likely that the underlying ARMA process generating the ISI sequence is low-order. The limiting sum of SCCs reported here (Eq. 14) is exactly \(-0.5\), whereas Schwalger and Lindner (2013) report a sum that asymptotically approaches a value that is slightly larger than \(-0.5\). The average sum of SCCs in our experimental data (Fig. 2D) is also slightly larger than \(-0.5\) (\(-0.475 \pm 0.04\)), and this merits further investigation.
In considering the results presented here using a dynamical threshold model, and those from the more general models using an adaptation current, it appears that the presence of the feedback loop (the coupling between the membrane voltage and the dynamic threshold, see Figs. 3B, 4B, and 10B) may account for almost all the properties of SCCs if the noise fluctuation is shaped appropriately. These fluctuations have the effect of reverberating around the feedback loop, creating memory and introducing negative correlations extending over multiple ISIs. However, the source of this noise is moot because it can appear in the input or the model parameters (see Eq. 31, where noise can be distributed over the input v, the threshold decay function r, or the spike threshold \(\gamma \)). The dynamic threshold model used here does not suggest a biophysical mechanism, but we suggest that a noisy threshold is due to an endogenous source of noise which perturbs the spiking threshold \(\gamma \) or the time-constant \(\tau \), i.e., noise is not passed through the input. This is an assumption, and input noise will of course shape SCCs. However, several biophysical mechanisms can account for endogenous sources of noise, including probabilistic transitions between conformational states in voltage-gated ion channels (White et al. 1998, 2000; Schneidman et al. 1998; Van Rossum et al. 2003; Fontaine et al. 2014) leading to perturbations in gating characteristics (Benda et al. 2010; Chacron et al. 2007). These can introduce cumulative refractoriness in firing. It should be noted that the amount of noise to be added to the threshold is small, generally weaker than \(20~\text {dB}\) SNR (see Figs. 6, 7, and 8). That only weak noise may be necessary has been reported earlier by Schwalger and Lindner (2013), and in a study on threshold shifts by Fontaine et al. (2014).
While a dynamic threshold model does not explicitly incorporate biophysics, models with adaptation currents can be more readily tied to biophysical conductances. An ion channel that may be a substantial contributor to adaptation currents is the non-inactivating M-current (KCNQ/Kv7 family, Brown and Adams 1980) which is a voltage-gated potassium channel with slow dynamics (relative to the mean ISI). This channel may be responsible for spike-timing precision, and hence a timing-based neural code (see Jones et al. 2015, for a discussion). The possible role of the M-current and the specific sources of noise have been explored earlier (see Schwalger et al. (2010) above, in relation to types of SCCs, and Fisch et al. (2012)). Both studies (Schwalger et al. 2010; Fisch et al. 2012) included a modified conductance-based Traub-Miles model (Traub and Miles 1991; Ermentrout 1998) frequently used in modeling M-currents. They concluded that negative correlations were a consequence of a deterministic adaptation current with fast white Gaussian noise input rather than a stochastic adaptation current with slow correlated noise. The current report shows that at least some of the results with adaptation currents can be reproduced with a simple dynamic threshold with a noisy spiking threshold or a noisy decay time-constant. Further, either slow noise or fast noise can produce negative correlations if injected into the spiking threshold, with the time-scale of noise fluctuations determining the type of SCC pattern that is produced (i.e., bursting or non-bursting). In another study (Benda et al. 2010) the first SCC (\(\rho _1\)) was compared when the generating models were a leaky integrate-and-fire (LIF) neuron with an adaptation current (LIFAC) or a dynamic threshold (LIFDT). When the LIFAC and LIFDT models are matched so that the spike-rate and adaptation are about the same, the \(\rho _1\) as a function of spike rate are also similar. 
This gives reason to believe that either model of adaptation can give rise to similar patterns of SCCs, possibly due to the simple feedback present in the models. However, a more detailed biophysical investigation is needed before we can tease apart the differences at a mechanistic level.
Finally, we note that the model has been formulated as a dynamic threshold model. However, the model is so minimal that it can in some instances be viewed as an adaptation-current model. For example, when the randomness in the spike threshold \(\gamma \) is transformed into a variable time-constant (across ISIs), we obtain a model which has a constant spike threshold but a variable relaxation (outward) current. This transformed model is more akin to an adaptation current although it is not stochastic (it is deterministic within an ISI). Thus, for a simple, abstract model such as the one presented here, it may not be possible to make a distinction between a dynamic threshold and an adaptation current.
4.5 Power spectra and noise-shaping
Negatively correlated spike trains have implications for information transmission (Chacron et al. 2004b). These spike trains have a power spectrum \(P\left( \omega \right) \) that rolls off toward DC (Chacron et al. 2004a; Schwalger and Lindner 2013), effectively reducing low-frequency noise and improving SNR in the baseband (also referred to as noise-shaping). The connection between DC power \(P\left( \omega = 0\right) \) and the ISI SCCs is given by Eq. (16) (Cox and Lewis 1966), where it can be seen that negative correlations tend to reduce noise at zero frequency. At the theoretical limit of the sum of SCCs (\(-0.5\)) reported here in Eq. (14), and similar to the results with adaptation currents (Schwalger and Lindner 2013), DC power vanishes, yielding a perfect DC block that allows low-frequency signals to be transmitted with very high SNR. The limiting sum of observed SCCs is only slightly larger than the optimum sum (\(-0.5\)), and this allows some noise to bleed through at zero frequency. These results are in line with the results on power spectra first predicted by Chacron et al. (2004b) and Chacron et al. (2004a), and more recently extended to models with adaptation currents (Schwalger and Lindner 2013).
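To make the noise-shaping consequence concrete: by the Cox and Lewis relation referenced as Eq. (16), the spike-train power at zero frequency scales with \(1 + 2\sum _k \rho _k\) (the rate and ISI-variability prefactor is omitted in this sketch). A quick numerical check under the geometric SCC pattern of this paper:

```python
import numpy as np

def dc_factor(a, n_lags):
    """1 + 2 * sum(rho_k) for the geometric SCCs rho_k = -((1-a)/2) * a**(k-1);
    DC spectral power scales with this factor (rate/CV prefactor omitted)."""
    k = np.arange(1, n_lags + 1)
    rho = -((1 - a) / 2) * a ** (k - 1)
    return 1 + 2 * rho.sum()

print(dc_factor(a=0.6, n_lags=50))   # ~0: sum of SCCs -> -0.5, perfect DC block
print(1 + 2 * (-0.475))              # ~0.05: the observed sum leaves ~5% of the
                                     # renewal-process DC power (~13 dB down)
```

The second line quantifies the "bleed-through" noted above: a limiting sum of \(-0.475\) instead of \(-0.5\) leaves roughly five percent of the renewal-process DC power.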
4.6 Implications for optimal coding
The model presented here sets the spike threshold \(\gamma = A/2\). We showed previously that the time-varying dynamic threshold \(r\left( t\right) \) is an estimator of the input signal v(t), and the mean-squared estimation error \(\langle \left( v(t) - r(t)\right) ^2\rangle \) is minimized when the firing threshold is \(\gamma = A/2\) (Jones et al. 2015; Johnson et al. 2015). The error increases for any other value of \(\gamma \), in particular \(\gamma = 0\), which is the value usually adopted in the literature. Thus, for optimal timing of spikes the ongoing estimation error must remain below \(\gamma = A/2\). When this bound is reached, the neuron fires a spike and raises the adaptive threshold variable by A. Viewed in this manner, the spike generator directly encodes the error and not the signal (Figs. 3B, 4B). This is much more efficient than directly encoding the signal (or some filtered version of the signal). The coding error is equivalent to quantization error in digital coding. This coding mechanism is analogous to one-bit delta modulation (Jayant and Noll 1984), but it is asymmetric because the threshold is one-sided. Thus, coding with a dynamic threshold is analogous to lossy digital coding or source coding. That is, a neuron performs data compression.
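The delta-modulation analogy can be sketched as a toy encoder. The loop below is a minimal sketch, not the authors' implementation: it assumes a linear inter-spike decay of slope m (as in the DC analysis) and illustrative parameter values; only the one-sided rule "spike when the error \(v - r\) reaches \(\gamma = A/2\), then jump r by A" is taken from the text.

```python
def encode(v, A=1.0, m=0.5, dt=1e-3):
    """One-sided, one-bit delta-modulation-style encoder: return spike
    times and the running reconstruction r(t) for an input sequence v."""
    gamma = A / 2.0                 # optimal firing threshold
    r, t, spikes, recon = 0.0, 0.0, [], []
    for x in v:
        r -= m * dt                 # reconstruction decays between spikes
        if x - r >= gamma:          # estimation error reaches the bound ...
            r += A                  # ... emit a spike, jump by A
            spikes.append(t)
        recon.append(r)
        t += dt
    return spikes, recon

v = [0.6] * 5000                    # constant DC input, 5 s at dt = 1 ms
spikes, recon = encode(v)
# steady-state ISI approaches A/m = 2 s; |v - r| stays within ~A/2
```

The reconstruction \(r(t)\) tracks the input to within about \(\pm A/2\), and for a constant input the ISIs settle at \(A/m\): the spike train encodes the error, not the signal.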
The optimal coding principle is based on a trade-off between maximizing coding fidelity (reducing estimation error) and minimizing the long-term firing rate of the neuron (a metabolic energy constraint, see Attwell and Laughlin (2001), and Johnson et al. (2016); Jones et al. (2015); Johnson et al. (2015)). This formulation is closely related to an approach from predictive coding in spiking networks which also balances fidelity against spiking activity in a population of spiking neurons (Boerlin et al. 2013).
For a constant DC-valued input, the SCCs generated by the model are dependent solely on noise correlations \({{\,\mathrm{\mathbf {R}}\,}}\), and are independent of all other parameters, including the dynamic threshold parameters A and \(\tau \), the mean value of the threshold \(\gamma \), and the constant DC-valued input signal v. The model parameters A and \(\tau \) are fitted from the baseline (spontaneous) spike-train through the relationship \(A\tau = vT_1\), where v is the bias input and \(T_1\) is the long-term mean ISI. This allows us to reduce the degrees of freedom to one, to either A or \(\tau \), and then determine the free parameter by optimizing the model spike train to match experimental spike-times (see Jones et al. 2015, for details of the procedure). Thus, the dynamic threshold parameters are not estimated from the observed SCCs. This fits with our understanding of the dynamic threshold as an optimal estimator with its parameters being determined by coding quality and an energy constraint (Jones et al. 2015; Johnson et al. 2015). When the deterministic parameters are fixed, a noisy threshold can be generated by estimating the filter parameter a from \(\rho _1\) or \(\rho _2/\rho _1\) to match the observed SCCs. This decoupling of dynamic threshold parameters from SCCs implies that a correlation function \({{\,\mathrm{\mathbf {R}}\,}}\) can be generated from a specified sequence of SCCs (determined, say experimentally) without reference to the time-varying threshold. Thus, a family of spike trains with arbitrary mean ISI, ISI and joint-ISI distributions can be generated, all of which have the same sequence or pattern of SCCs. Some iteration will be needed to fit the ISI and joint-ISI distributions from the dynamic threshold parameters A and \(\tau \) as is done here, but this is not too difficult.
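The decoupling of the SCC fit from the dynamic-threshold parameters is simple in practice. The sketch below (an illustrative function, not the paper's code) recovers the filter pole from two observed SCCs via the geometric ratio \(\rho _2/\rho _1\) and returns the implied noise correlation sequence up to the unknown power \({{\,\mathrm{\mathbf {R}}\,}}_0\), which does not affect the SCCs:

```python
import numpy as np

def fit_ar1_pole(rho1, rho2, n_lags=10):
    """Estimate the AR(1) pole a from the geometric-progression property
    rho_2 / rho_1 = a, and return the implied normalized noise
    correlations R_k / R_0 = a**k."""
    a = rho2 / rho1
    assert -1 < a < 1, "observed SCCs inconsistent with a stable AR(1) pole"
    return a, a ** np.arange(n_lags)

# e.g. a non-bursting (Type I) unit with rho_1 = -0.2, rho_2 = -0.12:
a, R = fit_ar1_pole(-0.2, -0.12)
# a = 0.6, and the model's rho_1 = -(1 - a)/2 = -0.2 is self-consistent
```

Alternatively, a can be estimated from \(\rho _1\) alone: for Type I units \(\rho _1 = -(1-a)/2\), giving \(a = 1 + 2\rho _1\).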
4.7 Further work and conclusion
The main goal of this work was to show that a simple dynamic threshold model with few parameters can reproduce known negative correlations, and further, model experimental data and provide insight into the origin and patterns of ISI negative correlations. We proceeded on the assumption that there is only one experimentally observable quantity: the sequence of spike times. Based on spike-times, we have shown that the ISI distribution, the joint-ISI distribution, and SCCs can be predicted accurately. The approach used here suffers from several drawbacks. The mean firing rate, or equivalently mean ISI, is implicitly defined through the model parameters and hence it does not influence SCCs (the noise correlation function is discretized using ISI lag-number, and not the absolute value of the ISI). The SCCs obtained with this approach are thus invariant under firing rate. In reality, we expect correlations to be small when the spike rate is high or low, reaching a maximum at some intermediate firing rate (see Benda et al. 2010). The independence of SCCs from mean ISI is a result of infinite memory in the model because the exponential threshold was approximated as a line with slope m. A second drawback is that the input is a fixed DC-value and does not incorporate dynamics or noise. It should be noted that noise in the current model is assumed to perturb the spiking threshold; however, it can be exogenous, i.e., present in the input v, or present jointly in \(\gamma \) and v. Currently, there is no way to distinguish between these cases (see the deterministic firing rule specified in Eq. 7) because noise can be distributed over \(\gamma \) or over v. Thus, for a noisy DC-valued input and fixed \(\gamma \), the results will be the same as for a noisy \(\gamma \). 
Rather than determining the effect of input dynamics on SCCs, an alternative and promising line of enquiry is to determine the role of ISI SCCs (determined under spontaneously active conditions) in processing time-varying inputs of varying bandwidth, i.e., in signal encoding. As noted earlier (Jones et al. 2015), the bandwidth of the input will affect the quality of the coding and the extent to which ISI correlations maintain coding fidelity (i.e., the stimulus estimation error). To determine these effects, the model can be readily adapted to non-constant inputs by making the input piece-wise linear between spikes. This approach and the necessary theory were developed earlier with a deterministic version of the dynamic threshold model to predict spike-times (Jones et al. 2015; Johnson et al. 2015, 2016). We hypothesize that a stochastic extension of the model with weak noise (added to the spike threshold or decay time-constant) should not disrupt timing other than introducing timing jitter. This has been shown in neocortical neurons, where spike-timing is reliable across repeated presentations of rapidly fluctuating time-varying signals (Mainen and Sejnowski 1995), and can be accurately modeled using a dynamic threshold (Jones et al. 2015). Another study, albeit preliminary, showed that the patterns of negative correlations do not change when there is a time-varying input (Ratnam et al. 2001). While it is known that negative correlations, and more generally adaptation currents, can reduce variability and stabilize firing rate in single neurons (see for example, Ratnam and Nelson 2000; Schwalger and Lindner 2013) and populations of neurons (Farkhooi et al. 2011, 2013), robust spike-times (i.e., reduced timing jitter) in response to time-varying signals may be yet another benefit provided by negative SCCs in ISIs.
Thus, the use of a more exact, i.e., leaky, dynamic threshold function, the extension to time-varying inputs, and the determination of the linkages between dynamics of the input, the threshold, and coding fidelity, are the obvious next steps.
The simplified dynamic threshold model used here demonstrates the influence of noise on ISI correlations. We provide a closed-form expression for SCCs as a function of the noise correlation \({{\,\mathrm{\mathbf {R}}\,}}\). This is useful for solving the inverse problem when SCCs are known and the discrete sequence \({{\,\mathrm{\mathbf {R}}\,}}_k\) is to be determined. The forward problem is also readily solved, i.e., given a sequence \({{\,\mathrm{\mathbf {R}}\,}}_k\), the SCCs can be evaluated. We illustrate with a first-order AR model, and show that a single parameter (the sign of the coefficient in the first-order AR process) captures the pattern of observed SCCs, and in this respect it agrees with a detailed dynamical model using adaptation currents (Schwalger and Lindner 2013). This model can fit observed spike-times with good accuracy (Kobayashi et al. 2009; Jones et al. 2015), is energy-efficient while maximizing coding fidelity (Jones et al. 2015; Johnson et al. 2015), and is computationally inexpensive. Finally, the model provides a quick and efficient way to generate surrogate data mimicking negatively correlated spike trains observed in experimental neurons. Thus, the model can be useful for understanding timing-based neural codes.
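As a concrete illustration of the surrogate-data use case, the sketch below generates spike-times with a prescribed mean ISI and first SCC, assuming a Type I (non-bursting) geometric pattern and the Appendix construction; the parameter values and the noise amplitude \(\sigma \) are illustrative.

```python
import numpy as np

def surrogate_spike_train(rho1, n, T1, sigma, seed=0):
    """Spike-times whose ISIs have mean T1 and the geometric Type I SCC
    pattern with first coefficient rho1: fit the pole a = 1 + 2*rho1,
    generate AR(1) threshold noise, and difference it into the ISIs."""
    rng = np.random.default_rng(seed)
    a = 1.0 + 2.0 * rho1                   # Type I: rho_1 = -(1 - a)/2
    xi = sigma * rng.standard_normal(n + 1)
    gamma = np.empty(n + 1)
    gamma[0] = xi[0]
    for i in range(1, n + 1):
        gamma[i] = a * gamma[i - 1] + xi[i]
    isi = T1 + np.diff(gamma)              # mean ISI is T1 by construction
    return np.cumsum(isi)

t = surrogate_spike_train(rho1=-0.2, n=100_000, T1=0.005, sigma=2e-4)
isi = np.diff(t)
rho1_hat = np.corrcoef(isi[:-1], isi[1:])[0, 1]   # close to -0.2
```

The ISI and joint-ISI distributions can then be shaped separately through the dynamic threshold parameters A and \(\tau \), as described above.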
References
Amassian VE, Macy J, Waller HJ, Leader HS, Swift M (1964) Transformations of afferent activity at the cuneate nucleus. In: Gerard RW, Duyff J (eds) Information processing in the nervous system. Excerpta Medica Foundation, Amsterdam, pp 235–254
Attwell D, Laughlin SB (2001) An energy budget for signaling in the grey matter of the brain. J Cereb Blood Flow Metab 21(10):1133–1145
Avila-Akerberg O, Chacron MJ (2011) Nonrenewal spike train statistics: causes and functional consequences on neural coding. Exp Brain Res 210(3–4):353–371
Bastian J (1981) Electrolocation. J Comp Physiol 144(4):465–479
Benda J, Herz AVM (2003) A universal model for spike-frequency adaptation. Neural Comput 15:2523–2564
Benda J, Maler L, Longtin A (2010) Linear versus nonlinear signal transmission in neuron models with adaptation currents or dynamic thresholds. J Neurophysiol 104:2806–2820
Boerlin M, Machens CK, Denève S (2013) Predictive coding of dynamical variables in balanced spiking networks. PLoS Comput Biol 9(11):e1003258
Box G, Jenkins G (1970) Time series analysis: forecasting and control. Holden-Day, San Francisco
Brandman R, Nelson ME (2002) A simple model of long-term spike train regularization. Neural Comput 14(7):1575–1597
Brown DA, Adams PR (1980) Muscarinic suppression of a novel voltage-sensitive k+ current in a vertebrate neurone. Nature 283(5748):673–676
Buller AJ (1965) A model illustrating some aspects of muscle spindle physiology. J Physiol 179:402–416
Bullock TH, Chichibu C (1965) Further analysis of sensory coding in electroreceptors of electric fish. Proc Natl Acad Sci USA 54:422–429
Calvin WH, Stevens CF (1968) Synaptic noise and other sources of randomness in motoneuron interspike intervals. J Neurophysiol 31:574–587
Chacron MJ, Longtin A, St-Hilaire M, Maler L (2000) Suprathreshold stochastic firing dynamics with memory in \(\mathit{P}\)-type electroreceptors. Phys Rev Lett 85:1576–1579
Chacron MJ, Longtin A, Maler L (2001) Negative interspike interval correlations increase the neuronal capacity for encoding time-dependent stimuli. J Neurosci 21(14):5328–5343
Chacron MJ, Pakdaman K, Longtin A (2003) Interspike interval correlations, memory, adaptation, and refractoriness in a leaky integrate-and-fire model with threshold fatigue. Neural Comput 15(2):253–278
Chacron MJ, Lindner B, Longtin A (2004) ISI correlations and information transfer. Fluctuation Noise Lett 4(01):L195–L205
Chacron MJ, Lindner B, Longtin A (2004) Noise shaping by interval correlations increases information transfer. Phys Rev Lett 92(8):080601
Chacron MJ, Maler L, Bastian J (2005) Electroreceptor neuron dynamics shape information transmission. Nature Neurosci 8(5):673–678
Chacron MJ, Lindner B, Longtin A (2007) Threshold fatigue and information transfer. J Comput Neurosci 23(3):301–311
Cox DR, Lewis PAW (1966) The statistical analysis of series of events. Wiley, New York
Ermentrout B (1998) Linearization of FI curves by adaptation. Neural Comput 10(7):1721–1729
Farkhooi F, Strube-Bloss MF, Nawrot MP (2009) Serial correlation in neural spike trains: experimental evidence, stochastic modeling, and single neuron variability. Phys Rev E 79(2):021905
Farkhooi F, Muller E, Nawrot MP (2011) Adaptation reduces variability of the neuronal population code. Phys Rev E 83(5):050905
Farkhooi F, Froese A, Muller E, Menzel R, Nawrot MP (2013) Cellular adaptation facilitates sparse and reliable coding in sensory pathways. PLoS Comput Biol, 9(10):e1003251
Fisch K, Schwalger T, Lindner B, Herz AVM, Benda J (2012) Channel noise from both slow adaptation currents and fast currents is required to explain spike-response variability in a sensory neuron. J Neurosci 32(48):17332–17344
Fontaine B, Peña JL, Brette R (2014) Spike-threshold adaptation predicted by membrane potential dynamics in vivo. PLoS Comput Biol 10(4):e1003560
Geisler CD, Goldberg JM (1966) A stochastic model of the repetitive activity of neurons. Biophys J 6(1):53–69
Gerstner W, Naud R (2009) How good are neuron models? Science 326:379–380
Gestri G, Mastebroek HAK, Zaagman WH (1980) Stochastic constancy, variability and adaptation of spike generation: performance of a giant neuron in the visual system of the fly. Biol Cybern 38(1):31–40
Goense JBM, Ratnam R (2003) Continuous detection of weak sensory signals in afferent spike trains: the role of anti-correlated interspike intervals in detection performance. J Comp Physiol A 189(10):741–759
Goense JBM, Ratnam R, Nelson ME (2003) Burst firing improves the detection of weak signals in spike trains. Neurocomput 52:103–108
Goldberg JM, Adrian HO, Smith FD (1964) Response of neurons of the superior olivary complex of the cat to acoustic stimuli of long duration. J Neurophysiol 27:706–749
Hagiwara S (1954) Analysis of interval fluctuation of the sensory nerve impulse. Jpn J Physiol 4(3):234–240
Hagiwara S, Morita H (1963) Coding mechanisms of electroreceptor fibers in some electric fish. J Neurophysiol 26:551–567
Hill AV (1936) Excitation and accommodation in nerve. Proc R Soc Lond B Biol Sci 119:305–355
Jayant NS, Noll P (1984) Digital coding of waveforms: principles and applications to speech and video. Prentice-Hall, Englewood Cliffs, pp 115–251
Johnson DH, Tsuchitani C, Linebarger DA, Johnson MJ (1986) Application of a point process model to responses of cat lateral superior olive units to ipsilateral tones. Hear Res 21:135–159
Johnson EC, Jones DL, Ratnam R (2015) Minimum squared-error, energy-constrained encoding by adaptive threshold models of neurons. In: Proceedings of IEEE international symposium on information theory (ISIT), pp 1337–1341. IEEE
Johnson EC, Jones DL, Ratnam R (2016) A minimum-error, energy-constrained neural code is an instantaneous-rate code. J Comput Neurosci 40(2):193–206
Jolivet R, Lewis TJ, Gerstner W (2004) Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J Neurophysiol 92:959–976
Jones DL, Johnson EC, Ratnam R (2015) A stimulus-dependent spike threshold is an optimal neural coder. Front Comput Neurosci 9:61
Katsuki Y, Yoshino S, Chen J (1950) Action currents of the single lateral-line nerve fiber of fish. I. On the spontaneous discharge. Jpn J Physiol 1:87–99
Kobayashi R, Tsubo Y, Shinomoto S (2009) Made-to-order spiking neuron model equipped with a multi-timescale adaptive threshold. Front Comput Neurosci 3:2009
Kuffler SW, Fitzhugh R, Barlow HB (1957) Maintained activity in the cat’s retina in light and darkness. J Gen Physiol 40:683–702
Lindner B, Schwalger T (2007) Correlations in the sequence of residence times. Phys Rev Lett 98(21):210603
Lindner B, Chacron MJ, Longtin A (2005) Integrate-and-fire neurons with threshold noise: a tractable model of how interspike interval correlations affect neuronal signal transmission. Phys Rev E 72(2):021911
Liu YH, Wang XJ (2001) Spike-frequency adaptation of a generalized leaky integrate-and-fire model neuron. J Comput Neurosci 10(1):25–45
Longtin A, Laing C, Chacron MJ (2003) Correlations and memory in neurodynamical systems. In: Processes with Long-range Correlations, pp 286–308. Springer
Lowen SB, Teich MC (1992) Auditory-nerve action potentials form a nonrenewal point process over short as well as long time scales. J Acoust Soc Am 92:803–806
Mainen ZF, Sejnowski TJ (1995) Reliability of spike timing in neocortical neurons. Science 268(5216):1503–1506
Nakahama H, Ishii N, Yamamoto M (1972) Markov process of maintained impulse activity in central single neurons. Kybernetik 11(2):61–72
Nawrot MP, Boucsein C, Rodriguez-Molina V, Aertsen A, Grün S, Rotter S (2007) Serial interval statistics of spontaneous activity in cortical neurons in vivo and in vitro. Neurocomput 70(10–12):1717–1722
Neiman A, Russell DF (2001) Stochastic biperiodic oscillations in the electroreceptors of paddlefish. Phys Rev Lett 86:3443–3446
Nesse WH, Maler L, Longtin A (2010) Biophysical information representation in temporally correlated spike trains. Proc Natl Acad Sci USA 107(51):21973–21978
Poggio GF, Viernstein LJ (1964) Time series analysis of impulse sequences of thalamic somatic sensory neurons. J Neurophysiol 27:517–545
Prescott SA, Sejnowski TJ (2008) Spike-rate coding and spike-time coding are affected oppositely by different adaptation mechanisms. J Neurosci 28(50):13649–13661
Ratnam R, Goense JBM (2004) Variance stabilization of spike-trains via non-renewal mechanisms: the impact on the speed and reliability of signal detection. Computational Neuroscience Meeting (CNS*2004), Baltimore, MD
Ratnam R, Goense JBM, Nelson ME (2001) The response of P-type electrosensory afferents to weak prey-like stimuli. The 6th International Congress on Neuroethol, Bonn, Germany
Ratnam R, Nelson ME (2000) Nonrenewal statistics of electrosensory afferent spike trains: implications for the detection of weak sensory signals. J Neurosci 20(17):6672–6683
Rodieck RW (1967) Maintained activity of cat retinal ganglion cells. J Neurophysiol 30(5):1043–1071
Schneidman E, Freedman B, Segev I (1998) Ion channel stochasticity may be critical in determining the reliability and precision of spike timing. Neural Comput 10(7):1679–1703
Schwalger T, Lindner B (2013) Patterns of interval correlations in neural oscillators with adaptation. Front Comput Neurosci 7:164
Schwalger T, Fisch K, Benda J, Lindner B (2010) How noisy adaptation of neurons shapes interspike interval histograms and correlations. PLoS Comput Biol 6:2010
Ten Hoopen M (1964) On an impulse generating mechanism. Jpn J Physiol 14:607–614
Traub RD, Miles R (1991) Neuronal networks of the Hippocampus, vol 777. Cambridge University Press, Cambridge
Urdapilleta E (2011) Onset of negative interspike interval correlations in adapting neurons. Phys Rev E 84(4):041904
van der Heyden MJ, Diks CGC, Hoekstra BPT, DeGoede J (1998) Testing the order of discrete Markov chains using surrogate data. Physica D 117:299–313
Van Rossum MCW, O’Brien BJ, Smith RG (2003) Effects of noise on the spike timing precision of retinal ganglion cells. J Neurophysiol 89(5):2406–2419
White JA, Klink R, Alonso A, Kay AR (1998) Noise from voltage-gated ion channels may influence neuronal dynamics in the entorhinal cortex. J Neurophysiol 80(1):262–269
White JA, Rubinstein JT, Kay AR (2000) Channel noise in neurons. Trends Neurosci 23(3):131–137
Xu Z, Payne JR, Nelson ME (1996) Logarithmic time course of sensory adaptation in electrosensory afferent nerve fibers in a weakly electric fish. J Neurophysiol 76(3):2020–2032
Acknowledgements
The authors were partially supported by National Science Foundation grants EFRI-BSBA-0938007 and IGERT 0903622 (DLJ, ECJ, RR), and subsequently a University Research Board grant (URBSASI20A5) from Ahmedabad University (RR). Additional funds and support were provided by the College of Engineering and Coordinated Science Laboratory (UIUC), the Advanced Digital Sciences Center (Illinois at Singapore), and Ahmedabad University. Electric fish data were collected in the laboratory of Mark E. Nelson, UIUC, through a National Institutes of Health grant R01MH49242 and National Science Foundation grant IBN-0078206. We are grateful to the anonymous reviewers and the editor Dr. Jan Benda for providing constructive comments and helping to strengthen our manuscript significantly.
Communicated by Benjamin Lindner.
Appendix
1.1 Deterministic and stochastic adaptive threshold model
In the deterministic adaptive threshold model (Fig. 3A), if \(t_{i-1}\) and \(t_{i}\), \(i \ge 1\), are successive spike-times, then the time evolution of the adaptive threshold \(r\left( t\right) \) is given by
At \(t = t_{i-1}^+\), \(r\left( t_{i-1}^+\right) = v - \gamma + A\), and at \(t = t_{i}\), \(r\left( t_{i}\right) = v - \gamma \). So,
Let \(m = \left( v -\gamma + A\right) /\tau > 0 \), so that the slope of the adapting threshold is \(-m\), then
Noting that \(r\left( t_{i}\right) = v - \gamma \),
which is the deterministic firing threshold for a constant, DC-level signal.
1.1.1 Dynamic threshold with random firing threshold
Throughout this work, we will use the term covariance to refer to the mean subtracted correlation. If the two random variables in question are identical, then we will simply refer to covariance as variance, and correlation as autocorrelation. If the correlation is normalized by the variance, then we will refer to it as the correlation coefficient.
We will now provide a stochastic extension to the model by making the spike threshold (\(\gamma \)) noisy while leaving all else unchanged (Fig. 4). Let \(\gamma \) be a discrete wide-sense stationary process with mean \({{\,\mathrm{\mathbf {E}}\,}}[\gamma ]\), discrete autocorrelation function \({{\,\mathrm{\mathbf {R}}\,}}_k\) and power \({{\,\mathrm{\mathbf {E}}\,}}\left[ \gamma ^2\right] = {{\,\mathrm{\mathbf {R}}\,}}_0\). The spike threshold with additive noise assumes the value \(\gamma _i\), \(i \ge 1\) immediately after the \((i-1)\)th spike and remains constant in the time interval \(t_{i-1} < t \le t_{i}\) (Fig. 4A) (Chacron et al. 2004a; Gestri et al. 1980). Thus, the ith spike is emitted when the error satisfies the condition
Subsequently the adaptive threshold jumps to a higher value specified by \(v - \gamma _i + A\), and the noisy spike threshold assumes a new value \(\gamma _{i+1}\). From Fig. 4A, proceeding as before
The mean ISI is therefore \({{\,\mathrm{\mathbf {E}}\,}}\left[ t_{i}-t_{i-1}\right] = A/m\) as in the deterministic case given by Eq. (7). Thus, the correlation function for the ISI sequence is
with variance
More generally, for the sum of k ISIs
with variance
From Eqs. (36) and (38), it can be seen that there are two terms in each of these ISI correlation functions. The first is due to the input signal (which gives a mean ISI of A/m), and the second is due to the discrete-event noise process \(\gamma \). From the assumption of wide-sense stationarity, the autocorrelation function \({{\,\mathrm{\mathbf {E}}\,}}\left[ \gamma _i\, \gamma _j\right] \) can be written as \({{\,\mathrm{\mathbf {R}}\,}}\left( t_i - t_j\right) \). The discrete nature of the correlation function \({{\,\mathrm{\mathbf {R}}\,}}\) is made clear in the following way. Denote the mean of the kth-order interval by \(T_k = {{\,\mathrm{\mathbf {E}}\,}}[t_{i+k} - t_i]\), then the mean ISI is \(T_1\) \((=A/m)\), and further \(T_k = kT_1\). The random variable \(\gamma \) is generated once every ISI and thus, the discrete autocorrelation function \({{\,\mathrm{\mathbf {R}}\,}}\) takes values at successive multiples of the mean ISI, i.e., \({{\,\mathrm{\mathbf {R}}\,}}\left( T_1\right) \), \({{\,\mathrm{\mathbf {R}}\,}}\left( T_2\right) \), etc., and will be denoted by \({{\,\mathrm{\mathbf {R}}\,}}_1\), \({{\,\mathrm{\mathbf {R}}\,}}_2\), etc., respectively. As noted before, \({{\,\mathrm{\mathbf {R}}\,}}_0\) is noise power. That is, we can write
and therefore
The covariance of two ISIs \(\left( t_i-t_{i-1}\right) \) and \(\left( t_{i+k}-t_{i+k-1}\right) \) separated by lag \(k \ge 1\) is
This yields
The definition of serial correlation coefficient at lag k is (Cox and Lewis 1966),
For a wide-sense stationary process, the covariances are constant, and the subscript i can be dropped. Introducing Eqs. (41) and (44) into Eq. (45) yields
These equations are reproduced in Results as Eqs. (12) and (13), respectively. The serial-correlation coefficients (SCCs) given by Eqs. (46) and (47) are independent of the slope m of the reconstruction filter (the decay rate of the adaptive threshold) and the reconstruction filter gain A. Thus, the observed correlation structure of the spike-train is determined solely by the noise statistics of the firing threshold \(\gamma \).
1.1.2 Dynamic threshold with a random time-constant
We begin with Eqs. (9), (25), and (26), which transform a random firing threshold into a random time-constant \(\tau _i\) that is held constant between spike-times \([t_{i-1}, t_{i})\) with mean \(\tau = {{\,\mathrm{\mathbf {E}}\,}}\left[ \tau _i\right] \) (Fig. 10). It is immediately apparent that the covariance of the sequence \(\tau _i\) (sampled at the ISIs) is the same as the covariance of the ISI sequence up to a scale-factor, and therefore the serial correlations of \(\tau \) (\(\rho _{\tau ,\,k}\), \(k \ge 0\)) are the same as the serial correlations of the ISIs. We can establish this as follows. From Eq. (26)
and
From Eqs. (45), (48) and (49), we can determine the serial correlation coefficients of the random time-constants to be
These equations are reproduced in Results as Eqs. (27) and (28), respectively. The right side of the above equations, Eqs. (50) and (51), are the same as Eqs. (46) and (47), which are the expressions for the SCCs of a spike train generated with a noisy adaptive threshold. Thus, the random filter time-constants have the same serial correlation coefficients as the interspike intervals, \(\rho _{\tau ,\,k} = \rho _k\). This is a “pass through” effect where the correlations observed in the time-constant are directly reflected in the ISI correlations.
In summary, a noisy threshold or a noisy dynamic threshold time-constant can generate spike trains with the same ISIs and SCCs, and either method can be used.
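As a sanity check on the pass-through argument, the following sketch confirms that correlation coefficients are invariant under an affine map, so a sequence \(\tau _i\) and any ISI sequence that is an affine function of it share their SCCs. The AR(1) sequence and the scale/offset below are placeholders standing in for the model's Eq. (26), not the model's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder AR(1) sequence standing in for the random time-constants tau_i.
phi = -0.6
tau = np.empty(5000)
tau[0] = 0.0
for i in range(1, len(tau)):
    tau[i] = phi * tau[i - 1] + rng.standard_normal()

# An arbitrary affine map tau -> ISI (scale and offset are illustrative).
isi = 2.5 * tau + 10.0

def scc(x, k):
    """Lag-k serial correlation coefficient of sequence x."""
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

# Correlation coefficients are invariant under affine maps, so SCCs match.
for k in (1, 2, 3):
    assert abs(scc(tau, k) - scc(isi, k)) < 1e-12
```

Because centering removes the offset and the scale factor cancels between numerator and denominator, the agreement is exact up to floating-point rounding.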
1.2 Sum of serial correlation coefficients
We now determine the limiting sum of the SCCs over all lags. Let us first transform \(\rho _k\) by defining
Then, from Eq. (47), \(z_k\) is a moving-average process such that
from which it is easy to show that
where N indexes the Nth-order interval. We assume that the process \(\gamma \) is aperiodic and that its autocorrelation function \({{\,\mathrm{\mathbf {R}}\,}}_N \rightarrow 0\) as \(N \rightarrow \infty \), so that \(\left( {{\,\mathrm{\mathbf {R}}\,}}_{N+1}-{{\,\mathrm{\mathbf {R}}\,}}_N\right) \rightarrow 0\). That is, the process decorrelates over long time-scales. Thus,
By summing Eq. (52) over all k, and inserting the result from Eq. (55), the limiting sum of the ISI serial correlation coefficients for the spike-train is
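The limiting sum can be checked numerically. Assuming, as the main text reports, SCCs in geometric progression \(\rho _k = \rho _1 \varphi ^{k-1}\) with \(\varphi \) the AR(1) parameter, the sum is \(\rho _1/(1-\varphi )\); the first term \(\rho _1 = -(1-\varphi )/2\) used below is the illustrative value that enforces the \(-1/2\) limit, not a formula quoted from the derivation:

```python
# Numerical check: a geometric progression of SCCs, rho_k = rho_1 * phi**(k-1),
# sums to rho_1 / (1 - phi). With rho_1 = -(1 - phi)/2 (an illustrative choice
# enforcing the limit, not a quoted result), the limiting sum is -1/2 for any
# |phi| < 1. Negative phi gives Type II (alternating-sign) SCCs; positive phi
# gives Type I (single-sign, decaying) SCCs.
for phi in (-0.8, -0.3, 0.3, 0.8):
    rho1 = -(1.0 - phi) / 2.0
    partial = sum(rho1 * phi ** (k - 1) for k in range(1, 200))
    assert abs(partial - (-0.5)) < 1e-9
```

Note that the observed limiting sum reported in the paper, \(-0.475 \pm 0.04\), sits slightly above this theoretical bound of \(-0.5\).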
Sidhu, R.S., Johnson, E.C., Jones, D.L. et al. A dynamic spike threshold with correlated noise predicts observed patterns of negative interval correlations in neuronal spike trains. Biol Cybern 116, 611–633 (2022). https://doi.org/10.1007/s00422-022-00946-5