Abstract
There is a high diversity of neuronal types in the mammalian neocortex. To facilitate construction of system models with multiple cell types, we generate a database of point models associated with the Allen Cell Types Database. We construct a set of generalized leaky integrate-and-fire (GLIF) models of increasing complexity to reproduce the spiking behaviors of 645 recorded neurons from 16 transgenic lines. The more complex models have an increased capacity to predict spiking behavior of hold-out stimuli. We use unsupervised methods to classify cell types, and find that high-level GLIF model parameters are able to differentiate transgenic lines comparably to electrophysiological features. The more complex model parameters also have an increased ability to differentiate between transgenic lines. Thus, creating simple models is an effective dimensionality reduction technique that enables the differentiation of cell types from electrophysiological responses without the need for a priori-defined features. This database will provide a set of simplified models of multiple cell types for the community to use in network models.
Introduction
The problem of understanding the complexity of the brain has been central to neuroscience. The classification of neurons into cell types is a conceptual simplification intended to reduce this complexity. To this end, a large-scale effort at the Allen Institute for Brain Science has focused on characterizing the diversity of cell types in the primary visual cortex of the adult mouse using electrophysiology, morphological reconstructions, connectivity, and modeling in one standardized effort. This has resulted in the Allen Cell Types Database1 that is publicly and freely available at http://celltypes.brain-map.org. It includes both morphological and electrophysiological data of genetically identified neurons mapped to the common coordinate framework2. Morphologically and biophysically detailed as well as simple generalized leaky integrate-and-fire point neuron models have been generated1 to reproduce cellular data produced under highly standardized conditions.
Creating simplified models is a way to reduce the complexity of the brain to its most fundamental mechanisms. In addition to the benefits of clarifying mechanisms for single-neuron behavior, single-neuron models can be used in larger network models that attempt to explain network computation. Thus, many models of a wide range of complexity have been developed to describe and recreate various aspects of neuronal behavior3. For an in-depth characterization of the diversity of neuron models, see the review4, and for their capacity to reproduce spike times see ref. 5.
At the high end of the complexity spectrum are the morphologically and biophysically realistic Hodgkin–Huxley-like models6,7,8. Their strength lies in their capacity to map between multiple observables: morphology, electrophysiology, intracellular calcium concentration, and levels of expression and patterns of distributions of ionic currents. Although adding complexity to a model may increase the ability of that model to recreate certain behavior, finding the right parameters for complex models becomes a challenge9. Furthermore, the computational power needed to simulate sophisticated neural models can be quite large10. Therefore, ideally one would use a computationally minimal model adequate to recreate and to understand the desired behavior11. One simplification that significantly reduces model complexity is to represent the entire dendritic tree, soma, and axonal hillock by a single compartment, while maintaining the dynamics of the individual conductances3. This approximation is especially warranted when characterizing neurons via somatic current injection and voltage recording as is done in the Allen Cell Types Database.
Here we report on the point neuron modeling portion of the Allen Cell Types Database1. In this study, we aimed to identify simple models that could both effectively reduce the biological space to a set of useful parameters for cell type classification and recreate spiking behavior for a diverse set of neurons for use in network models. In the adult cortex, the majority of communication between neurons is via chemical synapses from axons onto dendritic or somatic membrane (with a fraction of inhibitory neurons coupled by gap junctions as notable exceptions). The response of these non-NMDA synapses is generally dependent only on the action potentials generated by the presynaptic cell. Thus, we focus on reproducing the temporal properties of spike trains using computationally compact point neuron models. This spike-train focus allows us to generate models which are much simpler than biophysically detailed models but still capture a substantial amount of their complexity.
An extensive amount of work has gone into constructing simple models that can accurately reproduce the spiking behavior of neurons5. Yet many large network modeling papers use traditional leaky integrate and fire models (for example, ref. 12). We wanted to understand how much may be gained (or lost) in adding complexity to such models. Thus, we characterized how adding phenomenological complexity to a model influences its ability to reproduce neuronal spike times and classify cells.
We used a family of models we refer to as generalized leaky integrate and fire (GLIF) models. This family of models starts with the classic leaky integrate and fire model13 and then incorporates additional phenomenological mechanisms similar to other studies. These mechanisms are fit directly from the electrophysiological data and loosely represent slow activation/inactivation of voltage-gated ion channels and long-term effects of ion channel currents. Although there are many simplified and computationally compact models14 available in the literature, the GLIF framework has a combination of advantages ideal for this study. First, GLIFs have been shown to be able to recreate a wide variety of biologically realistic behavior15. Second, the dynamical equations of the GLIF model are linear, and various versions have been previously fit to biological data15,16,17,18,19,20. In addition, phenomenological mechanisms loosely representing real biological mechanisms can be added to the LIF model that can easily be interpreted and potentially could be mapped to biological mechanisms. Other simplified non-linear models such as those of Izhikevich21,22 can be difficult to fit to data23 and are less phenomenologically interpretable. Furthermore, Mensi et al.19 have shown the potential of GLIF models to classify cell types by identifying three cortical neuronal types using a linear classifier.
Here we expand on previous work, and apply GLIF-fitting tools to a large-scale database of neuronal responses and demonstrate the striking ability of GLIF models to both reproduce spiking behavior and classify a wide variety of transgenic lines. We show: (1) The traditional LIF model (GLIF1) was able to reproduce 70% of the spike times within a resolution of 10 ms on an in vivo-like stimulus for a large set of biological neurons. Including additional mechanisms increased the ability of the models to recreate spike times by almost 10%. (2) Inhibitory neurons can be better fit than excitatory neurons, and we present evidence that inhibitory neurons are more stereotypical than excitatory neurons. (3) Inhibitory and excitatory models require different mechanisms to achieve the spike trains of neurons. (4) Increasing model complexity by adding mechanisms fit from the voltage waveform does not necessarily lead to increased performance in spiking behavior. Furthermore, mechanisms that do not increase performance independently can provide additional benefit when combined with other mechanisms. (5) Parameters obtained from fitting neurons with GLIF models are useful in classifying cell types: higher level GLIF parameters are more effective at differentiating cell types associated with transgenic lines than sub-threshold electrophysiological features. (6) GLIF parameters can be combined with spike-shape features for cell type classification.
We provide a large-scale database of point neuron models for 16 transgenic lines. We characterize the parameters of the GLIF models associated with the different transgenic lines and illustrate the neurons that best recreate the actual spike trains for each of the transgenic lines.
Results
Model description
In GLIF models, the mechanisms are separated by time scale: GLIF models aim to represent the slow linear sub-threshold behavior of a neuron and recreate the spike times, not the shape of the action potential caused by fast, non-linear ion channels. Thus, none of the fast, non-linear processes associated with the action potential itself are included directly in the dynamics. However, some attributes of the spike, such as spike width and voltage after a spike relative to voltage before a spike, are accounted for in the reset rules which map the state before the spike to the state thereafter (Supplementary Methods “Parameter fitting and distributions” and Supplementary Fig. 1). Model equations can be found in the “Methods” section of the main text and the Supplementary Methods. Table 1 summarizes the model parameters and variables.
The family of GLIF models is schematized in Fig. 1a. Examples of model behavior are shown in Fig. 1b. A standard LIF model was our starting point, progressing to more generalized leaky integrate-and-fire models. In the standard LIF model (GLIF1 here), current injected into the cell causes the voltage to rise in a linear fashion. When the voltage reaches a fixed threshold (referred to as Θ∞ here), the model spikes and the voltage is reset to the resting potential of the neuron.
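As a minimal sketch, the GLIF1 dynamics described above can be simulated with forward Euler integration. The parameter values and function names below are illustrative stand-ins, not the fitted values from the database:

```python
import numpy as np

def simulate_glif1(i_ext, dt, C, G, E_L, theta_inf, t_ref):
    """Forward-Euler simulation of a leaky integrate-and-fire (GLIF1) neuron.

    i_ext: injected current samples (A); dt: time step (s); C: capacitance (F);
    G: membrane conductance (S); E_L: resting potential (V); theta_inf: fixed
    spike threshold (V); t_ref: refractory period (s). Returns spike times (s).
    """
    v = E_L
    refractory_until = -np.inf
    spike_times = []
    for n, i in enumerate(i_ext):
        t = n * dt
        if t < refractory_until:
            continue  # hold the model during the refractory period
        # Membrane equation: dV/dt = (I - G * (V - E_L)) / C
        v += dt * (i - G * (v - E_L)) / C
        if v >= theta_inf:
            spike_times.append(t)
            v = E_L                        # GLIF1 reset rule: voltage to rest
            refractory_until = t + t_ref
    return np.array(spike_times)
```

With a suprathreshold step current (e.g. 0.3 nA into a cell with a 10 ms membrane time constant), this sketch produces regular tonic spiking, the only firing pattern a GLIF1 model can express.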
The GLIF2 model advances the GLIF1 model by incorporating more realistic reset rules (R) for voltage and threshold. The rapid changes of the action potentials are followed by slower dynamics which affect a neuron’s state. The voltage after a spike does not reset to rest and the threshold does not remain at a fixed value. This GLIF2 model continues to assume that the spikes are sufficiently similar such that a mapping between the voltage and threshold state before and after a spike can be found. The specific linear relationship of the voltage after the spike as a function of the voltage before the spike is found directly from the electrophysiological data (Supplementary Fig. 1). This relationship also defines the width of a spike or the “refractory period” which is implemented as a time in which the model cannot produce another spike. In addition to the refractory period, it is often more difficult to cause a neuron to spike again after a first spike due to mechanisms such as the slow inactivation of voltage-dependent currents which activate during the spike. This difficulty in causing another spike is modeled as a rise in threshold that decays after the spike and is again extracted directly from the data (Supplementary Fig. 3).
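The two GLIF2 reset rules can be sketched as follows. The linear pre-/post-spike voltage mapping is an ordinary least-squares fit, and the spike-induced threshold elevation decays exponentially back to baseline; all variable names and the test values are illustrative, with the exact fitting procedures given in the Supplementary Methods:

```python
import numpy as np

def fit_voltage_reset(v_before, v_after):
    """Fit the linear post-spike voltage reset rule V_after = a * V_before + b
    by ordinary least squares (cf. Supplementary Fig. 1). Inputs are arrays of
    membrane voltage just before and just after each recorded spike."""
    a, b = np.polyfit(v_before, v_after, 1)
    return a, b

def threshold_after_spike(theta_inf, delta_theta, b_s, t_since_spike):
    """Spike-induced threshold elevation that decays exponentially back to the
    baseline threshold theta_inf (cf. Supplementary Fig. 3). delta_theta is the
    jump at the spike and b_s the decay rate (1/s)."""
    return theta_inf + delta_theta * np.exp(-b_s * t_since_spike)
```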
Although GLIF models do not aim to recreate the shape of an action potential caused by fast non-linear ion channel activation, the GLIF3 model does incorporate the longer term effects of ion channels. Here we assume that ion channel currents have a stereotyped activation following a spike, and we bundle all slow ion-channel effects into a set of two after-spike currents (ASC) with different time scales. These currents are again fit directly from the data (see “Parameter fitting and distributions” in the Supplementary Methods).
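The two after-spike currents can be sketched as exponentially decaying currents that receive a fixed increment at each spike. The amplitudes and decay rates below are placeholders; in the database they are fit per cell:

```python
import numpy as np

def after_spike_currents(t, spike_times, amplitudes, decay_rates):
    """Summed after-spike currents (ASC): each spike injects a fixed increment
    into each current, which then decays exponentially with its own rate k_j,
    i.e. I(t) = sum over spikes and j of A_j * exp(-k_j * (t - t_spike))."""
    total = np.zeros_like(t, dtype=float)
    for ts in spike_times:
        elapsed = t - ts
        mask = elapsed >= 0          # a spike only affects later times
        for a_j, k_j in zip(amplitudes, decay_rates):
            total[mask] += a_j * np.exp(-k_j * elapsed[mask])
    return total
```

A hyperpolarizing fast component paired with a depolarizing slow component, as in the placeholder values in the test, is one combination this formulation can express.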
The GLIF4 model incorporates both reset rules of GLIF2 and the after-spike currents of GLIF3.
Finally, slow depolarization can lead to partial inactivation of the voltage-dependent sodium current which generates a spike. In GLIF5 we incorporate this mechanism into an adapting threshold (AT) which is dependent on the membrane potential (see “Parameter descriptions” in the Supplementary Methods).
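One common formulation of such a voltage-dependent threshold component (the exact equations used here are in the Methods and Supplementary Methods; this form and the parameter names are illustrative) is a linear ODE driven by the deviation of the membrane potential from rest:

```python
def step_voltage_threshold(theta_v, v, dt, a_v, b_v, e_l):
    """One Euler step of a voltage-dependent threshold component:
    d(theta_v)/dt = a_v * (V - E_L) - b_v * theta_v.
    Sustained depolarization above rest raises the effective threshold,
    loosely mimicking slow sodium-channel inactivation."""
    return theta_v + dt * (a_v * (v - e_l) - b_v * theta_v)
```

Held at a fixed depolarized voltage, the component relaxes to the steady state a_v * (V − E_L) / b_v.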
Note that the number of variables increases from one (V(t)) for GLIF1 to two (V(t), \(\Theta _s(t)\)) for GLIF2, etc., up to five for GLIF5 (see Table 2).
Data
Intracellular electrophysiological recordings were carried out via a highly standardized process24. The data can be accessed in the Allen Cell Types Database1. The data considered here consists of in vitro electrophysiology data collected from 16 different labeled Cre-positive transgenic lines selective for different types of neurons. The transgenic line panel of Fig. 2 summarizes the properties of the different transgenic lines. Colors and shorthand names represent all transgenic line data throughout this manuscript. In general, shades of red represent inhibitory transgenic lines and shades of blue represent excitatory lines. Excitatory neurons from each layer are identified using layer-selective lines. For inhibitory neurons, cells are targeted across lamina using lines selective for known neuronal subtypes25. Cells from each of the three major inhibitory subtypes (Pvalb (primarily basket cells), Sst (primarily Martinotti cells), and Htr3a (diverse morphologies)) are represented. Also included are transgenic lines that enrich for subsets of inhibitory types: the Vip, Ndnf, and Chat Cre lines (subtypes of Htr3a), Chrna2 (subtype of Sst), and Nkx2.1 (subtype of Pvalb). All cell types recorded are from young adult (P45 to P70) C57BL/6J mouse primary visual cortex.
Model performance
After model fitting and optimization (see Supplementary Methods for details; see Fig. 3 for distributions of the fit parameters) on our set of training stimuli (Fig. 2; see the “Stimulus” section of the Methods for details), we test model performance on a “hold out” in vivo-like noise stimulus not used for training (to ensure our models were not overfit). Using hold-out data to evaluate model performance is the “gold standard” for model selection. In the “Akaike Information Criterion” section of the Supplementary Methods and Supplementary Fig. 11, we also provide an Akaike Information Criterion (AIC) measure (which follows the same trends) to characterize the trade-off between complexity and model performance of GLIF models on training data. We quantified performance by how much of the temporal variance in the spike times of the data can be explained by the model at a 10 ms temporal scale (see Fig. 4 and “Evaluation of model spike times” in the Supplementary Methods for details). Medians of explained variance of all data can be viewed in Fig. 5 with corresponding values in Table 3. We begin our analysis by testing if there are overall differences between the GLIF model levels. A Friedman test reports that there are overall differences with a p-value of 1.83e−45. Thus, we continue with Wilcoxon signed-rank tests where p-values are corrected for multiple comparisons using the Benjamini–Hochberg procedure. All p-values can be viewed in Supplementary Figs. 8, 9, and 10.
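A simplified stand-in for the explained-variance measure (the full definition, which accounts for trial-to-trial variability across repeats, is in the Supplementary Methods) smooths the binned spike trains with a 10 ms Gaussian and compares the resulting rate traces:

```python
import numpy as np

def smooth_spike_train(spike_times, t_stop, dt, sigma):
    """Bin spike times at resolution dt and convolve with a Gaussian kernel
    of width sigma (10 ms here) to obtain a smoothed rate trace."""
    n = int(t_stop / dt)
    binned = np.zeros(n)
    idx = np.round(np.asarray(spike_times) / dt).astype(int)
    np.add.at(binned, idx[idx < n], 1)
    k = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    kernel = np.exp(-k**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(binned, kernel, mode="same")

def explained_variance(data_rate, model_rate):
    """Fraction of variance in the smoothed data trace explained by the
    model trace: 1 - Var(data - model) / Var(data)."""
    return 1.0 - np.var(data_rate - model_rate) / np.var(data_rate)
```

Identical spike trains score 1; small timing errors relative to the 10 ms smoothing scale cost little, while large ones drive the score toward zero.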
The progression of explained variance through the different model levels demonstrates that different mechanisms are important for achieving spiking behavior of inhibitory and excitatory neurons. Overall GLIF1 (equivalent to a leaky integrate and fire model) had a surprisingly high explained variance of 70% when all neurons were considered. Inhibitory neuron models perform better than excitatory models: 75% versus 68%. The introduction of reset rules in GLIF2 decreases the performance of the models for both excitatory (65%) and inhibitory (74%) neurons. The introduction of after-spike currents alone in GLIF3 helps the inhibitory neurons (81%) but not the excitatory (68%) neurons. To confirm that improvement of fitting between GLIF1 and GLIF3 follows different trends for inhibitory and excitatory neurons, we subtracted the pairwise explained variance ratio of GLIF1 from GLIF3 models. We found that the distributions of these differences for the excitatory and inhibitory neurons are significantly different (p = 1.35e−7, Mann–Whitney U; Benjamini–Hochberg correction, family size = 10), suggesting that after-spike currents have different influence in inhibitory and excitatory neurons. Interestingly, the inclusion of reset rules with after-spike currents in GLIF4 improves the performance of both excitatory (72%) and inhibitory neurons (83%) even though reset rules alone hurt performance. Please see the “Discussion” section for a discourse on the influence of model complexity on performance. Here again, the distribution of differences between GLIF3 and GLIF4 are statistically different for inhibitory and excitatory neurons (p = 2.89e−4, Mann–Whitney U; Benjamini–Hochberg correction, family size = 10). Finally, the addition of the adaptive threshold along with the after-spike currents and reset rules provides an improvement for excitatory cells (75%) but does little for inhibitory (84%) cells.
The distributions of differences for excitatory and inhibitory neurons also differ for GLIF3 versus GLIF5 (p = 1.98e−5) and GLIF4 versus GLIF5 (p = 1.40e−2) (Mann–Whitney U; Benjamini–Hochberg correction, family size = 10).
We found it interesting that the addition of reset rules measured directly from the data actually hindered the ability of the GLIF2 model to recreate spike times of the neural data. We explored whether there was a relationship between the ability of a model to reproduce the sub-threshold behavior of the voltage waveform and its ability to reproduce the neural spike times. When taking into account all models, overall there is a correlation between the ability of a model to reproduce the sub-threshold voltage and its ability to reproduce spike times (please see the black regression line in Supplementary Fig. 15). However, when considering the median performance values describing the ability of the different model levels to reproduce sub-threshold voltage and spike times (Supplementary Fig. 15), although GLIF2 better reproduces sub-threshold voltage than GLIF1 and GLIF3, it does worse at reproducing the spiking behavior. Thus, the ability of a model to reproduce sub-threshold voltage does not necessarily translate into better spike time performance.
There are several potential methodological and biological explanations for the increased ability to fit inhibitory neurons over excitatory neurons. It is possible that inhibitory neurons are better optimized because they generally fire more and thus have more data points to optimize (see the Supplementary Methods for details). Alternatively, perhaps they are easier to fit because they are more stereotypical than excitatory neurons. As described in the “Parameter fitting and distributions” section of the Supplementary Methods and Supplementary Fig. 1, we define spike cut length as the post spike time at which a linear regression based on the voltage before the spike minimizes the squared residuals of the voltage after the spike. We use the standard error from this spike cut length fit as a description of spike reproducibility. Inhibitory neurons have shorter spike cut lengths than excitatory neurons (median 2.7 versus 4.6 ms, respectively; p = 1.10e−47, Mann–Whitney U), spike more frequently (a median of 84 versus 38 spikes per noise 1 stimulus, respectively; p = 6.15e−35, Mann–Whitney U), and have more reproducible spikes (median standard error 2.6e−2 and 5.7e−2, respectively; p = 4.97e−39, Mann–Whitney U). Supplementary Fig. 4 shows there are correlations between the explained variance ratio and each of these three variables. To investigate the importance of these factors in achieving a good model fit, we perform a multiple linear regression (Python, statsmodels.regression.linear_model.OLS). Spike cut length and number of spikes are good indicators of how well a neuron will be fit (p-values are 2.80e−19 and 5.95e−21, respectively), whereas this measure of spike reproducibility is not statistically important (p-value: 0.276).
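The multiple regression above can be sketched without statsmodels as follows. The predictors and outcome here are synthetic stand-ins for the per-cell data (the outcome depends on the first two predictors only, mirroring the reported result); the point of interest is which coefficients carry independent weight:

```python
import numpy as np

def ols_t_stats(X, y):
    """Ordinary least squares with per-coefficient t statistics; a numpy-only
    stand-in for statsmodels.regression.linear_model.OLS."""
    X = np.column_stack([np.ones(len(y)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof                # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)       # coefficient covariance
    return beta, beta / np.sqrt(np.diag(cov))

# Synthetic data: explained variance depends on spike cut length and spike
# count but not on the reproducibility measure (all values illustrative).
rng = np.random.default_rng(0)
n = 200
cut_len = rng.normal(4.0, 1.0, n)          # ms
n_spikes = rng.normal(60.0, 20.0, n)       # spikes per sweep
reprod = rng.normal(0.04, 0.01, n)         # standard error of reset fit
ev = 0.9 - 0.03 * cut_len + 0.002 * n_spikes + rng.normal(0.0, 0.02, n)
beta, t = ols_t_stats(np.column_stack([cut_len, n_spikes, reprod]), ev)
```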
It is likely that individual transgenic lines will have different mechanisms most important for achieving their spiking behavior. We do not draw any conclusions here due to a large number of non-significant p-values between model levels for individual transgenic lines (most likely due to the lower number of neurons from individual transgenic lines in this study). Performance of individual transgenic lines can be viewed in Fig. 5 with corresponding values in Table 3 and Supplementary Figs. 9 and 10.
Example neurons
Throughout the manuscript data and models from two exemplary neurons are consistently shown as examples: an Htr3a inhibitory neuron (specimen ID 474637203) and a Ctgf pyramidal neuron (specimen ID 512322162).
To provide examples of models that do well reproducing the spike times of the different transgenic cell lines, for each line we select the model which has the highest explained variance from neurons that have all five model levels. Example neurons are labeled with stars in all figures and can be found on the website via the following specimen IDs: Nr5a1: 469704261 GLIF5, Cux2: 490376252 GLIF4, Ctgf: 512322162 GLIF5, Vip: 562535995 GLIF5, Scnn1a-Tg2: 490205998 GLIF5, Ntsr1: 490263438 GLIF4, Sst-IRES: 313862134 GLIF4, Rbp4: 488380827 GLIF5, Scnn1a-Tg3-Cre: 323834998 GLIF3, Chrna2: 580895033 GLIF2, Pvalb: 477490421 GLIF4, Ndnf: 569623233 GLIF5, Nkx2.1: 581058351 GLIF5, Htr3a: 474637203 GLIF5, Chat: 518750800 GLIF5, Rorb: 467003163 GLIF3. Although these neurons are the best performers, they may not be the most representative cells. Most of these models have at least one parameter which lies outside the 5 to 95 percentiles for the transgenic line. To facilitate the choice of other good representative models, neurons that contain all GLIF model levels and all model parameters within the 5 to 95 percentiles are available in Supplementary Table 1.
Clustering
After deriving parameters for each of the GLIF models, we assessed how well these parameters could classify neurons into putative types corresponding to transgenic lines. We used two different clustering algorithms to identify trends that were not method-specific. The first of these is an iterative binary splitting method, using standard hierarchical clustering methods (see the “Clustering” section of the Supplementary Methods for details) to separate cells into groups in an iterative manner. We show the results of the hierarchical clustering here in Figs. 6 and 7. We confirm the general trends from this analysis using a second clustering method, affinity propagation26, as shown in Supplementary Figs. 12 and 14.
To compare different partitions of the cells, we use the adjusted Rand index (ARI)27 and the adjusted variation of information (AVOI) metric28. The adjusted Rand index is a corrected-for-chance extension of the Rand Index, which measures the probability that any pair of elements will belong to the same cluster in different partitionings. A value of 1 indicates that the data clusterings are exactly the same, whereas a value of 0 indicates that the clusterings are no different from what would be expected by chance. The AVOI metric is an information-theoretic value measuring the ability of one partitioning to predict the other, as compared to shuffled-label data. A value of 0 indicates that the clusterings are no different from what would be expected by chance. The upper bound on the AVOI is dependent on the number of elements in the clustering. Here the upper bound is 6.47. We show the ARI and AVOI from the hierarchical clustering in the main text (Figs. 6 and 7). We show results from affinity propagation in the Supplementary Figs. 12 and 14. The variability in the performance of the binary splitting method via bootstrapping is shown in Supplementary Fig. 17. As mentioned in the “Stimulus” section of the Methods, GLIF2, GLIF4, and GLIF5 require a stimulus that is not applied to all neurons (for reasons independent of this study). To ensure a full data set for clustering, we use only parameters that are available for all 645 cells. Table 2 describes the parameters used in each clustering paradigm.
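The adjusted Rand index used throughout these comparisons can be computed from the contingency table of the two partitions. The sketch below is a numpy-only equivalent of sklearn's adjusted_rand_score:

```python
import numpy as np

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two partitions of the same cells:
    ARI = (Index - E[Index]) / (MaxIndex - E[Index]), where the index counts
    concordant pairs. 1 = identical partitions; ~0 = chance-level agreement."""
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    cont = np.zeros((ia.max() + 1, ib.max() + 1), dtype=int)
    np.add.at(cont, (ia, ib), 1)               # contingency table

    def comb2(x):
        return x * (x - 1) / 2.0               # pairs within a count

    sum_ij = comb2(cont).sum()
    sum_a = comb2(cont.sum(axis=1)).sum()
    sum_b = comb2(cont.sum(axis=0)).sum()
    expected = sum_a * sum_b / comb2(len(a))
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_ij - expected) / (max_index - expected)
```

Because the ARI depends only on the partition structure, relabeling the clusters (e.g. swapping cluster names) leaves the score at 1.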
First, we assess how well the putative clusters agree with known cell type information based on the transgenic lines from which the cells were derived. This is an imperfect validation as it is known that different transgenic lines may not perfectly correspond to different cell types, as measured by transcriptomics25. The ARI and AVOI values for Cre lines compared to transcriptomics found in a previous study25, are 0.30 and 2.69, respectively (AVOI upper bound was 6.86 in the cited study). Nonetheless, we would expect some degree of relationship, as broad classes of cells do tend to segregate by transgenic line. Second, to further validate our results, we compare the clustering obtained by GLIF parameters to the clustering obtained using features extracted directly from electrophysiological traces. Here we use two sets of features. In the first set, we cluster on 14 features that contain aspects of both sub-threshold and supra-threshold spike-shape features. As GLIF neurons do not recreate the shape of a spike, in the second set we start with the same set of features but eliminate all features that describe the shape of the action potential. See the “Electrophysiological features” section for a brief description of the features used here; a more detailed description can be found in ref. 24.
The progression of the ability of the GLIF model parameters to recapitulate segregation of cells by transgenic line is shown in Fig. 6. Figure 6 quantifies how clustering the different GLIF level parameters improves the distinction among different transgenic lines. Although GLIF1 and GLIF2 have an ARI of ~0.08 (AVOI ~0.8), GLIF3 ARI increases to ~0.12 (AVOI ~1.3) and GLIF4 improves even more, yielding a value of ~0.18 for the iterative binary clustering (AVOI ~1.3). Similarly, the ARI and AVOI values calculated during affinity propagation are higher for GLIF3 and GLIF4 than for GLIF1 and GLIF2 (Supplementary Fig. 12).
The transgenic line composition of each terminal cluster in GLIF4 (Fig. 6) suggests that clustering based on model parameters broadly segregates neurons into previously identified classes of cells. Neurons from transgenic lines labeling predominantly excitatory neurons cluster separately from those labeling mainly interneurons. In addition, sub-categories of neurons appear: these include parvalbumin-positive interneurons, layer 6a corticothalamic neurons (derived from the Ntsr1 transgenic line), and layer 6b neurons (derived from the Ctgf transgenic line).
When the same hierarchical clustering technique is applied directly to the electrophysiological feature data set that includes spike-shape parameters (bottom left of Fig. 6 and Supplementary Fig. 14), the correspondence between electrophysiological clusters and transgenic lines is similar to those of GLIF4 parameter-based clusters (the electrophysiological cluster scores higher than GLIF4 when using iterative binary clustering (Fig. 6) and lower when using affinity propagation (Supplementary Fig. 12)). However, when the features describing the spike shape of the neuron are removed, the ARI drops below GLIF4 to a value similar to GLIF3 (Fig. 6). Thus, the dimensionality reduction onto the space of model fit parameters allows for discriminability better than that obtained by the extraction of a subset of electrophysiological features without spike shape.
Similarly, when we compare how well the GLIF parameter clustering matches the full feature-based clustering (Fig. 7, bottom right), GLIF1 and GLIF2 parameter-based clustering results in partitions that are dissimilar from that obtained using the electrophysiological features. GLIF3 and GLIF4 parameter-based clustering results in partitions that are closer to that obtained with the full-feature clustering. The set of partitions obtained by the non-spike electrophysiological features are the most similar to the partitions obtained from the full set of electrophysiological features. This is not surprising as the non-spike features are a subset of the full set of features. Confusion matrices of the relationship between electrophysiological feature-based clustering and GLIF parameter-based clustering are shown in Supplementary Fig. 13.
We performed clustering on the GLIF parameters and the spike features to determine whether combining them would further help the differentiation of Cre lines (Fig. 7 and Supplementary Fig. 14). Including spike-shape features along with GLIF1 and GLIF2 parameters greatly increases their ability to differentiate Cre lines. For the more complex neuronal models, the improvement of including the spike features is modest, suggesting that the GLIF3 and GLIF4 model parameters encompass most of the information contained in this set of sub- and supra-threshold electrophysiological features.
Discussion
In the cerebral cortex there is a wide diversity of neuron types observed at transcriptomic levels25. The existence of different types may be driven by a need to develop particular cell type-specific connectivity or neuromodulation, to perform different cell-specific computations, or by developmental and evolutionary constraints. When constructing a system-level model, it is important to know how many types of neurons are needed.
We provide a sizable database of GLIF models, available in the Allen Cell Types Database at http://celltypes.brain-map.org/, that can facilitate the development of system models with multiple cell types. Here we show that GLIF models can simultaneously reproduce the spike times of biological neurons and reduce the complex mapping between the input current to the spike-train output with a small number of parameters. We explore how complexity affects the ability of models to both reproduce spike times and differentiate transgenic lines via unsupervised clustering. We find that the more complex models (GLIF3 or greater) are better than the less complex models at reproducing biological spike times and differentiating transgenic lines.
After optimizing the neuronal parameters, we were surprised at how well the traditional leaky integrate and fire neuron models reproduce spike times under naturalistic conditions, explaining a median value of 70.2% of the variance. To put this value in context, biophysically realistic models with passive dendrites in the Allen Cell Types Database1 achieve a median explained variance of 65.1% (n = 195) and biophysically realistic models with active dendrites achieve a median explained variance of 69.3% (n = 107). It should be noted that the biophysical models are not optimized to reproduce spike times but instead to recreate other aspects of electrophysiological responses8,29. As we show with GLIF2, fitting data to aspects of the voltage waveform does not mean that the ability of the models to reproduce spike times will improve.
Our result that a model’s ability to reproduce sub-threshold or spiking behavior does not increase monotonically with model complexity may seem counterintuitive. Furthermore, we demonstrate that the overall ability of a model level to reproduce sub-threshold behavior does not translate into the overall ability of a model level to reproduce spike times. When trying to understand how this is possible, it is important to note several things. Theoretically, in a situation where all parameters are simultaneously optimized, increasing complexity (via the addition of new mechanisms and variables to the previous model) should lead to a smaller error of the function being minimized. In this paradigm, one could include more parameters or state variables until the model starts to overfit the training data. Optimization against our performance metric of explained variance would no longer be convex, and hence, optimization would be more unintuitive and computationally intensive with no guarantee of convergence. We chose a different route, in which different observables and error functions are used to isolate the effects of individual mechanisms. For example, here we optimize the difference in voltage between the model and the data during the forced spike paradigm because this is a convex problem. What we desired was a “good” model that has mechanisms which relate to how the neuron is actually performing computations and thus recreates observed behavioral aspects such as sub-threshold voltage and spike times. In our study, the following procedures could lead to the observed non-monotonic behavior. (1) Individual mechanisms are not optimized to reproduce the spike times, but rather to fit different aspects of the sub-threshold membrane potential during different stimuli. (2) The biological effects phenomenologically described here by reset rules and the after-spike currents act together to create the voltage waveform of a neuron.
Here, we fit the after-spike currents and the reset rules independently on the voltage waveform of a neuron without subtracting out the effect of the other mechanism. Explicitly, when the reset rules in GLIF2 were calculated, after-spike currents (which loosely represent the summation of ion-channel effects in biological neurons) were certainly present in the data. Likewise, the after-spike currents were fit to the observed sub-threshold voltage, which includes the biological reset rules. (3) Reset rules and after-spike currents exert influence on different time scales. Reset rules have an instantaneous effect implemented right after a spike (i.e., far from the next spike), whereas after-spike currents have multiple time scales that extend through the sub-threshold portion of the voltage trace to the next spike. (4) We only optimize one parameter to the spike times of the data. More parameters could be optimized to potentially achieve better spike-time performance; however, adding more parameters greatly increases optimization time and is more prone to overfitting. (5) The metrics we used to fit and optimize the parameters (linear regression and the difference between model and data voltage at spike time in the forced-spike paradigm) were not our reporting metrics (explained variance, difference in sub-threshold voltage). The take-home message is that the ability to accurately reproduce spike times arises from the complicated interaction of many counteracting mechanisms over different time scales.
The progression of the explained variance for the different GLIF levels suggests that after-spike currents are very important for achieving the spiking behavior of inhibitory neurons, whereas a combination of after-spike currents and reset rules is necessary to improve the performance of excitatory neurons. In addition, incorporating the voltage-activated threshold improves the performance of excitatory neurons even more, while it does little for inhibitory models. At all levels, models were able to reproduce the spike times of inhibitory neurons better than those of excitatory neurons. Inhibitory neurons have a more stereotyped relationship between voltage before and after a spike, shorter spike cut lengths, and spike more frequently. The spike cut length and the number of spikes significantly predicted a model's capacity to reproduce neuronal spike times, whereas the error in fitting the pre/post-spike voltage did not.
Our work uses methods similar to those of ref. 20. Both studies incorporate generalizations of the basic LIF model, and both implement a fast fitting step followed by a slower optimization of a single variable. However, there are some differences: (1) here, more complex reset rules are used that map the state of the neuron before the spike to that after; (2) we include a membrane potential-dependent adaptation of the threshold; (3) we use exponential basis functions for the time dependencies of the after-spike currents, which allow a much sparser representation (essential for clustering) compared to the non-parameterized after-spike current trace composed of wavelets implemented in Pozzorini et al.20; and (4) we use a direct measurement of noise in the membrane potential of a neuron rather than a probabilistic threshold. Exact comparison of model performance across different publications is difficult because performance depends on the properties of the input currents, i.e., if there are large variations in the input current, the capacity to predict the spiking output is better. We chose to compute the performance metrics on stimuli similar to what would be observed in vivo: our stimuli have a coefficient of variation equal to 0.2, centered around baselines at 0.75, 1.0, and 1.25 times the current threshold (rheobase). The Pozzorini et al.20 stimulus has a variable coefficient of variation centered around a single baseline. Rather than comparing models across previous studies, the focus of this study is to apply generalizations of the leaky integrate-and-fire model to a large database of neuronal responses, characterize the models associated with a large number of transgenic lines, and demonstrate how well they describe spiking responses and dissociate between cell types.
In an attempt to characterize cell types associated with the input/output transform measured by electrophysiological experiments, clustering algorithms can be run on a set of electrophysiological features. However, it is not entirely clear which features are the most important to consider. An alternative is to use the entire spike train by synthesizing the input/output relationship into a model and then clustering on the model parameters. A seemingly intuitive choice for such clustering would be a biophysically detailed model, as its parameters map well to biological mechanisms. However, one problem with such models is that they do not provide unique solutions: repeated optimization with the same input can lead to solutions that are far apart in parameter space. Thus, another approach is to use simpler linear models, which have both (a) unique parameter solutions and (b) a demonstrated ability to reproduce spike times of biological data20. Indeed, previous studies have touched upon the potential for clustering using simplified models19.
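As an illustration of clustering on model parameters rather than predefined features, the sketch below runs a minimal two-cluster k-means (a simple stand-in for the affinity propagation and iterative binary clustering used in this study) on a synthetic matrix of hypothetical GLIF-like parameters; all values are invented for illustration.

```python
import numpy as np

def kmeans2(X, n_iter=50, seed=0):
    """Minimal two-cluster k-means, a stand-in for the affinity propagation
    and iterative binary clustering used in the study."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# Hypothetical (cells x parameters) matrix: two synthetic "types" differing
# in resistance-like and capacitance-like parameters (values invented).
A = rng.normal([200.0, 100.0], [15.0, 8.0], size=(40, 2))
B = rng.normal([60.0, 250.0], [10.0, 20.0], size=(40, 2))
X = np.vstack([A, B])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each parameter
labels = kmeans2(Xz)
```

Z-scoring each parameter before clustering keeps parameters with large physical units from dominating the distance metric.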
The unsupervised clusters of the GLIF parameters, or of the features extracted from the electrophysiological traces, do not perfectly match the transgenic lines. This is not surprising, given that the transgenic lines are known to comprise multiple molecularly defined cell types, and that it is not known which specific electrophysiological features are needed to classify cell types. Nonetheless, we expected, and did observe, clear relationships.
GLIF model parameters have a better capacity for differentiating transgenic lines than sub-threshold features designed for that purpose. The set of 14 electrophysiological features, which includes features dependent on spike shape, performs similarly to the GLIF4 parameters used here at differentiating transgenic lines (the full feature set performs slightly worse than GLIF4 under iterative binary clustering and slightly better under affinity propagation). When the spike-shape features are removed, the ability of the remaining features to classify transgenic lines drops below that of GLIF4. This suggests that aspects of spike shape are helpful for transgenic line classification30 when clustering electrophysiological features alone. When spike-shape features are included alongside GLIF3 and GLIF4 parameters in classification, any improvement in the ability to differentiate Cre lines is slight. This suggests that the GLIF3 and GLIF4 parameters carry much of the information present in this set of sub-threshold and supra-threshold electrophysiological features.
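Cluster-to-line agreement of the kind compared here can be quantified with the Rand index: the fraction of sample pairs on which two labelings agree (both same-cluster or both different-cluster). A minimal pair-counting implementation, quadratic in the number of cells but fine at this scale:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand index: the fraction of point pairs on which two labelings agree
    (both same-cluster, or both different-cluster)."""
    idx_pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in idx_pairs)
    return agree / len(idx_pairs)

# Toy comparison: three unsupervised clusters vs. hypothetical line labels.
clusters = [0, 0, 0, 1, 1, 2]
lines = ['Htr3a', 'Htr3a', 'Sst', 'Sst', 'Pvalb', 'Pvalb']
score = rand_index(clusters, lines)
```

Identical partitions score 1 regardless of label names; chance-level agreement can be corrected for with the adjusted Rand index, which is not shown here.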
We did not cluster on all available GLIF parameters as they were not available for all 645 cells. It is likely that incorporating the threshold adaptation parameters would further improve clustering. The GLIF model paradigm is sufficiently general that additional mechanisms could be added to the GLIF model family. For example, a mechanism that accounts for the sharp voltage rise at spike initiation23, as incorporated by the adaptive exponential integrate-and-fire model31, could further raise the clustering capabilities of GLIF parameters.
As with all abstractions, there are limitations to our models: the reproduced traces come from current injection into the soma of the neuron under stereotyped recording conditions. We kept the statistics of the test stimulus similar to in vivo patch-clamp currents, but in vivo, additional large sources of variability arise from stochastic transmission at synapses, complex dendritic integration, neuromodulation, etc. These mechanisms, which would be necessary for exact integration of single-neuron models into systems models, are beyond the scope of the current study.
For any study, the best model to use will depend on the question at hand. Currently, it is unknown what level of spike precision will be needed in network models. However, as the brain has evolved a diversity of cell types, it is likely cell type-specific models will be necessary to model brain computation. We hope that this study along with our corresponding publicly available database of models will be a useful resource for the community by providing both optimized models and an intuition for how much complexity aids in spike time performance and cell type classification.
Methods
Model definitions
We developed five different GLIF models, each of which successively incorporates the time evolution of one to five state variables. For brevity, here all model level equations are presented together. A separate and detailed description of the models along with their mathematical equations are available in the “Model definitions” section of the Supplementary Methods.
The set of five state variables \(X = \{V(t), \Theta_s(t), I_j(t)\ (j = 1,2), \Theta_V(t)\}\) represents the neuronal membrane potential, the spike-dependent component of the threshold, the two after-spike currents, and the membrane potential-dependent component of the threshold, respectively. Figure 1a illustrates the interaction between the state variables in the model equations. Table 1 contains a summary of the parameters and variables. These state variables are assumed to evolve in a linear manner between spikes:
where C represents the neuron's capacitance, R the membrane resistance, \(E_L\) the resting membrane potential, and \(I_e\) the external current; \(1/k_j\), \(1/b_s\), and \(1/b_v\) are the time constants of the after-spike currents and of the spike- and voltage-dependent components of the threshold, respectively, and \(a_v\) couples the membrane potential to the threshold.
If \(V(t) > \Theta_V(t) + \Theta_s(t) + \Theta_\infty\), a spike is generated. After a refractory period, \(\delta t\), the state variables are updated with a linear dependence on the state before the spike:
where \(t^+\) and \(t^-\) represent the times just after and just before a spike, respectively. \(f_v\) and \(\delta V\) are the slope and intercept of the linear relationship between the voltage before and after a spike (see Supplementary Fig. 1). \(\delta\Theta_s\) is the amplitude of the spike-induced threshold component after spiking. \(f_j\) is the fraction of the pre-spike current retained after a spike; here it is always set to 1. \(\delta I_j\) is the amplitude of the spike-induced currents.
The 1D model, GLIF1, is represented by Eq. 1 with a simple reset and \(I_j(t) = 0\) for all j. GLIF2 adds \(\Theta_s\) and the more complex reset rules (Eqs. 5 and 6). GLIF3 uses the membrane potential and the two spike-induced currents, \(I_j(t)\) in Eq. 1, with the reset rules defined by Eq. 7. GLIF4 uses the membrane potential, the spike-induced threshold component, and the after-spike currents, whereas GLIF5 uses all the variables described.
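To make the model definitions concrete, here is a forward-Euler sketch of a GLIF3-style model: membrane potential plus two after-spike currents, with linear reset rules applied at each spike. All parameter values and the sign conventions of the reset below are illustrative assumptions, not fits from the database.

```python
import numpy as np

def simulate_glif3(i_ext, dt, C=100e-12, R=200e6, E_L=-70e-3,
                   theta_inf=-45e-3, k=(1 / 30e-3, 1 / 300e-3),
                   delta_I=(50e-12, 5e-12), f_v=0.2, delta_V=5e-3,
                   t_ref=2e-3):
    """Forward-Euler sketch of a GLIF3-style model: membrane potential plus
    two after-spike currents with linear reset rules. Parameter values and
    reset sign conventions are illustrative, not database fits."""
    n = len(i_ext)
    V = np.full(n, E_L)
    I = np.zeros((2, n))                      # after-spike currents I_j
    k_arr, dI = np.asarray(k), np.asarray(delta_I)
    spikes, ref_until = [], -np.inf
    for t in range(1, n):
        if t * dt < ref_until:                # clamp during refractory period
            V[t], I[:, t] = V[t - 1], I[:, t - 1]
            continue
        # C dV/dt = I_e + sum_j I_j - (V - E_L)/R ;  dI_j/dt = -k_j * I_j
        V[t] = V[t - 1] + dt * (i_ext[t - 1] + I[:, t - 1].sum()
                                - (V[t - 1] - E_L) / R) / C
        I[:, t] = I[:, t - 1] * (1.0 - dt * k_arr)
        if V[t] > theta_inf:                  # spike: apply reset rules
            spikes.append(t * dt)
            V[t] = E_L + f_v * (V[t] - E_L) - delta_V
            I[:, t] += dI                     # add delta I_j after the spike
            ref_until = t * dt + t_ref
    return V, I, np.array(spikes)

dt = 1e-4
V, I_asc, spike_times = simulate_glif3(np.full(5000, 300e-12), dt)  # 300 pA
```

Setting both after-spike current amplitudes to zero and replacing the reset with a fixed value recovers GLIF1; the threshold components of GLIF2, GLIF4, and GLIF5 would add further state variables to the same update loop.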
Stimulus
We designed a set of four different types of stimuli for fitting and testing the parameters of our GLIF models (Fig. 2). This set of stimuli is part of a much larger electrophysiological stimulation protocol, which is the subject of other studies. Details of the stimuli described here and of the other stimuli in the Allen Cell Types Database can be found in ref. 24.
The first type of stimulus we used consisted of very short (3 ms) square pulses of multiple amplitudes to assess the threshold of a neuron. The minimal voltage at which the neuron fires a spike in response to a short current pulse is defined as the instantaneous threshold, Θ∞. This parameter is later optimized in the post hoc procedure (see the “Fitting and post hoc optimization of instantaneous threshold” section in the main text and the “Post hoc optimization” section in the Supplementary Methods).
The most substantial type of stimulus we used consists of noise stimuli. This noise was created to have a coefficient of variation of 0.2 (similar to in vivo patch-clamp recordings) at three different base amplitudes: one sub-threshold, one peri-threshold, and one supra-threshold, to explore different parameter regimes. Each amplitude is presented for 3 s with a 5 s break in between to allow the neuron to settle back to rest. Using stimuli with highly varied structure is important: not only does it force the model to explore the parameter space, it is also closer to what a neuron actually experiences in vivo. Solely using traditional simple stimuli, such as square pulses, would present only a small subset of possible histories (both spiking and sub-threshold) and bias the data set toward a non-physiological regime. Within this noise data we have two complete sets of noise, which we refer to as noise 1 and noise 2. The noise 1 and noise 2 stimuli have the same statistics but were created with different random seeds. We used the noise 1 data to fit the model and noise 2 as “hold-out” test data to ensure we have not overfit our model. Both the noise 1 and noise 2 stimuli are repeated at least twice to characterize how consistently the biological neuron fires at the same times in response to a repeated stimulus.
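A rough sketch of how such a noise epoch could be constructed: low-pass-filtered Gaussian noise rescaled to a 0.2 coefficient of variation around each baseline. The database stimulus is pink noise (see “Data availability”); the filtered-noise generator and the rheobase value below are simplifying assumptions.

```python
import numpy as np

def noise_epoch(baseline, cv=0.2, duration=3.0, dt=1e-4, tau=3e-3, seed=0):
    """One noise epoch: low-pass-filtered Gaussian noise rescaled to a
    target coefficient of variation around a baseline current. The database
    stimulus is pink noise; this filtered-noise generator is a stand-in."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    x = np.zeros(n)
    for i in range(1, n):           # first-order low-pass filter
        x[i] = x[i - 1] - dt / tau * x[i - 1] + np.sqrt(dt) * rng.normal()
    x = (x - x.mean()) / x.std()    # zero mean, unit variance
    return baseline * (1.0 + cv * x)

rheobase = 100e-12                  # hypothetical threshold current
stim = np.concatenate([noise_epoch(f * rheobase, seed=i)
                       for i, f in enumerate([0.75, 1.0, 1.25])])
```

By construction each epoch has mean equal to its baseline and standard deviation equal to 0.2 times the baseline, matching the coefficient of variation described in the text.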
The third type of stimulus is a (3 s) sub-threshold long square pulse just below rheobase given to characterize the intrinsic noise in the neuronal membrane near threshold (please see “Objective function: maximum likelihood based on internal noise (MLIN)” in the Supplementary Methods and Supplementary Fig. 5).
Finally, a series of short square pulses at different frequencies, referred to as a “triple short square” (Supplementary Fig. 3), is used to characterize the spike-induced changes in threshold in the absence of additional membrane potential-induced changes (such as the voltage-induced changes in threshold modeled in GLIF5). This stimulus is necessary to fit all models with a spike component of the threshold, i.e., GLIF2, GLIF4, and GLIF5. Due to experimental constraints independent of this study, the triple short square stimulus was presented to only approximately half of the neurons, which is why many fewer GLIF2, GLIF4, and GLIF5 models are available.
Model exclusion criteria
Only transgenic lines that contained more than five neurons with the data necessary to create a basic GLIF1 model (described in the “Stimulus” section) were included in this study. In addition, models with biologically unrealistic parameter values after fitting, and neurons for which any model had an explained variance ratio below 20% on the training data set (noise 1), were eliminated (see the “Exclusion criteria” section in the Supplementary Methods for a detailed explanation of excluded data). Overall, a total of 645 neurons from 16 transgenic lines met the criteria for fitting the GLIF1 and GLIF3 models. As mentioned in the “Stimulus” section, only some neurons were given the stimulus needed to create GLIF2, GLIF4, and GLIF5 models: 254 neurons met the additional requirements for models with reset rules (GLIF2 and GLIF4), and 253 neurons met all the criteria for a GLIF5 model.
Fitting and post hoc optimization of instantaneous threshold, Θ∞
Similar to ref. 20, GLIF model fitting is achieved in two steps. In the first step, parameters are fit directly from the electrophysiological data using linear methods described in detail in the “Parameter fitting and distributions” section of the Supplementary Methods: \(E_L\) from the mean membrane potential at rest; R and C from fitting the membrane potential during a sub-threshold noise stimulus; \(\delta I_j, k_j\) from fitting the membrane potential between spikes during a supra-threshold stimulus; \(f_v, \delta t, \delta V\) from relating the potential before and after a spike during the noise stimulus; \(b_s, \delta\Theta_s\) from the potential of subsequent spikes during a series of brief square pulses; and \(a_v\) and \(b_v\) from the potential of subsequent spikes during the supra-threshold noise stimulus. Slices through this parameter space are shown in Fig. 3a–f.
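The linear character of this first fitting step can be illustrated for R and C: on a synthetic sub-threshold trace, dV/dt is regressed on the injected current and the leak term, recovering both parameters by least squares. The trace and parameter values below are synthetic, not taken from the database.

```python
import numpy as np

# Synthetic sub-threshold "recording": Euler-integrated passive membrane
# driven by Gaussian current noise (C = 100 pF, R = 200 MOhm, illustrative).
dt, C_true, R_true, E_L = 1e-4, 100e-12, 200e6, -70e-3
rng = np.random.default_rng(0)
I = 50e-12 * rng.standard_normal(20000)
V = np.full(len(I), E_L)
for t in range(1, len(I)):
    V[t] = V[t - 1] + dt * (I[t - 1] - (V[t - 1] - E_L) / R_true) / C_true

# Linear step: regress dV/dt on the current and the leak term,
#   dV/dt = (1/C) * I - (1/(R*C)) * (V - E_L)
dVdt = np.diff(V) / dt
A = np.column_stack([I[:-1], V[:-1] - E_L])
coef, *_ = np.linalg.lstsq(A, dVdt, rcond=None)
C_fit = 1.0 / coef[0]
R_fit = -1.0 / (coef[1] * C_fit)
```

Because the governing equation is linear in 1/C and 1/(RC), a single least-squares solve recovers both parameters without iterative optimization.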
In the second step, the instantaneous threshold, Θ∞, was optimized to maximize the likelihood that the neuron model reproduces the observed spike times. This is realized using a non-linear Nelder-Mead optimization strategy, as described in detail in the “Post hoc optimization” section of the Supplementary Methods. Θ∞ represents the threshold of a neuron when it is stimulated from rest. Any parameter, or combination of parameters, could have been chosen for this additional optimization; however, the time needed to optimize more than one parameter is prohibitive. Θ∞ was chosen for further optimization because it represents the overall excitability of a neuron and is therefore likely to be the best single parameter to counteract errors in the fitting, as well as the inherent error introduced by reducing a complex system to a simple model. The “Changes in instantaneous threshold, Θ∞, with optimization” subsection of the Supplementary Methods and Supplementary Fig. 7 discuss how Θ∞ is altered by optimization.
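The structure of this second step, a Nelder-Mead search over the single free parameter Θ∞, can be sketched with a toy smooth objective, using the analytic LIF firing rate under constant current in place of the paper's likelihood-based MLIN objective; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

tau, R, E_L = 20e-3, 200e6, -70e-3      # illustrative passive parameters
I_e = 200e-12                           # constant drive: I_e * R = 40 mV

def firing_rate(theta_inf):
    """Analytic firing rate of a LIF neuron under constant current (no
    refractory period): a smooth stand-in for the spike-time likelihood."""
    drive = I_e * R
    gap = theta_inf - E_L
    if gap >= drive:
        return 0.0                      # threshold never reached
    return 1.0 / (tau * np.log(drive / (drive - gap)))

target = firing_rate(-45e-3)            # "data" rate at the true threshold

# Nelder-Mead over the single free parameter theta_inf, as in the text.
res = minimize(lambda th: (firing_rate(th[0]) - target) ** 2,
               x0=[-55e-3], method='Nelder-Mead', options={'xatol': 1e-6})
```

Because only one parameter is free, the simplex search is cheap; the text notes that optimizing additional parameters jointly would be prohibitively slow.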
Electrophysiological features
We use a set of 14 electrophysiological features to perform clustering. Detailed explanations of these features can be found in ref. 24; we briefly describe them below.
Membrane time constant (τm): The membrane time constant of the cell was calculated by averaging the time constants of single-exponential fits to hyperpolarizing responses evoked by one-second current steps with amplitudes from −10 pA to −90 pA (20 pA interval). Fits were performed on the traces from 10% of the maximum voltage deflection to the maximum voltage deflection.
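A sketch of this feature on a synthetic step response, fitting a single exponential from roughly 10% of the peak deflection onward; the noise level and fit-window details below are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic hyperpolarizing step response: tau = 20 ms, -8 mV deflection,
# plus 0.05 mV measurement noise (values invented for illustration).
tau_true, E_L, dV = 20e-3, -70e-3, -8e-3
t = np.arange(0.0, 0.2, 1e-4)
v = E_L + dV * (1.0 - np.exp(-t / tau_true))
v = v + 0.05e-3 * np.random.default_rng(0).standard_normal(len(t))

def step_response(t, tau, v0, v_inf):
    # single exponential relaxing from v0 toward v_inf
    return v_inf + (v0 - v_inf) * np.exp(-t / tau)

# Fit from 10% of the maximum deflection onward, as in the feature definition.
deflect = np.abs(v - E_L)
start = np.argmax(deflect >= 0.1 * deflect.max())
popt, _ = curve_fit(step_response, t[start:], v[start:],
                    p0=[10e-3, E_L, E_L + dV])
tau_fit = popt[0]
```

In the pipeline this fit is repeated for each step amplitude and the resulting time constants are averaged.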
Input resistance (\(R_i\)): The input resistance was calculated by first measuring the peak voltage deflection evoked by one-second current steps with amplitudes from −10 pA to −90 pA (20 pA interval), then taking the slope of a linear fit of those deflections versus the respective stimulus amplitudes.
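The corresponding calculation on synthetic deflections, where the slope of deflection versus step amplitude recovers an assumed 150 MΩ input resistance:

```python
import numpy as np

# Hypothetical peak deflections for -10 to -90 pA steps (R_true = 150 MOhm).
amps = np.arange(-10e-12, -100e-12, -20e-12)   # -10, -30, -50, -70, -90 pA
peaks = amps * 150e6                            # ideal ohmic deflections
R_i = np.polyfit(amps, peaks, 1)[0]             # slope = input resistance
```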
Resting potential (Vrest): The resting potential was calculated by averaging the pre-stimulus membrane potential across all one-second current step responses.
Threshold current (Ithresh): The threshold current was the minimum amplitude of a one-second current step that evoked at least one action potential.
Action potential threshold (Vthresh): The action potential threshold was defined as the membrane potential before the action potential peak where the dV/dt was 5% of the maximum dV/dt averaged across all spikes evoked by the stimulus.
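A sketch of this threshold definition on a single synthetic spike (a slow ramp plus a fast sigmoidal upstroke); the pipeline's averaging over all spikes in a sweep is omitted here.

```python
import numpy as np

def ap_threshold(v, dt, frac=0.05):
    """Membrane potential at the first point before the peak where dV/dt
    reaches 5% of its maximum (single synthetic spike; the pipeline
    averages over all spikes evoked by the stimulus)."""
    dvdt = np.gradient(v, dt)
    peak = np.argmax(v)
    crit = frac * dvdt[:peak + 1].max()
    idx = np.argmax(dvdt[:peak + 1] >= crit)    # first crossing before peak
    return v[idx]

# Synthetic spike: slow depolarizing ramp followed by a fast upstroke.
dt = 1e-5
t = np.arange(0.0, 10e-3, dt)
v = -70e-3 + 2e-3 * (t / 10e-3) + 40e-3 / (1 + np.exp(-(t - 8e-3) / 0.2e-3))
threshold = ap_threshold(v, dt)
```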
Action potential peak (Vpeak): The action potential peak was defined as the maximum membrane potential reached during an action potential.
Action potential fast trough (Vfasttrough): The action potential fast trough was defined as the minimum value of the membrane potential within 5 ms after the action potential peak.
Action potential trough (Vtrough): The action potential trough was defined as the minimum value of the membrane potential before the next spike (or the end of the stimulus interval).
Upstroke/downstroke ratio (up:downstroke): The ratio between the absolute values of the action potential peak upstroke (i.e., maximum dV/dt value before the peak) and action potential peak downstroke (i.e., minimum dV/dt value after the peak). Here, the upstroke/downstroke ratio was separately measured on action potentials evoked by a one-second (long square) current step and a 3 ms (short square) current step.
Sag: The sag was calculated as the difference between the minimum membrane potential value reached during a hyperpolarizing one-second current step and the steady-state membrane potential during that step, divided by the difference between the minimum membrane potential value reached during the step and the baseline membrane potential. The sag was calculated on the step where the minimum membrane potential was closest to a value of −100 mV.
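A sketch of the sag calculation on a synthetic hyperpolarizing response built to have a sag of 0.2; estimating the steady state from the final 10% of the step window is an assumption.

```python
import numpy as np

def sag_ratio(v, baseline, step_mask):
    """Sag = (min - steady state) / (min - baseline) during a
    hyperpolarizing step; steady state is taken from the final 10% of the
    step window (an assumed estimator)."""
    v_step = v[step_mask]
    v_min = v_step.min()
    v_ss = v_step[-len(v_step) // 10:].mean()
    return (v_min - v_ss) / (v_min - baseline)

# Synthetic response: -25 mV initial deflection relaxing to -20 mV (sag 0.2).
t = np.arange(0.0, 1.0, 1e-4)
baseline = -70e-3
v = baseline - 20e-3 - 5e-3 * np.exp(-t / 50e-3)
sag = sag_ratio(v, baseline, np.ones(len(t), dtype=bool))
```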
f–I curve slope: The f–I curve was calculated by measuring the average firing rate during one-second current steps versus the stimulus amplitude. The supra-threshold part of the curve was fit with a line, and the slope was taken from that linear fit.
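A sketch with synthetic firing rates and an assumed 100 pA threshold current, fitting a line to the supra-threshold part of the curve:

```python
import numpy as np

# Synthetic f-I data: no spikes below a 100 pA threshold current, then a
# linear rise of 10 spikes/s per 50 pA (slope 2e11 spikes/s per ampere).
amps = np.array([50e-12, 100e-12, 150e-12, 200e-12, 250e-12])
rates = np.array([0.0, 0.0, 10.0, 20.0, 30.0])
supra = rates > 0                      # supra-threshold part of the curve
slope = np.polyfit(amps[supra], rates[supra], 1)[0]
```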
Latency to the first action potential (latency): Latency was calculated as the interval between the start of the stimulus and the time of the first spike evoked by the stimulus.
Maximum burst index (max. burst index): If a burst was identified during a response, a burst index was calculated as the difference between the maximum instantaneous firing rate inside the burst and the maximum instantaneous firing rate outside the burst, normalized by their sum. Bursts were identified as a change in the character of the voltage trajectory during the interspike interval (e.g., changing from a “direct” trajectory, where the membrane potential was always increasing after a spike, to a “delay” trajectory, where the membrane potential first hyperpolarized, then depolarized after a spike). If no burst was detected, the index was zero. The maximum index reported here was the maximum across all the supra-threshold responses evoked by depolarizing one-second current steps.
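A sketch of the burst-index arithmetic, given an externally supplied burst assignment; the voltage-trajectory-based burst detection described above is not reproduced here.

```python
import numpy as np

def burst_index(spike_times, burst_mask):
    """Burst index = (max rate inside burst - max rate outside) / sum, with
    instantaneous rates taken as inverse interspike intervals. Returns 0 if
    there is no burst (or everything is a burst)."""
    rates = 1.0 / np.diff(spike_times)
    in_burst = burst_mask[:-1] & burst_mask[1:]   # ISIs fully inside the burst
    if not in_burst.any() or in_burst.all():
        return 0.0
    r_in, r_out = rates[in_burst].max(), rates[~in_burst].max()
    return (r_in - r_out) / (r_in + r_out)

spikes = np.array([0.10, 0.11, 0.12, 0.40, 0.70])   # 3-spike burst, then tonic
mask = np.array([True, True, True, False, False])
index = burst_index(spikes, mask)
```

The normalization by the sum bounds the index between 0 and 1 for responses where the burst rate exceeds the tonic rate.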
Data availability
All data are publicly and freely available on the Allen Cell Type Database at http://celltypes.brain-map.org. Analysis code is available in the Allen Institute Github repository at https://github.com/AllenInstitute/GLIF_Teeter_et_al_2018. Code for the creation of the pink noise stimulus can be found at https://github.com/AllenInstitute/ephys_pink_noise.
References
Allen Institute for Brain Science. Allen Cell Types Database. http://celltypes.brain-map.org/ (2016).
Allen Institute for Brain Science. Allen Cell Types Database, Technical White Paper: Allen Mouse Common Coordinate Framework. http://help.brain-map.org/display/celltypes/Documentation?preview=/8323525/10813529/CellTypes_Ephys_Overview.pdf (2016).
Koch, C. Biophysics of Computation: Information Processing in Single Neurons (Oxford University Press, New York, NY, 2004).
Herz, A. V., Gollisch, T., Machens, C. K. & Jaeger, D. Modeling single-neuron dynamics and computations: a balance of detail and abstraction. Science 314, 80–85 (2006).
Gerstner, W. & Naud, R. How good are neuron models? Science 326, 379–380 (2009).
Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500 (1952).
Hay, E., Hill, S., Schürmann, F., Markram, H. & Segev, I. Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Comput. Biol. 7, e1002107 (2011).
Gouwens, N. et al. Systematic generation of biophysically detailed models for diverse cortical neuron types. Nat. Commun. https://doi.org/10.1038/s41467-017-02718-3 (2018).
Prinz, A. A., Billimoria, C. P. & Marder, E. Alternative to hand-tuning conductance-based models: construction and analysis of databases of model neurons. J. Neurophysiol. 90, 3998–4015 (2003).
Markram, H. et al. Reconstruction and simulation of neocortical microcircuitry. Cell 163, 456–492 (2015).
Gollisch, T. & Meister, M. Eye smarter than scientists believed: neural computations in circuits of the retina. Neuron 65, 150–164 (2010).
Potjans, T. C. & Diesmann, M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cereb. Cortex 24, 785 (2014).
Lapicque, L. Recherches quantitatives sur l’excitation electrique des nerfs traitée comme une polarization. J. De. Physiol. Et. Pathol. Gen. 9, 620–635 (1907).
Izhikevich, E. M. Which model to use for cortical spiking neurons. IEEE Trans. Neural Netw. Learn. Syst. 15, 1063–1070 (2004).
Mihalas, S. & Niebur, E. A generalized linear integrate-and-fire neural model produces diverse spiking behaviors. Neural Comput. 21, 704–718 (2009).
Paninski, L., Pillow, J. W. & Simoncelli, E. P. Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model. Neural Comput. 16, 2533–2561 (2004).
Rossant, C., Goodman, D. F., Platkiewicz, J. & Brette, R. Automatic fitting of spiking neuron models to electrophysiological recordings. Front. Neuroinform. 4, 2 (2010).
Dong, Y., Mihalas, S., Russell, A., Etienne-Cummings, R. & Niebur, E. Estimating parameters of generalized integrate-and-fire neurons from the maximum likelihood of spike trains. Neural Comput. 23, 2833–2867 (2011).
Mensi, S. et al. Parameter extraction and classification of three cortical neuron types reveals two distinct adaptation mechanisms. J. Neurophysiol. 107, 1756–1775 (2012).
Pozzorini, C. et al. Automated high-throughput characterization of single neurons by means of simplified spiking models. PLoS Comput. Biol. 11, e1004275 (2015).
Izhikevich, E. M. et al. Simple model of spiking neurons. IEEE Trans. Neural Netw. Learn. Syst. 14, 1569–1572 (2003).
Izhikevich, E. M. Dynamical Systems in Neuroscience (MIT Press, Cambridge, MA, 2007).
Badel, L. et al. Dynamic I-V curves are reliable predictors of naturalistic pyramidal-neuron voltage traces. J. Neurophysiol. 99, 656–666 (2008).
Allen Institute for Brain Science. Allen Cell Types Database, Technical White Paper: Electrophysiology. http://help.brain-map.org/display/celltypes/Documentation?preview=/8323525/10813529/CellTypes_Ephys_Overview.pdf (2016).
Tasic, B. et al. Adult mouse cortical cell taxonomy revealed by single cell transcriptomics. Nat. Neurosci. 19, 335–346 (2016).
Frey, B. J. & Dueck, D. Clustering by passing messages between data points. Science 315, 972–976 (2007).
Rand, W. Objective criteria for the evaluation of clustering methods. J. Am. Stat. Assoc. 66, 846–850 (1971).
Meila, M. Comparing clusterings by the variation of information. Learn. Theory Kernel Mach. 1, 173–187 (2003).
Allen Institute for Brain Science. Allen Cell Types Database, Technical White Paper: Neuronal Models: Biophysical—All Active. http://help.brain-map.org/display/celltypes/Documentation?preview=/8323525/10256463/BiophysModelAllActive.pdf (2016).
Ardid, S. et al. Mapping of functionally characterized cell classes onto canonical circuit operations in primate prefrontal cortex. J. Neurosci. 35, 2975–2991 (2015).
Brette, R. & Gerstner, W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J. Neurophysiol. 94, 3637–3642 (2005).
Acknowledgements
The authors thank the Allen Institute founder, Paul G. Allen, for his vision, encouragement, and support. We thank the following people for their contributions to the work enabling our project: Lydia Ng for technology team leadership, Amy Bernard and John Phillips for data production team leadership, and Susan Sunkin for program and project management support. We thank all those on the data production and technology teams who helped to collect, curate, quality control and make the data publicly available.
Author information
Authors and Affiliations
Contributions
C.T. developed the project; created and implemented the fitting, optimization, pipeline, and analysis code; developed the algorithms; performed the mathematical derivations; developed and programmed the experimental stimuli; and wrote, edited, and is responsible for all subjects of the manuscript. R.I. performed mathematical derivations, developed and created the code for the generalized linear model (GLM) used in fitting, developed the experimental pink noise stimuli, and wrote and edited the GLM subject. V.M. is responsible for the intellectual and algorithm development as well as code implementation, writing, and editing of the clustering analyses. N.G. is responsible for the conceptualization, algorithm development, code implementation, and writing and editing of the electrophysiological feature extraction analyses. D.F. developed, implemented, and deployed the model pipeline. J.B. managed electrophysiological pipeline data collection and wrote and edited the electrophysiological and Cre-line subjects. A.S. was responsible for data curation and public release. N.C. optimized the runtime of the post hoc optimization algorithm. H.Z. conceived and managed the electrophysiological data production pipeline. M.H. directed the primary group conducting this research. C.K. initiated the project, provided overall scientific guidance, and edited all subjects of the manuscript. S.M. conceived and developed the modeling project, created the models, performed mathematical derivations, provided scientific guidance, conceptualized and developed the experimental stimuli, and wrote, edited, and is responsible for all subjects of the manuscript.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing financial interests.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Teeter, C., Iyer, R., Menon, V. et al. Generalized leaky integrate-and-fire models classify multiple neuron types. Nat Commun 9, 709 (2018). https://doi.org/10.1038/s41467-017-02717-4