Abstract
Detailed conductance-based nonlinear neuron models consisting of thousands of synapses are key to understanding the computational properties of single neurons and large neuronal networks, and to interpreting experimental results. Simulations of these models are computationally expensive, considerably curtailing their utility. Neuron_Reduce is a new analytical approach to reduce the morphological complexity and computational time of nonlinear neuron models. Synapses and active membrane channels are mapped to the reduced model preserving their transfer impedance to the soma; synapses with identical transfer impedance are merged into one NEURON process while still retaining their individual activation times. Neuron_Reduce accelerates the simulations by 40–250-fold for a variety of cell types and realistic numbers (10,000–100,000) of synapses while closely replicating voltage dynamics and specific dendritic computations. The reduced neuron models will enable realistic simulations of neural networks at an unprecedented scale, including networks emerging from micro-connectomics efforts and biologically inspired “deep networks”. Neuron_Reduce is publicly available and is straightforward to implement.
Introduction
Compartmental models (CMs) were first employed by Wilfrid Rall1 to study the integrative properties of neurons. They enabled him to explore the impact of spatio-temporal activation of conductance-based dendritic synapses on the neuron’s output and the effect of the dendritic location of a synapse on the time course of the somatic excitatory postsynaptic potential2. By simulating electrically distributed neuron models, Rall demonstrated how the cable properties of dendrites explain the variety of somatic excitatory postsynaptic potential (EPSP) shapes that were recorded at the soma of α-motoneurons, thus negating the dominant explanation at that time that the differences in shapes of the somatic EPSPs in these cells result from differences in the kinetics of the respective synapses. This was an impressive example that faithful models of the neuron (as a distributed rather than a “point” electrical unit) are essential for the correct interpretation of experimental results. Since Rall’s 1964 and 1967 studies using CMs, EPSP “shape indices” measured at the soma are routinely used for estimating the electrotonic distance of dendritic synapses from the soma.
Over the years, detailed CMs of neurons have provided key insights into hundreds of experimental findings, both at the single-cell and the network levels. A notable example at the single-cell level is the explanation as to why the somatic Na+ action potential propagates backward in the soma-to-dendrites direction and (typically) not vice versa3. CMs have also pinpointed the conditions for the generation of local dendritic Ca2+ spikes4,5,6 and provided an explanation for the spatial restriction of the active spread of dendritic spikes from distal dendrites to the soma7 and see also refs. 8,9,10,11,12,13,14. Today, detailed CMs are even being used for simulating signal processing in human pyramidal neurons, including their large numbers of dendritic spines/synapses15.
At the network level, detailed CMs are utilized for such noteworthy projects as large-scale simulations of densely in silico reconstructed cortical circuits16,17 and the overarching goal of the Allen Institute to simulate large parts of the visual system of the mouse18,19. Because detailed compartmental modeling is increasingly becoming an essential tool for the understanding of diverse neuronal phenomena, major efforts have been invested in developing user-friendly computer software that implements detailed CMs, the best known of which are NEURON20, GENESIS21, NeuroConstruct22, PyNN23, and, recently, BioNet24, NTS25, NetPyNE26, and Geppetto27.
Modern personal computers can simulate tens of seconds of electrical activity of single neurons comprising thousands of nonlinear compartments and synapses. However, they cope poorly with cases where many model configurations must be evaluated, such as in large-scale parameter fitting for single-neuron models5,28, or when the dendritic tree is morphologically and electrically highly intricate and consists of tens of thousands of dendritic synapses, as with human cortical pyramidal neurons15. When the aim is to simulate a neuronal network consisting of hundreds of thousands of such neurons, only very powerful computers can cope. For example, simulating 30 s of biological time for a cortical network consisting of 200,000 detailed neuron models on the BlueGene/Q supercomputer takes several hours17.
To overcome this obstacle, two approaches have been pursued. The first involves developing alternative, cheaper, and more efficient computing architectures (e.g., neuromorphic-based computers29,30). These have not yet reached the stage where they can simulate large-scale network models with neurons consisting of branched nonlinear dendrites having a realistic number of synapses. The other approach is to simplify neuron models while preserving their input/output relationship as faithfully as possible. Rall31 was the first to suggest a reduction scheme in his “equivalent cylinder” model, which showed that, for certain idealized passive dendritic trees, the whole tree could be collapsed into a single cylinder that was analytically identical to the detailed tree. The “equivalent cylinder” preserves the total dendritic membrane area, the electrotonic length of the dendrites, and, most importantly, the postsynaptic potential (amplitude and time course) at the soma for a dendritic synapse when mapped to its respective electrotonic location on the “equivalent cylinder”32,33. However, this method is not applicable for dendritic trees with large variability in their cable lengths (e.g., pyramidal neurons with a long apical tree and short basal trees), conductance-based synapses, or for dendrites with nonlinear membrane properties.
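Rall's collapse applies only when, at every branch point, the daughter diameters obey the 3/2 power rule (and all terminal tips lie at the same electrotonic distance from the soma). A minimal sketch of checking the rule at a single branch point follows; the tolerance and the diameter values in the usage note are illustrative, not taken from any particular reconstruction:

```python
def satisfies_rall_rule(parent_diam, daughter_diams, tol=1e-6):
    """Check Rall's 3/2 power rule at a branch point:
    d_parent^(3/2) must equal the sum of d_daughter^(3/2).
    Only trees obeying this rule (with matched electrotonic tip
    distances) collapse exactly into an 'equivalent cylinder'."""
    return abs(parent_diam ** 1.5 - sum(d ** 1.5 for d in daughter_diams)) <= tol
```

For example, a parent of diameter 2.0 µm splitting into two daughters of diameter 2^(1/3) ≈ 1.26 µm satisfies the rule exactly, whereas two 1.0 µm daughters do not; real reconstructed trees typically violate it, which is one reason the equivalent-cylinder reduction is not generally applicable.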
Over the years, several different reduction schemes have been proposed; for example, a recent work mapped all the synapses to a single compartment, taking the filtering effect of the dendrites into account34. Other methods reduce the detailed morphology to a simplified geometric model while preserving the total membrane area35,36,37 or the axial resistivity38; see also refs. 12,39,40. However, these methods have a variety of drawbacks; in particular, they are either “hand fitted” and thus lack a clear analytical underpinning or are complicated to implement, and in some cases, their computational advantage for realistic numbers (thousands) of synapses is not quantified. Most of these methods do not support dendrites with active conductances35,38,39,41,42 and they have not been tested on a broad range of neuron types. Importantly, none of the previous methods provided an easy-to-use open access implementation. Thus, today there is no simple, publicly available reduction method for neuron models that can be used by the extensive neuroscience and machine-learning communities.
To respond to this need, the present study provides an analytic method for reducing the complexity of detailed neuron models while faithfully preserving the essential input/output properties of these models. Neuron_Reduce is based on key theoretical insights from Rall’s cable theory, and its implementation for any neuron type is straightforward without requiring hand-tuning. Depending on the neuron modeled and the number of synapses, Neuron_Reduce accelerates the simulation run-time by a factor of up to 250 while preserving the identity of individual synapses and their respective dendrites. It also preserves specific membrane properties and dendritic nonlinearities, hence maintaining specific dendritic computations. Neuron_Reduce is easy to use, fully documented, and publicly available on GitHub (https://github.com/orena1/neuron_reduce).
Results
Mapping of a detailed neuron model to a multi-cylinder model
The thrust of our analytical reduction method (Neuron_Reduce) is described in Fig. 1a–c. This method is based on representing each of the original stem dendrites by a single cylindrical cable, which has the same specific membrane resistivity (Rm, in Ωcm2), capacitance (Cm, in F/cm2), and axial resistivity (Ra, in Ωcm) as in the detailed tree (Fig. 1a). Also, each cylindrical cable satisfies two constraints: (i) the magnitude of the transfer impedance, \(| {Z_{0,L}\left( \omega \right)} | = | {V_0\left( \omega \right)/I_L\left( \omega \right)} |\), from its distal sealed end (X = L) to its origin at the soma end (X = 0) is identical to the magnitude of the transfer impedance from the electrotonically most distal dendritic tip to the soma in the respective original dendrite; (ii) at its proximal end (X = 0), the magnitude of the input impedance, \(| {Z_{0,0}\left( \omega \right)} | = | {V_0\left( \omega \right)/I_0\left( \omega \right)} |\), is identical to that of the respective stem dendrite (when decoupled from the soma). As shown in Eqs. (1)–(11) (Methods), these two constraints, while preserving the specific membrane and axial properties, guarantee a unique cylindrical cable (with a specific diameter and length) for each of the original dendrites.
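For the steady-state case (ω = 0), the two constraints pin down the cylinder analytically: for a cylinder sealed at X = L, the input resistance at X = 0 is R∞·coth(L) and the end-to-origin transfer resistance is R∞/sinh(L), so their ratio equals cosh(L), which yields L; R∞ then yields the diameter. The sketch below reproduces this logic for ω = 0 only (the full method, Eqs. (1)–(11), handles arbitrary ω); the numeric values in the test are illustrative, not taken from the modeled cell:

```python
import math

def cylinder_resistances(d, l, Rm, Ra):
    """Sealed-end input resistance R00 and end-to-origin transfer
    resistance R0L of a cylinder (d, l in cm; Rm in ohm*cm^2; Ra in ohm*cm)."""
    lam = math.sqrt(Rm * d / (4.0 * Ra))             # DC space constant (cm)
    L = l / lam                                      # electrotonic length
    R_inf = (2.0 / math.pi) * math.sqrt(Rm * Ra) * d ** -1.5
    return R_inf / math.tanh(L), R_inf / math.sinh(L)

def unique_cylinder(R00, R0L, Rm, Ra):
    """Invert the two constraints: R00/R0L = cosh(L) gives L, then R_inf
    gives the diameter, and lambda converts L into physical length."""
    L = math.acosh(R00 / R0L)
    R_inf = R0L * math.sinh(L)
    d = ((2.0 / math.pi) * math.sqrt(Rm * Ra) / R_inf) ** (2.0 / 3.0)
    lam = math.sqrt(Rm * d / (4.0 * Ra))
    return d, L * lam                                # (diameter, length) in cm
```

Because the forward map (d, l) → (R00, R0L) is monotonic in both arguments, the inversion is unique, which is what guarantees one cylinder per stem dendrite.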
Because the magnitude of the transfer impedance in both the original dendrite and in the respective cylindrical cable spans from \(| {Z_{0,L}(\omega )} |\) to \(| {Z_{0,0}(\omega )} |\), all dendritic loci having intermediate transfer impedance values can be mapped to a specific locus in the respective cylinder that preserves this intermediate transfer impedance. This mapping guarantees (for the passive case) that the magnitude of the somatic voltage response, V0(ω), to an input current, Ix(ω), injected at a dendritic location, x, will be identical in both the detailed and the reduced cylinder models (see Methods). Consequently, synapses and nonlinear ion channels are mapped to their respective loci in the reduced cylinder while preserving the respective transfer impedance to the soma (see Fig. 1, Step B, and Methods). Based on Eqs. (1)–(11), Neuron_Reduce generates a reduced multi-cylindrical tree for any ω value (different reduced models for different ω values). Conveniently, we found a close match between the detailed and the reduced models for ω = 0 (the steady-state case). Therefore, all figures in this work are based on reduced models with ω = 0 (see Discussion).
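For ω = 0 this intermediate mapping can likewise be written in closed form: along a sealed-end cylinder, the transfer resistance from location X to the origin is R∞·cosh(L − X)/sinh(L), which decreases monotonically from R_{0,0} at X = 0 to R_{0,L} at X = L and can therefore be inverted for X. A sketch (R_inf and L stand for the values obtained when the cylinder was built; the numbers in the test are placeholders):

```python
import math

def locus_on_cylinder(R_target, R_inf, L):
    """Electrotonic location X on a sealed-end cylinder whose transfer
    resistance to the origin equals R_target, from
    R_{0,X} = R_inf * cosh(L - X) / sinh(L)."""
    return L - math.acosh(R_target * math.sinh(L) / R_inf)
```

A synapse whose transfer resistance in the detailed tree equals the input resistance of the cylinder maps to X = 0; one matching the distal-tip transfer resistance maps to X = L; all intermediate values land in between.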
Neuron_Reduce implemented on L5 pyramidal cell with synapses
In Fig. 1, Neuron_Reduce is implemented on a detailed CM of a 3D reconstructed layer 5 pyramidal neuron from the rat somatosensory cortex (same model as in ref. 5). This neuron consists of eight basal dendrites and one apical dendrite (shown in different colors) stemming from the soma. This neuron model has active membrane ion channels at both the soma and dendrites (see below). However, Neuron_Reduce first treats the modeled tree as passive by abolishing all voltage-dependent membrane conductances, and only retaining the leak conductance. Implementing Eqs. (1)–(11) for this cell produced a reduced, multi-cylindrical, passive model (Fig. 1b, Step A) consisting of only 50 compartments rather than the 642 compartments in the detailed model.
Figure 1c shows an example of four synapses located on different apical branches. These synapses all have the same transfer resistance to the soma in the detailed tree. Therefore, Neuron_Reduce maps them to a single locus on the respective cylinder, such that their transfer resistance is identical in both models. In the reduced model, these synapses are merged into one “NEURON” process (red synapse in Fig. 1b); however, they retain their individual activation times (see Methods). Figure 1d compares the transfer impedance between a specific point in the apical tree (marked by “d” in Fig. 1a, b) and the soma. By construction, for the passive case, the transfer resistance (for ω = 0) is equivalent for the respective loci in the detailed and the reduced model. This is indeed the case in Fig. 1d (left-most point on the x-axis), thus validating the implementation of the Neuron_Reduce analytic method. Note that, although constructed using ω = 0, the similarity between the detailed and reduced models also holds for higher input frequencies. However, for ω around 10–100 Hz, the transfer impedance from d to the soma (and vice versa, due to the reciprocity theorem for passive systems43) is somewhat larger in the reduced model (compare the red and black lines).
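The frequency dependence in Fig. 1d follows from replacing the DC constants with their complex counterparts: for sinusoidal input at frequency ω, the propagation factor becomes q = √(1 + iωτ_m) with τ_m = R_mC_m, and the end-to-origin transfer impedance of a sealed cylinder is Z_{0,L}(ω) = R∞/(q·sinh(qL)). A sketch of this standard cable-theory result, showing the low-pass fall-off of |Z| with frequency (parameter values are illustrative):

```python
import cmath
import math

def transfer_impedance_mag(f_hz, R_inf, L, tau_m):
    """|Z_{0,L}(omega)| of a sealed-end cylinder with electrotonic
    length L and membrane time constant tau_m (in seconds)."""
    q = cmath.sqrt(1 + 1j * 2 * math.pi * f_hz * tau_m)
    return abs(R_inf / (q * cmath.sinh(q * L)))
```

At f = 0 this reduces to the transfer resistance R∞/sinh(L) used for the mapping; at higher frequencies both models attenuate, though (as Fig. 1d shows) not by exactly the same amount.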
To test the performance of Neuron_Reduce on transient synaptic inputs (composed of mixed input frequencies), we sequentially activated the four synapses shown in Fig. 1c in both the detailed and the reduced models (see Methods and Supplementary Table 2). Figure 1e shows the close similarity in the composite somatic EPSPs between the two models, further validating that the mapping of the detailed model to the reduced model using ω = 0 provides satisfactory results for the passive case (see also Supplementary Fig. 2).
Accuracy and speed-up of Neuron_Reduce for nonlinear models
To measure the accuracy of Neuron_Reduce for a fully active nonlinear neuron model, we ran a comprehensive set of simulations using the well-established case of the L5 pyramidal cell model5 shown in Fig. 2a (same cell as in Fig. 1). This neuron model includes a variety of nonlinear dendritic channels, including a voltage-dependent Ca2+ “hot spot” in the apical tuft (schematic yellow region in Fig. 1c) and a Na+-based spiking mechanism in the cell body. We randomly distributed 8000 excitatory and 2000 inhibitory synapses on the modeled dendritic tree (the synaptic parameters are listed in Supplementary Table 2) and used Neuron_Reduce to generate a reduced model for this cell. We simulated the detailed model by randomly activating the excitatory synapses at 5 Hz and the inhibitory synapses at 10 Hz (see Methods). The detailed model responded with an average firing rate of 11.8 Hz (black trace in Fig. 2b; only 2 out of 50 s of simulation time are shown). The average firing rate of the respective reduced model in response to the same synaptic input was 11.3 Hz (red trace, Fig. 2b; spike timings are shown by small dots at the top). The cross-correlation between the two spike trains peaked around zero (Fig. 2c), and the inter-spike interval (ISI) distributions of the two models were similar (Fig. 2d).
The full range of responses to a random synaptic input for the two models was explored by varying the firing rate of the excitatory (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA)- and N-methyl-d-aspartate (NMDA)-based) synapses and measuring the degree of similarity between the firing rates of the two models, which indicated a good fit between the two (Fig. 2e). We used the SPIKE-synchronization measure44,45 to further quantify the similarity between the spike trains of the detailed and reduced models. The SPIKE-synchronization value for the two spike trains shown in Fig. 2b was 0.8. In Fig. 2f, the SPIKE synchronization was computed as a function of the output rate of the detailed model, both for the case where the excitatory synapses consisted of only an AMPA component (blue) and for when they also included an NMDA component (orange). For the AMPA-only case, the SPIKE synchronization was high for all output frequencies, but it was poor for low output frequencies when the synapses included an NMDA component, although it improved significantly for output frequencies above ~7 Hz (see Discussion). Figure 2g shows the SPIKE synchronization as a function of the firing rate of the detailed model, for active and passive dendrites and with/without NMDA-based synaptic conductance, demonstrating again that, when NMDA synapses are involved, the performance of the reduced model is low for low output rates. We also tested other spike-train similarity metrics46,47 (Supplementary Fig. 3) and found results comparable to those shown in Fig. 2. We also analyzed the performance of Neuron_Reduce on two additional patterns of synaptic input. In one case, the synaptic input was activated in an oscillatory manner at different frequencies (see Methods); here, the SPIKE-synchronization measure ranged between 0.75 and 1 (Supplementary Fig. 4a, b). In the other case, the synaptic input was taken from a spontaneously active Blue Brain circuit17 (see Methods); in this case, the SPIKE-synchronization measure was 0.71 (Supplementary Fig. 4c).
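The SPIKE-synchronization measure of refs. 44,45 is parameter-free, using a locally adaptive coincidence window. As a rough intuition for what such a score captures, here is a simplified fixed-window coincidence fraction; this is a stand-in sketch, not the published algorithm:

```python
def coincidence_sync(train_a, train_b, window=0.005):
    """Fraction of spikes (pooled over both trains) that have a partner
    in the other train within +/- window seconds. A fixed-window
    simplification of SPIKE-synchronization, which instead adapts the
    coincidence window to the local inter-spike intervals."""
    def matched(src, ref):
        return sum(any(abs(t - u) <= window for u in ref) for t in src)

    total = len(train_a) + len(train_b)
    if total == 0:
        return 1.0  # two empty trains are trivially synchronous
    return (matched(train_a, train_b) + matched(train_b, train_a)) / total
```

Identical trains score 1, fully disjoint trains score 0, and partially overlapping trains fall in between, mirroring how the SPIKE-synchronization values reported here should be read.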
We compared the performance of our reduction method to two other reduction approaches, one of which was Rall’s “equivalent cable” reduction method31,48. The other method maps all the dendritic synapses to the somatic compartment, after computing the filtering effect of the dendritic cable for each synapse34 (see Methods). Neuron_Reduce outperformed both these reduction methods (Supplementary Fig. 5).
Figure 3 compares the run-time of the detailed versus the reduced model for the neuron model shown in Fig. 2a. For example, simulating the detailed model with 10,000 synapses for 50 s of biological time required 2906 s of computer time (run-time), whereas it took only 68.7 s in the reduced model, a ~42-fold computational speed-up (see Supplementary Table 1). The larger the number of synapses in the detailed model, the longer the run-time (Fig. 3a). In contrast, the run-time in the reduced model is only shallowly dependent on the number of synapses. This is expected when considering the synaptic merging step in our algorithm (see Discussion). The run-time of the reduced model depends on the number of compartments per cylinder; it increases sharply with an increasing number of compartments (the run-time ratio between the detailed and the reduced models decreases, gray line in Fig. 3b). However, there was no improvement in the SPIKE-synchronization measure when the spatial discretization, ΔX, per compartment was <0.1λ, where λ is the length constant (Fig. 3b blue line and see also previous research on the subject49). Therefore, all the results presented in Figs. 1–7 are based on models with a ΔX that does not exceed 0.1λ.
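In NEURON terms, the ΔX ≤ 0.1λ criterion fixes the number of segments per cylinder from its DC space constant. A sketch of that bookkeeping (rounding up to an odd segment count follows NEURON's convention of placing a node at the section center; the parameter values in the test are illustrative):

```python
import math

def nseg_for(length_cm, diam_cm, Rm, Ra, dX_max=0.1):
    """Smallest odd number of segments such that each segment spans at
    most dX_max of the DC space constant lambda."""
    lam = math.sqrt(Rm * diam_cm / (4.0 * Ra))   # lambda at omega = 0 (cm)
    n = max(1, math.ceil((length_cm / lam) / dX_max))
    return n if n % 2 == 1 else n + 1            # NEURON uses odd nseg
```

Halving dX_max roughly doubles the segment count, and with it the number of voltage equations per time step, which is why the run-time ratio in Fig. 3b drops sharply for finer discretizations with no accuracy gain below 0.1λ.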
In Fig. 4 we compared the dendritic voltage in the detailed model with that at the respective location in the reduced model. We found that: (i) the voltage transients could differ significantly among dendritic branches that are all mapped to the same compartment in the reduced model (e.g., compare the gray traces in the yellow compartments in Fig. 4b); (ii) the average voltage trace of these different dendritic branches (black trace in Fig. 4b) is similar to the voltage in the respective compartment in the reduced model (red trace in Fig. 4b). The implications of the latter finding for capturing highly nonlinear local dendritic events are elaborated in the Discussion.
Neuron_Reduce preserves dendritic nonlinearities and computations
To determine the capability of the reduced models to support nonlinear dendritic phenomena and dendritic computations, we repeated two classical experiments in both the detailed and the reduced models of the L5 pyramidal cell shown in Fig. 1. The first simulated experiment started by injecting a brief depolarizing step current into the soma of the detailed model to generate a somatic Na+ action potential (AP, black trace in Fig. 5a). This AP propagated backward into the apical dendrite as a backpropagating AP (BPAP, red trace in Fig. 5a). Repeating the same current injection in the reduced model led to a similar phenomenon, but with a larger BPAP (Fig. 5c). The detailed model also included a “hot region” with voltage-dependent calcium conductances in its apical dendrite (see also Fig. 1). Combining the somatic current injection with a synaptic-like transient depolarizing current injected into the apical nexus evoked a prolonged Ca2+ spike in the distal apical dendrite (red trace at the apical tree), which, in turn, generated a burst of somatic Na+ spikes (the BPAP-activated Ca2+ spike (BAC) firing4,5,50, Fig. 5b). Neuron_Reduce maps the nonlinear dendritic “hot” Ca2+ region to its respective location in the reduced model (see Fig. 1 and Methods). Figure 5c, d shows that the exact same combination of somatic and dendritic input currents also produced the BAC firing phenomenon in the reduced model. However, the reduced model was somewhat more excitable than the detailed model; this resulted in a burst of three spikes with a higher frequency (and sometimes an additional spike) in the reduced model (compare Fig. 5b, d).
The second simulated experiment attempted to replicate theoretical and experimental results reported in previous studies1,51,52. In these studies, several excitatory synapses were activated sequentially in time, on a stretch of a basal dendrite, either in the soma-to-dendrites (OUT) direction or vice versa (the IN direction). Rall showed that the shape and size of the resultant composite somatic EPSP depended strongly on the spatio-temporal order of synaptic activation; it was always larger and more delayed for the centripetal (dendrites-to-soma) than for the centrifugal (soma-to-dendrites) sequence of synaptic activation (this difference can serve to compute the direction of motion51). It was shown that the difference in the resulting somatic voltage peak between these two spatio-temporal sequences of synaptic activation was enhanced when nonlinear NMDA-dependent synapses were involved and that it made it possible to discriminate between complex patterns of dendritic activation52.
To simulate these phenomena, 12 excitatory synapses were placed along one basal branch in the detailed model (red dots on the green basal tree, Fig. 6a). At first, the synapses had only an AMPA component. The synapses were activated in temporal order from the tip to the soma (IN, cyan traces) or from the soma to the tip (OUT, blue traces; see Methods for details). As predicted by Rall, activation in the IN direction resulted in a larger and more delayed somatic EPSP (cyan trace versus blue trace in Fig. 6b). Neuron_Reduce merged these 12 synapses into five point processes along the respective cylinder (Fig. 6d). We repeated the same experiment in the reduced model and found that the EPSP resulting from the IN direction was larger and more delayed, with an EPSP waveform similar to that of the detailed model (Fig. 6e; see also Supplementary Fig. 2 and Discussion). Next, an NMDA component was added to the 12 simulated synapses; this resulted in larger somatic EPSP amplitudes in both directions (and both models) and a smaller difference in the peak timing between the two directions in both the detailed and the reduced models (compare Fig. 6c, f).
To generalize the impact of the spatio-temporal order of synaptic activation, we used a directionality index suggested in a previous study52. This measure estimates how different a given synaptic sequence is from the IN sequence by calculating the number of synaptic swaps needed to convert the given pattern into the IN pattern (using the bubble-sort algorithm, see Methods). We tested the EPSPs that resulted from different temporal combinations of synaptic activation (each having a different directionality index), both without (Fig. 6g) and with an NMDA component (Fig. 6i). The peak somatic EPSP in the reduced model (red dots) was larger than in the respective detailed model (black dots), both for the AMPA-only case (by 1.71 ± 0.43 mV; mean ± SD) and for the AMPA + NMDA case (by 4.80 ± 0.74 mV); see Supplementary Fig. 1. Nevertheless, the behavior of the two models was similar once the peak value obtained in the OUT direction was subtracted from the somatic voltage of each model (Fig. 6h, j). Then, the difference between the reduced and the detailed models was, on average, only 0.11 ± 0.43 mV for the AMPA-only case and 0.35 ± 0.74 mV for the AMPA + NMDA case. Thus, although the detailed and the reduced models differ to a certain extent (see Discussion), the capability of the reduced model to discriminate between spatio-temporal patterns of synaptic activation is similar to that of the detailed model.
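The bubble-sort swap count used for the directionality index is the number of pairwise inversions between the tested activation order and the IN order. A sketch (the convention that a pattern lists synapse positions with the tip-most site ranked first as the IN reference is our assumption for illustration):

```python
def directionality_index(pattern):
    """Bubble-sort distance from `pattern` to the fully sorted IN
    sequence: the number of inverted pairs, which equals the number of
    adjacent swaps bubble sort would perform."""
    return sum(
        1
        for i in range(len(pattern))
        for j in range(i + 1, len(pattern))
        if pattern[i] > pattern[j]
    )
```

The IN pattern itself scores 0, the OUT pattern scores the maximum n(n − 1)/2, and intermediate shufflings fall in between, which is the x-axis used in Fig. 6g–j.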
Neuron_Reduce applied successfully to a variety of neurons
We next tested the utility of Neuron_Reduce on 13 different neuron models from different brain regions (Fig. 7). Four models were obtained from the Blue Brain database17,53: an L6 tufted pyramidal cell, an L4 double bouquet cell, an L4 spiny stellate cell, and an L5 Martinotti cell, all from the rat somatosensory cortex. Two additional models were obtained from the Allen Institute cell-type database11: an L4 spiny cell and an L1 aspiny cell from the mouse visual cortex. We also used a medium spiny neuron from the mouse basal ganglia54, two rat thalamocortical neurons55, a Golgi cell from the mouse cerebellar cortex, and an inhibitory hippocampal neuron from the rat56, as well as two additional neuron models from our laboratory: a rat L2/3 large basket cell57 and a model of a human L2/3 pyramidal cell from the temporal cortex58. All these models were based on 3D reconstructions and were constrained by experimental recordings (see Supplementary Table 2 for details on the various neuron models and input parameters).
Neuron_Reduce successfully generated a reduced model for each of these cell types, with highly faithful response properties in all cases (Fig. 7). Three examples with their respective morphologies for the detailed and reduced models are shown in Fig. 7a–c. For a given input, we measured the spiking activity of the detailed and reduced models (Fig. 7d–f) and calculated the corresponding SPIKE-synchronization values. For the L6 tufted PC model (Fig. 7a, d), the L2/3 large basket cell model (Fig. 7b, e), and the L4 double bouquet model (Fig. 7c, f), the SPIKE-synchronization values were 0.74, 0.85, and 0.91, respectively, for 50-s-long simulations (only 2 s are shown in Fig. 7d–f). The SPIKE-synchronization values for additional inputs, and for the other 10 neuron models and their corresponding reduced models, are shown in Fig. 7g. We also tested the performance of Neuron_Reduce and the variability of the SPIKE-synchronization measure using eight neocortical neuron types, with 11 cell models per type, taken from the Blue Brain cells dataset17,53. Supplementary Fig. 6 shows that, for all cells, the SPIKE-synchronization measure remains similar to that found in Fig. 7, with mean values per cell type ranging between 0.43 and 0.86; as in Fig. 7, it increased with the output frequency of the modeled cell.
Discussion
Neuron_Reduce is a new tool for simplifying complex neuron models while substantially shortening their simulation run-time. It analytically maps the detailed tree into a reduced multi-cylindrical tree, based on Rall’s cable theory and linear circuit theory (Fig. 1). The underpinning of the reduction algorithm is that it preserves the magnitude of the transfer impedance \(| {Z_{0,j}\left( \omega \right)} |\) from each dendritic location, j, to the soma (the dendro-somatic direction, Eqs. (1)–(11) in Methods). Since in linear systems it holds that \(| {Z_{0,j}(\omega )} | = | {Z_{j,0}(\omega )} |\), for passive dendritic trees it also preserves the transfer impedance in the soma-to-dendrites direction (e.g., current injection at the soma will result in the same voltage response at the respective sites in the detailed and reduced models59).
Note that dendritic voltage transients (e.g., synaptic potentials) contain a range of frequencies, ω; we, however, had to select one frequency for mapping the detailed tree to the reduced tree. Consequently, we examined a whole range of possible ω values for this mapping. Conveniently, we found that ω = 0 is the preferred frequency for generating the reduced model (namely, when the mapping from the detailed to the reduced model is performed based on the transfer resistance \(| {Z_{0,j}\left( {\omega = 0} \right)} | = | {R_{0,j}} |\), see Supplementary Fig. 7). This result is not surprising; Rinzel and Rall33 showed that, in passive trees with current-based synapses, the attenuation of the voltage time integral (the area below the EPSP) is identical to the attenuation of the steady-state voltage. In other words, by using the transfer resistance for our mapping procedure, we preserved the total charge transfer (which, in our case, is proportional to the voltage time integral) from the synapse to the soma (and vice versa), but not, for example, the EPSP peak value.
Neuron_Reduce proved accurate in replicating voltage dynamics and spike timing over a large range of input parameters and a variety of neuron types (Fig. 7, Supplementary Fig. 6, and Supplementary Table 2). This claim is based on several metrics for assessing the quality of the performance of the reduced model (Supplementary Fig. 3). Neuron_Reduce is straightforward to use, fast, and generally applicable, thus enabling its implementation on any neuron morphology with any number (even tens of thousands) of synapses. One key advantage of Neuron_Reduce is that it retains the identity of individual dendrites and synapses and maps dendritic nonlinearities to their respective loci in the reduced model, hence preserving local excitable dendritic phenomena and, therefore, nonlinear dendritic computations. Neuron_Reduce also preserves the passive cable properties (Rm, Ra, and Cm) of the detailed model, thus preserving synaptic integration and other temporal aspects of the detailed model. Neuron_Reduce can also be applied to reducing cells connected by gap junctions. As Neuron_Reduce preserves the transfer resistance from the location of a synapse (in this case, the gap junction) to the soma and vice versa, one expects that the coupling coefficient between two connected cells will be preserved in the reduced models after mapping the gap junction to its appropriate location in the reduced model.
Neuron_Reduce enhances the computational speed by up to several hundredfold, depending on the simulated morphology and the number of simulated synapses (Fig. 3 and Supplementary Table 1). This combination of capabilities, together with its user-friendly documentation and public availability, makes Neuron_Reduce a promising method for the community of neuronal modelers and computational neuroscientists, and for the growing community interested in “biophysical deep learning.”
For a large number of synapses and complex morphologies, the run-time of Neuron_Reduce models can be accelerated by up to 250-fold as compared to their respective detailed models (Fig. 3 and Supplementary Table 1). This is achieved in two associated steps. First, the algorithm reduces the number of compartments of the neuron model; for example, for the reconstructed tree in Fig. 1, it reduced the number of compartments from 642 to 50. Then, synapses (and ion channels) that are mapped to the same electrical compartment in the reduced tree (because they have similar transfer resistance to the soma) are merged into one point process in NEURON. Each of these steps on its own has a relatively small effect on the run-time. However, when combined, a large (supralinear) improvement in the computational speed is achieved (Supplementary Table 1). This is because at each time step, NEURON computes both the voltage in each electrical compartment as well as the currents and states of each point process and membrane mechanism (synapses and conductances). Reducing the number of compartments in a model decreases the number of equations to be solved and the number of synapses to be simulated (due to the reduced number of compartments, a larger number of synapses are merged together). Importantly, merging synapses preserves the activation time of each synapse. Note, however, that in its present state, Neuron_Reduce cannot merge synapses with different kinetics.
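The merging step can be pictured as grouping synapses by their target segment while pooling their event times; in NEURON, the pooled times survive because each original synapse keeps its own NetCon onto the shared point process. A schematic sketch of this bookkeeping, not the actual Neuron_Reduce implementation:

```python
from collections import defaultdict

def merge_synapses(target_segments, activation_times):
    """Group synapses that map to the same reduced-model segment into one
    merged process, pooling their activation times. Each entry of
    `activation_times` is the event-time list of one original synapse;
    synapses with different kinetics would need separate groups."""
    merged = defaultdict(list)
    for seg, times in zip(target_segments, activation_times):
        merged[seg].extend(times)
    return {seg: sorted(t) for seg, t in merged.items()}
```

Because the merged process receives the union of the original event streams, the input statistics are unchanged; only the number of state variables NEURON must integrate per time step shrinks.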
Several other reduction methods for single neurons have been proposed over the years12,34,35,36,37,38,39,41. Most are not based on an analytic underpinning and thus require hand-tuning of the respective biophysical and morphological parameters. In addition, most of these methods have not been examined using realistic numbers of dendritic synapses and are incapable of systematic incorporation of dendritic nonlinearities. In most cases, their accuracy has not been assessed for a range of neuron types (but see ref. 41). Many of these methods are not well documented, thus making it hard to compare them directly with Neuron_Reduce. Nevertheless, we did compare the performance of Neuron_Reduce to two other reduction methods and showed that it outperformed them (Supplementary Fig. 5).
It should be noted that although the transfer impedance from a given dendritic locus to the soma is preserved in the reduced model, the input impedance at that locus is not preserved (it is lower in the reduced model). Consequently, the conditions for evoking local dendritic events, and the fine details of these events, are not identical in the detailed and the reduced models (e.g., compare Fig. 5a, b to Fig. 5c, d and see Fig. 4). Indeed, if there were highly local dendritic Na+ spikes (as in ref. 60), Neuron_Reduce would not capture them, as such local dendritic spikes would be averaged out in the respective lumped cable. Similarly, because the local voltage response to a current injection in the dendrite depends on the dendritic impedance, the local synaptic responses are somewhat different in the detailed versus the reduced case, especially when voltage-gated ion channels (such as NMDA-dependent synaptic channels) are involved. In fact, when large dendritic NMDA signals are involved, the resultant somatic EPSPs are expected to differ between the detailed and the reduced model, as is the case in Figs. 2 and 6. Indeed, if one insists on preserving highly local nonlinear dendritic events, then the full dendritic tree should be modeled.
Despite these local differences, the reduced model for L5PC did generate a local dendritic Ca2+ spike in the cylinder representing the apical dendrite and was able to perform an input classification task (enhanced by NMDA conductance), as in the detailed tree (Figs. 5 and 6). Moreover, when embedded in large circuits, individual neurons are likely to receive semi-random dendritic input, rather than a clustered input on specific dendrites. For such inputs, the reduced models generated by Neuron_Reduce capture most of the statistics of the membrane voltage dynamics as in the detailed model (Figs. 2 and 7 and Supplementary Figs. 4 and 6).
The next straightforward step is to use Neuron_Reduce to simplify all the neurons composing a large neural network model, such as the Blue Brain Project17 and the in silico models by Egger et al.16 and by Billeh et al.61. By preserving the connectivity and reducing the complexity of the neuronal models, the reduced models will make it possible to run much longer simulations and/or larger neuronal networks, while faithfully preserving the I/O of each neuron. Such long simulations are critical for reproducing long-term processes such as circuit evolution and structural and functional plasticity.
Methods
Neuron_Reduce algorithm and its implementation in NEURON
Neuron_Reduce maps each original stem dendrite to a unique single cylinder with both ends sealed. This cylinder preserves the specific passive cable properties (Rm, Cm, and Ra) of the original tree as well as both the transfer impedance from the electrotonically most distal dendritic tip to the soma and the input resistance at the soma end of the corresponding stem dendrite (when disconnected from the soma). For a sinusoidal angular frequency ω > 0, the transfer impedance Zi,j(ω) is the ratio between the Fourier transform of the voltage at point (i) and the Fourier transform of the sinusoidal current injected into the injection point (j) (note that in passive systems, Zi,j(ω) = Zj,i(ω)). This ratio is a complex number; its magnitude (|Zi,j(ω)|) is the ratio (in Ω) between the peak voltage response and the amplitude of the injected current. In a short cylindrical cable with sealed ends and electrotonic length L, the transfer impedance, Z0,X(ω), between the somatic end of the cylinder (X = 0) and any location X is33,43,62
$$Z_{0,X}\left( \omega \right) = \frac{{R_\infty }}{q}\frac{{{\mathrm{cosh}}\left( {q\left( {L - X} \right)} \right)}}{{{\mathrm{sinh}}\left( {qL} \right)}},$$(1)
where
$$q = \sqrt {1 + i\omega \tau } ,$$(2)
and
$$R_\infty = \frac{2}{\pi }\sqrt {R_{\mathrm{m}}R_{\mathrm{a}}} \,d^{ - 3/2},$$(3)
where τ is the membrane time constant, RmCm, d is the cylinder's diameter, and R∞ is the input resistance of the corresponding semi-infinite cylinder.
From Eq. (1), the input impedance at X = 0 is
$$Z_{0,0}\left( \omega \right) = \frac{{R_\infty }}{q}{\mathrm{coth}}\left( {qL} \right).$$(4)
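As a numerical sanity check of the sealed-end cable expressions in Eqs. (1)–(4) (a self-contained sketch with illustrative parameter values, not part of the Neuron_Reduce code):

```python
import cmath
import math

def transfer_impedance(X, L, R_inf, omega, tau):
    """|Z_{0,X}(omega)| of a sealed-end cylinder of electrotonic length L
    (Eq. (1)); X and L are in units of the space constant lambda."""
    q = cmath.sqrt(1 + 1j * omega * tau)  # Eq. (2)
    Z = (R_inf / q) * cmath.cosh(q * (L - X)) / cmath.sinh(q * L)
    return abs(Z)

L, R_inf, tau = 1.0, 100e6, 20e-3  # L in units of lambda; 100 MOhm; 20 ms
Z00 = transfer_impedance(0.0, L, R_inf, omega=0.0, tau=tau)  # input impedance, Eq. (4)
Z0L = transfer_impedance(L, L, R_inf, omega=0.0, tau=tau)    # transfer to the distal end

# At DC (omega = 0), q = 1, so the attenuation Z00/Z0L equals cosh(L) (Eq. (5)):
assert abs(Z00 / Z0L - math.cosh(L)) < 1e-9
```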
We next seek a cylindrical cable of electrotonic length L, in which both \(| {Z_{0,L}\left( \omega \right)} |\) and \(| {Z_{0,0}\left( \omega \right)} |\) are identical to those measured in the respective original stem dendrite (Fig. 1). For this purpose, we first look for an L value for which the ratio \(| {Z_{0,L}\left( \omega \right)} |/| {Z_{0,0}\left( \omega \right)} |\) is preserved. Dividing Eq. (1) by Eq. (4), we get
$$\frac{{Z_{0,L}\left( \omega \right)}}{{Z_{0,0}\left( \omega \right)}} = \frac{1}{{{\mathrm{cosh}}\left( {qL} \right)}},$$(5)
which can be expressed as
$${\mathrm{cosh}}\left( {qL} \right) = {\mathrm{cosh}}\left( {\left( {a + ib} \right)L} \right) = Me^{i\phi },$$(6)
where a and b are the real and the imaginary parts of q, respectively, and M and ϕ are the modulus and phase angle of this complex ratio.
As shown previously62, it follows that
$$M = \sqrt {{\mathrm{cosh}}^2\left( {aL} \right) - {\mathrm{sin}}^2\left( {bL} \right)} ,$$(7)
and
$${\mathrm{tan}}\,\phi = {\mathrm{tanh}}\left( {aL} \right){\mathrm{tan}}\left( {bL} \right).$$(8)
Importantly, for a fixed M (and a given ω) there is a unique value of L that satisfies Eq. (7) (see Fig. 4 in ref. 62 and note the one-to-one mapping between M and L for a given ω value). However, there are an infinite number of cylindrical cables (with different diameters and lengths) that have identical L values preserving a given M value in Eq. (7).
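This uniqueness can be exploited numerically. A minimal sketch (function names are ours, not the package's) recovers L from a target M by bisection, using the fact that \(|\cosh(qL)|\) grows monotonically with L:

```python
import cmath
import math

def modulus_M(L, omega, tau):
    """|cosh(qL)| with q = sqrt(1 + i*omega*tau); this is M in Eqs. (6)-(7)."""
    q = cmath.sqrt(1 + 1j * omega * tau)
    return abs(cmath.cosh(q * L))

def solve_L(M_target, omega, tau, L_hi=20.0, tol=1e-10):
    """Return the unique electrotonic length L with |cosh(qL)| = M_target
    (M_target >= 1), found by bisection on the monotonically increasing
    function modulus_M(L)."""
    lo, hi = 0.0, L_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if modulus_M(mid, omega, tau) < M_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# At DC (omega = 0), q = 1 and M = cosh(L), so M = cosh(1) must give back L = 1:
L_dc = solve_L(math.cosh(1.0), omega=0.0, tau=20e-3)
```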
We next need a unique cable, with a specific diameter d, that also preserves the measured value of |Z0,0(ω)| (and therefore it also preserves |Z0,L(ω)|, see Eq. (7)). From Eq. (4),
$$\left| {Z_{0,0}\left( \omega \right)} \right| = \frac{{R_\infty }}{{\left| q \right|}}\left| {{\mathrm{coth}}\left( {qL} \right)} \right|,$$(9)
and thus
$$R_\infty = \left| {Z_{0,0}\left( \omega \right)} \right|\left| q \right|\left| {{\mathrm{tanh}}\left( {qL} \right)} \right|,$$(10)
from which we compute the diameter, d, for that cylinder (see Eq. (3)),
$$d = \left( {\frac{2}{\pi }\frac{{\sqrt {R_{\mathrm{m}}R_{\mathrm{a}}} }}{{R_\infty }}} \right)^{2/3}.$$(11)
Equations (1)–(11) provide the unique cylindrical cable (with a specific d and L, and the given membrane and axial properties) that preserves the values of \(| {Z_{0,L}\left( \omega \right)} |\) and \(| {Z_{0,0}\left( \omega \right)} |\) as in the respective stem dendrite. Note that this unique cable does not necessarily preserve the phase ratio (ϕ in Eq. (8)) as in the original tree.
Practically, in order to transform each original stem dendrite (with fixed Rm, Ra, and Cm values) into a corresponding unique cylindrical cable, we proceeded as follows. First, on each modeled stem dendrite (when isolated from the soma), we searched for a distal location x with minimal transfer impedance, \(| {Z_{0,x}\left( \omega \right)} |\), from that particular x to the soma. This location provided the smallest M value for this particular stem dendrite. This distal dendritic locus, x, was mapped to the distal end, X = L, of the corresponding cylinder. We then used Eqs. (1)–(11) to calculate the unique respective cylinder for each stem dendrite.
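For the DC case (ω = 0, the setting used for all results in this study), the whole fitting procedure collapses to closed form: q = 1, so Z0,0 = R∞coth(L), Z0,L = R∞/sinh(L), and hence L = arccosh(Z0,0/Z0,L). A self-contained round-trip sketch (ours, with illustrative parameter values; it assumes the standard sealed-end cable expressions above):

```python
import math

def fit_cylinder_dc(Z00, Z0L, Rm, Ra):
    """Given the measured DC input resistance Z00 at the stem's somatic end and
    the DC transfer resistance Z0L to its electrotonically most distal tip,
    return (L, d): electrotonic length and diameter (in cm) of the equivalent
    sealed-end cylinder.  At omega = 0, Z00/Z0L = cosh(L) and Z00 = R_inf*coth(L).
    Units: Z in Ohm, Rm in Ohm*cm^2, Ra in Ohm*cm."""
    L = math.acosh(Z00 / Z0L)
    R_inf = Z00 * math.tanh(L)
    d = ((2.0 / math.pi) * math.sqrt(Rm * Ra) / R_inf) ** (2.0 / 3.0)
    return L, d

# Round trip: build a cylinder, "measure" its impedances, recover its geometry.
Rm, Ra = 20000.0, 150.0            # Ohm*cm^2, Ohm*cm
d_true, L_true = 2e-4, 0.8         # 2-um diameter, L = 0.8 lambda
R_inf_true = (2.0 / math.pi) * math.sqrt(Rm * Ra) * d_true ** -1.5
Z00 = R_inf_true / math.tanh(L_true)
Z0L = R_inf_true / math.sinh(L_true)
L_fit, d_fit = fit_cylinder_dc(Z00, Z0L, Rm, Ra)
```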
In order to map synapses from the detailed model to the reduced one, we computed, for each synapse at location j in the detailed model, \(| {Z_{0,j}\left( \omega \right)} |\), and then mapped this synapse to the respective location y in the reduced model, such that \(| {Z_{0,y}\left( \omega \right)} | = | {Z_{0,j}\left( \omega \right)} |\). This reduced model is then compartmentalized into segments (typically with a spatial resolution of 0.1λ, see Fig. 3b). We then merged all synapses with identical kinetics and reversal potential that are mapped to a particular segment onto a single point-process object in NEURON (Fig. 1, Step B). These synapses retain their original activation times and biophysical properties through the connection of each of their respective original NetStim objects to the single point process that represents them all (each of these connections is mediated by the synapse's original NetCon object). As shown in Supplementary Table 1, this step dramatically reduced the running time of the model. We note that all the results presented in this study were obtained using ω = 0 in Eqs. (1)–(11), since ω = 0 provided the best performance among the frequencies we tested (see Supplementary Fig. 7). However, ω is a parameter in the algorithm code and can be modified by the user. Note also that \(| {Z_{0,0}\left( \omega \right)} |\), \(| {Z_{0,j}\left( \omega \right)} |\), and \(| {Z_{0,L}\left( \omega \right)} |\) were analytically computed for each original stem dendrite using the NEURON impedance tool63.
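At ω = 0, the mapping from a synapse's measured transfer resistance to its location on the reduced cylinder can be inverted in closed form, since Z0,Y = R∞cosh(L − Y)/sinh(L). A minimal sketch (ours, illustrative values):

```python
import math

def map_location_dc(Z0j, L, R_inf):
    """Return the location Y (in units of lambda) on the reduced cylinder whose
    DC transfer resistance to the somatic end equals Z0j, by inverting
    Z_{0,Y} = R_inf * cosh(L - Y) / sinh(L)  (Eq. (1) with omega = 0)."""
    arg = Z0j * math.sinh(L) / R_inf
    arg = max(arg, 1.0)  # guard against rounding just below 1 at the distal tip
    return L - math.acosh(arg)

L, R_inf = 1.0, 100e6
# A synapse whose measured transfer resistance happens to equal Z_{0, L/2}
# must be mapped to the middle of the cylinder:
Z_target = R_inf * math.cosh(L - 0.5) / math.sinh(L)
Y = map_location_dc(Z_target, L, R_inf)
```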
Neuron models used in the present study
To estimate the accuracy of the reduction method, we ran 50-s simulations of cell morphologies of different types, in both the reduced and detailed models (see also Supplementary Fig. 6). Models of 13 neurons were used in this study; their details are available in Supplementary Table 2. For each of the models, we distributed 1,250–10,000 synapses on their dendritic trees. Eighty percent of the synapses were excitatory, and the rest were inhibitory. The synaptic conductances were modeled using known two-state kinetic synaptic models17. For simplicity, we did not include synaptic facilitation or depression. All models had one type of γ-aminobutyric acid type A (GABAA)-based inhibitory synapses and either AMPA- or AMPA + NMDA-based excitatory synapses. The synaptic rise and decay time constants were taken from various works cited in Supplementary Table 2. When no data were available, we used the default parameters of the Blue Brain Project synaptic models17,53. Inhibitory synapses were activated at 10 Hz, whereas the activation rate of the excitatory synapses was varied to generate different output firing rates in the range of 1–20 Hz (Figs. 2–4, 7 and Supplementary Figs. 3–7); the values used for each model are listed in Supplementary Table 2. In all simulations except those in Supplementary Fig. 4, synaptic activation times were randomly sampled from a homogeneous Poisson process. In Supplementary Fig. 4a, b the activation times were sampled from an inhomogeneous Poisson process with a time-dependent intensity \(\lambda \left( t \right) = r \, \ast \, {\mathrm{sin}}\left( {t \, \ast f \ast 2\pi } \right) + 1\), where t is time in s, r is the firing rate of the synapse, and f is the oscillation frequency.
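Such an inhomogeneous Poisson process can be sampled with the standard thinning (Lewis–Shedler) algorithm. A self-contained sketch (ours; parameter values are illustrative, and the intensity is clamped at zero in case r > 1):

```python
import math
import random

def inhom_poisson(rate_fn, lam_max, t_end, seed=0):
    """Sample spike times on [0, t_end) from an inhomogeneous Poisson process
    with intensity rate_fn(t) <= lam_max, using the thinning algorithm."""
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(lam_max)            # candidate from homogeneous process
        if t >= t_end:
            return spikes
        if rng.random() < rate_fn(t) / lam_max:  # accept with prob lam(t)/lam_max
            spikes.append(t)

r, f = 0.5, 2.0  # illustrative rate modulation and oscillation frequency (Hz)
lam = lambda t: max(r * math.sin(2 * math.pi * f * t) + 1.0, 0.0)
spikes = inhom_poisson(lam, lam_max=r + 1.0, t_end=100.0)
```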
In Supplementary Fig. 4c, we extracted a single layer 5 thick-tufted pyramidal cell with an early-bifurcating apical tuft (L5_TTC2; gid 75586) from the active Blue Brain microcircuit17, with calcium and potassium concentrations of 1.23 and 5.0 mM, respectively. The synaptic activation from the microcircuit was replayed to this detailed model and also to its respective reduced model. Synaptic depression and facilitation were disabled, and the synaptic time constants, which varied across the microcircuit, were set to their mean values (the decay time constant was set to 1.74 and 8.68 ms for AMPA and GABAA, respectively; the rise time constant for GABAA was set to 4.58 ms); all other variables were as in the Blue Brain simulations.
Estimating the accuracy of the reduced models
Cross-correlations were calculated between the spike trains of the detailed and the reduced models. The window size was 500 ms, and the bin size was 1 ms. The resulting cross-correlations were normalized by the number of spikes in the detailed model (Fig. 2c). ISIs were binned in windows of 21 ms to create the ISI distribution in Fig. 2d.
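A minimal version of this normalized cross-correlogram (our illustration; the paper's analysis may differ in detail):

```python
def cross_correlogram(ref, other, window=500.0, bin_ms=1.0):
    """Histogram of spike-time lags (other - ref) within +/- window ms,
    normalized by the number of reference spikes, as in Fig. 2c."""
    nbins = int(2 * window / bin_ms)
    counts = [0] * nbins
    for t_ref in ref:
        for t in other:
            lag = t - t_ref
            if -window <= lag < window:
                counts[int((lag + window) // bin_ms)] += 1
    return [c / len(ref) for c in counts]

# Identical trains put mass 1.0 (one coincidence per reference spike)
# in the zero-lag bin:
train = [100.0, 350.0, 720.0]
cc = cross_correlogram(train, train)
```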
The SPIKE-synchronization measure is a parameter- and scale-free method that quantifies the degree of synchrony between two spike trains44. SPIKE-synchronization is based on the relative number of quasi-simultaneous appearances of spikes in the two trains. In this study, we used the Python implementation of this method64. To allow comparison with the literature, Supplementary Fig. 3 depicts three additional metrics for comparing the performance of the detailed and the reduced models: trace accuracy39, ISI distance44, and the Γ coincidence factor65.
Comparison to other reduction algorithms
We compared Neuron_Reduce to two classical reduction algorithms (Supplementary Fig. 5):
1. Equivalent cable using the \(d^{3/2}\) rule for reduction. Rall and Rinzel32 and Rinzel and Rall33 showed that, for idealized passive dendritic trees, the entire dendritic tree can be collapsed to a single equivalent cylinder that is analytically identical (from the point of view of the soma) to the detailed tree. However, real neurons do not have ideal dendritic trees, mostly because dendritic terminations typically occur at different electrotonic distances from the soma. Nevertheless, it is possible to collapse any dendritic tree using a similar mapping (Rall's "\(d^{3/2}\) rule") as in the idealized tree; this provides an "equivalent cable" (rather than an "equivalent cylinder") with a varying diameter for the whole dendritic tree (see details in Rall et al.48). The electrotonic distances to the soma of synapses and nonlinear dendritic mechanisms were computed in the original model, and each synapse and mechanism was then mapped to the corresponding segment in the "equivalent cable", preserving the electrotonic distance to the soma as in the original tree.
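The core of the \(d^{3/2}\) rule is that, at each branch point, the daughter branches are replaced by a single cable whose diameter raised to the 3/2 power equals the sum of the daughters' diameters raised to the 3/2 power. A one-function sketch:

```python
def equivalent_diameter(daughter_diams):
    """Rall's d^{3/2} rule: at a branch point, the daughters are equivalent
    (in input conductance) to a single cable whose d^{3/2} equals the sum of
    the daughters' d^{3/2}."""
    return sum(d ** 1.5 for d in daughter_diams) ** (2.0 / 3.0)

# Two equal 1-um daughters collapse to a 2^{2/3}-um (~1.587-um) cable:
d_eq = equivalent_diameter([1.0, 1.0])
```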
2. Mapping all synapses to the soma. In another reduction scheme, all dendritic synapses are mapped, after implementing cable filtering for each synapse, to the somatic compartment34. Here we used a modified version of this method: we used Neuron_Reduce to generate a multi-cylindrical model of the cell as in Fig. 1b, and then mapped all synapses in the original tree to the model soma. To account for dendritic filtering, for each synapse we multiplied the original synaptic conductance, gsyn, by the steady-state voltage attenuation factor from the original dendritic location, j, of the synapse to the soma. Specifically,
$$g_{{\mathrm{syn}}}^ \ast = g_{{\mathrm{syn}}} \ast \frac{{| {Z_{0,j}} |}}{{| {Z_{0,0}} |}} = g_{{\mathrm{syn}}} \ast \frac{{V_{0,j}}}{{V_{0,0}}},$$(12)
where \(g_{{\mathrm{syn}}}^ \ast\) is the new synaptic weight for synapse j when placed at the soma of the reduced model.
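Eq. (12) amounts to a single multiplicative rescaling per synapse. A short sketch (ours; for a sealed-end cylinder at DC the attenuation from X to the soma end reduces to cosh(L − X)/cosh(L)):

```python
import math

def scale_to_soma(g_syn, Z0j, Z00):
    """Eq. (12): scale a dendritic synapse's peak conductance by the
    steady-state transfer-to-input resistance ratio before placing it
    at the soma."""
    return g_syn * Z0j / Z00

# Illustrative cylinder: synapse at X = 0.6 lambda on a cable of length L = 1:
L, X, R_inf = 1.0, 0.6, 100e6
Z00 = R_inf * math.cosh(L) / math.sinh(L)       # DC input resistance at the soma end
Z0X = R_inf * math.cosh(L - X) / math.sinh(L)   # DC transfer resistance from X
g_star = scale_to_soma(5.0, Z0X, Z00)           # a 5-nS synapse, attenuated
```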
Spatio-temporal patterns of synaptic activation
In Fig. 6, 12 synapses, placed at 25-µm intervals, were distributed along a stretch of one basal dendrite. The peak AMPA conductance per synapse was 5 nS. In cases where the synapses also had an NMDA component, the NMDA-based peak conductance was 3.55 nS. The synapses were activated in a specific temporal order with a time delay of 3.66 ms between successive synapses, corresponding to an input velocity of ~7 µm/ms for the sequential IN and OUT patterns in Fig. 6. In addition, the temporal order of synaptic activation was randomized and scored according to the directionality index52, which counts the number of swaps used by the bubble-sort algorithm to sort a specific temporal pattern into the IN pattern. In this measure, the IN pattern is attributed the value of 0 (no swaps) and the OUT pattern the value of 67 (67 swaps in bubble sort are required to "sort" the OUT pattern into the IN pattern52).
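The directionality index is the bubble-sort swap count, i.e., the number of inversions in the activation order. A minimal sketch (ours; note that a plain inversion count gives n(n − 1)/2 = 66 for a fully reversed 12-input pattern, so the value of 67 quoted above follows the slightly different scoring convention of ref. 52):

```python
def directionality_index(order):
    """Number of adjacent swaps bubble sort needs to turn `order` into the
    ascending (IN) pattern; equals the number of inversions in the sequence."""
    seq = list(order)
    swaps = 0
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                swaps += 1
    return swaps

in_pattern = list(range(12))     # sequential IN order
out_pattern = in_pattern[::-1]   # fully reversed (OUT) order
```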
All simulations were performed using NEURON 7.4–7.720 running on the Blue Brain V supercomputer (HPE SGI 8600 platform) hosted at the Swiss National Computing Center in Lugano, Switzerland. Each compute node was composed of Intel Xeon 6140 CPUs @2.3 GHz and 384 GB DRAM. Analysis and simulations were conducted in Python, and visualization used Matplotlib66.
The Neuron_Reduce algorithm is publicly available on GitHub (http://github.com/orena1/neuron_reduce).
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
All spike times and somatic membrane potentials presented in the article are available upon request.
Code availability
The Neuron_Reduce algorithm and most of the models that were used in the paper are publicly available on GitHub (http://github.com/orena1/neuron_reduce). An interactive example is available as a live paper (https://humanbrainproject.github.io/hbp-bsp-live-papers/2020/amsalem_et_al_2020/amsalem_et_al_2020.html). Software used for visualization of neurons in Fig. 7 is available at https://github.com/BlueBrain/RTNeuron.
References
Rall, W. in Neural Theory Model (ed. Reiss, R. F.) 73–97 (Stanford University Press, Palo Alto, 1964).
Rall, W. Distinguishing theoretical synaptic potentials computed for different soma-dendritic distributions of synaptic input. J. Neurophysiol. 30, 1138–1168 (1967).
Rapp, M., Yarom, Y. & Segev, I. Modeling back propagating action potential in weakly excitable dendrites of neocortical pyramidal cells. Proc. Natl Acad. Sci. USA 93, 11985–11990 (1996).
Larkum, M. E., Nevian, T., Sandler, M., Polsky, A. & Schiller, J. Synaptic integration in tuft dendrites of layer 5 pyramidal neurons: a new unifying principle. Science 325, 756–760 (2009).
Hay, E., Hill, S., Schürmann, F., Markram, H. & Segev, I. Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Comput. Biol. 7, e1002107 (2011).
Almog, M. & Korngreen, A. A quantitative description of dendritic conductances and its application to dendritic excitation in layer 5 pyramidal neurons. J. Neurosci. 34, 182–196 (2014).
Segev, I. Single neurone models: oversimple, complex and reduced. Trends Neurosci. 15, 414–421 (1992).
Stuart, G. & Spruston, N. Determinants of voltage attenuation in neocortical pyramidal neuron dendrites. J. Neurosci. 18, 3501–3510 (1998).
Magee, J. C. & Cook, E. P. Somatic EPSP amplitude is independent of synapse location in hippocampal pyramidal neurons. Nat. Neurosci. 3, 895 (2000).
Poirazi, P., Brannon, T. & Mel, B. W. Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell. Neuron 37, 977–987 (2003).
Gouwens, N. W. et al. Systematic generation of biophysically detailed models for diverse cortical neuron types. Nat. Commun. 9, 710 (2018).
Bahl, A., Stemmler, M. B., Herz, A. V. M. & Roth, A. Automated optimization of a reduced layer 5 pyramidal cell model based on experimental data. J. Neurosci. Methods 210, 22–34 (2012).
Migliore, M., Hoffman, D. A., Magee, J. C. & Johnston, D. Role of an A-type K+ conductance in the back-propagation of action potentials in the dendrites of hippocampal pyramidal neurons. J. Comput. Neurosci. 7, 5–15 (1999).
Segev, I. & London, M. A theoretical view of passive and active dendrites. Dendrites 376, xxi (1999).
Eyal, G. et al. Human cortical pyramidal neurons: from spines to spikes via models. Front. Cell. Neurosci. 12, 181 (2018).
Egger, R., Dercksen, V. J., Udvary, D., Hege, H.-C. & Oberlaender, M. Generation of dense statistical connectomes from sparse morphological data. Front. Neuroanat. 8, 129 (2014).
Markram, H. et al. Reconstruction and simulation of neocortical microcircuitry. Cell 163, 456–492 (2015).
Hawrylycz, M. et al. Inferring cortical function in the mouse visual system through large-scale systems neuroscience. Proc. Natl Acad. Sci. USA 113, 7337–7344 (2016).
Arkhipov, A. et al. Visual physiology of the layer 4 cortical circuit in silico. PLoS Comput. Biol. https://doi.org/10.1371/journal.pcbi.1006535 (2018).
Carnevale, N. T. & Hines, M. L. The NEURON Book (Cambridge University Press, Cambridge, 2006).
Bower, J. M. in The Book of Genesis 195–201 (Springer, New York, 1998).
Gleeson, P., Steuber, V. & Silver, R. A. neuroConstruct: a tool for modeling networks of neurons in 3D space. Neuron 54, 219–235 (2007).
Davison, A. P. PyNN: a common interface for neuronal network simulators. Front. Neuroinform. https://doi.org/10.3389/neuro.11.011.2008 (2008).
Gratiy, S. L. et al. BioNet: a Python interface to NEURON for modeling large-scale networks. PLoS ONE 13, e0201630 (2018).
Kozloski, J. & Wagner, J. An ultrascalable solution to large-scale neural tissue simulation. Front. Neuroinform. 5, 15 (2011).
Dura-Bernal, S. et al. NetPyNE, a tool for data-driven multiscale modeling of brain circuits. eLife https://doi.org/10.7554/eLife.44494 (2019).
Cantarelli, M. et al. Geppetto: a reusable modular open platform for exploring neuroscience data and models. Philos. Trans. R. Soc. Ser. B 373, 20170380 (2018).
Van Geit, W. et al. BluePyOpt: leveraging open source software and cloud infrastructure to optimise model parameters in neuroscience. Front. Neuroinform. 10, 17 (2016).
Schemmel, J., Fieres, J. & Meier, K. in IJCNN 2008 (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on Neural Networks 431–438 (IEEE, 2008).
Aamir, S. A., Muller, P., Hartel, A., Schemmel, J. & Meier, K. A highly tunable 65-nm CMOS LIF neuron for a large scale neuromorphic system. in ESSCIRC Conference 2016: 42nd European Solid-State Circuits Conference 71–74 (IEEE, 2016). https://doi.org/10.1109/ESSCIRC.2016.7598245.
Rall, W. Electrophysiology of a dendritic neuron model. Biophys. J. 2, 145–167 (1962).
Rall, W. & Rinzel, J. Branch input resistance and steady attenuation for input to one branch of a dendritic neuron model. Biophys. J. 13, 648–687 (1973).
Rinzel, J. & Rall, W. Transient response in a dendritic neuron model for current injected at one branch. Biophys. J. 14, 759–790 (1974).
Rössert, C. et al. Automated point-neuron simplification of data-driven microcircuit models. Preprint at https://arxiv.org/abs/1604.00087 (2016).
Stratford, K., Mason, A., Larkman, A., Major, G. & Jack, J. in The Computing Neuron (eds. Durbin, R., Miall, C. & Mitchison, G.) 296–321 (Addison-Wesley Longman Publishing Co., Inc. 1989).
Destexhe, A. Simplified models of neocortical pyramidal cells preserving somatodendritic voltage attenuation. Neurocomputing 38–40, 167–173 (2001).
Hendrickson, E. B., Edgerton, J. R. & Jaeger, D. The capabilities and limitations of conductance-based compartmental neuron models with reduced branched or unbranched morphologies and active dendrites. J. Comput. Neurosci. 30, 301–321 (2011).
Bush, P. C. & Sejnowski, T. J. Reduced compartmental models of neocortical pyramidal cells. J. Neurosci. Methods 46, 159–166 (1993).
Marasco, A., Limongiello, A. & Migliore, M. Fast and accurate low-dimensional reduction of biophysically detailed neuron models. Sci. Rep. 2, 928 (2012).
Hao, J., Wang, X.-D., Dan, Y., Poo, M.-M. & Zhang, X.-H. An arithmetic rule for spatial summation of excitatory and inhibitory inputs in pyramidal neurons. Proc. Natl Acad. Sci. USA 106, 21906–21911 (2009).
Marasco, A. et al. Using Strahler’s analysis to reduce up to 200-fold the run time of realistic neuron models. Sci. Rep. 3, 2934 (2013).
Brown, S. A., Moraru, I. I., Schaff, J. C. & Loew, L. M. Virtual NEURON: a strategy for merged biochemical and electrophysiological modeling. J. Comput. Neurosci. 31, 385–400 (2011).
Koch, C. Biophysics of Computation: Information Processing in Single Neurons (Oxford University Press, 1999).
Kreuz, T., Mulansky, M. & Bozanic, N. SPIKY: a graphical user interface for monitoring spike train synchrony. J. Neurophysiol. 113, 3432–3445 (2015).
Kreuz, T., Bozanic, N. & Mulansky, M. SPIKE—synchronization: a parameter-free and time-resolved coincidence detector with an intuitive multivariate extension. BMC Neurosci. 16, P170 (2015).
Kreuz, T. Measures of spike train synchrony. Scholarpedia 6, 11934 (2011).
Satuvuori, E. & Kreuz, T. Which spike train distance is most suitable for distinguishing rate and temporal coding? J. Neurosci. Methods 299, 22–33 (2018).
Rall, W. et al. Matching dendritic neuron models to experimental data. Physiol. Rev. 72, S159–S186 (1992).
Parnas, I. & Segev, I. A mathematical model for conduction of action potentials along bifurcating axons. J. Physiol. 295, 323–343 (1979).
Larkum, M. E., Zhu, J. J. & Sakmann, B. A new cellular mechanism for coupling inputs arriving at different cortical layers. Nature 398, 338–341 (1999).
Anderson, J. C., Binzegger, T., Kahana, O., Martin, K. A. C. & Segev, I. Dendritic asymmetry cannot account for directional responses of neurons in visual cortex. Nat. Neurosci. 2, 820 (1999).
Branco, T., Clark, B. A. & Häusser, M. Dendritic discrimination of temporal input sequences in cortical neurons. Science 329, 1671–1675 (2010).
Ramaswamy, S. et al. The neocortical microcircuit collaboration portal: a resource for rat somatosensory cortex. Front. Neural Circuits 9, 44 (2015).
Lindroos, R. et al. Basal ganglia neuromodulation over multiple temporal and structural scales—simulations of direct pathway MSNs investigate the fast onset of dopaminergic effects and predict the role of Kv4.2. Front. Neural Circuits https://doi.org/10.3389/fncir.2018.00003 (2018).
Iavarone, E. et al. Experimentally-constrained biophysical models of tonic and burst firing modes in thalamocortical neurons. PLoS Comput. Biol. https://doi.org/10.1371/journal.pcbi.1006753 (2019).
Migliore, R. et al. The physiological variability of channel density in hippocampal CA1 pyramidal cells and interneurons explored using a unified data-driven modeling workflow. PLoS Comput. Biol. https://doi.org/10.1371/journal.pcbi.1006423 (2018).
Amsalem, O., Van Geit, W., Muller, E., Markram, H. & Segev, I. From neuron biophysics to orientation selectivity in electrically coupled networks of neocortical L2/3 large basket cells. Cereb. Cortex 26, 3655–3668 (2016).
Eyal, G. et al. Unique membrane properties and enhanced signal processing in human neocortical neurons. Elife 5, e16553 (2016).
Koch, C., Poggio, T. & Torres, V. Retinal ganglion cells: a functional interpretation of dendritic morphology. Philos. Trans. R. Soc. Ser. B 298, 227–263 (1982).
Smith, S. L., Smith, I. T., Branco, T. & Häusser, M. Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. Nature 503, 115–120 (2013).
Billeh, Y. N. et al. Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. SSRN Electron. J. https://doi.org/10.2139/ssrn.3416643 (2019).
Rall, W. & Segev, I. in Voltage and Patch Clamping with Microelectrodes 191–215 (Springer, 2013). https://doi.org/10.1007/978-1-4614-7601-6_9.
Carnevale, N. T., Tsai, K. Y., Claiborne, B. J. & Brown, T. H. in Advances in Neural Information Processing Systems 7th edn (eds. Tesauro, G., Touretzky, D. S. & Leen, T. K.) 69–76 (MIT Press, 1995).
Mulansky, M. & Kreuz, T. PySpike—a Python library for analyzing spike train synchrony. SoftwareX 5, 183–189 (2016).
Jolivet, R., Lewis, T. J. & Gerstner, W. Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J. Neurophysiol. 92, 959–976 (2004).
Hunter, J. D. Matplotlib: a 2D graphics environment. Comput. Sci. Eng. 9, 90–95 (2007).
Acknowledgements
We thank Gal Eliraz for her early work on the reduction method and Mickey London for advising us along this project. This study received funding from the European Union’s Horizon 2020 Framework Program for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2), the ETH domain for the Blue Brain Project, the Gatsby Charitable Foundation, and the NIH Grant Agreement U01MH114812.
Author information
Authors and Affiliations
Contributions
I.S. proposed the principle theoretical idea for the Neuron_Reduce scheme. I.S., O.A., G.E. and N.R. extended the original idea, planned, and designed the study. O.A., G.E. and N.R. implemented the Neuron_Reduce simulations. F.S. and P.K. assisted with the detailed benchmarking of Neuron_Reduce. M.G. helped refactor the tool to increase its usability, maintainability, and comprehensibility. All authors wrote the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Amsalem, O., Eyal, G., Rogozinski, N. et al. An efficient analytical reduction of detailed nonlinear neuron models. Nat Commun 11, 288 (2020). https://doi.org/10.1038/s41467-019-13932-6