Integrated Information in the Spiking–Bursting Stochastic Model
Figure 1. Typical time traces of the spiking–bursting model containing $N = 4$ nodes (“neurons”) in discrete time. Plots for different neurons are shown with different constant shifts along the ordinate axis. Two bursts (marked) and background uncorrelated spiking dynamics are visible.
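For readers who want to reproduce traces of this kind, the sketch below generates discrete-time spiking–bursting series under simplifying assumptions of our own rather than the exact parameterization of Section 3: a global two-state (background/burst) Markov chain with illustrative switching probabilities p_burst_on and p_burst_off, all neurons spiking during a burst, and independent Bernoulli spiking with probability s1 in the background state. These names are hypothetical stand-ins for the bursting parameters $p_s$, $\epsilon$ and the spiking intensity used in the paper.

```python
import numpy as np

def simulate_spiking_bursting(T=1000, N=4, s1=0.2,
                              p_burst_on=0.01, p_burst_off=0.3, seed=None):
    """Illustrative spiking-bursting generator (our simplification, not the
    paper's exact parameterization). A global two-state Markov chain switches
    between background (0) and burst (1); during a burst every neuron spikes,
    otherwise each neuron spikes independently with probability s1."""
    rng = np.random.default_rng(seed)
    x = np.zeros((T, N), dtype=int)   # one row per time step, one column per neuron
    burst = 0                         # global burst state
    for t in range(T):
        if burst == 0:
            burst = int(rng.random() < p_burst_on)    # burst onset
        else:
            burst = int(rng.random() >= p_burst_off)  # burst persists or ends
        if burst:
            x[t, :] = 1                               # system-wide burst: all neurons spike
        else:
            x[t, :] = rng.random(N) < s1              # uncorrelated background spiking
    return x

if __name__ == "__main__":
    x = simulate_spiking_bursting(seed=0)
    print(x[:10])  # first 10 time steps of the four binary spike trains
```

Plotting each column with a constant vertical offset then gives a picture in the style of Figure 1.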
Figure 2. Blue solid lines—plots of $I_0(p_s, \epsilon)$ versus $p_s$ varied from 0 to $p_{s\,\mathrm{max}}$ as per (15), at $\epsilon = 0.01$, 0.1, 0.2, 0.5, 1 (from right to left). The function $I_0(p_s, \epsilon)$ is a universal function of two arguments; it is expressed explicitly in elementary functions in (21b) and allows one to express the mutual information (18) and the effective information (3) in terms of the model parameters. Red dashed lines—approximation (35). Red dots—upper bounds of the applicability range of the approximation (36c).
Figure 3. Plots of $f(s)$ on $s_1 < s < 1$ for several values of $s_1$ (as indicated) at $p_s = 0.7$, $\epsilon = 0.1$. According to (25a,b), $f(s)$ shows the dependence of the effective information $\Phi_{\mathrm{eff}}$ upon the choice of the bipartition $AB$, which is characterized by the value of $s_A = s$, while the function parameter $s_1$ determines the intensity of spontaneous spiking activity. For each value of $s_1$, the extremum $\bigl(\sqrt{s_1},\, f(\sqrt{s_1})\bigr)$ is indicated with a dot.
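One way to read the marked extremum (this is our inference from the plotting range $s_1 < s < 1$ and the position of the marked point, not an equation quoted from the paper) is that the bipartition parameters multiply to the whole-system value, in which case the symmetric split lands exactly at the dot:

$$ s_A\, s_B = s_1, \qquad s_A = s_B \;\Longrightarrow\; s_A = s_B = \sqrt{s_1}. $$

Under this reading, the dots in Figure 3 mark the symmetric bipartition for each curve.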
Figure 4. Graphs of the threshold $s_1^{\min}$ determining the minimal sufficient intensity of spontaneous neuronal spiking activity for positive II. (a) Blue solid lines—plots of $s_1^{\min}(p_s, \epsilon)$ versus $p_s$ varied from 0 to $p_{s\,\mathrm{max}}$ as per (15), at $\epsilon = 0.1$, 0.5, 1 (from right to left). Red dashed line—plot of the asymptotic formula (38). (b) Blue solid lines—plots of $s_1^{\min}(p_s, \epsilon)$ versus $\epsilon$ varied from 0 to $\epsilon_{\max}$ as per (16), at $p_s = 0.5$, 0.6, 0.7 (from top to bottom). The vertical position of the red dashed lines is given by (38); their horizontal span denotes the estimated applicability range (36b).
Figure 5. Comparison of two versions of empirical effective information for the symmetric bipartition: the “whole-minus-sum” measure $\Phi_{\mathrm{eff}}$ (3) from [13] (blue lines) and the “decoder-based” information $\Phi^*$ (5) from [16] (red lines), versus the spiking activity parameter $s_1$ at various fixed values of the bursting component parameters $p_s$ (indicated on top of the panels) and $\epsilon$ (indicated in the legends). Panel (a)—unnormalized values; panels (b–d)—values normalized by $\epsilon^2$. The threshold $s_1^{\min}$ calculated according to (38) is shown in each panel with an additional vertical grid line.
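To make the comparison concrete, the following sketch estimates a whole-minus-sum quantity from binary time series for one bipartition. It is our plug-in reading in the spirit of Equation (3) and [13], namely $I(\text{whole}_t;\text{whole}_{t+1}) - I(A_t;A_{t+1}) - I(B_t;B_{t+1})$ with a one-step time lag and maximum-likelihood (plug-in) mutual information estimates; the exact definitions (3) and (5), including the decoder-based $\Phi^*$, are given in Section 2 and are not reproduced here.

```python
import numpy as np
from collections import Counter

def plugin_mutual_information(past, future):
    """Plug-in (maximum-likelihood) estimate of I(past; future) in bits
    for discrete, hashable samples (here: tuples of binary neuron states)."""
    n = len(past)
    joint = Counter(zip(past, future))
    marg_past = Counter(past)
    marg_future = Counter(future)
    mi = 0.0
    for (a, b), c in joint.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) ) with counts c, marg_past[a], marg_future[b]
        mi += (c / n) * np.log2(c * n / (marg_past[a] * marg_future[b]))
    return mi

def whole_minus_sum_phi(x, part_a, part_b, lag=1):
    """Illustrative 'whole-minus-sum' effective information for the bipartition
    (part_a, part_b): I(whole_t; whole_{t+lag}) - I(A_t; A_{t+lag}) - I(B_t; B_{t+lag}).
    This is our reading in the spirit of Equation (3) and [13]; the paper's exact
    definition may differ, e.g. in the choice of time lag."""
    def lagged(cols):
        past = [tuple(row) for row in x[:-lag, cols]]
        future = [tuple(row) for row in x[lag:, cols]]
        return past, future
    i_whole = plugin_mutual_information(*lagged(list(range(x.shape[1]))))
    i_a = plugin_mutual_information(*lagged(part_a))
    i_b = plugin_mutual_information(*lagged(part_b))
    return i_whole - i_a - i_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # i.i.d. surrogate with no temporal structure: expect a value near zero
    # (small deviations come from finite-sample plug-in bias).
    x = (rng.random((20000, 4)) < 0.5).astype(int)
    print(whole_minus_sum_phi(x, part_a=[0, 1], part_b=[2, 3]))
```

Feeding in spike trains from the generator sketched after Figure 1 and sweeping its background spiking probability would give curves qualitatively in the spirit of the blue lines in Figure 5, though with our illustrative parameters rather than the paper's $(p_s, \epsilon, s_1)$.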
Abstract
1. Introduction
2. Definition of II Measures in Use
3. Spiking–Bursting Stochastic Model
4. Model Parameter Scaling
5. Analysis of the Empirical “Whole Minus Sum” Measure for the Spiking–Bursting Process
5.1. Expressing the “Whole Minus Sum” Information
5.2. Determining the Sign of the “Whole Minus Sum” Information
5.3. Asymptotics for Weak Correlations in Time
6. Comparison of Integrated Information Measures
- transitions from negative to positive values at a certain threshold value of $s_1$, which is well approximated by formula (38) when $\epsilon$ is small, as required by (36a,b); the result of Equation (38) is indicated in each panel of Figure 5 by an additional vertical grid line labeled $s_1^{\min}$ on the abscissa axis—cf. Figure 4;
- reaches a maximum on the interval $s_1^{\min} < s_1 < 1$ and tends to zero (from above) as $s_1 \to 1$;
- scales with $\epsilon$ as $\epsilon^2$ when (36a,b) hold (a reading of this scaling is spelled out below).
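The normalization used in panels (b–d) of Figure 5 is a direct consequence of the last property. In our paraphrase (the limiting function $\varphi$ below is a placeholder symbol, not notation taken from the paper), the scaling statement reads

$$ \Phi_{\mathrm{eff}}(s_1, p_s, \epsilon) = \epsilon^{2}\,\varphi(s_1, p_s) + o(\epsilon^{2}) \quad \text{as } \epsilon \to 0, $$

so that $\Phi_{\mathrm{eff}}/\epsilon^{2} \to \varphi(s_1, p_s)$, and the normalized curves for different sufficiently small $\epsilon$ should approximately collapse onto a single limiting curve, which is what Figure 5b–d makes visible.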
7. Discussion
8. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A. Derivation of the Parameter Scaling of the Spiking–Bursting Model
Appendix B. Expressing Mutual Information for the Spiking–Bursting Process
Appendix C. Expanding $I_0$ in Powers of $\epsilon$
References
1. Tononi, G. An information integration theory of consciousness. BMC Neurosci. 2004, 5, 42.
2. Balduzzi, D.; Tononi, G. Integrated information in discrete dynamical systems: Motivation and theoretical framework. PLoS Comput. Biol. 2008, 4, e1000091.
3. Tononi, G. The integrated information theory of consciousness: An updated account. Arch. Ital. Biol. 2012, 150, 293–329.
4. Oizumi, M.; Albantakis, L.; Tononi, G. From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0. PLoS Comput. Biol. 2014, 10, e1003588.
5. Tononi, G. Consciousness as integrated information: A provisional manifesto. Biol. Bull. 2008, 215, 216–242.
6. Peressini, A. Consciousness as Integrated Information: A Provisional Philosophical Critique. J. Conscious. Stud. 2013, 20, 180–206.
7. Tsuchiya, N.; Taguchi, S.; Saigo, H. Using category theory to assess the relationship between consciousness and integrated information theory. Neurosci. Res. 2016, 107, 1–7.
8. Tononi, G.; Boly, M.; Massimini, M.; Koch, C. Integrated information theory: From consciousness to its physical substrate. Nat. Rev. Neurosci. 2016, 17, 450.
9. Norman, R.; Tamulis, A. Quantum Entangled Prebiotic Evolutionary Process Analysis as Integrated Information: From the origins of life to the phenomenon of consciousness. J. Comput. Theor. Nanosci. 2017, 14, 2255–2267.
10. Engel, D.; Malone, T.W. Integrated information as a metric for group interaction. PLoS ONE 2018, 13, e0205335.
11. Mediano, P.A.M.; Farah, J.C.; Shanahan, M. Integrated Information and Metastability in Systems of Coupled Oscillators. arXiv 2016, arXiv:1606.08313.
12. Toker, D.; Sommer, F.T. Information integration in large brain networks. PLoS Comput. Biol. 2019, 15, e1006807.
13. Barrett, A.B.; Seth, A.K. Practical measures of integrated information for time-series data. PLoS Comput. Biol. 2011, 7, e1001052.
14. Griffith, V. A Principled Infotheoretic ϕ-like Measure. arXiv 2014, arXiv:1401.0978.
15. Oizumi, M.; Tsuchiya, N.; Amari, S.-I. Unified framework for information integration based on information geometry. Proc. Natl. Acad. Sci. USA 2016, 113, 14817–14822.
16. Oizumi, M.; Amari, S.-I.; Yanagawa, T.; Fujii, N.; Tsuchiya, N. Measuring integrated information from the decoding perspective. PLoS Comput. Biol. 2016, 12, e1004654.
17. Tegmark, M. Improved measures of integrated information. PLoS Comput. Biol. 2016, 12, e1005123.
18. Mediano, P.; Seth, A.; Barrett, A. Measuring integrated information: Comparison of candidate measures in theory and simulation. Entropy 2019, 21, 17.
19. Kanakov, O.; Gordleeva, S.; Ermolaeva, A.; Jalan, S.; Zaikin, A. Astrocyte-induced positive integrated information in neuron-astrocyte ensembles. Phys. Rev. E 2019, 99, 012418.
20. Araque, A.; Carmignoto, G.; Haydon, P.G.; Oliet, S.H.; Robitaille, R.; Volterra, A. Gliotransmitters Travel in Time and Space. Neuron 2014, 81, 728–739.
21. Gordleeva, S.Y.; Stasenko, S.V.; Semyanov, A.V.; Dityatev, A.E.; Kazantsev, V.B. Bi-directional astrocytic regulation of neuronal activity within a network. Front. Comput. Neurosci. 2012, 6, 92.
22. Pankratova, E.V.; Kalyakulina, A.I.; Stasenko, S.V.; Gordleeva, S.Y.; Lazarevich, I.A.; Kazantsev, V.B. Neuronal synchronization enhanced by neuron–astrocyte interaction. Nonlinear Dyn. 2019, 97, 647–662.
23. Gordleeva, S.Y.; Lebedev, S.A.; Rumyantseva, M.A.; Kazantsev, V.B. Astrocyte as a Detector of Synchronous Events of a Neural Network. JETP Lett. 2018, 107, 440–445.
24. Gordleeva, S.Y.; Ermolaeva, A.V.; Kastalskiy, I.A.; Kazantsev, V.B. Astrocyte as Spatiotemporal Integrating Detector of Neuronal Activity. Front. Physiol. 2019, 10.
25. Barrett, A.B. Exploration of synergistic and redundant information sharing in static and dynamical Gaussian systems. Phys. Rev. E 2015, 91, 052802.
26. Kazantsev, V.B.; Asatryan, S.Y. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks. Phys. Rev. E 2011, 84, 031913.
27. Esir, P.M.; Gordleeva, S.Y.; Simonov, A.Y.; Pisarchik, A.N.; Kazantsev, V.B. Conduction delays can enhance formation of up and down states in spiking neuronal networks. Phys. Rev. E 2018, 98.
28. Andreev, A.V.; Ivanchenko, M.V.; Pisarchik, A.N.; Hramov, A.E. Stimulus classification using chimera-like states in a spiking neural network. Chaos Solitons Fractals 2020, 139, 110061.
29. Lobov, S.A.; Mikhaylov, A.N.; Shamshin, M.; Makarov, V.A.; Kazantsev, V.B. Spatial Properties of STDP in a Self-Learning Spiking Neural Network Enable Controlling a Mobile Robot. Front. Neurosci. 2020, 14.
30. Makovkin, S.Y.; Shkerin, I.V.; Gordleeva, S.Y.; Ivanchenko, M.V. Astrocyte-induced intermittent synchronization of neurons in a minimal network. Chaos Solitons Fractals 2020, 138, 109951.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Citation: Kanakov, O.; Gordleeva, S.; Zaikin, A. Integrated Information in the Spiking–Bursting Stochastic Model. Entropy 2020, 22, 1334. https://doi.org/10.3390/e22121334