Implementation of Kalman Filtering with Spiking Neural Networks
"> Figure 1
<p>(<b>a</b>) Membrane voltage <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi>m</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and spike voltage <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi>s</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> of an LIF neuron for an excitatory input current of <math display="inline"><semantics> <mrow> <msub> <mi>I</mi> <mrow> <mi>s</mi> <mi>y</mi> <mi>n</mi> </mrow> </msub> <mo>=</mo> <mn>1.5001</mn> <mi>n</mi> <mi>A</mi> </mrow> </semantics></math>. (<b>b</b>) Tuning curve of the neuron, which shows the <span class="html-italic">riobase</span> value for the parameters given in <a href="#sensors-22-08845-t001" class="html-table">Table 1</a>.</p> "> Figure 2
Figure 2. (a) An SNN with three LIF neurons in the input layer and one neuron in the output layer. (b) Spiking activity of the first layer. (c) Evolution of the synaptic weight. (d) Neural activity (input current, membrane voltage, and spike voltage) of the output neuron.
Figure 3. Signal reconstruction using neurons and encoding/decoding algorithms. (a) Assembly of the encoder/decoder, which alternates the input currents of two different neurons. (b) Comparison between the original signal $x(t)$ and the reconstructed signal $\hat{x}(t)$. (c) Spiking activity response of each neuron. (d) Input currents $I_{syn}^{+}, I_{syn}^{-}$ for the neurons (blue) versus the rheobase (red dotted). (e) Output spikes for each neuron in the assembly.
Figure 4. Block diagram of the Kalman filter in which the usual procedure for obtaining the Kalman gain is replaced by an SNN.
Figure 5. Proposed SNN architecture for finding the values of the Kalman gain matrix.
Figure 6. Time evolution of the reconstruction of the Van der Pol oscillator using the proposed architecture, compared with the ground truth. (a) Ground truth $x$ (blue) versus the reconstruction $\hat{x}$ (orange) produced by the proposed architecture. (b) Reconstruction error $x - \hat{x}$ over time for the two states of the system. (c) Time evolution of each entry of the resulting Kalman gain matrix. (d,e) Weight evolution over time of the $3 \times 2$ synapse set (multiple colors) for $Ens+$ and $Ens-$, respectively. (f) State-estimation error over time for the Van der Pol system using the standard discrete EKF algorithm without knowledge of the covariance matrices $Q, R$.
Figure 7. Time evolution of the reconstruction of the Lorenz system using the proposed architecture, compared with the ground truth. (a) Ground truth $x$ (blue) versus the reconstruction $\hat{x}$ (orange) produced by the proposed SNN architecture. (b) Reconstruction error $x - \hat{x}$ over time for the three states of the Lorenz system. (c) Evolution of each entry of the Kalman gain matrix. (d,e) Weight evolution over time of the $4 \times 3$ synapse set (multiple colors) for $Ens+$ and $Ens-$, respectively. (f) State-estimation error over time for the Lorenz system using the standard discrete EKF algorithm without knowledge of the covariance matrices $Q, R$.
Abstract
1. Introduction
2. Materials and Methods
2.1. Neuron Modeling
Frequency Response of the Neuron
2.2. Synapse Modeling
2.3. Reward-Modulated STDP (RSTDP)
2.4. Encoding and Decoding in Spiking Neural Networks
Encoding Algorithm
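Figure 3 shows the encoder/decoder assembly splitting a signal into positive and negative spike channels, and Table 1 lists SF encoding/decoding threshold parameters. The paper's exact algorithm is not reproduced here, so the following is a minimal Step-Forward-style sketch under that assumption; the function names and the test signal are illustrative only.

```python
import numpy as np

def sf_encode(x, thr):
    """Step-Forward-style encoding sketch: emit a +1 (-1) event whenever
    the signal rises above (falls below) a moving baseline by thr."""
    base = x[0]
    events = np.zeros(len(x), dtype=int)
    for t in range(1, len(x)):
        if x[t] > base + thr:
            events[t] = 1      # would drive the positive-channel neuron (I_syn+)
            base += thr
        elif x[t] < base - thr:
            events[t] = -1     # would drive the negative-channel neuron (I_syn-)
            base -= thr
    return events

def sf_decode(events, x0, thr):
    """Reconstruct x_hat(t) by accumulating one threshold step per event."""
    return x0 + thr * np.cumsum(events)

# Illustrative check: reconstruct a sine wave, as in Figure 3b.
x = np.sin(np.linspace(0, 2 * np.pi, 500))
x_hat = sf_decode(sf_encode(x, thr=0.05), x0=x[0], thr=0.05)
```

The reconstruction error of such a scheme is bounded by the encoding threshold, which is why Table 1 reports separate sensitivity thresholds for each test system.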
2.5. Discrete Extended Kalman Filter
1. Prediction: First, a preliminary estimate $\hat{x}_k^-$ is computed by
   $$\hat{x}_k^- = f(\hat{x}_{k-1}, u_{k-1}).$$
   Then, a covariance estimate is computed, taking into account the process-noise covariance matrix $Q$ and the covariance estimate $P_{k-1}$ from the previous timestep:
   $$P_k^- = F_k P_{k-1} F_k^{\top} + Q, \qquad F_k = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1},\, u_{k-1}}.$$
2. Update: The second step consists of computing the Kalman gain matrix with
   $$K_k = P_k^- H_k^{\top}\left(H_k P_k^- H_k^{\top} + R\right)^{-1}, \qquad H_k = \left.\frac{\partial h}{\partial x}\right|_{\hat{x}_k^-},$$
   where $R$ is the measurement-noise covariance matrix. With the gain, we obtain a final estimate that accounts for measurement errors and noise statistics:
   $$\hat{x}_k = \hat{x}_k^- + K_k\left(z_k - h(\hat{x}_k^-)\right).$$
   Finally, the covariance of the estimate $P_k$, which will be used in the prediction step of the next timestep, is computed (a NumPy sketch of the full iteration follows this list):
   $$P_k = (I - K_k H_k)\, P_k^-.$$
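As a reference, the prediction/update iteration above condenses into a few lines of NumPy. This is a minimal sketch of the standard discrete EKF; the callables `f`, `h`, `F_jac`, and `H_jac` are assumed interfaces, not names from the paper. In the proposed architecture (Figure 4), the line computing the gain `K` is what the SNN replaces, which is why that filter can operate without knowledge of $Q$ and $R$.

```python
import numpy as np

def ekf_step(x_hat, P, z, u, f, h, F_jac, H_jac, Q, R):
    """One prediction/update cycle of the standard discrete EKF.

    x_hat, P     : previous state estimate and its covariance
    z, u         : current measurement and control input
    f, h         : process and measurement models (assumed signatures f(x, u), h(x))
    F_jac, H_jac : callables returning the Jacobians of f and h
    Q, R         : process and measurement noise covariance matrices
    """
    # Prediction: preliminary estimate and covariance estimate
    x_pred = f(x_hat, u)
    F = F_jac(x_hat, u)
    P_pred = F @ P @ F.T + Q

    # Update: Kalman gain, corrected estimate, and corrected covariance
    H = H_jac(x_pred)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred
    return x_new, P_new
```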
2.6. Proposed Kalman-Filtering SNN Structure
3. Results
3.1. Van der Pol Simulation
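Figure 6 reports the reconstruction of a Van der Pol oscillator from noisy measurements. A minimal sketch for generating the ground truth and its noisy observations is given below; the damping parameter, step size, noise level, and initial condition are placeholders rather than the paper's settings.

```python
import numpy as np

def van_der_pol(state, mu=1.0):
    """Van der Pol dynamics: x1' = x2, x2' = mu * (1 - x1^2) * x2 - x1."""
    x1, x2 = state
    return np.array([x2, mu * (1.0 - x1**2) * x2 - x1])

rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 20000
sigma = 0.1                      # placeholder measurement-noise std (AWGN)
x = np.array([2.0, 0.0])         # placeholder initial condition
truth, meas = [], []
for _ in range(n_steps):
    x = x + dt * van_der_pol(x)                        # forward-Euler ground truth
    truth.append(x)
    meas.append(x + rng.normal(0.0, sigma, size=2))    # noisy observation z_k
```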
3.2. Lorenz System Simulation
4. Discussion
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| ANN | Artificial Neural Network |
| AWGN | Additive White Gaussian Noise |
| CBA | Crossbar Array |
| CMOS | Complementary Metal-Oxide-Semiconductor |
| EKF | Extended Kalman Filter |
| GPU | Graphics Processing Unit |
| GRU | Gated Recurrent Unit |
| IBM | International Business Machines |
| IK | Inverse Kinematics |
| KF | Kalman Filter |
| LIF | Leaky Integrate and Fire |
| LTD | Long-Term Depression |
| LTP | Long-Term Potentiation |
| PINN | Physics-Informed Neural Network |
| RSTDP | Reward-Modulated STDP |
| SINDy | Sparse Identification of Nonlinear Dynamics |
| SNN | Spiking Neural Network |
| STDP | Spike-Timing-Dependent Plasticity |
| TSMC | Taiwan Semiconductor Manufacturing Company |
| VLSI | Very Large Scale Integration |
References
| LIF Model | Parameter Value |
|---|---|
| Membrane charging constant | ms |
| Membrane resistance | MΩ |
| Capacitance of the neuron | nF |
| Threshold voltage of the neuron | mV |
| Resting potential of the neuron | mV |
| Reset potential of the neuron | mV |
| Spike amplitude | mV |
| Postsynaptic current decay time | ms |
| Refractory period | ms |
| **Conductance-Based LIF** | |
| Time decay of the injection current | ms |
| Temporal injection current constant | |

| RSTDP Synapse Model | Parameter Value |
|---|---|
| Long-term potentiation constant | S/ms |
| Long-term depression constant | S/ms |
| Transient memory decay time | ms |
| Max. conductance value | mS |
| Min. conductance value | S |
| **SF Encoding and Decoding** | |
| Encoding sensitivity threshold value in the Van der Pol test | |
| Encoding sensitivity threshold value in the Lorenz test | |
| Decoding sensitivity threshold value in both tests | |
| Slope modulation constant | |
| **Noise Parameters** | |
| Measurement noise standard deviation | |
| System uncertainty standard deviation | |
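As a reference for how the parameters in the first block above enter the neuron model, here is a minimal forward-Euler LIF sketch. The numeric values are generic placeholders chosen only so the example runs; they are not the values from this table, and the conductance-based injection stage is omitted.

```python
import numpy as np

# Placeholder values for illustration only -- not this table's settings.
tau_m   = 10e-3    # membrane charging constant, s
R_m     = 10e6     # membrane resistance, ohm
v_rest  = -70e-3   # resting potential, V
v_reset = -65e-3   # reset potential, V
v_th    = -50e-3   # threshold voltage, V
t_ref   = 2e-3     # refractory period, s
dt      = 0.1e-3   # Euler integration step, s

def lif(I_syn, n_steps):
    """Forward-Euler integration of tau_m * dv/dt = -(v - v_rest) + R_m * I_syn."""
    v, refr = v_rest, 0.0
    v_trace = np.empty(n_steps)
    spikes = np.zeros(n_steps, dtype=bool)
    for k in range(n_steps):
        if refr > 0:
            refr -= dt                                  # hold during refractory period
        else:
            v += (dt / tau_m) * (-(v - v_rest) + R_m * I_syn)
            if v >= v_th:                               # threshold crossed: spike, reset
                spikes[k], v, refr = True, v_reset, t_ref
        v_trace[k] = v
    return v_trace, spikes

# Driving the neuron just above the rheobase (v_th - v_rest) / R_m = 2 nA for
# these placeholder values makes it fire periodically, as in Figure 1a.
v, s = lif(I_syn=2.1e-9, n_steps=5000)
```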