
Research Article · DOI: 10.1145/3372799.3394364

Compiling Spiking Neural Networks to Neuromorphic Hardware

Published: 16 June 2020

Abstract

Machine learning applications implemented with a spike-based computation model, e.g., a Spiking Neural Network (SNN), have great potential to lower energy consumption when executed on neuromorphic hardware. However, compiling and mapping an SNN to the hardware is challenging, especially when the compute and storage resources of the hardware (viz., its crossbars) must be shared among the neurons and synapses of the SNN. We propose an approach to analyze and compile SNNs on resource-constrained neuromorphic hardware, providing guarantees on key performance metrics such as execution time and throughput. Our approach makes three key contributions. First, we propose a greedy technique to partition an SNN into clusters of neurons and synapses such that each cluster fits onto the resources of a crossbar. Second, we exploit the rich semantics and expressiveness of Synchronous Dataflow Graphs (SDFGs) to represent a clustered SNN and analyze its performance using Max-Plus Algebra, considering the available compute and storage capacities, buffer sizes, and communication bandwidth. Third, we propose a fast, self-timed-execution-based technique to compile and admit SNN-based applications to neuromorphic hardware at run time, adapting dynamically to the resources available on the hardware. We evaluate our approach with standard SNN-based applications and demonstrate a significant performance improvement compared to current practices.
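
To make the clustering and analysis steps concrete, here is a minimal sketch in Python of two of the ideas the abstract describes: greedily packing an SNN's synapses into crossbar-sized clusters, and bounding the throughput of the resulting cluster graph with a Max-Plus-style maximum cycle mean. The crossbar port limits, the per-cluster cost model, the toy SNN, and the helper names (greedy_cluster, max_cycle_mean) are illustrative assumptions, not the paper's actual algorithm or tooling.

```python
# Illustrative sketch only: greedy SNN-to-crossbar clustering and a
# Max-Plus-style throughput bound (maximum cycle mean). The capacities,
# cost model, and example graph are assumptions for exposition.
from collections import defaultdict

# An SNN as a list of synapses (pre_neuron, post_neuron).
synapses = [(0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (4, 0)]

CROSSBAR_INPUTS = 3   # assumed per-crossbar fan-in (rows)
CROSSBAR_OUTPUTS = 3  # assumed per-crossbar fan-out (columns)

def greedy_cluster(synapses, max_in, max_out):
    """Pack synapses into clusters so that each cluster's distinct pre- and
    post-synaptic neurons fit on the input/output ports of one crossbar."""
    clusters = []
    for pre, post in synapses:
        for c in clusters:
            if len(c["pre"] | {pre}) <= max_in and len(c["post"] | {post}) <= max_out:
                c["pre"].add(pre)
                c["post"].add(post)
                c["synapses"].append((pre, post))
                break
        else:  # no existing cluster has room: open a new one
            clusters.append({"pre": {pre}, "post": {post}, "synapses": [(pre, post)]})
    return clusters

def max_cycle_mean(edges, exec_time):
    """Maximum cycle mean of the cluster (actor) graph, assuming one initial
    token per channel: the maximum over simple cycles of (sum of actor
    execution times on the cycle) / (number of edges on the cycle).
    Self-timed throughput is bounded by 1 / MCM, the Max-Plus eigenvalue."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    best = 0.0

    def dfs(start, node, path):
        nonlocal best
        for nxt in adj[node]:
            if nxt == start:                       # closed a simple cycle
                best = max(best, sum(exec_time[c] for c in path) / len(path))
            elif nxt > start and nxt not in path:  # canonical form: start is the smallest node
                dfs(start, nxt, path + [nxt])

    for start in list(adj):
        dfs(start, start, [start])
    return best

clusters = greedy_cluster(synapses, CROSSBAR_INPUTS, CROSSBAR_OUTPUTS)

# Inter-cluster channel whenever a neuron driven (post) in one cluster is an
# input (pre) of another; execution times follow an assumed linear cost model.
edges = [(i, j) for i, ci in enumerate(clusters) for j, cj in enumerate(clusters)
         if i != j and ci["post"] & cj["pre"]]
exec_time = {i: 1.0 + 0.5 * len(c["synapses"]) for i, c in enumerate(clusters)}

mcm = max_cycle_mean(edges, exec_time)
throughput = (1.0 / mcm) if mcm > 0 else float("inf")
print(f"{len(clusters)} clusters, throughput bound = {throughput:.3f} iterations per unit time")
```

On this toy SNN the greedy pass produces two clusters, and 1/MCM bounds the steady-state iteration rate a self-timed schedule of those clusters can achieve; the paper's analysis additionally accounts for buffer sizes and communication bandwidth, which this sketch omits.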

Supplementary Material

MP4 File (3372799.3394364.mp4)
Presentation Video


Information

Published In

LCTES '20: The 21st ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems
June 2020
163 pages
ISBN: 9781450370943
DOI: 10.1145/3372799

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 16 June 2020


Author Tags

  1. compiler
  2. data flow
  3. machine learning
  4. neuromorphic computing
  5. spiking neural network

Qualifiers

  • Research-article

Conference

LCTES '20

Acceptance Rates

Overall Acceptance Rate 116 of 438 submissions, 26%

Article Metrics

  • Downloads (Last 12 months): 174
  • Downloads (Last 6 weeks): 13
Reflects downloads up to 19 Nov 2024

Cited By

  • (2024) Hierarchical Mapping of Large-Scale Spiking Convolutional Neural Networks Onto Resource-Constrained Neuromorphic Processor. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 43(5), 1442-1455. DOI: 10.1109/TCAD.2023.3344070. Online publication date: May 2024.
  • (2024) Neuromorphic intermediate representation: A unified instruction set for interoperable brain-inspired computing. Nature Communications, 15(1). DOI: 10.1038/s41467-024-52259-9. Online publication date: 16 Sep 2024.
  • (2023) From Brain Models to Robotic Embodied Cognition: How Does Biological Plausibility Inform Neuromorphic Systems? Brain Sciences, 13(9), 1316. DOI: 10.3390/brainsci13091316. Online publication date: 13 Sep 2023.
  • (2023) GMap: An Open-source Efficient Compiler for Mapping any Network onto any Neuromophic Chip. Proceedings of the 2023 International Conference on Neuromorphic Systems, 1-4. DOI: 10.1145/3589737.3605997. Online publication date: 1 Aug 2023.
  • (2023) Hardware-Software Co-Design for On-Chip Learning in AI Systems. Proceedings of the 28th Asia and South Pacific Design Automation Conference, 624-631. DOI: 10.1145/3566097.3568359. Online publication date: 16 Jan 2023.
  • (2023) Preserving Privacy of Neuromorphic Hardware From PCIe Congestion Side-Channel Attack. 2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC), 689-698. DOI: 10.1109/COMPSAC57700.2023.00094. Online publication date: Jun 2023.
  • (2023) Platform-Based Design of Embedded Neuromorphic Systems. Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing, 337-358. DOI: 10.1007/978-3-031-19568-6_12. Online publication date: 1 Oct 2023.
  • (2022) Nonvolatile Memories in Spiking Neural Network Architectures: Current and Emerging Trends. Electronics, 11(10), 1610. DOI: 10.3390/electronics11101610. Online publication date: 18 May 2022.
  • (2022) Energy-Efficient Respiratory Anomaly Detection in Premature Newborn Infants. Electronics, 11(5), 682. DOI: 10.3390/electronics11050682. Online publication date: 23 Feb 2022.
  • (2022) BSNN: Towards faster and better conversion of artificial neural networks to spiking neural networks with bistable neurons. Frontiers in Neuroscience, 16. DOI: 10.3389/fnins.2022.991851. Online publication date: 12 Oct 2022.
  • Show More Cited By
