
A case for neuromorphic ISAs

Published: 05 March 2011

Abstract

The desire to create novel computing systems, paired with recent advances in neuroscientific understanding of the brain, has led researchers to develop neuromorphic architectures that emulate the brain. To date, such models are developed, trained, and deployed on the same substrate. However, this tight coupling between substrate and algorithm prevents portability, or at the very least requires reconstructing and retraining the model whenever the substrate changes. This paper proposes a well-defined abstraction layer -- the neuromorphic instruction set architecture, or NISA -- that separates a neural application's algorithmic specification from the underlying execution substrate, and describes the Aivo framework, which demonstrates the concrete advantages of such an abstraction layer. Aivo consists of a NISA implementation for a rate-encoded neuromorphic system based on the cortical column abstraction, a state-of-the-art integrated development and runtime environment (IDE), and various profile-based optimization tools. Aivo's IDE generates code for emulating cortical networks on the host CPU, on multiple GPGPUs, or as boolean functions. Its runtime system can deploy and adaptively optimize cortical networks in a manner similar to the just-in-time compilers of managed runtime systems (e.g., Java, C#).
We demonstrate the capabilities of the NISA abstraction by constructing a cortical network model of the mammalian visual cortex, deploying it on multiple execution substrates, and applying the optimization tools we have created. For this hierarchical configuration, Aivo's profiling-based network optimization tools reduce the memory footprint by 50% and improve execution time by a factor of 3x on the host CPU. Deploying the same network on a single GPGPU results in a 30x speedup. We further demonstrate that a 480x speedup can be achieved by deploying a massively scaled cortical network across three GPGPUs. Finally, converting a trained hierarchical network to C/C++ boolean constructs on the host CPU results in a 44x speedup.
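The core idea of the abstract -- a network specification decoupled from its execution substrate, so the same trained model can be redeployed without reconstruction -- can be sketched as follows. All class and function names here are illustrative inventions, not Aivo's actual API; the second backend is only a toy analogue of the boolean-function conversion described above.

```python
# Hypothetical sketch of a NISA-style abstraction layer: the network
# specification (which columns exist and what feeds them) is written
# once, and concrete substrates plug in behind a common interface.
from abc import ABC, abstractmethod


class Backend(ABC):
    """An execution substrate; the spec never depends on a concrete one."""
    @abstractmethod
    def run(self, spec, inputs):
        ...


class CPUBackend(Backend):
    """Naive rate-coded evaluation: each column averages its inputs."""
    def run(self, spec, inputs):
        acts = dict(inputs)
        for col, srcs in spec:
            acts[col] = sum(acts[s] for s in srcs) / len(srcs)
        return acts


class ThresholdBackend(Backend):
    """Toy analogue of a boolean conversion: activations become 0/1."""
    def run(self, spec, inputs):
        acts = {k: 1.0 if v >= 0.5 else 0.0 for k, v in inputs.items()}
        for col, srcs in spec:
            mean = sum(acts[s] for s in srcs) / len(srcs)
            acts[col] = 1.0 if mean >= 0.5 else 0.0
        return acts


# A "deployed" network is just (spec, backend); swapping the backend
# does not require rebuilding or retraining the spec.
spec = [("V1", ["retina"]), ("V2", ["V1"])]
print(CPUBackend().run(spec, {"retina": 1.0}))
print(ThresholdBackend().run(spec, {"retina": 1.0}))
```

The point of the sketch is the seam: everything above the `Backend` interface is the "NISA" side, everything below is substrate-specific and freely replaceable.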





Published In

ACM SIGARCH Computer Architecture News  Volume 39, Issue 1
ASPLOS '11
March 2011
407 pages
ISSN:0163-5964
DOI:10.1145/1961295
  • ACM Conferences
    ASPLOS XVI: Proceedings of the sixteenth international conference on Architectural support for programming languages and operating systems
    March 2011
    432 pages
    ISBN:9781450302661
    DOI:10.1145/1950365
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published in SIGARCH Volume 39, Issue 1


Author Tags

  1. cortical learning algorithms
  2. gpgpu
  3. neuromorphic architectures

Qualifiers

  • Research-article


Article Metrics

  • Downloads (Last 12 months)26
  • Downloads (Last 6 weeks)2
Reflects downloads up to 20 Nov 2024


Cited By

  • (2020)A system hierarchy for brain-inspired computingNature10.1038/s41586-020-2782-y586:7829(378-384)Online publication date: 14-Oct-2020
  • (2019)A piecewise weight update rule for a supervised training of cortical algorithmsNeural Computing and Applications10.1007/s00521-017-3167-531:6(1915-1930)Online publication date: 20-Jul-2019
  • (2017)A biologically inspired deep neural network of basal ganglia switching in working memory tasks2017 IEEE Symposium Series on Computational Intelligence (SSCI)10.1109/SSCI.2017.8285364(1-8)Online publication date: Nov-2017
  • (2017)Automated composer recognition for multi-voice piano compositions using rhythmic features, n-grams and modified cortical algorithmsComplex & Intelligent Systems10.1007/s40747-017-0052-x4:1(55-65)Online publication date: 8-Aug-2017
  • (2023)Preserving Privacy of Neuromorphic Hardware From PCIe Congestion Side-Channel Attack2023 IEEE 47th Annual Computers, Software, and Applications Conference (COMPSAC)10.1109/COMPSAC57700.2023.00094(689-698)Online publication date: Jun-2023
  • (2019)FlexLearnProceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture10.1145/3352460.3358268(304-318)Online publication date: 12-Oct-2019
  • (2019)An Instruction Set Architecture for Machine LearningACM Transactions on Computer Systems10.1145/333146936:3(1-35)Online publication date: 13-Aug-2019
  • (2018)Computation reuse in DNNs by exploiting input similarityProceedings of the 45th Annual International Symposium on Computer Architecture10.1109/ISCA.2018.00016(57-68)Online publication date: 2-Jun-2018
  • (2017)DaDianNaoIEEE Transactions on Computers10.1109/TC.2016.257435366:1(73-88)Online publication date: 1-Jan-2017
  • (2016)NEUTRAMSThe 49th Annual IEEE/ACM International Symposium on Microarchitecture10.5555/3195638.3195663(1-13)Online publication date: 15-Oct-2016
