DOI: 10.5555/2648668.2648729

Memristor-based approximated computation

Published: 04 September 2013

Abstract

The end of Moore's Law scaling has limited further improvements in power efficiency. In recent years, the physical realization of the memristor has emerged as a promising route to highly integrated hardware implementations of neural networks, which can be leveraged for gains in both performance and power efficiency. In this work, we introduce a power-efficient framework for approximated computation built on memristor-based multilayer neural networks. We first present a programmable memristor approximated computation unit (Memristor ACU) that accelerates approximated computation, and then propose a scalable memristor-based approximated computation framework on top of it. We also introduce a parameter configuration algorithm for the Memristor ACU and a feedback state tuning circuit that programs the Memristor ACU effectively. Our simulation results show that the maximum error of the Memristor ACU across six common complex functions is only 1.87%, while the state tuning circuit achieves 12-bit precision. An implementation of the HMAX model on the proposed framework demonstrates a 22X power efficiency improvement over its purely digital counterpart.
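
To make the general idea behind such an approximated computation unit concrete, the sketch below trains a small two-layer sigmoid network to approximate a single function in software and then quantizes the trained weights to a fixed number of discrete levels, loosely mirroring the finite precision (12 bits in the paper) to which a feedback tuning circuit can program memristor states. The target function, network size, training loop, and quantize() helper are hypothetical choices for illustration only; this is not the authors' Memristor ACU design, parameter configuration algorithm, or state tuning circuit.

```python
# Illustrative sketch only: a tiny two-layer sigmoid network approximating one
# function, with its weights quantized to a fixed number of discrete levels.
# The target function, sizes, and quantization scheme are assumptions made for
# illustration, not the paper's actual Memristor ACU design.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quantize(w, bits=12, w_max=1.0):
    """Snap weights onto a uniform grid of 2**bits levels in [-w_max, w_max],
    loosely mimicking the finite precision to which programmable memristor
    states could be tuned (hypothetical model, not the paper's circuit)."""
    levels = 2 ** bits - 1
    w_clipped = np.clip(w, -w_max, w_max)
    return np.round((w_clipped + w_max) / (2 * w_max) * levels) / levels * (2 * w_max) - w_max

rng = np.random.default_rng(0)

# Hypothetical stand-in for one of the "common complex functions".
f = lambda x: np.sin(np.pi * x)

# Training data on [-1, 1].
x = rng.uniform(-1.0, 1.0, size=(2000, 1))
y = f(x)

# Two-layer network: 1 input -> H hidden sigmoid units -> 1 linear output.
H = 16
W1 = rng.normal(0.0, 1.0, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, size=(H, 1)); b2 = np.zeros(1)

lr = 0.05
for epoch in range(3000):
    h = sigmoid(x @ W1 + b1)           # hidden-layer activations
    y_hat = h @ W2 + b2                # linear output layer
    err = y_hat - y
    # Full-batch gradient descent on mean squared error.
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1 - h)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# "Program" the trained weights onto finite-precision levels and check the
# resulting approximation error of the quantized network.
W1q = quantize(W1, w_max=np.abs(W1).max())
W2q = quantize(W2, w_max=np.abs(W2).max())
y_q = sigmoid(x @ W1q + b1) @ W2q + b2
print("max abs error after 12-bit weight quantization:", np.abs(y_q - y).max())
```

In memristor hardware, the matrix-vector products inside such a network would presumably be carried out in the analog domain (e.g., by crossbar arrays) rather than by numpy; the sketch is only meant to illustrate mapping a trained multilayer network onto finite-precision programmable weights.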

Information

Published In

ISLPED '13: Proceedings of the 2013 International Symposium on Low Power Electronics and Design
September 2013
440 pages
ISBN: 9781479912353

Publisher

IEEE Press

Author Tags

  1. approximated computation
  2. memristor
  3. neuromorphic
  4. power efficiency

Qualifiers

  • Research-article

Conference

ISLPED'13

Acceptance Rates

Overall Acceptance Rate 398 of 1,159 submissions, 34%

Cited By

  • (2019) Trained Biased Number Representation for ReRAM-Based Neural Network Accelerators. ACM Journal on Emerging Technologies in Computing Systems, 15(2), 1-17. DOI: 10.1145/3304107. Online publication date: 26-Mar-2019.
  • (2018) Lightening the Load with Highly Accurate Storage- and Energy-Efficient LightNNs. ACM Transactions on Reconfigurable Technology and Systems, 11(3), 1-24. DOI: 10.1145/3270689. Online publication date: 12-Dec-2018.
  • (2018) Atomlayer. Proceedings of the 55th Annual Design Automation Conference, 1-6. DOI: 10.1145/3195970.3195998. Online publication date: 24-Jun-2018.
  • (2017) Accelerator-friendly neural-network training. Proceedings of the Conference on Design, Automation & Test in Europe, 19-24. DOI: 10.5555/3130379.3130384. Online publication date: 27-Mar-2017.
  • (2017) LightNN. Proceedings of the Great Lakes Symposium on VLSI 2017, 35-40. DOI: 10.1145/3060403.3060465. Online publication date: 10-May-2017.
  • (2017) STT-RAM Buffer Design for Precision-Tunable General-Purpose Neural Network Accelerator. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 25(4), 1285-1296. DOI: 10.1109/TVLSI.2016.2644279. Online publication date: 1-Apr-2017.
  • (2017) Computation-oriented fault-tolerance schemes for RRAM computing systems. 2017 22nd Asia and South Pacific Design Automation Conference (ASP-DAC), 794-799. DOI: 10.1109/ASPDAC.2017.7858421. Online publication date: 16-Jan-2017.
  • (2016) NEUTRAMS. The 49th Annual IEEE/ACM International Symposium on Microarchitecture, 1-13. DOI: 10.5555/3195638.3195663. Online publication date: 15-Oct-2016.
  • (2016) MNSIM. Proceedings of the 2016 Conference on Design, Automation & Test in Europe, 469-474. DOI: 10.5555/2971808.2971917. Online publication date: 14-Mar-2016.
  • (2016) PRIME. ACM SIGARCH Computer Architecture News, 44(3), 27-39. DOI: 10.1145/3007787.3001140. Online publication date: 18-Jun-2016.
