DOI: 10.1145/3299874.3319484

In-memory Processing based on Time-domain Circuit

Published: 13 May 2019

Abstract

Deep Neural Networks (DNNs) have emerged as a dominant algorithm for machine learning (ML). High performance and extreme energy efficiency are critical for DNN deployment, especially on mobile platforms such as autonomous vehicles, cameras, and other Internet-of-Things (IoT) devices. However, DNNs incur massive data movement and memory accesses, which prevents them from being integrated into always-on IoT devices. Recently, computing-in-memory (CIM) architectures have embedded analog computation circuits in or near the memory arrays, significantly reducing data-movement energy. This paper summarizes the most recent methods for CIM architectures based on time-domain computation. Compared with voltage-domain and frequency-domain analog computing methods, time-domain computation provides more flexibility, higher accuracy, and greater scalability for larger neural networks. The paper then presents the first in-memory binary weight network (BWN) processor based on pulse-width modulation, in which the features are stored in memory. This design reduces memory accesses by 4x and achieves a state-of-the-art peak energy efficiency of 119.7 TOPS/W.
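As a concrete illustration of the time-domain idea, the behavioral sketch below is a minimal Python model, not the paper's circuit: the function names, the 8-bit pulse-width quantization, and the {-1, +1} weight convention are illustrative assumptions. It shows how a binary-weight multiply-accumulate can be carried out as pulse widths, with each activation encoded as a delay and the dot product appearing as the net accumulated delay.

    # Behavioral sketch of a time-domain binary-weight MAC (hypothetical model,
    # not the circuit from the paper). Activations are encoded as pulse widths
    # (delays); weights are +/-1, so each term adds or subtracts its pulse
    # width from a running delay.

    def pwm_encode(activation, t_unit=1e-9, levels=256):
        """Quantize an activation in [0, 1) to a pulse width in seconds."""
        code = min(int(activation * levels), levels - 1)
        return code * t_unit

    def time_domain_mac(activations, weights, t_unit=1e-9):
        """Accumulate signed pulse widths, mimicking a delay-line adder."""
        assert len(activations) == len(weights)
        delay = 0.0
        for a, w in zip(activations, weights):
            assert w in (-1, 1), "BWN weights are binary: -1 or +1"
            delay += w * pwm_encode(a, t_unit)
        return delay  # net delay is proportional to the dot product

    if __name__ == "__main__":
        acts = [0.25, 0.50, 0.75, 0.125]
        wts = [1, -1, 1, 1]
        print("net delay = %.3e s" % time_domain_mac(acts, wts))

In hardware, this accumulation would happen along a delay line or pulse-width modulator rather than in software, which is how such designs avoid the explicit data movement that dominates energy in conventional digital accelerators.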




Published In

GLSVLSI '19: Proceedings of the 2019 Great Lakes Symposium on VLSI
May 2019
562 pages
ISBN: 9781450362528
DOI: 10.1145/3299874
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. bwn
  2. computing in-memory
  3. time-domain

Qualifiers

  • Research-article

Funding Sources

  • National Natural Science Foundation of China
  • National Science and Technology Project under Grant

Conference

GLSVLSI '19
Sponsor:
GLSVLSI '19: Great Lakes Symposium on VLSI 2019
May 9 - 11, 2019
Tysons Corner, VA, USA

Acceptance Rates

Overall Acceptance Rate 312 of 1,156 submissions, 27%

