DOI: 10.1145/3386263.3407646

Deep Neural Network accelerator with Spintronic Memory

Published: 07 September 2020

Abstract

Utilizing emerging nonvolatile memories to accelerate deep neural networks (DNNs) is considered one of the promising approaches to relieving the data-transfer bottleneck during multiplication and accumulation (MAC). Among these memories, spintronic memories show a tempting prospect due to their low access power, fast access speed, high density, and relatively mature process. As shown in Fig. 1, existing designs can be divided into three technical routes according to how the DNN computation is realized. The first is an "analog" method [1, 2], shown in Fig. 1(a). By transforming the digital input signals into multi-level voltage signals and applying them to the memory array, the MAC results can be obtained at each column with a current integrator and an analog-to-digital converter (ADC). In addition, the WL drivers can control the pulse widths of different rows to achieve the effect of multi-bit weights. This method can theoretically achieve high energy efficiency and computing speed. However, the variation of the magnetic tunnel junction (MTJ) may affect the computing accuracy, and the power consumption and area overhead of the ADC are also challenging. The other two methods work in a "digital" way, realizing MAC computation through row-by-row read/write operations. Fig. 1(b) shows the second, reading-based method [3]. The weights of the neural network are stored in the memory cells. By applying the input signal to a modified sense amplifier (SA), an XOR function, which is the core operation of a binary neural network (BNN), can be computed against the content stored in the memory cell. Nevertheless, the modification to the SA usually adds extra transistors in the read path, which increases the bit error rate. Fig. 1(c) shows the diagram of the last route, which is based on "stateful logic" [4]. The input data are sent to a modified write driver while the WL receives the weight signals from the external I/O. Based on a unique logic paradigm, this route can realize the XOR function for a BNN within one or several memory cells during a single write cycle. In this talk, we review the main research status of DNN accelerators based on spintronic memories. In particular, we introduce our recent work on DNN acceleration, which can be implemented with different spintronic memories.
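The "analog" route of Fig. 1(a) can be sketched as a simple behavioral model: weights act as array conductances, digital inputs become multi-level drive voltages, each column accumulates a current, and an ADC quantizes the result. This is only an illustrative ideal-device sketch; the function names, array sizes, bit widths, and the uniform-ADC model are our assumptions, not details from the talk.

```python
import numpy as np

def analog_mac(inputs, weights, adc_bits=4):
    """Ideal crossbar MAC followed by a uniform ADC (illustrative model).

    inputs  : multi-level input values (stand-ins for drive voltages)
    weights : normalized conductances of the memory array
    """
    # Each column current is sum_i V_i * G_ij (Ohm's law + Kirchhoff's current law)
    column_currents = inputs @ weights
    # Uniform ADC: map the current range onto 2**adc_bits digital codes
    lo, hi = column_currents.min(), column_currents.max()
    levels = 2 ** adc_bits - 1
    codes = np.round((column_currents - lo) / (hi - lo) * levels)
    return codes.astype(int)

rng = np.random.default_rng(0)
x = rng.integers(0, 4, size=8)   # 2-bit digital inputs mapped to 4 voltage levels
G = rng.random((8, 4))           # hypothetical normalized MTJ conductances
print(analog_mac(x, G))          # one quantized MAC result per column
```

A real design would add the nonidealities the abstract warns about: MTJ conductance variation perturbs `G`, and the ADC resolution trades accuracy against power and area.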
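Both "digital" routes of Fig. 1(b) and 1(c) compute an XOR/XNOR against stored weight bits, because for a BNN the dot product of +/-1 vectors reduces to XNOR plus popcount. A minimal sketch of that reduction, with our own {0,1} encoding convention (1 encodes +1, 0 encodes -1):

```python
import numpy as np

def bnn_dot(x_bits, w_bits):
    """XNOR-popcount dot product for {0,1}-encoded +/-1 values."""
    # XNOR marks positions where input and stored weight agree
    agree = np.logical_not(np.logical_xor(x_bits, w_bits))
    matches = np.count_nonzero(agree)
    # +/-1 dot product = (#matches) - (#mismatches)
    return 2 * matches - len(x_bits)

x = np.array([1, 0, 1, 1, 0, 1], dtype=bool)  # input bits (1 -> +1, 0 -> -1)
w = np.array([1, 1, 0, 1, 0, 0], dtype=bool)  # weight bits stored in the cells
print(bnn_dot(x, w))  # -> 0, the signed dot product of the two vectors
```

In the reading-based method the XNOR would be evaluated inside the modified SA during a read, while in the stateful-logic method it is produced inside the cell during a write cycle; the popcount accumulation happens row by row in either case.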

Supplementary Material

MP4 File (3386263.3407646.mp4)
Presentation video

References

[1]
A. D. Patil et al., "An MRAM-based deep in-memory architecture for deep neural networks," IEEE ISCAS, May 2019, pp. 1--5.
[2]
K. T. Tang et al., "Considerations of integrating computing-in-memory and processing-in-sensor into convolutional neural network accelerators for low-power edge devices," IEEE Symposium on VLSI Circuits, Jun. 2019, pp. T166--T167.
[3]
D. Fan et al., "Energy efficient in-memory binary deep neural network accelerator with dual-mode SOT-MRAM," IEEE ICCD, Nov. 2017, pp. 609--612.
[4]
H. Zhang et al., "Stateful reconfigurable logic via a single-voltage-gated spin Hall-effect driven magnetic tunnel junction in a spintronic memory," IEEE Transactions on Electron Devices, Jul. 2017, pp. 4295--4301.

Cited By

  • (2021) High performance accelerators for deep neural networks: A review. Expert Systems, 39(1). DOI: 10.1111/exsy.12831. Online publication date: 11-Oct-2021.



Published In

cover image ACM Other conferences
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI
September 2020
597 pages
ISBN:9781450379441
DOI:10.1145/3386263
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. DNN accelerator
  2. computing in memory
  3. multiplication and accumulation
  4. spin memories

Qualifiers

  • Abstract

Funding Sources

  • Fundamental Research Funds for the Central Universities

Conference

GLSVLSI '20
GLSVLSI '20: Great Lakes Symposium on VLSI 2020
September 7 - 9, 2020
Virtual Event, China

Acceptance Rates

Overall Acceptance Rate 312 of 1,156 submissions, 27%

