
DOI: 10.1145/3649329.3656214

Series-Parallel Hybrid SOT-MRAM Computing-in-Memory Macro with Multi-Method Modulation for High Area and Energy Efficiency

Published: 07 November 2024

Abstract

Computing-in-memory (CIM) has shown clear advantages in many applications such as neural network inference, and the use of Magnetic Random-Access Memory (MRAM) for CIM has recently attracted extensive exploration. This paper investigates the potential of Spin-Orbit-Torque MRAM (SOT-MRAM) for CIM and proposes a SOT-MRAM CIM macro with high area and energy efficiency based on a 6T-4J weight group. The bit-cell array adopts a series-parallel hybrid architecture that combines serial and parallel configurations of Magnetic Tunnel Junctions (MTJs), addressing the high energy cost of MRAM-series architectures and the low flexibility of MRAM-parallel architectures. In addition, the proposed SOT-MRAM CIM macro incorporates a multi-method modulation scheme spanning from the input unit to the array, which also allows configurable input precision (2/4/6/8-bit). The macro is designed and verified at both the 180nm and 28nm nodes, based on the verified electrical performance of a SOT-MRAM array pre-fabricated on a 200-mm wafer. Simulation results at 28nm show that the macro achieves an energy efficiency of 23.7~29.6 TOPS/W at 8-bit input and output precision.
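For context on how configurable input precision is commonly realized in CIM macros, the behavioral sketch below models a bit-serial, shift-and-accumulate MAC column in Python. It is an illustrative assumption, not the paper's circuit: the macro's multi-method modulation and series-parallel MTJ array are analog techniques that this digital model does not capture, and every function and parameter name here is invented for the example.

import numpy as np

def bit_serial_mac(activations, weights, input_bits=8):
    """Behavioral model of one bit-serial CIM MAC column (illustrative only).

    activations: unsigned integers < 2**input_bits, shape (N,)
    weights:     signed integers stored in the column, shape (N,)
    One input bit-plane is applied per cycle; the per-cycle dot products
    (computed in the analog array in real hardware) are shifted and
    accumulated to reconstruct the full-precision result, which is how
    2/4/6/8-bit input precision can share one datapath.
    """
    activations = np.asarray(activations, dtype=np.int64)
    weights = np.asarray(weights, dtype=np.int64)
    acc = 0
    for b in range(input_bits):
        bit_plane = (activations >> b) & 1      # bit b of every activation
        partial = int(bit_plane @ weights)      # column-wise dot product
        acc += partial << b                     # weight the bit-plane by 2**b
    return acc

# Sanity check: the bit-serial result equals the full-precision dot product.
x = np.random.randint(0, 256, size=16)   # example 8-bit activations
w = np.random.randint(-8, 8, size=16)    # example signed weights
assert bit_serial_mac(x, w, input_bits=8) == int(x @ w)

In a real SOT-MRAM CIM macro, the per-bit-plane dot product would typically be performed as analog current summation across the bit-cells and then digitized, with the shift-and-accumulate handled in the digital periphery.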




          Published In

          DAC '24: Proceedings of the 61st ACM/IEEE Design Automation Conference
          June 2024
          2159 pages
ISBN: 9798400706011
DOI: 10.1145/3649329


          Publisher

          Association for Computing Machinery

          New York, NY, United States


          Author Tags

          1. SOT-MRAM
          2. computing-in-memory
          3. series-parallel hybrid
          4. configurable precision

          Qualifiers

          • Research-article

          Conference

DAC '24: 61st ACM/IEEE Design Automation Conference
June 23 - 27, 2024
San Francisco, CA, USA

          Acceptance Rates

          Overall Acceptance Rate 1,770 of 5,499 submissions, 32%


