DOI: 10.1145/3583781.3590248
Research Article · Public Access

IMA-GNN: In-Memory Acceleration of Centralized and Decentralized Graph Neural Networks at the Edge

Published: 05 June 2023

Abstract

In this paper, we propose IMA-GNN, an In-Memory Accelerator for centralized and decentralized Graph Neural Network inference, explore its potential in both settings, and provide a guideline for the community targeting flexible and efficient edge computation. Leveraging IMA-GNN, we first model the computation and communication latencies of edge devices. We then present practical case studies on GNN-based taxi demand and supply prediction, and adopt four large graph datasets to quantitatively compare and analyze the centralized and decentralized settings. Our cross-layer simulation results demonstrate that, on average, IMA-GNN in the centralized setting obtains a ~790x communication speed-up over the decentralized GNN setting. However, the decentralized setting performs computation ~1400x faster while reducing the power consumption per device. This further underlines the need for a hybrid, semi-decentralized GNN approach.
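
The abstract's headline numbers come from a model of edge-device computation and communication latency. As a reader's aid, the Python sketch below shows one plausible shape such a first-order model could take. It is not the authors' cross-layer simulator: every function and parameter value here (node count, feature width, link rates, throughputs, hop and neighbor counts) is a hypothetical placeholder, chosen only so the toy output mirrors the reported trend that the centralized setting communicates faster while the decentralized setting computes faster.

    # Minimal first-order latency sketch (illustrative only -- NOT the paper's
    # cross-layer simulation model). All parameter values are hypothetical
    # placeholders chosen to reproduce the qualitative trend in the abstract.

    def centralized_latency(n_nodes, feat_bits, ops_per_node, layers,
                            server_tops, uplink_mbps):
        # All node features are shipped once to a single server over a fast
        # uplink, then the whole graph is processed on one large PIM array.
        comm_s = n_nodes * feat_bits / (uplink_mbps * 1e6)
        comp_s = layers * n_nodes * ops_per_node / (server_tops * 1e12)
        return comm_s, comp_s

    def decentralized_latency(degree, feat_bits, ops_per_node, layers,
                              device_gops, p2p_mbps, hops):
        # Each device computes its own node's workload in parallel, but every
        # GNN layer requires exchanging embeddings with `degree` neighbors
        # over a slow multi-hop device-to-device (ad-hoc) link.
        comm_s = layers * hops * degree * feat_bits / (p2p_mbps * 1e6)
        comp_s = layers * ops_per_node / (device_gops * 1e9)
        return comm_s, comp_s

    # Hypothetical workload: 10k nodes, 1024-bit features, 2 GNN layers.
    c_comm, c_comp = centralized_latency(10_000, 1024, 2e6, 2,
                                         server_tops=10, uplink_mbps=1000)
    d_comm, d_comp = decentralized_latency(8, 1024, 2e6, 2,
                                           device_gops=10, p2p_mbps=1, hops=4)

    print(f"centralized:   comm {c_comm*1e3:7.2f} ms, comp {c_comp*1e3:7.2f} ms")
    print(f"decentralized: comm {d_comm*1e3:7.2f} ms, comp {d_comp*1e3:7.2f} ms")

Under these made-up numbers, the single fast uplink of the centralized setting beats multi-hop device-to-device relaying on communication, while the parallel per-device work of the decentralized setting beats serializing the whole graph through one server on computation. This is the qualitative trade-off that motivates the hybrid, semi-decentralized approach mentioned at the end of the abstract.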


Cited By

• (2024) TSTL-GNN: Graph-Based Two-Stage Transfer Learning for Timing Engineering Change Order Analysis Acceleration. Electronics 13(15), 2897. DOI: 10.3390/electronics13152897. Online publication date: 23-Jul-2024.
• (2024) Radiation-Immune Spintronic Binary Synapse and Neuron for Process-in-Memory Architecture. IEEE Magnetics Letters 15, 1-5. DOI: 10.1109/LMAG.2024.3356815. Online publication date: 2024.
• (2023) Deep Mapper: A Multi-Channel Single-Cycle Near-Sensor DNN Accelerator. 2023 IEEE International Conference on Rebooting Computing (ICRC), 1-5. DOI: 10.1109/ICRC60800.2023.10386958. Online publication date: 5-Dec-2023.


Published In

GLSVLSI '23: Proceedings of the Great Lakes Symposium on VLSI 2023
June 2023, 731 pages
ISBN: 9798400701252
DOI: 10.1145/3583781

Publisher

Association for Computing Machinery, New York, NY, United States


Badges

• Best Paper

Author Tags

1. edge computing
2. graph neural network
3. in-memory computing


Conference

GLSVLSI '23: Great Lakes Symposium on VLSI 2023
June 5-7, 2023, Knoxville, TN, USA

Acceptance Rates

Overall Acceptance Rate: 312 of 1,156 submissions, 27%


