A Fresh Perspective on DNN Accelerators by Performing Holistic Analysis Across Paradigms

T Glint, CK Jha, M Awasthi, J Mekie - arXiv preprint arXiv:2208.05294, 2022 - arxiv.org
Traditional computers with von Neumann architecture are unable to meet the latency and scalability challenges of Deep Neural Network (DNN) workloads. Various DNN accelerators based on the Conventional compute Hardware Accelerator (CHA), Near-Data-Processing (NDP), and Processing-in-Memory (PIM) paradigms have been proposed to meet these challenges. Our goal in this work is to perform a rigorous comparison among state-of-the-art accelerators from these DNN accelerator paradigms. For our analysis, we use unique layers from MobileNet, ResNet, BERT, and DLRM of the MLPerf Inference benchmark. The detailed models are based on hardware-realized, state-of-the-art designs. We observe that for memory-intensive Fully Connected Layer (FCL) DNNs, the NDP-based accelerator is 10.6x faster than the state-of-the-art CHA and 39.9x faster than the PIM-based accelerator for inference. For compute-intensive image classification and object detection DNNs, the state-of-the-art CHA is ~10x faster than NDP and ~2000x faster than the PIM-based accelerator for inference. PIM-based accelerators are suitable for DNN applications where energy is a constraint (~2.7x and ~21x lower energy for CNN and FCL applications, respectively, than conventional ASIC systems). Further, we identify architectural changes (such as increasing memory bandwidth and reorganizing buffers) that can increase throughput (up to linearly) and lower energy (up to linearly) for ML applications, through a detailed sensitivity analysis of the relevant components in CHA-, NDP-, and PIM-based accelerators.
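The split between memory-intensive FCL workloads (which favor NDP) and compute-intensive CNN workloads (which favor the CHA) can be illustrated with a simple roofline-style estimate. The sketch below is not from the paper; the peak-compute and bandwidth numbers are hypothetical placeholders chosen only to show the mechanism, namely that a layer whose arithmetic intensity falls below an accelerator's compute-to-bandwidth ratio is bandwidth-bound, so extra bandwidth (as in NDP) pays off more than extra compute.

```python
# Illustrative roofline-style estimate; all hardware numbers are hypothetical
# placeholders, not figures from the paper.

def attainable_gflops(intensity_flops_per_byte, peak_gflops, bandwidth_gbps):
    """Roofline model: throughput is capped by compute or by memory bandwidth."""
    return min(peak_gflops, intensity_flops_per_byte * bandwidth_gbps)

def fcl_intensity(n, m, bytes_per_weight=1):
    """Arithmetic intensity of a fully connected layer y = W @ x, W of shape (n, m).

    FLOPs ~ 2*n*m (multiply + add per weight); traffic is dominated by the
    n*m weights, each read once per inference (INT8 weights assumed here).
    """
    flops = 2 * n * m
    bytes_moved = n * m * bytes_per_weight
    return flops / bytes_moved  # ~2 FLOPs/byte: strongly memory-bound

# Hypothetical operating points: (peak GFLOP/s, memory bandwidth GB/s).
cha = (4000.0, 100.0)    # compute-rich ASIC behind a conventional DRAM interface
ndp = (2000.0, 1000.0)   # modest compute placed next to high internal bandwidth

intensity = fcl_intensity(4096, 4096)
for name, (peak, bw) in (("CHA", cha), ("NDP", ndp)):
    print(f"{name}: FCL attainable ~{attainable_gflops(intensity, peak, bw):.0f} GFLOP/s")

# At ~2 FLOPs/byte the FCL is bandwidth-bound on the CHA (200 GFLOP/s attainable
# despite 4000 GFLOP/s peak), while the NDP's 10x bandwidth lifts it to
# ~2000 GFLOP/s -- consistent with the paper's finding that NDP wins on
# memory-intensive FCL DNNs while compute-rich CHAs win on high-intensity CNNs.
```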