
DOI: 10.1109/IISWC.2012.6402898
Article

BenchNN: On the broad potential application scope of hardware neural network accelerators

Published: 04 November 2012

Abstract

Recent technology trends have indicated that, although device sizes will continue to scale as they have in the past, supply voltage scaling has ended. As a result, future chips can no longer rely on simply increasing the operational core count to improve performance without surpassing a reasonable power budget. Alternatively, allocating die area towards accelerators targeting an application, or an application domain, appears quite promising, and this paper makes an argument for a neural network hardware accelerator. After being hyped in the 1990s and then fading away for almost two decades, hardware neural networks are seeing a surge of interest because of their energy-efficiency and fault-tolerance properties. At the same time, the emergence of high-performance applications like Recognition, Mining, and Synthesis (RMS) suggests that the potential application scope of a hardware neural network accelerator would be broad. In this paper, we want to highlight that a hardware neural network accelerator is indeed compatible with many of the emerging high-performance workloads, currently accepted as benchmarks for high-performance micro-architectures. For that purpose, we develop and evaluate software neural network implementations of 5 (out of 12) RMS applications from the PARSEC Benchmark Suite. Our results show that neural network implementations can achieve competitive results, with respect to application-specific quality metrics, on these 5 RMS applications.
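The approach the abstract describes — replacing an application kernel with a trained neural network surrogate and judging the result by an application-level quality metric rather than raw hardware metrics — can be illustrated with a minimal sketch. Everything below (the stand-in kernel, the network size, the training schedule, and the mean-absolute-error metric) is a hypothetical illustration, not the paper's actual implementation.

```python
# Minimal sketch of the BenchNN idea: train a small neural network to
# approximate a numerical kernel, then score it with an application-level
# quality metric. The kernel, topology, and metric are hypothetical
# stand-ins, not the paper's implementation.
import math
import random

random.seed(0)

def kernel(x):
    # Stand-in for an RMS application kernel (e.g. a pricing formula).
    return 0.5 * math.sin(2.0 * x) + 0.5

# One-hidden-layer MLP: y = sum_j w2[j] * tanh(w1[j]*x + b1[j]) + b2
H = 8
w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0

def forward(x):
    hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * hidden[j] for j in range(H)) + b2, hidden

# Train with stochastic gradient descent on squared error.
lr = 0.05
for step in range(5000):
    x = random.uniform(0.0, 1.0)
    target = kernel(x)
    y, hidden = forward(x)
    err = y - target
    # Backpropagate through the single hidden layer.
    for j in range(H):
        grad_h = err * w2[j] * (1.0 - hidden[j] ** 2)
        w2[j] -= lr * err * hidden[j]
        w1[j] -= lr * grad_h * x
        b1[j] -= lr * grad_h
    b2 -= lr * err

# "Application-specific quality metric": mean absolute error on a grid.
xs = [i / 100.0 for i in range(101)]
mae = sum(abs(forward(x)[0] - kernel(x)) for x in xs) / len(xs)
print(f"surrogate mean absolute error: {mae:.4f}")
```

In the paper, quality is judged per application across the five PARSEC/RMS workloads, so the scalar error above merely stands in for whatever metric each application defines (e.g. output quality for a recognition task).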


Information

    Published In

    IISWC '12: Proceedings of the 2012 IEEE International Symposium on Workload Characterization (IISWC)
    November 2012
    177 pages
ISBN: 9781467345316

    Publisher

    IEEE Computer Society

    United States


    Author Tags

    1. PARSEC
    2. accelerator
    3. benchmark
    4. neural networks



Cited By
• (2023) Bang for the Buck: Evaluating the cost-effectiveness of Heterogeneous Edge Platforms for Neural Network Workloads. Proceedings of the Eighth ACM/IEEE Symposium on Edge Computing, 94-107. DOI: 10.1145/3583740.3628437. Online publication date: 6-Dec-2023.
• (2022) Workload characterization of a time-series prediction system for spatio-temporal data. Proceedings of the 19th ACM International Conference on Computing Frontiers, 159-168. DOI: 10.1145/3528416.3530242. Online publication date: 17-May-2022.
• (2021) Neural architecture search as program transformation exploration. Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 915-927. DOI: 10.1145/3445814.3446753. Online publication date: 19-Apr-2021.
• (2020) NNBench-X. ACM Transactions on Architecture and Code Optimization, 17(4), 1-25. DOI: 10.1145/3417709. Online publication date: 10-Nov-2020.
• (2019) QuTiBench. ACM Journal on Emerging Technologies in Computing Systems, 15(4), 1-38. DOI: 10.1145/3358700. Online publication date: 4-Dec-2019.
• (2018) Employing classification-based algorithms for general-purpose approximate computing. Proceedings of the 55th Annual Design Automation Conference, 1-6. DOI: 10.1145/3195970.3196043. Online publication date: 24-Jun-2018.
• (2018) Design Exploration of IoT centric Neural Inference Accelerators. Proceedings of the 2018 Great Lakes Symposium on VLSI, 391-396. DOI: 10.1145/3194554.3194614. Online publication date: 30-May-2018.
• (2017) HiPA. Proceedings of the International Conference on Supercomputing, 1-10. DOI: 10.1145/3079079.3079107. Online publication date: 14-Jun-2017.
• (2017) PARSEC3.0. ACM SIGARCH Computer Architecture News, 44(5), 1-16. DOI: 10.1145/3053277.3053279. Online publication date: 13-Feb-2017.
• (2016) Evaluation of an analog accelerator for linear algebra. ACM SIGARCH Computer Architecture News, 44(3), 570-582. DOI: 10.1145/3007787.3001197. Online publication date: 18-Jun-2016.
