
DOI: 10.1145/3289602.3293937
Poster
Public Access

SparseBNN: Joint Algorithm/Hardware Optimization to Exploit Structured Sparsity in Binary Neural Network

Published: 20 February 2019

Abstract

To reduce power-hungry floating-point operations and memory accesses in deep neural networks, quantized neural networks have been proposed that replace floating-point multiplications with simplified, reduced-precision operations. To compensate for the accuracy loss caused by this aggressive quantization, wider neural network layers with three or more times as many feature maps are employed. One by-product of these inflated layers is increased redundancy in the network. To further improve computational efficiency and leverage this inherent redundancy, we propose a joint optimization approach that simultaneously explores hardware-oriented training and efficient accelerator implementation of binary neural networks (BNNs) on FPGAs. More specifically, our SparseBNN method consists of two parts. First, SparseBNN-SW is a training algorithm that enhances the structured sparsity of BNNs by 1) training for zero-valued ternary weights, which are more amenable to pruning than binary weights, and 2) regulating the sparsity pattern for more efficient hardware deployment. Second, SparseBNN-HW is an accelerator architecture that executes inference directly on the sparse-encoded format, saving both memory accesses and computation. Experimental results on various representative datasets demonstrate that SparseBNN improves power efficiency (GOPS/Watt) and resource efficiency (GOPS/kLUT) over the baseline BNN FPGA implementation by 1.70X and 2.22X, respectively.
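
The details of the training algorithm and accelerator belong to the paper itself; as a rough illustration of the two ideas the abstract names (zero-valued ternary weights, and inference over a sparse encoding), here is a minimal sketch in Python/NumPy. The threshold rule, function names, and sparse layout below are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def ternarize(w, delta_scale=0.7):
    """Hypothetical threshold ternarization (in the style of ternary
    weight networks): weights within +/- delta of zero become exact
    zeros, the rest collapse to {-1, +1}. The zeros are what make the
    layer prunable, unlike a pure binary {-1, +1} quantizer."""
    delta = delta_scale * np.mean(np.abs(w))
    t = np.zeros_like(w)
    t[w > delta] = 1.0
    t[w < -delta] = -1.0
    return t

def sparse_encode(t):
    """Store only the surviving weights: their flat positions and
    their signs. Memory traffic scales with the number of nonzeros."""
    nz_idx = np.flatnonzero(t)
    nz_sign = t.flat[nz_idx].astype(np.int8)  # +1 / -1
    return nz_idx, nz_sign

def sparse_ternary_dot(act, nz_idx, nz_sign):
    """Inner product against binary activations in {-1, +1}, touching
    only positions with nonzero weights, so compute also scales with
    the number of nonzeros -- the effect an accelerator like
    SparseBNN-HW exploits in hardware."""
    return int(np.dot(act[nz_idx], nz_sign))

# Toy usage: ternarize a random weight vector, encode it sparsely,
# and evaluate one dot product against binary activations.
rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
act = np.where(rng.standard_normal(64) > 0, 1, -1).astype(np.int8)
nz_idx, nz_sign = sparse_encode(ternarize(w))
y = sparse_ternary_dot(act, nz_idx, nz_sign)
print(f"kept {nz_idx.size}/{w.size} weights, dot = {y}")
```

The abstract's second ingredient, regulating the sparsity for efficient hardware deployment, would additionally constrain where zeros may appear (e.g., forcing them into whole rows or fixed-size groups); the unstructured sketch above does not attempt that.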


    Published In

    FPGA '19: Proceedings of the 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays
    February 2019
    360 pages
    ISBN: 9781450361378
    DOI: 10.1145/3289602
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 20 February 2019

    Author Tags

    1. binary neural network
    2. hardware-software codesign
    3. neural network acceleration
    4. structural sparsity

    Qualifiers

    • Poster

    Conference

    FPGA '19

    Acceptance Rates

    Overall Acceptance Rate: 125 of 627 submissions, 20%


    Bibliometrics & Citations

    Article Metrics

    • Total Citations: 0
    • Total Downloads: 0
    • Downloads (Last 12 months): 0
    • Downloads (Last 6 weeks): 0

    Reflects downloads up to 08 Dec 2024
