
Efficient LUT-based FPGA Accelerator Design for Universal Quantized CNN Inference

Published: 29 June 2021

Abstract

Deep learning has achieved remarkable success in a variety of real-life tasks, such as speech and vision. However, the vast computational complexity of convolutional neural networks (CNNs) has limited the speed at which these networks run in hardware. In recent years, network quantization has made it possible to quantize networks to 16-bit fixed point, 8-bit integer, and even binary precision while maintaining the original accuracy, yet the computational cost of network inference remains considerable. Exploring high-performance, efficient hardware architectures designed for quantized neural networks (QNNs) is therefore necessary to eliminate the bottleneck of high-density computing requirements.
FPGAs are highly parallel hardware computing platforms whose outstanding advantage is a large supply of primary configurable logic resources. We explore implementing convolution calculations with LUTs, introduce FPGA-based integer multipliers and addition trees, and propose an efficient computing architecture for QNNs. By optimizing the Winograd convolution algorithm for QNNs, we demonstrate that our scheme significantly reduces the number of multipliers without using any DSP resources, cutting LUT usage by at least 2.25×. Our LUT-based QNN architecture shortens latency by up to 19.3× and delivers more effective performance than other methods.
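The multiplier savings from Winograd convolution can be illustrated with the smallest 1D case, F(2,3): computing two outputs of a 3-tap convolution takes 6 multiplications directly, but only 4 after the Winograd transforms. The sketch below is illustrative only, not the paper's implementation; the function names are our own, and the halves in the filter transform (which quantized integer designs typically precompute offline) are handled here in floating point for clarity.

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap 1D convolution
    using 4 multiplications instead of the direct 6."""
    g0, g1, g2 = g          # filter taps
    d0, d1, d2, d3 = d      # input tile
    # Filter transform (G @ g) -- computed once offline in practice
    u0, u1, u2, u3 = g0, (g0 + g1 + g2) / 2, (g0 - g1 + g2) / 2, g2
    # Input transform (B^T @ d) -- additions/subtractions only
    v0, v1, v2, v3 = d0 - d2, d1 + d2, d2 - d1, d1 - d3
    # Element-wise products: the only 4 multiplications
    m0, m1, m2, m3 = u0 * v0, u1 * v1, u2 * v2, u3 * v3
    # Output transform (A^T @ m) -- additions/subtractions only
    return [m0 + m1 + m2, m1 - m2 - m3]

def direct_conv3(d, g):
    """Reference: direct 3-tap convolution, 6 multiplications."""
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]

print(winograd_f23([1, 2, 3, 4], [1, 2, 3]))  # matches direct_conv3
```

Extended to 2D tiles as F(2×2, 3×3), the same construction produces a 2×2 output tile with 16 multiplications instead of 36, a 2.25× reduction — the kind of multiplier saving that makes a pure-LUT arithmetic datapath practical.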


Cited By

  • (2022) A Higher Performance Accelerator for Resource-Limited FPGA to Deploy Deeper Object Detection Networks. 2022 IEEE 16th International Conference on Anti-counterfeiting, Security, and Identification (ASID), pp. 1-5. DOI: 10.1109/ASID56930.2022.9995953. Online publication date: 2-Dec-2022.


Published In

ASSE '21: 2021 2nd Asia Service Sciences and Software Engineering Conference
February 2021
143 pages
ISBN:9781450389082
DOI:10.1145/3456126

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. LUT-based FPGA accelerator
  2. Quantized neural networks
  3. Winograd convolution algorithm

