
Hardware-Aware Neural Dropout Search for Reliable Uncertainty Prediction on FPGA

Published: 07 November 2024
DOI: 10.1145/3649329.3656528

Abstract

The increasing deployment of artificial intelligence (AI) for critical decision-making amplifies the need for trustworthy AI, in which uncertainty estimation plays a pivotal role. Dropout-based Bayesian Neural Networks (BayesNNs) are prominent in this field, offering reliable uncertainty estimates. Despite their effectiveness, existing dropout-based BayesNNs typically employ a uniform dropout design across different layers, leading to suboptimal performance. Moreover, because different applications require tailored dropout strategies for optimal performance, manually tuning dropout configurations for each application is both error-prone and labor-intensive. To address these challenges, this paper proposes a novel neural dropout search framework that automatically optimizes both dropout-based BayesNNs and their hardware implementations on FPGA. We leverage one-shot supernet training with an evolutionary algorithm for efficient dropout optimization. A layer-wise dropout search space is introduced to enable the automatic design of dropout-based BayesNNs with heterogeneous dropout configurations. Extensive experiments demonstrate that the proposed framework can effectively find design configurations on the Pareto frontier. Compared to manually designed dropout-based BayesNNs on GPU, our searched FPGA designs achieve up to 33× higher energy efficiency. Compared to state-of-the-art FPGA BayesNN designs, our solutions achieve both higher algorithmic performance and better energy efficiency.
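The abstract compresses two mechanisms: Monte-Carlo (MC) dropout networks whose dropout rate may differ per layer, and an evolutionary search over those layer-wise rates. Below is a minimal PyTorch sketch of both ideas. It is not the authors' implementation, and every name in it (LayerwiseMCDropoutNet, predict_with_uncertainty, evolve, SEARCH_SPACE, the placeholder fitness) is hypothetical; in the actual framework, fitness would combine algorithmic quality with FPGA hardware cost, and candidates would be evaluated on a shared-weight supernet rather than retrained.

```python
# Hypothetical sketch, not the paper's code: (1) an MLP that keeps dropout
# active at inference with a possibly different rate per layer (heterogeneous
# MC dropout), and (2) a toy evolutionary search over those layer-wise rates.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class LayerwiseMCDropoutNet(nn.Module):
    """MLP whose dropout probability can differ per hidden layer."""

    def __init__(self, dims, dropout_rates):
        super().__init__()
        assert len(dropout_rates) == len(dims) - 2  # one rate per hidden layer
        self.layers = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims, dims[1:]))
        self.rates = dropout_rates

    def forward(self, x):
        for i, layer in enumerate(self.layers[:-1]):
            x = F.relu(layer(x))
            # Keeping dropout on at inference time is what turns the network
            # into an MC-dropout BayesNN (Gal & Ghahramani style).
            x = F.dropout(x, p=self.rates[i], training=True)
        return self.layers[-1](x)


def predict_with_uncertainty(model, x, n_samples=16):
    """Several stochastic forward passes: mean = prediction, variance = uncertainty."""
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)


# Toy evolutionary search over layer-wise dropout configurations.
SEARCH_SPACE = [0.1, 0.2, 0.3, 0.5]


def fitness(rates):
    # Placeholder objective; the real framework would score accuracy,
    # uncertainty quality, and FPGA cost of each candidate.
    return -sum(rates) / len(rates)


def evolve(n_layers, pop_size=8, generations=5):
    pop = [[random.choice(SEARCH_SPACE) for _ in range(n_layers)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Mutation-only offspring, for brevity.
        children = [[r if random.random() > 0.3 else random.choice(SEARCH_SPACE)
                     for r in p] for p in parents]
        pop = parents + children
    return max(pop, key=fitness)


best_rates = evolve(n_layers=2)
model = LayerwiseMCDropoutNet([16, 32, 32, 4], best_rates)
mean, var = predict_with_uncertainty(model, torch.randn(1, 16))
```

Evaluating each candidate on a single supernet trained once, instead of training every configuration from scratch, is what keeps the evolutionary search tractable; the same loop could then rank candidates by measured FPGA latency or energy rather than the toy objective above.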


Published In

DAC '24: Proceedings of the 61st ACM/IEEE Design Automation Conference
June 2024, 2159 pages
ISBN: 9798400706011
DOI: 10.1145/3649329

Publisher

Association for Computing Machinery, New York, NY, United States


        Funding Sources

        • EPSRC

        Conference

        DAC '24
        Sponsor:
        DAC '24: 61st ACM/IEEE Design Automation Conference
        June 23 - 27, 2024
        CA, San Francisco, USA

        Acceptance Rates

        Overall Acceptance Rate 1,770 of 5,499 submissions, 32%
