
Nacc-Guard: a lightweight DNN accelerator architecture for secure deep learning

The Journal of Supercomputing

Abstract

Recent breakthroughs in artificial intelligence and deep neural networks (DNNs) have produced explosive demand for computing platforms equipped with customized domain-specific accelerators. However, DNN accelerators have security vulnerabilities of their own. Prior work on DNN attacks and defenses has focused mainly on training and inference algorithms or on model robustness; the design of a secure accelerator architecture has received comparatively little attention, despite the rapid development of FPGA-based heterogeneous computing SoCs. To address this gap, we propose Nacc-Guard, a lightweight DNN accelerator architecture that effectively defends against neural network bit-flip attacks and memory Trojan attacks. By combining a linear randomized encryption scheme based on the Trivium stream cipher, confusion coding of interrupt signals, and a hash-based message authentication code (HMAC), Nacc-Guard guarantees the integrity of the uploaded DNN file while also ensuring the confidentiality of buffer data. To evaluate Nacc-Guard, we implement NVDLA and a SIMD accelerator at RTL, each coupled with a RISC-V Rocket core and an ARM processor. Experimental evaluation shows that Nacc-Guard reduces hardware overhead by 3× compared with conventional AES. Experiments on VGG, ResNet50, GoogLeNet, and YOLOv4-tiny validate that the framework provides secure DNN inference with negligible performance loss, achieving a 3.63× speedup and a 35% energy reduction over the AES baseline.
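Nacc-Guard's defense pairs a stream cipher for buffer confidentiality with a message authentication code for file integrity. The Python sketch below shows how these two primitives compose: it implements the standard Trivium keystream generator (80-bit key and IV, 288-bit state, 1152 warm-up rounds) and uses it, together with an HMAC tag, to seal and verify a model blob. The seal/open_sealed helpers, the choice of HMAC-SHA256, and the MSB-first bit packing are illustrative assumptions for this sketch, not the paper's RTL design; the interrupt-signal confusion coding is not modeled here.

    import hmac
    import hashlib

    def bits(data: bytes) -> list:
        """Unpack bytes into a bit list, MSB first (bit ordering is a convention choice)."""
        return [(b >> (7 - i)) & 1 for b in data for i in range(8)]

    def to_bytes(bitlist: list) -> bytes:
        """Pack a bit list (MSB first) back into bytes."""
        return bytes(sum(bit << (7 - i) for i, bit in enumerate(bitlist[j:j + 8]))
                     for j in range(0, len(bitlist), 8))

    def trivium_keystream(key: bytes, iv: bytes, nbits: int) -> list:
        """Standard Trivium: 80-bit key/IV, 288-bit state, 4*288 warm-up rounds."""
        assert len(key) == 10 and len(iv) == 10
        # Load the three shift registers: key | IV | fixed tail (..., 1, 1, 1)
        s = bits(key) + [0] * 13 + bits(iv) + [0] * 4 + [0] * 108 + [1, 1, 1]
        out = []
        for i in range(4 * 288 + nbits):
            t1 = s[65] ^ s[92]
            t2 = s[161] ^ s[176]
            t3 = s[242] ^ s[287]
            if i >= 4 * 288:                  # warm-up rounds emit no keystream
                out.append(t1 ^ t2 ^ t3)
            t1 ^= (s[90] & s[91]) ^ s[170]
            t2 ^= (s[174] & s[175]) ^ s[263]
            t3 ^= (s[285] & s[286]) ^ s[68]
            # Rotate the three registers (lengths 93, 84, 111)
            s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[177:287]
        return out

    def seal(model_blob: bytes, enc_key: bytes, mac_key: bytes, iv: bytes):
        """Encrypt a DNN model blob (keystream XOR) and append an integrity tag."""
        ks = trivium_keystream(enc_key, iv, 8 * len(model_blob))
        ct = to_bytes([p ^ k for p, k in zip(bits(model_blob), ks)])
        tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
        return ct, tag

    def open_sealed(ct: bytes, tag: bytes, enc_key: bytes, mac_key: bytes, iv: bytes) -> bytes:
        """Reject tampered files (bit flips, Trojan rewrites) before decrypting."""
        expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("HMAC mismatch: DNN file was modified in memory")
        ks = trivium_keystream(enc_key, iv, 8 * len(ct))
        return to_bytes([c ^ k for c, k in zip(bits(ct), ks)])

    # Example: round-trip a toy "model file" with hypothetical 10-byte key/IV values.
    ct, tag = seal(b"weights...", b"K" * 10, b"mac-key", b"I" * 10)
    assert open_sealed(ct, tag, b"K" * 10, b"mac-key", b"I" * 10) == b"weights..."

Checking the tag before decryption means a bit-flipped or Trojan-rewritten model is rejected before any tampered weights reach the accelerator, which mirrors the verify-then-use ordering the abstract describes.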


Data availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.


Author information


Corresponding author

Correspondence to Peng Li.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, P., Che, C. & Hou, R. Nacc-Guard: a lightweight DNN accelerator architecture for secure deep learning. J Supercomput 80, 5815–5831 (2024). https://doi.org/10.1007/s11227-023-05671-9
