
DOI: 10.1145/3287624.3288751

A system-level perspective to understand the vulnerability of deep learning systems

Published: 21 January 2019

Abstract

Deep neural networks (DNNs) now achieve human-level performance on many machine learning applications, such as self-driving cars, gaming, and computer-aided diagnosis. However, recent studies show that this promising technique has become a major attack target, significantly threatening the safety of machine learning services. On one hand, adversarial and poisoning attacks that exploit DNN algorithm vulnerabilities can mislead decisions with very high confidence. On the other hand, system-level DNN attacks, built upon the models, training/inference algorithms, and the hardware and software involved in DNN execution, have also emerged, causing more diversified damage such as denial of service and private data theft. In this paper, we present an overview of these emerging system-level DNN attacks by systematically formulating their attack routines. Several representative cases are selected in our study to summarize the characteristics of system-level DNN attacks. Based on our formulation, we further discuss the challenges and several possible techniques for mitigating such emerging system-level DNN attacks.
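The adversarial attacks the abstract mentions can be illustrated with the fast gradient sign method (FGSM) of Goodfellow et al.: perturb an input in the direction of the sign of the loss gradient, x_adv = x + eps * sign(∇x L(x, y)). The sketch below is a minimal illustration on a toy logistic-regression "model" with hand-picked weights, not the DNNs surveyed in the paper; all names and values here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against a logistic model: move x in the direction
    that increases the cross-entropy loss for the true label y."""
    p = sigmoid(w @ x + b)       # model confidence for class 1
    grad_x = (p - y) * w         # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

# Toy model and a clean input with true label y = 1.
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([0.4, -0.2, 0.1])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.5)
p_clean = sigmoid(w @ x + b)       # ~0.74: correctly confident
p_adv = sigmoid(w @ x_adv + b)     # ~0.33: prediction flips
print(p_clean > 0.5 > p_adv)       # prints True
```

Even though each coordinate moves by at most eps, the perturbation is aligned with the gradient, so the model's confidence in the correct class collapses; this is the "decision misleading" effect the abstract refers to.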


Cited By

  • (2024) Security for Machine Learning-based Software Systems: A Survey of Threats, Practices, and Challenges. ACM Computing Surveys 56(6), 1-38. DOI: 10.1145/3638531. Online publication date: 23-Feb-2024.
  • (2021) Security and Privacy Challenges of Deep Learning. Research Anthology on Privatizing and Securing Data, 1258-1280. DOI: 10.4018/978-1-7998-8954-0.ch059. Online publication date: 2021.
  • (2020) Security and Privacy Challenges of Deep Learning. Deep Learning Strategies for Security Enhancement in Wireless Sensor Networks, 42-64. DOI: 10.4018/978-1-7998-5068-7.ch003. Online publication date: 2020.
  • (2020) On Recent Security Issues in Machine Learning. 2020 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), 1-6. DOI: 10.23919/SoftCOM50211.2020.9238337. Online publication date: 17-Sep-2020.
  • (2019) Malware Dynamic Analysis Evasion Techniques. ACM Computing Surveys 52(6), 1-28. DOI: 10.1145/3365001. Online publication date: 14-Nov-2019.


      Published In

      ASPDAC '19: Proceedings of the 24th Asia and South Pacific Design Automation Conference
      January 2019
      794 pages
      ISBN:9781450360074
      DOI:10.1145/3287624
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


      In-Cooperation

      • IEICE ESS: Institute of Electronics, Information and Communication Engineers, Engineering Sciences Society
      • IEEE CAS
      • IEEE CEDA
      • IPSJ SIG-SLDM: Information Processing Society of Japan, SIG System LSI Design Methodology

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. DNN
      2. machine learning
      3. mitigation
      4. security
      5. system-level

      Qualifiers

      • Research-article

      Conference

      ASPDAC '19

      Acceptance Rates

      Overall Acceptance Rate 466 of 1,454 submissions, 32%


      Article Metrics

      • Downloads (Last 12 months)3
      • Downloads (Last 6 weeks)1
      Reflects downloads up to 13 Feb 2025

