Optimization and abstraction: a synergistic approach for analyzing neural network robustness

Published: 08 June 2019 · DOI: 10.1145/3314221.3314614

Abstract

In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks. Intuitively, robustness means that small perturbations to an input do not cause the network to perform misclassifications. In this paper, we present a novel algorithm for verifying robustness properties of neural networks. Our method synergistically combines gradient-based optimization methods for counterexample search with abstraction-based proof search to obtain a sound and (δ-)complete decision procedure. Our method also employs a data-driven approach to learn a verification policy that guides abstract interpretation during proof search. We have implemented the proposed approach in a tool called Charon and experimentally evaluated it on hundreds of benchmarks. Our experiments show that the proposed approach significantly outperforms three state-of-the-art tools, namely AI², Reluplex, and ReluVal.
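
The sketch below is not the Charon implementation; it is a small NumPy toy illustrating the kind of synergy the abstract describes: a projected-gradient counterexample search alternating with a sound abstract-interpretation pass, splitting the input region when neither succeeds. It substitutes the simple interval (box) domain for the richer domains used in the paper and a fixed widest-dimension split for the learned verification policy; all names here (`verify_box`, `pgd_counterexample`, `interval_bounds`) are illustrative, not from the paper or tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed 2-layer ReLU classifier: x -> relu(W1 @ x + b1) -> W2 @ h + b2
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def margin_and_grad(x, true_cls):
    """Margin = score(true class) - best other score, and its gradient in x."""
    pre = W1 @ x + b1
    logits = W2 @ np.maximum(pre, 0.0) + b2
    other = int(np.argmax(np.delete(logits, true_cls)))
    other = other if other < true_cls else other + 1
    d_h = W2[true_cls] - W2[other]            # d(margin) / d(hidden)
    grad = W1.T @ (d_h * (pre > 0))           # backprop through the ReLU mask
    return logits[true_cls] - logits[other], grad

def pgd_counterexample(lo, hi, true_cls, steps=50, lr=0.1):
    """Gradient descent on the margin, projected back into the box [lo, hi]."""
    x = (lo + hi) / 2.0
    for _ in range(steps):
        margin, grad = margin_and_grad(x, true_cls)
        if margin <= 0:
            return x                          # misclassified point found
        x = np.clip(x - lr * grad, lo, hi)
    return None

def interval_bounds(lo, hi):
    """Sound lower/upper bounds on the logits over the box, via intervals."""
    def affine(lo, hi, W, b):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
    lo, hi = affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    return affine(lo, hi, W2, b2)

def verify_box(lo, hi, true_cls, depth=12):
    """Counterexample search first, then proof search, then split and recurse."""
    cex = pgd_counterexample(lo, hi, true_cls)
    if cex is not None:
        return ("counterexample", cex)
    out_lo, out_hi = interval_bounds(lo, hi)
    if out_lo[true_cls] > np.delete(out_hi, true_cls).max():
        return "robust"                       # true class provably wins everywhere
    if depth == 0:
        return "unknown"
    d = int(np.argmax(hi - lo))               # split the widest input dimension
    mid = (lo[d] + hi[d]) / 2.0
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[d], lo_right[d] = mid, mid
    for sub_lo, sub_hi in ((lo, hi_left), (lo_right, hi)):
        res = verify_box(sub_lo, sub_hi, true_cls, depth - 1)
        if res != "robust":
            return res
    return "robust"

x0 = np.array([0.3, -0.2])
eps = 0.05
print(verify_box(x0 - eps, x0 + eps, int(np.argmax(forward(x0)))))
```

The actual tool handles much larger networks, uses Bayesian optimization to learn where and how to refine the abstraction, and provides δ-completeness guarantees; none of that machinery is modeled in this toy.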

Supplementary Material

WEBM File (p731-anderson.webm)
MP4 File (3314221.3314614.mp4)
Video Presentation

References

[1]
[n. d.]. ELINA: ETH Library for Numerical Analysis. http://elina.ethz.ch.
[2]
[n. d.]. Google Cloud Platform (GCP). https://cloud.google.com/. Accessed: 2018-11-14.
[3]
ApolloAuto. 2017. apollo. https://github.com/ApolloAuto/apollo.
[4]
Osbert Bastani, Yani Ioannou, Leonidas Lampropoulos, Dimitrios Vytiniotis, Aditya V. Nori, and Antonio Criminisi. 2016. Measuring Neural Net Robustness with Constraints. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain. 2613-2621.
[5]
Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, and Karol Zieba. 2016. End to End Learning for Self-Driving Cars. CoRR abs/1604.07316 (2016). http://arxiv.org/abs/1604.07316.
[6]
Eric Brochu, Vlad M. Cora, and Nando De Freitas. 2010. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning. CoRR abs/1012.2599 (2010).
[7]
Patrick Cousot and Radhia Cousot. 1977. Abstract Interpretation: A Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints. In Proceedings of the 4th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages (POPL '77). ACM, New York, NY, USA, 238-252.
[8]
Souradeep Dutta, Susmit Jha, Sriram Sankaranarayanan, and Ashish Tiwari. 2017. Output Range Analysis for Deep Neural Networks. CoRR abs/1709.09130 (2017). arXiv:1709.09130 http://arxiv.org/abs/1709.09130.
[9]
Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. 2017. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542 (01 2017), 115-118.
[10]
Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song. 2017. Robust Physical-World Attacks on Machine Learning Models. CoRR abs/1707.08945 (2017). arXiv:1707.08945 http://arxiv.org/abs/1707.08945.
[11]
Message Passing Interface Forum. 1994. MPI: A Message-Passing Interface Standard. Technical Report. Knoxville, TN, USA.
[12]
Sicun Gao, Jeremy Avigad, and Edmund M. Clarke. 2012. δ-complete Decision Procedures for Satisfiability over the Reals. In Proceedings of the 6th International Joint Conference on Automated Reasoning (IJCAR'12). Springer-Verlag, Berlin, Heidelberg, 286-300.
[13]
Pranav Garg, Christof Löding, P Madhusudan, and Daniel Neider. 2014. ICE: A robust framework for learning invariants. In International Conference on Computer Aided Verification. Springer, 69-87.
[14]
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. 2018. AI²: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. In 2018 IEEE Symposium on Security and Privacy (SP). IEEE, San Francisco, CA, USA, 3-18.
[15]
Khalil Ghorbal, Eric Goubault, and Sylvie Putot. 2009. The Zonotope Abstract Domain Taylor1+. In Proceedings of the 21st International Conference on Computer Aided Verification (CAV '09). Springer-Verlag, Berlin, Heidelberg, 627-633.
[16]
Yuan Gong and Christian Poellabauer. 2018. An Overview of Vulnerabilities of Voice Controlled Systems. arXiv preprint arXiv:1803.09156 (2018).
[17]
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations.
[18]
Divya Gopinath, Guy Katz, Corina S Pasareanu, and Clark Barrett. 2017. Deepsafe: A data-driven approach for checking adversarial robustness in neural networks. arXiv preprint arXiv:1710.00486 (2017).
[19]
Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick McDaniel. 2017. Adversarial examples for malware detection. In European Symposium on Research in Computer Security. Springer, 62-79.
[20]
Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick D. McDaniel. 2016. Adversarial Perturbations Against Deep Neural Networks for Malware Classification. CoRR abs/1606.04435 (2016). http://arxiv.org/abs/1606.04435.
[21]
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 770-778.
[22]
Kihong Heo, Hakjoo Oh, and Hongseok Yang. 2016. Learning a variable-clustering strategy for octagon from labeled data generated by a static analysis. In International Static Analysis Symposium. Springer, 237-256.
[23]
Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. 2016. Safety Verification of Deep Neural Networks. CoRR abs/1610.06940 (2016).
[24]
Kyle D. Julian, Jessica Lopez, Jeffrey S. Brush, Michael P. Owen, and Mykel J. Kochenderfer. 2016. Policy compression for aircraft collision avoidance systems. In 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC). 1-10.
[25]
Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, and Mykel J. Kochenderfer. 2017. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. In Proceedings of the 29th International Conference On Computer Aided Verification.
[26]
Guy Katz, Clark W. Barrett, David L. Dill, Kyle Julian, and Mykel J. Kochenderfer. 2017. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. CoRR abs/1702.01135 (2017).
[27]
Alex Krizhevsky. 2009. Learning Multiple Layers of Features from Tiny Images. Technical Report.
[28]
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 1097-1105. http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.
[29]
Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2016. Adversarial Machine Learning at Scale. CoRR abs/1611.01236 (2016). arXiv:1611.01236 http://arxiv.org/abs/1611.01236.
[30]
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. In Proceedings of the IEEE. 2278-2324.
[31]
Percy Liang, Omer Tripp, and Mayur Naik. 2011. Learning minimal abstractions. In POPL, Vol. 46. ACM, 31-42.
[32]
Alessio Lomuscio and Lalit Maganti. 2017. An approach to reachability analysis for feed-forward ReLU neural networks. CoRR abs/1706.07351 (2017). arXiv:1706.07351 http://arxiv.org/abs/1706.07351.
[33]
Chunchuan Lyu, Kaizhu Huang, and Hai-Ning Liang. 2015. A Unified Gradient Regularization Family for Adversarial Examples. In 2015 IEEE International Conference on Data Mining, ICDM 2015, Atlantic City, NJ, USA, November 14-17, 2015. 301-309.
[34]
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In International Conference on Learning Representations.
[35]
Ruben Martinez-Cantin. 2014. BayesOpt: A Bayesian Optimization Library for Nonlinear Optimization, Experimental Design and Bandits. Journal of Machine Learning Research 15 (2014), 3915-3919. http://jmlr.org/papers/v15/martinezcantin14a.html.
[36]
Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, and Fabio Roli. 2017. Is deep learning safe for robot vision? adversarial examples against the icub humanoid. In Computer Vision Workshop (ICCVW), 2017 IEEE International Conference on. IEEE, 751-759.
[37]
Jonas Mockus. 2010. Bayesian Heuristic Approach to Discrete and Global Optimization: Algorithms, Visualization, Software, and Applications. Springer-Verlag, Berlin, Heidelberg.
[38]
Anh Mai Nguyen, Jason Yosinski, and Jeff Clune. 2015. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015. 427-436.
[39]
Hakjoo Oh, Hongseok Yang, and Kwangkeun Yi. 2015. Learning a strategy for adapting a program analysis via bayesian optimisation. In ACM SIGPLAN Notices, Vol. 50. ACM, 572-588.
[40]
Nicolas Papernot, Patrick D. McDaniel, and Ian J. Goodfellow. 2016. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. CoRR abs/1605.07277 (2016). arXiv:1605.07277 http://arxiv.org/abs/1605.07277.
[41]
Kexin Pei, Yinzhi Cao, Junfeng Yang, and Suman Jana. 2017. DeepXplore: Automated Whitebox Testing of Deep Learning Systems. In Proceedings of the 26th Symposium on Operating Systems Principles (SOSP '17). ACM, New York, NY, USA, 1-18.
[42]
Kexin Pei, Yinzhi Cao, Junfeng Yang, and Suman Jana. 2017. Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems. CoRR abs/1712.01785 (2017). arXiv:1712.01785 http://arxiv.org/abs/1712.01785.
[43]
Luca Pulina and Armando Tacchella. 2010. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks. In Computer Aided Verification, 22nd International Conference, CAV 2010, Edinburgh, UK, July 15-19, 2010. Proceedings. 243-257.
[44]
Carl Edward Rasmussen and Christopher K. I. Williams. 2006. Gaussian Processes for Machine Learning. The MIT Press.
[45]
Sara Sabour, Yanshuai Cao, Fartash Faghri, and David J. Fleet. 2015. Adversarial Manipulation of Deep Representations. CoRR abs/1511.05122 (2015).
[46]
Karsten Scheibler, Leonore Winterer, Ralf Wimmer, and Bernd Becker. 2015. Towards Verification of Artificial Neural Networks. In Methoden und Beschreibungssprachen zur Modellierung und Verifikation von Schaltungen und Systemen, MBMV 2015, Chemnitz, Germany, March 3-4, 2015. 30-40.
[47]
Rahul Sharma, Saurabh Gupta, Bharath Hariharan, Alex Aiken, and Aditya V Nori. 2013. Verification as learning geometric concepts. In International Static Analysis Symposium. Springer, 388-411.
[48]
Xujie Si, Hanjun Dai, Mukund Raghothaman, Mayur Naik, and Le Song. 2018. Learning loop invariants for program verification. In Proceedings of the Thirty-second Conference on Neural Information Processing Systems.
[49]
Gagandeep Singh, Markus Püschel, and Martin Vechev. 2017. Fast Polyhedra Abstract Domain. In Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages (POPL 2017). ACM, New York, NY, USA, 46-59.
[50]
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013).
[51]
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. CoRR abs/1312.6199 (2013).
[52]
Pedro Tabacof and Eduardo Valle. 2016. Exploring the space of adversarial images. In 2016 International Joint Conference on Neural Networks, IJCNN 2016, Vancouver, BC, Canada, July 24-29, 2016. 426-433.
[53]
Vincent Tjeng and Russ Tedrake. 2017. Verifying Neural Networks with Mixed Integer Programming. CoRR abs/1711.07356 (2017). arXiv:1711.07356 http://arxiv.org/abs/1711.07356.
[54]
Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. 2018. Formal Security Analysis of Neural Networks using Symbolic Intervals. In 27th USENIX Security Symposium (USENIX Security 18). USENIX Association, Baltimore, MD, 1599-1614. https://www.usenix.org/conference/usenixsecurity18/presentation/wang-shiqi.
[55]
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. CoRR abs/1609.08144 (2016). http://arxiv.org/abs/1609.08144.
[56]
Xiaoyong Yuan, Pan He, Qile Zhu, Rajendra Rana Bhat, and Xiaolin Li. 2017. Adversarial Examples: Attacks and Defenses for Deep Learning. CoRR abs/1712.07107 (2017). arXiv:1712.07107 http://arxiv.org/abs/1712.07107.
[57]
Zhenlong Yuan, Yongqiang Lu, Zhaoguo Wang, and Yibo Xue. 2014. Droid-Sec: Deep Learning in Android Malware Detection. In Proceedings of the 2014 ACM Conference on SIGCOMM (SIGCOMM '14). ACM, 371-372.
[58]
Zhenlong Yuan, Yongqiang Lu, and Yibo Xue. 2016. Droiddetector: android malware characterization and detection using deep learning. Tsinghua Science and Technology 21, 1 (Feb 2016), 114-123.

      Published In

      PLDI 2019: Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation
      June 2019
      1162 pages
      ISBN:9781450367127
      DOI:10.1145/3314221
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. Abstract Interpretation
      2. Machine learning
      3. Optimization
      4. Robustness

      Qualifiers

      • Research-article

      Conference

      PLDI '19

      Acceptance Rates

      Overall Acceptance Rate 406 of 2,067 submissions, 20%

      Cited By

• (2025) Decreasing adversarial transferability using gradient information of attack paths. Applied Soft Computing 170, 112692. DOI: 10.1016/j.asoc.2025.112692. Online publication date: Feb 2025.
• (2025) ConstraintFlow: A Declarative DSL for Easy Development of DNN Certifiers. Static Analysis, 407-424. DOI: 10.1007/978-3-031-74776-2_16. Online publication date: 21 Jan 2025.
• (2024) A Parallel Optimization Method for Robustness Verification of Deep Neural Networks. Mathematics 12(12), 1884. DOI: 10.3390/math12121884. Online publication date: 17 Jun 2024.
• (2024) Input-Relational Verification of Deep Neural Networks. Proceedings of the ACM on Programming Languages 8(PLDI), 1-27. DOI: 10.1145/3656377. Online publication date: 20 Jun 2024.
• (2024) Verification of Neural Networks’ Global Robustness. Proceedings of the ACM on Programming Languages 8(OOPSLA1), 1010-1039. DOI: 10.1145/3649847. Online publication date: 29 Apr 2024.
• (2024) Safe and Reliable Training of Learning-Based Aerospace Controllers. 2024 AIAA DATC/IEEE 43rd Digital Avionics Systems Conference (DASC), 1-10. DOI: 10.1109/DASC62030.2024.10749499. Online publication date: 29 Sep 2024.
• (2024) Verifying the Generalization of Deep Learning to Out-of-Distribution Domains. Journal of Automated Reasoning 68(3). DOI: 10.1007/s10817-024-09704-7. Online publication date: 3 Aug 2024.
• (2024) A Literature Review on Verification and Abstraction of Neural Networks Within the Formal Methods Community. Principles of Verification: Cycling the Probabilistic Landscape, 39-65. DOI: 10.1007/978-3-031-75778-5_3. Online publication date: 18 Nov 2024.
• (2024) Boosting Few-Pixel Robustness Verification via Covering Verification Designs. Computer Aided Verification, 377-400. DOI: 10.1007/978-3-031-65630-9_19. Online publication date: 25 Jul 2024.
• (2024) DeepCDCL: A CDCL-based Neural Network Verification Framework. Theoretical Aspects of Software Engineering, 343-355. DOI: 10.1007/978-3-031-64626-3_20. Online publication date: 14 Jul 2024.
