Open access

Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation

Published: 01 June 2021

Abstract

Deep neural networks (DNNs) have been shown to lack robustness, as they are vulnerable to small perturbations of their inputs. This has led to safety concerns about applying DNNs in safety-critical domains. Several verification approaches based on constraint solving have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from a scalability problem, i.e., only small DNNs can be handled. To deal with this, abstraction-based approaches have been proposed, but they unfortunately face a precision problem, i.e., the obtained bounds are often loose. In this paper, we focus on a variety of local robustness properties and a (δ,ε)-global robustness property of DNNs, and investigate novel strategies for combining the constraint-solving and abstraction-based approaches to work with these properties:
We propose a method to verify local robustness, which improves a recent proposal for analyzing DNNs through the classic abstract interpretation technique by adding a novel symbolic propagation technique. Specifically, the values of neurons are represented symbolically and propagated from the input layer to the output layer, on top of the underlying abstract domains. This achieves significantly higher precision and thus can prove more properties.
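To make the idea concrete, here is a minimal sketch of symbolic propagation over an input box for a fully connected ReLU network. The function name and the representation (an affine form over the input variables plus an interval error term) are illustrative assumptions for exposition, not PRODeep's actual interface; a ReLU is assumed after every affine layer.

```python
# A minimal sketch of symbolic propagation, assuming a fully connected
# ReLU network given as lists of numpy weight matrices and bias vectors.
# Illustrative only -- not PRODeep's actual interface.
import numpy as np

def symbolic_propagate(weights, biases, x_lo, x_hi):
    """Sound output bounds for all inputs in the box [x_lo, x_hi].

    Each neuron is tracked as C.x + c + e with e in [el, eu]: a symbolic
    affine form over the *input* variables plus an interval error term.
    Affine layers are exact on the symbolic part; only unstable ReLUs
    force a (sound) fall-back to a concrete interval.
    Assumes a ReLU after every affine layer (adapt the last step if the
    final layer emits raw logits).
    """
    n = len(x_lo)
    C, c = np.eye(n), np.zeros(n)        # neuron i starts as x_i
    el, eu = np.zeros(n), np.zeros(n)    # no error term yet

    def concretize(C, c, el, eu):
        # Tightest interval of C.x + c over the box, plus the error term.
        lb = np.where(C > 0, C * x_lo, C * x_hi).sum(axis=1) + c + el
        ub = np.where(C > 0, C * x_hi, C * x_lo).sum(axis=1) + c + eu
        return lb, ub

    for W, b in zip(weights, biases):
        # Affine layer: exact on the symbolic part, intervals for errors.
        C, c = W @ C, W @ c + b
        el, eu = (np.where(W > 0, W * el, W * eu).sum(axis=1),
                  np.where(W > 0, W * eu, W * el).sum(axis=1))
        lb, ub = concretize(C, c, el, eu)
        # ReLU: keep the symbolic form where the sign is fixed; otherwise
        # soundly fall back to the concrete interval [0, ub].
        dead = ub <= 0
        unstable = (lb < 0) & (ub > 0)
        off = dead | unstable
        C[off], c[off] = 0.0, 0.0
        el[off], eu[off] = 0.0, 0.0
        eu[unstable] = ub[unstable]
    return concretize(C, c, el, eu)
```

Carrying the exact affine form through the linear layers is what avoids the dependency problem that makes plain interval propagation loose: correlated occurrences of the same input variable cancel symbolically instead of being bounded twice.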
We propose a Lipschitz constant based verification framework. By utilising Lipschitz constants computed via semidefinite programming, we can prove the global robustness of DNNs. We show how the Lipschitz constant can be tightened when it is restricted to small regions; a tightened Lipschitz constant can be helpful in proving local robustness properties. Furthermore, a global Lipschitz constant can be used to accelerate batch local robustness verification, and thus to support the verification of global robustness.
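The sketch below illustrates how such a constant is used once obtained. Two hedges apply: the paper derives tighter constants via semidefinite programming, whereas this sketch substitutes the coarser product-of-spectral-norms upper bound, and both function names are hypothetical.

```python
# A hedged sketch of Lipschitz-based certification. The paper computes
# tighter constants via semidefinite programming; the coarser bound
# below (product of spectral norms) stands in for illustration only.
import numpy as np

def global_lipschitz_upper_bound(weights):
    # ReLU is 1-Lipschitz, so an L2 Lipschitz constant of the network is
    # bounded by the product of the layers' spectral norms.
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

def certified_robust(logits, eps, L):
    """Check that no L2 perturbation of radius eps can flip the class.

    Each logit moves by at most L*eps, so any pairwise logit gap shrinks
    by at most 2*L*eps; the prediction is provably stable whenever the
    margin between the top logit and the runner-up exceeds that.
    """
    second, top = np.sort(logits)[-2:]
    return (top - second) > 2 * L * eps
```

This is also how a single global constant accelerates batch verification, as noted above: the cheap margin test screens many inputs at once, and the expensive per-input analysis is only invoked on inputs it fails to certify.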
We show how the proposed abstract interpretation and Lipschitz constant based approaches can benefit from each other to obtain more precise results. Moreover, they can also be exploited and combined to improve constraint-based approaches.
We implement our methods in the tool PRODeep, and report detailed experimental results on several benchmarks.

Published In

Formal Aspects of Computing, Volume 33, Issue 3
Special Issue on Formal Methods and AI
Jun 2021
156 pages
ISSN: 0934-5043
EISSN: 1433-299X

Publisher

Springer-Verlag

Berlin, Heidelberg

Publication History

Published: 01 June 2021
Accepted: 26 March 2021
Revision received: 23 February 2021
Received: 18 September 2020
Published in FAC Volume 33, Issue 3

Author Tags

  1. Deep neural network
  2. Verification
  3. Robustness
  4. Abstract interpretation
  5. Symbolic propagation
  6. Lipschitz constant

Qualifiers

  • Research-article

Cited By

  • (2023) A Review of Abstraction Methods Toward Verifying Neural Networks. ACM Transactions on Embedded Computing Systems 23(4), 1–19. https://doi.org/10.1145/3617508. Online publication date: 28-Aug-2023
  • (2023) What, Indeed, is an Achievable Provable Guarantee for Learning-Enabled Safety-Critical Systems. Bridging the Gap Between AI and Reality, 55–76. https://doi.org/10.1007/978-3-031-46002-9_4. Online publication date: 23-Oct-2023
  • (2023) Improving Neural Network Verification Efficiency Through Perturbation Refinement. Artificial Neural Networks and Machine Learning – ICANN 2023, 504–515. https://doi.org/10.1007/978-3-031-44207-0_42. Online publication date: 26-Sep-2023
  • (2023) Verifying Attention Robustness of Deep Neural Networks Against Semantic Perturbations. NASA Formal Methods, 37–61. https://doi.org/10.1007/978-3-031-33170-1_3. Online publication date: 16-May-2023
  • (2022) Dependable learning-enabled multiagent systems. AI Communications 35(4), 407–420. https://doi.org/10.3233/AIC-220128. Online publication date: 1-Jan-2022
  • (2022) Application of Business Intelligence Based on the Deep Neural Network in Credit Scoring. Security and Communication Networks, vol. 2022. https://doi.org/10.1155/2022/2663668. Online publication date: 1-Jan-2022
  • (2022) CLEVEREST: Accelerating CEGAR-based Neural Network Verification via Adversarial Attacks. Static Analysis, 449–473. https://doi.org/10.1007/978-3-031-22308-2_20. Online publication date: 5-Dec-2022
