Research Article | Open Access

Input-Relational Verification of Deep Neural Networks

Published: 20 June 2024

Abstract

We consider the verification of input-relational properties defined over deep neural networks (DNNs), such as robustness against universal adversarial perturbations and monotonicity. Precise verification of these properties requires reasoning about multiple executions of the same DNN. We introduce a novel concept of difference tracking to compute the difference between the outputs of two executions of the same DNN at all layers. We design a new abstract domain, DiffPoly, for efficient difference tracking that can scale to large DNNs. DiffPoly is equipped with custom abstract transformers for common activation functions (ReLU, Tanh, Sigmoid, etc.) and affine layers, and can create precise linear cross-execution constraints. We implement an input-relational verifier for DNNs called RaVeN, which uses DiffPoly and linear program formulations to handle a wide range of input-relational properties. Our experimental results on challenging benchmarks show that by leveraging precise linear constraints defined over multiple executions of the DNN, RaVeN gains substantial precision over baselines on a wide range of datasets, networks, and input-relational properties.
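The difference-tracking idea from the abstract can be illustrated with a minimal interval-based sketch. This is an assumption-laden simplification, not the paper's DiffPoly domain (which builds linear cross-execution constraints): it only propagates elementwise bounds on the difference x1 - x2 between two executions. Through an affine layer the difference transforms exactly (the bias cancels), and through ReLU it can be bounded using the fact that ReLU is monotone and 1-Lipschitz.

```python
import numpy as np

def affine_diff(W, dl, du):
    """Difference through an affine layer y = Wx + b is exact:
    delta_out = W @ delta_in (the bias cancels between executions).
    dl, du are elementwise bounds on delta_in; interval arithmetic
    on the matrix-vector product gives bounds on delta_out."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ dl + Wn @ du, Wp @ du + Wn @ dl

def relu_diff(l1, u1, l2, u2, dl, du):
    """Sound (coarse) bounds on ReLU(x1) - ReLU(x2) from per-execution
    interval bounds [l1,u1], [l2,u2] and difference bounds [dl,du].
    ReLU is monotone and 1-Lipschitz, so the output difference has the
    same sign as the input difference and no larger magnitude."""
    # Interval of ReLU(x1) - ReLU(x2) from the individual bounds alone...
    lo = np.maximum(l1, 0) - np.maximum(u2, 0)
    hi = np.maximum(u1, 0) - np.maximum(l2, 0)
    # ...intersected with the Lipschitz/monotonicity bound on the difference.
    return np.maximum(lo, np.minimum(dl, 0)), np.minimum(hi, np.maximum(du, 0))
```

Even this crude version shows why tracking differences beats analyzing the two executions independently: for inputs that differ by a small perturbation, the difference bounds stay tight at every layer, whereas subtracting two independently computed output intervals loses the correlation between the executions.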


Published In

Proceedings of the ACM on Programming Languages, Volume 8, Issue PLDI
June 2024, 2198 pages
EISSN: 2475-1421
DOI: 10.1145/3554317
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. Abstract Interpretation
  2. Deep Learning
  3. Relational Verification

Funding Sources

  • NSF (National Science Foundation)
  • Google
  • Qualcomm
