Abstract
Recent developments in deep neural networks (DNNs) have led to their adoption in safety-critical systems, which in turn has heightened the need to guarantee their safety. Such safety properties of DNNs can be proven using tools developed by the verification community. However, these tools are themselves prone to implementation bugs and numerical stability problems, which make their reliability questionable. To overcome this, some verifiers produce proofs of their results, which can then be checked by a trusted checker. In this work, we present a novel implementation of a proof checker for DNN verification. It improves on existing implementations by offering numerical stability and greater verifiability. To achieve this, we leverage two key capabilities of Imandra, an industrial theorem prover: its support for exact real arithmetic and its formal verification infrastructure. So far, we have implemented a proof checker in Imandra, specified its correctness properties, and started to verify the checker’s compliance with them. Our ongoing work focuses on completing the formal verification of the checker and further optimising its performance.
R. Desmartin and O. Isac—Both authors contributed equally.
R. Desmartin—Funded by Imandra Inc.
E. Komendantskaya—Funded by EPSRC grant AISEC (EP/T026952/1) and NCSC grant “Neural Network Verification: in search of the missing spec”.
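To make the approach described in the abstract more concrete, the following is a minimal, hypothetical sketch in plain OCaml (the language family on which Imandra's modelling language is based) of the kind of step such a proof checker performs, and of why exact arithmetic matters: validating a Farkas-style certificate that a system of linear constraints is infeasible, using exact rationals rather than floating point. All names and data structures here (`rat`, `lin_constraint`, `check_farkas`) are illustrative assumptions, not the authors' actual implementation or proof format.

```ocaml
(* Hypothetical sketch (not the authors' checker): validate a Farkas-style
   certificate that a system of constraints  coeffs . x <= bound  is
   infeasible, using exact rational arithmetic to avoid the floating-point
   rounding issues mentioned in the abstract. *)

(* Rationals as normalised pairs num/den with den > 0. *)
type rat = { num : int; den : int }

let rec gcd a b = if b = 0 then abs a else gcd b (a mod b)

let mk_rat n d =
  let s = if d < 0 then -1 else 1 in
  let g = gcd n d in
  if g = 0 then { num = 0; den = 1 }
  else { num = s * n / g; den = s * d / g }

let zero = mk_rat 0 1
let add q r = mk_rat ((q.num * r.den) + (r.num * q.den)) (q.den * r.den)
let mul q r = mk_rat (q.num * r.num) (q.den * r.den)
let is_nonneg q = q.num >= 0

(* One linear constraint; all constraints are assumed to share the same
   number of variables. *)
type lin_constraint = { coeffs : rat list; bound : rat }

let scale k c = { coeffs = List.map (mul k) c.coeffs; bound = mul k c.bound }

let combine c1 c2 =
  { coeffs = List.map2 add c1.coeffs c2.coeffs;
    bound  = add c1.bound c2.bound }

(* The certificate is one non-negative multiplier per constraint.  It is
   accepted iff the weighted sum of the constraints collapses to
   0 <= c with c < 0, which no assignment of the variables can satisfy. *)
let check_farkas (constraints : lin_constraint list) (cert : rat list) : bool =
  List.length constraints = List.length cert
  && List.for_all is_nonneg cert
  && (match constraints with
      | [] -> false
      | c0 :: _ ->
        let dim = List.length c0.coeffs in
        let empty = { coeffs = List.init dim (fun _ -> zero); bound = zero } in
        let total =
          List.fold_left2 (fun acc k c -> combine acc (scale k c))
            empty cert constraints
        in
        List.for_all (fun q -> q.num = 0) total.coeffs
        && total.bound.num < 0)
```

In Imandra, an executable model of this shape could then be the subject of a verification goal, e.g. that whenever `check_farkas` accepts a certificate, no rational assignment satisfies all the constraints; this is the kind of correctness property the abstract refers to as being specified and verified against the checker.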
Acknowledgements
We thank the reviewers for their valuable comments and suggestions, which greatly helped us to improve our manuscript.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Desmartin, R., Isac, O., Passmore, G., Stark, K., Komendantskaya, E., Katz, G. (2023). Towards a Certified Proof Checker for Deep Neural Network Verification. In: Glück, R., Kafle, B. (eds) Logic-Based Program Synthesis and Transformation. LOPSTR 2023. Lecture Notes in Computer Science, vol 14330. Springer, Cham. https://doi.org/10.1007/978-3-031-45784-5_13