DOI: 10.1145/3597503.3623332

VeRe: Verification Guided Synthesis for Repairing Deep Neural Networks

Published: 06 February 2024

Abstract

Neural network repair aims to fix the 'bugs' of neural networks by modifying the model's architecture or parameters. However, due to the data-driven nature of neural networks, it is difficult to explain the relationship between internal neurons and erroneous behaviors, which makes further repair challenging. While several works identify responsible neurons through gradient or causality analysis, their effectiveness relies heavily on the quality of the available 'bugged' data and on multiple heuristics for layer or neuron selection. In this work, we address the issue by utilizing the power of formal verification (in particular, neural network verification). Specifically, we propose VeRe, a verification-guided neural network repair framework that performs fault localization based on linear relaxation to symbolically calculate the repair significance of neurons, and then optimizes the parameters of problematic neurons to repair erroneous behaviors. We evaluated VeRe on various repair tasks, and our experimental results show that VeRe can efficiently and effectively repair all neural networks without degrading the model's performance. For the task of removing backdoors, VeRe successfully reduces the attack success rate from 98.47% to 0.38% on average, while causing an average performance drop of 0.9%. For the task of repairing safety properties, VeRe successfully repairs all 36 tasks and achieves 99.87% generalization on average.
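To make the abstract's idea concrete, here is a minimal, purely illustrative sketch of verification-guided fault localization on a tiny ReLU network. Interval bound propagation stands in for the paper's linear relaxation, and the `repair_significance` heuristic (outgoing-weight magnitude times activation-interval width) is an assumption for illustration, not VeRe's actual scoring function; the network weights are hand-picked.

```python
# Hypothetical sketch: interval bounds + a "repair significance" ranking for
# hidden neurons. All names and the scoring heuristic are illustrative
# assumptions, not the paper's algorithm.

def interval_affine(lo, hi, W, b):
    """Propagate input box [lo, hi] through y = W x + b (rows of W = outputs)."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """Exact interval image of the ReLU activation."""
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

def repair_significance(h_lo, h_hi, W_out):
    """Score each hidden neuron by how much its activation range can sway the
    output: |outgoing weight| * interval width (an assumed proxy)."""
    return [sum(abs(row[i]) for row in W_out) * (h_hi[i] - h_lo[i])
            for i in range(len(h_lo))]

# Tiny 2-3-1 network with hand-picked weights (illustrative only).
W1 = [[1.0, -1.0], [0.5, 0.5], [-1.0, 2.0]]
b1 = [0.0, 0.1, -0.2]
W2 = [[2.0, -0.5, 1.0]]
b2 = [0.0]

x_lo, x_hi = [0.0, 0.0], [1.0, 1.0]          # input region of the property
pre_lo, pre_hi = interval_affine(x_lo, x_hi, W1, b1)
h_lo, h_hi = interval_relu(pre_lo, pre_hi)
y_lo, y_hi = interval_affine(h_lo, h_hi, W2, b2)

scores = repair_significance(h_lo, h_hi, W2)
candidate = max(range(len(scores)), key=scores.__getitem__)
print("output bounds:", (y_lo[0], y_hi[0]))
print("most significant hidden neuron:", candidate)
```

In a repair loop, the highest-scoring neuron would be the one whose parameters get adjusted first, with the verifier re-run to check whether the property now holds.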


Cited By

  • (2024) AutoRIC: Automated Neural Network Repairing Based on Constrained Optimization. ACM Transactions on Software Engineering and Methodology. DOI: 10.1145/3690634. Online publication date: 4 September 2024.
  • (2024) Interpretability Based Neural Network Repair. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, 908--919. DOI: 10.1145/3650212.3680330. Online publication date: 11 September 2024.


Published In

ICSE '24: Proceedings of the IEEE/ACM 46th International Conference on Software Engineering
May 2024, 2942 pages
ISBN: 9798400702174
DOI: 10.1145/3597503

In-Cooperation

  • Faculty of Engineering of University of Porto

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. DNN repair
      2. verification guided synthesis
      3. fault localization

      Qualifiers

      • Research-article

      Funding Sources

      • the Key R&D Programs of Zhejiang
      • the National Natural Science Foundation of China
      • the CAS Project for Young Scientists in Basic Research
      • the Joint Funds of National Natural Science Foundation of China

Conference

ICSE '24

      Acceptance Rates

      Overall Acceptance Rate 276 of 1,856 submissions, 15%


