Robustness Testing of AI Systems: A Case Study for Traffic Sign Recognition

  • Conference paper
Artificial Intelligence Applications and Innovations (AIAI 2021)

Abstract

In recent years, AI systems, in particular neural networks, have seen a tremendous increase in performance and are now used in a broad range of applications. Unlike classical symbolic AI systems, neural networks are trained on large data sets, and their inner structure, which may contain billions of parameters, does not lend itself to human interpretation. As a consequence, it is so far not feasible to provide broad guarantees for the correct behaviour of neural networks during operation if they process input data that differ significantly from those seen during training. However, many applications of AI systems are security- or safety-critical and hence require statements on the robustness of the systems when facing unexpected events, whether these occur naturally or are induced by an attacker in a targeted way. As a step towards developing robust AI systems for such applications, this paper shows how the robustness of AI systems can be examined in practice and which methods and metrics can be used to do so. The robustness testing methodology is described and analysed for the example use case of traffic sign recognition in autonomous driving.
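The robustness testing the abstract describes can be illustrated with a minimal sketch: perturb inputs with synthetic corruptions of increasing severity and measure how classification accuracy degrades. The corruption types, severity scaling, and function names below are illustrative assumptions, not the paper's actual test suite or metrics.

```python
import numpy as np


def corrupt(image, kind, severity):
    """Apply a simple synthetic corruption to an image with values in [0, 1].

    `kind` and the severity scaling factors are illustrative choices.
    """
    rng = np.random.default_rng(0)
    if kind == "noise":
        out = image + rng.normal(0.0, 0.05 * severity, image.shape)
    elif kind == "brightness":
        out = image + 0.1 * severity
    elif kind == "contrast":
        out = 0.5 + (image - 0.5) * max(1.0 - 0.15 * severity, 0.0)
    else:
        raise ValueError(f"unknown corruption: {kind}")
    return np.clip(out, 0.0, 1.0)


def robust_accuracy(model, images, labels, kind, severity):
    """Fraction of corrupted images the classifier still labels correctly."""
    preds = [model(corrupt(img, kind, severity)) for img in images]
    return float(np.mean(np.array(preds) == np.array(labels)))
```

Sweeping `severity` over a range and plotting `robust_accuracy` per corruption type yields a robustness curve that can be compared across models or against a required operating threshold.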

Acknowledgements

The authors would like to thank the reviewers for their helpful suggestions.

Author information

Corresponding author

Correspondence to Christian Berghoff.

Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper

Cite this paper

Berghoff, C., Bielik, P., Neu, M., Tsankov, P., von Twickel, A. (2021). Robustness Testing of AI Systems: A Case Study for Traffic Sign Recognition. In: Maglogiannis, I., Macintyre, J., Iliadis, L. (eds) Artificial Intelligence Applications and Innovations. AIAI 2021. IFIP Advances in Information and Communication Technology, vol 627. Springer, Cham. https://doi.org/10.1007/978-3-030-79150-6_21

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-79150-6_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-79149-0

  • Online ISBN: 978-3-030-79150-6

  • eBook Packages: Computer Science, Computer Science (R0)
