Abstract
Physical adversarial attacks on object detectors have attracted growing attention. Many works have proposed adversarial patches or camouflage that achieve successful attacks in the real world, but these methods share two drawbacks, especially for 3D humans. First, camouflage-based methods are neither dynamic nor sufficiently mimetic: the adversarial texture is not rendered jointly with the background features surrounding the target, which runs counter to the intent of adversarial examples to remain inconspicuous. Second, non-rigid physical surfaces are not modeled in detail, so the rendered textures are coarse and fragile in 3D scenarios. In this paper, we propose the Mimic Octopus Attack (MOA) to close this gap: a novel method that generates mimetic and robust physical adversarial textures to camouflage target objects against detectors under multiple views and scenes. MOA optimizes the texture jointly through iterative training that combines a mimetic style loss, an adversarial loss, and human visual intuition. Experiments in CARLA scenarios, which are widely regarded as a surrogate for the physical domain, demonstrate its strong performance: MOA lowers the mAP@0.5 of the YOLOv5 detector by 67.62% compared with the clean setting, outperforms state-of-the-art attacks by 4.14% on average, and reaches an average attack success rate (ASR) of up to 85.28%. Moreover, MOA remains robust when attacking diverse person models and detectors, demonstrating its strong transferability.
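For intuition, the joint optimization of a mimetic style loss and an adversarial loss described above can be illustrated with a minimal sketch. The code below is not the authors' implementation: the renderer, detector, feature extractor, Gram-matrix style matching, and the loss weights (style_weight, adv_weight) are all assumptions made for illustration, and the human-inspection step mentioned in the abstract is omitted.

```python
import torch
import torch.nn.functional as F

def gram_matrix(features):
    # features: (B, C, H, W) activations from a fixed feature extractor (e.g. a VGG backbone)
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def mimetic_style_loss(texture_feats, background_feats):
    # Match Gram statistics of the rendered texture to the scene background
    # (Gatys-style style matching), so the texture blends into its surroundings.
    return F.mse_loss(gram_matrix(texture_feats), gram_matrix(background_feats))

def adversarial_loss(person_scores):
    # Suppress the detector's confidence for the "person" class.
    # `person_scores` is assumed to be a tensor of per-box class scores.
    return person_scores.max(dim=-1).values.mean()

def moa_step(texture, renderer, detector, feat_extractor, background, optimizer,
             style_weight=1.0, adv_weight=1.0):
    # One joint-optimization step: render the textured 3D human into the scene,
    # then balance mimicry (style) against detector suppression (adversarial).
    optimizer.zero_grad()
    rendered = renderer(texture)  # scene image containing the textured human
    loss = (style_weight * mimetic_style_loss(feat_extractor(rendered),
                                              feat_extractor(background))
            + adv_weight * adversarial_loss(detector(rendered)))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a setup, the two weights trade off concealment from human observers (style term) against concealment from the detector (adversarial term); the iterative loop would repeat this step over multiple views and scenes.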
Acknowledgements
We thank the reviewers for their insightful feedback. This work was supported in part by the National Natural Science Foundation of China under Grant No. 62272459.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Li, J., Zhang, S., Wang, X., Hou, R. (2023). Mimic Octopus Attack: Dynamic Camouflage Adversarial Examples Using Mimetic Feature for 3D Humans. In: Deng, Y., Yung, M. (eds) Information Security and Cryptology. Inscrypt 2022. Lecture Notes in Computer Science, vol 13837. Springer, Cham. https://doi.org/10.1007/978-3-031-26553-2_23
DOI: https://doi.org/10.1007/978-3-031-26553-2_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-26552-5
Online ISBN: 978-3-031-26553-2