Abstract
Object detection is widely employed in security-critical scenarios. With the rapid development of deep learning, deep learning-based object detection methods have gradually replaced traditional object detection techniques owing to their higher detection efficiency and accuracy. However, these deep learning-based models are vulnerable to adversarial examples, which pose a serious security threat. Existing adversarial attack methods have limited attack ability and are time-consuming. To address this issue, we propose a low-frequency adversarial example generation method for object detection based on a generative model. By recasting adversarial example generation from a traditional optimization mechanism into a generation mechanism, our method greatly shortens the time required to generate adversarial examples. Two auxiliary networks are added to the Generative Adversarial Network framework to guide training, using an adversarial loss and a feature-layer loss to improve the attack ability of the adversarial examples. Moreover, a Gaussian Filtering Module is placed after the generator to smooth the perturbation and preserve effective low-frequency components. Experimental results on the PASCAL VOC 2007 dataset show that our method significantly improves generation speed and attack success rate compared with other attack methods. Furthermore, compared with UEA, which also uses a generation mechanism, our method achieves better generated image quality and a higher attack success rate.
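To make the low-frequency idea concrete, the sketch below shows one way a Gaussian filtering step could smooth a generator's output so that only low-frequency perturbation survives before it is added to the clean image. This is an illustrative PyTorch sketch, not the authors' implementation: the function names (gaussian_kernel, gaussian_filter, make_adversarial) and the kernel size, sigma, and perturbation budget eps are assumptions made for clarity.

```python
# Minimal sketch (assumed, not the paper's code): blur a generator's raw
# perturbation with a fixed Gaussian kernel so only smooth, low-frequency
# components remain, then add the bounded result to the clean image.
import torch
import torch.nn.functional as F


def gaussian_kernel(kernel_size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Build a normalized 2-D Gaussian kernel of shape (1, 1, k, k)."""
    coords = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel_1d = g / g.sum()
    kernel_2d = torch.outer(kernel_1d, kernel_1d)
    return kernel_2d.view(1, 1, kernel_size, kernel_size)


def gaussian_filter(perturbation: torch.Tensor,
                    kernel_size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Depthwise-convolve each channel of the perturbation with the Gaussian kernel."""
    channels = perturbation.shape[1]
    kernel = gaussian_kernel(kernel_size, sigma).to(perturbation.device)
    kernel = kernel.repeat(channels, 1, 1, 1)  # one identical kernel per channel
    return F.conv2d(perturbation, kernel, padding=kernel_size // 2, groups=channels)


def make_adversarial(image: torch.Tensor, generator: torch.nn.Module,
                     eps: float = 8 / 255) -> torch.Tensor:
    """Generate a smoothed (low-frequency) perturbation and apply it to the image."""
    raw = generator(image)                         # (N, C, H, W) perturbation
    smooth = gaussian_filter(raw)                  # keep low-frequency content
    smooth = torch.clamp(smooth, -eps, eps)        # enforce an L_inf budget (assumed)
    return torch.clamp(image + smooth, 0.0, 1.0)   # valid image range
```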
Data availability
The data used to support the findings of this study are available from the corresponding author upon request.
References
Wang Q, Zhang L, Bertinetto L et al (2019) Fast online object tracking and segmentation: A unifying approach. In: IEEE/CVF conference on computer vision and pattern recognition (CVPR). pp 1328–1338
Ronneberger O, Fischer P, Brox T (2015) U-Net: Convolutional networks for biomedical image segmentation. International conference on medical image computing and computer-assisted intervention. Springer, Cham, pp 234–241
Rosati R, Romeo L et al (2020) Faster R-CNN approach for detection and quantification of DNA damage in comet assay images. Comput Biol Med 123:103912
Hung J, Carpenter A (2017) Applying faster R-CNN for object detection on malaria images. In: IEEE conference on computer vision and pattern recognition workshops
Hu B, Wang J (2020) Detection of PCB surface defects with improved faster-RCNN and feature pyramid network. IEEE Access 8:108335–108345
Fan Q, Brown L, Smith J (2016) A closer look at Faster R-CNN for vehicle detection. 2016 IEEE intelligent vehicles symposium (IV). pp 124–129
Sun X, Wu P, Hoi SCH (2018) Face detection using deep learning: An improved faster RCNN approach. Neurocomputing 299:42–50
Li J et al (2017) Facial expression recognition with faster R-CNN. Procedia Comput Sci 107:135–140
Szegedy C, Zaremba W et al (2013) Intriguing properties of neural networks. In: 2nd international conference on learning representations (ICLR). Banff, AB, Canada, pp 1–10
Lu J, Sibai H, Fabry E (2017) Adversarial examples that fool detectors. arXiv preprint arXiv:1712.02494
Ren S, He K, Girshick R et al (2015) Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 39(6):1137–1149
Redmon J, Farhadi A (2017) YOLO9000: Better, Faster, Stronger. In: IEEE Conference on computer vision and pattern recognition (CVPR). Honolulu, HI, USA, pp 7263–7271
Wei X, Liang S, Chen N, Cao X (2019) Transferable adversarial attacks for image and video object detection. In: 28th International joint conference on artificial intelligence (IJCAI). Macao, China, pp 954–960
Chow KH, Liu L, Gursoy ME et al (2020) Understanding object detection through an adversarial lens. European symposium on research in computer security. Springer, Cham, pp 460–481
Girshick R, Donahue J, Darrell T et al (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE conference on computer vision and pattern recognition (CVPR). Columbus, OH, USA, pp 580–587
Girshick R (2015) Fast R-CNN. In: 2015 IEEE international conference on computer vision (ICCV). Santiago, Chile, pp 1440-1448
Liu W, Anguelov D, Erhan D et al (2016) SSD: Single shot multibox detector. European conference on computer vision (ECCV). Springer, Cham, pp 21–37
Wang CY, Bochkovskiy A, Liao H (2022) YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696
Law H, Deng J (2018) Cornernet: Detecting objects as paired keypoints. In: 15th European conference on computer vision (ECCV). pp 765–781
Zhou X, Wang D, Krähenbühl P (2019) Objects as points. arXiv preprint arXiv:1904.07850
Zhou X, Zhuo J, Krähenbühl P (2019) Bottom-up object detection by grouping extreme and center points. In: 2019 IEEE/CVF conference on computer vision and pattern recognition (CVPR). pp 850–859
Goodfellow I, Pouget-Abadie J et al (2014) Generative adversarial nets. In: Conference on neural information processing systems (NIPS), pp 2672–2680
Zhu JY, Park T et al (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: IEEE international conference on computer vision (ICCV), Venice, Italy, pp 2242–2251
Isola P, Zhu JY et al (2017) Image-to-image translation with conditional adversarial networks. In: 2017 IEEE Conference on computer vision and pattern recognition (CVPR), Honolulu, HI, USA, pp 5967–5976
Xie C, Wang J et al (2017) Adversarial examples for semantic segmentation and object detection. In: 2017 IEEE international conference on computer vision (ICCV), Venice, Italy, pp 1378–1387
Li Y, Tian D, Chang MC et al (2018) Robust adversarial perturbation on deep proposal-based models. arXiv preprint arXiv:1809.05962
Liu X, Yang H, Liu Z et al (2018) Dpatch: An adversarial patch attack on object detectors. arXiv preprint arXiv:1806.02299
Li Y, Bian X, Lyu S (2018) Attacking object detectors via imperceptible patches on background. arXiv preprint arXiv:1809.05966
Wang D, Li C et al (2022) Daedalus: Breaking nonmaximum suppression in object detection via adversarial examples. IEEE Trans Cybern 52(8):7427–7440
Thys S, Van Ranst W, Goedemé T (2019) Fooling automated surveillance cameras: adversarial patches to attack person detection. In: 2019 IEEE/CVF conference on computer vision and pattern recognition workshops (CVPRW), Long Beach, CA, USA, pp 49–55
Chow KH, Liu L, Gursoy ME et al (2020) TOG: targeted adversarial objectness gradient attacks on real-time object detection systems. arXiv preprint arXiv:2004.04320
Cai Z, Xie X, Li S et al (2022) Context-aware transfer attacks for object detection. AAAI Conf Artif Intell 36(1):149–157
Xiao C, Li B et al (2018) Generating adversarial examples with adversarial networks. In: 27th international joint conference on artificial intelligence (IJCAI), pp 3905–3911
Liu A, Liu X, Fan J et al (2019) Perceptual-sensitive GAN for generating adversarial patches. AAAI Conf Artif Intell 33(01):1028–1035
Deng X, Fang Z, Zheng Y et al (2021) Adversarial examples with transferred camouflage style for object detection. J Phys Conf Ser 1738(01):012130
Liang S, Wei X, Cao X (2021) Generate more imperceptible adversarial examples for object detection. In: Workshop on adversarial machine learning (AML)
Zhou W, Hou X, Chen Y et al (2018) Transferable adversarial perturbations. In: European conference on computer vision (ECCV). pp 452–467
Sharma Y, Ding GW, Brubaker M (2019) On the effectiveness of low frequency perturbations. In: 28th International joint conference on artificial intelligence (IJCAI). pp 3389–3396
Duan R, Chen Y, Niu D et al (2021) AdvDrop: Adversarial attack to DNNs by dropping information. In: 2021 IEEE/CVF international conference on computer vision (ICCV). pp 7486–7495
Dong Y, Pang T, Su H et al (2019) Evading defenses to transferable adversarial examples by translation-invariant attacks. In: 2019 IEEE/CVF conference on computer vision and pattern recognition (CVPR). pp 4307–4316
Everingham M, Eslami SMA, Van Gool L et al (2015) The pascal visual object classes challenge: A retrospective. Int J Comput Vis 111(1):98–136
Funding
This work was supported in part by the National Natural Science Foundation of China under Grant numbers 61801159 and 61571174, and in part by the Science and Technology Plan Project of Hangzhou under Grant number 20201203B124.
Author information
Authors and Affiliations
Contributions
All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Long Yuan, Junmei Sun, Xiumei Li, Zhenxiong Pan and Sisi Liu. The first draft of the manuscript was written by Long Yuan and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethical approval and consent to participate
The submitted work is original and has not been published elsewhere in any form or language.
Human and animal ethics
Not applicable.
Consent for publication
The authors consent to the publication of this work.
Competing interests
The authors declare that they have no competing interests as defined by Springer, or other interests that might be perceived to influence the results and/or discussion reported in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yuan, L., Sun, J., Li, X. et al. A low-frequency adversarial attack method for object detection using generative model. Multimed Tools Appl 83, 62423–62442 (2024). https://doi.org/10.1007/s11042-024-18189-w