Imperceptible and multi-channel backdoor attack


Abstract

Recent research demonstrates that Deep Neural Network (DNN) models are vulnerable to backdoor attacks. A backdoored DNN model behaves maliciously whenever it is presented with an image containing the backdoor trigger. To date, almost all existing backdoor attacks are single-trigger, single-target attacks, and most of their triggers are conspicuous and therefore easy to detect or notice. In this paper, we propose a novel imperceptible and multi-channel backdoor attack against Deep Neural Networks that exploits Discrete Cosine Transform (DCT) steganography. The proposed method injects backdoor instances into the training set and does not require control over the training process. Specifically, for a color image, we use DCT steganography to construct triggers and embed them into different channels of the image in the frequency domain. As a result, the trigger is stealthy and natural-looking in the spatial domain. The generated backdoor instances are then injected into the training dataset used to train the DNN model. Based on the proposed method, we implement two stealthy variants: an imperceptible N-to-N (multi-target) backdoor attack and an imperceptible N-to-One (multi-trigger) backdoor attack. Experimental results show that the N-to-N attack achieves attack success rates of 95.09% on CIFAR-10, 93.33% on TinyImageNet, and 92.45% on ImageNet, while the N-to-One attack achieves 90.22%, 89.56%, and 88.29% on the same three datasets. Meanwhile, the proposed attack does not affect the classification accuracy of the DNN models on clean inputs. Moreover, the attack is shown to be robust against two state-of-the-art backdoor defenses, including a recent frequency-domain defense.
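To make the frequency-domain idea concrete, here is a minimal, hypothetical sketch of DCT-based trigger embedding in the spirit the abstract describes: a short bit sequence is written into one mid-frequency DCT coefficient of each color channel, 8x8 block by block. The block size, the coefficient position (3, 4), the strength value, and the embed_trigger helper are all illustrative assumptions, not the authors' actual parameters.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    # Orthonormal 2-D type-II DCT of an 8x8 block
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    # Inverse of dct2
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_trigger(img, channel_bits, strength=5.0, coeff=(3, 4)):
    # Hide one bit sequence per color channel by overwriting a mid-frequency
    # DCT coefficient in successive 8x8 blocks. `strength` and `coeff` are
    # illustrative choices, not values taken from the paper.
    out = img.astype(np.float64)   # astype copies, so the input stays intact
    h, w, _ = out.shape            # H and W assumed to be multiples of 8
    for c, payload in enumerate(channel_bits):
        k = 0
        for y in range(0, h, 8):
            for x in range(0, w, 8):
                if k >= len(payload):
                    break
                block = dct2(out[y:y+8, x:x+8, c])
                # The sign of the chosen coefficient encodes the bit
                block[coeff] = strength if payload[k] else -strength
                out[y:y+8, x:x+8, c] = idct2(block)
                k += 1
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# Demo on a random "image"; real poisoning would embed into training samples
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
payloads = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 1]]  # hypothetical triggers
poisoned = embed_trigger(clean, payloads)
print(np.abs(poisoned.astype(int) - clean.astype(int)).max())  # per-pixel change
```

Mid-frequency coefficients are the usual hiding place in DCT steganography, since low-frequency coefficients carry visible image structure while high-frequency ones are easily destroyed by compression. Under the paper's two variants, an N-to-N poisoner would pair each channel payload with its own target label, whereas an N-to-One poisoner would map every payload to a single target label.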


Data Availability

The data is available from the corresponding author upon reasonable request.



Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 62372231) and the Aeronautical Science Foundation (No. 2022Z071052008).

Author information

Corresponding author

Correspondence to Mingfu Xue.

Ethics declarations

Conflicts of interest

The authors declare that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xue, M., Ni, S., Wu, Y. et al. Imperceptible and multi-channel backdoor attack. Appl Intell 54, 1099–1116 (2024). https://doi.org/10.1007/s10489-023-05228-6

