
Compression-resistant backdoor attack against deep neural networks

Published in: Applied Intelligence

Abstract

In recent years, a number of backdoor attacks against deep neural networks (DNN) have been proposed. In this paper, we reveal that backdoor attacks are vulnerable to image compression, as the backdoor instances used to trigger backdoor attacks are usually compressed by image compression methods during data transmission. When backdoor instances are compressed, the features of the backdoor trigger are destroyed, which can result in significant performance degradation for backdoor attacks. As a countermeasure, we propose the first compression-resistant backdoor attack method, which is based on feature consistency training. Specifically, both backdoor images and their compressed versions are used for training, and the feature difference between backdoor images and their compressed versions is minimized through feature consistency training. As a result, the DNN treats the features of compressed images as the features of backdoor images in the feature space, and the backdoor attack remains robust to image compression after training. Furthermore, we consider three different image compressions (i.e., JPEG, JPEG2000, WEBP) during feature consistency training, so that the backdoor attack can be robust to multiple image compression algorithms. Experimental results demonstrate that when backdoor instances are compressed, the attack success rate of a common backdoor attack is only 6.63% (JPEG), 6.20% (JPEG2000), and 3.97% (WEBP), while the attack success rate of the proposed compression-resistant backdoor attack is 98.77% (JPEG), 97.69% (JPEG2000), and 98.93% (WEBP). The compression-resistant attack remains robust under various parameter settings. In addition, extensive experiments demonstrate that even if only one image compression method is used in the feature consistency training process, the proposed compression-resistant backdoor attack generalizes to multiple unseen image compression methods.
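
As a rough illustration of the feature consistency training described above, the following is a minimal sketch, not the authors' released code. It assumes a PyTorch model that exposes a feature extractor (`model.features`) and a classifier head (`model.classifier`), a hypothetical `compress` helper that round-trips an image through an image codec, and an assumed loss weight `lambda_fc`. The classification loss on the (backdoor-containing) training batch is combined with the feature distance between each image and its compressed copy.

```python
# Minimal sketch of feature consistency training against image compression.
# Assumed PyTorch-style implementation; model.features / model.classifier,
# the compress() helper, and lambda_fc are illustrative, not the paper's code.
import io

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

to_tensor = transforms.ToTensor()
to_pil = transforms.ToPILImage()

def compress(img, fmt="JPEG", quality=75):
    """Round-trip one CHW tensor image through an image codec (JPEG/JPEG2000/WEBP)."""
    buf = io.BytesIO()
    params = {"quality": quality} if fmt in ("JPEG", "WEBP") else {}
    to_pil(img.detach().cpu()).save(buf, format=fmt, **params)
    buf.seek(0)
    return to_tensor(Image.open(buf).convert("RGB")).to(img.device)

def train_step(model, images, labels, optimizer, fmt="JPEG", lambda_fc=1.0):
    """One training step on a batch that already contains backdoor instances."""
    compressed = torch.stack([compress(x, fmt) for x in images])

    feat_clean = model.features(images)          # features of the training images
    feat_comp = model.features(compressed)       # features of their compressed copies
    logits = model.classifier(feat_clean)

    ce_loss = F.cross_entropy(logits, labels)    # usual classification loss
    fc_loss = F.mse_loss(feat_comp, feat_clean)  # feature consistency term

    loss = ce_loss + lambda_fc * fc_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the compressed copy is generated on the fly for each image; training on several codecs, as the paper does, would simply sample `fmt` from {JPEG, JPEG2000, WEBP} per step.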


Data Availability

The data are available from the corresponding author upon reasonable request.

Notes

  1. https://github.com/python-pillow/Pillow
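
The Pillow library referenced in the note above supports all three image compressions considered in the paper. The snippet below is a small illustrative example only (file names and quality settings are arbitrary, and JPEG 2000 requires a Pillow build with OpenJPEG):

```python
# Illustrative Pillow usage for the three image compressions (JPEG, JPEG 2000, WEBP).
# File names and quality settings are arbitrary examples.
from PIL import Image

img = Image.open("backdoor_instance.png").convert("RGB")

img.save("instance.jpg", format="JPEG", quality=75)
img.save("instance.jp2", format="JPEG2000",
         quality_mode="rates", quality_layers=[20])
img.save("instance.webp", format="WEBP", quality=75)
```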


Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 61602241), and CCF-NSFOCUS Kun-Peng Scientific Research Fund (No. CCF-NSFOCUS 2021012).

Author information

Corresponding author

Correspondence to Mingfu Xue.

Ethics declarations

Conflict of Interests

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xue, M., Wang, X., Sun, S. et al. Compression-resistant backdoor attack against deep neural networks. Appl Intell 53, 20402–20417 (2023). https://doi.org/10.1007/s10489-023-04575-8
