Invisible Threats in the Data: A Study on Data Poisoning Attacks in Deep Generative Models
Figure 1. The comparison of triggers in a traditional attack and in our attack.
Figure 2. The training process of the encoder–decoder network. It illustrates how a string can be embedded into an image by the encoder, while the decoder is employed to recover the string information.
Figure 3. The process of our attack.
Figure 4. The comparison between the original images and those generated by the poisoned and clean StyleGAN3 models, respectively.
Figure 5. The comparison between original, clean, and poisoned images in terms of their DCT spectrograms.
Figure 6. The three distinct GI images of the original, clean, and poisoned images.
Figure 7. The three distinct DFT-GI images of the original, clean, and poisoned images.
Abstract
1. Introduction
- (1) Spread of Misinformation and Propaganda: By manipulating social media algorithms, attackers can inject biased or false information, which can then spread rapidly and influence public opinion.
- (2) Legal and Ethical Bias: Data poisoning can be used to inject bias into models employed for legal and ethical decision-making, such as bail recommendations or criminal sentencing. This could result in unfair and discriminatory outcomes.
- (3) Adversarial Machine Learning: Data poisoning can be used to generate adversarial examples, which are specifically designed inputs intended to deceive machine learning models. This can lead to the bypassing of security systems, manipulation of algorithms, and even the creation of vulnerabilities in critical infrastructure.
- The key innovation of this research lies in its invisible backdoor attack strategy. Unlike traditional methods that necessitate altering model parameters, this method does not require any modification of the model itself.
- This approach creates a new avenue for backdoor attacks that are undetectable in real-world applications, posing a significant threat to the security of DGMs.
- Comparative analyses demonstrate the effectiveness of the proposed method, which achieves significantly higher performance on key metrics such as Best Accuracy (BA). This research highlights the critical need for developing robust defence mechanisms to enhance the security of DGMs in real-world scenarios. The findings provide valuable insights for enterprises and researchers, enabling them to develop and deploy deep learning models with improved security, thereby mitigating potential risks associated with backdoor attacks.
2. Related Works
2.1. Deep Generative Models
2.2. Backdoor Attack
2.2.1. Visible Backdoor Attack
2.2.2. Invisible Backdoor Attack
2.2.3. Backdoor Attacks against DGMs
2.3. Defence Strategies
2.3.1. DCT (Discrete Cosine Transform)
2.3.2. GI (Greyscale Image)
2.3.3. DFT-GI (Discrete Fourier Transform of a Greyscale Image)
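Sections 2.3.1–2.3.3 (and the corresponding analyses in Section 5) rely on three standard views of an image: its DCT spectrogram, its greyscale version (GI), and the Fourier spectrum of that greyscale version (DFT-GI). The sketch below shows how such views can be computed with NumPy, SciPy, and Pillow; the file path and the log scaling are illustrative, and this is not the paper's own detection code.

```python
# A minimal sketch, assuming standard NumPy/SciPy/Pillow APIs, of the three
# views used as detection evidence: the DCT spectrogram, the greyscale image
# (GI), and the DFT of the greyscale image (DFT-GI). Paths are illustrative.
import numpy as np
from PIL import Image
from scipy.fft import dctn

def detection_views(path):
    grey = np.asarray(Image.open(path).convert("L"), dtype=np.float64)  # GI view

    # DCT spectrogram: 2-D DCT of the greyscale image, log-scaled for display
    dct_spec = np.log1p(np.abs(dctn(grey, norm="ortho")))

    # DFT-GI: centred 2-D Fourier magnitude spectrum of the greyscale image
    dft_gi = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(grey))))

    return grey, dct_spec, dft_gi

# Usage: compare the three views for an original, a clean-generated, and a
# poison-generated image; a visible trigger typically leaves periodic
# artefacts in the spectra, whereas an invisible one should not.
# gi, dct_spec, dft_gi = detection_views("sample.png")
```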
3. Research Methodology
3.1. Threat Model
3.1.1. Attacker’s Capacities
3.1.2. Attacker’s Goals
3.2. The Proposed Attack
3.2.1. How to Generate Invisible Trigger
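The invisible trigger is produced by the encoder–decoder network shown in Figure 2: the encoder embeds a string into an image, and the decoder recovers it. As a rough illustration of that idea (not the authors' actual architecture or training code), the PyTorch sketch below jointly trains a toy encoder that hides a fixed-length bit string as a low-amplitude residual and a decoder that recovers the bits; MSG_BITS, all layer sizes, and the residual scale are assumptions.

```python
# A minimal StegaStamp-style sketch (not the authors' exact architecture):
# the encoder hides a bit string inside an image as a small residual, and
# the decoder recovers the bits. Layer sizes and constants are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

MSG_BITS = 100          # length of the embedded string, assumed
IMG_SIZE = 256          # matches the 256 x 256 LSUN images used in the paper

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.msg_fc = nn.Linear(MSG_BITS, IMG_SIZE * IMG_SIZE)   # broadcast message to a plane
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),           # bounded residual
        )

    def forward(self, img, msg):
        plane = self.msg_fc(msg).view(-1, 1, IMG_SIZE, IMG_SIZE)
        residual = self.conv(torch.cat([img, plane], dim=1))
        return torch.clamp(img + 0.01 * residual, 0.0, 1.0)      # visually invisible perturbation

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, MSG_BITS),
        )

    def forward(self, img):
        return self.net(img)                                     # logits for each bit

def training_step(encoder, decoder, img, msg, opt):
    """One joint step: keep the stego image close to the cover image
    while making the embedded bits recoverable."""
    stego = encoder(img, msg)
    bit_logits = decoder(stego)
    loss = F.mse_loss(stego, img) + F.binary_cross_entropy_with_logits(bit_logits, msg)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The joint loss is the key design choice: the reconstruction term keeps the stego image visually indistinguishable from the cover image, which is what makes the trigger invisible, while the bit-recovery term keeps the embedded string readable.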
3.2.2. The Main Process of the Attack
- (1) The preparation stage
- (2) The attack stage
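As a rough illustration of the two stages listed above, the sketch below embeds an invisible trigger string into every image of a training set (preparation stage) before the poisoned folder is used for an ordinary StyleGAN3 training run (attack stage). It assumes the toy Encoder from the earlier sketch; the paths, the trigger string, and the example training command are illustrative, not the authors' scripts.

```python
# A hedged sketch of the two attack stages, assuming every training image is
# poisoned (as stated in the Conclusions). `Encoder` is the trained embedding
# network from the earlier sketch; paths and the trigger string are examples.
import os
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image

def string_to_bits(s, n_bits=100):
    bits = [int(b) for ch in s.encode() for b in f"{ch:08b}"]
    bits = (bits + [0] * n_bits)[:n_bits]                 # pad/trim to a fixed length
    return torch.tensor(bits, dtype=torch.float32).unsqueeze(0)

@torch.no_grad()
def poison_dataset(encoder, src_dir, dst_dir, trigger="secret-key"):
    """Preparation stage: embed the invisible trigger string into every image."""
    os.makedirs(dst_dir, exist_ok=True)
    msg = string_to_bits(trigger)
    for name in os.listdir(src_dir):
        img = to_tensor(Image.open(os.path.join(src_dir, name)).convert("RGB")).unsqueeze(0)
        stego = encoder(img, msg).squeeze(0).clamp(0, 1)
        to_pil_image(stego).save(os.path.join(dst_dir, name))

# Attack stage: the poisoned folder is then used as a normal training set,
# e.g. with the public StyleGAN3 training script (command line illustrative):
#   python train.py --outdir=runs --data=poisoned_lsun.zip --cfg=stylegan3-t --gpus=1
```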
4. Experiments
4.1. Set Up
4.1.1. Dataset Selection
4.1.2. Hardware Configuration
4.1.3. Attack Model
4.1.4. Evaluation Metrics
4.2. Results
4.2.1. Model Fidelity
4.2.2. Trigger Stealthiness
5. Discussion
5.1. Defence against DCT
5.2. Defence against GI
5.3. Defence against DFT-GI
6. Conclusions
- (1) Exploring More Efficient Attack Methods: This study requires poisoning all images used to train StyleGAN3, potentially wasting time and computational resources. Future research could investigate more advanced algorithms and strategies to optimize the poisoning process, aiming to achieve the same attack effect by poisoning only a small portion of the dataset.
- (2) Exploring Backdoor Attacks in the Latent Space: Investigating backdoor attacks within the latent space of DGMs offers significant potential for saving time and computational resources, and it gives attackers greater flexibility and convenience.
- (3) Evaluation Metrics for Federated Learning Security: Exploring and developing new evaluation metrics that can effectively identify and characterise backdoor attacks in federated learning models, particularly those targeting the privacy and security of individual clients.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Rawat, A.; Levacher, K.; Sinn, M. The devil is in the GAN: Backdoor attacks and defenses in deep generative models. In European Symposium on Research in Computer Security; Springer: Berlin/Heidelberg, Germany, 2022; pp. 776–783. [Google Scholar]
- Dhariwal, P.; Nichol, A. Diffusion models beat gans on image synthesis. Adv. Neural Inf. Process. Syst. 2021, 34, 8780–8794. [Google Scholar]
- Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; Chen, M. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv 2021, arXiv:2112.10741. [Google Scholar]
- Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive growing of gans for improved quality, stability, and variation. arXiv 2018, arXiv:1710.10196. [Google Scholar]
- Brock, A.; Donahue, J.; Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. arXiv 2018, arXiv:1809.11096. [Google Scholar]
- Chan, C.; Ginosar, S.; Zhou, T.; Efros, A. Everybody Dance Now. In Proceedings of the International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- Nistal, J.; Lattner, S.; Richard, G. Comparing Representations for Audio Synthesis Using Generative Adversarial Networks. In Proceedings of the European Signal Processing Conference, Dublin, Ireland, 23–27 August 2021. [Google Scholar]
- Truong, L.; Jones, C.; Hutchinson, B.; August, A.; Tuor, A. Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers. arXiv 2020, arXiv:2004.11514. [Google Scholar]
- Ahmed, I.M.; Kashmoola, M.Y. Threats on machine learning technique by data poisoning attack: A survey. In Proceedings of the Advances in Cyber Security: Third International Conference, ACeS 2021, Penang, Malaysia, 24–25 August 2021; Revised Selected Papers 3. Springer: Berlin/Heidelberg, Germany, 2021; pp. 586–600. [Google Scholar]
- Fan, J.; Yan, Q.; Li, M.; Qu, G.; Xiao, Y. A survey on data poisoning attacks and defenses. In Proceedings of the 2022 7th IEEE International Conference on Data Science in Cyberspace (DSC), Guilin, China, 11–13 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 48–55. [Google Scholar]
- Yerlikaya, F.A.; Bahtiyar, Ş. Data poisoning attacks against machine learning algorithms. Expert Syst. Appl. 2022, 208, 118101. [Google Scholar] [CrossRef]
- Tian, Z.; Cui, L.; Liang, J.; Yu, S. A comprehensive survey on poisoning attacks and countermeasures in machine learning. ACM Comput. Surv. 2022, 55, 1–35. [Google Scholar] [CrossRef]
- Cinà, A.E.; Grosse, K.; Demontis, A.; Vascon, S.; Zellinger, W.; Moser, B.A.; Oprea, A.; Biggio, B.; Pelillo, M.; Roli, F. Wild patterns reloaded: A survey of machine learning security against training data poisoning. ACM Comput. Surv. 2023, 55, 1–39. [Google Scholar] [CrossRef]
- Costales, R.; Mao, C.; Norwitz, R.; Kim, B.; Yang, J. Live trojan attacks on deep neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 796–797. [Google Scholar]
- Gao, Y.; Xu, C.; Wang, D.; Chen, S.; Ranasinghe, D.C.; Nepal, S. Strip: A defence against trojan attacks on deep neural networks. In Proceedings of the 35th Annual Computer Security Applications Conference, San Juan, PR, USA, 9–13 December 2019; pp. 113–125. [Google Scholar]
- Tang, R.; Du, M.; Liu, N.; Yang, F.; Hu, X. An embarrassingly simple approach for trojan attack in deep neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual, 23–27 August 2020; pp. 218–228. [Google Scholar]
- Wang, D.; Wen, S.; Jolfaei, A.; Haghighi, M.S.; Nepal, S.; Xiang, Y. On the neural backdoor of federated generative models in edge computing. ACM Trans. Internet Technol. (TOIT) 2021, 22, 1–21. [Google Scholar] [CrossRef]
- Nguyen, A.; Yosinski, J.; Clune, J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 427–436. [Google Scholar]
- Chen, J.; Zheng, H.; Su, M.; Du, T.; Lin, C.; Ji, S. Invisible poisoning: Highly stealthy targeted poisoning attack. In Proceedings of the Information Security and Cryptology: 15th International Conference, Inscrypt 2019, Nanjing, China, 6–8 December 2019; Revised Selected Papers 15. Springer: Berlin/Heidelberg, Germany, 2020; pp. 173–198. [Google Scholar]
- Wang, Z.; She, Q.; Ward, T.E. Generative Adversarial Networks in Computer Vision: A Survey and Taxonomy. ACM Comput. Surv. 2021, 54, 1–38. [Google Scholar] [CrossRef]
- Hu, W.; Combden, O.; Jiang, X.; Buragadda, S.; Newell, C.J.; Williams, M.C.; Critch, A.L.; Ploughman, M. Machine learning classification of multiple sclerosis patients based on raw data from an instrumented walkway. BioMedical Eng. OnLine 2022, 21, 21. [Google Scholar] [CrossRef] [PubMed]
- Ak, K.E.; Lim, J.H.; Tham, J.Y.; Kassim, A.A. Attribute Manipulation Generative Adversarial Networks for Fashion Images. In Proceedings of the 2019 International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- Marafioti, A.; Perraudin, N.; Holighaus, N.; Majdak, P. Adversarial generation of time-frequency features with application in audio synthesis. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 4352–4362. [Google Scholar]
- Brophy, E.; Wang, Z.; She, Q.; Ward, T. Generative adversarial networks in time series: A survey and taxonomy. arXiv 2021, arXiv:2107.11098. [Google Scholar]
- Nguyen, T.A.; Tran, A. Input-aware dynamic backdoor attack. Adv. Neural Inf. Process. Syst. 2020, 33, 3454–3464. [Google Scholar]
- Zhong, H.; Liao, C.; Squicciarini, A.C.; Zhu, S.; Miller, D. Backdoor embedding in convolutional neural network models via invisible perturbation. In Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy, New Orleans, LA, USA, 16–18 March 2020; pp. 97–108. [Google Scholar]
- Li, S.; Xue, M.; Zhao, B.Z.H.; Zhu, H.; Zhang, X. Invisible backdoor attacks on deep neural networks via steganography and regularization. IEEE Trans. Dependable Secur. Comput. 2020, 18, 2088–2105. [Google Scholar] [CrossRef]
- Salem, A.; Backes, M.; Zhang, Y. Don’t trigger me! a triggerless backdoor attack against deep neural networks. arXiv 2020, arXiv:2010.03282. [Google Scholar]
- Salem, A.; Sautter, Y.; Backes, M.; Humbert, M.; Zhang, Y. Baaan: Backdoor attacks against autoencoder and gan-based machine learning models. arXiv 2020, arXiv:2010.03007. [Google Scholar]
- Arshad, I.; Qiao, Y.; Lee, B.; Ye, Y. Invisible Encoded Backdoor attack on DNNs using Conditional GAN. In Proceedings of the 2023 IEEE International Conference on Consumer Electronics (ICCE), Berlin, Germany, 2–5 September 2023; pp. 1–5. [Google Scholar] [CrossRef]
- Shokri, R. Bypassing backdoor detection algorithms in deep learning. In Proceedings of the 2020 IEEE European Symposium on Security and Privacy (EuroS&P), Genoa, Italy, 7–11 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 175–183. [Google Scholar]
- Hong, Q.; He, B.; Zhang, Z.; Xiao, P.; Du, S.; Zhang, J. Circuit Design and Application of Discrete Cosine Transform Based on Memristor. IEEE J. Emerg. Sel. Top. Circuits Syst. 2023, 13, 502–513. [Google Scholar] [CrossRef]
- Zhang, J.; Liu, Y.; Li, A.; Zeng, J.; Xie, H. Image Processing and Control of Tracking Intelligent Vehicle Based on Grayscale Camera. In Proceedings of the 2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS), Genova, Italy, 5–7 December 2022; Volume 5, pp. 1–6. [Google Scholar] [CrossRef]
- Sharma, S.; Varma, T. Discrete combined fractional Fourier transform and its application to image enhancement. Multimed. Tools Appl. 2024, 83, 29881–29896. [Google Scholar] [CrossRef]
- Li, Y.; Jiang, Y.; Li, Z.; Xia, S.T. Backdoor learning: A survey. IEEE Trans. Neural Networks Learn. Syst. 2022, 35, 5–22. [Google Scholar] [CrossRef]
- Tancik, M.; Mildenhall, B.; Ng, R. Stegastamp: Invisible hyperlinks in physical photographs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 2117–2126. [Google Scholar]
- Chen, X.; Liu, C.; Li, B.; Lu, K.; Song, D. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv 2017, arXiv:1712.05526. [Google Scholar]
- Karras, T.; Aittala, M.; Laine, S.; Härkönen, E.; Hellsten, J.; Lehtinen, J.; Aila, T. Alias-free generative adversarial networks. Adv. Neural Inf. Process. Syst. 2021, 34, 852–863. [Google Scholar]
- Li, Y.; Li, Y.; Wu, B.; Li, L.; He, R.; Lyu, S. Invisible backdoor attack with sample-specific triggers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 16463–16472. [Google Scholar]
- Kramberger, T.; Potočnik, B. LSUN-Stanford car dataset: Enhancing large-scale car image datasets using deep learning for usage in GAN training. Appl. Sci. 2020, 10, 4913. [Google Scholar] [CrossRef]
- Karras, T.; Laine, S.; Aila, T. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4401–4410. [Google Scholar]
- Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 8110–8119. [Google Scholar]
- Bińkowski, M.; Sutherland, D.J.; Arbel, M.; Gretton, A. Demystifying mmd gans. arXiv 2018, arXiv:1801.01401. [Google Scholar]
- Bynagari, N.B. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Asian J. Appl. Sci. Eng. 2019, 8, 6. [Google Scholar] [CrossRef]
- Qiao, T.; Luo, X.; Wu, T.; Xu, M.; Qian, Z. Adaptive steganalysis based on statistical model of quantized DCT coefficients for JPEG images. IEEE Trans. Dependable Secur. Comput. 2019, 18, 2736–2751. [Google Scholar] [CrossRef]
- Benouini, R.; Batioua, I.; Zenkouar, K.; Najah, S. Fractional-order generalized Laguerre moments and moment invariants for grey-scale image analysis. IET Image Process. 2021, 15, 523–541. [Google Scholar] [CrossRef]
- Saikia, S.; Fernández-Robles, L.; Alegre, E.; Fidalgo, E. Image retrieval based on texture using latent space representation of discrete Fourier transformed maps. Neural Comput. Appl. 2021, 33, 13301–13316. [Google Scholar] [CrossRef]
Category | Name | Proposal (Year) | Merits | Drawbacks
---|---|---|---|---
Attack | Input-Aware Attack | 2020 [25] | Efficiency | Visible Trigger
Attack | Invisible Perturbation | 2020 [26] | Invisible Trigger by Pixel Perturbation | Cannot Be Used in DGMs
Attack | Steganography/Regularisation Method | 2021 [27] | High Degree of Stealthiness | Cannot Be Used in DGMs
Attack | Triggerless Backdoor Attack | 2021 [28] | Triggerless | Cannot Be Used in DGMs
Attack | BAAAN Method | 2020 [29] | Avoids Detection Mechanisms | Visible Trigger
Attack | TrAIL/ReD/ReX Method | 2022 [1] | Realizes Invisible Attack against DGMs | Change in Model Structure
Attack | Invisible Encoded Backdoor Method | 2023 [30] | High Stealthiness of Triggers | Cannot Be Used in DGMs
Defence Strategy | DCT | — | High Coding Efficiency | Weak Reliability and Durability
Defence Strategy | DFT-GI | — | Compatibility | Shift Smoothless
Defence Strategy | GI | — | Simplicity and Speed | Limited Information
Category | Number (Thousand) | Image Format | Image Size |
---|---|---|---|
Airplane | 1500 | .jpg | 256 × 256 |
Bridge | 820 | .jpg | 256 × 256 |
Bus | 690 | .jpg | 256 × 256 |
Car | 1000 | .jpg | 256 × 256 |
Church | 1260 | .jpg | 256 × 256 |
Cat | 1000 | .jpg | 256 × 256 |
Conference | 610 | .jpg | 256 × 256 |
Cow | 630 | .jpg | 256 × 256 |
Class | 600 | .jpg | 256 × 256 |
Dining-room | 650 | .jpg | 256 × 256 |
Horse | 1000 | .jpg | 256 × 256
Human | 1000 | .jpg | 256 × 256
Kitchen | 1100 | .jpg | 256 × 256
Living-room | 1310 | .jpg | 256 × 256
Motorbike | 1190 | .jpg | 256 × 256
Plant | 1100 | .jpg | 256 × 256 |
Restaurant | 620 | .jpg | 256 × 256 |
Sheep | 600 | .jpg | 256 × 256 |
Tower | 700 | .jpg | 256 × 256 |
Train | 1150 | .jpg | 256 × 256 |
Model (LSUN dataset) | KID | EQ_T | FID
---|---|---|---
Clean StyleGAN3 | 0.0093 | 54.35 | 18.91
Poison StyleGAN3 | 0.0059 | 52.78 | 18.70
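The table reports KID, EQ_T, and FID on LSUN for the clean and poisoned StyleGAN3 models. The sketch below shows how FID and KID could be estimated with the torchmetrics package (which wraps the usual Inception-based estimators and requires its torch-fidelity backend); it is not the authors' evaluation code, and EQ_T is a StyleGAN3-specific equivariance metric computed by that project's own tooling.

```python
# A hedged sketch of computing FID/KID with torchmetrics; sample counts,
# feature sizes, and the subset size are illustrative assumptions.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

def fidelity_scores(real_images, generated_images):
    """Both inputs: uint8 tensors of shape (N, 3, 256, 256)."""
    fid = FrechetInceptionDistance(feature=2048)
    kid = KernelInceptionDistance(subset_size=100)
    for metric in (fid, kid):
        metric.update(real_images, real=True)
        metric.update(generated_images, real=False)
    kid_mean, kid_std = kid.compute()
    return {"FID": fid.compute().item(), "KID": kid_mean.item()}

# Example with random data, only to show the call pattern:
# real = torch.randint(0, 256, (200, 3, 256, 256), dtype=torch.uint8)
# fake = torch.randint(0, 256, (200, 3, 256, 256), dtype=torch.uint8)
# print(fidelity_scores(real, fake))
```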
BA (%) | ResNet18 | ResNet34 | ResNet50 | VGG16
---|---|---|---|---
Clean StyleGAN3 | 97.22 | 98.25 | 99.38 | 98.89
Poison StyleGAN3 | 97.37 | 98.47 | 99.49 | 98.36
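The BA figures above are classification accuracies obtained with ResNet18/34/50 and VGG16. One plausible reading (an assumption here) is that a standard classifier is trained and evaluated on images produced by each generator; the sketch below shows such a check with an off-the-shelf torchvision ResNet18. Folder layout, hyperparameters, and the number of epochs are illustrative, not the authors' protocol.

```python
# A hedged sketch of a BA-style check: train a torchvision classifier on
# generated images and report its test accuracy. All settings are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

def benign_accuracy(train_dir, test_dir, num_classes, epochs=5, device="cuda"):
    tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train = DataLoader(datasets.ImageFolder(train_dir, tf), batch_size=64, shuffle=True)
    test = DataLoader(datasets.ImageFolder(test_dir, tf), batch_size=64)

    model = models.resnet18(weights=None, num_classes=num_classes).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        model.train()
        for x, y in train:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    # Evaluate: fraction of correctly classified test images, in percent
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in test:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    return 100.0 * correct / total
```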
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yang, Z.; Zhang, J.; Wang, W.; Li, H. Invisible Threats in the Data: A Study on Data Poisoning Attacks in Deep Generative Models. Appl. Sci. 2024, 14, 8742. https://doi.org/10.3390/app14198742