

Multi-target label backdoor attacks on graph neural networks

Published: 01 August 2024

Abstract

Graph neural networks (GNNs) have been shown to possess characteristics that make them susceptible to backdoor attacks, and many recent works have proposed feasible graph backdoor attack methods. However, existing methods support only one-to-one attacks, in which every poisoned input is mapped to a single target label, and no graph backdoor attack addresses one-to-many requirements. This paper presents the first study of one-to-many graph backdoor attacks and proposes MLGB, a backdoor attack method that achieves multi-target-label attacks on GNN node classification tasks. We design an encoding mechanism that lets MLGB customize a trigger for each target label, and loss functions that keep the triggers of different target labels distinguishable. We also design a novel poisoned-node selection method that further improves the efficiency of MLGB's attacks. Extensive experiments across multiple datasets and model architectures validate MLGB's effectiveness and demonstrate its robustness against graph backdoor defense mechanisms. In addition, ablation studies and explainability analyses provide deeper insight into MLGB. Our work reveals that graph neural networks are also vulnerable to one-to-many backdoor attacks, which is important for practitioners seeking a comprehensive understanding of model risks.
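
To make the one-to-many setting concrete, the following is a minimal, hypothetical PyTorch sketch of how per-target-label triggers and a trigger-separation loss could fit together. The additive feature-space injection, the cosine-based separation term, and all names (`triggers`, `poison`, `separation_loss`) are our illustrative assumptions, not MLGB's published implementation.

```python
# Illustrative sketch only: MLGB's actual encoding mechanism and losses are
# not reproduced here; trigger shape, injection, and loss are assumptions.
import torch
import torch.nn.functional as F

num_feats = 1433                 # e.g. Cora-sized node feature dimension
target_labels = [0, 3, 5]        # attacker-chosen target labels (one-to-many)

# One learnable feature-space trigger per target label ("encoding mechanism").
triggers = torch.nn.Parameter(torch.randn(len(target_labels), num_feats))

def poison(x, y, victim_idx, t):
    """Inject the t-th trigger into victim node features and relabel them."""
    x, y = x.clone(), y.clone()
    x[victim_idx] = x[victim_idx] + triggers[t]   # additive trigger injection
    y[victim_idx] = target_labels[t]              # flip label to the t-th target
    return x, y

def separation_loss(trigs):
    """Penalize pairwise cosine similarity so triggers stay distinguishable."""
    sim = F.cosine_similarity(trigs.unsqueeze(1), trigs.unsqueeze(0), dim=-1)
    return (sim - torch.eye(len(trigs))).abs().mean()   # ignore self-similarity

# A training step would combine the usual attack objective (cross-entropy on
# poisoned nodes toward their target labels) with the separation term:
#   loss = ce_loss + lambda_sep * separation_loss(triggers)
```

Keeping the separation term small forces the triggers apart in feature space, which is what allows a single backdoored model to serve several target labels at once.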

Highlights

To our knowledge, this paper is the first work on one-to-many backdoor attacks against graph neural networks.
We propose MLGB, a graph backdoor attack method that lets attackers set multiple target labels simultaneously.
We design a poisoned-node selection method that improves the efficiency of graph backdoor attacks (see the sketch after this list).
We design an encoding mechanism and loss functions tailored to multi-target requirements.
We perform large-scale experiments and comprehensively evaluate the effectiveness and stealthiness of MLGB.
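
The abstract does not disclose MLGB's poisoned-node selection criterion, so as a stand-in the sketch below uses a simple degree-centrality heuristic (`select_poison_nodes` is a hypothetical helper, not the paper's method) to show where such a selector fits in the attack pipeline.

```python
# Hypothetical stand-in: MLGB's real selection method is not described in the
# abstract; degree centrality is used here purely for illustration.
import torch

def select_poison_nodes(edge_index, num_nodes, budget, num_targets):
    """Pick the highest-degree nodes as victims and split them across targets."""
    deg = torch.zeros(num_nodes, dtype=torch.long)
    deg.scatter_add_(0, edge_index[0],
                     torch.ones(edge_index.shape[1], dtype=torch.long))
    candidates = deg.argsort(descending=True)[:budget]  # most-connected first
    # Round-robin assignment so every target label gets poisoned nodes.
    return {t: candidates[t::num_targets] for t in range(num_targets)}

# Toy example: a 5-node graph, budget of 4 nodes split across 2 target labels.
edge_index = torch.tensor([[0, 0, 1, 2, 3],
                           [1, 2, 0, 3, 4]])
print(select_poison_nodes(edge_index, num_nodes=5, budget=4, num_targets=2))
```

One plausible intuition for a centrality-based choice is that triggers on well-connected nodes propagate through more message-passing paths, so fewer poisoned nodes may be needed for the same attack success rate.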


Cited By

  • (2024) Benchmarking Backdoor Attacks on Graph Convolution Neural Networks: A Comprehensive Analysis of Poisoning Techniques. Security, Privacy, and Applied Cryptography Engineering, pp. 149–174. https://doi.org/10.1007/978-3-031-80408-3_10. Online publication date: 13-Dec-2024.



Published In

Pattern Recognition, Volume 152, Issue C, August 2024, 527 pages

Publisher

Elsevier Science Inc.

United States

Publication History

Published: 01 August 2024

Author Tags

  1. Backdoor attack
  2. Graph neural networks
  3. Node classification

Qualifiers

  • Research-article

Article Metrics

  • Downloads (Last 12 months): 0
  • Downloads (Last 6 weeks): 0

Reflects downloads up to 17 Feb 2025

