DOI: 10.1145/3576915.3624387
Poster: Multi-target & Multi-trigger Backdoor Attacks on Graph Neural Networks

Published: 21 November 2023 Publication History

Abstract

Recent research has shown that Graph Neural Networks (GNNs) are vulnerable to backdoor attacks, but existing studies focus on the One-to-One setting, in which a single backdoor trigger activates a single target. In this work, we explore two advanced backdoor attacks on GNNs: 1) the One-to-N attack, in which different values of the trigger activate different backdoor targets; and 2) the N-to-One attack, in which the backdoor is activated only when all N triggers are present. Initial experimental results show that both attacks achieve a high attack success rate (up to 99.72%) on GNNs for the node classification task.
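The two trigger mechanisms described in the abstract can be sketched as follows. This is a minimal illustration on raw node feature matrices, not the authors' implementation: the helper names, the trigger dimensions, and the value-to-class mapping are all invented for the example.

```python
import numpy as np

def inject_trigger(features, node_idx, dims, value):
    """Write `value` into the chosen trigger dimensions of one node's features."""
    poisoned = features.copy()
    poisoned[node_idx, dims] = value
    return poisoned

def one_to_n_target(trigger_value, value_to_class):
    """One-to-N: the trigger *value* selects which backdoor target class fires."""
    return value_to_class[trigger_value]

def n_to_one_fires(features, node_idx, triggers):
    """N-to-One: the backdoor fires only when ALL N trigger patterns are present."""
    return all(np.allclose(features[node_idx, dims], value)
               for dims, value in triggers)

# Toy example: 4 nodes, 8-dimensional features.
X = np.zeros((4, 8))
value_to_class = {0.3: 1, 0.6: 2, 0.9: 3}  # hypothetical value -> target map

# One-to-N: same trigger dimensions, different values -> different targets.
Xp = inject_trigger(X, node_idx=0, dims=[6, 7], value=0.6)
target = one_to_n_target(0.6, value_to_class)  # -> class 2

# N-to-One: two separate triggers must both be present for the backdoor to fire.
triggers = [([0, 1], 0.5), ([2, 3], 0.8)]
Xq = inject_trigger(X, 1, [0, 1], 0.5)   # only one trigger present
partial = n_to_one_fires(Xq, 1, triggers)  # False
Xq = inject_trigger(Xq, 1, [2, 3], 0.8)  # both triggers present
fires = n_to_one_fires(Xq, 1, triggers)  # True
```

In a real attack the poisoned features would then be fed through a trained GNN (e.g., GCN or GAT) on a node classification dataset; the sketch only shows the trigger logic itself.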


Cited By

  • (2024) Crucial rather than random: Attacking crucial substructure for backdoor attacks on graph neural networks. Engineering Applications of Artificial Intelligence, 136:108966. DOI: 10.1016/j.engappai.2024.108966. Online publication date: Oct 2024.



      Published In

      cover image ACM Conferences
      CCS '23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security
      November 2023
      3722 pages
      ISBN:9798400700507
      DOI:10.1145/3576915
      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery, New York, NY, United States

      Author Tags

      1. backdoor attacks
      2. graph neural networks
      3. node classification

      Qualifiers

      • Poster

      Conference

      CCS '23

      Acceptance Rates

      Overall Acceptance Rate 1,261 of 6,999 submissions, 18%


      Article Metrics

      • Downloads (Last 12 months)280
      • Downloads (Last 6 weeks)23
      Reflects downloads up to 02 Oct 2024
