DOI: 10.1145/3534678.3539418
Research article (Public Access)
How does Heterophily Impact the Robustness of Graph Neural Networks?: Theoretical Connections and Practical Implications

Published: 14 August 2022

Abstract

We bridge two research directions on graph neural networks (GNNs), by formalizing the relation between heterophily of node labels (i.e., connected nodes tend to have dissimilar labels) and the robustness of GNNs to adversarial attacks. Our theoretical and empirical analyses show that for homophilous graph data, impactful structural attacks always lead to reduced homophily, while for heterophilous graph data the change in the homophily level depends on the node degrees. These insights have practical implications for defending against attacks on real-world graphs: we deduce that separate aggregators for ego- and neighbor-embeddings, a design principle which has been identified to significantly improve prediction for heterophilous graph data, can also offer increased robustness to GNNs. Our comprehensive experiments show that GNNs merely adopting this design achieve improved empirical and certifiable robustness compared to the best-performing unvaccinated model. Additionally, combining this design with explicit defense mechanisms against adversarial attacks leads to an improved robustness with up to 18.33% performance increase under attacks compared to the best-performing vaccinated model.
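The design principle the abstract highlights, keeping separate aggregators for ego- and neighbor-embeddings rather than mixing them, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function and weight names are hypothetical, and the layer shown is a generic ego/neighbor-separation layer in the spirit of that design.

```python
import numpy as np

def heterophily_aware_layer(X, A, W_ego, W_nbr):
    """One message-passing layer that keeps ego- and neighbor-embeddings in
    separate channels (concatenation) instead of averaging them together.
    X: (n, d) node features; A: (n, n) adjacency without self-loops.
    W_ego, W_nbr: (d, k) weight matrices, one per channel."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # guard against isolated nodes
    H_nbr = (A @ X) / deg                    # mean over neighbors only; ego excluded
    # Separate transforms, then concatenate: the ego signal is never diluted
    # by (potentially heterophilous or adversarially inserted) neighbor messages.
    return np.concatenate([X @ W_ego, H_nbr @ W_nbr], axis=1)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy 3-node graph
X = rng.standard_normal((3, 4))
H = heterophily_aware_layer(X, A, rng.standard_normal((4, 2)), rng.standard_normal((4, 2)))
print(H.shape)  # (3, 4): two 2-dimensional channels per node
```

Under an edge-perturbation attack, only the neighbor channel of the output is affected, which is one intuition for why this separation can improve robustness.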



Published In

KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 2022, 5033 pages. ISBN: 9781450393850. DOI: 10.1145/3534678.

Publisher

Association for Computing Machinery, New York, NY, United States

        Author Tags

        1. adversarial attacks
        2. graph neural networks
        3. heterophily
        4. relation
        5. robustness
        6. structural perturbation


        Acceptance Rates

Overall Acceptance Rate: 1,133 of 8,635 submissions, 13%
