
When Do We Need Graph Neural Networks for Node Classification?

  • Conference paper

Complex Networks & Their Applications XII (COMPLEX NETWORKS 2023)

Part of the book series: Studies in Computational Intelligence (SCI, volume 1141)


Abstract

Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by additionally exploiting graph structure through a relational inductive bias (edge bias), rather than treating nodes as a collection of independent and identically distributed (i.i.d.) samples. Although GNNs are widely believed to outperform basic NNs on real-world tasks, in some cases they offer little performance gain or even underperform graph-agnostic NNs. To identify these cases, based on graph signal processing and statistical hypothesis testing, we propose two measures that characterize when the edge bias in features and labels provides no advantage. Based on these measures, a threshold value can be given to predict the potential performance advantage of graph-aware models over graph-agnostic models.
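To make the graph-aware versus graph-agnostic distinction concrete, the following minimal sketch contrasts a layer that transforms each node independently (the i.i.d. view) with a layer that injects the edge bias through neighborhood aggregation. It assumes a GCN-style row-normalized adjacency matrix with self-loops; the function names and the toy path graph are illustrative, not the experimental setup of the paper.

```python
import numpy as np

def graph_agnostic_layer(X, W):
    """Basic NN layer: each node is transformed independently (i.i.d. view)."""
    return np.maximum(X @ W, 0)  # ReLU(X W)

def graph_aware_layer(X, W, A_hat):
    """GCN-style layer: the edge bias enters through neighborhood averaging."""
    return np.maximum(A_hat @ X @ W, 0)  # ReLU(A_hat X W)

# Toy example: 4 nodes on a path graph, 3 input features, 2 output features.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_self = A + np.eye(4)                               # add self-loops
A_hat = A_self / A_self.sum(axis=1, keepdims=True)   # row-normalize

print(graph_agnostic_layer(X, W))      # ignores the graph entirely
print(graph_aware_layer(X, W, A_hat))  # mixes each node with its neighbors
```

When the edge bias is uninformative, the aggregation step only blurs the features, which is exactly the regime the proposed measures are designed to detect.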


References

  1. Ahmed, H.B., Dare, D., Boudraa, A.-O.: Graph signals classification using total variation and graph energy informations. In: 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 667–671. IEEE (2017)

    Google Scholar 

  2. Battaglia, P.W., et al.: Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261 (2018)

  3. Chen, S., Sandryhaila, A., Moura, J.M., Kovacevic, J.: Signal recovery on graphs: variation minimization. IEEE Trans. Signal Process. 63(17), 4609–4624 (2015)

    Article  MathSciNet  Google Scholar 

  4. Chung, F.R.: Spectral Graph Theory, vol. 92. American Mathematical Soc. (1997)

    Google Scholar 

  5. Cong, W., Ramezani, M., Mahdavi, M.: On provable benefits of depth in training graph convolutional networks. Adv. Neural. Inf. Process. Syst. 34, 9936–9949 (2021)

    Google Scholar 

  6. Daković, M., Stanković, L., Sejdić, E.: Local smoothness of graph signals. Math. Probl. Eng. 2019, 1–14 (2019)

    Google Scholar 

  7. Defferrard, M., Bresson, X., Vandergheynst, P.: Convolutional neural networks on graphs with fast localized spectral filtering. Adv. Neural Inf. Process. Syst. 29 (2016)

    Google Scholar 

  8. Hamilton, W., Ying, Z., Leskovec, J.: Inductive representation learning on large graphs. Adv. Neural Inf. Process. Syst. 30 (2017)

    Google Scholar 

  9. Hamilton, W.L.: Graph representation learning. Synth. Lect. Artif. Intell. Mach. Learn. 14(3), 1–159 (2020)

    Google Scholar 

  10. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: International Conference on Learning Representations (2016)

    Google Scholar 

  11. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436 (2015)

    Google Scholar 

  12. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

    Article  Google Scholar 

  13. Li, Q., Han, Z., Wu, X.-M.: Deeper insights into graph convolutional networks for semi-supervised learning. Proc. AAAI Conf. Artif. Intell. 32 (2018)

    Google Scholar 

  14. Lim, D., et al.: Large scale learning on non-homophilous graphs: new benchmarks and strong simple methods. Adv. Neural. Inf. Process. Syst. 34, 20887–20902 (2021)

    Google Scholar 

  15. Lim, D., Li, X., Hohne, F., Lim, S.-N.: New benchmarks for learning on non-homophilous graphs. arXiv preprint arXiv:2104.01404 (2021)

  16. Liu, M., Wang, Z., Ji, S.: Non-local graph neural networks. arXiv preprint arXiv:2005.14612 (2020)

  17. Luan, S.: On addressing the limitations of graph neural networks. arXiv preprint arXiv:2306.12640 (2023)

  18. Luan, S., et al.: Is heterophily a real nightmare for graph neural networks to do node classification? arXiv preprint arXiv:2109.05641 (2021)

  19. Luan, S., et al.: Revisiting heterophily for graph neural networks. Adv. Neural. Inf. Process. Syst. 35, 1362–1375 (2022)

    Google Scholar 

  20. Luan, S., et al.: When do graph neural networks help with node classification: investigating the homophily principle on node distinguishability. Adv. Neural Inf. Process. Syst. 36 (2023)

    Google Scholar 

  21. Luan, S., Zhao, M., Chang, X.-W., Precup, D.: Break the ceiling: stronger multi-scale deep graph convolutional networks. Adv. Neural Inf. Process. Syst. 32 (2019)

    Google Scholar 

  22. Luan, S., Zhao, M., Chang, X.-W., Precup, D.: Training matters: unlocking potentials of deeper graph convolutional neural networks. arXiv preprint arXiv:2008.08838 (2020)

  23. Luan, S., Zhao, M., Hua, C., Chang, X.-W., Precup, D.: Complete the missing half: augmenting aggregation filtering with diversification for graph convolutional networks. In: NeurIPS 2022 Workshop: New Frontiers in Graph Learning (2022)

    Google Scholar 

  24. Maehara, T.: Revisiting graph neural networks: all we have is low-pass filters. arXiv preprint arXiv:1905.09550 (2019)

  25. McPherson, M., Smith-Lovin, L., Cook, J.M.: Birds of a feather: homophily in social networks. Ann. Rev. Sociol. 27(1), 415–444 (2001)

    Article  Google Scholar 

  26. Pei, H, Wei, B., Chang, K.C.-C., Lei, Y., Yang, B.: Geom-gcn: geometric graph convolutional networks. In: International Conference on Learning Representations (2020)

    Google Scholar 

  27. Velickovic, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., Bengio, Y.: Graph attention networks. In: International Conference on Learning Representations (2018)

    Google Scholar 

  28. Wu, F., Souza, A., Zhang, T., Fifty, C., Yu, T., Weinberger, K.: Simplifying graph convolutional networks. In: International Conference on Machine Learning, pp. 6861–6871. PMLR (2019)

    Google Scholar 

  29. Zhou, D., Bousquet, O., Lal, T.N., Weston, J., Schölkopf, B.: Learning with local and global consistency. In: Advances in Neural Information Processing Systems, pp. 321–328 (2004)

    Google Scholar 

  30. Zhu, J., Yan, Y., Zhao, L., Heimann, M., Akoglu, L., Koutra, D.: Generalizing graph neural networks beyond homophily. arXiv preprint arXiv:2006.11468 (2020)

Download references

Author information

Correspondence to Sitao Luan.


A Details of NSV and Sample Covariance Matrix

The sample covariance matrix $S$ is computed as follows, where $\boldsymbol{x}_{i:}$ denotes the $i$-th row of the feature matrix $X$:

$$\begin{aligned} X &= \left[ \begin{array}{c} \boldsymbol{x}_{1:} \\ \vdots \\ \boldsymbol{x}_{N:} \end{array} \right], \quad \bar{\boldsymbol{x}} = \frac{1}{N} \sum _{i=1}^{N} \boldsymbol{x}_{i:} = \frac{1}{N} \boldsymbol{1}^{\top} X, \\ S &= \frac{1}{N-1} \left( X - \boldsymbol{1} \bar{\boldsymbol{x}} \right)^{\top} \left( X - \boldsymbol{1} \bar{\boldsymbol{x}} \right). \end{aligned}$$
(12)
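As a quick numerical sanity check of Eq. (12), the snippet below (a minimal numpy sketch; the random data and variable names are illustrative) computes $S$ from the centered feature matrix and confirms that it matches numpy's built-in sample covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
N, F = 6, 3
X = rng.standard_normal((N, F))            # N nodes, F features per node

x_bar = X.mean(axis=0, keepdims=True)      # row vector (1/N) 1^T X
S = (X - x_bar).T @ (X - x_bar) / (N - 1)  # Eq. (12)

# np.cov expects variables in rows, so pass X.T; it uses the same 1/(N-1) factor.
assert np.allclose(S, np.cov(X.T))
```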

It is easy to verify that

$$\begin{aligned} S &= \frac{1}{N-1} \left( X - \frac{1}{N} \boldsymbol{1}\boldsymbol{1}^{\top} X \right)^{\top} \left( X - \frac{1}{N} \boldsymbol{1}\boldsymbol{1}^{\top} X \right) = \frac{1}{N-1} \left( X^{\top} X - \frac{1}{N} X^{\top} \boldsymbol{1}\boldsymbol{1}^{\top} X \right) \\ &= \frac{1}{N(N-1)} X^{\top} \left( N I - \boldsymbol{1}\boldsymbol{1}^{\top} \right) X, \end{aligned}$$

so that, since $NI - \boldsymbol{1}\boldsymbol{1}^{\top}$ is the Laplacian of the complete graph on $N$ nodes and therefore splits as the sum of the Laplacians of $\mathcal {G}$ and its complement $\mathcal {G}^C$,

$$\operatorname{trace}(S) = \frac{1}{N(N-1)} \operatorname{trace}\left( X^{\top} \left( N I - \boldsymbol{1}\boldsymbol{1}^{\top} \right) X \right) = \frac{1}{N(N-1)} \left( E_D^{\mathcal {G}}(X) + E_D^{\mathcal {G}^C}(X) \right).$$
(13)
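The decomposition in Eq. (13) can also be verified numerically. The sketch below assumes the Dirichlet (graph) energy is $E_D^{\mathcal {G}}(X) = \operatorname{trace}(X^{\top} L X)$ with $L = D - A$ the combinatorial Laplacian, consistent with the derivation above; the random graph and variable names are illustrative:

```python
import numpy as np

def dirichlet_energy(X, A):
    """E_D(X) = trace(X^T L X), with L = D - A the combinatorial Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    return np.trace(X.T @ L @ X)

rng = np.random.default_rng(1)
N, F = 8, 4
X = rng.standard_normal((N, F))

# Random undirected graph G and its complement G^C (no self-loops).
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                      # symmetrize
A_comp = 1.0 - A - np.eye(N)     # complement graph

x_bar = X.mean(axis=0, keepdims=True)
trace_S = np.trace((X - x_bar).T @ (X - x_bar)) / (N - 1)

rhs = (dirichlet_energy(X, A) + dirichlet_energy(X, A_comp)) / (N * (N - 1))
assert np.allclose(trace_S, rhs)
```

The check works because the degree matrices of $\mathcal {G}$ and $\mathcal {G}^C$ sum to $(N-1)I$, so their Laplacians sum to $NI - \boldsymbol{1}\boldsymbol{1}^{\top}$, the Laplacian of the complete graph.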


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Luan, S., Hua, C., Lu, Q., Zhu, J., Chang, X.-W., Precup, D. (2024). When Do We Need Graph Neural Networks for Node Classification? In: Cherifi, H., Rocha, L.M., Cherifi, C., Donduran, M. (eds) Complex Networks & Their Applications XII. COMPLEX NETWORKS 2023. Studies in Computational Intelligence, vol 1141. Springer, Cham. https://doi.org/10.1007/978-3-031-53468-3_4


  • DOI: https://doi.org/10.1007/978-3-031-53468-3_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53467-6

  • Online ISBN: 978-3-031-53468-3

  • eBook Packages: Engineering, Engineering (R0)
