RSGNN: residual structure graph neural network

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

Compared to conventional artificial neural networks, Graph Neural Networks (GNNs) are better suited to graph-structured data. Graph topology plays an important role in learning graph representations and strongly affects the performance of GNNs. However, existing GNNs struggle to adequately capture and represent the full graph topology. To better capture topological structure information during message passing, we propose a novel GNN architecture called the Residual Structure Graph Neural Network (RSGNN). Specifically, RSGNN constructs residual links on local subgraphs to express potential relationships between nodes, compensating for the limited structural information conveyed by real edge connections alone. It also accounts for the influence of the edge structures of neighboring nodes. We conduct comprehensive experiments on a range of graph benchmark datasets to evaluate the efficacy of the proposed RSGNN model. The results demonstrate that our model outperforms existing state-of-the-art methods and alleviates the over-smoothing issue.
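To make the abstract's idea concrete, below is a minimal sketch of a single message-passing layer with an added residual structure term. It assumes a GCN-style aggregation and uses two-hop shared-neighbor counts on the local subgraph as a stand-in for the "potential relationships between nodes"; the function names, the normalization, and the mixing coefficient alpha are illustrative assumptions, not the paper's exact RSGNN formulation.

    # Minimal sketch of a message-passing layer with a residual structure term.
    # Illustrates the general idea described in the abstract (residual links
    # that inject local-subgraph structure); NOT the paper's exact RSGNN
    # formulation. All names and the weighting scheme are assumptions.
    import numpy as np

    def normalize_adj(A):
        """Symmetrically normalize an adjacency matrix with self-loops."""
        A_hat = A + np.eye(A.shape[0])
        d = A_hat.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        return D_inv_sqrt @ A_hat @ D_inv_sqrt

    def residual_structure_layer(A, H, W, alpha=0.5):
        """One GCN-style propagation step plus a residual structure term.

        A: (n, n) binary adjacency matrix
        H: (n, d) node features from the previous layer
        W: (d, d') learnable weight matrix
        alpha: mixing coefficient for the residual term (assumed hyperparameter)
        """
        # Standard neighborhood aggregation (Kipf & Welling style).
        msg = normalize_adj(A) @ H @ W
        # Residual structure term: shared-neighbor (2-hop) counts stand in for
        # potential relationships between nodes on the local subgraph.
        S = (A @ A) * (1 - np.eye(A.shape[0]))   # zero the diagonal
        S = S / np.maximum(S.sum(axis=1, keepdims=True), 1.0)  # row-normalize
        res = S @ H @ W
        return np.maximum(msg + alpha * res, 0.0)  # ReLU

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        H = rng.standard_normal((4, 8))
        W = rng.standard_normal((8, 8)) * 0.1
        print(residual_structure_layer(A, H, W).shape)  # (4, 8)

Under these assumptions, the residual term lets a node receive messages from structurally related nodes that are not directly connected, which is one way to compensate for information that real edges alone do not convey.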

Data availability

The data are from open-source repositories (see Notes below).

Notes

  1. https://github.com/BorgwardtLab/graph-kernels.

  2. https://github.com/mockingbird2/GraphKernelBenchmark.

  3. https://github.com/ZhenZhang19920330/RetGK_Code.

  4. https://github.com/KangchengHou/gntk.

  5. https://github.com/jcatw/dcnn.

  6. https://github.com/tkipf/gcn.

  7. https://github.com/williamleif/GraphSAGE.

  8. https://github.com/muhanzhang/DGCNN.

  9. https://github.com/weihua916/powerful-gnns.

  10. https://github.com/gbouritsas/GSN.

  11. https://github.com/wokas36/GraphSNN.

Acknowledgements

This research was funded by the National Natural Science Foundation of China (nos. 62072024 and 41971396), the Projects of Beijing Advanced Innovation Center for Future Urban Design (nos. UDC2019033324 and UDC2017033322), the R&D Program of Beijing Municipal Education Commission (KM202210016002 and KM202110016001), the Fundamental Research Funds for Municipal Universities of Beijing University of Civil Engineering and Architecture (nos. X20084 and ZF17061), and the BUCEA Post Graduate Innovation Project (PG2022144).

Author information

Corresponding author

Correspondence to Changlun Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Chen, S., Zhang, C., Gu, F. et al. RSGNN: residual structure graph neural network. Int. J. Mach. Learn. & Cyber. 15, 4079–4092 (2024). https://doi.org/10.1007/s13042-024-02136-0
