Polarized message-passing in graph neural networks

Published: 17 July 2024

Abstract

In this paper, we present polarized message-passing (PMP), a novel paradigm for designing message-passing graph neural networks (GNNs). In contrast to existing methods, PMP exploits both node-node similarity and dissimilarity to acquire dual sources of messages from neighbors. These messages are then coalesced, enabling GNNs to learn expressive representations from sparse but strongly correlated neighbors. Three novel GNNs based on the PMP paradigm, namely the PMP graph convolutional network (PMP-GCN), the PMP graph attention network (PMP-GAT), and the PMP graph PageRank network (PMP-GPN), are proposed for various downstream tasks. A theoretical analysis verifies the high expressiveness of the proposed PMP-based GNNs. In addition, an empirical study of five learning tasks on 12 real-world datasets validates the performance of PMP-GCN, PMP-GAT, and PMP-GPN. All three models outperform numerous strong message-passing GNNs across the five learning tasks, demonstrating the effectiveness of the proposed PMP paradigm.
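To make the abstract's idea of "dual sources of messages" concrete, the following is a minimal NumPy sketch of one polarized message-passing step. It is an illustrative assumption, not the paper's actual formulation: the `pmp_layer` function, the cosine-similarity polarization, and the additive coalescing are all placeholders for whatever the paper defines.

```python
import numpy as np

def pmp_layer(X, A, W_sim, W_dis, eps=1e-8):
    """One illustrative polarized message-passing step.

    X: (n, d) node features; A: (n, n) binary adjacency matrix;
    W_sim, W_dis: (d, h) projection weights for the two message channels.
    """
    # Cosine similarity between every pair of nodes.
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)
    S = Xn @ Xn.T
    # Polarized edge weights: one channel emphasizes similar neighbors,
    # the other emphasizes dissimilar ones (dual message sources).
    sim = A * (1.0 + S) / 2.0
    dis = A * (1.0 - S) / 2.0
    # Row-normalize each channel so messages are convex combinations.
    sim = sim / (sim.sum(axis=1, keepdims=True) + eps)
    dis = dis / (dis.sum(axis=1, keepdims=True) + eps)
    # Coalesce the dual messages and apply a ReLU nonlinearity.
    H = (sim @ X) @ W_sim + (dis @ X) @ W_dis
    return np.maximum(H, 0.0)
```

A real PMP-GCN/GAT/GPN layer would replace the fixed polarization weights with learned ones and stack such layers, but the sketch captures the two-channel aggregation that distinguishes the paradigm from single-channel message passing.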

Highlights

A novel paradigm, PMP, is proposed that enables GNNs to learn more expressive representations.
Novel GNN architectures, namely PMP-GCN, PMP-GAT, and PMP-GPN, are constructed for various downstream tasks.
The expressiveness of the PMP-based GNNs is theoretically guaranteed.




Published In

Artificial Intelligence  Volume 331, Issue C
Jun 2024
324 pages

Publisher

Elsevier Science Publishers Ltd.

United Kingdom


Author Tags

  1. Graph neural networks
  2. Message-passing graph neural networks
  3. Representation learning
  4. Graph analysis

Qualifiers

  • Research-article
