Abstract
Graph Neural Networks (GNNs) are powerful tools for representation learning on graphs. Most GNNs rely on the message passing mechanism to obtain discriminative feature representations. However, this same mechanism makes most existing GNNs inherently prone to over-smoothing and poor robustness. We therefore propose a simple yet effective Network Embedding framework Without Neighborhood Aggregation (NE-WNA). Specifically, NE-WNA removes the neighborhood aggregation operation from the message passing mechanism: it takes only node features as input and obtains node representations with a simple autoencoder. We also design an enhanced neighboring contrastive (ENContrast) loss to incorporate the graph structure into the node representations. In the representation space, ENContrast encourages low-order neighbors to lie closer to the target node than high-order neighbors. Experimental results show that NE-WNA achieves high accuracy on node classification and strong robustness against adversarial attacks.
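To make the described pipeline concrete, the sketch below shows one way the two ingredients could fit together: a plain feature autoencoder that never aggregates over neighbors, and a contrastive term that weights low-order neighbors more heavily than high-order ones. This is a minimal illustrative sketch, not the authors' implementation; the names (FeatureAutoencoder, neighboring_contrast), layer sizes, hop-weighting scheme, and exact contrastive formulation are all assumptions for exposition.

```python
# Minimal sketch (not the authors' code) of the idea in the abstract:
# an autoencoder over raw node features only, plus a contrastive term that
# weights low-order neighbors more heavily than high-order ones.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAutoencoder(nn.Module):
    """Encode node features X into embeddings Z with no neighborhood aggregation."""

    def __init__(self, in_dim, hid_dim, emb_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, emb_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(emb_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, in_dim)
        )

    def forward(self, x):
        z = self.encoder(x)      # node representations; the graph is not used here
        x_hat = self.decoder(z)  # feature reconstruction for the autoencoder loss
        return z, x_hat


def neighboring_contrast(z, hop_masks, hop_weights, tau=0.5):
    """One plausible 'neighboring contrastive' term: k-hop neighbors are positives,
    with weights that decay with hop order, so 1-hop neighbors end up closer to the
    target node than 2-hop neighbors in the embedding space (assumed formulation)."""
    z = F.normalize(z, dim=1)
    sim = torch.exp(z @ z.t() / tau)            # pairwise similarity scores
    denom = sim.sum(dim=1) - sim.diagonal()     # contrast against all other nodes
    loss = z.new_zeros(())
    for mask, w in zip(hop_masks, hop_weights):
        has_pos = mask.any(dim=1)               # skip nodes with no k-hop neighbors
        pos = (sim * mask.float()).sum(dim=1)
        loss = loss - w * torch.log(pos[has_pos] / denom[has_pos]).mean()
    return loss


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(6, 8)                       # 6 nodes with 8 raw features each
    adj = torch.zeros(6, 6)                     # toy path graph 0-1-2-3-4-5
    for i in range(5):
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    hop1 = adj.bool()
    hop2 = ((adj @ adj) > 0) & ~hop1 & ~torch.eye(6, dtype=torch.bool)

    model = FeatureAutoencoder(in_dim=8, hid_dim=16, emb_dim=4)
    z, x_hat = model(x)
    loss = F.mse_loss(x_hat, x) + neighboring_contrast(z, [hop1, hop2], [1.0, 0.5])
    loss.backward()
    print("total loss:", float(loss))
```

In a full training run the reconstruction and contrastive terms would be optimized jointly over many epochs and the learned embeddings fed to a downstream classifier; those details, along with the paper's actual ENContrast weighting, are omitted here.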
Acknowledgements
This work was supported by the National Natural Science Foundation of China (No. 61972135), and the Natural Science Foundation of Heilongjiang Province in China (No. LH2020F043).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhang, J., Yang, Y., Liu, Y., Han, M. (2023). NE-WNA: A Novel Network Embedding Framework Without Neighborhood Aggregation. In: Amini, MR., Canu, S., Fischer, A., Guns, T., Kralj Novak, P., Tsoumakas, G. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022. Lecture Notes in Computer Science, vol. 13714. Springer, Cham. https://doi.org/10.1007/978-3-031-26390-3_26
DOI: https://doi.org/10.1007/978-3-031-26390-3_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-26389-7
Online ISBN: 978-3-031-26390-3
eBook Packages: Computer Science, Computer Science (R0)