Diffuse and Smooth: Beyond Truncated Receptive Field for Scalable and Adaptive Graph Representation Learning

Published: 27 February 2023

Abstract

Because the scope of the receptive field and the depth of Graph Neural Networks (GNNs) are two orthogonal aspects of graph learning, existing GNNs often have shallow layers with a truncated receptive field and fall far short of satisfactory performance. In this article, we follow the idea of decoupling graph convolution into propagation and transformation processes, which generates representations over a sequence of increasingly larger neighborhoods. Although this approach can enlarge the receptive field, it leaves two critical problems unsolved: how to find a suitable receptive field that avoids under-smoothing or over-smoothing, and how to balance different diffusion operators to better capture local and global dependencies. We tackle these challenges and propose a Scalable, Adaptive Graph Convolutional Network (SAGCN) built on a Transformer architecture. Concretely, we propose a novel non-heuristic metric that quickly finds a suitable number of diffusion iterations and produces smoothed local embeddings, making the truncated receptive field scalable and independent of prior experience. Furthermore, we devise smooth2seq and diffusion-based position schemes, introduced into the Transformer architecture, to better capture local and global information among embeddings. Experimental results show that SAGCN achieves high accuracy, scalability, and efficiency on various open benchmarks and is competitive with state-of-the-art methods.
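The decoupled propagate-then-transform idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it diffuses node features over a symmetrically normalized adjacency matrix, producing one embedding per receptive-field size, and stops when successive diffusions barely change (a crude stand-in for the paper's non-heuristic stopping metric; the tolerance `tol` and hop cap `max_hops` are assumed parameters).

```python
import numpy as np

def normalized_adjacency(adj):
    # GCN-style symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def diffuse_features(adj, x, max_hops=10, tol=1e-3):
    """Propagate features over increasingly large neighborhoods.

    Returns the full sequence of smoothed embeddings (one per hop),
    halting early once the relative change between consecutive
    diffusions drops below `tol` -- a rough proxy for detecting the
    onset of over-smoothing.
    """
    a_hat = normalized_adjacency(adj)
    outs = [x]
    for _ in range(max_hops):
        x_next = a_hat @ outs[-1]
        delta = np.linalg.norm(x_next - outs[-1]) / (np.linalg.norm(outs[-1]) + 1e-12)
        outs.append(x_next)
        if delta < tol:
            break  # further propagation adds little new information
    return outs

# Toy example: a 4-node path graph with random 3-dimensional features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.default_rng(0).normal(size=(4, 3))
seq = diffuse_features(adj, x)
print(len(seq), seq[-1].shape)
```

In the paper's framework, a sequence like `seq` would then be fed to a downstream transformation model (here, the Transformer with the smooth2seq scheme) rather than being collapsed into a single embedding, so both local (few-hop) and global (many-hop) views are preserved.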


Cited By

  • (2024) Optimizing Urban Traffic Flow Prediction: Integrating Spatial–Temporal Analysis with a Hybrid GNN and Gated-Attention GRU Model. Smart Data Intelligence, 381–391. DOI: 10.1007/978-981-97-3191-6_29. Online publication date: 28-Jul-2024.


      Published In

      ACM Transactions on Knowledge Discovery from Data  Volume 17, Issue 5
      June 2023
      386 pages
      ISSN:1556-4681
      EISSN:1556-472X
      DOI:10.1145/3583066

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 27 February 2023
      Online AM: 22 November 2022
      Accepted: 16 November 2022
      Revised: 04 September 2022
      Received: 04 May 2022
      Published in TKDD Volume 17, Issue 5


      Author Tags

      1. Graph Neural Network
      2. diffusion and smoothing
      3. graph Transformer
      4. scalability and adaptability
      5. graph receptive field

      Qualifiers

      • Research-article

      Funding Sources

      • National Natural Science Foundation of China
      • Research Seed Funds of School of Interdisciplinary Studies of Renmin University of China, National Social Science Foundation of China
      • Opening Project of State Key Laboratory of Digital Publishing Technology of Founder Group

