
Learning discrete adaptive receptive fields for graph convolutional networks

  • Research Paper
  • Published in Science China Information Sciences

Abstract

Different nodes in a graph neighborhood generally differ in importance. In previous work on graph convolutional networks (GCNs), such differences are typically modeled with attention mechanisms. However, as we prove in our paper, soft attention weights suffer from undesired smoothness in large neighborhoods (not to be confused with the oversmoothing effect in deep GCNs). To address this weakness, we introduce a novel framework for conducting graph convolutions, in which nodes are discretely selected from multi-hop neighborhoods to construct adaptive receptive fields (ARFs). ARFs allow GCNs to avoid the smoothness of soft attention weights and to efficiently explore long-distance dependencies in graphs. We further propose GRARF (GCN with reinforced adaptive receptive fields) as an instance, in which an optimal policy for constructing ARFs is learned with reinforcement learning. GRARF achieves or matches state-of-the-art performance on public datasets from different domains. Our further analysis corroborates that GRARF is more robust than attention models against neighborhood noise.
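The smoothness issue the abstract refers to can be illustrated with a toy softmax computation (the function names and the `margin` parameter below are illustrative, not from the paper): even when one neighbor's attention score exceeds all others by a fixed margin, its normalized weight decays toward zero as the neighborhood grows, whereas a discrete selection keeps full weight on the chosen nodes regardless of neighborhood size.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_weight(n_neighbors, margin=1.0):
    """Attention weight of a single 'important' neighbor whose score
    exceeds the other n_neighbors - 1 scores by a fixed margin."""
    scores = [margin] + [0.0] * (n_neighbors - 1)
    return softmax(scores)[0]

# The important neighbor's weight is e^margin / (e^margin + n - 1),
# which vanishes as the neighborhood grows:
for n in (5, 50, 500):
    print(n, round(top_weight(n), 3))
# → 5 0.405
#   50 0.053
#   500 0.005
```

This is only a sketch of the phenomenon, not of GRARF itself; the paper's contribution is to sidestep this decay by discretely selecting nodes from multi-hop neighborhoods instead of softly weighting all of them.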



Author information

Corresponding author

Correspondence to Guojie Song.


About this article

Cite this article

Ma, X., Li, Z., Song, G. et al. Learning discrete adaptive receptive fields for graph convolutional networks. Sci. China Inf. Sci. 66, 222101 (2023). https://doi.org/10.1007/s11432-021-3443-y

