Abstract
Different nodes in a graph neighborhood generally carry different importance. In previous work on graph convolutional networks (GCNs), such differences are typically modeled with attention mechanisms. However, as we prove in this paper, soft attention weights suffer from undesired smoothness in large neighborhoods (not to be confused with the oversmoothing effect in deep GCNs). To address this weakness, we introduce a novel framework for conducting graph convolutions, in which nodes are discretely selected from multi-hop neighborhoods to construct adaptive receptive fields (ARFs). ARFs free GCNs from the smoothness of soft attention weights and allow them to efficiently explore long-distance dependencies in graphs. We further propose GRARF (GCN with reinforced adaptive receptive fields) as an instance of this framework, where an optimal policy for constructing ARFs is learned with reinforcement learning. GRARF achieves or matches state-of-the-art performance on public datasets from different domains. Our further analysis corroborates that GRARF is more robust than attention models against neighborhood noise.
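The smoothness issue the abstract refers to can be illustrated with a minimal sketch (not the paper's actual model): softmax attention assigns every neighbor a nonzero weight, so in a large neighborhood even a clearly relevant node receives only a small share of the total mass, whereas a discrete selection (standing in here for the RL-learned ARF policy) isolates it exactly. The function names and the top-k selection rule below are illustrative assumptions, not GRARF's method.

```python
import math

def soft_attention(scores):
    # Softmax over neighbor relevance scores: every neighbor gets a
    # nonzero weight, so many neighbors dilute the important ones.
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def discrete_select(scores, k):
    # Hard selection: keep only the indices of the k highest-scoring
    # neighbors (an illustrative stand-in for a learned selection policy).
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

# A 100-node neighborhood where neighbor 0 is clearly more relevant.
scores = [2.0] + [0.0] * 99
weights = soft_attention(scores)
print(max(weights))                # ~0.069: the relevant neighbor's weight is smoothed away
print(discrete_select(scores, 1))  # {0}: hard selection isolates it
```

With 100 neighbors, the most relevant node's attention weight drops below 0.07 even though its score is much higher than the rest; discrete selection avoids this dilution entirely.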
Ma, X., Li, Z., Song, G. et al. Learning discrete adaptive receptive fields for graph convolutional networks. Sci. China Inf. Sci. 66, 222101 (2023). https://doi.org/10.1007/s11432-021-3443-y