DOI: 10.1609/aaai.v38i12.29255

Hierarchical topology isomorphism expertise embedded graph contrastive learning

Published: 20 February 2024

Abstract

Graph contrastive learning (GCL) aims to align positive features while differentiating negative features in the latent space by minimizing a pairwise contrastive loss. As an outstanding discriminative unsupervised graph representation learning approach, GCL has achieved impressive success on various graph benchmarks. However, such an approach falls short of recognizing the topology isomorphism of graphs, so graphs with relatively homogeneous node features cannot be sufficiently discriminated. By revisiting classic graph topology recognition works, we find that the corresponding expertise intuitively complements GCL methods. To this end, we propose a novel hierarchical topology isomorphism expertise embedded graph contrastive learning method, which introduces knowledge distillation to empower GCL models to learn hierarchical topology isomorphism expertise at both the graph tier and the subgraph tier. Moreover, the proposed method is plug-and-play, and we empirically demonstrate that it is universal to multiple state-of-the-art GCL models. Solid theoretical analyses further prove that, compared with conventional GCL methods, our method attains a tighter upper bound on the Bayes classification error. We conduct extensive experiments on real-world benchmarks to exhibit the performance superiority of our method over candidate GCL methods; e.g., in the real-world graph representation learning experiments, the proposed method beats the state-of-the-art method by 0.23% in the unsupervised representation learning setting and by 0.43% in the transfer learning setting. Our code is available at https://github.com/jyf123/HTML.
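The pairwise contrastive objective the abstract refers to is commonly instantiated as an NT-Xent/InfoNCE loss over two augmented views of each graph: view i of one augmentation and view i of the other form the positive pair, and all other embeddings in the batch serve as negatives. The sketch below is a minimal NumPy illustration of that generic loss, not the paper's implementation; the function name, batch layout, and temperature default are assumptions for exposition.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """Generic NT-Xent / InfoNCE contrastive loss over two augmented views.

    z1, z2: (n, d) arrays of graph embeddings from two augmentations.
    Row i of z1 and row i of z2 are the positive pair; every other row
    in the concatenated batch acts as a negative.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                   # (2n, d) batch
    z = z / np.linalg.norm(z, axis=1, keepdims=True)       # unit-normalize -> cosine sim
    sim = (z @ z.T) / tau                                  # (2n, 2n) similarity logits
    np.fill_diagonal(sim, -np.inf)                         # exclude self-similarity
    # index of each row's positive partner: i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # log-softmax over each row, then pick out the positive logit
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this loss pulls the two views of the same graph together while pushing apart views of different graphs, which is exactly the alignment/discrimination trade-off the abstract argues is blind to topology isomorphism when node features are homogeneous.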



Published In

AAAI'24/IAAI'24/EAAI'24: Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence
February 2024
23861 pages
ISBN:978-1-57735-887-9

Sponsors

  • Association for the Advancement of Artificial Intelligence

Publisher

AAAI Press


Qualifiers

  • Research-article
  • Research
  • Refereed limited
