

Inductive link prediction on temporal networks through causal inference

Published: 18 October 2024

Abstract

The aim of inductive temporal link prediction is to forecast future edges involving nodes unseen during training, a crucial task in temporal network analysis. Existing methods mainly make predictions by learning from node/edge attributes or by investigating node substructures. However, attributes are often missing or incomplete, which limits the application scope of attribute-aware methods, while the performance of substructure-aware models is hindered by neglected node correlations or by structure bias. In addition, current inductive temporal link prediction methods struggle to generalize a learned evolution pattern across different networks. To address these issues, we propose a Causal LInk Prediction (CLIP) framework for inductive temporal link prediction. Specifically, building upon the existing anonymous distance encoding strategy, we eliminate structure bias when estimating the true distance between nodes: we apply the backdoor adjustment through the do-calculus and then decouple the distance encoding vector to approximate the result. Moreover, to better match realistic scenarios, we further leverage node substructures by treating the substructure features as an intervention on top of the true inter-node distance. Finally, CLIP achieves truly inductive temporal link prediction by learning a universal evolution pattern across temporal networks, which is accomplished by training on synthetic dynamic graphs generated from power-law cluster networks. We conduct extensive experiments on four real-world temporal networks (SuperUser, MathOverflow, AskUbuntu, and StackOverflow); the results demonstrate that CLIP outperforms the baselines in terms of average precision (AP) and area under the ROC curve (AUC). Experiments on synthetic test graphs with various distributions further showcase the remarkable generalization ability of CLIP.
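Two of the mechanisms named in the abstract can be sketched in a few lines. The probability table, variable names, and graph parameters below are illustrative assumptions, not values from the paper: the backdoor formula P(Y=1 | do(X=x)) = Σ_z P(Y=1 | x, z) P(z) removes confounding by a variable Z, and NetworkX's `powerlaw_cluster_graph` implements the Holme-Kim power-law cluster model of the kind used to generate synthetic training graphs.

```python
import networkx as nx
import numpy as np

# --- Backdoor adjustment with toy numbers (not from the paper) ---
# A binary confounder Z influences both treatment X and outcome Y.
# Backdoor formula: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) P(z).
p_y_given_xz = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.8}
p_z = {0: 0.7, 1: 0.3}

def p_y_do_x(x):
    """Interventional probability P(Y=1 | do(X=x)) via backdoor adjustment."""
    return sum(p_y_given_xz[(x, z)] * p_z[z] for z in p_z)

# Deconfounded effect of setting X=1 versus X=0.
print(round(p_y_do_x(1) - p_y_do_x(0), 2))  # 0.3

# --- Synthetic temporal graph from a power-law cluster network ---
# powerlaw_cluster_graph(n, m, p) grows a graph with power-law degrees
# and tunable clustering (Holme-Kim model); attaching random timestamps
# turns the static edges into a toy temporal edge stream.
g = nx.powerlaw_cluster_graph(n=200, m=2, p=0.3, seed=0)
rng = np.random.default_rng(0)
events = sorted(
    (float(t), u, v)
    for (u, v), t in zip(g.edges(), rng.uniform(0.0, 1.0, g.number_of_edges()))
)
# Node count and number of time-stamped edge events.
print(g.number_of_nodes(), len(events))
```

Here `p_y_given_xz`, `p_z`, and the graph parameters are hypothetical; the paper applies the same adjustment idea to distance encoding vectors rather than to a scalar outcome.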


Published In

Information Sciences: an International Journal, Volume 681, Issue C, October 2024, 1022 pages.

Publisher: Elsevier Science Inc., United States.


Author Tags

  1. Temporal networks
  2. Inductive link prediction
  3. Causal inference

Qualifiers

  • Research-article
