

Attention-based exploitation and exploration strategy for multi-hop knowledge graph reasoning

Published: 01 January 2024

Abstract

Knowledge Graphs (KGs) typically suffer from incompleteness. A popular approach to this problem is multi-hop reasoning within a Reinforcement Learning (RL) framework, an explainable and effective way to predict missing links in KGs. However, many previous RL-based models use the scoring function of a pre-trained Knowledge Graph Embedding (KGE) method as the reward function, which limits the model's performance to that of the underlying KGE method. Moreover, the agent may follow a meaningless path if it cannot distinguish the different aspects an entity exhibits in different triples. To address both problems, we propose a multi-hop reasoning model named Ae2KGR that applies two novel strategies: attention-based exploitation and attention-based exploration. The attention-based exploitation strategy incorporates historical and query information with the neighborhood of the current entity, and dynamically updates the current state during the reasoning process, assigning each entity distinct semantic information so that different triples can be distinguished. The attention-based exploration strategy designs a novel policy network and reward function that make decisions dynamically based on the constantly changing state. Extensive experiments on three standard datasets confirm the effectiveness of our innovations, and our proposed Ae2KGR significantly outperforms state-of-the-art methods.
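The attention-based exploitation idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; the embedding dimension, neighborhood size, and all variable names (`neighbors`, `query`, `history`, `context`) are illustrative assumptions. It shows only the core mechanism: scoring the current entity's neighbors against a context vector that combines the query relation with the reasoning history, then forming an attention-weighted state update.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                 # embedding dimension (assumed)
neighbors = rng.normal(size=(5, d))   # embeddings of 5 outgoing edges
query = rng.normal(size=d)            # query-relation embedding
history = rng.normal(size=d)          # encoded path history

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    z = np.exp(x - x.max())
    return z / z.sum()

# Score each neighbor against the combined query + history context,
# then summarize the neighborhood as an attention-weighted mixture.
context = query + history
weights = softmax(neighbors @ context)   # attention distribution over neighbors
state_update = weights @ neighbors       # dynamic component of the new state
```

In the full model such a weighted summary would feed the policy network at every hop, so the same entity receives different semantic emphasis depending on the query and the path taken so far.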




      Published In

      Information Sciences: an International Journal  Volume 653, Issue C
      Jan 2024
      602 pages

      Publisher

      Elsevier Science Inc.

      United States


      Author Tags

      1. Knowledge graph reasoning
      2. Reinforcement learning
      3. Graph attention networks
      4. Deep learning

      Qualifiers

      • Research-article
