
Assessing LLMs Suitability for Knowledge Graph Completion

  • Conference paper
  • In: Neural-Symbolic Learning and Reasoning (NeSy 2024)

Abstract

Recent work has shown that Large Language Models (LLMs) can solve tasks related to Knowledge Graphs, such as Knowledge Graph Completion, even in Zero- or Few-Shot paradigms. However, they are known to hallucinate answers and to produce non-deterministic output, leading to wrongly reasoned responses even when these appear to satisfy the user’s demands. To highlight opportunities and challenges in knowledge graph-related tasks, we experiment with three prominent LLMs, namely Mixtral-8x7b-Instruct-v0.1, GPT-3.5-Turbo-0125 and GPT-4o, on Knowledge Graph Completion for static knowledge graphs, using prompts constructed following the TELeR taxonomy in Zero- and One-Shot contexts, on a Task-Oriented Dialogue system use case. Evaluated with both strict and flexible metrics, our results show that LLMs could be fit for such a task if prompts encapsulate sufficient information and relevant examples.
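To make the setup concrete, the sketch below builds Zero- and One-Shot prompts for a triple-completion query of the kind the abstract describes. It is a minimal illustration under assumed conventions, not the authors’ actual prompts: the task framing, relation names, example triple, and the build_prompt helper are invented here, while the real prompts follow the TELeR taxonomy levels and can be found in the authors’ repository linked in the Notes.

```python
# Minimal sketch of Zero- and One-Shot prompts for Knowledge Graph
# Completion (predicting the missing object of an incomplete triple).
# Illustrative only: wording, relation names, and the example triple
# are assumptions, not the TELeR-level prompts used in the paper.

ZERO_SHOT_TEMPLATE = """You are an assistant for Knowledge Graph Completion.
Given an incomplete triple (subject, relation, ?), answer with only the
missing object entity.

Triple: ({subject}, {relation}, ?)
Answer:"""

ONE_SHOT_TEMPLATE = """You are an assistant for Knowledge Graph Completion.
Given an incomplete triple (subject, relation, ?), answer with only the
missing object entity.

Example:
Triple: (Cluj-Napoca, locatedIn, ?)
Answer: Romania

Triple: ({subject}, {relation}, ?)
Answer:"""


def build_prompt(subject: str, relation: str, one_shot: bool = False) -> str:
    """Fill the chosen template with the incomplete triple to complete."""
    template = ONE_SHOT_TEMPLATE if one_shot else ZERO_SHOT_TEMPLATE
    return template.format(subject=subject, relation=relation)


if __name__ == "__main__":
    print(build_prompt("Danube", "flowsThrough", one_shot=True))
```

The TELeR taxonomy itself classifies prompts along turn, expression, level of details and role; in those terms, the One-Shot variant above differs from the Zero-Shot one only by including a single worked example.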


Notes

  1. https://huggingface.co/mistralai/Mixtral-8x7b-Instruct-v0.1.

  2. https://platform.openai.com/docs/models/gpt-3-5-turbo.

  3. https://platform.openai.com/docs/models/gpt-4o.

  4. https://github.com/IonutIga/LLMs-for-KGC.

  5. For prompt engineering, we also followed OpenAI (https://community.openai.com) and HuggingFace (https://huggingface.co/docs/transformers/main/tasks/prompting) suggestions.
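As a complement to notes 2 and 5, a query against one of the listed models might look like the sketch below, using the official openai Python client. The prompt text and decoding settings (e.g. temperature 0 to curb non-determinism) are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch: sending a One-Shot KGC prompt to GPT-3.5-Turbo-0125
# (note 2). Assumes the `openai` package is installed and OPENAI_API_KEY
# is set in the environment; prompt wording and temperature are
# illustrative assumptions, not the paper's settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Given an incomplete triple (subject, relation, ?), answer with only "
    "the missing object entity.\n\n"
    "Example:\nTriple: (Cluj-Napoca, locatedIn, ?)\nAnswer: Romania\n\n"
    "Triple: (Danube, flowsThrough, ?)\nAnswer:"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # assumption: reduce the non-determinism noted in the abstract
)
print(response.choices[0].message.content.strip())
```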

References

  1. Chen, H., Liu, X., Yin, D., Tang, J.: A survey on dialogue systems: recent advances and new frontiers. ACM SIGKDD Explorations Newsl 19(2), 25–35 (2017). https://doi.org/10.1145/3166054.3166058

  2. Fill, H., Fettke, P., Köpke, J.: Conceptual modeling and large language models: Impressions from first experiments with ChatGPT. Enterp. Model. Inf. Syst. Archit. Int. J. Concept. Model. 18, 3 (2023). https://doi.org/10.18417/EMISA.18.3

  3. Han, J., Collier, N., Buntine, W.L., Shareghi, E.: PiVe: Prompting with iterative verification improving graph-based generative capability of LLMs. CoRR abs/2305.12392 (2023). https://doi.org/10.48550/ARXIV.2305.12392

  4. Hogan, A., et al.: Knowledge Graphs. Synthesis Lectures on Data, Semantics, and Knowledge, Morgan & Claypool Publishers (2021). https://doi.org/10.2200/S01125ED1V01Y202109DSK022

  5. Iga, V.I., Silaghi, G.C.: Leveraging BERT for natural language understanding of domain-specific knowledge. In: 25th Intl. Symp. on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC 2023, Nancy, France, pp. 210–215. IEEE (2023). https://doi.org/10.1109/SYNASC61333.2023.00035

  6. Iga, V.I., Silaghi, G.C.: Ontology-based dialogue system for domain-specific knowledge acquisition. In: da Silva, A.R., et al. (eds.) Information Systems Development: Organizational Aspects and Societal Trends (ISD2023 Proceedings), Lisboa, Portugal. AIS (2023). https://doi.org/10.62036/ISD.2023.46

  7. Ji, S., Pan, S., Cambria, E., Marttinen, P., Yu, P.S.: A survey on knowledge graphs: representation, acquisition, and applications. IEEE Trans. Neural Networks Learn. Syst. 33(2), 494–514 (2022). https://doi.org/10.1109/TNNLS.2021.3070843

  8. Jiang, A.Q., et al.: Mixtral of Experts. CoRR abs/2401.04088 (2024). https://doi.org/10.48550/ARXIV.2401.04088

  9. Khorashadizadeh, H., Mihindukulasooriya, N., Tiwari, S., Groppe, J., Groppe, S.: Exploring in-context learning capabilities of foundation models for generating knowledge graphs from text. CEUR Workshop Proceedings, vol. 3447, pp. 132–153. CEUR-WS.org (2023). https://ceur-ws.org/Vol-3447/Text2KG_Paper_9.pdf

  10. Pan, S., Luo, L., Wang, Y., Chen, C., Wang, J., Wu, X.: Unifying Large Language Models and knowledge graphs: A roadmap. CoRR abs/2306.08302 (2023). https://doi.org/10.48550/ARXIV.2306.08302

  11. Santu, S.K.K., Feng, D.: TELeR: A general taxonomy of LLM prompts for benchmarking complex tasks. In: Bouamor, H., et al. (eds.) Findings of the ACL: EMNLP 2023, Singapore, pp. 14197–14203. ACL (2023). https://doi.org/10.18653/V1/2023.FINDINGS-EMNLP.946

  12. Wei, X., et al.: ChatIE: zero-shot information extraction via chatting with ChatGPT. CoRR abs/2302.10205 (2023). https://doi.org/10.48550/ARXIV.2302.10205

  13. Zhang, J., Chen, B., Zhang, L., Ke, X., Ding, H.: Neural, symbolic and neural-symbolic reasoning on knowledge graphs. AI Open 2, 14–35 (2021). https://doi.org/10.1016/J.AIOPEN.2021.03.001

  14. Zhao, W.X., et al.: A survey of large language models. CoRR abs/2303.18223 (2023). https://doi.org/10.48550/ARXIV.2303.18223

  15. Zhu, Y., et al.: LLMs for knowledge graph construction and reasoning: recent capabilities and future opportunities. CoRR abs/2305.13168 (2023). https://doi.org/10.48550/ARXIV.2305.13168

Author information

Corresponding author

Correspondence to Gheorghe Cosmin Silaghi.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Iga, V.I.R., Silaghi, G.C. (2024). Assessing LLMs Suitability for Knowledge Graph Completion. In: Besold, T.R., d’Avila Garcez, A., Jimenez-Ruiz, E., Confalonieri, R., Madhyastha, P., Wagner, B. (eds) Neural-Symbolic Learning and Reasoning. NeSy 2024. Lecture Notes in Computer Science, vol 14980. Springer, Cham. https://doi.org/10.1007/978-3-031-71170-1_22

  • DOI: https://doi.org/10.1007/978-3-031-71170-1_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-71169-5

  • Online ISBN: 978-3-031-71170-1

  • eBook Packages: Computer Science (R0)
