Abstract
Recent work has shown that Large Language Models (LLMs) can solve tasks related to Knowledge Graphs, such as Knowledge Graph Completion, even in Zero- or Few-Shot paradigms. However, they are known to hallucinate answers or to produce non-deterministic outputs, leading to incorrectly reasoned responses even when these appear to satisfy the user’s demands. To highlight opportunities and challenges in knowledge graph-related tasks, we experiment with three prominent LLMs, namely Mixtral-8x7b-Instruct-v0.1, GPT-3.5-Turbo-0125 and GPT-4o, on Knowledge Graph Completion for static knowledge graphs, using prompts constructed following the TELeR taxonomy in Zero- and One-Shot contexts, on a Task-Oriented Dialogue system use case. When evaluated under both strict and flexible metrics, our results show that LLMs can be fit for this task, provided that prompts encapsulate sufficient information and relevant examples.
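The abstract outlines a pipeline: a triple with a missing element is embedded in a prompt of varying detail (following the TELeR taxonomy), optionally extended with one worked example, and the model’s answer is scored both strictly and flexibly. The following minimal sketch illustrates that pipeline under stated assumptions; the template wording, the example triple, and the matching rules are hypothetical stand-ins, not the prompts or metrics used in the paper.

```python
# Minimal sketch of the prompting and scoring setup described in the
# abstract. All identifiers and template texts below are illustrative
# assumptions, not the authors' actual prompts or code.

INSTRUCTION = (
    "You are completing a knowledge graph for a task-oriented dialogue "
    "system. Given a triple with a missing object, answer with only the "
    "missing entity."
)

# A single worked example turns the zero-shot prompt into a one-shot one.
EXAMPLE = "Example: (Paris, capital_of, ?) -> France"


def build_prompt(subject: str, relation: str, one_shot: bool = False) -> str:
    """Assemble a zero- or one-shot prompt for object prediction."""
    parts = [INSTRUCTION]
    if one_shot:
        parts.append(EXAMPLE)
    parts.append(f"Triple: ({subject}, {relation}, ?)")
    return "\n".join(parts)


def strict_match(prediction: str, gold: str) -> bool:
    """Strict metric: exact match after trimming whitespace."""
    return prediction.strip() == gold.strip()


def flexible_match(prediction: str, gold: str) -> bool:
    """Flexible metric: case-insensitive containment in either direction."""
    p, g = prediction.strip().lower(), gold.strip().lower()
    return g in p or p in g


print(build_prompt("Berlin", "capital_of", one_shot=True))
```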
Notes
For prompt engineering, we also followed the suggestions of OpenAI (https://community.openai.com) and HuggingFace (https://huggingface.co/docs/transformers/main/tasks/prompting).
References
Chen, H., Liu, X., Yin, D., Tang, J.: A survey on dialogue systems: recent advances and new frontiers. ACM SIGKDD Explor. Newsl. 19(2), 25–35 (2017). https://doi.org/10.1145/3166054.3166058
Fill, H., Fettke, P., Köpke, J.: Conceptual modeling and large language models: impressions from first experiments with ChatGPT. Enterp. Model. Inf. Syst. Archit. Int. J. Concept. Model. 18, 3 (2023). https://doi.org/10.18417/EMISA.18.3
Han, J., Collier, N., Buntine, W.L., Shareghi, E.: PiVe: Prompting with iterative verification improving graph-based generative capability of LLMs. CoRR abs/2305.12392 (2023). https://doi.org/10.48550/ARXIV.2305.12392
Hogan, A., et al.: Knowledge Graphs. Synthesis Lectures on Data, Semantics, and Knowledge, Morgan & Claypool Publishers (2021). https://doi.org/10.2200/S01125ED1V01Y202109DSK022
Iga, V.I., Silaghi, G.C.: Leveraging BERT for natural language understanding of domain-specific knowledge. In: 25th Intl. Symp. on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC 2023, Nancy, France, pp. 210–215. IEEE (2023). https://doi.org/10.1109/SYNASC61333.2023.00035
Iga, V.I., Silaghi, G.C.: Ontology-based dialogue system for domain-specific knowledge acquisition. In: da Silva, A.R., et al. (eds.) Information Systems Development: Organizational Aspects and Societal Trends (ISD2023 Proceedings), Lisboa, Portugal. AIS (2023). https://doi.org/10.62036/ISD.2023.46
Ji, S., Pan, S., Cambria, E., Marttinen, P., Yu, P.S.: A survey on knowledge graphs: representation, acquisition, and applications. IEEE Trans. Neural Networks Learn. Syst. 33(2), 494–514 (2022). https://doi.org/10.1109/TNNLS.2021.3070843
Jiang, A.Q., et al.: Mixtral of Experts. CoRR abs/2401.04088 (2024). https://doi.org/10.48550/ARXIV.2401.04088
Khorashadizadeh, H., Mihindukulasooriya, N., Tiwari, S., Groppe, J., Groppe, S.: Exploring in-context learning capabilities of foundation models for generating knowledge graphs from text. CEUR Workshop Proceedings, vol. 3447, pp. 132–153. CEUR-WS.org (2023). https://ceur-ws.org/Vol-3447/Text2KG_Paper_9.pdf
Pan, S., Luo, L., Wang, Y., Chen, C., Wang, J., Wu, X.: Unifying large language models and knowledge graphs: a roadmap. CoRR abs/2306.08302 (2023). https://doi.org/10.48550/ARXIV.2306.08302
Santu, S.K.K., Feng, D.: TELeR: a general taxonomy of LLM prompts for benchmarking complex tasks. In: Bouamor, H., et al. (eds.) Findings of the ACL: EMNLP 2023, Singapore, pp. 14197–14203. ACL (2023). https://doi.org/10.18653/V1/2023.FINDINGS-EMNLP.946
Wei, X., et al.: ChatIE: zero-shot information extraction via chatting with ChatGPT. CoRR abs/2302.10205 (2023). https://doi.org/10.48550/ARXIV.2302.10205
Zhang, J., Chen, B., Zhang, L., Ke, X., Ding, H.: Neural, symbolic and neural-symbolic reasoning on knowledge graphs. AI Open 2, 14–35 (2021). https://doi.org/10.1016/J.AIOPEN.2021.03.001
Zhao, W.X., et al.: A survey of large language models. CoRR abs/2303.18223 (2023). https://doi.org/10.48550/ARXIV.2303.18223
Zhu, Y., et al.: LLMs for knowledge graph construction and reasoning: recent capabilities and future opportunities. CoRR abs/2305.13168 (2023). https://doi.org/10.48550/ARXIV.2305.13168
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Iga, V.I.R., Silaghi, G.C. (2024). Assessing LLMs Suitability for Knowledge Graph Completion. In: Besold, T.R., d’Avila Garcez, A., Jimenez-Ruiz, E., Confalonieri, R., Madhyastha, P., Wagner, B. (eds) Neural-Symbolic Learning and Reasoning. NeSy 2024. Lecture Notes in Computer Science, vol. 14980. Springer, Cham. https://doi.org/10.1007/978-3-031-71170-1_22
DOI: https://doi.org/10.1007/978-3-031-71170-1_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-71169-5
Online ISBN: 978-3-031-71170-1
eBook Packages: Computer Science, Computer Science (R0)