Evaluating Complex Entity Knowledge Propagation for Knowledge Editing in LLMs
Figure 1. Retrieving entities and text descriptions, creating KGs to form comprehensive triplets that serve as inputs to the LLMs, and evaluating the performance of KE methods.

Figure 2. Task formulation: we take a sample text document from Wikipedia about an entity, create the relevant KGs for that entity, and compose a comprehensive representation as a triplet, which is then used to evaluate knowledge propagation by different KE methods.

Figure 3. Step 1: overview of the proposed approach with the Brooklyn Bridge as the example entity, showing how a comprehensive triplet is generated via KGs from the documents collected from Wikipedia. Step 2: evaluation using multi-hop question answering.

Figure A1. Sample data example for evaluation of the proposed approach. Each example has a unique case ID, followed by various single-hop and multi-hop questions covering the different evaluation metrics of the KE models.
Abstract
1. Introduction
2. Related Works
2.1. Editing Models for Factual Knowledge in LLMs
2.2. Evaluating KE Methods in LLMs
2.3. Recognizing Approachable Knowledge for LLMs
2.4. Knowledge Graphs and LLMs
3. Task Definition
- Editing existing knowledge, i.e., the tail entity: (h, r, t) → (h, r, t′).
- Injecting or adding new knowledge (a new tail entity) for an existing head entity: (h, r, ∅) → (h, r, t), where ∅ represents no or a null value for the tail entity.
- Editing one-to-many relations, i.e., (h, r, {t₁, t₂, …, tₙ}). This includes editing or injection depending on whether the knowledge is present in the LLM or not. An example of a one-to-many relation would be the different pieces of information about a person, i.e., date of birth, gender, occupation, parents, siblings, etc.
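The three edit types above can be sketched over a toy triplet store. This is a minimal illustration under assumed names (the store, `edit`, and `inject` are hypothetical helpers, not the paper's implementation); one-to-many relations are modeled by keeping a list of tail entities per (head, relation) key.

```python
# Toy triplet store keyed by (head entity, relation); values are lists of
# tail entities so that one-to-many relations fall out naturally.
kg = {
    ("Brooklyn Bridge", "located in"): ["New York City"],
    ("Tomin", "sibling of"): ["James"],
}

def edit(kg, head, relation, new_tail):
    """(h, r, t) -> (h, r, t'): replace the existing tail entity."""
    kg[(head, relation)] = [new_tail]

def inject(kg, head, relation, tail):
    """(h, r, None) -> (h, r, t): add knowledge for an existing head entity.
    For a one-to-many relation the new tail is appended, not substituted."""
    kg.setdefault((head, relation), [])
    if tail not in kg[(head, relation)]:
        kg[(head, relation)].append(tail)

edit(kg, "Brooklyn Bridge", "located in", "NYC")   # edit an existing tail
inject(kg, "Tomin", "sibling of", "Anna")          # one-to-many injection
inject(kg, "Tomin", "occupation", "engineer")      # inject where tail was null
```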
4. Data Generation
5. Evaluation Metrics
- Generalization (Gn): Evaluates whether, through the comprehensive triplet representation, the knowledge editor can also update the facts that are semantically related to the head entity whose tail entity was updated. For example, in “Tomin is the sibling of James”, sibling is symmetric, and therefore the editing methodology should be able to imply that “James is the sibling of Tomin” as well.
- Head Entity Aliasing (HEA): This refers to whether the editing method can also apply the edit to an alias of the head entity. For example, in generalization, the editing method should be able to tell that James is also the sibling of Tomin; in HEA, by contrast, if we modify something for Tomin, we must check whether the facts were changed for James as well.
- Compositionality (CI): In the compositionality test, we check whether the editing method can compose the edited fact with other knowledge or facts about the target tail entity. We also check whether the editing method can compose a fact about an alternate head entity using the edited fact.
- Forgetfulness (Fo): When we are dealing with recursive entities and multiple head and tail entities through the comprehensive triplet, we should make sure that injecting new facts does not change the content of head or tail entities that are unrelated to the newly inserted fact.
- Specificity (Sp): We check whether, for a given head entity where the tail entity has been edited with updated knowledge, the other tail entities for other relations for the same head entity are unaffected if they are not relevant to the edit.
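The metrics above can be operationalized as predicates over a model's answers before and after an edit. The sketch below is illustrative only: `answers_pre`/`answers_post` stand in for querying an actual LLM, and the entities and questions are hypothetical.

```python
# Sketch of metric checks over pre-/post-edit question answers.
# Edit applied: (Tomin, sibling of, James) is injected into the model.
answers_post = {
    "Who is the sibling of Tomin?": "James",
    "Who is the sibling of James?": "Tomin",   # symmetric implication
    "Who is the sibling of Tommy?": "James",   # 'Tommy' as an alias of Tomin
    "Where was Tomin born?": "Finland",        # fact unrelated to the edit
}
# Before the edit the symmetric fact was unknown; unrelated facts were the same.
answers_pre = dict(answers_post, **{"Who is the sibling of James?": "unknown"})

def generalization(post):
    """Gn: the semantically related (here, symmetric) fact is also updated."""
    return post["Who is the sibling of James?"] == "Tomin"

def head_entity_aliasing(post):
    """HEA: the edit also holds under an alias of the head entity."""
    return post["Who is the sibling of Tommy?"] == "James"

def specificity(pre, post):
    """Sp/Fo: facts unrelated to the edit remain unchanged."""
    q = "Where was Tomin born?"
    return pre[q] == post[q]
```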
6. Experiments and Results
- ROME: ROME was introduced by [16] as an editing method for LLMs. The authors first locate where the knowledge is stored in the LLM's feed-forward network (FFN) and then model the FFN as a key-value store: the first linear layer encodes the subject as a key, and the second linear layer holds the value, or object, for that subject. To update the value for a subject, the authors proposed a rank-one update to the weights of the second linear layer, replacing the old value with a new one.
- MEMIT: ROME is capable of editing a single fact, whereas MEMIT was proposed to edit multiple facts at the same time. MEMIT also falls under the parametric update category [13].
- Fine-tuning: Fine-tuning, proposed by [56], involves tailoring a pre-trained LLM to a particular domain. We similarly employed FT to adapt to emerging entities. Our experiments selectively update the parameters of only the last transformer layer in GPT-J, GPT2-XL, and Llama-2-7B.
- IKE: IKE stands for in-context knowledge editing, which incorporates a fresh piece of information into an LLM using K demonstrations. Each demonstration comprises a novel fact f, a probing prompt x, and its corresponding prediction y. We treat the head and tail entities as the (x, y) pair and prepare the data in a format suitable for in-context learning.
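The rank-one mechanic behind ROME can be sketched in a few lines of NumPy. This is a simplified illustration of the idea, not the published derivation: real ROME scales the key by an estimated key covariance and solves for an optimal value vector, whereas here we apply the bare rank-one correction that forces W'k = v*.

```python
import numpy as np

def rank_one_edit(W, k, v_new):
    """Return W' = W + (v_new - W k) k^T / (k^T k): a rank-one update that
    makes W' map key k exactly to v_new, while directions orthogonal to k
    are left untouched. (Simplified relative to full ROME.)"""
    residual = v_new - W @ k
    return W + np.outer(residual, k) / (k @ k)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))   # stand-in for the second FFN linear layer
k = rng.normal(size=16)        # key vector encoding the subject
v_new = rng.normal(size=8)     # desired new value (object) vector

W_edited = rank_one_edit(W, k, v_new)
```

Directions orthogonal to k are unchanged because the added term `outer(residual, k)` annihilates any vector u with k·u = 0, which is why such edits can be made without disturbing most other stored associations.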
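The in-context format used by IKE can be sketched as a simple prompt builder: K demonstrations (fact, probe, prediction) are concatenated, followed by the new fact and the query. The template wording and the helper name below are illustrative assumptions; IKE's published templates differ in detail.

```python
def build_ike_prompt(demonstrations, new_fact, query):
    """Assemble an in-context editing prompt from K demonstrations.
    Each demonstration is a (fact, probe, prediction) tuple; the head and
    tail entities play the roles of the (x, y) pair."""
    lines = []
    for fact, probe, prediction in demonstrations:
        lines.append(f"New fact: {fact}")
        lines.append(f"Q: {probe} A: {prediction}")
    # The fact being edited, followed by the unanswered query.
    lines.append(f"New fact: {new_fact}")
    lines.append(f"Q: {query} A:")
    return "\n".join(lines)

demos = [
    ("The president of Finland is Sauli Niinistö",
     "Who is the president of Finland?", "Sauli Niinistö"),
]
prompt = build_ike_prompt(
    demos,
    "The Brooklyn Bridge is located in New York City",
    "Where is the Brooklyn Bridge located?",
)
```

The assembled string would then be fed to the frozen LLM, whose completion after the final "A:" is taken as the post-edit prediction.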
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
References
- Li, J.; Tang, T.; Zhao, W.X.; Nie, J.Y.; Wen, J.R. Pretrained language models for text generation: A survey. arXiv 2022, arXiv:2201.05273.
- Dou, Z.Y.; Peng, N. Zero-shot commonsense question answering with cloze translation and consistency optimization. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–1 March 2022; Volume 36, pp. 10572–10580.
- Guu, K.; Lee, K.; Tung, Z.; Pasupat, P.; Chang, M. Retrieval augmented language model pre-training. In Proceedings of the International Conference on Machine Learning, PMLR, Vienna, Austria, 12–18 July 2020; pp. 3929–3938.
- Jin, X.; Zhang, D.; Zhu, H.; Xiao, W.; Li, S.W.; Wei, X.; Arnold, A.; Ren, X. Lifelong pretraining: Continually adapting language models to emerging corpora. arXiv 2021, arXiv:2110.08534.
- Dhingra, B.; Cole, J.R.; Eisenschlos, J.M.; Gillick, D.; Eisenstein, J.; Cohen, W.W. Time-aware language models as temporal knowledge bases. Trans. Assoc. Comput. Linguist. 2022, 10, 257–273.
- Jang, J.; Ye, S.; Yang, S.; Shin, J.; Han, J.; Kim, G.; Choi, S.J.; Seo, M. Towards continual knowledge learning of language models. arXiv 2021, arXiv:2110.03215.
- Zhai, Y.; Tong, S.; Li, X.; Cai, M.; Qu, Q.; Lee, Y.J.; Ma, Y. Investigating the catastrophic forgetting in multimodal large language models. arXiv 2023, arXiv:2309.10313.
- Li, Z. The dark side of chatgpt: Legal and ethical challenges from stochastic parrots and hallucination. arXiv 2023, arXiv:2304.14347.
- Liu, Z.; Wang, J.; Dao, T.; Zhou, T.; Yuan, B.; Song, Z.; Shrivastava, A.; Zhang, C.; Tian, Y.; Re, C.; et al. Deja vu: Contextual sparsity for efficient llms at inference time. In Proceedings of the International Conference on Machine Learning, PMLR, Honolulu, HI, USA, 23–29 July 2023; pp. 22137–22176.
- De Cao, N.; Aziz, W.; Titov, I. Editing factual knowledge in language models. arXiv 2021, arXiv:2104.08164.
- Wang, P.; Zhang, N.; Xie, X.; Yao, Y.; Tian, B.; Wang, M.; Xi, Z.; Cheng, S.; Liu, K.; Zheng, G.; et al. EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models. arXiv 2023, arXiv:2308.07269.
- Zhong, Z.; Wu, Z.; Manning, C.D.; Potts, C.; Chen, D. MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions. arXiv 2023, arXiv:2305.14795.
- Meng, K.; Sharma, A.S.; Andonian, A.; Belinkov, Y.; Bau, D. Mass-editing memory in a transformer. arXiv 2022, arXiv:2210.07229.
- Mitchell, E.; Lin, C.; Bosselut, A.; Finn, C.; Manning, C.D. Fast model editing at scale. arXiv 2021, arXiv:2110.11309.
- Sinitsin, A.; Plokhotnyuk, V.; Pyrkin, D.; Popov, S.; Babenko, A. Editable neural networks. arXiv 2020, arXiv:2004.00345.
- Meng, K.; Bau, D.; Andonian, A.; Belinkov, Y. Locating and editing factual associations in GPT. Adv. Neural Inf. Process. Syst. 2022, 35, 17359–17372.
- Li, J.; Hui, B.; Qu, G.; Li, B.; Yang, J.; Li, B.; Wang, B.; Qin, B.; Cao, R.; Geng, R.; et al. Can llm already serve as a database interface? a big bench for large-scale database grounded text-to-sqls. arXiv 2023, arXiv:2305.03111.
- Zheng, C.; Li, L.; Dong, Q.; Fan, Y.; Wu, Z.; Xu, J.; Chang, B. Can We Edit Factual Knowledge by In-Context Learning? arXiv 2023, arXiv:2305.12740.
- Agrawal, G.; Kumarage, T.; Alghami, Z.; Liu, H. Can Knowledge Graphs Reduce Hallucinations in LLMs?: A Survey. arXiv 2023, arXiv:2311.07914.
- Zhang, Y.; Chen, Z.; Zhang, W.; Chen, H. Making Large Language Models Perform Better in Knowledge Graph Completion. arXiv 2023, arXiv:2310.06671.
- Ye, Q.; Liu, J.; Chong, D.; Zhou, P.; Hua, Y.; Liu, A. Qilin-med: Multi-stage knowledge injection advanced medical large language model. arXiv 2023, arXiv:2310.09089.
- Pan, S.; Luo, L.; Wang, Y.; Chen, C.; Wang, J.; Wu, X. Unifying Large Language Models and Knowledge Graphs: A Roadmap. arXiv 2023, arXiv:2306.08302.
- Liu, C.; Wu, B. Evaluating large language models on graphs: Performance insights and comparative analysis. arXiv 2023, arXiv:2308.11224.
- Cohen, R.; Biran, E.; Yoran, O.; Globerson, A.; Geva, M. Evaluating the ripple effects of knowledge editing in language models. arXiv 2023, arXiv:2307.12976.
- Geva, M.; Bastings, J.; Filippova, K.; Globerson, A. Dissecting recall of factual associations in auto-regressive language models. arXiv 2023, arXiv:2304.14767.
- Hase, P.; Bansal, M.; Kim, B.; Ghandeharioun, A. Does localization inform editing? surprising differences in causality-based localization vs. knowledge editing in language models. arXiv 2023, arXiv:2301.04213.
- Han, X.; Li, R.; Li, X.; Pan, J.Z. A divide and conquer framework for Knowledge Editing. Knowl. Based Syst. 2023, 279, 110826.
- Dai, D.; Dong, L.; Hao, Y.; Sui, Z.; Chang, B.; Wei, F. Knowledge neurons in pretrained transformers. arXiv 2021, arXiv:2104.08696.
- Dong, Q.; Dai, D.; Song, Y.; Xu, J.; Sui, Z.; Li, L. Calibrating factual knowledge in pretrained language models. arXiv 2022, arXiv:2210.03329.
- Mitchell, E.; Lin, C.; Bosselut, A.; Manning, C.D.; Finn, C. Memory-based model editing at scale. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 15817–15831.
- Hernandez, E.; Li, B.Z.; Andreas, J. Inspecting and editing knowledge representations in language models. arXiv 2023, arXiv:2304.00740.
- Li, B.Z.; Nye, M.; Andreas, J. Implicit representations of meaning in neural language models. arXiv 2021, arXiv:2106.00737.
- Levy, O.; Seo, M.; Choi, E.; Zettlemoyer, L. Zero-shot relation extraction via reading comprehension. arXiv 2017, arXiv:1706.04115.
- Onoe, Y.; Zhang, M.J.; Padmanabhan, S.; Durrett, G.; Choi, E. Can lms learn new entities from descriptions? Challenges in propagating injected knowledge. arXiv 2023, arXiv:2305.01651.
- Hoelscher-Obermaier, J.; Persson, J.; Kran, E.; Konstas, I.; Barez, F. Detecting Edit Failures In Large Language Models: An Improved Specificity Benchmark. arXiv 2023, arXiv:2305.17553.
- Gupta, A.; Mondal, D.; Sheshadri, A.K.; Zhao, W.; Li, X.L.; Wiegreffe, S.; Tandon, N. Editing Commonsense Knowledge in GPT. arXiv 2023, arXiv:2305.14956.
- Ju, Y.; Zhang, Z. KLoB: A Benchmark for Assessing Knowledge Locating Methods in Language Models. arXiv 2023, arXiv:2309.16535.
- Xu, Y.; Li, W.; Vaezipoor, P.; Sanner, S.; Khalil, E.B. LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-based Representations. arXiv 2023, arXiv:2305.18354.
- Chollet, F. On the measure of intelligence. arXiv 2019, arXiv:1911.01547.
- Wu, X.; Yao, W.; Chen, J.; Pan, X.; Wang, X.; Liu, N.; Yu, D. From Language Modeling to Instruction Following: Understanding the Behavior Shift in LLMs after Instruction Tuning. arXiv 2023, arXiv:2310.00492.
- Guo, J.; Li, J.; Li, D.; Tiong, A.M.H.; Li, B.; Tao, D.; Hoi, S. From Images to Textual Prompts: Zero-shot Visual Question Answering with Frozen Large Language Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 10867–10877.
- Ji, S.; Pan, S.; Cambria, E.; Marttinen, P.; Philip, S.Y. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 494–514.
- Zhang, Z.; Liu, X.; Zhang, Y.; Su, Q.; Sun, X.; He, B. Pretrain-KGE: Learning knowledge representation from pretrained language models. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, Virtual, 16–20 November 2020; pp. 259–266.
- Kumar, A.; Pandey, A.; Gadia, R.; Mishra, M. Building knowledge graph using pre-trained language model for learning entity-aware relationships. In Proceedings of the 2020 IEEE International Conference on Computing, Power and Communication Technologies (GUCON), Greater Noida, India, 2–4 October 2020; IEEE: New York, NY, USA, 2020; pp. 310–315.
- Chen, Z.; Xu, C.; Su, F.; Huang, Z.; Dou, Y. Incorporating Structured Sentences with Time-enhanced BERT for Fully-inductive Temporal Relation Prediction. arXiv 2023, arXiv:2304.04717.
- Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. Adv. Neural Inf. Process. Syst. 2013, 26.
- Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; Zhu, X. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; Volume 29.
- Wang, Z.; Zhang, J.; Feng, J.; Chen, Z. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, Quebec City, QC, Canada, 27–31 July 2014; Volume 28.
- Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
- Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903.
- Min, B.; Ross, H.; Sulem, E.; Veyseh, A.P.B.; Nguyen, T.H.; Sainz, O.; Agirre, E.; Heintz, I.; Roth, D. Recent advances in natural language processing via large pre-trained language models: A survey. ACM Comput. Surv. 2023, 56, 1–40.
- Abu-Rasheed, H.; Abdulsalam, M.H.; Weber, C.; Fathi, M. Supporting Student Decisions on Learning Recommendations: An LLM-Based Chatbot with Knowledge Graph Contextualization for Conversational Explainability and Mentoring. arXiv 2024, arXiv:2401.08517.
- Hu, Z.; Li, X.; Pan, X.; Wen, S.; Bao, J. A question answering system for assembly process of wind turbines based on multi-modal knowledge graph and large language model. J. Eng. Des. 2023, 1–25.
- Hu, Y.; Zou, F.; Han, J.; Sun, X.; Wang, Y. Llm-Tikg: Threat Intelligence Knowledge Graph Construction Utilizing Large Language Model; SSRN: Rochester, NY, USA, 2023.
- Zhu, C.; Rawat, A.S.; Zaheer, M.; Bhojanapalli, S.; Li, D.; Yu, F.; Kumar, S. Modifying memories in transformer models. arXiv 2020, arXiv:2012.00363.
- Gururangan, S.; Marasović, A.; Swayamdipta, S.; Lo, K.; Beltagy, I.; Downey, D.; Smith, N.A. Don’t stop pretraining: Adapt language models to domains and tasks. arXiv 2020, arXiv:2004.10964.
| Head Entity | Count |
|---|---|
| Instance | 1 |
| 1st-Hop | 10 |
| 2nd-Hop | 30 |
| 3rd-Hop | 90 |
| 4th-Hop | 270 |
| 5th-Hop | 810 |
| Total (Max) | 1210 |
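The hop counts above follow a simple branching pattern: 10 first-hop questions per instance, with each subsequent hop tripling the previous count, so the maximum total of 1210 is the sum over hops one through five. A quick check:

```python
# Reproduce the per-hop question counts: 10 first-hop questions,
# each later hop tripling the previous one (branching factor 3).
counts = [10 * 3 ** (hop - 1) for hop in range(1, 6)]
total = sum(counts)
```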
Input | CTR-KE | ENTITY INFERENCES | ECBD-Easy |
---|---|---|---|
Entity | Mount Everest | Mount Everest | Mount Everest |
Input Format (Recursive Triplet/Definition) | (Mount Everest, located, Mahalangur Himal) (Mahalangur Himal, section of, Himalayas) (Himalayas, mountain range in, Asia) … | Mount Everest is Earth’s highest mountain above sea level, located in the Mahalangur Himal sub-range of the Himalayas. | Mount Everest is Earth’s highest mountain above sea level, located in the Mahalangur Himal sub-range of the Himalayas. |
Masked Sentence | Mount Everest is a mountain range in <MASK> | Mount Everest is a mountain range in the continent of <MASK> | Mount Everest is located in the Mahalangur Himal sub-range of the <MASK> |
Target Value | Asia | Asia/{Europe, Australia, Africa, …} | Himalayas |
| Field | Content |
|---|---|
| Edit Set (ES) | (Finland, President, Tarja Halonen→Sauli Niinistö) (President of Finland, age, 80→74) |
| Question Set (QS) | |
| Answer Set (AS) | |
| Sequence of Facts (SF) | PR-F→(Mike, born in, Finland) (Finland, President, Tarja Halonen) (Tarja Halonen, age, 80) PO-F→(Mike, born in, Finland) (Finland, President, Sauli Niinistö) (Sauli Niinistö, age, 74) |
| Metric | Input | Fine-Tuning (Pr-Ed) | Fine-Tuning (Po-Ed) | ROME (Pr-Ed) | ROME (Po-Ed) | MEMIT (Pr-Ed) | MEMIT (Po-Ed) | IKE (Pr-Ed) | IKE (Po-Ed) |
|---|---|---|---|---|---|---|---|---|---|
| Gn | STR | 39.5 | 46.4 | 35.2 | 44.7 | 47.9 | 52.9 | 49.0 | 55.1 |
| Gn | CTR | 41.3 | 50.9 | 40.5 | 49.5 | 45.2 | 52.3 | 51.2 | 59.3 |
| HEA | STR | 73.9 | 89.3 | 71.2 | 86.1 | 81.5 | 90.1 | 86.4 | 91.7 |
| HEA | CTR | 76.8 | 89.9 | 74.1 | 88.4 | 84.0 | 92.3 | 87.0 | 92.7 |
| CI | STR | 38.2 | 46.7 | 35.9 | 45.4 | 39.1 | 47.2 | 47.2 | 50.2 |
| CI | CTR | 39.0 | 46.1 | 40.1 | 46.9 | 43.2 | 50.3 | 48.9 | 51.6 |
| Fo | STR | 60.2 | 67.8 | 59.0 | 67.2 | 61.4 | 68.4 | 62.1 | 69.5 |
| Fo | CTR | 64.5 | 71.2 | 62.4 | 70.1 | 65.2 | 72.5 | 66.4 | 74.8 |
| Sp | STR | 37.1 | 40.3 | 55.2 | 65.7 | 59.8 | 69.2 | 63.6 | 73.4 |
| Sp | CTR | 39.8 | 45.6 | 57.9 | 69.3 | 61.4 | 70.7 | 66.8 | 74.1 |
| Metric | Input | Fine-Tuning (Pr-Ed) | Fine-Tuning (Po-Ed) | ROME (Pr-Ed) | ROME (Po-Ed) | MEMIT (Pr-Ed) | MEMIT (Po-Ed) | IKE (Pr-Ed) | IKE (Po-Ed) |
|---|---|---|---|---|---|---|---|---|---|
| Gn | STR | 38.8 | 49.2 | 42.5 | 53.5 | 43.3 | 55.6 | 56.3 | 62.3 |
| Gn | CTR | 39.3 | 49.8 | 46.7 | 55.8 | 48.9 | 59.3 | 55.7 | 62.1 |
| HEA | STR | 69.8 | 78.4 | 73.2 | 79.0 | 75.1 | 85.6 | 76.2 | 87.1 |
| HEA | CTR | 71.2 | 79.1 | 76.0 | 84.5 | 76.4 | 87.2 | 79.3 | 88.4 |
| CI | STR | 44.3 | 52.6 | 43.7 | 48.9 | 49.2 | 58.6 | 52.1 | 58.9 |
| CI | CTR | 45.4 | 53.1 | 44.8 | 50.2 | 51.1 | 59.7 | 51.5 | 58.7 |
| Fo | STR | 54.8 | 65.7 | 59.9 | 70.2 | 63.9 | 75.4 | 73.2 | 80.1 |
| Fo | CTR | 55.2 | 66.9 | 63.2 | 74.5 | 66.8 | 78.5 | 73.9 | 81.2 |
| Sp | STR | 37.2 | 46.1 | 56.2 | 67.4 | 67.2 | 75.3 | 70.1 | 79.0 |
| Sp | CTR | 36.6 | 45.2 | 58.8 | 69.0 | 68.9 | 77.8 | 71.2 | 79.1 |
| Metric | Input | Fine-Tuning (Pr-Ed) | Fine-Tuning (Po-Ed) | ROME (Pr-Ed) | ROME (Po-Ed) | MEMIT (Pr-Ed) | MEMIT (Po-Ed) | IKE (Pr-Ed) | IKE (Po-Ed) |
|---|---|---|---|---|---|---|---|---|---|
| Gn | STR | 34.1 | 47.8 | 46.3 | 58.9 | 48.1 | 60.3 | 52.5 | 61.3 |
| Gn | CTR | 37.8 | 50.3 | 48.0 | 59.2 | 48.9 | 63.9 | 55.3 | 62.9 |
| HEA | STR | 53.2 | 68.3 | 54.5 | 69.2 | 63.6 | 78.1 | 76.8 | 81.4 |
| HEA | CTR | 56.7 | 71.8 | 59.2 | 73.5 | 66.1 | 79.2 | 77.2 | 83.8 |
| CI | STR | 47.9 | 55.8 | 53.6 | 60.3 | 61.6 | 68.9 | 61.1 | 68.3 |
| CI | CTR | 51.2 | 56.2 | 55.6 | 62.5 | 63.7 | 73.1 | 67.2 | 73.4 |
| Fo | STR | 48.2 | 58.7 | 58.9 | 67.8 | 63.6 | 76.2 | 65.2 | 77.9 |
| Fo | CTR | 53.3 | 60.7 | 62.4 | 70.3 | 67.9 | 82.6 | 69.3 | 81.5 |
| Sp | STR | 43.9 | 52.3 | 53.6 | 61.6 | 60.4 | 69.3 | 62.4 | 70.2 |
| Sp | CTR | 47.2 | 51.5 | 55.9 | 66.2 | 61.3 | 69.8 | 66.7 | 74.9 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shafqat, W.; Na, S.-H. Evaluating Complex Entity Knowledge Propagation for Knowledge Editing in LLMs. Appl. Sci. 2024, 14, 1508. https://doi.org/10.3390/app14041508