Editing the Mind of Giants: An In-Depth Exploration of Pitfalls of Knowledge Editing in Large Language Models

CH Hsueh, PKM Huang, TH Lin, CW Liao, HC Fang, CW Huang, YN Chen
arXiv preprint arXiv:2406.01436, 2024
Knowledge editing is an emerging technique for efficiently updating factual knowledge in Large Language Models (LLMs) with minimal alteration of parameters. However, recent studies have identified concerning side effects that emerge after editing, such as knowledge distortion and the deterioration of general abilities. This survey presents a comprehensive study of these side effects, providing a unified view of the challenges associated with knowledge editing in LLMs. We discuss related works and summarize potential research directions for overcoming these limitations. Our work highlights the limitations of current knowledge editing methods, emphasizing the need for a deeper understanding of the inner knowledge structures of LLMs and for improved editing methods. To foster future research, we have publicly released complementary materials, including a paper collection, at https://github.com/MiuLab/EditLLM-Survey