Rebuilding ROME : Resolving Model Collapse during Sequential Model Editing

Akshat Gupta, Sidharth Baskaran, Gopala Anumanchipalli


Abstract
Recent work using Rank-One Model Editing (ROME), a popular model editing method, has shown that there are certain facts that the algorithm is unable to edit without breaking the model. Such edits have previously been called disabling edits. These disabling edits cause immediate model collapse and limit the use of ROME for sequential editing. In this paper, we show that disabling edits are an artifact of irregularities in the implementation of ROME. With this paper, we provide a more stable implementation of ROME, which we call r-ROME, and show that model collapse is no longer observed when making large-scale sequential edits with r-ROME, while further improving generalization and locality of model editing compared to the original implementation of ROME. We also provide a detailed mathematical explanation of the cause of disabling edits.
Anthology ID:
2024.emnlp-main.1210
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
21738–21744
URL:
https://aclanthology.org/2024.emnlp-main.1210
Cite (ACL):
Akshat Gupta, Sidharth Baskaran, and Gopala Anumanchipalli. 2024. Rebuilding ROME : Resolving Model Collapse during Sequential Model Editing. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21738–21744, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Rebuilding ROME : Resolving Model Collapse during Sequential Model Editing (Gupta et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1210.pdf