
Mass-Editing Memory in a Transformer

Published: 01 Feb 2023, Last Modified: 14 Oct 2024
ICLR 2023 notable top 25%
Readers: Everyone
Keywords: language models, GPT, transformers, model editing, factual associations, memory
TL;DR: An algorithm that can make tens of thousands of edits to an autoregressive transformer's memory.
Abstract: Recent work has shown exciting promise in updating large language models with new memories, so as to replace obsolete information or add specialized knowledge. However, this line of work is predominantly limited to updating single associations. We develop MEMIT, a method for directly updating a language model with many memories, demonstrating experimentally that it can scale up to thousands of associations for GPT-J (6B) and GPT-NeoX (20B), exceeding prior work by an order of magnitude. Our code and data will be open-sourced upon publication.
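For illustration, a minimal sketch of what a batch of MEMIT-style edit requests might look like. The request schema and the `apply_memit_to_model` entry point are assumptions based on the abstract's description of batched factual edits and on common model-editing codebases; they are not details confirmed on this page.

```python
# Sketch only: the request schema and editing entry point below are
# illustrative assumptions, not an API confirmed by this submission page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

# Each request rewrites one (subject, relation, object) association;
# the method is designed to accept thousands of these in a single batch.
requests = [
    {
        "prompt": "{} plays the sport of",
        "subject": "LeBron James",
        "target_new": {"str": "football"},
    },
    {
        "prompt": "{} is located in the city of",
        "subject": "The Eiffel Tower",
        "target_new": {"str": "Rome"},
    },
    # ... thousands more edits ...
]

# Hypothetical batched edit call (assumption about the released code's API):
# edited_model, orig_weights = apply_memit_to_model(model, tok, requests, hparams)
```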
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (eg, speech processing, computer vision, NLP)
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/mass-editing-memory-in-a-transformer/code)