Paper 2024/1739

Provably Robust Watermarks for Open-Source Language Models

Miranda Christ, Columbia University
Sam Gunn, University of California, Berkeley
Tal Malkin, Columbia University
Mariana Raykova, Google (United States)
Abstract

The recent explosion of high-quality language models has necessitated new methods for identifying AI-generated text. Watermarking is a leading solution and could prove to be an essential tool in the age of generative AI. Existing approaches embed watermarks at inference and crucially rely on the large language model (LLM) specification and parameters being secret, which makes them inapplicable to the open-source setting. In this work, we introduce the first watermarking scheme for open-source LLMs. Our scheme works by modifying the parameters of the model, but the watermark can be detected from just the outputs of the model. Perhaps surprisingly, we prove that our watermarks are unremovable under certain assumptions about the adversary's knowledge. To demonstrate the behavior of our construction under concrete parameter instantiations, we present experimental results with OPT-6.7B and OPT-1.3B. We demonstrate robustness to both token substitution and perturbation of the model parameters. We find that the stronger of these attacks, the model-perturbation attack, requires deteriorating the quality score to 0 out of 100 in order to bring the detection rate down to 50%.
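The construction itself is specified in the paper; as a loose, hypothetical illustration of how a weight-level watermark can be detected from outputs alone (this is not the authors' scheme, and every name and parameter below is invented for the example), the toy Python sketch that follows adds a secretly keyed ±δ pattern to a model's output logits — the effect one could get by perturbing a final-layer bias — and detects the watermark by correlating sampled tokens against that pattern.

import numpy as np

VOCAB = 50_000      # toy vocabulary size (hypothetical)
DELTA = 0.8         # watermark strength: magnitude of the logit perturbation
SECRET_SEED = 42    # detection key

# Keyed +/-1 pattern over the vocabulary. In a weight-based scheme the
# perturbation would live in the model parameters; here we emulate its
# effect as an additive bias on the output logits.
g = np.random.default_rng(SECRET_SEED).choice([-1.0, 1.0], size=VOCAB)

def sample_tokens(logits, n, rng):
    """Sample n tokens i.i.d. from softmax(logits)."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(VOCAB, size=n, p=p)

def detection_score(tokens):
    """Normalized correlation between observed tokens and the keyed pattern.
    For text generated independently of g this is roughly N(0, 1);
    watermarked text shifts it far into the positive tail."""
    return g[tokens].sum() / np.sqrt(len(tokens))

rng = np.random.default_rng(0)
base_logits = rng.normal(size=VOCAB)                        # stand-in for a model's logits
wm_tokens    = sample_tokens(base_logits + DELTA * g, 500, rng)  # watermarked sampling
plain_tokens = sample_tokens(base_logits, 500, rng)              # unwatermarked sampling

print(f"watermarked score:   {detection_score(wm_tokens):.2f}")    # large positive
print(f"unwatermarked score: {detection_score(plain_tokens):.2f}")  # near 0

Because the detector only needs the tokens and the secret pattern, detection works from outputs alone even though the embedding touched (emulated) parameters; token substitutions degrade the score gracefully rather than erasing it, which gestures at the robustness properties the abstract describes.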

Metadata
Available format(s)
PDF
Category
Applications
Publication info
Preprint.
Keywords
watermarking, large language models, generative AI
Contact author(s)
mchrist @ cs columbia edu
gunn @ berkeley edu
tal @ cs columbia edu
marianar @ google com
History
2024-10-25: approved
2024-10-24: received
Short URL
https://ia.cr/2024/1739
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2024/1739,
      author = {Miranda Christ and Sam Gunn and Tal Malkin and Mariana Raykova},
      title = {Provably Robust Watermarks for Open-Source Language Models},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/1739},
      year = {2024},
      url = {https://eprint.iacr.org/2024/1739}
}