A Watermark for Large Language Models

About

Potential harms of large language models can be mitigated by watermarking model output, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens. We propose a watermarking framework for proprietary language models. The watermark can be embedded with negligible impact on text quality, and can be detected using an efficient open-source algorithm without access to the language model API or parameters. The watermark works by selecting a randomized set of "green" tokens before a word is generated, and then softly promoting use of green tokens during sampling. We propose a statistical test for detecting the watermark with interpretable p-values, and derive an information-theoretic framework for analyzing the sensitivity of the watermark. We test the watermark using a multi-billion parameter model from the Open Pretrained Transformer (OPT) family, and discuss robustness and security.

John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein • 2023
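The green-token scheme and its statistical test can be sketched in a few lines. This is a minimal illustration, not the paper's reference implementation: the vocabulary size, green-list fraction `GAMMA`, and the SHA-256 seeding of the previous token are illustrative assumptions, and the z-score follows the standard one-proportion form for counting green-token hits.

```python
import hashlib
import math
import random

VOCAB_SIZE = 50_000  # assumed vocabulary size (illustrative)
GAMMA = 0.5          # assumed fraction of the vocabulary marked "green"

def green_list(prev_token: int) -> set[int]:
    # Seed a PRNG from a hash of the previous token, then sample the
    # "green" subset of the vocabulary (a sketch of the paper's idea of
    # deriving the green set from the preceding context).
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))

def detection_z_score(tokens: list[int]) -> float:
    # Count how many tokens fall in the green list of their predecessor,
    # then standardize: z = (hits - gamma*T) / sqrt(T * gamma * (1 - gamma)).
    # Unwatermarked text should give z near 0; watermarked text a large z.
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    t = len(tokens) - 1
    return (hits - GAMMA * t) / math.sqrt(t * GAMMA * (1 - GAMMA))
```

Detection needs only the hashing scheme, not the model: any party holding the seeding key can recompute each green list and run the test, which is what makes the open-source detector in the abstract possible.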

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | C4 | Perplexity | 7 | 1422 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 13.87 | 770 |
| Mathematical Reasoning | GSM8K | -- | -- | 246 |
| Translation Attack | 17 supported languages LLaMA-3.2 1B | AUC | 87.7 | 68 |
| AI-generated text detection | Long-form QA 3K generations corpus | Detection Accuracy (1% FPR) | 100 | 42 |
| Question Answering | TruthfulQA | Truthful*Inf Score | 64.81 | 42 |
| Watermarking | C4 | TPR (FPR < 10^-4) | 100 | 40 |
| Watermarking | LFQA | TPR (FPR < 10^-4) | 100 | 40 |
| Watermark Detection | Aya-23 8B 10 unsupported languages | AUC | 0.768 | 40 |
| Watermarking Robustness | LVLM Evaluation Set (test) | Relative Drop | 0.00e+0 | 36 |

Showing 10 of 145 rows
