
A Watermark for Large Language Models

About

Potential harms of large language models can be mitigated by watermarking model output, i.e., embedding signals into generated text that are invisible to humans but algorithmically detectable from a short span of tokens. We propose a watermarking framework for proprietary language models. The watermark can be embedded with negligible impact on text quality, and can be detected using an efficient open-source algorithm without access to the language model API or parameters. The watermark works by selecting a randomized set of "green" tokens before a word is generated, and then softly promoting use of green tokens during sampling. We propose a statistical test for detecting the watermark with interpretable p-values, and derive an information-theoretic framework for analyzing the sensitivity of the watermark. We test the watermark using a multi-billion parameter model from the Open Pretrained Transformer (OPT) family, and discuss robustness and security.
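The mechanism the abstract describes — a pseudorandom "green" subset of the vocabulary chosen before each token, a soft bias toward green tokens at sampling time, and a statistical test at detection time — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the hash scheme, the green-list fraction `gamma`, and the bias strength `delta` are assumptions chosen here for clarity, and real systems operate on token IDs and model logits rather than strings.

```python
import hashlib
import math
import random

GAMMA = 0.25  # assumed fraction of the vocabulary marked "green" at each step
DELTA = 2.0   # assumed soft bias added to green-token logits during sampling

def green_list(prev_token: str, vocab: list[str], gamma: float = GAMMA) -> set[str]:
    """Pseudorandomly pick a 'green' vocabulary subset, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(gamma * len(vocab))))

def watermarked_step(prev_token: str, logits: dict[str, float],
                     vocab: list[str], delta: float = DELTA) -> str:
    """Softly promote green tokens: add delta to their logits, then softmax-sample."""
    green = green_list(prev_token, vocab)
    biased = {t: l + delta if t in green else l for t, l in logits.items()}
    m = max(biased.values())  # subtract max for numerical stability
    weights = [math.exp(biased[t] - m) for t in vocab]
    return random.choices(vocab, weights=weights, k=1)[0]

def detect(tokens: list[str], vocab: list[str], gamma: float = GAMMA) -> float:
    """One-sided z-score against the null hypothesis 'text is not watermarked'.

    Under the null, each token lands in the green list with probability gamma,
    so the green-token count is Binomial(n, gamma); we standardize it.
    Detection needs only the hash function, not the model or its API.
    """
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab, gamma))
    n = len(tokens) - 1
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```

A large positive z-score (equivalently, a tiny p-value) indicates watermarked text: a human writer has no knowledge of the green lists and hits them at only the chance rate `gamma`, while the biased sampler over-selects them.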

John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein • 2023

Related benchmarks

Task                              | Dataset                            | Metric                      | Result  | Rank
----------------------------------|------------------------------------|-----------------------------|---------|-----
Language Modeling                 | C4                                 | Perplexity                  | 7       | 1182
Mathematical Reasoning            | GSM8K (test)                       | Accuracy                    | 13.87   | 751
Mathematical Reasoning            | GSM8K                              | --                          | --      | 177
AI-generated text detection       | Long-form QA 3K generations corpus | Detection Accuracy (1% FPR) | 100     | 42
Question Answering                | TruthfulQA                         | Truthful*Inf Score          | 64.81   | 42
Watermarking Robustness           | LVLM Evaluation Set (test)         | Relative Drop               | 0.00e+0 | 36
Large Language Model Watermarking | Mistral-7B-Instruct (test)         | Perplexity (PPL)            | 3.37    | 34
Watermark Detection               | GSM8K                              | True Detection Rate (TD)    | 88      | 30
Watermark Detectability           | C4 RealNewsLike (Del-0.2) (test)   | AUC                         | 98.6    | 28
Watermarking                      | eli5-category (test)               | PPL                         | 1.649   | 28

Showing 10 of 88 rows.
