
Watermarking Language Models with Error Correcting Codes

About

Recent progress in large language models enables the creation of realistic machine-generated content. Watermarking is a promising approach to distinguish machine-generated text from human text, embedding statistical signals in the output that are ideally undetectable to humans. We propose a watermarking framework that encodes such signals through an error correcting code. Our method, termed robust binary code (RBC) watermark, introduces no noticeable degradation in quality. We evaluate our watermark on base and instruction fine-tuned models and find that our watermark is robust to edits, deletions, and translations. We provide an information-theoretic perspective on watermarking, a powerful statistical test for detection and for generating $p$-values, and theoretical guarantees. Our empirical findings suggest our watermark is fast, powerful, and robust, comparing favorably to the state-of-the-art.
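The detection side of such a watermark typically reduces to a hypothesis test: under a shared secret key, watermarked generations carry a statistical bias that unwatermarked text lacks. The sketch below is purely illustrative and is not the paper's RBC scheme — it shows the general pattern of a keyed-bit watermark test, where each token is hashed to a pseudorandom bit and a one-sided binomial test converts the observed bias into a p-value. The functions `bit_from_token` and `detection_p_value` are hypothetical names introduced here.

```python
import hashlib
import math

def bit_from_token(token_id: int, key: bytes) -> int:
    """Keyed pseudorandom bit assigned to a token (illustrative; not the RBC code)."""
    h = hashlib.sha256(key + token_id.to_bytes(4, "big")).digest()
    return h[0] & 1

def binomial_sf(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p), by exact summation."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def detection_p_value(token_ids, key: bytes) -> float:
    """One-sided p-value: does the text carry more watermark bits than chance?"""
    matches = sum(bit_from_token(t, key) for t in token_ids)
    return binomial_sf(matches, len(token_ids))
```

A watermarked generation biased toward matching bits yields a very small p-value, while human or unwatermarked text gives a roughly uniform p-value; longer texts (more tokens) give the test more power, which is consistent with the stronger results at 150 tokens than at 30 tokens in the benchmarks below.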

Patrick Chao, Yan Sun, Edgar Dobriban, Hamed Hassani • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Watermark Detection | Llama-3 8B Instruct 30 tokens (generations) | Mean Precision | 16 | 13 |
| Watermark Detection | Llama-3-8B-Instruct 150 tokens (generations) | Mean P | 1.2 | 13 |
| Watermark Detection Robustness | Llama-3-8B Swap 50%, 30 Tokens | Mean P | 13 | 6 |
| Watermark Detection | Llama3-8B generated text max 30 tokens | Detection Time (s) | 0.0156 | 6 |
| Watermark Detection Robustness | Llama-3-8B Swap 50%, 150 Tokens | Mean P | 0.015 | 6 |
| Watermark Detection | Llama-3-8B Swap perturbation, 30 tokens 1.0 (test) | Mean P | 0.0038 | 6 |
| Watermark Detection Robustness | Llama-3-8B Swap 30%, 30 Tokens | Mean P | 0.019 | 6 |
| Watermark Detection Robustness | Llama-3-8B Delete 50%, 30 Tokens | Mean P | 0.065 | 6 |
| Watermark Detection Robustness | Llama-3-8B Delete 50%, 150 Tokens | Mean P | 0.001 | 6 |
| Watermark Detection Robustness | Llama-3-8B GPT-4o Paraphrase, 30 Tokens | Mean P | 20 | 6 |

Showing 10 of 21 rows.
