
Three Bricks to Consolidate Watermarks for Large Language Models

About

The task of discerning between generated and natural texts is increasingly challenging. In this context, watermarking emerges as a promising technique for ascribing generated text to a specific model. It alters the sampling process so as to leave an invisible trace in the generated output, facilitating later detection. This research consolidates watermarks for large language models based on three theoretical and empirical considerations. First, we introduce new statistical tests that offer robust theoretical guarantees which remain valid even at low false-positive rates (below $10^{-6}$). Second, we compare the effectiveness of watermarks using classical benchmarks in the field of natural language processing, gaining insights into their real-world applicability. Third, we develop advanced detection schemes for scenarios where access to the LLM is available, as well as multi-bit watermarking.
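The statistical-test viewpoint can be illustrated with a minimal greenlist-style detector of the kind this line of work builds on: a fraction of the vocabulary is pseudo-randomly marked "green" based on the preceding token, generation is biased toward green tokens, and detection counts green hits and computes an exact binomial tail probability under the null hypothesis that the text is human-written. This is a hedged sketch, not the paper's exact scheme; the hash-based seeding, function names, and parameters below are illustrative assumptions.

```python
import hashlib
import math
import random

def greenlist(prev_token: int, vocab_size: int, gamma: float = 0.5) -> set:
    """Pseudo-randomly mark a gamma fraction of the vocabulary 'green',
    keyed on the previous token (illustrative hash-based seeding)."""
    seed = int.from_bytes(hashlib.sha256(str(prev_token).encode()).digest()[:8], "big")
    rng = random.Random(seed)
    vocab = list(range(vocab_size))
    rng.shuffle(vocab)
    return set(vocab[: int(gamma * vocab_size)])

def detection_p_value(tokens, vocab_size: int, gamma: float = 0.5) -> float:
    """Count tokens that fall in the green list keyed by their predecessor,
    then return the exact binomial tail P(X >= k) under the null hypothesis
    that human text hits a green token with probability gamma."""
    k = sum(cur in greenlist(prev, vocab_size, gamma)
            for prev, cur in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return sum(math.comb(n, i) * gamma**i * (1 - gamma)**(n - i)
               for i in range(k, n + 1))
```

Using an exact tail probability rather than a Gaussian approximation of the score is what keeps the reported p-value trustworthy at very low false-positive rates, which is the concern the paper's first contribution addresses.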

Pierre Fernandez, Antoine Chaffin, Karim Tit, Vivien Chappelier, Teddy Furon • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text Completion | Text Completion | Binary Accuracy: 98.25 | 27 |
| Text Summarization | Text Summarization | BA: 64.75 | 24 |
| Watermark Detection | WikiText, IMDB, AG News, Yelp Polarity mixed human-written corpus (test) | FPR (%): 1.5 | 5 |
| Watermark Detection | Alpaca instruction-following 52K | TPR: 20.17 | 5 |
| Watermark Detection | Watermarking Evaluation Set | True Positive Rate @ 10% FPR: 0.8 | 4 |
