
More Haste, Less Speed: Weaker Single-Layer Watermark Improves Distortion-Free Watermark Ensembles

About

Watermarking has emerged as a crucial technique for detecting and attributing content generated by large language models. While recent advancements have utilized watermark ensembles to enhance robustness, prevailing methods typically prioritize maximizing the strength of the watermark at every individual layer. In this work, we identify a critical limitation in this "stronger-is-better" approach: strong watermarks significantly reduce the entropy of the token distribution, which paradoxically weakens the effectiveness of watermarking in subsequent layers. We theoretically and empirically show that detectability is bounded by entropy and that watermark ensembles induce a monotonic decrease in both entropy and the expected green-list ratio across layers. To address this inherent trade-off, we propose a general framework that utilizes weaker single-layer watermarks to preserve the entropy required for effective multi-layer ensembling. Empirical evaluations demonstrate that this counter-intuitive strategy mitigates signal decay and consistently outperforms strong baselines in both detectability and robustness.
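The core mechanism the abstract describes can be illustrated with a toy sketch. The following is a minimal, hypothetical simulation (not the authors' implementation) of KGW-style green-list watermark layers: each layer boosts the probability of a pseudorandom "green" subset of the vocabulary, and stacking layers progressively sharpens the token distribution, so its Shannon entropy decreases. The function names, the uniform starting distribution, and the parameters `gamma` (green-list fraction) and `delta` (logit bias) are illustrative assumptions.

```python
import math
import random

def entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def apply_green_bias(p, vocab, seed, gamma=0.5, delta=2.0):
    """One hypothetical KGW-style watermark layer: add a bias `delta`
    to the logits of a seeded green list covering a `gamma` fraction
    of the vocabulary, then renormalize."""
    rng = random.Random(seed)
    green = set(rng.sample(vocab, int(gamma * len(vocab))))
    logits = [math.log(q) + (delta if t in green else 0.0)
              for t, q in zip(vocab, p)]
    weights = [math.exp(l) for l in logits]
    total = sum(weights)
    return [w / total for w in weights]

vocab = list(range(100))
p = [1.0 / len(vocab)] * len(vocab)  # start from a maximum-entropy (uniform) distribution
for layer in range(3):
    print(f"after {layer} layer(s): entropy = {entropy(p):.3f} bits")
    p = apply_green_bias(p, vocab, seed=layer)  # independent green list per layer
print(f"after 3 layer(s): entropy = {entropy(p):.3f} bits")
```

Running this shows the monotone entropy decay the paper identifies: each layer consumes entropy that later layers would need, which is why a weaker per-layer bias (smaller `delta`) can leave more signal for the ensemble as a whole.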

Ruibo Chen, Yihan Wu, Xuehao Cui, Jingqi Zhang, Heng Huang • 2026

Related benchmarks

Task                | Dataset                                    | Result                       | Rank
Watermark Detection | C4 150 tokens                              | TPR @ FPR 0.1%: 0.822        | 12
Watermark Detection | C4 250 tokens                              | TPR @ FPR 0.1%: 96.7         | 12
Watermark Detection | C4 Random Token Replacement attack (test)  | TPR @ FPR 0.1%: 87.5         | 6
Watermark Detection | C4 GPT Back Translation attack (test)      | TPR @ FPR 0.1%: 58.9         | 6
Watermark Detection | C4 GPT Rephrase attack (test)              | TPR @ FPR 0.1%: 16.4         | 6
Watermark Detection | C4 DIPPER attack (test)                    | TPR @ FPR 0.1%: 1.27e+3      | 6
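The metric reported throughout the table, TPR at a fixed FPR of 0.1%, can be computed as sketched below. This is a generic illustration, not the paper's evaluation code: the threshold is set at the 99.9th percentile of detection scores on unwatermarked (human) text, and the TPR is the fraction of watermarked samples scoring above it. The Gaussian score distributions are placeholder assumptions standing in for real detector z-scores.

```python
import random

def tpr_at_fpr(neg_scores, pos_scores, target_fpr=0.001):
    """TPR at a fixed FPR: pick the threshold as the (1 - FPR)
    quantile of the negative (human-text) scores, then report the
    fraction of positive (watermarked) scores above that threshold."""
    neg = sorted(neg_scores)
    k = min(len(neg) - 1, int((1 - target_fpr) * len(neg)))
    threshold = neg[k]
    return sum(s > threshold for s in pos_scores) / len(pos_scores)

rng = random.Random(0)
human = [rng.gauss(0.0, 1.0) for _ in range(10_000)]    # placeholder unwatermarked scores
marked = [rng.gauss(4.0, 1.0) for _ in range(10_000)]   # placeholder watermarked scores
print(f"TPR @ FPR 0.1%: {tpr_at_fpr(human, marked):.3f}")
```

Because the threshold is tied to the extreme tail of the negative distribution, this metric rewards watermarks whose detection scores separate cleanly from human text, which is exactly where entropy-preserving weaker layers are claimed to help.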
