Watermarking Low-entropy Generation for Large Language Models: An Unbiased and Low-risk Method
## About
Recent advancements in large language models (LLMs) have highlighted the risk of their misuse, raising the need for accurate detection of LLM-generated content. In response, a viable solution is to inject imperceptible identifiers, known as watermarks, into LLM outputs. Our research extends existing watermarking methods by proposing the novel Sampling One Then Accepting (STA-1) method. STA-1 is an unbiased watermark that preserves the original token distribution in expectation and carries a lower risk of producing unsatisfactory outputs in low-entropy scenarios than existing unbiased watermarks. For watermark detection, STA-1 requires neither the prompt nor white-box access to the LLM, provides statistical guarantees, is efficient in detection time, and remains robust against various watermarking attacks. Experimental results on low-entropy and high-entropy datasets demonstrate that STA-1 achieves all of the above properties simultaneously, making it a desirable solution for watermarking LLMs. Implementation code for this study is available online.
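The abstract names the method but does not spell out its mechanics. For orientation only, here is a minimal sketch of the green-list "sample, then accept" watermarking family that STA-1 extends, together with a z-score detector that needs only the generated tokens (no prompt, no model access). The green/red vocabulary split, the seeding scheme, gamma=0.5, and the single-resample rule are illustrative assumptions, not the paper's exact algorithm.

```python
import math
import random

def green_list(prev_token: int, vocab_size: int, gamma: float = 0.5) -> set:
    # Pseudorandom vocabulary split keyed on the previous token.
    # (Illustrative seeding scheme, not the paper's exact construction.)
    rng = random.Random(prev_token)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def sample_one_then_accept(probs, prev_token, rng):
    # Draw once from the model distribution; accept the draw if it is green,
    # otherwise draw once more and accept that token unconditionally.
    # Averaged over the random green list, the output distribution
    # matches `probs`, i.e. the watermark is unbiased in expectation.
    first = rng.choices(range(len(probs)), weights=probs)[0]
    if first in green_list(prev_token, len(probs)):
        return first
    return rng.choices(range(len(probs)), weights=probs)[0]

def z_score(tokens, vocab_size, gamma=0.5):
    # Detection: count green tokens and form a one-proportion z-statistic.
    # Under the null hypothesis (unwatermarked text) each token is green
    # with probability gamma, which yields a statistical guarantee on the
    # false-positive rate of the detector.
    n = len(tokens) - 1
    hits = sum(tok in green_list(prev, vocab_size, gamma)
               for prev, tok in zip(tokens, tokens[1:]))
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```

Because rejected first draws are resampled from the unmodified distribution, low-entropy steps (where one token dominates) are never forced onto an unlikely token, which is the intuition behind the lower risk in low-entropy scenarios.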
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Watermark Detection | C4 subset | -- | -- | 24 |
| Text Generation | C4 | TPR @ FPR=1% | 84.93 | 15 |
| Watermark Robustness | LLaMA-2 generated sequences (1,000), GPT rephrasing attack | TPR @ FPR=5% | 24 | 7 |
| Watermark Detection | LLaMA-2 generated sequences (1,000), token replacement attack (epsilon=0.05) | TPR @ FPR=0.1% | 60.84 | 7 |
| Watermark Detection | LLaMA-2 generated sequences (1,000), token replacement attack (epsilon=0.1) | TPR @ FPR=0.1% | 47.15 | 7 |
| Watermark Detection | LLaMA-2 generated sequences (1,000), token replacement attack (epsilon=0.2) | TPR @ FPR=0.1% | 21.35 | 7 |
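The TPR @ FPR values reported above are typically computed by calibrating the detection threshold on human-written (negative) scores and then measuring recall on watermarked (positive) scores. A minimal sketch, assuming this standard quantile-based calibration (the helper name and procedure are illustrative, not taken from the paper):

```python
import numpy as np

def tpr_at_fpr(neg_scores, pos_scores, target_fpr):
    # Pick the detection threshold as the (1 - target_fpr) quantile of the
    # negative (human-text) scores, so at most target_fpr of negatives
    # exceed it; then report the fraction of positives above the threshold.
    thresh = np.quantile(np.asarray(neg_scores), 1.0 - target_fpr)
    return float((np.asarray(pos_scores) > thresh).mean())
```

For example, with well-separated score distributions the metric approaches 1.0; under a strong rephrasing attack the positive scores drift toward the negatives and the metric drops, as the table's attack rows illustrate.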