WorldCup Sampling for Multi-bit LLM Watermarking
About
As large language models (LLMs) generate increasingly human-like text, watermarking offers a promising solution for reliable attribution beyond mere detection. While multi-bit watermarking enables richer provenance encoding, existing methods largely extend zero-bit schemes through seed-driven steering, leading to indirect information flow, limited effective capacity, and suboptimal decoding. In this paper, we propose WorldCup, a multi-bit watermarking framework for LLMs that treats sampling as a natural communication channel and embeds message bits directly into token selection via a hierarchical competition mechanism guided by complementary signals. Moreover, WorldCup further adopts entropy-aware modulation to preserve generation quality and supports robust message recovery through confidence-aware decoding. Comprehensive experiments show that WorldCup achieves a strong balance across capacity, detectability, robustness, text quality, and decoding efficiency, consistently outperforming prior baselines and laying a solid foundation for future LLM watermarking studies.
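The paper's hierarchical competition mechanism is not spelled out here, but the core idea of treating sampling as a communication channel can be illustrated with a generic toy scheme (not WorldCup's actual method): a secret key pseudorandomly partitions the vocabulary into two buckets at each step, the sampler restricts itself to the bucket matching the current message bit, and the decoder recovers each bit by majority vote. All function names and parameters below are illustrative assumptions.

```python
import hashlib
import random

def bucket(token_id: int, prev_token: int, key: str) -> int:
    """Pseudorandomly assign a token to bit-bucket 0 or 1, keyed on the
    secret key and the previous token (a common keyed-partition trick)."""
    h = hashlib.sha256(f"{key}:{prev_token}:{token_id}".encode()).digest()
    return h[0] & 1

def embed(message_bits, n_tokens, vocab_size=1000, key="secret", seed=0):
    """Toy embedding: at each step, sample only from the vocabulary half
    whose bucket matches the current message bit (cycled over positions).
    A real system would sample from the model's distribution instead."""
    rng = random.Random(seed)
    tokens, prev = [], 0
    for i in range(n_tokens):
        bit = message_bits[i % len(message_bits)]
        candidates = [t for t in range(vocab_size)
                      if bucket(t, prev, key) == bit]
        tok = rng.choice(candidates)
        tokens.append(tok)
        prev = tok
    return tokens

def decode(tokens, n_bits, key="secret"):
    """Recover each message bit by majority vote over the bucket
    memberships of the tokens assigned to that bit position."""
    votes = [[0, 0] for _ in range(n_bits)]
    prev = 0
    for i, tok in enumerate(tokens):
        votes[i % n_bits][bucket(tok, prev, key)] += 1
        prev = tok
    return [int(v[1] > v[0]) for v in votes]

msg = [1, 0, 1, 1]
text = embed(msg, n_tokens=64)
print(decode(text, n_bits=4))  # prints [1, 0, 1, 1] in this noiseless toy
```

This sketch embeds bits directly through token selection, but it lacks the qualities the abstract claims for WorldCup: it ignores the model's distribution (hurting text quality), uses no entropy-aware modulation, and its majority-vote decoder has no confidence weighting.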
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-bit LLM Watermarking | C4 GEMMA2-9B-BASE Max 256 Tokens | AUC | 1 | 20 |
| Multi-bit LLM Watermarking | Gemma2-9B-Base Max 256 Tokens | AUC | 1 | 20 |
| Multi-bit LLM Watermarking | C4 LLaMA3-8B-BASE Max 128 Tokens | AUC | 1 | 20 |
| Multi-bit LLM Watermarking | C4 LLaMA3-8B-BASE Max 256 Tokens | AUC | 100 | 20 |
| Multi-bit LLM Watermarking | C4 GEMMA2-9B-BASE Max 128 Tokens | AUC | 100 | 20 |
| Multi-bit LLM Watermarking | LLaMA3-8B-Base Max 128 Tokens | AUC | 1 | 20 |
| Multi-bit LLM Watermarking | LLaMA3-8B-Base Max 256 Tokens | AUC | 1 | 20 |
| Multi-bit LLM Watermarking | Gemma2-9B-Base Max 128 Tokens | AUC | 0.998 | 20 |
| Long-form QA | Long-form QA Short Q, Long A (test) | GPT4 Score | 6.182 | 15 |
| Machine Translation | Machine Translation Short Q, Short A (test) | BLEU | 0.417 | 15 |