
Provably Robust Multi-bit Watermarking for AI-generated Text

About

Large Language Models (LLMs) have demonstrated remarkable capabilities in generating text that resembles human language. However, they can be misused to create deceptive content, such as fake news and phishing emails, which raises ethical concerns. Watermarking is a key technique to address these concerns: it embeds a message (e.g., a bit string) into text generated by an LLM. By embedding a user ID (represented as a bit string) into generated text, we can trace that text back to the user, a task known as content source tracing. The major limitation of existing watermarking techniques is that they achieve sub-optimal performance for content source tracing in real-world scenarios, because they cannot accurately or efficiently extract a long message from a generated text. We aim to address these limitations. In this work, we introduce a new watermarking method for LLM-generated text grounded in pseudo-random segment assignment. We also propose multiple techniques to further enhance the robustness of our watermarking algorithm. We conduct extensive experiments to evaluate our method. Our experimental results show that our method substantially outperforms existing baselines in both accuracy and robustness on benchmark datasets. For instance, when embedding a message of length 20 into a 200-token generated text, our method achieves a match rate of $97.6\%$, while the state-of-the-art work of Yoo et al. achieves only $49.2\%$. Additionally, we prove that under the same setting our watermark can tolerate edits within an average edit distance of 17 per paragraph.
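To make the multi-bit idea concrete, below is a minimal toy sketch of segment-based multi-bit watermarking. It is not the paper's actual algorithm: it assumes a keyed hash that pseudo-randomly assigns each token position to one message segment (bit), a hypothetical "green list" color derived from the preceding token, rejection sampling at embed time, and a majority vote per segment at extraction time. All function names (`segment_of`, `green_bit`, `embed`, `extract`) and parameters are illustrative.

```python
import hashlib
import random


def segment_of(pos: int, msg_len: int, key: int = 42) -> int:
    """Pseudo-randomly assign token position `pos` to one of msg_len segments."""
    h = hashlib.sha256(f"{key}:seg:{pos}".encode()).hexdigest()
    return int(h, 16) % msg_len


def green_bit(prev_token: int, token: int, key: int = 42) -> int:
    """Toy 'color' of a token given its predecessor: 1 iff it is 'green'."""
    h = hashlib.sha256(f"{key}:{prev_token}:{token}".encode()).hexdigest()
    return int(h, 16) & 1


def embed(message: list[int], vocab_size: int, length: int, key: int = 42) -> list[int]:
    """Generate a toy token sequence whose colors encode `message` (a bit list)."""
    rng = random.Random(0)
    tokens = [rng.randrange(vocab_size)]  # first token carries no bit
    while len(tokens) < length:
        seg = segment_of(len(tokens), len(message), key)
        # Rejection-sample a token whose color matches the target bit;
        # a real scheme would instead bias the LLM's next-token distribution.
        for _ in range(100):
            cand = rng.randrange(vocab_size)
            if green_bit(tokens[-1], cand, key) == message[seg]:
                tokens.append(cand)
                break
    return tokens


def extract(tokens: list[int], msg_len: int, key: int = 42) -> list[int]:
    """Recover each segment's bit by majority vote over observed token colors."""
    votes = [[0, 0] for _ in range(msg_len)]
    for i in range(1, len(tokens)):
        seg = segment_of(i, msg_len, key)
        votes[seg][green_bit(tokens[i - 1], tokens[i], key)] += 1
    return [0 if zero >= one else 1 for zero, one in votes]


message = [1, 0, 1, 1, 0]
tokens = embed(message, vocab_size=50_000, length=120)
recovered = extract(tokens, len(message))
```

The majority vote is what gives robustness: an editor must flip the color of more than half of a segment's tokens to corrupt that bit, which is why longer texts (more tokens per segment) tolerate a larger edit distance.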

Wenjie Qu, Wengrui Zheng, Tianyang Tao, Dong Yin, Yanze Jiang, Zhihua Tian, Wei Zou, Jinyuan Jia, Jiaheng Zhang • 2024

Related benchmarks

Task                        | Dataset                           | Result            | Rank
----------------------------|-----------------------------------|-------------------|-----
Multi-bit LLM Watermarking  | Gemma2-9B-Base, max 256 tokens    | AUC 0.999         | 20
Multi-bit LLM Watermarking  | C4, GEMMA2-9B-BASE, max 256 tokens| AUC 0.999         | 20
Multi-bit LLM Watermarking  | C4, GEMMA2-9B-BASE, max 128 tokens| AUC 0.979         | 20
Multi-bit LLM Watermarking  | Gemma2-9B-Base, max 128 tokens    | AUC 0.979         | 20
Multi-bit LLM Watermarking  | C4, LLaMA3-8B-BASE, max 128 tokens| AUC 0.993         | 20
Multi-bit LLM Watermarking  | C4, LLaMA3-8B-BASE, max 256 tokens| AUC 0.995         | 20
Multi-bit LLM Watermarking  | LLaMA3-8B-Base, max 128 tokens    | AUC 0.993         | 20
Multi-bit LLM Watermarking  | LLaMA3-8B-Base, max 256 tokens    | AUC 0.995         | 20
Multi-bit Watermarking      | LLaMA2-7B, 300 tokens (test)      | Perplexity 32.8184| 14
Multi-bit Watermarking      | LLM text, 200 tokens              | Perplexity 32.6466| 14

Showing 10 of 19 rows.
