
Watermarking Discrete Diffusion Language Models

About

Watermarking has emerged as a promising technique to track AI-generated content and differentiate it from authentic human creations. While prior work extensively studies watermarking for autoregressive large language models (LLMs) and image diffusion models, it remains comparatively underexplored for discrete diffusion language models (DDLMs), which are becoming popular due to their high inference throughput. In this paper, we introduce one of the first watermarking methods for DDLMs. Our approach applies a distribution-preserving Gumbel-max sampling trick at every diffusion step and seeds the randomness by sequence position to enable reliable detection. We empirically demonstrate reliable detectability on LLaDA, a state-of-the-art DDLM. We also analytically prove that the watermark is distortion-free, with a false detection probability that decays exponentially in the sequence length. A key practical advantage is that our method realizes desired watermarking properties with no expensive hyperparameter tuning, making it straightforward to deploy and scale across models and benchmarks.
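The mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: `KEY` is an assumed secret watermark key, and the detection score is a standard Gumbel-watermark statistic (sum of `-log(1 - u)` over the re-derived position-seeded uniforms). The Gumbel-max trick guarantees the sampled token is an exact draw from the model's softmax distribution, which is what makes the watermark distortion-free.

```python
import numpy as np

VOCAB = 1000  # hypothetical vocabulary size
KEY = 42      # hypothetical secret watermark key


def position_uniforms(position, vocab_size=VOCAB, key=KEY):
    # Seed the randomness by (key, sequence position) so the detector
    # can re-derive exactly the same uniforms without the model.
    rng = np.random.default_rng(hash((key, position)) % (2**32))
    return rng.random(vocab_size)


def watermarked_sample(logits, position):
    # Gumbel-max trick: argmax(logits + g) with g ~ Gumbel(0, 1) is an
    # exact sample from softmax(logits), so the output distribution of
    # the model is unchanged (distortion-free).
    u = position_uniforms(position)
    gumbel = -np.log(-np.log(u))
    return int(np.argmax(logits + gumbel))


def detection_score(tokens):
    # Re-derive the per-position uniforms; at watermarked positions the
    # chosen token's uniform u tends to be close to 1, so -log(1 - u) is
    # large. Unwatermarked text gives i.i.d. Exp(1) terms with mean 1.
    score = sum(-np.log(1.0 - position_uniforms(pos)[tok])
                for pos, tok in enumerate(tokens))
    return score / len(tokens)
```

At each diffusion step, every position being denoised would draw its token via `watermarked_sample`; the detector only needs the key and the token sequence, and a score well above 1 per token signals the watermark.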

Avi Bagchi, Akhil Bhimaraju, Moulik Choraria, Daniel Alabi, Lav R. Varshney • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Text Generation Quality Evaluation | WaterBench (1000 prompts) | PPL 10.652 | 6 |
| Watermarking Detection | WaterBench (1000 prompts) | Completeness 96 | 5 |
