
Learning to Watermark in the Latent Space of Generative Models

About

Existing approaches for watermarking AI-generated images often rely on post-hoc methods applied in pixel space, introducing computational overhead and potential visual artifacts. In this work, we explore latent space watermarking and introduce DistSeal, a unified approach for latent watermarking that works across both diffusion and autoregressive models. Our approach works by training post-hoc watermarking models in the latent space of generative models. We demonstrate that these latent watermarkers can be effectively distilled either into the generative model itself or into the latent decoder, enabling in-model watermarking. The resulting latent watermarks achieve competitive robustness while offering similar imperceptibility and up to 20x speedup compared to pixel-space baselines. Our experiments further reveal that distilling latent watermarkers outperforms distilling pixel-space ones, providing a solution that is both more efficient and more robust.
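DistSeal's actual watermarker is a trained model distilled into the generator or latent decoder; as a toy illustration of why embedding in latent space is cheap, here is a fixed-key additive scheme operating directly on a latent vector. All names, dimensions, and the carrier construction are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, NUM_BITS = 256, 16

# Hypothetical secret key: orthonormal carrier directions, one per payload bit.
carriers = np.linalg.qr(rng.standard_normal((LATENT_DIM, NUM_BITS)))[0].T

def embed(latent, bits, strength=5.0):
    # Add +carrier for bit 1, -carrier for bit 0, directly in latent space;
    # no extra pass over pixels is needed, which is where the speedup comes from.
    signs = 2.0 * np.asarray(bits, dtype=float) - 1.0
    return latent + strength * (signs @ carriers)

def extract(latent):
    # Recover each bit from the sign of its correlation with that bit's carrier.
    return (carriers @ latent > 0).astype(int)

latent = rng.standard_normal(LATENT_DIM)   # stand-in for a generator's latent
bits = rng.integers(0, 2, NUM_BITS)
recovered = extract(embed(latent, bits))
print(int((recovered == bits).sum()), "of", NUM_BITS, "bits recovered")
```

Because the perturbation lives in the latent space, the decoder maps it to a low-amplitude, image-wide pattern rather than a localized pixel artifact; the learned watermarkers in the paper replace these fixed carriers with trained networks.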

Sylvestre-Alvise Rebuffi, Tuan Tran, Valeriu Lacatusu, Pierre Fernandez, Tomáš Souček, Nikola Jovanović, Tom Sander, Hady Elsahar, Alexandre Mourachko • 2026

Related benchmarks

Task                            Dataset                                Result        Rank
Image Watermarking              RAR Latent Decoder                     FID 3.09      21
Watermarking                    DCAE                                   FID 10.72     18
Latent Watermark Distillation   DCAE latent decoder (val)              FID 11.34     8
Image Watermarking              RAR-generated images ImageNet (val)    FID 3.44      3
Watermarking                    DCAE-generated images (val)            PSNR 31.06    3
