Watermarking Images in Self-Supervised Latent Spaces
About
We revisit watermarking techniques based on pre-trained deep networks, in the light of self-supervised approaches. We present a way to embed both marks and binary messages into their latent spaces, leveraging data augmentation at marking time. Our method can operate at any resolution and creates watermarks robust to a broad range of transformations (rotations, crops, JPEG compression, contrast changes, etc.). It significantly outperforms previous zero-bit methods, and its performance on multi-bit watermarking is on par with state-of-the-art encoder-decoder architectures trained end-to-end for watermarking. The code is available at github.com/facebookresearch/ssl_watermarking
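To illustrate the two detection regimes the abstract mentions, here is a hedged numpy sketch (not the authors' code, which is at the repository above). It assumes features come from a frozen self-supervised backbone; random vectors stand in for them here. Zero-bit detection tests whether the feature falls inside a hypercone around a secret carrier direction; multi-bit decoding reads one bit per carrier from the sign of the projection. The carrier shapes, threshold, and embedding strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2048  # feature dimension (e.g. a ResNet-50 backbone output)

def zero_bit_detect(feat, carrier, cos_thresh=0.7):
    """Detect the mark: is the feature inside the cone around the carrier?"""
    c = feat @ carrier / (np.linalg.norm(feat) * np.linalg.norm(carrier))
    return c > cos_thresh

def multi_bit_decode(feat, carriers):
    """Decode one bit per carrier direction from the sign of the projection."""
    return (carriers @ feat > 0).astype(int)

# --- Zero-bit demo: a marked feature lies inside the cone, a random one does not.
carrier0 = rng.standard_normal(d)          # secret carrier direction
marked = carrier0 + 0.3 * rng.standard_normal(d)   # feature pushed into the cone
unmarked = rng.standard_normal(d)
print(zero_bit_detect(marked, carrier0))   # True
print(zero_bit_detect(unmarked, carrier0)) # False (with high probability)

# --- Multi-bit demo: push a feature toward signed carriers (the real method
# does this by optimizing the image itself under perceptual constraints).
carriers = rng.standard_normal((8, d))     # one secret direction per bit
message = rng.integers(0, 2, size=8)
feat = rng.standard_normal(d)
feat += 5.0 * ((2 * message - 1) @ carriers) / np.sqrt(d)
print(multi_bit_decode(feat, carriers))    # recovers `message`
```

In the paper's setting the embedding step modifies the image, not the feature directly; the image is optimized so that its backbone feature moves toward the carriers while a perceptual loss keeps the change invisible, and data augmentation during this optimization is what buys robustness to the listed transformations.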
Pierre Fernandez, Alexandre Sablayrolles, Teddy Furon, Hervé Jégou, Matthijs Douze • 2021
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Image Watermarking | ImageNet | Bit Accuracy (Overall) | 99 | 98 |
| Watermark Extraction | COCO | Bit Accuracy | 99 | 98 |
| Image Watermarking | MS-COCO | PSNR | 41.81 | 21 |
| Watermark Generation | COCO | PSNR | 37.8068 | 21 |
| Image Watermarking | DiffDB | PSNR | 41.84 | 17 |
| Image Watermarking | DiffusionDB | PSNR | 31.01 | 17 |
| Image Watermarking | WikiArt | PSNR | 41.81 | 8 |
| Watermark Imperceptibility | DIV2K | PSNR | 36.3833 | 8 |
| Watermark Imperceptibility | Chameleon | PSNR | 36.1513 | 8 |
| Watermark Extraction | COCO, DIV2K, and Chameleon (averaged) | Bit Acc (GN, σ=6) | 62.02 | 8 |
*Showing 10 of 15 rows.*