HiDDeN: Hiding Data With Deep Networks
About
Recent work has shown that deep neural networks are highly sensitive to tiny perturbations of input images, giving rise to adversarial examples. Though this property is usually considered a weakness of learned models, we explore whether it can be beneficial. We find that neural networks can learn to use invisible perturbations to encode a rich amount of useful information. In fact, one can exploit this capability for the task of data hiding. We jointly train encoder and decoder networks, where given an input message and cover image, the encoder produces a visually indistinguishable encoded image, from which the decoder can recover the original message. We show that these encodings are competitive with existing data hiding algorithms, and further that they can be made robust to noise: our models learn to reconstruct hidden information in an encoded image despite the presence of Gaussian blurring, pixel-wise dropout, cropping, and JPEG compression. Even though JPEG is non-differentiable, we show that a robust model can be trained using differentiable approximations. Finally, we demonstrate that adversarial training improves the visual quality of encoded images.
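The encode-then-decode pipeline described above can be illustrated with a non-learned analogue: classical spread-spectrum embedding, where each message bit is spread over a pseudo-random carrier pattern added to the cover image at low amplitude, and recovered by correlating the residual against the same carriers. This is a minimal sketch of the idea of hiding bits in small perturbations, not the paper's learned encoder/decoder; all function names are illustrative, and unlike HiDDeN's decoder this one is non-blind (it uses the cover image at decode time).

```python
import numpy as np

def encode(cover, message_bits, carriers, alpha=2.0):
    # Embed each bit as +carrier (bit 1) or -carrier (bit 0), at low amplitude.
    perturbation = sum((2 * b - 1) * c for b, c in zip(message_bits, carriers))
    return cover + alpha * perturbation

def decode(encoded, cover, carriers):
    # Correlate the residual with each carrier; the sign recovers the bit.
    residual = encoded - cover
    return [int(np.sum(residual * c) > 0) for c in carriers]

rng = np.random.default_rng(0)
cover = rng.uniform(0, 255, size=(32, 32))
carriers = [rng.choice([-1.0, 1.0], size=(32, 32)) for _ in range(8)]
message = [1, 0, 1, 1, 0, 0, 1, 0]

encoded = encode(cover, message, carriers)
noisy = encoded + rng.normal(0, 1.0, size=encoded.shape)  # mild Gaussian noise
recovered = decode(noisy, cover, carriers)
bit_accuracy = np.mean([m == r for m, r in zip(message, recovered)])
```

Because each carrier spans all 1024 pixels, the per-bit correlation signal dominates both the cross-carrier interference and the added noise, so the message survives mild distortion. HiDDeN replaces these fixed carriers with learned perturbations and trains through noise layers (blur, dropout, crop, approximated JPEG) so that robustness is learned rather than hand-designed.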
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Image Watermarking | ImageNet | Bit Accuracy (Overall) | 99 | 98 |
| Watermark Extraction | COCO | Bit Accuracy | 99 | 98 |
| Face Forgery Detection | DFDCP | Frame-level AUC | 69.25 | 64 |
| Image Reconstruction | ImageNet | PSNR | 19.88 | 56 |
| Face Forgery Detection | DFDC | AUC | 63.8 | 52 |
| Face Forgery Detection | CDF v2 | Frame-level AUC | 69.55 | 42 |
| Face Forgery Detection | DFD | Frame-level AUC | 72.07 | 41 |
| Face Forgery Detection | CDF v1 | Frame-level AUC | 0.6896 | 40 |
| Face Forgery Detection | FaceShifter HQ (FSh) | Video-level AUC | 61.72 | 37 |
| Adversarial Attack Imperceptibility | ImageNet | PSNR | 25.09 | 36 |