Your ViT is Secretly a Hybrid Discriminative-Generative Diffusion Model
About
Denoising Diffusion Probabilistic Models (DDPM) and Vision Transformers (ViT) have demonstrated significant progress in generative and discriminative tasks, respectively, and thus far these models have largely been developed in their own domains. In this paper, we establish a direct connection between DDPM and ViT by integrating the ViT architecture into DDPM, and introduce a new generative model called Generative ViT (GenViT). The modeling flexibility of ViT further enables us to extend GenViT to hybrid discriminative-generative modeling, and we introduce a Hybrid ViT (HybViT). Our work is among the first to explore a single ViT for image generation and classification jointly. We conduct a series of experiments to analyze the performance of the proposed models and demonstrate their superiority over prior state-of-the-art methods in both generative and discriminative tasks. Our code and pre-trained models can be found at https://github.com/sndnyang/Diffusion_ViT .
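For context, GenViT trains a ViT as the noise-prediction network inside the standard DDPM framework. A minimal numpy sketch of the DDPM forward (noising) process that such a model is trained to invert is shown below; the linear β schedule follows the original DDPM setup, and all function names here are illustrative, not taken from the repository:

```python
import numpy as np

def make_beta_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Standard DDPM linear variance schedule beta_1..beta_T."""
    return np.linspace(beta_start, beta_end, T)

def q_sample(x0, t, alphas_cumprod, noise):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I).
    The denoising network (a ViT in GenViT) is trained to predict eps."""
    abar_t = alphas_cumprod[t]
    return np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * noise

betas = make_beta_schedule()
alphas_cumprod = np.cumprod(1.0 - betas)  # abar_t, monotonically decreasing

# Illustrative batch of 4 CIFAR-sized images.
x0 = np.random.randn(4, 3, 32, 32)
noise = np.random.randn(*x0.shape)
xt = q_sample(x0, 999, alphas_cumprod, noise)  # near-pure noise at t = T-1
```

At small t, `x_t` stays close to the clean image; at t near T, `abar_t` is close to zero and `x_t` is nearly Gaussian noise, which is what makes ancestral sampling from pure noise possible.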
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Unconditional Image Generation | CIFAR-10 | FID 20.2 | 171 |
| Image Generation | CIFAR-10 32x32 (test) | FID 20.2 | 154 |
| Image Synthesis | CIFAR-10 | FID 20.2 | 79 |
| Image Classification | CIFAR-10 512-image subset (test) | Clean Accuracy 95.9 | 26 |
| Image Classification | CIFAR-10 (test) | Clean Accuracy 95.9 | 19 |
| Visual Field Prediction | UWHVF | MAE 8.61 | 8 |