
MoVQ: Modulating Quantized Vectors for High-Fidelity Image Generation

About

Although two-stage Vector Quantized (VQ) generative models can synthesize high-fidelity, high-resolution images, their quantization operator encodes similar patches within an image into the same index, which produces repeated artifacts across similar adjacent regions when decoded with existing architectures. To address this issue, we propose to incorporate spatially conditional normalization to modulate the quantized vectors, inserting spatially variant information into the embedded index maps and encouraging the decoder to generate more photorealistic images. Moreover, we use multichannel quantization to increase the recombination capability of the discrete codes without increasing the cost of the model or the codebook. Additionally, to generate discrete tokens at the second stage, we adopt a Masked Generative Image Transformer (MaskGIT) to learn an underlying prior distribution in the compressed latent space, which is much faster than conventional autoregressive models. Experiments on two benchmark datasets demonstrate that our proposed modulated VQGAN greatly improves reconstructed image quality and provides high-fidelity image generation.
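The core idea above — two positions that share the same codebook index should still be able to decode differently — can be sketched as a spatially conditional normalization layer: the quantized feature map is normalized, then rescaled and shifted per spatial position using parameters derived from a spatial condition. The sketch below is a minimal numpy illustration, not the paper's implementation; the fixed random projections stand in for the learned convolutions that would produce gamma/beta in the real model.

```python
import numpy as np

def spatially_conditional_norm(z_q, cond, eps=1e-5):
    """Modulate a quantized feature map z_q (H, W, C) with per-position
    scale/shift derived from a spatial condition `cond` (H, W, C).

    Hypothetical sketch: z_q is first normalized per channel over the
    spatial dimensions (instance-norm style), then modulated by gamma/beta
    that vary spatially, so two positions that received the same codebook
    index can still produce different decoder inputs.
    """
    # per-channel normalization over spatial dims (instance-norm style)
    mean = z_q.mean(axis=(0, 1), keepdims=True)
    var = z_q.var(axis=(0, 1), keepdims=True)
    z_norm = (z_q - mean) / np.sqrt(var + eps)

    # In the real model, gamma/beta come from learned layers applied to
    # `cond`; here fixed random projections stand in for them.
    rng = np.random.default_rng(0)
    channels = z_q.shape[-1]
    w_gamma = rng.standard_normal((channels, channels)) * 0.1
    w_beta = rng.standard_normal((channels, channels)) * 0.1
    gamma = cond @ w_gamma   # (H, W, C): spatially varying scale
    beta = cond @ w_beta     # (H, W, C): spatially varying shift

    return (1.0 + gamma) * z_norm + beta

# Two positions hold the *same* quantized vector but different conditions:
z_q = np.zeros((2, 2, 4))
z_q[0, 0] = z_q[1, 1] = [1.0, 2.0, 3.0, 4.0]
cond = np.zeros((2, 2, 4))
cond[1, 1] = 1.0  # spatial condition differs at (1, 1)

out = spatially_conditional_norm(z_q, cond)
# out[0, 0] != out[1, 1] even though z_q[0, 0] == z_q[1, 1]
```

Because the modulation depends on position via `cond`, identical codebook entries no longer force identical decoder inputs, which is what breaks the repeated-artifact pattern described above.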

Chuanxia Zheng, Long Tung Vuong, Jianfei Cai, Dinh Phung · 2022

Related benchmarks

Task                               | Dataset                    | Metric | Result | Rank
Class-conditional Image Generation | ImageNet 256x256 (val)     | --     | --     | 293
Conditional Image Generation       | ImageNet-1K 256x256 (val)  | gFID   | 7.13   | 86
Image Reconstruction               | ImageNet1K (val)           | FID    | 1.12   | 83
Image Reconstruction               | FFHQ (val)                 | PSNR   | 26.72  | 66
Image Reconstruction               | ImageNet (val)             | rFID   | 1.12   | 54
Image Reconstruction               | ImageNet 50k 1k (val)      | rFID   | 1.12   | 25
Unconditional Image Generation     | FFHQ 256x256 (test)        | FID    | 8.52   | 25
Unconditional Image Synthesis      | FFHQ                       | FID    | 8.52   | 15
Image Reconstruction               | ImageNet (test)            | FID    | 1.12   | 10
Class-conditional Image Generation | ImageNet (test)            | FID    | 7.13   | 9

(showing 10 of 12 rows)

Other info

Code
