
VP-VAE: Rethinking Vector Quantization via Adaptive Vector Perturbation

About

Vector Quantized Variational Autoencoders (VQ-VAEs) are fundamental to modern generative modeling, yet they often suffer from training instability and "codebook collapse" due to the inherent coupling of representation learning and discrete codebook optimization. In this paper, we propose VP-VAE (Vector Perturbation VAE), a novel paradigm that decouples representation learning from discretization by eliminating the need for an explicit codebook during training. Our key insight is that, from the neural network's viewpoint, performing quantization primarily manifests as injecting a structured perturbation in latent space. Accordingly, VP-VAE replaces the non-differentiable quantizer with distribution-consistent and scale-adaptive latent perturbations generated via Metropolis–Hastings sampling. This design enables stable training without a codebook while making the model robust to inference-time quantization error. Moreover, under the assumption of approximately uniform latent variables, we derive FSP (Finite Scalar Perturbation), a lightweight variant of VP-VAE that provides a unified theoretical explanation and a practical improvement for FSQ-style fixed quantizers. Extensive experiments on image and audio benchmarks demonstrate that VP-VAE and FSP improve reconstruction fidelity and achieve substantially more balanced token usage, while avoiding the instability inherent to coupled codebook training.
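The core idea above — training with a sampled perturbation in place of the non-differentiable quantizer — can be illustrated with a minimal sketch. This is not the paper's implementation: the target density, step size, and scale heuristic below are all assumptions chosen for illustration; the paper's actual perturbation distribution is matched to the quantization error.

```python
import numpy as np

def mh_perturbation(z, log_density, n_steps=50, step_size=0.1, rng=None):
    """Draw a structured latent perturbation via Metropolis-Hastings.

    `log_density` scores candidate perturbations (here, a stand-in for a
    distribution matched to quantization error); the chain starts at zero
    and random-walks in latent space, accepting moves by the MH rule.
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = np.zeros_like(z)
    logp = log_density(delta)
    for _ in range(n_steps):
        proposal = delta + step_size * rng.standard_normal(z.shape)
        logp_new = log_density(proposal)
        # Accept with probability min(1, p(proposal) / p(delta)).
        if np.log(rng.uniform()) < logp_new - logp:
            delta, logp = proposal, logp_new
    return delta

# Scale-adaptive target (an illustrative assumption): a zero-mean
# Gaussian whose scale tracks the latent's standard deviation, so the
# perturbation magnitude adapts to the latent's range.
rng = np.random.default_rng(0)
z = rng.standard_normal(8)            # toy latent vector
scale = 0.5 * z.std()
log_density = lambda d: -0.5 * np.sum((d / scale) ** 2)

# During training, the perturbed latent replaces the quantized latent;
# at inference, an actual quantizer is applied instead.
z_train = z + mh_perturbation(z, log_density, rng=np.random.default_rng(1))
```

Under the approximately-uniform-latent assumption mentioned above, the same recipe degenerates to adding uniform noise over a quantization cell (roughly the FSP view of FSQ-style fixed quantizers), which needs no sampler at all.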

Linwei Zhai, Han Ding, Mingzhi Lin, Cui Zhao, Fei Wang, Ge Wang, Wang Zhi, Wei Xi • 2026

Related benchmarks

Task                  Dataset                               Result        Rank
Image Reconstruction  ImageNet                              PSNR 25.4315  43
Image Reconstruction  COCO (test)                           CVU 0.8852    24
Audio Reconstruction  Common Voice                          CVU 0.286     21
Audio Reconstruction  LibriSpeech (test-clean, test-other)  CVU 0.1372    21
