
VAR-3D: View-aware Auto-Regressive Model for Text-to-3D Generation via a 3D Tokenizer

About

Recent advances in auto-regressive transformers have achieved remarkable success in generative modeling. However, text-to-3D generation remains challenging, primarily due to bottlenecks in learning discrete 3D representations. Specifically, existing approaches often suffer from information loss during encoding, causing representational distortion before the quantization process. This effect is further amplified by vector quantization, ultimately degrading the geometric coherence of text-conditioned 3D shapes. Moreover, the conventional two-stage training paradigm induces an objective mismatch between reconstruction and text-conditioned auto-regressive generation. To address these issues, we propose View-aware Auto-Regressive 3D (VAR-3D), which integrates a view-aware 3D Vector Quantized-Variational AutoEncoder (VQ-VAE) to convert the complex geometric structure of 3D models into discrete tokens. Additionally, we introduce a rendering-supervised training strategy that couples discrete token prediction with visual reconstruction, encouraging the generative process to better preserve visual fidelity and structural consistency with the input text. Experiments demonstrate that VAR-3D significantly outperforms existing methods in both generation quality and text-3D alignment.
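The paper does not publish its tokenizer implementation here; as a minimal illustrative sketch, the quantization step that any VQ-VAE (including a view-aware 3D one) relies on is a nearest-neighbor lookup that maps continuous encoder latents to discrete codebook indices — the tokens an auto-regressive model is then trained to predict. The shapes and codebook below are toy assumptions, not the paper's configuration.

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Nearest-neighbor codebook lookup, the core of a VQ-VAE tokenizer.

    latents:  (N, D) continuous encoder outputs
    codebook: (K, D) learned code vectors
    Returns (token_ids, quantized): discrete indices for the AR model,
    and the corresponding code vectors passed to the decoder.
    """
    # Squared Euclidean distance from each latent to each code vector,
    # computed via broadcasting: (N, 1, D) - (1, K, D) -> (N, K).
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    token_ids = dists.argmin(axis=1)   # discrete 3D tokens
    quantized = codebook[token_ids]    # values used for reconstruction
    return token_ids, quantized

# Toy demo: a 4-entry codebook of 2-D codes (real models use far larger K, D).
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
latents = np.array([[0.9, 0.1], [0.1, 0.8]])
ids, q = vector_quantize(latents, codebook)
print(ids)  # each latent snaps to its nearest code: [1 2]
```

In a full pipeline, the gradient through `quantized` is handled with a straight-through estimator, and the resulting `token_ids` sequence is what the text-conditioned auto-regressive transformer learns to generate.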

Zongcheng Han, Dongyan Cao, Haoran Sun, Yu Hong • 2026

Related benchmarks

Task: Text-conditioned 3D Generation
Dataset: Objaverse (test)
Metric: PSNR
Result: 18.52
Rank: 5
