
Generative Refinement Networks for Visual Synthesis

About

While diffusion models dominate the field of visual generation, they are computationally inefficient, applying uniform computational effort regardless of sample complexity. In contrast, autoregressive (AR) models are inherently complexity-aware, as evidenced by their variable likelihoods, but are often hindered by lossy discrete tokenization and error accumulation. In this work, we introduce Generative Refinement Networks (GRN), a next-generation visual synthesis paradigm that addresses these issues. At its core, GRN tackles the discrete tokenization bottleneck with a theoretically near-lossless Hierarchical Binary Quantization (HBQ), achieving reconstruction quality comparable to continuous counterparts. Built upon HBQ's latent space, GRN fundamentally upgrades AR generation with a global refinement mechanism that progressively corrects and perfects the output, much as a human artist refines a painting. In addition, GRN integrates an entropy-guided sampling strategy, enabling complexity-aware, adaptive-step generation without compromising visual quality. On the ImageNet benchmark, GRN establishes new records in image reconstruction (0.56 rFID) and class-conditional image generation (1.81 gFID). We also scale GRN to the more challenging text-to-image and text-to-video settings, delivering superior performance at a comparable model scale. We release all models and code to foster further research on GRN.
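The abstract does not spell out how HBQ works, but hierarchical binary quantization can be illustrated by a standard multi-level residual scheme: each level quantizes the running residual to one sign bit per latent dimension with a geometrically shrinking step size, so reconstruction error decays exponentially in the number of levels. The sketch below is a hedged illustration of that general idea; the function name, level count, and scale schedule are assumptions, not the paper's actual method or API.

```python
import numpy as np

def hbq_encode(latent, num_levels=8):
    """Illustrative hierarchical binary quantization (a sketch, not GRN's HBQ).

    Each level emits one sign bit per dimension of the current residual and
    subtracts the corresponding step; halving the step size per level makes
    the reconstruction error shrink geometrically, which is why such codes
    can be near-lossless given enough levels.
    """
    residual = latent.astype(np.float64)
    scale = np.abs(residual).max() / 2.0  # assumed initial step size
    bits, recon = [], np.zeros_like(residual)
    for _ in range(num_levels):
        b = np.sign(residual)
        b[b == 0] = 1.0              # break exact ties toward +1
        step = b * scale
        recon += step                # accumulate the quantized estimate
        residual -= step             # carry the remaining error forward
        bits.append(b > 0)           # one bit per dimension per level
        scale /= 2.0                 # finer step at the next level
    return np.stack(bits), recon
```

Because the residual magnitude is at most twice the current step size at every level, the final per-dimension error is bounded by the initial magnitude divided by 2^(num_levels-1), which is the sense in which a binary hierarchy approaches lossless reconstruction.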

Jian Han, Jinlai Liu, Jiahuan Wang, Bingyue Peng, Zehuan Yuan• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-Image Generation | GenEval | Overall Score | 76 | 391 |
| Class-conditional Image Generation | ImageNet 256x256 (test) | FID | 1.81 | 208 |
| Text-to-Video Generation | VBench | Quality Score | 84.41 | 155 |
| Image Reconstruction | ImageNet 256x256 | rFID | 0.56 | 150 |
| Video Reconstruction | High-motion video 160 videos (val) | rFVD | 30.1 | 9 |

Other info

GitHub
