
VCT: Training Consistency Models with Variational Noise Coupling

About

Consistency Training (CT) has recently emerged as a strong alternative to diffusion models for image generation. However, non-distillation CT often suffers from high variance and instability, motivating ongoing research into its training dynamics. We propose Variational Consistency Training (VCT), a flexible and effective framework compatible with various forward kernels, including those in flow matching. Its key innovation is a learned noise-data coupling scheme inspired by Variational Autoencoders, where a data-dependent encoder models noise emission. This enables VCT to adaptively learn noise-to-data pairings, reducing training variance relative to the fixed, unsorted pairings in classical CT. Experiments on multiple image datasets demonstrate significant improvements: our method surpasses baselines, achieves state-of-the-art FID among non-distillation CT approaches on CIFAR-10, and matches SoTA performance on ImageNet 64x64 with only two sampling steps. Code is available at https://github.com/sony/vct.
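The core idea in the abstract, replacing the fixed Gaussian noise draw of classical CT with a learned, data-dependent coupling, can be sketched in a few lines. The following is a minimal NumPy toy, not the authors' implementation: the names (`encoder`, `f_theta`, `vct_loss`), the linear stand-in networks, the flow-matching interpolation kernel, and the KL weight `beta` are all illustrative assumptions; the real code is at https://github.com/sony/vct.

```python
# Hedged sketch of the VCT idea: a VAE-style encoder q(z|x) emits the noise
# paired with each data point, and a KL term keeps its marginal near N(0, I).
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy data dimension

def encoder(x, W_mu, W_logvar):
    """Data-dependent Gaussian q(z|x): the learned noise-data coupling."""
    return x @ W_mu, x @ W_logvar  # (mu, log-variance)

def kl_to_standard_normal(mu, logvar):
    """KL(q(z|x) || N(0, I)), per sample."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def f_theta(x_t, t, W):
    """Stand-in consistency model; a real one is a neural network."""
    return x_t - t * (x_t @ W)

def vct_loss(x, W_mu, W_logvar, W, t=0.5, dt=0.1, beta=1e-3):
    mu, logvar = encoder(x, W_mu, W_logvar)
    eps = rng.standard_normal(x.shape)
    z = mu + np.exp(0.5 * logvar) * eps        # reparameterized noise draw
    # Flow-matching-style forward kernel: interpolate data -> noise.
    x_t  = (1 - t) * x + t * z
    x_s  = (1 - (t - dt)) * x + (t - dt) * z
    # Consistency loss: model outputs at adjacent times should agree.
    cons = np.mean((f_theta(x_t, t, W) - f_theta(x_s, t - dt, W)) ** 2)
    return cons + beta * np.mean(kl_to_standard_normal(mu, logvar))

x = rng.standard_normal((8, D))
W_mu, W_logvar, W = (0.1 * rng.standard_normal((D, D)) for _ in range(3))
loss = vct_loss(x, W_mu, W_logvar, W)
print(float(loss))
```

In classical CT the pairing would instead be `z = rng.standard_normal(x.shape)`, independent of `x`; making `z` a function of `x` is what lets the coupling adapt and is the claimed source of the variance reduction.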

Gianluigi Silvestri, Luca Ambrogioni, Chieh-Hsin Lai, Yuhta Takida, Yuki Mitsufuji • 2025

Related benchmarks

Task | Dataset | Result | Rank
Class-conditional Image Generation | ImageNet 64x64 (test) | FID 3.07 | 86
Unconditional Image Generation | CIFAR-10 32x32 unconditional (test) | FID 2.02 | 33
