
Learning on the Manifold: Unlocking Standard Diffusion Transformers with Representation Encoders

About

Leveraging representation encoders for generative modeling offers a path to efficient, high-fidelity synthesis. However, standard diffusion transformers fail to converge when trained directly on these representations. While recent work attributes this to a capacity bottleneck and proposes computationally expensive width scaling of diffusion transformers, we demonstrate that the failure is fundamentally geometric. We identify Geometric Interference as the root cause: standard Euclidean flow matching forces probability paths through the low-density interior of the encoders' hyperspherical feature space rather than along the manifold surface. To resolve this, we propose Riemannian Flow Matching with Jacobi Regularization (RJF). By constraining the generative process to manifold geodesics and correcting for curvature-induced error propagation, RJF enables standard Diffusion Transformer architectures to converge without width scaling. Applied to the standard DiT-B architecture (131M parameters), RJF converges effectively and achieves an FID of 3.37 where prior methods fail to converge. Code: https://github.com/amandpkr/RJF
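To make the geometric argument concrete, below is a minimal PyTorch sketch, not the paper's RJF implementation, contrasting the straight Euclidean interpolant used by standard flow matching with a geodesic (slerp) interpolant on the unit hypersphere. The slerp helper and the 768-dimensional feature size are illustrative assumptions; the point is only that the Euclidean midpoint of two unit-norm features falls into the low-norm interior, while the geodesic path stays on the manifold surface.

import torch

def slerp(x0, x1, t):
    # Spherical linear interpolation between unit vectors x0 and x1.
    # Unlike the Euclidean path (1 - t) * x0 + t * x1, the result stays
    # on the unit hypersphere for every t in [0, 1].
    cos_theta = (x0 * x1).sum(dim=-1, keepdim=True).clamp(-1 + 1e-7, 1 - 1e-7)
    theta = torch.acos(cos_theta)
    return (torch.sin((1 - t) * theta) * x0 + torch.sin(t * theta) * x1) / torch.sin(theta)

# Random unit-norm "features" in R^768 (illustrative stand-ins for encoder outputs).
x0 = torch.nn.functional.normalize(torch.randn(4, 768), dim=-1)
x1 = torch.nn.functional.normalize(torch.randn(4, 768), dim=-1)
t = 0.5

euclid_mid = (1 - t) * x0 + t * x1
geo_mid = slerp(x0, x1, t)
# In high dimension two random unit vectors are nearly orthogonal, so the
# Euclidean midpoint has norm near sqrt(1/2): the low-density interior.
print(euclid_mid.norm(dim=-1))  # ~0.71: the straight path cuts through the interior
print(geo_mid.norm(dim=-1))     # ~1.00: the geodesic path follows the manifold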

Amandeep Kumar, Vishal M. Patel • 2026

Related benchmarks

Task | Dataset | Result | Rank
Class-conditional Image Generation | ImageNet 256x256 (train val) | FID 2.81 | 178
Image Generation | ImageNet-1K 256x256 (val) | -- | 85

Other info

GitHub: https://github.com/amandpkr/RJF
