
DiffusionBrowser: Interactive Diffusion Previews via Multi-Branch Decoders

About

Video diffusion models have revolutionized generative video synthesis, but they remain imprecise, slow, and opaque during generation, keeping users in the dark for prolonged periods. In this work, we propose DiffusionBrowser, a model-agnostic, lightweight decoder framework that lets users interactively generate previews at any point (timestep or transformer block) of the denoising process. Our model generates multi-modal preview representations, including RGB and scene intrinsics, at more than 4× real-time speed (less than 1 second for a 4-second video), and these previews convey appearance and motion consistent with the final video. With the trained decoder, we show that it is possible to interactively guide generation at intermediate noise steps via stochasticity reinjection and modal steering, unlocking a new control capability. Moreover, we systematically probe the model using the learned decoders, revealing how scene, object, and other details are composed and assembled during the otherwise black-box denoising process.
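The core idea, tapping an intermediate diffusion latent and decoding it with one lightweight head per modality, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the latent shape, the modality names, and the linear decoder heads are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 16  # channels of the intermediate diffusion latent (assumed)
H, W = 8, 8      # tiny spatial grid for illustration

# One lightweight head per preview modality (RGB plus scene intrinsics),
# here just random linear maps standing in for small learned decoders.
heads = {
    "rgb": rng.normal(size=(LATENT_DIM, 3)),
    "depth": rng.normal(size=(LATENT_DIM, 1)),
    "normal": rng.normal(size=(LATENT_DIM, 3)),
}

def preview(latent: np.ndarray) -> dict:
    """Decode an intermediate latent (H, W, C) into per-modality previews.

    Because the heads are cheap and independent of the base model, the same
    call works at any denoising timestep or transformer block.
    """
    return {name: latent @ w for name, w in heads.items()}

# A noisy latent captured mid-denoising (e.g. at some timestep t).
latent_t = rng.normal(size=(H, W, LATENT_DIM))
outs = preview(latent_t)
for name, img in outs.items():
    print(name, img.shape)
```

In practice the heads would be small trained networks and the latent would come from a hook on the base diffusion model, but the control flow, decode the same intermediate state into several aligned preview modalities, is the same.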

Susung Hong, Chongjian Ge, Zhifei Zhang, Jui-Hsien Wang • 2025

Related benchmarks

Task                          Dataset                        Metric                  Score   Rank
Depth Estimation              Synthetic video dataset (val)  PSNR                    16.95   4
Base Color Prediction         Synthetic video dataset (val)  PSNR                    16.38   3
RGB Reconstruction            Synthetic video dataset (val)  PSNR                    18.03   3
Roughness Prediction          Synthetic video dataset (val)  PSNR                    17.03   3
Surface Normal Estimation     Synthetic video dataset (val)  PSNR                    20.04   3
Metallicity Prediction        Synthetic video dataset (val)  PSNR                    16.42   3
Intrinsic Preview Evaluation  User Study                     Content Predictability  74.6    2
