
BADiff: Bandwidth Adaptive Diffusion Model

About

In this work, we propose a novel framework to enable diffusion models to adapt their generation quality based on real-time network bandwidth constraints. Traditional diffusion models produce high-fidelity images by performing a fixed number of denoising steps, regardless of downstream transmission limitations. However, in practical cloud-to-device scenarios, limited bandwidth often necessitates heavy compression, leading to loss of fine textures and wasted computation. To address this, we introduce a joint end-to-end training strategy where the diffusion model is conditioned on a target quality level derived from the available bandwidth. During training, the model learns to adaptively modulate the denoising process, enabling early-stop sampling that maintains perceptual quality appropriate to the target transmission condition. Our method requires minimal architectural changes and leverages a lightweight quality embedding to guide the denoising trajectory. Experimental results demonstrate that our approach significantly improves the visual fidelity of bandwidth-adapted generations compared to naive early-stopping, offering a promising solution for efficient image delivery in bandwidth-constrained environments. Code is available at: https://github.com/xzhang9308/BADiff.

Xi Zhang, Hanwei Zhu, Yan Zhong, Jiamang Wang, Weisi Lin • 2025
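The abstract describes conditioning the denoiser on a bandwidth-derived quality level through a lightweight embedding, then stopping the reverse diffusion process early at a point matched to that quality. Below is a minimal PyTorch sketch of that idea. The toy denoiser, the additive conditioning scheme, the quality-to-steps mapping, and the DDPM noise schedule are all illustrative assumptions, not the paper's actual architecture; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn as nn

class QualityConditionedDenoiser(nn.Module):
    """Toy noise-prediction network modulated by a bandwidth-derived
    quality embedding (an illustrative stand-in for the paper's model)."""

    def __init__(self, channels=64, num_quality_levels=4):
        super().__init__()
        # Lightweight quality embedding, as described in the abstract.
        self.quality_emb = nn.Embedding(num_quality_levels, channels)
        self.time_emb = nn.Sequential(
            nn.Linear(1, channels), nn.SiLU(), nn.Linear(channels, channels))
        self.in_conv = nn.Conv2d(3, channels, 3, padding=1)
        self.mid = nn.Sequential(nn.SiLU(), nn.Conv2d(channels, channels, 3, padding=1))
        self.out_conv = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x_t, t, quality):
        h = self.in_conv(x_t)
        # Merge timestep and quality conditioning, then modulate features.
        cond = self.time_emb(t.float().view(-1, 1) / 1000.0) + self.quality_emb(quality)
        h = h + cond.view(cond.shape[0], -1, 1, 1)
        return self.out_conv(self.mid(h))  # predicted noise

@torch.no_grad()
def early_stop_sample(model, quality, total_steps=1000, shape=(1, 3, 32, 32)):
    """DDPM-style ancestral sampling that halts after a quality-dependent
    number of steps and returns the clean-image estimate at the stop point."""
    # Hypothetical bandwidth-to-steps mapping: lower quality -> fewer steps.
    steps_for_quality = {0: 100, 1: 250, 2: 500, 3: total_steps}
    n_steps = steps_for_quality[int(quality)]
    stop_t = total_steps - n_steps  # timestep at which sampling halts

    betas = torch.linspace(1e-4, 0.02, total_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)
    q = torch.full((shape[0],), int(quality), dtype=torch.long)
    for t in reversed(range(stop_t, total_steps)):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = model(x, t_batch, q)
        if t == stop_t:
            # Early stop: jump to the model's clean-image estimate x0_hat.
            return (x - torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alpha_bars[t])
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

model = QualityConditionedDenoiser()
low_bandwidth_image = early_stop_sample(model, quality=0)   # coarse, fast
high_bandwidth_image = early_stop_sample(model, quality=3)  # full schedule
```

One design choice in this sketch: stopping at timestep stop_t returns the model's clean-image estimate x0_hat rather than the partially denoised latent, which is one common way to realize early-stop sampling; the paper's contribution is training the model so that these truncated trajectories retain perceptual quality, rather than naively truncating a standard model.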

Related benchmarks

Task | Dataset | Result | Rank
---- | ------- | ------ | ----
Image Generation | CIFAR-10 (test) | - | 483
Image Compression | CelebA-HQ (test) | FID 7.4 | 36
Image Compression | LSUN (test) | FID 5.8 | 36
Text-to-Image Generation | COCO 2017 (val) | FID 11 | 23
Image Generation | 1024x1024 | Latency 145.6 ms | 6
High-Resolution Image Generation | Images 512x512 | FID 6.85 | 3
