
Stochastic Forward-Backward Deconvolution: Training Diffusion Models with Finite Noisy Datasets

About

Recent diffusion-based generative models achieve remarkable results by training on massive datasets, yet this practice raises concerns about memorization and copyright infringement. A proposed remedy is to train exclusively on noisy versions of data with potential copyright issues, ensuring the model never observes the original content. However, through the lens of deconvolution theory, we show that although it is theoretically feasible to learn the data distribution from noisy samples, the practical challenge of collecting enough samples makes successful learning nearly unattainable. To overcome this limitation, we propose pretraining the model on a small fraction of clean data to guide the deconvolution process. Combined with our Stochastic Forward-Backward Deconvolution (SFBD) method, this attains an FID of 6.31 on CIFAR-10 with just 4% clean images (and 3.58 with 10%). We also provide theoretical guarantees that SFBD learns the true data distribution. These results underscore the value of limited clean pretraining, or of pretraining on similar datasets. Empirical studies further validate and enrich our findings.
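
Viewed as deconvolution: if clean images follow p_data and each observation is corrupted with Gaussian noise at level sigma, the noisy samples follow the convolution p_sigma = p_data * N(0, sigma^2 I), and recovering p_data from finitely many such samples is the deconvolution problem the abstract refers to. The abstract does not spell out the algorithm, but the method's name and the role of clean pretraining suggest an alternating scheme: denoise the noisy dataset with the current model (backward step), then fine-tune the model on the denoised samples (forward step), with clean-data pretraining supplying a sensible initial denoiser. The sketch below is a minimal illustration under exactly those assumptions; the toy network, the names Denoiser, denoising_loss, and sfbd_round, and all hyperparameters are hypothetical, not the paper's implementation.

    # Hypothetical SFBD-style loop: names, network, and hyperparameters are
    # illustrative assumptions, not the authors' implementation.
    import torch
    import torch.nn as nn

    class Denoiser(nn.Module):
        """Toy denoiser; a real setup would use a U-Net score network."""
        def __init__(self, dim=3 * 32 * 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim + 1, 256), nn.ReLU(), nn.Linear(256, dim)
            )

        def forward(self, x, sigma):
            # Condition on the noise level by concatenating sigma to the input.
            return self.net(torch.cat([x, sigma], dim=1))

    def denoising_loss(model, x0, sigma_max=1.0):
        # Standard denoising objective: predict x0 from x0 + sigma * eps.
        sigma = torch.rand(x0.shape[0], 1) * sigma_max
        noisy = x0 + sigma * torch.randn_like(x0)
        return ((model(noisy, sigma) - x0) ** 2).mean()

    def sfbd_round(model, noisy_data, opt, sigma_data=0.5, steps=100):
        # Backward step: denoise the noisy dataset with the current model.
        sigma = torch.full((noisy_data.shape[0], 1), sigma_data)
        with torch.no_grad():
            pseudo_clean = model(noisy_data, sigma)
        # Forward step: fine-tune the model on the denoised samples.
        for _ in range(steps):
            opt.zero_grad()
            denoising_loss(model, pseudo_clean).backward()
            opt.step()

    # Pretrain on the small clean fraction, then iterate SFBD rounds.
    clean = torch.randn(40, 3072)    # stand-in for the ~4% clean subset
    noisy = torch.randn(960, 3072)   # stand-in for the noisy-only data
    model = Denoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(50):
        opt.zero_grad()
        denoising_loss(model, clean).backward()
        opt.step()
    for _ in range(3):
        sfbd_round(model, noisy, opt)

In this reading, the small clean subset only initializes the denoiser; every subsequent round sees noisy data alone, which matches the abstract's claim that clean pretraining guides, rather than replaces, the deconvolution process.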

Haoye Lu, Qifan Wu, Yaoliang Yu • 2025

Related benchmarks

Task                         | Dataset          | Result    | Rank
Image Generation             | CIFAR-10 32x32   | FID 2.58  | 147
Image Generation             | CelebA-64        | FID 5.91  | 75
Image Distribution Recovery  | CIFAR-10 (test)  | FID 13.53 | 15
Denoising                    | CIFAR-10 32x32   | FID 13.53 | 13
Denoising                    | CelebA-HQ 64x64  | FID 6.49  | 9
