DMin: Scalable Training Data Influence Estimation for Diffusion Models

About

Identifying the training data samples that most influence a generated image is a critical task in understanding diffusion models (DMs), yet existing influence estimation methods are constrained to small-scale or LoRA-tuned models due to computational limitations. To address this challenge, we propose DMin (Diffusion Model influence), a scalable framework for estimating the influence of each training data sample on a given generated image. To the best of our knowledge, it is the first method capable of influence estimation for DMs with billions of parameters. Leveraging efficient gradient compression, DMin reduces storage requirements from hundreds of TBs to mere MBs or even KBs, and retrieves the top-k most influential training samples in under 1 second, all while maintaining performance. Our empirical results demonstrate DMin is both effective in identifying influential training samples and efficient in terms of computational and storage requirements.
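The abstract's pipeline — compress each training sample's gradient once, then score influence for a generated image by comparing its gradient against the stored sketches — can be illustrated in miniature. DMin's actual compression scheme is not specified here; the sketch below assumes a simple random-projection compressor and an inner-product influence score with top-k retrieval via `argpartition`. All dimensions and function names are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_projection(d, k):
    # Fixed random projection shared by all samples: compresses a
    # d-dimensional gradient to a k-dimensional sketch (k << d),
    # which is what makes the storage reduction possible.
    return rng.standard_normal((k, d)) / np.sqrt(k)

def compress(grad, P):
    # Only this small sketch is stored, not the full gradient.
    return P @ grad

def top_k_influence(test_sketch, train_sketches, k=5):
    # Influence proxy: inner product between compressed gradients
    # (random projections approximately preserve inner products).
    scores = train_sketches @ test_sketch
    idx = np.argpartition(-scores, k)[:k]          # unordered top-k
    return idx[np.argsort(-scores[idx])], scores   # ordered top-k

# Toy usage with made-up sizes (not from the paper):
d, k_dim, n_train = 10_000, 64, 1_000
P = make_projection(d, k_dim)
train_sketches = np.stack(
    [compress(rng.standard_normal(d), P) for _ in range(n_train)]
)
test_sketch = compress(rng.standard_normal(d), P)
top5, scores = top_k_influence(test_sketch, train_sketches, k=5)
```

Because retrieval reduces to one matrix-vector product over the stored sketches, querying the top-k influential samples stays fast even for large training sets, consistent with the sub-second retrieval the abstract reports.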

Huawei Lin, Yingjie Lao, Weijie Zhao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Influential Training Sample Identification | Flowers | Top-5 Identification Rate | 96.41 | 34 |
| Influential Training Sample Identification | Lego Sets (subset) | Top-5 Accuracy | 72.94 | 34 |
| Influential Training Sample Identification | Magic Cards | Top-5 Accuracy | 100 | 34 |
| Influence Estimation | MNIST | Detection Rate (Top 5) | 80.06 | 4 |
