Concept-TRAK: Understanding how diffusion models learn concepts through concept-level attribution

About

While diffusion models excel at image generation, their growing adoption raises critical concerns about copyright issues and model transparency. Existing attribution methods identify training examples influencing an entire image, but fall short in isolating contributions to specific elements, such as styles or objects, that are of primary concern to stakeholders. To address this gap, we introduce concept-level attribution through a novel method called Concept-TRAK, which extends influence functions with a key innovation: specialized training and utility loss functions designed to isolate concept-specific influences rather than overall reconstruction quality. We evaluate Concept-TRAK on novel concept attribution benchmarks using Synthetic and CelebA-HQ datasets, as well as the established AbC benchmark, showing substantial improvements over prior methods in concept-level attribution scenarios. We further demonstrate its versatility on real-world text-to-image generation with compositional and multi-concept prompts.
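As a rough intuition for the approach, TRAK-style attribution scores each training example by comparing its gradient features against a gradient computed from a query loss; Concept-TRAK's key change is that the query loss targets a specific concept rather than overall reconstruction. Below is a minimal, hypothetical sketch of the generic TRAK-style scoring step, assuming per-example gradient features have already been projected to a low dimension; the function name, feature dimension, and regularizer are illustrative and not from the paper.

```python
import numpy as np

def attribution_scores(train_grads, query_grad, lam=1e-3):
    """TRAK-style influence scores.

    train_grads: (n, d) projected gradient features, one row per training example.
    query_grad:  (d,) gradient of the query loss -- in Concept-TRAK's setting,
                 a concept-specific loss instead of full reconstruction loss.
    lam:         ridge regularizer for the kernel inverse (illustrative value).
    Returns (n,) scores; higher means more influence on the queried concept.
    """
    d = train_grads.shape[1]
    # Kernel approximation G^T G + lam*I, inverted against the query gradient.
    K = train_grads.T @ train_grads + lam * np.eye(d)
    preconditioned = np.linalg.solve(K, query_grad)
    return train_grads @ preconditioned

# Toy usage with random features (shapes only; no real model involved).
rng = np.random.default_rng(0)
G = rng.normal(size=(100, 16))   # 100 training examples, 16-dim features
q = rng.normal(size=16)          # concept-specific query gradient
scores = attribution_scores(G, q)
top10 = np.argsort(scores)[::-1][:10]  # candidates for Precision@10
```

Metrics like Precision@10 in the table below then check how many of the top-ranked training examples actually contain the queried concept.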

Yonghyun Park, Chieh-Hsin Lai, Satoshi Hayakawa, Yuhta Takida, Naoki Murata, Wei-Hsiang Liao, Woosung Choi, Kin Wai Cheuk, Junghyun Koo, Yuki Mitsufuji · 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Data Attribution | CelebA-HQ In-distribution (test) | Precision@10 (Eyeglasses) | 97 | 7 |
| Data Attribution | CelebA-HQ Out-of-distribution (test) | Precision@10 (Eyeglasses) | 100 | 4 |
| Data Attribution | Synthetic (In-distribution) | Shape Score | 100 | 4 |
| Data Attribution | Synthetic (Out-of-distribution) | Shape Fidelity | 80 | 4 |
| Set-level Attribution | TOY | Shape Accuracy | 100 | 3 |
| Concept Attribution | AbC LAION-100k | Object Score | 89 | 3 |
