
LACON: Training Text-to-Image Model from Uncurated Data

About

The success of modern text-to-image generation is largely attributed to massive, high-quality datasets. Currently, these datasets are curated through a filter-first paradigm that aggressively discards low-quality raw data, based on the assumption that such data is detrimental to model performance. Is the discarded data truly useless, or does it hold untapped potential? In this work, we critically re-examine this question. We propose LACON (Labeling-and-Conditioning), a novel training framework that exploits the underlying uncurated data distribution. Instead of filtering, LACON re-purposes quality signals, such as aesthetic scores and watermark probabilities, as explicit, quantitative condition labels. The generative model is then trained to learn the full spectrum of data quality, from bad to good. By learning the explicit boundary between high- and low-quality content, LACON achieves superior generation quality compared to baselines trained only on filtered data under the same compute budget, demonstrating the significant value of uncurated data.
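The labeling-and-conditioning idea can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's implementation: the function names, bucket thresholds, and text-tag conditioning scheme are all hypothetical. It shows the core mechanic, i.e. quality signals are quantized into discrete labels that accompany each training sample, and at sampling time the high-quality end of the spectrum is requested explicitly.

```python
# Hypothetical sketch of LACON-style labeling-and-conditioning.
# Thresholds, names, and the text-tag conditioning format are assumptions,
# not details from the paper.

def quality_label(aesthetic_score, watermark_prob,
                  aesthetic_bins=(4.0, 5.0, 6.0), watermark_thresh=0.5):
    """Map raw quality signals to a discrete condition label.

    aesthetic_score: scalar quality score (higher = better)
    watermark_prob:  probability that the image contains a watermark
    """
    # Bucketize the aesthetic score: 0 = worst bucket, len(bins) = best.
    bucket = sum(aesthetic_score >= t for t in aesthetic_bins)
    return {"aesthetic_bucket": bucket,
            "has_watermark": watermark_prob >= watermark_thresh}

def condition_text(prompt, label):
    # At training time the label is derived from the data itself, so no
    # sample is discarded; at sampling time the desired label is supplied.
    tags = [f"aesthetic:{label['aesthetic_bucket']}",
            f"watermark:{int(label['has_watermark'])}"]
    return prompt + " | " + " ".join(tags)

# Training time: a low-quality, watermarked sample is labeled and kept.
train_label = quality_label(aesthetic_score=3.2, watermark_prob=0.9)
print(condition_text("a photo of a cat", train_label))

# Sampling time: request the best bucket with no watermark.
best = {"aesthetic_bucket": 3, "has_watermark": False}
print(condition_text("a photo of a cat", best))
```

Because the model sees the full quality spectrum with explicit labels, it can learn the boundary between good and bad content rather than only the surviving high-quality slice.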

Zhiyang Liang, Ziyu Wan, Hongyu Liu, Dong Chen, Qiu Shen, Hao Zhu, Dongdong Chen · 2026

Related benchmarks

| Task | Benchmark | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Generation | MS-COCO | FID | 10.9 | 131 |
| Text-to-Image Generation | GenEval | GenEval Score | 0.703 | 88 |
| Text-to-Image Generation | GenEval 1024x1024 | Overall Score (GenEval) | 0.715 | 23 |
| Text-to-Image Generation | GenEval 512x512 resolution | GenEval Score | 71.6 | 12 |
| Text-to-Image Generation | DPG 512x512 resolution | DPG Score | 78.1 | 12 |
| Text-to-Image Generation | FID 512x512 resolution | FID | 11.2 | 12 |
| Text-to-Image Generation | DPG | DPG Score | 80.1 | 6 |
| Text-to-Image Generation | DPG 1024x1024 resolution | DPG Score | 78.8 | 3 |
| Text-to-Image Generation | FID 1024x1024 resolution | FID | 11.3 | 3 |
