
Evaluating Demographic Misrepresentation in Image-to-Image Portrait Editing

About

Demographic bias in text-to-image (T2I) generation is well studied, yet demographic-conditioned failures in instruction-guided image-to-image (I2I) editing remain underexplored. We examine whether identical edit instructions yield systematically different outcomes across subject demographics in open-weight I2I editors. We formalize two failure modes: Soft Erasure, where edits are silently weakened or ignored in the output image, and Stereotype Replacement, where edits introduce unrequested, stereotype-consistent attributes. We introduce a controlled benchmark that probes demographic-conditioned behavior by generating and editing portraits conditioned on race, gender, and age using a diagnostic prompt set, and evaluate multiple editors with vision-language model (VLM) scoring and human evaluation. Our analysis shows that identity preservation failures are pervasive, demographically uneven, and shaped by implicit social priors, including occupation-driven gender inference. Finally, we demonstrate that a prompt-level identity constraint, without model updates, can substantially reduce demographic change for minority groups while leaving majority-group portraits largely unchanged, revealing asymmetric identity priors in current editors. Together, our findings establish identity preservation as a central and demographically uneven failure mode in I2I editing and motivate demographic-robust editing systems. Project page: https://seochan99.github.io/i2i-demographic-bias
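The prompt-level identity constraint mentioned above can be illustrated with a minimal sketch. Assume the constraint is expressed as a fixed clause appended to each edit instruction; the clause wording and function name here are hypothetical, not the paper's exact prompt.

```python
# Hypothetical sketch of a prompt-level identity constraint: each edit
# instruction is augmented with an explicit clause asking the editor to
# preserve the subject's demographic attributes. The clause text and the
# helper name are illustrative, not the authors' exact formulation.

IDENTITY_CLAUSE = (
    "Keep the person's perceived race, gender, and age exactly as in "
    "the input image; change only what the edit instruction requests."
)

def constrain_instruction(edit_instruction: str) -> str:
    """Append the identity-preservation clause to an edit instruction."""
    # Normalize trailing periods/spaces so the joined prompt reads cleanly.
    return f"{edit_instruction.rstrip('. ')}. {IDENTITY_CLAUSE}"

# Example: the constrained prompt is fed to the I2I editor unchanged,
# so no model weights are updated.
print(constrain_instruction("Make the person a doctor"))
```

Because the intervention is purely textual, it can be applied to any instruction-guided editor without retraining, which is what makes the asymmetry it reveals (large effect for minority-group portraits, little effect for majority-group ones) diagnostic of the editor's identity priors.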

Huichan Seo, Minki Hong, Sieun Choi, Jihie Kim, Jean Oh • 2026

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Image Debiasing | WinoBias | -- | 6
Image-to-Image Editing | WinoBias adapted for I2I editing (test) | -- | 3
Instruction-based Image Editing | FairFace | -- | 3
