Multimodal Large Language Models for Multi-Subject In-Context Image Generation

About

Recent advances in text-to-image (T2I) generation have enabled visually coherent image synthesis from descriptions, but generating images containing multiple given subjects remains challenging. As the number of reference identities increases, existing methods often suffer from missing subjects and semantic drift. To address this problem, we propose MUSIC, the first MLLM specifically designed for MUlti-Subject In-Context image generation. To overcome data scarcity, we introduce an automatic and scalable data generation pipeline that eliminates the need for manual annotation. Furthermore, we enhance the model's understanding of multi-subject semantic relationships through a vision chain-of-thought (CoT) mechanism, guiding step-by-step reasoning from subject images to semantics and generation. To mitigate identity entanglement and manage visual complexity, we develop a novel semantics-driven spatial layout planning method and demonstrate its test-time scalability. By incorporating complex subject images during training, we improve the model's capacity for chained reasoning. In addition, we curate MSIC, a new benchmark tailored for multi-subject in-context generation. Experimental results demonstrate that MUSIC significantly surpasses other methods in both multi- and single-subject scenarios.

Yucheng Zhou, Dubing Chen, Huan Zheng, Jianbing Shen • 2026

Related benchmarks

Task                                      | Dataset    | Result            | Rank
Subject-driven image generation           | DreamBench | DINO Score 76.8   | 100
Multi-subject in-context image generation | MSIC       | CLIP-T Score 0.33 | 8
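The CLIP-T score reported above measures text-image alignment as the cosine similarity between CLIP embeddings of the text prompt and the generated image. A minimal sketch of that similarity computation, using placeholder embedding vectors in place of real CLIP encoder outputs:

```python
import numpy as np

def clip_t_score(text_emb: np.ndarray, image_emb: np.ndarray) -> float:
    """Cosine similarity between a text embedding and an image embedding.

    In practice both vectors come from a CLIP text/image encoder; here
    they are placeholder arrays standing in for real model outputs.
    """
    t = text_emb / np.linalg.norm(text_emb)
    v = image_emb / np.linalg.norm(image_emb)
    return float(np.dot(t, v))

# Placeholder 512-d embeddings (the dimensionality of CLIP ViT-B/32).
rng = np.random.default_rng(0)
text_emb = rng.normal(size=512)
image_emb = text_emb + 0.5 * rng.normal(size=512)  # partially aligned "image"
print(round(clip_t_score(text_emb, image_emb), 3))
```

Benchmark scores average this similarity over all prompt-image pairs; higher values indicate that generated images better match their prompts.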
