
UNIC-Adapter: Unified Image-instruction Adapter with Multi-modal Transformer for Image Generation

About

Recently, text-to-image generation models have achieved remarkable advancements, particularly with diffusion models facilitating high-quality image synthesis from textual descriptions. However, these models often struggle with achieving precise control over pixel-level layouts, object appearances, and global styles when using text prompts alone. To mitigate this issue, previous works introduce conditional images as auxiliary inputs for image generation, enhancing control but typically necessitating specialized models tailored to different types of reference inputs. In this paper, we explore a new approach to unify controllable generation within a single framework. Specifically, we propose the unified image-instruction adapter (UNIC-Adapter) built on the Multi-Modal-Diffusion Transformer architecture, to enable flexible and controllable generation across diverse conditions without the need for multiple specialized models. Our UNIC-Adapter effectively extracts multi-modal instruction information by incorporating both conditional images and task instructions, injecting this information into the image generation process through a cross-attention mechanism enhanced by Rotary Position Embedding. Experimental results across a variety of tasks, including pixel-level spatial control, subject-driven image generation, and style-image-based image synthesis, demonstrate the effectiveness of our UNIC-Adapter in unified controllable image generation.
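The abstract describes injecting multi-modal instruction information (conditional image plus task instruction tokens) into the generation stream via cross-attention whose queries and keys carry Rotary Position Embedding. As a rough illustration of that mechanism only (not the authors' implementation; all names, shapes, and the single-head/numpy setup are simplifying assumptions), a minimal sketch:

```python
import numpy as np

def rope(x, positions, base=10000.0):
    # Rotary Position Embedding: rotate feature pairs by position-dependent
    # angles so attention scores depend on relative positions.
    # x: (seq, dim) with even dim; positions: (seq,) token positions.
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)          # (half,)
    angles = positions[:, None] * freqs[None, :]       # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

def condition_cross_attention(img_tokens, cond_tokens, Wq, Wk, Wv):
    # Image tokens (queries) attend to condition tokens (keys/values).
    # RoPE on both sides lets the adapter exploit spatial correspondence
    # between image positions and conditional-image positions.
    q = rope(img_tokens @ Wq, np.arange(len(img_tokens)))
    k = rope(cond_tokens @ Wk, np.arange(len(cond_tokens)))
    v = cond_tokens @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)           # row-wise softmax
    return img_tokens + attn @ v                       # residual injection

rng = np.random.default_rng(0)
d = 8
img = rng.standard_normal((4, d))    # 4 image tokens (hypothetical)
cond = rng.standard_normal((6, d))   # 6 condition tokens (hypothetical)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = condition_cross_attention(img, cond, Wq, Wk, Wv)
print(out.shape)  # (4, 8): same shape as the image-token stream
```

In the paper's setting this block would sit inside an MM-DiT layer with multi-head attention and learned projections; the sketch keeps one head and random weights purely to show the data flow.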

Lunhao Duan, Shanshan Zhao, Wenjun Yan, Yinglun Li, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, Mingming Gong, Gui-Song Xia • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Subject-driven image generation | DreamBench | DINO Score | 81.6 | 62 |
| Segmentation | ADE20K | mIoU | 42.89 | 52 |
| Pixel-level Spatial Control (Canny) | MultiGen-20M | F1 Score | 38.94 | 8 |
| Pixel-level Spatial Control (Depth) | MultiGen-20M | RMSE | 31.1 | 8 |
| Pixel-level Spatial Control (HED) | MultiGen-20M | SSIM | 0.8369 | 7 |
| Subject-driven generation | Multi-task Image-driven Generation Evaluation Set | CLIP-I | 0.645 | 6 |
| Style-driven Generation | Multi-task Image-driven Generation Evaluation Set | CSD | 48.4 | 6 |
| Image-driven Generation | 3SGen-Bench | Subject Fidelity Score | 6.64 | 6 |
| Structure-driven Generation | Multi-task Image-driven Generation Evaluation Set | Struc-Sim | 33.22 | 4 |

Other info

Code
