
InstructDiffusion: A Generalist Modeling Interface for Vision Tasks

About

We present InstructDiffusion, a unifying and generic framework for aligning computer vision tasks with human instructions. Unlike existing approaches that integrate prior knowledge and pre-define the output space (e.g., categories and coordinates) for each vision task, we cast diverse vision tasks into a human-intuitive image-manipulating process whose output space is a flexible and interactive pixel space. Concretely, the model is built upon the diffusion process and is trained to predict pixels according to user instructions, such as encircling the man's left shoulder in red or applying a blue mask to the left car. InstructDiffusion could handle a variety of vision tasks, including understanding tasks (such as segmentation and keypoint detection) and generative tasks (such as editing and enhancement). It even exhibits the ability to handle unseen tasks and outperforms prior methods on novel datasets. This represents a significant step towards a generalist modeling interface for vision tasks, advancing artificial general intelligence in the field of computer vision.
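To make the "predict pixels according to user instructions" idea concrete, here is a toy sketch of one instruction-conditioned diffusion training step. Everything below (the linear epsilon-predictor, the embedding sizes, the noise level) is an illustrative stand-in, not the paper's actual architecture or schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: target edited image in pixel space, an embedded text
# instruction, and a linear "denoiser" in place of a real U-Net.
H = W = 8
x0 = rng.standard_normal((H, W))           # target output image
instr = rng.standard_normal(16)            # instruction embedding
W_img = rng.standard_normal((H * W, H * W)) * 0.01
W_txt = rng.standard_normal((H * W, 16)) * 0.01

def predict_noise(x_t, instr):
    """Toy epsilon-predictor conditioned on the instruction embedding."""
    return (W_img @ x_t.ravel() + W_txt @ instr).reshape(H, W)

# One DDPM-style training step: corrupt x0 with noise at level t,
# predict that noise from (x_t, instruction), and take the MSE.
alpha_bar = 0.5                            # cumulative schedule value at step t
eps = rng.standard_normal((H, W))
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
loss = np.mean((predict_noise(x_t, instr) - eps) ** 2)
```

The point of the sketch is the interface, not the model: every task (segmentation, keypoints, editing, enhancement) is expressed as a target image `x0` plus a text instruction, so the same denoising objective covers all of them.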

Zigang Geng, Binxin Yang, Tiankai Hang, Chen Li, Shuyang Gu, Ting Zhang, Jianmin Bao, Zheng Zhang, Han Hu, Dong Chen, Baining Guo • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Referring Image Segmentation | RefCOCO+ (test-B) | -- | 200 |
| Referring Image Segmentation | RefCOCO (val) | -- | 197 |
| Referring Image Segmentation | RefCOCO (test-A) | -- | 178 |
| Referring Image Segmentation | RefCOCO (test-B) | -- | 119 |
| Referring Image Segmentation | RefCOCO+ (val) | -- | 117 |
| Referring Image Segmentation | RefCOCO+ (test-A) | -- | 89 |
| Referring Image Segmentation | RefCOCOg (val) | -- | 37 |
| Low-light enhancement | Low-light enhancement dataset | LPIPS 0.368 | 11 |
| Complex instruction-based image editing | CIE-Bench | CLIP-I 0.8815 | 10 |
| Instruction-based Image Editing | MagicBrush | L1 Loss 0.0905 | 9 |

(Showing 10 of 27 rows.)
