
InstructDiffusion: A Generalist Modeling Interface for Vision Tasks

About

We present InstructDiffusion, a unifying and generic framework for aligning computer vision tasks with human instructions. Unlike existing approaches that integrate prior knowledge and pre-define the output space (e.g., categories and coordinates) for each vision task, we cast diverse vision tasks into a human-intuitive image-manipulating process whose output space is a flexible and interactive pixel space. Concretely, the model is built upon the diffusion process and is trained to predict pixels according to user instructions, such as encircling the man's left shoulder in red or applying a blue mask to the left car. InstructDiffusion could handle a variety of vision tasks, including understanding tasks (such as segmentation and keypoint detection) and generative tasks (such as editing and enhancement). It even exhibits the ability to handle unseen tasks and outperforms prior methods on novel datasets. This represents a significant step towards a generalist modeling interface for vision tasks, advancing artificial general intelligence in the field of computer vision.
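To make the "flexible and interactive pixel space" concrete, the sketch below shows how an understanding task such as referring segmentation can be cast as an image-manipulation target: the instruction "apply a blue mask to the left car" becomes a pixel-level edit that a diffusion model is trained to predict. The blending weight `alpha`, the pure-blue colour, and the function name are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def mask_to_edit_target(image: np.ndarray, mask: np.ndarray,
                        color=(0, 0, 255), alpha=0.5) -> np.ndarray:
    """Blend a solid colour over the masked region, producing the 'edited'
    image that would serve as the training target for an instruction like
    'apply a blue mask to the left car'. alpha and color are assumed values."""
    target = image.astype(np.float32).copy()
    overlay = np.array(color, dtype=np.float32)
    # Only masked pixels are blended; the rest of the image is untouched.
    target[mask] = (1 - alpha) * target[mask] + alpha * overlay
    return target.astype(np.uint8)

# Toy example: a 4x4 grey image with a mask covering the left half.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
m = np.zeros((4, 4), dtype=bool)
m[:, :2] = True
out = mask_to_edit_target(img, m)
```

Representing the answer as pixels rather than category IDs or coordinates is what lets one diffusion model serve segmentation, keypoint detection, and editing through the same interface.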

Zigang Geng, Binxin Yang, Tiankai Hang, Chen Li, Shuyang Gu, Ting Zhang, Jianmin Bao, Zheng Zhang, Han Hu, Dong Chen, Baining Guo (2023)

Related benchmarks

Task                            Dataset                   Result        Rank
Semantic Segmentation           ADE20K (val)              mIoU 33.6     2888
Referring Image Segmentation    RefCOCO (val)             --            259
Referring Image Segmentation    RefCOCO+ (test-B)         --            252
Referring Image Segmentation    RefCOCO (test-A)          --            230
Referring Image Segmentation    RefCOCO+ (val)            --            179
Referring Image Segmentation    RefCOCO (test-B)          --            171
Referring Image Segmentation    RefCOCOg (val)            --            100
Referring Image Segmentation    RefCOCO+ (test-A)         --            89
Instruction-based Image Editing SmartEdit Understanding   PSNR 16.486   14
Instruction-based Image Editing SmartEdit Reasoning       PSNR 18.463   14

(Showing 10 of 31 rows)
