
PixelLM: Pixel Reasoning with Large Multimodal Model

About

While large multimodal models (LMMs) have achieved remarkable progress, generating pixel-level masks for image reasoning tasks involving multiple open-world targets remains a challenge. To bridge this gap, we introduce PixelLM, an effective and efficient LMM for pixel-level reasoning and understanding. Central to PixelLM is a novel, lightweight pixel decoder and a comprehensive segmentation codebook. The decoder efficiently produces masks from the hidden embeddings of the codebook tokens, which encode detailed target-relevant information. With this design, PixelLM harmonizes with the structure of popular LMMs and avoids the need for additional costly segmentation models. Furthermore, we propose a target refinement loss to enhance the model's ability to differentiate between multiple targets, leading to substantially improved mask quality. To advance research in this area, we construct MUSE, a high-quality multi-target reasoning segmentation benchmark. PixelLM excels across various pixel-level image reasoning and understanding tasks, outperforming well-established methods in multiple benchmarks, including MUSE, single- and multi-referring segmentation. Comprehensive ablations confirm the efficacy of each proposed component. All code, models, and datasets will be publicly available.

Zhongwei Ren, Zhicheng Huang, Yunchao Wei, Yao Zhao, Dongmei Fu, Jiashi Feng, Xiaojie Jin • 2023
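The abstract describes a lightweight decoder that produces one mask per segmentation-codebook token directly from that token's hidden embedding, without an external segmentation model. Below is a minimal, hypothetical PyTorch sketch of that data flow; the module names, dimensions, single-scale feature map, and dot-product mask head are assumptions made for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightweightPixelDecoder(nn.Module):
    """Hypothetical sketch of PixelLM-style mask decoding.

    Assumptions not taken from the paper: a single-scale image feature map,
    a dot-product mask head, and bilinear upsampling to the output size.
    """

    def __init__(self, lmm_dim: int = 4096, vis_dim: int = 256):
        super().__init__()
        # Project the LMM hidden states of segmentation-codebook tokens
        # into the visual feature space.
        self.token_proj = nn.Sequential(
            nn.Linear(lmm_dim, vis_dim), nn.GELU(), nn.Linear(vis_dim, vis_dim)
        )
        # Small convolutional head refining the image features used for masks.
        self.pixel_head = nn.Sequential(
            nn.Conv2d(vis_dim, vis_dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(vis_dim, vis_dim, 1),
        )

    def forward(self, token_states, image_feats, out_size):
        # token_states: (B, N, lmm_dim) hidden states, one per target token
        # image_feats:  (B, vis_dim, H, W) features from the vision encoder
        q = self.token_proj(token_states)              # (B, N, vis_dim)
        k = self.pixel_head(image_feats)               # (B, vis_dim, H, W)
        logits = torch.einsum("bnc,bchw->bnhw", q, k)  # one mask logit map per token
        return F.interpolate(logits, size=out_size, mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    decoder = LightweightPixelDecoder()
    tokens = torch.randn(1, 3, 4096)      # e.g. three targets named in the reply
    feats = torch.randn(1, 256, 24, 24)   # vision-encoder feature map
    masks = decoder(tokens, feats, out_size=(336, 336))
    print(masks.shape)                    # torch.Size([1, 3, 336, 336])
```

The design described in the paper is richer than this sketch and is trained with the proposed target refinement loss, neither of which is reproduced here.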

Related benchmarks

Task                               | Dataset          | Result       | Rank
Referring Expression Comprehension | RefCOCO+ (val)   | --           | 354
Referring Expression Comprehension | RefCOCO (val)    | --           | 344
Referring Expression Comprehension | RefCOCO (testA)  | --           | 342
Referring Expression Comprehension | RefCOCOg (val)   | --           | 300
Referring Expression Comprehension | RefCOCOg (test)  | --           | 300
Referring Image Segmentation       | RefCOCO (val)    | mIoU 73      | 259
Referring Expression Segmentation  | RefCOCO (testA)  | cIoU 76.5    | 257
Referring Image Segmentation       | RefCOCO+ (testB) | mIoU 58.3    | 252
Diagram Understanding              | AI2D             | Accuracy 0.0 | 247
Referring Expression Comprehension | RefCOCO+ (testB) | --           | 244
Showing 10 of 107 rows
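The results above mix two standard referring-segmentation metrics: cIoU pools intersections and unions over an entire split before dividing, while mIoU averages per-example IoUs. The following is a small illustrative sketch; the function and argument names are our own and not from any PixelLM code.

```python
import numpy as np


def referring_segmentation_metrics(pred_masks, gt_masks):
    """Illustrative computation of cIoU and mIoU for binary masks.

    pred_masks, gt_masks: lists of boolean numpy arrays, one pair per example.
    cIoU pools intersections and unions over the whole split before dividing;
    mIoU averages the per-example IoUs instead.
    """
    inter_total, union_total, per_example_ious = 0, 0, []
    for pred, gt in zip(pred_masks, gt_masks):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        inter_total += inter
        union_total += union
        per_example_ious.append(inter / union if union > 0 else 1.0)
    ciou = inter_total / union_total if union_total > 0 else 1.0
    miou = float(np.mean(per_example_ious))
    return ciou, miou
```

Because cIoU pools pixels across the whole split, large objects carry more weight than under mIoU, which is one reason referring-segmentation benchmarks often report both.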

Other info

Code

Follow for updates.