
SegLLM: Multi-round Reasoning Segmentation

About

We present SegLLM, a novel multi-round interactive reasoning segmentation model that enhances LLM-based segmentation by exploiting conversational memory of both visual and textual outputs. By leveraging a mask-aware multimodal LLM, SegLLM re-integrates previous segmentation results into its input stream, enabling it to reason about complex user intentions and segment objects in relation to previously identified entities, including positional, interactional, and hierarchical relationships, across multiple interactions. This capability allows SegLLM to respond to visual and text queries in a chat-like manner. Evaluated on the newly curated MRSeg benchmark, SegLLM outperforms existing methods in multi-round interactive reasoning segmentation by over 20%. Additionally, we observe that training on multi-round reasoning segmentation data enhances performance on standard single-round referring segmentation and localization tasks, resulting in a 5.5% increase in cIoU for referring expression segmentation and a 4.5% improvement in Acc@0.5 for referring expression localization.

XuDong Wang, Shaolun Zhang, Shufan Li, Konstantinos Kallidromitis, Kehan Li, Yusuke Kato, Kazuki Kozuka, Trevor Darrell • 2024
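
The core idea above is a feedback loop: masks predicted in earlier rounds are encoded and fed back into the model's input stream so later queries can reference them. Below is a minimal, hypothetical sketch of that loop; the names (`Turn`, `ConversationState`, `encode_mask`, `segment`) and the toy encoder are illustrative stand-ins, not SegLLM's actual API or architecture.

```python
# Hedged sketch of a multi-round, mask-aware conversation loop.
# Everything here is a toy stand-in for the real model components.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Turn:
    query: str        # user's text query for this round
    mask: np.ndarray  # binary mask produced in response

@dataclass
class ConversationState:
    image: np.ndarray  # H x W x 3 input image
    history: list[Turn] = field(default_factory=list)

def encode_mask(mask: np.ndarray) -> np.ndarray:
    """Hypothetical mask encoder: summarizes a predicted mask into a
    reference embedding so later rounds can refer back to that object."""
    return np.array([mask.mean(), float(mask.sum())])  # toy 2-d embedding

def segment(state: ConversationState, query: str) -> np.ndarray:
    """One round: condition on the image, the new query, and embeddings of
    all previously predicted masks (the conversational memory)."""
    mask_memory = [encode_mask(t.mask) for t in state.history]
    # A real model would feed (image, query, mask_memory) through a
    # mask-aware multimodal LLM plus a mask decoder; this stub just
    # returns a fixed dummy mask of the right shape.
    _ = mask_memory
    h, w, _ch = state.image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[h // 4 : h // 2, w // 4 : w // 2] = True
    state.history.append(Turn(query, mask))
    return mask

# Multi-round usage: round 2 can reference the object segmented in round 1.
state = ConversationState(image=np.zeros((64, 64, 3)))
m1 = segment(state, "Segment the person on the left.")
m2 = segment(state, "Now segment the backpack that person is wearing.")
```

The key design point this illustrates is that the history carries visual outputs (mask embeddings), not just text, which is what lets relational queries ("the backpack that person is wearing") resolve against earlier segmentation results.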

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Referring Expression Segmentation | RefCOCO (testA) | cIoU | 81.5 | 217 |
| Referring Expression Segmentation | RefCOCO+ (val) | cIoU | 70.3 | 201 |
| Referring Expression Segmentation | RefCOCO (testB) | cIoU | 75.4 | 191 |
| Referring Expression Segmentation | RefCOCO (val) | cIoU | 80.2 | 190 |
| Referring Expression Segmentation | RefCOCO+ (testA) | cIoU | 73.0 | 190 |
| Referring Expression Segmentation | RefCOCO+ (testB) | cIoU | 62.5 | 188 |
| Referring Expression Segmentation | RefCOCOg (val (U)) | cIoU | 72.6 | 89 |
| Referring Expression Segmentation | RefCOCOg (test (U)) | cIoU | 73.6 | 78 |
| Visual Grounding | ReasonSeg | cIoU (Overall) | 48.4 | 15 |