
GREx: Generalized Referring Expression Segmentation, Comprehension, and Generation

About

Referring Expression Segmentation (RES) and Comprehension (REC) respectively segment and detect the object described by an expression, while Referring Expression Generation (REG) generates an expression for a selected object. Existing datasets and methods commonly support only single-target expressions, i.e., one expression refers to exactly one object, and do not consider multi-target or no-target expressions. This greatly limits real-world applications of REx (RES/REC/REG). This paper introduces three new benchmarks, Generalized Referring Expression Segmentation (GRES), Comprehension (GREC), and Generation (GREG), collectively denoted GREx, which extend classic REx to allow expressions that identify an arbitrary number of objects. We construct the first large-scale GREx dataset, gRefCOCO, which contains multi-target, no-target, and single-target expressions together with their corresponding images and labeled targets. GREx and gRefCOCO are designed to be backward-compatible with REx, facilitating extensive experiments that study the performance gap of existing REx methods on GREx tasks. One of the challenges of GRES/GREC is complex relationship modeling, for which we propose a baseline, ReLA, that adaptively divides the image into regions with sub-instance clues and explicitly models region-region and region-language dependencies. ReLA achieves state-of-the-art results on both the GRES and GREC tasks. The gRefCOCO dataset and method are available at https://henghuiding.github.io/GREx.
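The abstract only names ReLA's mechanism, not its implementation. As a toy illustration (not the paper's code), here is a pure-Python sketch of the attention pattern the description implies: each region feature queries the language features to mix in region-language dependencies, and the same routine applied region-to-region captures region-region dependencies. All function names, shapes, and values below are hypothetical.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross_attend(queries, keys_values):
    """Scaled dot-product attention: each query vector attends over all
    key/value vectors and returns an attention-weighted mixture of them."""
    d = len(keys_values[0])
    scale = 1.0 / math.sqrt(d)
    out = []
    for q in queries:
        w = softmax([dot(q, k) * scale for k in keys_values])
        out.append([sum(wi * v[j] for wi, v in zip(w, keys_values))
                    for j in range(d)])
    return out

# Hypothetical 2-D features: two region queries, one word feature.
regions = [[1.0, 0.0], [0.0, 1.0]]
words = [[2.0, 0.0]]
lang_aware = cross_attend(regions, words)    # region-language dependencies
region_aware = cross_attend(regions, regions)  # region-region dependencies
print(lang_aware)  # -> [[2.0, 0.0], [2.0, 0.0]]
```

With a single word feature the attention weight is trivially 1, so every region output equals that word vector; in a real model the region features would come from an adaptively learned division of the image, which this sketch does not attempt.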

Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Yu-Gang Jiang (2026)

Related benchmarks

Task                                          | Dataset               | Result          | Rank
Referring Expression Segmentation             | RefCOCO (testA)       | cIoU 76.48      | 257
Referring Video Object Segmentation           | Ref-YouTube-VOS (val) | J&F Score 65.7  | 244
Referring Expression Segmentation             | RefCOCO+ (testA)      | cIoU 71.02      | 230
Referring Expression Segmentation             | RefCOCO+ (val)        | cIoU 66.04      | 223
Referring Expression Segmentation             | RefCOCO (testB)       | cIoU 70.18      | 213
Referring Expression Segmentation             | RefCOCO (val)         | cIoU 73.82      | 212
Referring Expression Segmentation             | RefCOCO+ (testB)      | cIoU 57.65      | 210
Referring Video Object Segmentation           | MeViS (val)           | J&F Score 0.446 | 161
Generalized Referring Expression Segmentation | gRefCOCO (testA)      | cIoU 69.43      | 139
Generalized Referring Expression Segmentation | gRefCOCO (val)        | cIoU 62.91      | 123

(Showing 10 of 18 rows.)
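The segmentation rows above report cIoU (cumulative IoU), which pools intersection and union pixels over the whole evaluation split rather than averaging per-image IoU. A minimal sketch, assuming binary masks as nested lists (the function name and the edge-case handling are my own choices, not from the benchmark code):

```python
def ciou(pred_masks, gt_masks):
    """Cumulative IoU: total intersection pixels / total union pixels,
    accumulated across all samples (not a per-image average)."""
    inter = union = 0
    for pred, gt in zip(pred_masks, gt_masks):
        for prow, grow in zip(pred, gt):
            for p, g in zip(prow, grow):
                inter += p & g
                union += p | g
    # Assumption: score a split with no foreground at all as perfect.
    return inter / union if union else 1.0

pred  = [[1, 1], [0, 0]]
gt    = [[1, 0], [0, 0]]
empty = [[0, 0], [0, 0]]
# A no-target sample predicted empty adds nothing to either sum,
# but any false-positive pixel on it would inflate the union.
print(ciou([pred, empty], [gt, empty]))  # 1/2 -> 0.5
```

This pooling is why cIoU on gRefCOCO penalizes false-positive predictions on no-target expressions: they add union pixels without adding any intersection.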
