
Generation and Comprehension of Unambiguous Object Descriptions

About

We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MS-COCO. We have released the dataset and a toolbox for visualization and evaluation; see https://github.com/mjhucla/Google_Refexp_toolbox
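The comprehension side of the task can be framed as choosing, among candidate regions, the one the model scores highest for the given expression. A minimal sketch, where `score_region` is a hypothetical stand-in for a trained model that scores how well an expression describes a region:

```python
# Hedged sketch: referring-expression comprehension as an argmax over
# candidate regions. `score_region` is a hypothetical placeholder for a
# trained scoring model (e.g. the probability of the expression given
# the region and image); it is not part of the released toolbox.

def comprehend(expression, regions, score_region):
    """Return the candidate region that best matches the expression."""
    return max(regions, key=lambda region: score_region(expression, region))

# Toy usage with a dummy scorer that simply prefers the largest box
# (a real system would use a learned model instead).
if __name__ == "__main__":
    boxes = [(0, 0, 10, 10), (5, 5, 50, 50)]  # (x, y, w, h)
    best = comprehend("the large object", boxes,
                      lambda expr, box: box[2] * box[3])
    print(best)  # (5, 5, 50, 50)
```

Because the predicted region can be compared directly against a ground-truth box, this formulation is what makes the task objectively evaluable (e.g. by intersection-over-union), unlike free-form captioning.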

Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan Yuille, Kevin Murphy · 2015

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Referring Expression Comprehension | RefCOCO (testA) | Accuracy: 0.649 | 333 |
| Referring Expression Comprehension | RefCOCOg (val) | Accuracy: 62.14 | 291 |
| Referring Expression Comprehension | RefCOCO+ (testB) | Accuracy: 42.81 | 235 |
| Referring Expression Comprehension | RefCOCO+ (testA) | Accuracy: 54.03 | 207 |
| Referring Expression Comprehension | RefCOCO (testB) | Accuracy: 54.51 | 196 |
| Referring Expression Comprehension | RefCOCO+ (test-A) | Accuracy: 48.73 | 172 |
| Visual Grounding | RefCOCO+ (testB) | Accuracy: 42.8 | 169 |
| Referring Expression Comprehension | RefCOCO+ (test-B) | Accuracy: 42.13 | 167 |
| Referring Expression Comprehension | RefCOCO (test-B) | Accuracy: 64.21 | 160 |
| Visual Grounding | RefCOCO (testB) | Accuracy: 54.5 | 125 |

Showing 10 of 17 rows.
