
Language-Mediated, Object-Centric Representation Learning

About

We present Language-mediated, Object-centric Representation Learning (LORL), a paradigm for learning disentangled, object-centric scene representations from vision and language. LORL builds on recent advances in unsupervised object discovery and segmentation, notably MONet and Slot Attention. While these algorithms learn object-centric representations purely by reconstructing the input image, LORL additionally learns to associate the discovered representations with concepts, i.e., words for object categories, properties, and spatial relationships, from accompanying language. These language-derived, object-centric concepts in turn facilitate the learning of the object representations themselves. LORL can be integrated with various language-agnostic unsupervised object discovery algorithms. Experiments on two datasets show that LORL consistently improves the performance of unsupervised object discovery methods with the help of language. We also show that concepts learned by LORL, in conjunction with object discovery methods, aid downstream tasks such as referring expression comprehension.
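To make the object-discovery backbone concrete, below is a minimal, dependency-light sketch of the iterative attention step at the core of Slot Attention, which LORL builds on. This is an illustrative simplification (no learned projections, MLP, or GRU update; function and variable names are our own), not the paper's implementation: a set of randomly initialized slots competes for input features via a softmax over slots, then each slot is updated to the weighted mean of the features it claims.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs, num_slots=4, num_iters=3, seed=0):
    """Simplified Slot Attention: iterative competitive attention.

    inputs: (n, d) array of per-location features (e.g. CNN features).
    Returns (num_slots, d) slot vectors.
    """
    rng = np.random.default_rng(seed)
    n, d = inputs.shape
    slots = rng.normal(size=(num_slots, d))
    for _ in range(num_iters):
        # Softmax over the *slot* axis: slots compete for each input feature.
        attn = softmax(inputs @ slots.T / np.sqrt(d), axis=1)  # (n, num_slots)
        # Normalize over inputs so each slot takes a weighted mean of its features.
        attn = attn / attn.sum(axis=0, keepdims=True)
        slots = attn.T @ inputs
    return slots

feats = np.random.default_rng(1).normal(size=(16, 8))  # 16 feature vectors of dim 8
slots = slot_attention(feats, num_slots=3)
print(slots.shape)  # (3, 8)
```

In LORL, representations like these slots are additionally scored against concept embeddings derived from language (categories, properties, relations), so that language supervision shapes the decomposition rather than reconstruction alone.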

Ruocheng Wang, Jiayuan Mao, Samuel J. Gershman, Jiajun Wu • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instance Segmentation | PartNet 1.0 (test) | mAP (Chair) | 50.1 | 44 |
| Visual Question Answering | PartNet-Reasoning Cart 1.0 (test) | Part Existence Accuracy | 91 | 6 |
| Visual Question Answering | PartNet-Reasoning Chair 1.0 (test) | Part Existence Accuracy | 72.4 | 6 |
| Visual Question Answering | PartNet-Reasoning Table 1.0 (test) | Part Existence Accuracy | 71 | 6 |
| Visual Question Answering | PartNet-Reasoning Bag 1.0 (test) | Part Existence Accuracy | 87.3 | 6 |
| Semantic Segmentation | PartNet-Reasoning (test) | Accuracy (Chair) | 64.6 | 5 |
| Instance Segmentation | PartNet-Reasoning (test) | mIoU (Chair) | 50.1 | 5 |
