Language-Mediated, Object-Centric Representation Learning
About
We present Language-mediated, Object-centric Representation Learning (LORL), a paradigm for learning disentangled, object-centric scene representations from vision and language. LORL builds upon recent advances in unsupervised object discovery and segmentation, notably MONet and Slot Attention. While these algorithms learn an object-centric representation solely by reconstructing the input image, LORL additionally enables them to associate the learned representations with concepts, i.e., words for object categories, properties, and spatial relationships, drawn from language input. These object-centric concepts derived from language in turn facilitate the learning of object-centric representations. LORL can be integrated with various language-agnostic unsupervised object discovery algorithms. Experiments show that integrating LORL consistently improves the performance of unsupervised object discovery methods on two datasets with the help of language. We also show that concepts learned by LORL, in conjunction with object discovery methods, aid downstream tasks such as referring expression comprehension.
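To make the pairing of reconstruction and language supervision concrete, below is a minimal NumPy sketch of the general idea, not the paper's actual model: object slots are scored against a concept embedding (e.g., for the word "red"), and a concept-alignment term is added to the usual reconstruction loss. All function names, the cosine-softmax scoring, and the loss weighting `lam` are illustrative assumptions, not LORL's published architecture.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between a slot vector and a concept vector
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def concept_alignment_loss(slots, concept_vec, target_slot):
    # Cross-entropy encouraging the concept (e.g., the word "red") to
    # attach to the slot that actually depicts the referred object.
    scores = np.array([cosine(s, concept_vec) for s in slots])
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax over slots
    return float(-np.log(probs[target_slot] + 1e-8))

def lorl_style_loss(image, recon, slots, concept_vec, target_slot, lam=1.0):
    # Total objective: image reconstruction (as in MONet / Slot Attention)
    # plus a language-derived concept-alignment term (hypothetical form).
    recon_loss = float(np.mean((image - recon) ** 2))
    return recon_loss + lam * concept_alignment_loss(slots, concept_vec, target_slot)
```

In this sketch, a slot whose representation is close to the concept embedding receives low alignment loss, so gradients from language supervision shape the same slots that reconstruction trains.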
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Instance Segmentation | PartNet 1.0 (test) | mAP (Chair): 50.1 | 44 |
| Visual Question Answering | PartNet-Reasoning Cart 1.0 (test) | Part Existence Accuracy: 91 | 6 |
| Visual Question Answering | PartNet-Reasoning Chair 1.0 (test) | Part Existence Accuracy: 72.4 | 6 |
| Visual Question Answering | PartNet-Reasoning Table 1.0 (test) | Part Existence Accuracy: 71 | 6 |
| Visual Question Answering | PartNet-Reasoning Bag 1.0 (test) | Part Existence Accuracy: 87.3 | 6 |
| Semantic Segmentation | PartNet-Reasoning (test) | Accuracy (Chair): 64.6 | 5 |
| Instance Segmentation | PartNet-Reasoning (test) | mIoU (Chair): 50.1 | 5 |