
Simple but Effective: CLIP Embeddings for Embodied AI

About

Contrastive Language-Image Pretraining (CLIP) encoders have been shown to be beneficial for a range of visual tasks, from classification and detection to captioning and image manipulation. We investigate the effectiveness of CLIP visual backbones for Embodied AI tasks. We build incredibly simple baselines, named EmbCLIP, with no task-specific architectures, inductive biases (such as the use of semantic maps), auxiliary tasks during training, or depth maps -- yet we find that our improved baselines perform very well across a range of tasks and simulators. EmbCLIP tops the RoboTHOR ObjectNav leaderboard by a huge margin of 20 pts (Success Rate). It tops the iTHOR 1-Phase Rearrangement leaderboard, beating the next best submission, which employs Active Neural Mapping, and more than doubling the % Fixed Strict metric (0.08 to 0.17). It also beats the winners of the 2021 Habitat ObjectNav Challenge, which employ auxiliary tasks, depth maps, and human demonstrations, and those of the 2019 Habitat PointNav Challenge. We evaluate how well CLIP's visual representations capture semantic information about input observations -- primitives that are useful for navigation-heavy embodied tasks -- and find that CLIP's representations encode these primitives more effectively than ImageNet-pretrained backbones. Finally, we extend one of our baselines, producing an agent capable of zero-shot object navigation that can navigate to objects that were not used as targets during training. Our code and models are available at https://github.com/allenai/embodied-clip
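The core recipe above (a frozen pretrained visual encoder feeding a small trainable policy, with no task-specific machinery) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `FrozenBackbonePolicy` class and the placeholder backbone are hypothetical, standing in for a CLIP visual encoder such as the one loaded in the linked repository.

```python
import torch
import torch.nn as nn

class FrozenBackbonePolicy(nn.Module):
    """Sketch of the EmbCLIP pattern: frozen visual backbone + small trainable head.

    In practice the backbone would be a CLIP visual encoder (e.g. RN50,
    1024-d output); here a placeholder module is used for illustration.
    """

    def __init__(self, backbone: nn.Module, embed_dim: int, num_actions: int):
        super().__init__()
        self.backbone = backbone.eval()
        for p in self.backbone.parameters():
            p.requires_grad = False  # freeze: no gradients flow into the backbone
        # Tiny task-agnostic policy head -- the only trainable part.
        self.policy = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # backbone is inference-only
            feats = self.backbone(obs)
        return self.policy(feats)  # action logits


# Placeholder "visual encoder" matching CLIP RN50's 1024-d embedding size.
dummy_backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1024))
agent = FrozenBackbonePolicy(dummy_backbone, embed_dim=1024, num_actions=6)

logits = agent(torch.randn(2, 3, 224, 224))
print(logits.shape)  # batch of 2 observations -> 2 x 6 action logits
# Only the policy head has trainable parameters:
trainable = [n for n, p in agent.named_parameters() if p.requires_grad]
print(trainable)
```

The key design point is that freezing the backbone keeps training cheap and prevents the RL signal from degrading the pretrained CLIP features.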

Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, Aniruddha Kembhavi • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 83.8 | 512 |
| 1-Phase Room Rearrangement | iTHOR challenge 2021 (test) | FS (Final Success) | 0.1813 | 14 |
| Object Detection | 7-channel brain cell dataset (val) | mAP (Box) | 64.2 | 10 |
| Instance Segmentation | 7-channel brain cell dataset (val) | Mask mAP | 67 | 10 |
| Object Goal Navigation | Habitat OBJECTNAV Challenge 2021 (test-standard) | SPL | 0.08 | 9 |
| Reach | Meta-World ML-1 (test) | Success Rate | 64.7 | 9 |
| Reach-Wall | egocentric-Metaworld Source | Success Rate | 1 | 6 |
| Autonomous Driving | CARLA Map 1 (Seen Target) | Sum of Rewards | 1.73e+3 | 6 |
| Autonomous Driving | CARLA Map 1 (Unseen Target) | Cumulative Reward | 1.42e+3 | 6 |
| Object Goal Navigation | AI2THOR Source domains | Success Rate | 89.3 | 6 |
Showing 10 of 24 rows

Other info

Code
