
Theia: Distilling Diverse Vision Foundation Models for Robot Learning

About

Vision-based robot policy learning, which maps visual inputs to actions, necessitates a holistic understanding of diverse visual tasks beyond single-task needs like classification or segmentation. Inspired by this, we introduce Theia, a vision foundation model for robot learning that distills multiple off-the-shelf vision foundation models trained on varied vision tasks. Theia's rich visual representations encode diverse visual knowledge, enhancing downstream robot learning. Extensive experiments demonstrate that Theia outperforms its teacher models and prior robot learning models using less training data and smaller model sizes. Additionally, we quantify the quality of pre-trained visual representations and hypothesize that higher entropy in feature norm distributions leads to improved robot learning performance. Code, models, and demo are available at https://theia.theaiinstitute.com.
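The distillation objective described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes a shared student representation, one linear "translator" per teacher model, and a plain MSE regression onto each teacher's frozen features. All shapes, names, and the choice of loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: 196 spatial tokens (e.g. a 14x14 feature map),
# a 64-dim student, and two stand-in teachers of different widths.
num_tokens, student_dim = 196, 64
teacher_dims = [96, 128]

# Stand-ins for the student's features and each frozen teacher's
# precomputed features on the same image.
student_feats = rng.normal(size=(num_tokens, student_dim))
teacher_feats = [rng.normal(size=(num_tokens, d)) for d in teacher_dims]

# One learnable linear projection ("translator") per teacher,
# randomly initialized here for the sketch.
translators = [rng.normal(size=(student_dim, d)) * 0.1 for d in teacher_dims]

def distill_loss(student, translators, teachers):
    """Sum of per-teacher MSE between translated student features
    and the corresponding frozen teacher features."""
    return sum(np.mean((student @ W - t) ** 2)
               for W, t in zip(translators, teachers))

loss = distill_loss(student_feats, translators, teacher_feats)
```

In a real training loop the backbone and translators would be optimized jointly against this summed loss, so a single student representation learns to predict every teacher's features at once.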

Jinghuan Shang, Karl Schmeckpeper, Brandon B. May, Maria Vittoria Minniti, Tarik Kelestemur, David Watkins, Laura Herlant • 2024

Related benchmarks

Task                       Dataset              Metric               Result  Rank
Semantic segmentation      ADE20K               mIoU                 35.55   936
Semantic segmentation      Pascal Context       mIoU                 69.84   111
Semantic segmentation      NYUD v2              mIoU                 38.9    96
Semantic segmentation      ScanNet              mIoU                 14.71   59
Semantic segmentation      Pascal Context       mIoU                 69.84   43
Saliency Detection         Pascal Context       maxF Score           80.63   21
Surface Normal Estimation  Pascal Context       Mean Error (MAE)     16.94   21
Robot Manipulation         MetaWorld 50 tasks   Success Rate (Easy)  50.9    21
Surface Normal Estimation  NYUD                 mErr                 24.11   21
Semantic segmentation      SUN-RGBD             IoU                  11.18   19

Showing 10 of 27 rows
