Theia: Distilling Diverse Vision Foundation Models for Robot Learning
About
Vision-based robot policy learning, which maps visual inputs to actions, necessitates a holistic understanding of diverse visual tasks beyond single-task needs like classification or segmentation. Inspired by this, we introduce Theia, a vision foundation model for robot learning that distills multiple off-the-shelf vision foundation models trained on varied vision tasks. Theia's rich visual representations encode diverse visual knowledge, enhancing downstream robot learning. Extensive experiments demonstrate that Theia outperforms its teacher models and prior robot learning models using less training data and smaller model sizes. Additionally, we quantify the quality of pre-trained visual representations and hypothesize that higher entropy in feature norm distributions leads to improved robot learning performance. Code, models, and demo are available at https://theia.theaiinstitute.com.
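The hypothesis above links robot learning performance to the entropy of the distribution of feature norms. Below is a minimal sketch of how such an entropy score could be computed for a set of spatial feature vectors; the histogram binning and the `feature_norm_entropy` helper are illustrative assumptions, not Theia's published recipe.

```python
import numpy as np

def feature_norm_entropy(features: np.ndarray, bins: int = 64) -> float:
    """Entropy (in nats) of the binned distribution of per-token feature norms.

    `features` is a (num_tokens, dim) array of spatial feature vectors.
    The fixed-bin histogram here is an illustrative choice, not Theia's
    exact estimator.
    """
    norms = np.linalg.norm(features, axis=-1)       # one norm per token
    hist, _ = np.histogram(norms, bins=bins)        # bin counts over the norm range
    p = hist / hist.sum()                           # normalize to a probability mass
    p = p[p > 0]                                    # drop empty bins to avoid log(0)
    return float(-(p * np.log(p)).sum())

# Toy comparison: tightly clustered norms vs. widely spread norms.
rng = np.random.default_rng(0)
flat = rng.normal(0.0, 0.05, size=(1024, 256))                      # norms concentrate
wide = flat * rng.uniform(0.2, 5.0, size=(1024, 1))                 # norms spread out
print(feature_norm_entropy(flat) < feature_norm_entropy(wide))      # → True
```

Under this toy setup the representation with a wider spread of feature norms scores higher entropy, which is the direction the hypothesis associates with better downstream robot learning.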
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Semantic segmentation | ADE20K | mIoU 35.55 | 936 |
| Semantic segmentation | Pascal Context | mIoU 69.84 | 111 |
| Semantic segmentation | NYUD v2 | mIoU 38.9 | 96 |
| Semantic segmentation | ScanNet | mIoU 14.71 | 59 |
| Semantic segmentation | Pascal Context | mIoU 69.84 | 43 |
| Saliency Detection | Pascal Context | maxF Score 80.63 | 21 |
| Surface Normal Estimation | Pascal Context | Mean Error (MAE) 16.94 | 21 |
| Robot Manipulation | MetaWorld 50 tasks | Success Rate (Easy) 50.9 | 21 |
| Surface Normal Estimation | NYUD | mErr 24.11 | 21 |
| Semantic segmentation | SUN-RGBD | IoU 11.18 | 19 |