
A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning

About

Pre-trained vision models (PVMs) are fundamental to modern robotics, yet their optimal configuration remains unclear. Through systematic evaluation, we find that while DINO and iBOT outperform MAE across visuomotor control and perception tasks, they struggle when trained on non-(single-)object-centric (NOC) data, a limitation strongly correlated with their diminished ability to learn object-centric representations. This investigation indicates that the ability to form object-centric representations from non-object-centric robotics data is key to the success of PVMs. Motivated by this discovery, we designed SlotMIM, a method that induces object-centric representations through two mechanisms: a semantic bottleneck that reduces the number of prototypes to encourage the emergence of objectness, and cross-view consistency regularization that promotes multi-view invariance. Our experiments encompass pre-training on object-centric, scene-centric, web-crawled, and ego-centric data. Across all settings, our approach learns transferable representations and achieves significant improvements over prior work in image recognition, scene understanding, and robot learning evaluations. When scaled up to million-scale datasets, our method also demonstrates superior data efficiency and scalability. Our code and models are publicly available at https://github.com/CVMI-Lab/SlotMIM.
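To make the two mechanisms concrete, here is a minimal PyTorch sketch of how a small prototype bank (the semantic bottleneck) and a cross-view assignment-consistency loss could fit together. The class and function names, tensor shapes, prototype count, and the soft cross-entropy loss form are illustrative assumptions, not the actual SlotMIM implementation; see the repository above for the real code.

```python
# Hypothetical sketch of the two ingredients described above:
# (1) a semantic bottleneck: patch features are soft-assigned to a
#     deliberately small set of prototypes, and
# (2) a cross-view consistency loss that matches prototype assignments
#     across two augmented views of the same image.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeBottleneck(nn.Module):
    def __init__(self, dim=256, num_prototypes=64, temperature=0.1):
        super().__init__()
        # A small prototype count forces many patches to share a
        # prototype, pressuring features toward object-level grouping.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        self.temperature = temperature

    def forward(self, patch_feats):
        # patch_feats: (B, N, D) patch tokens from a ViT-style backbone.
        z = F.normalize(patch_feats, dim=-1)
        c = F.normalize(self.prototypes, dim=-1)
        return z @ c.t() / self.temperature  # assignment logits (B, N, K)


def cross_view_consistency(logits_a, logits_b):
    # Encourage each patch's prototype assignment in view A to match the
    # stop-gradient assignment of the corresponding patch in view B, and
    # vice versa. Assumes patches are already in spatial correspondence.
    targets_b = F.softmax(logits_b, dim=-1).detach()
    targets_a = F.softmax(logits_a, dim=-1).detach()
    loss_ab = F.cross_entropy(logits_a.flatten(0, 1), targets_b.flatten(0, 1))
    loss_ba = F.cross_entropy(logits_b.flatten(0, 1), targets_a.flatten(0, 1))
    return 0.5 * (loss_ab + loss_ba)


if __name__ == "__main__":
    bottleneck = PrototypeBottleneck()
    feats_a = torch.randn(2, 196, 256)  # two augmented views of a batch
    feats_b = torch.randn(2, 196, 256)
    loss = cross_view_consistency(bottleneck(feats_a), bottleneck(feats_b))
    print(loss.item())
```

The design intuition is that shrinking the prototype bank makes per-prototype cluster assignments coarse enough to behave like object slots, while the symmetric consistency term keeps those assignments stable across views.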

Xin Wen, Bingchen Zhao, Yilun Chen, Jiangmiao Pang, Xiaojuan Qi • 2025

Related benchmarks

Task                 | Dataset        | Metric               | Result | Rank
Robotic Manipulation | Meta-World     | Average Success Rate | 84.2   | 27
Robotic Manipulation | Franka-Kitchen | Average Success Rate | 86     | 24
Image Navigation     | ImageNav       | Success Rate         | 69.8   | 11
Object Navigation    | ObjectNav      | Success Rate         | 62     | 11

Other info

Code: https://github.com/CVMI-Lab/SlotMIM
