# Viewpoint Matters: Dynamically Optimizing Viewpoints with Masked Autoencoder for Visual Manipulation

## About
Robotic manipulation remains a challenging problem; imitation learning (IL) addresses it by enabling robots to learn tasks from expert demonstrations. Current IL methods typically rely on fixed camera setups, in which cameras are manually placed at static locations, which limits adaptability and coverage. Inspired by human active perception, where people dynamically adjust their viewpoint to capture the most relevant and least noisy information, we propose MAE-Select, a novel framework for active viewpoint selection in single-camera robotic systems. MAE-Select leverages pre-trained multi-view masked autoencoder representations and dynamically selects the next most informative viewpoint at each time chunk, without requiring labeled viewpoints. Extensive experiments demonstrate that MAE-Select improves the capabilities of single-camera systems and, in some cases, even surpasses multi-camera setups. The project will be available at https://mae-select.github.io.
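The abstract does not spell out the selection rule, so below is a minimal PyTorch sketch of one plausible instantiation: score each candidate viewpoint by how well a masked autoencoder reconstructs its masked patches, then move the camera to the best-scoring view for the next time chunk. Everything here is an illustrative assumption, not the paper's actual API: `TinyMAE` is a toy stand-in for the pre-trained multi-view MAE, and the reconstruction-error criterion in `select_viewpoint` is a proxy for whatever scoring rule MAE-Select really uses.

```python
import torch
import torch.nn as nn


class TinyMAE(nn.Module):
    """Toy stand-in for a pre-trained multi-view MAE (hypothetical)."""

    def __init__(self, patch: int = 16, dim: int = 64):
        super().__init__()
        self.patch = patch
        self.encoder = nn.Linear(patch * patch * 3, dim)
        self.decoder = nn.Linear(dim, patch * patch * 3)

    def patchify(self, img: torch.Tensor) -> torch.Tensor:
        # img: (B, 3, H, W) -> (B, N, patch*patch*3)
        B, C, H, W = img.shape
        p = self.patch
        img = img.unfold(2, p, p).unfold(3, p, p)          # (B, 3, H/p, W/p, p, p)
        return img.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)

    def masked_recon_error(self, img: torch.Tensor, mask_ratio: float = 0.75) -> torch.Tensor:
        # Randomly mask patches, reconstruct them from the visible ones,
        # and return the per-sample MSE on the masked patches.
        patches = self.patchify(img)
        B, N, D = patches.shape
        n_mask = int(mask_ratio * N)
        perm = torch.rand(B, N, device=img.device).argsort(dim=1)
        mask_idx, keep_idx = perm[:, :n_mask], perm[:, n_mask:]
        keep = torch.gather(patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
        target = torch.gather(patches, 1, mask_idx.unsqueeze(-1).expand(-1, -1, D))
        # Pool the visible-patch embedding and decode a prediction per masked patch.
        ctx = self.encoder(keep).mean(dim=1, keepdim=True)          # (B, 1, dim)
        recon = self.decoder(ctx).expand(-1, target.shape[1], -1)   # (B, n_mask, D)
        return ((recon - target) ** 2).mean(dim=(1, 2))


def select_viewpoint(mae: TinyMAE, candidate_images: dict) -> str:
    """Pick the viewpoint whose masked patches the MAE reconstructs best.

    candidate_images maps a viewpoint id to a (1, 3, H, W) image tensor.
    Lower reconstruction error is taken as "more consistent with the
    pre-trained representation" -- a proxy informativeness score.
    """
    with torch.no_grad():
        errors = {v: mae.masked_recon_error(img).item()
                  for v, img in candidate_images.items()}
    return min(errors, key=errors.get)


if __name__ == "__main__":
    mae = TinyMAE()
    views = {f"view_{i}": torch.rand(1, 3, 64, 64) for i in range(4)}
    print("selected:", select_viewpoint(mae, views))
```

In a real single-camera loop, `candidate_images` would come from rendering or briefly sampling nearby camera poses at the start of each time chunk, and the selected pose would be held until the next chunk; this sketch only illustrates the scoring-and-argmin step.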
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Insertion | Simulation | Insertion Success Rate | 88 | 14 |
| Phone On Base | Simulation | Success Rate | 92 | 7 |
| Pick Up Cup | Simulation | Success Rate | 0.7 | 7 |
| Put Box In Cabinet | Simulation | Success Rate | 0.5 | 7 |
| Unplug Charger | Simulation | Success Rate | 58 | 7 |
| Bimanual Insertion | Simulation | Success Rate | 52 | 7 |
| Put Box In Bin | Simulation with Disturbance | Success Rate | 58 | 7 |
| Take Umbrella | Simulation | Success Rate | 60 | 7 |