Viewpoint Matters: Dynamically Optimizing Viewpoints with Masked Autoencoder for Visual Manipulation

About

Robotic manipulation remains challenging; imitation learning (IL) enables robots to learn tasks from expert demonstrations. Current IL methods typically rely on fixed camera setups, where cameras are manually positioned at static locations, which significantly limits adaptability and coverage. Inspired by human active perception, in which humans dynamically adjust their viewpoint to capture the most relevant and least noisy information, we propose MAE-Select, a novel framework for active viewpoint selection in single-camera robotic systems. MAE-Select fully leverages pre-trained multi-view masked autoencoder representations and dynamically selects the next most informative viewpoint at each time chunk, without requiring labeled viewpoints. Extensive experiments demonstrate that MAE-Select improves the capabilities of single-camera systems and, in some cases, even surpasses multi-camera setups. The project will be available at https://mae-select.github.io.

Pengfei Yi, Yifan Han, Junyan Li, Litao Liu, Wenzhao Lian • 2026
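
The abstract does not specify the architecture, so the following is only a minimal, hypothetical PyTorch sketch of the general idea it describes: encode images from candidate viewpoints with a frozen encoder standing in for the pre-trained multi-view masked autoencoder, score each view's informativeness, and pick the argmax as the viewpoint for the next time chunk. All names here (ViewEncoder, ViewpointScorer, select_viewpoint) are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn as nn


class ViewEncoder(nn.Module):
    """Stand-in for a frozen, pre-trained multi-view MAE encoder.

    The real system would load pre-trained MAE weights; this tiny CNN
    is a placeholder so the sketch runs self-contained.
    """

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # img: (B, 3, H, W) -> (B, feat_dim)
        return self.backbone(img)


class ViewpointScorer(nn.Module):
    """Maps per-view features to a scalar informativeness score."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, V, D) for V candidate views -> scores (B, V)
        return self.head(feats).squeeze(-1)


@torch.no_grad()
def select_viewpoint(encoder: nn.Module, scorer: nn.Module,
                     candidate_views: torch.Tensor) -> torch.Tensor:
    """candidate_views: (B, V, 3, H, W) images from V candidate camera poses.

    Returns the index of the highest-scoring viewpoint per batch element,
    i.e. the view the camera would move to for the coming time chunk.
    """
    b, v, c, h, w = candidate_views.shape
    feats = encoder(candidate_views.reshape(b * v, c, h, w)).reshape(b, v, -1)
    scores = scorer(feats)           # (B, V) informativeness scores
    return scores.argmax(dim=-1)     # (B,) selected viewpoint indices


if __name__ == "__main__":
    enc, sc = ViewEncoder(), ViewpointScorer()
    views = torch.randn(2, 5, 3, 96, 96)  # 2 episodes, 5 candidate viewpoints
    print(select_viewpoint(enc, sc, views))  # e.g. tensor([3, 0])
```

How the scorer is trained (e.g., from demonstration success signal, without labeled viewpoints, as the abstract claims) is a detail of the paper not reproduced here.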

Related benchmarks

Task               | Dataset                     | Metric                 | Result | Rank
Insertion          | Simulation                  | Insertion Success Rate | 88     | 14
Phone On Base      | Simulation                  | Success Rate           | 92     | 7
Pick Up Cup        | Simulation                  | Success Rate           | 0.7    | 7
Put Box In Cabinet | Simulation                  | Success Rate           | 0.5    | 7
Unplug Charger     | Simulation                  | Success Rate           | 58     | 7
Bimanual Insertion | Simulation                  | Success Rate           | 52     | 7
Put Box In Bin     | Simulation with Disturbance | Success Rate           | 58     | 7
Take Umbrella      | Simulation                  | Success Rate           | 60     | 7
