
Abstracting Robot Manipulation Skills via Mixture-of-Experts Diffusion Policies

About

Diffusion-based policies have recently shown strong results in robot manipulation, but their extension to multi-task scenarios is hindered by the high cost of scaling model size and demonstrations. We introduce Skill Mixture-of-Experts Policy (SMP), a diffusion-based mixture-of-experts policy that learns a compact orthogonal skill basis and uses sticky routing to compose actions from a small, task-relevant subset of experts at each step. A variational training objective supports this design, and adaptive expert activation at inference yields fast sampling without oversized backbones. We validate SMP in simulation and on a real dual-arm platform with multi-task learning and transfer learning tasks, where SMP achieves higher success rates and markedly lower inference cost than large diffusion baselines. These results indicate a practical path toward scalable, transferable multi-task manipulation: learn reusable skills once, activate only what is needed, and adapt quickly when tasks change.
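To make the routing idea concrete, here is a minimal sketch of a mixture-of-experts action head with top-k "sticky" routing. Everything here is illustrative, not the authors' implementation: the experts are plain linear skill heads (a real diffusion policy would use denoising networks), and the names, shapes, and the sticky-bonus gating scheme are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8      # size of the skill basis (hypothetical)
TOP_K = 2          # experts activated per step (hypothetical)
D_OBS, D_ACT = 16, 7

# Each "expert" is a linear skill head mapping observation -> action delta.
experts = [rng.normal(scale=0.1, size=(D_OBS, D_ACT)) for _ in range(N_EXPERTS)]
gate_w = rng.normal(scale=0.1, size=(D_OBS, N_EXPERTS))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def moe_action(obs, prev_idx=None, sticky_bonus=1.0):
    """Compose an action from the top-k experts; a logit bonus for the
    previously chosen experts makes the routing 'sticky' across steps."""
    logits = obs @ gate_w
    if prev_idx is not None:
        logits[list(prev_idx)] += sticky_bonus  # favor last step's experts
    top = np.argsort(logits)[-TOP_K:]           # indices of the k best experts
    weights = softmax(logits[top])
    action = sum(w * (obs @ experts[i]) for w, i in zip(weights, top))
    return action, set(top.tolist())

obs = rng.normal(size=D_OBS)
a1, idx1 = moe_action(obs)                                   # step t
a2, idx2 = moe_action(obs + 0.01 * rng.normal(size=D_OBS),   # step t+1
                      prev_idx=idx1)
```

Because only `TOP_K` of the `N_EXPERTS` heads run per step, inference cost stays flat as the skill basis grows, which is the scaling property the abstract attributes to adaptive expert activation.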

Ce Hao, Xuanran Zhai, Yaohua Liu, Harold Soh • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Robotic Manipulation | RoboTwin 2.0 | Pick Diverse Bottles Success Rate: 56 | 17 |
| Bimanual Multi-Task Learning | RoboTwin and RLBench (average over all tasks) | Np2: 58.9 | 7 |
| Bimanual Multi-Task Learning | RLBench | Tray Success Rate: 19 | 6 |
| Bimanual Manipulation | RoboTwin-2 Few-shot | Success Rate (Div.): 22 | 4 |
