Score and Distribution Matching Policy: Advanced Accelerated Visuomotor Policies via Matched Distillation
About
Visuomotor policy learning has advanced with architectures such as diffusion-based policies, which are known for modeling complex robotic trajectories. However, their prolonged inference times hinder high-frequency control tasks that require real-time feedback. While consistency distillation (CD) accelerates inference, it introduces errors that compromise action quality. To address these limitations, we propose the Score and Distribution Matching Policy (SDM Policy), which transforms diffusion-based policies into single-step generators through a two-stage optimization process: score matching ensures alignment with the true action distribution, and distribution matching minimizes KL divergence for consistency. A dual-teacher mechanism integrates a frozen teacher for stability and an unfrozen teacher for adversarial training, enhancing robustness and alignment with the target distribution. Evaluated on a 57-task simulation benchmark, SDM Policy achieves a 6x inference speedup while maintaining state-of-the-art action quality, providing an efficient and reliable framework for high-frequency robotic tasks.
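The sketch below illustrates one way the two-stage objective described above could be wired together in PyTorch: a one-step generator is updated with a distribution-matching (KL) gradient estimated as the difference between the frozen teacher's score and the unfrozen teacher's score, while the unfrozen teacher is kept aligned with the generator's samples via denoising score matching. All names (`OneStepPolicy`, `frozen_teacher.score`, `online_teacher.score`, `perturb`, tensor shapes) are illustrative assumptions, not the authors' actual API, and the formulation follows a generic DMD-style recipe rather than the paper's exact losses.

```python
# Minimal sketch of an SDM-style distillation step, assuming PyTorch and
# action tensors of shape (batch, horizon, action_dim). Hypothetical API:
# generator(obs, noise) -> actions; teacher.score(x_t, t, obs) -> score estimate.
import torch
import torch.nn.functional as F


def perturb(actions, t, noise):
    # Illustrative variance-preserving perturbation: x_t = alpha(t) * x0 + sigma(t) * eps.
    alpha = torch.cos(t * torch.pi / 2).view(-1, 1, 1)
    sigma = torch.sin(t * torch.pi / 2).view(-1, 1, 1)
    return alpha * actions + sigma * noise, sigma


def distillation_step(generator, frozen_teacher, online_teacher,
                      gen_opt, teacher_opt, obs, noise):
    # 1) One-step generator maps pure noise (conditioned on observations)
    #    directly to an action trajectory.
    fake_actions = generator(obs, noise)

    # 2) Distribution matching for the generator: the KL gradient at the
    #    generated actions is approximated by the gap between the unfrozen
    #    teacher's score (current generator distribution) and the frozen
    #    teacher's score (true action distribution), evaluated at a shared
    #    noisy sample.
    t = torch.rand(fake_actions.shape[0], device=fake_actions.device) * 0.98 + 0.02
    eps = torch.randn_like(fake_actions)
    noisy, _ = perturb(fake_actions, t, eps)
    with torch.no_grad():
        score_real = frozen_teacher.score(noisy, t, obs)
        score_fake = online_teacher.score(noisy, t, obs)
    grad = score_fake - score_real  # descending this moves samples toward the real distribution
    gen_loss = F.mse_loss(fake_actions, (fake_actions - grad).detach())

    gen_opt.zero_grad()
    gen_loss.backward()
    gen_opt.step()

    # 3) Score matching for the unfrozen teacher: fit it to the generator's
    #    current outputs (its adversarial role) with a standard denoising
    #    score-matching target, so step 2 keeps tracking the fake distribution.
    t2 = torch.rand(fake_actions.shape[0], device=fake_actions.device) * 0.98 + 0.02
    eps2 = torch.randn_like(fake_actions)
    noisy2, sigma2 = perturb(fake_actions.detach(), t2, eps2)
    teacher_loss = F.mse_loss(online_teacher.score(noisy2, t2, obs), -eps2 / sigma2)

    teacher_opt.zero_grad()
    teacher_loss.backward()
    teacher_opt.step()
    return gen_loss.item(), teacher_loss.item()
```

In this sketch the frozen teacher never receives gradients, matching its stabilizing role, while the online teacher is retrained on generator samples each step so that the score gap in step 2 stays an up-to-date estimate of the KL gradient.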
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Robot Manipulation | Adroit | Success Rate | 74 | 18 |
| Robot Manipulation | MetaWorld Medium (11 tasks) | Success Rate | 65.8 | 18 |
| Robot Manipulation | MetaWorld Hard (6 tasks) | Success Rate | 35.8 | 18 |
| Robot Manipulation | MetaWorld Very Hard (5 tasks) | Success Rate | 71.6 | 15 |
| Robotic Arm Manipulation | MetaWorld Easy | Success Rate | 86.5 | 15 |
| Robotic Arm Manipulation | MetaWorld Very Hard | Success Rate | 71.6 | 15 |
| Robotic Manipulation | Adroit and MetaWorld | Average Success Rate | 74.8 | 13 |
| Robot Manipulation | MetaWorld Easy (28 tasks) | Success Rate | 86.5 | 9 |
| Robotic Manipulation | Meta-World (test) | Success Rate (Easy) | 89 | 6 |
| Robotic Manipulation | Adroit and MetaWorld (test) | Average Score | 74.81 | 6 |