
Flexible Multitask Learning with Factorized Diffusion Policy

About

Multitask learning is challenging because robot action distributions are highly multimodal and diverse. Existing monolithic models often underfit these complex distributions and lack the flexibility required for efficient adaptation. We introduce a modular diffusion policy framework that factorizes the action distribution into a composition of specialized diffusion models, each capturing a distinct sub-mode of the behavior space, yielding a more effective overall policy. This modular structure also enables flexible adaptation to new tasks by adding or fine-tuning components, which inherently mitigates catastrophic forgetting. Empirically, across both simulated and real-world robotic manipulation settings, our method consistently outperforms strong modular and monolithic baselines.
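The core idea, composing specialized diffusion components into one policy, can be illustrated with a toy sketch. This is an assumption-laden illustration, not the authors' implementation: the expert targets, the fixed 0.5 denoising step, and the hand-set composition weights are all hypothetical stand-ins for learned networks and learned gating.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): a factorized policy that
# composes specialized denoisers, one per behavior sub-mode.

rng = np.random.default_rng(0)

def make_expert(target):
    """Each 'expert' pulls noisy actions toward one sub-mode.
    A real component would be a learned noise-prediction network."""
    def denoise_step(a_t, t):
        # Move a fixed fraction of the way toward this expert's mode.
        return a_t + 0.5 * (target - a_t)
    return denoise_step

experts = [make_expert(np.array([1.0, 0.0])),   # e.g. one manipulation mode
           make_expert(np.array([0.0, 1.0]))]   # e.g. another mode

def composed_policy(a_t, t, weights):
    """Combine expert denoising outputs with per-task weights."""
    outs = np.stack([f(a_t, t) for f in experts])
    return np.einsum("e,ed->d", weights, outs)

# A few reverse-diffusion steps, weighted fully toward the first expert.
a = rng.normal(size=2)
for t in reversed(range(10)):
    a = composed_policy(a, t, weights=np.array([1.0, 0.0]))
print(np.round(a, 3))  # converges toward the first expert's mode
```

Adapting to a new task in this picture amounts to appending a new expert (or fine-tuning one) and adjusting the weights, leaving the remaining components untouched, which is why catastrophic forgetting is mitigated.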

Chaoqi Liu, Haonan Chen, Sigmund H. Høeg, Shaoxiong Yao, Yunzhu Li, Kris Hauser, Yilun Du • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Robotic Manipulation | RLBench (test) | Average Success Rate: 63.9 | 34 |
| Multitask Robot Manipulation | MetaWorld (test) | Door Open: 100 | 4 |
| Continual Adaptation | LIBERO | L1 PnP Soup Success Rate: 83.2 | 3 |
