
Neural Operators for Multi-Task Control and Adaptation

About

Neural operator methods have emerged as powerful tools for learning mappings between infinite-dimensional function spaces, yet their potential in optimal control remains largely unexplored. We focus on multi-task control problems, whose solution is a mapping from task description (e.g., cost or dynamics functions) to optimal control law (e.g., feedback policy). We approximate these solution operators using a permutation-invariant neural operator architecture. Across a range of parametric optimal control environments and a locomotion benchmark, a single operator trained via behavioral cloning accurately approximates the solution operator and generalizes to unseen tasks, out-of-distribution settings, and varying amounts of task observations. We further show that the branch-trunk structure of our neural operator architecture enables efficient and flexible adaptation to new tasks. We develop structured adaptation strategies ranging from lightweight updates to full-network fine-tuning, achieving strong performance across different data and compute settings. Finally, we introduce meta-trained operator variants that optimize the initialization for few-shot adaptation. These methods enable rapid task adaptation with limited data and consistently outperform a popular meta-learning baseline. Together, our results demonstrate that neural operators provide a unified and efficient framework for multi-task control and adaptation.
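The branch-trunk structure mentioned above can be illustrated with a minimal DeepONet-style sketch. This is a hypothetical toy, not the paper's actual architecture: a permutation-invariant branch network pools embeddings of a variable-size set of task observations, a trunk network embeds the query state, and their inner product yields the control. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_trunk_policy(task_obs, state, params):
    """Toy branch-trunk operator (illustrative, not the paper's model).

    task_obs: (n_obs, d_obs) set of task observations (order should not matter)
    state:    (d_state,) query state
    params:   (W_b, W_t) weight matrices for branch and trunk
    """
    W_b, W_t = params
    # Branch: embed each observation, then mean-pool.
    # Mean-pooling over the set makes the output permutation-invariant.
    phi = np.tanh(task_obs @ W_b)   # (n_obs, d_hidden)
    b = phi.mean(axis=0)            # (d_hidden,)
    # Trunk: embed the query state.
    t = np.tanh(state @ W_t)        # (d_hidden,)
    # Inner product of branch and trunk embeddings gives the control.
    return float(b @ t)

# Usage: shuffling the task observations leaves the control unchanged.
W_b = rng.normal(size=(3, 8))
W_t = rng.normal(size=(4, 8))
obs = rng.normal(size=(5, 3))       # 5 task observations, each of dim 3
x = rng.normal(size=4)              # query state of dim 4
u1 = branch_trunk_policy(obs, x, (W_b, W_t))
u2 = branch_trunk_policy(obs[::-1], x, (W_b, W_t))
assert np.isclose(u1, u2)           # permutation invariance holds
```

One appeal of this factorization for adaptation, as the abstract notes, is that lightweight updates can target only part of the network (e.g., the branch embedding) while full fine-tuning remains available when more data and compute are at hand.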

David Sewell, Xingjian Li, Stepan Tretiakov, Krishna Kumar, David Fridovich-Keil • 2026

Related benchmarks

Task                 Dataset           Metric             Result   Rank
Behavioral Cloning   P2P Small         Relative L2 Error  0.077    16
Behavioral Cloning   P2P-Cost          Relative L2 Error  6.9      16
Behavioral Cloning   Planar Quadrotor  Relative L2 Error  0.066    16
Behavioral Cloning   obstacle          Relative L2 Error  0.285    16
Behavioral Cloning   P2P-Dyn.          Relative L2 Error  0.081    15
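The benchmark metric above is the relative L2 error. A common convention is the L2 norm of the prediction error normalized by the L2 norm of the target; the benchmark's exact normalization may differ, and the vectors below are made-up examples.

```python
import numpy as np

def relative_l2_error(pred, target):
    """Relative L2 error: ||pred - target||_2 / ||target||_2.

    A value of 0.077 (as in the P2P Small row) means the predicted
    controls deviate from the expert's by about 7.7% in L2 norm.
    """
    return np.linalg.norm(pred - target) / np.linalg.norm(target)

u_true = np.array([1.0, 2.0, 2.0])       # e.g., expert control trajectory
u_pred = np.array([1.0, 2.0, 1.0])       # operator's predicted controls
err = relative_l2_error(u_pred, u_true)  # ||[0, 0, -1]|| / 3 = 1/3
```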
