
SPREAD: Subspace Representation Distillation for Lifelong Imitation Learning

About

A key challenge in lifelong imitation learning (LIL) is enabling agents to acquire new skills from expert demonstrations while retaining prior knowledge. This requires preserving the low-dimensional manifolds and geometric structures that underlie task representations across sequential learning. Existing distillation methods, which rely on L2-norm feature matching in raw feature space, are sensitive to noise and high-dimensional variability and often fail to preserve intrinsic task manifolds. To address this, we introduce SPREAD, a geometry-preserving framework that employs singular value decomposition (SVD) to align policy representations across tasks within low-rank subspaces. This alignment maintains the underlying geometry of multimodal features, facilitating stable transfer, robustness, and generalization. Additionally, we propose a confidence-guided distillation strategy that applies a Kullback-Leibler divergence loss restricted to the top-M most confident action samples, emphasizing reliable modes and improving optimization stability. Experiments on LIBERO, a lifelong imitation learning benchmark, show that SPREAD substantially improves knowledge transfer, mitigates catastrophic forgetting, and achieves state-of-the-art performance.
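The two losses described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the choice of teacher-side SVD basis, the use of max softmax probability as the confidence score, and the epsilon for numerical stability are all assumptions made here for clarity.

```python
import numpy as np

def subspace_distill_loss(teacher_feats, student_feats, rank):
    """Geometry-preserving distillation sketch: project both feature
    sets into the teacher's top-`rank` right-singular subspace and
    match them there, rather than in raw feature space."""
    # SVD of teacher features (N x D); rows of vt span the feature subspace.
    _, _, vt = np.linalg.svd(teacher_feats, full_matrices=False)
    basis = vt[:rank].T                 # D x rank low-rank basis
    t_proj = teacher_feats @ basis      # N x rank projections
    s_proj = student_feats @ basis
    return np.mean((t_proj - s_proj) ** 2)

def topm_kl_loss(teacher_logits, student_logits, m):
    """Confidence-guided distillation sketch: KL divergence restricted
    to the top-m samples where the teacher is most confident
    (confidence taken here as the max softmax probability)."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    top_idx = np.argsort(p.max(axis=-1))[-m:]   # top-m confident samples
    eps = 1e-8                                   # assumed stabilizer
    kl = np.sum(p[top_idx] * (np.log(p[top_idx] + eps)
                              - np.log(q[top_idx] + eps)), axis=-1)
    return kl.mean()
```

Projecting into the teacher's low-rank subspace discards high-dimensional noise directions before matching, which is the stated motivation over raw L2 feature matching; restricting the KL term to confident samples keeps unreliable action modes from dominating the gradient.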

Kaushik Roy, Giovanni D'urso, Nicholas Lawrance, Brendan Tidd, Peyman Moghadam • 2026

Related benchmarks

Task                         | Dataset        | Result                     | Rank
Lifelong Imitation Learning  | LIBERO Goal    | Forward Transfer (FWT): 78 | 16
Continual Learning           | LIBERO Object  | FWT: 81                    | 15
Lifelong Imitation Learning  | LIBERO Spatial | Forward Transfer (FWT): 71 | 5
