Self-Distillation Enables Continual Learning

About

Continual learning, enabling models to acquire new skills and knowledge without degrading existing capabilities, remains a fundamental challenge for foundation models. While on-policy reinforcement learning can reduce forgetting, it requires explicit reward functions that are often unavailable. Learning from expert demonstrations, the primary alternative, is dominated by supervised fine-tuning (SFT), which is inherently off-policy. We introduce Self-Distillation Fine-Tuning (SDFT), a simple method that enables on-policy learning directly from demonstrations. SDFT leverages in-context learning by using a demonstration-conditioned model as its own teacher, generating on-policy training signals that preserve prior capabilities while acquiring new skills. Across skill learning and knowledge acquisition tasks, SDFT consistently outperforms SFT, achieving higher new-task accuracy while substantially reducing catastrophic forgetting. In sequential learning experiments, SDFT enables a single model to accumulate multiple skills over time without performance regression, establishing on-policy distillation as a practical path to continual learning from demonstrations.
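To make the core loop concrete, below is a minimal sketch of the SDFT idea described above: the same model, conditioned on an expert demonstration in context, serves as the teacher; a response is sampled on-policy from that teacher, and the model queried without the demonstration is trained to match the teacher's token distributions. This is an illustrative reconstruction, not the authors' reference implementation; the model name, prompt templates, generation settings, and learning rate are placeholder assumptions.

```python
# Minimal SDFT sketch, assuming a Hugging Face-style causal LM.
# All names and hyperparameters below are illustrative placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B"  # placeholder; any causal LM works
model = AutoModelForCausalLM.from_pretrained(MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def sdft_step(prompt: str, demonstration: str) -> torch.Tensor:
    # 1. Teacher context: the same model, with the demonstration in context.
    teacher_prompt = f"Example solution:\n{demonstration}\n\nTask:\n{prompt}\n"
    teacher_ids = tokenizer(teacher_prompt, return_tensors="pt").input_ids

    # 2. Sample an on-policy response from the demonstration-conditioned model.
    with torch.no_grad():
        gen = model.generate(teacher_ids, max_new_tokens=128, do_sample=True)
    response_ids = gen[:, teacher_ids.shape[1]:]

    # 3. Teacher logits: score the sampled response with the demonstration.
    with torch.no_grad():
        t_logits = model(torch.cat([teacher_ids, response_ids], dim=1)).logits
        t_logits = t_logits[:, teacher_ids.shape[1] - 1 : -1]  # positions predicting the response

    # 4. Student logits: score the same response *without* the demonstration.
    student_ids = tokenizer(f"Task:\n{prompt}\n", return_tensors="pt").input_ids
    s_logits = model(torch.cat([student_ids, response_ids], dim=1)).logits
    s_logits = s_logits[:, student_ids.shape[1] - 1 : -1]

    # 5. Distill: KL between teacher and student next-token distributions,
    #    so the unconditioned model absorbs the demonstrated skill on-policy.
    loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.log_softmax(t_logits, dim=-1),
                    log_target=True, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()
```

Because the training signal comes from the model's own samples rather than the demonstration tokens directly, the update stays on-policy, which is the mechanism the abstract credits for reduced catastrophic forgetting relative to SFT.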

Idan Shenfeld, Mehul Damani, Jonas Hübotter, Pulkit Agrawal • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Skill Learning | Science Q&A and Previous Task Suite (HellaSwag, HumanEval, IFEval, MMLU, TruthfulQA, WinoGrande) | ScienceQA | 70.2 | 5 |
| Skill Learning | Tooluse and Previous Task Suite (HellaSwag, HumanEval, IFEval, MMLU, TruthfulQA, WinoGrande) | Tooluse | 70.6 | 5 |
| Skill Learning | Medical and Previous Task Suite (HellaSwag, HumanEval, IFEval, MMLU, TruthfulQA, WinoGrande) | Medical Score | 40.2 | 5 |
| Knowledge Acquisition | Wikipedia Knowledge Acquisition, in-distribution (test) | Accuracy (strict) | 89 | 5 |
| Knowledge Acquisition | Wikipedia Knowledge Acquisition, out-of-distribution (OOD) | Accuracy | 98 | 5 |
| Medical Reasoning | Medical task | Accuracy | 43.7 | 3 |

Other info

GitHub
