
Distillation-Guided Structural Transfer for Continual Learning Beyond Sparse Distributed Memory

About

Sparse neural systems are gaining traction for efficient continual learning due to their modularity and low interference. Architectures such as Sparse Distributed Memory Multi-Layer Perceptrons (SDMLP) construct task-specific subnetworks via Top-K activation and have shown resilience against catastrophic forgetting. However, their rigid modularity limits cross-task knowledge reuse and leads to performance degradation under high sparsity. We propose Selective Subnetwork Distillation (SSD), a structurally guided continual learning framework that treats distillation not as a regularizer but as a topology-aligned information conduit. SSD identifies neurons with high activation frequency and selectively distills knowledge within previous Top-K subnetworks and output logits, without requiring replay or task labels. This enables structural realignment while preserving sparse modularity. Experiments on Split CIFAR-10, CIFAR-100, and MNIST demonstrate that SSD improves accuracy, retention, and representation coverage, offering a structurally grounded solution for sparse continual learning.
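The two mechanisms the abstract describes, Top-K gating of activations and distillation restricted to frequently firing neurons inside the previous task's Top-K subnetwork, can be sketched as follows. This is a minimal NumPy illustration with hypothetical names (`top_k_mask`, `selective_distill_loss`, and the quantile threshold `tau` are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def top_k_mask(activations, k):
    # SDM-style Top-K gating: keep only the k largest activations per sample.
    idx = np.argsort(activations, axis=-1)[:, -k:]
    mask = np.zeros_like(activations)
    np.put_along_axis(mask, idx, 1.0, axis=-1)
    return mask

def selective_distill_loss(student_h, teacher_h, fire_freq, k, tau=0.5):
    # Distill only on neurons that fired frequently on the previous task
    # (hypothetical criterion: activation frequency above the tau-quantile),
    # and only inside the teacher's Top-K subnetwork -- so the distillation
    # signal follows the sparse topology rather than the full layer.
    freq_sel = (fire_freq >= np.quantile(fire_freq, tau)).astype(float)
    subnet = top_k_mask(teacher_h, k)          # (batch, n) binary mask
    m = subnet * freq_sel                      # broadcast over the batch
    denom = np.maximum(m.sum(), 1.0)
    return float((m * (student_h - teacher_h) ** 2).sum() / denom)
```

Restricting the mean-squared distillation term to the masked neurons is what makes the transfer "topology-aligned": neurons outside the old subnetwork remain free to specialize on the new task.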

Huiyan Xue, Xuming Ran, Yaxin Li, Qi Xu, Enhui Li, Yi Xu, Qiang Zhang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 Split | Accuracy | 40 | 61 |
| Continual Image Classification | CIFAR100 Split | Accuracy | 52 | 17 |
| Image Classification | CIFAR-10 Split | Average Accuracy | 80 | 12 |
| Classification | MNIST Split | Validation Accuracy | 86 | 7 |
| Continual Image Classification | CIFAR-10 Split (val) | Accuracy (Val) | 87 | 5 |
| Continual Image Classification | Split MNIST (val) | Accuracy (Val) | 86 | 4 |
| Continual Learning | CIFAR-10 Split (test) | Mean BWT | -0.1234 | 2 |
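Mean backward transfer (BWT), reported for CIFAR-10 Split (test), is conventionally defined as follows (this is the standard continual-learning definition, not a formula taken from this paper):

```latex
\mathrm{BWT} = \frac{1}{T-1} \sum_{i=1}^{T-1} \left( R_{T,i} - R_{i,i} \right)
```

where $R_{j,i}$ is the accuracy on task $i$ after training through task $j$, and $T$ is the number of tasks. Values near zero (such as -0.1234) indicate little forgetting of earlier tasks.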
