Distillation-Guided Structural Transfer for Continual Learning Beyond Sparse Distributed Memory
About
Sparse neural systems are gaining traction for efficient continual learning due to their modularity and low interference. Architectures such as Sparse Distributed Memory Multi-Layer Perceptrons (SDMLP) construct task-specific subnetworks via Top-K activation and have shown resilience against catastrophic forgetting. However, their rigid modularity limits cross-task knowledge reuse and leads to performance degradation under high sparsity. We propose Selective Subnetwork Distillation (SSD), a structurally guided continual learning framework that treats distillation not as a regularizer but as a topology-aligned information conduit. SSD identifies neurons with high activation frequency and selectively distills knowledge within previous Top-K subnetworks and output logits, without requiring replay or task labels. This enables structural realignment while preserving sparse modularity. Experiments on Split CIFAR-10, CIFAR-100, and MNIST demonstrate that SSD improves accuracy, retention, and representation coverage, offering a structurally grounded solution for sparse continual learning.
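The core mechanism described above (Top-K sparse activation plus distillation restricted to frequently active neurons in the previous task's subnetwork) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names, the mean-squared distillation loss, and the frequency-threshold criterion `freq_thresh` are assumptions for exposition.

```python
import numpy as np

def topk_mask(h, k):
    # SDMLP-style Top-K activation: keep the k largest activations
    # per sample, zero out the rest, yielding a sparse subnetwork.
    idx = np.argsort(h, axis=1)[:, -k:]
    mask = np.zeros_like(h)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    return mask

def selective_distill_loss(student_h, teacher_h, act_freq, freq_thresh):
    # Distill only on neurons whose activation frequency under the
    # previous task's Top-K subnetwork exceeds a threshold.
    # (Hypothetical selection rule; the paper selects high-frequency
    # neurons but the exact criterion is not reproduced here.)
    sel = act_freq > freq_thresh
    if not sel.any():
        return 0.0
    diff = student_h[:, sel] - teacher_h[:, sel]
    return float(np.mean(diff ** 2))

# Toy usage: two samples, three hidden units, k = 2.
h = np.array([[0.1, 0.5, 0.3],
              [0.9, 0.2, 0.4]])
mask = topk_mask(h, k=2)          # each row keeps exactly 2 active units
freq = np.array([0.9, 0.1, 0.8])  # per-neuron activation frequency
loss = selective_distill_loss(h, h, freq, freq_thresh=0.5)  # 0.0 (identical)
```

In a full training loop the teacher activations would come from the frozen previous-task model, and this loss would be added to the new-task objective alongside a logit-distillation term, requiring neither replay buffers nor task labels.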
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 Split | Accuracy | 40 | 61 |
| Continual Image Classification | CIFAR100 Split | Accuracy | 52 | 17 |
| Image Classification | CIFAR-10 Split | Average Accuracy | 80 | 12 |
| Classification | MNIST Split | Validation Accuracy | 86 | 7 |
| Continual Image Classification | CIFAR-10 Split (val) | Accuracy (Val) | 87 | 5 |
| Continual Image Classification | Split MNIST (val) | Accuracy (Val) | 86 | 4 |
| Continual Learning | CIFAR-10 Split (test) | Mean BWT | -0.1234 | 2 |