Learn-Prune-Share for Lifelong Learning
About
In lifelong learning, we wish to maintain and update a model (e.g., a neural network classifier) in the presence of new classification tasks that arrive sequentially. In this paper, we propose a learn-prune-share (LPS) algorithm that addresses the challenges of catastrophic forgetting, parsimony, and knowledge reuse simultaneously. LPS splits the network into disjoint task-specific partitions via an ADMM-based pruning strategy; because each task's weights are never overwritten, this eliminates forgetting while maintaining parsimony. Moreover, LPS integrates a novel selective knowledge sharing scheme into this ADMM optimization framework, enabling adaptive knowledge sharing in an end-to-end fashion. Comprehensive experimental results on two lifelong learning benchmark datasets and a challenging real-world radio frequency fingerprinting dataset demonstrate the effectiveness of our approach. Our experiments show that LPS consistently outperforms multiple state-of-the-art competitors.
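The core idea of disjoint task-specific partitions can be illustrated with a small sketch. The class names, the magnitude-based selection rule (standing in for the paper's ADMM-based pruning), and the fixed sharing choice below are all illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

class PartitionedLayer:
    """A weight matrix split into disjoint task-specific partitions via binary masks.

    Illustrative sketch only: magnitude pruning replaces the paper's
    ADMM-based pruning, and sharing is a fixed choice rather than the
    paper's learned selective sharing scheme.
    """

    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, out_dim))
        self.free = np.ones_like(self.W, dtype=bool)  # weights not yet claimed by any task
        self.masks = {}                               # task id -> binary partition mask

    def allocate(self, task_id, keep_ratio):
        """Claim the largest-magnitude free weights for a new task."""
        scores = np.where(self.free, np.abs(self.W), -np.inf)
        k = int(keep_ratio * self.W.size)
        idx = np.argpartition(scores.ravel(), -k)[-k:]
        mask = np.zeros(self.W.size, dtype=bool)
        mask[idx] = True
        mask = mask.reshape(self.W.shape) & self.free
        self.free &= ~mask  # partitions stay disjoint, so earlier tasks are never overwritten
        self.masks[task_id] = mask
        return mask

    def forward(self, x, task_id, share_from=()):
        """Run the task's own partition, optionally reusing earlier tasks' partitions."""
        mask = self.masks[task_id].copy()
        for t in share_from:
            mask |= self.masks[t]
        return x @ (self.W * mask)
```

Because each task trains only the weights inside its own mask, inference for an old task uses exactly the weights it was trained with, which is why this partitioning scheme incurs no forgetting by construction.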
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Class-incremental learning | VTAB B0 Inc10 | Last Accuracy | 77.1 | 38 |
| Class-incremental learning | ImageNet-100 (10T) | Average Accuracy (A_T) | 80.51 | 35 |
| Class-incremental learning | CUB200 (100-20) | Avg Accuracy | 71.9 | 22 |
| Class-incremental learning | CIFAR100 10 steps (incremental) | Average Accuracy | 81.9 | 7 |
| Class-incremental learning | ImageNet-R 10 steps (incremental) | Average Accuracy | 81.67 | 7 |
| Class-incremental learning | ObjectNet 20 steps (incremental) | Average Accuracy | 63.78 | 7 |
| Class-incremental learning | ImageNet-A 10 steps incremental | Average Accuracy | 49.39 | 7 |
| Class-incremental learning | OmniBenchmark 10 steps (incremental) | Average Accuracy | 73.36 | 7 |