
Learn-Prune-Share for Lifelong Learning

About

In lifelong learning, we wish to maintain and update a model (e.g., a neural network classifier) as new classification tasks arrive sequentially. In this paper, we propose a learn-prune-share (LPS) algorithm that addresses the challenges of catastrophic forgetting, parsimony, and knowledge reuse simultaneously. LPS splits the network into disjoint task-specific partitions via an ADMM-based pruning strategy, which eliminates forgetting while maintaining parsimony. Moreover, LPS integrates a novel selective knowledge sharing scheme into the same ADMM optimization framework, enabling adaptive knowledge sharing in an end-to-end fashion. Comprehensive experiments on two standard lifelong learning benchmarks and a challenging real-world radio frequency fingerprinting dataset show that LPS consistently outperforms multiple state-of-the-art competitors.
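The core mechanism described above can be illustrated with a toy sketch: each task claims a disjoint subset of one weight matrix via magnitude pruning (standing in for the ADMM Z-update, which projects weights onto a sparsity constraint), earlier tasks' weights are frozen so no forgetting occurs, and a task may optionally reuse earlier partitions. All class and function names here are illustrative, not from the paper's code; training is replaced by a random draw, and the sharing selection is given by hand rather than learned end-to-end as in LPS.

```python
import numpy as np

def admm_sparse_projection(w, keep_frac):
    """Euclidean projection of w onto matrices with at most keep_frac
    nonzeros (keep the largest-magnitude entries) -- the role played by
    the Z-update in ADMM-based pruning."""
    k = max(1, int(round(keep_frac * w.size)))
    thresh = np.partition(np.abs(w).ravel(), -k)[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

class LPSLayer:
    """Toy model of learn-prune-share on one weight matrix: each task is
    assigned a disjoint boolean mask via pruning; assigned weights are
    frozen, so later tasks cannot overwrite them (no forgetting)."""

    def __init__(self, shape, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.standard_normal(shape)
        self.free = np.ones(shape, dtype=bool)  # weights not yet assigned
        self.masks = {}                         # task id -> boolean mask

    def learn_task(self, task_id, keep_frac):
        # "Learn": update only the still-free weights for the new task
        # (a fresh random draw stands in for gradient training here).
        self.W[self.free] = self.rng.standard_normal(int(self.free.sum()))
        # "Prune": project the free weights onto the sparsity budget and
        # assign the surviving positions to this task.
        candidate = np.where(self.free, self.W, 0.0)
        mask = admm_sparse_projection(candidate, keep_frac) != 0.0
        self.masks[task_id] = mask
        self.free &= ~mask  # assigned weights are frozen from now on

    def effective_weights(self, task_id, share_from=()):
        # "Share": a task may selectively reuse earlier tasks' partitions;
        # LPS learns this selection end-to-end, here it is supplied.
        mask = self.masks[task_id].copy()
        for t in share_from:
            mask |= self.masks[t]
        return np.where(mask, self.W, 0.0)

layer = LPSLayer((8, 8))
layer.learn_task(0, keep_frac=0.25)   # task 0 claims 25% of the weights
layer.learn_task(1, keep_frac=0.25)   # task 1 claims a disjoint 25%
assert not (layer.masks[0] & layer.masks[1]).any()  # partitions never overlap
```

Because each task's forward pass only ever reads weights inside its own (plus explicitly shared) masks, revisiting an old task reproduces its original predictions exactly; the sparsity budget per task is what keeps the overall model parsimonious.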

Zifeng Wang, Tong Jian, Kaushik Chowdhury, Yanzhi Wang, Jennifer Dy, Stratis Ioannidis • 2020

Related benchmarks

Task                       | Dataset                              | Metric                 | Result | Rank
Class-incremental learning | VTAB B0 Inc10                        | Last Accuracy          | 77.1   | 38
Class-incremental learning | ImageNet-100 (10T)                   | Average Accuracy (A_T) | 80.51  | 35
Class-incremental learning | CUB200 (100-20)                      | Avg Accuracy           | 71.9   | 22
Class-incremental learning | CIFAR100 10 steps (incremental)      | Average Accuracy       | 81.9   | 7
Class-incremental learning | ImageNet-R 10 steps (incremental)    | Average Accuracy       | 81.67  | 7
Class-incremental learning | ObjectNet 20 steps (incremental)     | Average Accuracy       | 63.78  | 7
Class-incremental learning | ImageNet-A 10 steps incremental      | Average Accuracy       | 49.39  | 7
Class-incremental learning | OmniBenchmark 10 steps (incremental) | Average Accuracy       | 73.36  | 7
