
CIP-Net: Continual Interpretable Prototype-based Network

About

Continual learning constrains models to learn new tasks over time without forgetting what they have already learned. A key challenge in this setting is catastrophic forgetting, where learning new information causes the model to lose performance on previous tasks. Recently, explainable AI has been proposed as a promising way to better understand and reduce forgetting. In particular, self-explainable models are useful because they generate explanations during prediction, which can help preserve knowledge. However, most existing explainable approaches rely on post-hoc explanations or require additional memory for each new task, which limits their scalability. In this work, we introduce CIP-Net, an exemplar-free self-explainable prototype-based model designed for continual learning. CIP-Net avoids storing past examples and maintains a simple architecture, while still providing useful explanations and strong performance. We demonstrate that CIP-Net achieves state-of-the-art performance compared to previous exemplar-free and self-explainable methods in both task- and class-incremental settings, while incurring significantly lower memory overhead. This makes it a practical and interpretable solution for continual learning.
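The abstract does not detail CIP-Net's architecture, but the general prototype-based, self-explainable idea it builds on can be sketched: an input is embedded and classified by its similarity to learned per-class prototypes, and those same similarities serve as the explanation. The sketch below is a minimal illustration of that idea only; all names, shapes, and the nearest-prototype scoring rule are assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a frozen feature extractor maps an input to an
# embedding; each class keeps a small set of learned prototype vectors.
EMBED_DIM, PROTOS_PER_CLASS, NUM_CLASSES = 8, 2, 3
prototypes = rng.normal(size=(NUM_CLASSES, PROTOS_PER_CLASS, EMBED_DIM))

def predict(embedding):
    """Classify by similarity to the nearest prototype of each class.

    The per-prototype similarities double as an explanation: they
    indicate which learned prototype the input most resembles.
    """
    # Squared Euclidean distance from the embedding to every prototype.
    dists = np.sum((prototypes - embedding) ** 2, axis=-1)  # (classes, protos)
    sims = -dists.min(axis=1)  # best (closest) prototype per class
    return int(np.argmax(sims)), sims

embedding = rng.normal(size=EMBED_DIM)  # stand-in for an extracted feature
label, similarities = predict(embedding)
```

Because classification depends only on the prototype vectors themselves, no past training examples need to be stored, which is the sense in which such a model can be exemplar-free.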

Federico Di Valerio, Michela Proietti, Alessio Ragno, Roberto Capobianco • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Image Classification | CUB | -- | -- | 282
Class-incremental learning | CUB200 10 Tasks | FN (Final Acc) | 84.6 | 59
Task-Incremental Learning | CUB 4 tasks | Final Avg Acc | 77.2 | 14
Class-incremental learning | CUB 20 tasks | Final Avg Accuracy | 18 | 7
Task-Incremental Learning | CUB 20 tasks | Final Average Accuracy | 84.7 | 7
Task-Incremental Learning | Stanford Cars | -- | -- | 6
Class-incremental learning | Stanford Cars | Accuracy (4 Tasks) | 58.9 | 5
