
Dynamic Feedback Engines: Layer-Wise Control for Self-Regulating Continual Learning

About

Continual learning aims to acquire new tasks while preserving performance on previously learned ones, but most methods struggle with catastrophic forgetting. Existing approaches typically treat all layers uniformly, often trading stability for plasticity or vice versa. However, different layers naturally exhibit varying levels of uncertainty (entropy) when classifying tasks. High-entropy layers tend to underfit by failing to capture task-specific patterns, while low-entropy layers risk overfitting by becoming overly confident and specialized. To address this imbalance, we propose an entropy-aware continual learning method that employs a dynamic feedback mechanism to regulate each layer based on its entropy. Specifically, our approach reduces entropy in high-entropy layers to mitigate underfitting and increases entropy in overly confident layers to alleviate overfitting. This adaptive regulation encourages the model to converge to wider local minima, which have been shown to improve generalization. Our method is general and can be seamlessly integrated with both replay- and regularization-based approaches. Experiments on various datasets demonstrate substantial performance gains over state-of-the-art continual learning baselines.
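As a rough illustration of the idea described above, the sketch below measures the Shannon entropy of a per-layer task distribution and assigns an opposite-signed regularization weight depending on whether the layer is under- or over-confident. All names, thresholds, and the linear feedback rule here are hypothetical simplifications, not the paper's actual formulation:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    # Shannon entropy (in nats) of a probability distribution.
    return -sum(p * math.log(p + 1e-12) for p in probs)

def feedback_weight(layer_entropy, low, high):
    """Hypothetical feedback rule: a positive weight penalizes entropy
    (pushing a high-entropy, underfitting layer to be more decisive),
    a negative weight rewards entropy (softening an over-confident,
    low-entropy layer), and zero leaves a balanced layer alone."""
    if layer_entropy > high:
        return 1.0
    if layer_entropy < low:
        return -1.0
    return 0.0

# Toy per-layer task logits (hypothetical probes, not the paper's setup).
flat_logits = [0.1, 0.0, -0.1, 0.05]   # near-uniform -> high entropy
peaked_logits = [9.0, 0.0, -1.0, 0.5]  # very confident -> low entropy

for name, logits in [("layer1", flat_logits), ("layer4", peaked_logits)]:
    h = entropy(softmax(logits))
    w = feedback_weight(h, low=0.3, high=1.2)
    reg = w * h  # entropy term added to this layer's share of the loss
```

In this toy setup the near-uniform layer gets a positive weight (its entropy term is minimized, reducing underfitting) while the peaked layer gets a negative weight (its entropy term is maximized, discouraging overconfidence), mirroring the two-sided regulation the abstract describes.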

Hengyi Wu, Zhenyi Wang, Heng Huang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Continual Learning | CIFAR100 Split | – | 85 |
| Continual Learning | Split CIFAR-100 10 tasks | Accuracy: 56.3 | 60 |
| Continual Learning | Tiny-ImageNet Split 100 tasks (test) | AF (%): 8.95 | 60 |
| Continual Learning | Split CIFAR-100 (10 tasks) (test) | Accuracy: 33.9 | 60 |
| Online Continual Learning | CUB-200 | A_Final: 43.53 | 12 |
