
Diffusion-Classifier Synergy: Reward-Aligned Learning via Mutual Boosting Loop for FSCIL

About

Few-Shot Class-Incremental Learning (FSCIL) challenges models to sequentially learn new classes from minimal examples without forgetting prior knowledge, a task complicated by the stability-plasticity dilemma and data scarcity. Current FSCIL methods often struggle to generalize because they rely on limited datasets. While diffusion models offer a path to data augmentation, applying them directly can lead to semantic misalignment or ineffective guidance. This paper introduces Diffusion-Classifier Synergy (DCS), a novel framework that establishes a mutual boosting loop between a diffusion model and the FSCIL classifier. DCS utilizes a reward-aligned learning strategy, in which a dynamic, multi-faceted reward function derived from the classifier's state directs the diffusion model. This reward system operates at two levels: the feature level ensures semantic coherence and diversity via prototype-anchored maximum mean discrepancy and dimension-wise variance matching, while the logits level promotes exploratory image generation and enhances inter-class discriminability through confidence recalibration and cross-session confusion-aware mechanisms. This co-evolutionary process, in which generated images refine the classifier and the improved classifier state yields better reward signals, achieves state-of-the-art performance on FSCIL benchmarks, significantly enhancing both knowledge retention and new-class learning.
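To make the feature-level reward concrete, here is a minimal sketch of what a prototype-anchored MMD term combined with dimension-wise variance matching could look like. This is an illustrative reconstruction, not the paper's implementation: the function names, the RBF kernel choice, the equal weighting of the two terms, and the centering of both batches on the class prototype are all assumptions.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def feature_level_reward(gen_feats, real_feats, prototype, gamma=1.0):
    """Hypothetical feature-level reward (higher is better).

    Rewards generated features whose distribution matches the real class
    features (small squared MMD, anchored at the class prototype) and
    whose per-dimension variance matches the real features' variance.
    """
    # Prototype anchoring (assumption): center both batches on the prototype.
    g = gen_feats - prototype
    r = real_feats - prototype
    # Biased estimate of squared MMD with an RBF kernel.
    mmd2 = (rbf_kernel(g, g, gamma).mean()
            - 2.0 * rbf_kernel(g, r, gamma).mean()
            + rbf_kernel(r, r, gamma).mean())
    # Dimension-wise variance-matching penalty.
    var_gap = np.abs(g.var(axis=0) - r.var(axis=0)).mean()
    # Reward is the negative of the combined discrepancy (equal weights assumed).
    return -(mmd2 + var_gap)
```

Under this sketch, generated batches drawn near the real class distribution receive higher reward than semantically drifted ones, which is the signal the diffusion model would be optimized against.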

Ruitao Wu, Yifan Zhao, Guangyao Chen, Jia Li• 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Few-Shot Class-Incremental Learning | CUB-200 | Session 1 Accuracy: 77.32 | 85 |
| Few-Shot Class-Incremental Learning | CIFAR100 | Accuracy (S0): 81.09 | 77 |
| Few-Shot Class-Incremental Learning | MiniImagenet | Avg Accuracy: 68.14 | 41 |
