Quantum-Gated Task-interaction Knowledge Distillation for Pre-trained Model-based Class-Incremental Learning

About

Class-incremental learning (CIL) aims to continuously accumulate knowledge from a stream of tasks and construct a unified classifier over all seen classes. Although pre-trained models (PTMs) have shown promising performance in CIL, they still struggle with the entanglement of multi-task subspaces, which leads to catastrophic forgetting when task-routing parameters are poorly calibrated or task-level representations are rigidly fixed. To address this issue, we propose a novel Quantum-Gated Task-interaction Knowledge Distillation (QKD) framework that leverages quantum gating to guide inter-task knowledge transfer. Specifically, we introduce a quantum-gated task modulation mechanism that models the relational dependencies among task embeddings, dynamically capturing sample-to-task relevance for both joint training and inference across streaming tasks. Guided by the quantum gating outputs, we then perform task-interaction knowledge distillation from old adapters to new ones, weighted by these task-embedding-level correlations, enabling the model to bridge the representation gaps between independent task subspaces. Extensive experiments demonstrate that QKD effectively mitigates forgetting and achieves state-of-the-art performance.

Linjie Li, Huiyu Xiao, Jiarui Cao, Zhenyu Wu, Yang Ji • 2026
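The abstract describes two coupled steps: a gating function scores each sample against the stored task embeddings, and those relevance weights scale a per-task distillation term from old adapters to the new one. The paper's quantum gating circuit and exact distillation objective are not specified here, so the sketch below is only a minimal illustration of that flow; the softmax-similarity gate (standing in for the quantum gate), the function names task_gating_weights and task_interaction_kd_loss, and all shapes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def task_gating_weights(sample_feat, task_embeddings, temperature=1.0):
    """Hypothetical sample-to-task relevance: softmax over similarities
    between a sample's feature and each task embedding. A classical
    stand-in for the paper's (unspecified) quantum gating mechanism."""
    # sample_feat: (d,), task_embeddings: (T, d) -> weights: (T,)
    sims = task_embeddings @ sample_feat / temperature
    return F.softmax(sims, dim=0)

def task_interaction_kd_loss(new_logits, old_logits_per_task, weights):
    """Distill each old adapter's output into the new adapter,
    with each KL term scaled by its gating (correlation) weight."""
    # new_logits: (C,), old_logits_per_task: (T, C), weights: (T,)
    log_p_new = F.log_softmax(new_logits, dim=-1)
    loss = new_logits.new_zeros(())
    for w, old_logits in zip(weights, old_logits_per_task):
        p_old = F.softmax(old_logits, dim=-1)
        # KL(p_old || p_new), pulling the new adapter toward old-task knowledge
        loss = loss + w * F.kl_div(log_p_new, p_old, reduction="sum")
    return loss

# Toy usage with random tensors (shapes only; not from the paper).
d, T, C = 16, 3, 10
feat = torch.randn(d)
task_emb = torch.randn(T, d)
w = task_gating_weights(feat, task_emb)
loss = task_interaction_kd_loss(torch.randn(C), torch.randn(T, C), w)
```

Under this reading, the gating weights let distillation emphasize old tasks whose subspaces are most relevant to the current sample, rather than distilling uniformly from every past adapter.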

Related benchmarks

Task                         Dataset               Result                Rank
Class-incremental learning   ImageNet-R B0 Inc20   Last Accuracy 77.67   79
Class-incremental learning   CIFAR-100 B0_Inc10    Avg Accuracy 94.08    43
