
Stable-Drift: A Patient-Aware Latent Drift Replay Method for Stabilizing Representations in Continual Learning

About

When deep learning models are sequentially trained on new data, they tend to abruptly lose performance on previously learned tasks, a critical failure known as catastrophic forgetting. This challenge severely limits the deployment of AI in medical imaging, where models must continually adapt to data from new hospitals without compromising established diagnostic knowledge. To address this, we introduce a latent drift-guided replay method that identifies and replays samples with high representational instability. Specifically, our method quantifies this instability via latent drift: the change in a sample's internal feature representation after naive domain adaptation. To ensure diversity and clinical relevance, we aggregate drift at the patient level; our memory buffer stores, for each patient, the slices exhibiting the greatest multi-layer representation shift. Evaluated on a cross-hospital COVID-19 CT classification task using state-of-the-art CNN and Vision Transformer backbones, our method substantially reduces forgetting compared to naive fine-tuning and random replay. This work highlights latent drift as a practical and interpretable replay signal for advancing robust continual learning in real-world medical settings.
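The buffer-construction idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes per-slice drift is measured as the L2 distance between features before and after naive adaptation, summed over layers (one plausible reading of "multi-layer representation shift"), and all function and variable names here are hypothetical.

```python
import numpy as np

def latent_drift(feats_before, feats_after):
    """Per-slice drift for one patient, summed across layers.

    feats_before / feats_after: dict mapping layer name -> array of shape
    (n_slices, feature_dim), extracted before and after naive adaptation.
    """
    n_slices = next(iter(feats_before.values())).shape[0]
    drift = np.zeros(n_slices)
    for layer in feats_before:
        # L2 distance between the old and new representation of each slice
        drift += np.linalg.norm(feats_after[layer] - feats_before[layer], axis=1)
    return drift

def build_replay_buffer(patients, k=2):
    """Keep, per patient, the k slices with the largest latent drift.

    patients: dict patient_id -> (feats_before, feats_after, slice_ids).
    Returns dict patient_id -> list of selected slice ids.
    """
    buffer = {}
    for pid, (fb, fa, slice_ids) in patients.items():
        drift = latent_drift(fb, fa)
        top = np.argsort(drift)[::-1][:k]  # indices of highest-drift slices
        buffer[pid] = [slice_ids[i] for i in top]
    return buffer
```

Selecting per patient (rather than globally across all slices) is what gives the buffer its diversity: every patient contributes their most unstable slices, so no single volatile patient dominates the replay memory.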

Paraskevi-Antonia Theofilou, Anuhya Thota, Stefanos Kollias, Mamatha Thota • 2025

Related benchmarks

Task                    Dataset             Result                         Rank
Medical Classification  H2 Target Hospital  Per-Patient Accuracy: 93.75    8
Medical Classification  H1 Source Hospital  Per-Patient Accuracy: 92.45    8
Continual Learning      COVID-19 CT         Forward Transfer (FWT): 0.768  6
