
ACD-U: Asymmetric co-teaching with machine unlearning for robust learning with noisy labels

About

Deep neural networks are prone to memorizing incorrect labels during training, which degrades their generalizability. Although recent methods have combined sample selection with semi-supervised learning (SSL) to exploit the memorization effect -- where networks learn from clean data before noisy data -- they cannot correct selection errors once a sample is misclassified. To overcome this, we propose asymmetric co-teaching with different architectures and machine unlearning (ACD-U), a framework that addresses this limitation through two core mechanisms. First, its asymmetric co-teaching pairs a contrastive language-image pretraining (CLIP)-pretrained vision Transformer with a convolutional neural network (CNN), leveraging their complementary learning behaviors: the pretrained model provides stable predictions, whereas the CNN adapts throughout training. This asymmetry, where the vision Transformer is trained only on clean samples and the CNN is trained through SSL, effectively mitigates confirmation bias. Second, selective unlearning enables post-hoc error correction by identifying incorrectly memorized samples through loss trajectory analysis and CLIP consistency checks, and then removing their influence via Kullback--Leibler divergence-based forgetting. This approach shifts the learning paradigm from passive error avoidance to active error correction. Experiments on synthetic and real-world noisy datasets, including CIFAR-10/100, CIFAR-N, WebVision, Clothing1M, and Red Mini-ImageNet, demonstrate state-of-the-art performance, particularly in high-noise regimes and under instance-dependent noise. The code is publicly available at https://github.com/meruemon/ACD-U.
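The abstract's "Kullback--Leibler divergence-based forgetting" is not specified in detail here, but a common way to realize such forgetting is to push the model's prediction on a flagged sample toward the uniform distribution, erasing the over-confident memorized label. A minimal numpy sketch of that idea, where the logits, learning rate, and gradient-descent setup are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl_to_uniform(p):
    """KL(p || u) for uniform u over K classes: sum_i p_i * log(p_i * K)."""
    return float(np.sum(p * np.log(p * p.size)))

# Hypothetical logits a network has memorized for one mislabeled sample:
# it is over-confident in the (wrong) class 0.
logits = np.array([4.0, 0.5, 0.2])
kl_start = kl_to_uniform(softmax(logits))

# "Forgetting" as gradient descent on KL(p || uniform) w.r.t. the logits.
# For p = softmax(z), the gradient is d KL / d z_j = p_j * (log(p_j*K) - KL).
for _ in range(100):
    p = softmax(logits)
    kl = kl_to_uniform(p)
    logits -= 0.5 * p * (np.log(p * p.size) - kl)

p_final = softmax(logits)
```

After these steps the prediction is near-uniform, so the sample's memorized (wrong) label no longer dominates; in the full method this update would be applied only to samples flagged by the loss-trajectory and CLIP-consistency checks.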

Reo Fukunaga, Soh Yoshida, Mitsuji Muneyasu• 2026

Related benchmarks

Task                   Dataset                    Metric                Result   Rank
Image Classification   Clothing1M (test)          --                    --        574
Image Classification   CIFAR-10N (Worst)          Accuracy              94.64      83
Image Classification   CIFAR-10N (Aggregate)      Accuracy              96.46      78
Image Classification   Red Mini-ImageNet (test)   Accuracy              61.52      75
Image Classification   ImageNet (val)             Top-1 Accuracy        81.0       57
Image Classification   WebVision (val)            Top-1 Accuracy        82.8       49
Image Classification   CIFAR-100 Noisy            Accuracy              75.98      24
Image Classification   CIFAR-10 (test)            Accuracy (Sym. 20%)   97.2       14
Image Classification   CIFAR-10N (Random)         Accuracy              96.28       4
