
Robust Self-Training with Closed-loop Label Correction for Learning from Noisy Labels

About

Training deep neural networks with noisy labels remains a significant challenge, often leading to degraded performance. Existing methods for handling label noise typically rely on transition matrices, noise detection, or meta-learning techniques, but they often use noisy samples inefficiently and incur high computational costs. In this paper, we propose a self-training label correction framework using decoupled bilevel optimization, in which a classifier and a neural correction function co-evolve. Leveraging a small clean dataset, our method employs noisy posterior simulation and intermediate features to transfer ground-truth knowledge, forming a closed-loop feedback system that prevents error amplification. Theoretical guarantees underpin the stability of our approach, and extensive experiments on benchmark datasets such as CIFAR and Clothing1M confirm state-of-the-art performance with reduced training time, highlighting its practical applicability for learning from noisy labels.
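To make the closed-loop idea concrete, here is a minimal toy sketch of self-training with label correction: a classifier is trained on (possibly corrected) noisy labels, a correction rule relabels samples where the classifier's posterior confidently disagrees, and a small clean subset calibrates how aggressive the correction is. This is an illustrative assumption-laden sketch (a logistic-regression classifier, a confidence-threshold correction rule, synthetic 2-D data), not the paper's actual bilevel implementation.

```python
# Toy sketch of closed-loop self-training with label correction.
# The classifier, the threshold-based correction rule, and the
# clean-set calibration below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=400, flip=0.3):
    """Synthetic 2-class data with symmetric label noise."""
    X = rng.normal(size=(n, 2))
    y_true = (X[:, 0] + X[:, 1] > 0).astype(int)
    y_noisy = y_true.copy()
    flipped = rng.random(n) < flip
    y_noisy[flipped] = 1 - y_noisy[flipped]
    return X, y_true, y_noisy

def train_classifier(X, y, epochs=200, lr=0.5):
    """Plain logistic regression fit by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                      # gradient of the logistic loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def correct_labels(X, y, w, b, tau):
    """Relabel samples whose posterior confidently disagrees."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    pred = (p > 0.5).astype(int)
    confident = np.abs(p - 0.5) > tau
    y_new = y.copy()
    y_new[confident] = pred[confident]
    return y_new

X, y_true, y = make_data()
X_clean, y_clean = X[:40], y_true[:40]  # small clean subset
init_acc = (y == y_true).mean()         # accuracy of raw noisy labels

for _ in range(3):                      # closed loop: train -> correct -> retrain
    w, b = train_classifier(X, y)
    # Calibrate the correction threshold on the clean set, so the
    # feedback loop is anchored to ground truth and errors do not amplify.
    best_tau, best_acc = 0.4, 0.0
    for tau in (0.2, 0.3, 0.4):
        acc = (correct_labels(X_clean, y_clean, w, b, tau) == y_clean).mean()
        if acc > best_acc:
            best_acc, best_tau = acc, tau
    y = correct_labels(X, y, w, b, best_tau)

label_acc = (y == y_true).mean()        # corrected labels vs. ground truth
```

With 30% symmetric noise, the corrected labels end up markedly more accurate than the raw noisy labels, illustrating why recycling noisy samples through a clean-set-anchored correction loop is more sample-efficient than simply discarding suspect samples.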

Zhanhui Lin, Yanlin Liu, Sanping Zhou • 2026

Related benchmarks

Task                 | Dataset                                               | Metric                           | Result | Rank
Image Classification | Clothing1M (test)                                     | Accuracy                         | 80.23  | 574
Image Classification | CIFAR-10 (test)                                       | Test Accuracy (Symmetric, η=0.2) | 92.48  | 12
Image Classification | CIFAR-100 (test)                                      | Test Accuracy (Symmetric, η=0.2) | 68.73  | 12
Image Classification | CIFAR-100 Instance-dependent Noise (1k clean samples) | Accuracy (η=0.2)                 | 67.42  | 10
Image Classification | CIFAR-100 with Symmetric Noise (1k clean samples)     | Accuracy (η=0.2)                 | 68.13  | 10
Image Classification | CIFAR-100 with Asymmetric Noise (1k clean samples)    | Accuracy (η=0.2)                 | 69.92  | 10
