
Is the Information Bottleneck Robust Enough? Towards Label-Noise Resistant Information Bottleneck Learning

About

The Information Bottleneck (IB) principle facilitates effective representation learning by preserving label-relevant information while compressing irrelevant information. However, its strong reliance on accurate labels makes it inherently vulnerable to label noise, which is prevalent in real-world scenarios and leads to significant performance degradation and overfitting. To address this issue, we propose LaT-IB, a novel Label-noise ResistanT Information Bottleneck method that introduces a "Minimal-Sufficient-Clean" (MSC) criterion. Instantiated as a mutual information regularizer that retains task-relevant information while discarding noise, MSC addresses standard IB's vulnerability to noisy label supervision. To achieve this, LaT-IB employs a noise-aware latent disentanglement that decomposes the latent representation into components aligned with the clean label space and the noise space. Theoretically, we first derive mutual information bounds for each component of our objective (prediction, compression, and disentanglement), and further prove that optimizing it encourages representations invariant to input noise and separates clean from noisy label information. Furthermore, we design a three-phase training framework, consisting of Warmup, Knowledge Injection, and Robust Training, to progressively guide the model toward noise-resistant representations. Extensive experiments demonstrate that LaT-IB achieves superior robustness and efficiency under label noise, enhancing its applicability in real-world scenarios.
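The abstract's two key mechanisms, splitting the latent code into clean-aligned and noise-aligned parts and scheduling the objective over three phases, can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the dimension split, the decorrelation penalty (a cheap stand-in for the paper's mutual-information bounds), and the phase boundaries (`warmup_end`, `inject_end`) are all hypothetical choices.

```python
import numpy as np

def split_latent(z, d_clean):
    """Split a latent batch z into clean-aligned and noise-aligned parts.
    The first d_clean dimensions are assumed clean-aligned (hypothetical layout)."""
    return z[:, :d_clean], z[:, d_clean:]

def cross_correlation_penalty(z_c, z_n):
    """Decorrelation proxy for the disentanglement term: squared Frobenius
    norm of the cross-covariance between the two latent blocks.
    (LaT-IB uses mutual-information bounds; this is only a simple stand-in.)"""
    z_c = z_c - z_c.mean(axis=0)
    z_n = z_n - z_n.mean(axis=0)
    n = z_c.shape[0]
    cov = (z_c.T @ z_n) / max(n - 1, 1)
    return float((cov ** 2).sum())

def phase_weights(epoch, warmup_end=5, inject_end=10):
    """Illustrative three-phase schedule: Warmup fits the predictor only,
    Knowledge Injection phases in compression, Robust Training applies the
    full objective. The weight values are placeholders."""
    if epoch < warmup_end:
        return {"pred": 1.0, "compress": 0.0, "disentangle": 0.0}
    if epoch < inject_end:
        return {"pred": 1.0, "compress": 0.1, "disentangle": 0.1}
    return {"pred": 1.0, "compress": 0.1, "disentangle": 1.0}
```

In a training loop, the total loss for an epoch would then be the phase-weighted sum of the prediction loss, the compression term, and `cross_correlation_penalty(z_c, z_n)`.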

Yi Huang, Qingyun Sun, Yisen Gao, Haonan Yuan, Xingcheng Fu, Jianxin Li • 2025

Related benchmarks

| Task | Dataset | Accuracy (%) | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-10N (Worst) | 87.95 | 78 |
| Image Classification | CIFAR-10N (Aggregate) | 94.17 | 74 |
| Image Classification | CIFAR-100 Sym-20% (test) | 75.79 | 33 |
| Image Classification | CIFAR-100 Sym-50% (test) | 67.28 | 32 |
| Image Classification | ANIMAL-10N | 88.49 | 32 |
| Image Classification | CIFAR-10 40% asymmetric noise | 88.89 | 27 |
| Node Classification | Pubmed 20% pair noise (test) | 73.03 | 24 |
| Node Classification | Pubmed 20% Uniform Noise (test) | 73.4 | 24 |
| Node Classification | Cora (test) | -- | 19 |
| Node Classification | DBLP 20% Uniform Noise (test) | 71.13 | 18 |
Showing 10 of 30 benchmark rows.
