
Noise-Robust Fine-Tuning of Pretrained Language Models via External Guidance

About

Adopting the two-stage paradigm of pretraining followed by fine-tuning, Pretrained Language Models (PLMs) have achieved substantial advances in natural language processing. However, in real-world scenarios, data labels are often noisy due to complex annotation processes, making it essential to develop strategies for fine-tuning PLMs with such noisy labels. To this end, we introduce an approach for fine-tuning PLMs with noisy labels that incorporates guidance from Large Language Models (LLMs) such as ChatGPT. This guidance helps distinguish clean samples from noisy ones and provides supplementary information beyond the noisy labels, thereby improving the fine-tuning of PLMs. Extensive experiments on synthetic and real-world noisy datasets demonstrate the advantages of our framework over state-of-the-art baselines.

Song Wang, Zhen Tan, Ruocheng Guo, Jundong Li • 2023
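
As described in the abstract, the method uses LLM predictions both to separate likely-clean from likely-noisy samples and as supplementary supervision beyond the noisy labels. Below is a minimal Python sketch of that partitioning step; the query_llm helper, the Sample fields, and the simple agreement heuristic are illustrative assumptions, not the paper's actual implementation.

```python
"""Hedged sketch of LLM-guided sample partitioning for noisy-label fine-tuning.

Idea: query an LLM (e.g., ChatGPT) for each training sample, compare its
prediction with the (possibly noisy) dataset label, treat agreeing samples as
likely-clean, and keep the LLM prediction as supplementary supervision for the
rest. All names here (Sample, partition, query_llm) are hypothetical.
"""
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Sample:
    text: str
    noisy_label: int      # label from the noisy annotation process
    llm_label: int = -1   # supplementary label suggested by the LLM


def partition(samples: List[Sample],
              query_llm: Callable[[str], int]) -> Tuple[List[Sample], List[Sample]]:
    """Split samples into likely-clean and likely-noisy sets via LLM agreement."""
    clean, noisy = [], []
    for s in samples:
        s.llm_label = query_llm(s.text)   # ask the LLM for its predicted label
        if s.llm_label == s.noisy_label:
            clean.append(s)               # agreement -> probably clean
        else:
            noisy.append(s)               # disagreement -> handle with care
    return clean, noisy


if __name__ == "__main__":
    # Toy stand-in for a real LLM call; a real system would prompt an LLM here.
    fake_llm = lambda text: 1 if "sports" in text else 0
    data = [Sample("sports news about the finals", 1),
            Sample("sports recap of the match", 0),   # mislabeled example
            Sample("world politics briefing", 0)]
    clean, noisy = partition(data, fake_llm)
    print(f"clean: {len(clean)}, noisy: {len(noisy)}")
    # Likely-noisy samples retain s.llm_label as supplementary supervision,
    # e.g., mixed into the fine-tuning loss as a soft target.
```

In a full pipeline, the clean split would drive standard supervised fine-tuning, while the noisy split would be down-weighted or relabeled using the LLM's suggestions; the exact weighting is a design choice not specified here.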

Related benchmarks

Task | Dataset | Metric | Result | Rank
News topic classification | 20 Newsgroups, 20% Asymmetric Noise | Accuracy | 83.7 | 24
News topic classification | 20 Newsgroups, 20% Instance-Dependent Noise | Accuracy | 83.61 | 24
News topic classification | 20 Newsgroups, 40% Asymmetric Noise | Accuracy | 81.97 | 24
News topic classification | 20 Newsgroups, 20% Symmetric Noise | Accuracy | 82.04 | 24
News topic classification | 20 Newsgroups, 40% Instance-Dependent Noise | Accuracy | 80.49 | 24
News topic classification | 20 Newsgroups, 40% Symmetric Noise | Accuracy | 76.93 | 24
Text Classification | 20NG (test) | -- | -- | 18
Text Classification | AGNews (test) | Perturbation S (20%) | 93.09 | 10
Text Classification | SST-5 20%S (test) | Accuracy | 55 | 10
Text Classification | SST-2 20%S (test) | Accuracy | 88.07 | 10

(Showing 10 of 29 rows.)
