
Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning

About

Semi-supervised learning (SSL) has witnessed remarkable progress, resulting in numerous method variations. However, practitioners often encounter difficulties when attempting to deploy these methods, owing to their subpar performance. In this paper, we present a novel SSL approach named FineSSL that addresses this limitation by adapting pre-trained foundation models. We identify the aggregated biases and cognitive deviation problems inherent in foundation models, and propose a simple yet effective solution that imposes balanced margin softmax and decoupled label smoothing. Through extensive experiments, we demonstrate that FineSSL sets a new state of the art for SSL on multiple benchmark datasets, reduces training cost more than sixfold, and integrates seamlessly with various fine-tuning and modern SSL algorithms. The source code is available at https://github.com/Gank0078/FineSSL.
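To make the "balanced margin softmax" idea concrete, below is a minimal sketch of a margin-adjusted cross-entropy loss. It is not the paper's implementation: the `class_freq` prior, the `scale` parameter, and the log-prior form of the margin are all assumptions for illustration (the paper derives its margins differently; see the linked repository for the actual method).

```python
import numpy as np

def balanced_margin_softmax_loss(logits, labels, class_freq, scale=1.0):
    """Cross-entropy with class-dependent logit margins (illustrative sketch).

    Adding a log-prior margin boosts the logits of classes the model
    favors, so the loss demands a larger raw margin to predict them,
    counteracting class bias. `class_freq` (a probability vector over
    classes) and the log-prior margin are assumptions of this sketch.
    """
    margins = scale * np.log(np.asarray(class_freq) + 1e-12)
    adjusted = logits + margins  # per-class margin applied to every sample
    # numerically stable softmax cross-entropy on the adjusted logits
    adjusted = adjusted - adjusted.max(axis=1, keepdims=True)
    log_probs = adjusted - np.log(np.exp(adjusted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With a uniform `class_freq`, the margin is the same constant for every class and the loss reduces to standard softmax cross-entropy; a skewed prior shifts the decision boundaries away from over-represented classes.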

Kai Gan, Tong Wei • 2024

Related benchmarks

Task                 | Dataset                | Metric   | Result | Rank
---------------------|------------------------|----------|--------|-----
EEG Classification   | SEED                   | AUPRC    | 72.99  | 32
EEG Classification   | Mental Arithmetic      | AUPRC    | 53.12  | 32
EEG Classification   | ISRUC                  | Kappa    | 54.24  | 32
Image Classification | Five Datasets 4-shot   | Accuracy | 0.576  | 18
Image Classification | Five Datasets 8-shot   | Accuracy | 64.6   | 18
Image Classification | Five Datasets 16-shot  | Accuracy | 68.9   | 18
