
Long-Tail Learning with Foundation Model: Heavy Fine-Tuning Hurts

About

The fine-tuning paradigm for addressing long-tail learning tasks has attracted significant interest since the emergence of foundation models. Nonetheless, how fine-tuning impacts performance in long-tail learning has not been explicitly quantified. In this paper, we show that heavy fine-tuning can even cause non-negligible performance deterioration on tail classes, whereas lightweight fine-tuning is more effective. We attribute this to inconsistent class conditional distributions induced by heavy fine-tuning. Building on this observation, we develop LIFT, a low-complexity and accurate long-tail learning algorithm that enables fast prediction and compact models through adaptive lightweight fine-tuning. Experiments verify that, compared with state-of-the-art approaches, LIFT significantly reduces both training time and the number of learned parameters while achieving more accurate predictive performance. The implementation code is available at https://github.com/shijxcs/LIFT.
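To make the lightweight-fine-tuning idea concrete, here is a minimal, hypothetical numpy sketch (not the authors' implementation): a fixed random projection stands in for a frozen foundation-model backbone, and only a small linear classification head on top of the frozen features is trained. The data, dimensions, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen foundation-model backbone:
# a fixed random projection from raw inputs (dim 20) to features (dim 16).
W_frozen = rng.normal(size=(20, 16)) / np.sqrt(20)

def extract_features(x):
    # Backbone weights are never updated (lightweight fine-tuning).
    return np.tanh(x @ W_frozen)

# Synthetic binary classification data (illustrative only).
X = rng.normal(size=(200, 20))
y = (X[:, 0] > 0).astype(float)

# Train ONLY a small linear head on the frozen features,
# using full-batch gradient descent on the logistic loss.
F = extract_features(X)
w, b, lr = np.zeros(16), 0.0, 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid predictions
    w -= lr * (F.T @ (p - y)) / len(y)      # logistic-loss gradient for w
    b -= lr * np.mean(p - y)                # logistic-loss gradient for b

acc = np.mean(((F @ w + b) > 0) == (y > 0.5))
trainable = w.size + 1   # 17 trainable parameters (head only)
frozen = W_frozen.size   # 320 frozen backbone parameters
print(f"trainable={trainable}, frozen={frozen}, train acc={acc:.2f}")
```

The point of the sketch is the parameter budget: the trainable head is tiny relative to the frozen backbone, which is why lightweight fine-tuning yields fast training and compact learned parameters.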

Jiang-Xin Shi, Tong Wei, Zhi Zhou, Jie-Jing Shao, Xin-Yan Han, Yu-Feng Li • 2023

Related benchmarks

Task | Dataset | Result | Rank
Long-Tailed Image Classification | ImageNet-LT (test) | Top-1 Acc (Overall): 77.8 | 220
Image Classification | ImageNet-LT (test) | Top-1 Acc (All): 78.3 | 159
Image Classification | Places-LT (test) | Accuracy (Medium): 53.1 | 128
Image Classification | iNaturalist 2018 (val) | -- | 116
Long-tailed Visual Recognition | ImageNet LT | Overall Accuracy: 82.9 | 89
Long-Tailed Image Classification | iNaturalist 2018 | Accuracy: 85.2 | 82
Image Classification | ImageNet-LT (val) | Top-1 Acc (Total): 78.3 | 72
Image Classification | CIFAR-100 Imbalance Ratio LT-50 (test) | Accuracy: 90.2 | 62
Image Classification | CIFAR-100-LT Imbalance Ratio 100 (test) | Accuracy: 89.1 | 62
Long-Tailed Image Classification | Places-LT (test) | Accuracy: 51.8 | 61

(Showing 10 of 14 rows.)

Other info

Code: https://github.com/shijxcs/LIFT