
Revisiting Few-sample BERT Fine-tuning

About

This paper studies the fine-tuning of BERT contextual representations, with a focus on commonly observed instabilities in few-sample scenarios. We identify several factors that cause this instability: the common use of a non-standard optimization method with biased gradient estimation; the limited applicability of significant parts of the BERT network to down-stream tasks; and the prevalent practice of using a pre-determined, small number of training iterations. We empirically test the impact of these factors and identify alternative practices that resolve the commonly observed instability of the process. In light of these observations, we revisit recently proposed methods to improve few-sample fine-tuning with BERT and re-evaluate their effectiveness. Generally, we observe that the impact of these methods diminishes significantly with our modified process.
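To illustrate the first factor, here is a minimal sketch (not the authors' code) of a single Adam update with a flag to omit the bias-correction terms, as the widely used BERTAdam variant does. Without correction, the moment estimates are biased toward zero early in training, which changes the effective step size on the first iterations; the function name and hyperparameter defaults below are illustrative assumptions.

```python
def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, bias_correction=True):
    """One Adam update on a scalar parameter.

    bias_correction=False mimics a BERTAdam-style update, which skips
    the debiasing of the first- and second-moment estimates.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    if bias_correction:
        m_hat = m / (1 - beta1 ** t)            # debias: corrects the
        v_hat = v / (1 - beta2 ** t)            # zero-initialization bias
    else:
        m_hat, v_hat = m, v                     # biased toward zero at small t
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v
```

At t = 1 with bias correction, m_hat/sqrt(v_hat) equals the sign of the gradient, so the first step has magnitude close to lr; without correction the ratio (1 - beta1)/sqrt(1 - beta2) makes the first step roughly three times larger under these defaults, one concrete way the non-standard optimizer perturbs early training.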

Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, Yoav Artzi • 2020

Related benchmarks

Task                           | Dataset                                            | Result               | Rank
Natural Language Understanding | GLUE (dev)                                         | SST-2 (Acc): 96.9    | 504
Natural Language Understanding | GLUE (val)                                         | --                   | 170
Semantic segmentation          | Pascal Semantic Segmentation ID Clean (test)       | mIoU (Clean): 72.09  | 9
Semantic segmentation          | Pascal Semantic Segmentation OOD Corrupted (test)  | mIoU (Fog): 0.6813   | 9
Human Parts Segmentation       | PASCAL Human Parts ID Clean (test)                 | mIoU: 64.37          | 8
Surface Normal Estimation      | PASCAL ID Clean (test)                             | RMSE (Degrees): 15.54 | 8
Surface Normal Estimation      | PASCAL OOD corrupted (test)                        | Fog Error: 18.31     | 8
Human Parts Segmentation       | PASCAL Human Parts OOD Corruptions (test)          | Fog Acc: 60.1        | 8
Semantic segmentation          | PASCAL-Context (Clean)                             | mIoU: 72.09          | 8
Semantic segmentation          | PASCAL-Context (OOD)                               | mIoU (Fog): 68.13    | 8
