
Multi-CLS BERT: An Efficient Alternative to Traditional Ensembling

About

Ensembling BERT models often significantly improves accuracy, but at the cost of significantly more computation and a larger memory footprint. In this work, we propose Multi-CLS BERT, a novel ensembling method for CLS-based prediction tasks that is almost as efficient as a single BERT model. Multi-CLS BERT uses multiple CLS tokens with a parameterization and objective that encourages their diversity. Thus, instead of fine-tuning each BERT model in an ensemble (and running them all at test time), we need only fine-tune our single Multi-CLS BERT model (and run the one model at test time, ensembling just the multiple final CLS embeddings). To test its effectiveness, we build Multi-CLS BERT on top of a state-of-the-art pretraining method for BERT (Aroca-Ouellette and Rudzicz, 2020). In experiments on GLUE and SuperGLUE, we show that our Multi-CLS BERT reliably improves both overall accuracy and confidence estimation. When only 100 training samples are available in GLUE, the Multi-CLS BERT_Base model can even outperform the corresponding BERT_Large model. We analyze the behavior of our Multi-CLS BERT, showing that it has many of the same characteristics and behaviors as a typical 5-way BERT ensemble, but with nearly 4 times less computation and memory.
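To make the core idea concrete, below is a minimal PyTorch-style sketch (not the authors' implementation): prepend K learned CLS embeddings to each input, read out their K final hidden states, and average the K predictions at test time. The class and parameter names (MultiCLSClassifier, num_cls, the per-CLS linear heads, the stand-in Transformer encoder) are illustrative assumptions, and the paper's specific parameterization and pretraining objective that encourage CLS diversity are omitted here.

```python
import torch
import torch.nn as nn


class MultiCLSClassifier(nn.Module):
    """Illustrative sketch: ensemble the final hidden states of K CLS tokens.

    `encoder` is any module mapping (batch, seq_len, hidden) embeddings to
    (batch, seq_len, hidden) outputs, e.g. a Transformer encoder. K extra
    CLS embeddings are prepended to every input sequence. (Hypothetical
    names; not the authors' code.)
    """

    def __init__(self, encoder, hidden_size, num_labels, num_cls=5):
        super().__init__()
        self.encoder = encoder
        self.num_cls = num_cls
        # One learned embedding per CLS token so they can specialize.
        self.cls_embeddings = nn.Parameter(torch.randn(num_cls, hidden_size) * 0.02)
        # One classification head per CLS token (a shared head also works).
        self.heads = nn.ModuleList(
            nn.Linear(hidden_size, num_labels) for _ in range(num_cls)
        )

    def forward(self, token_embeddings):
        # token_embeddings: (batch, seq_len, hidden)
        batch = token_embeddings.size(0)
        cls = self.cls_embeddings.unsqueeze(0).expand(batch, -1, -1)
        hidden = self.encoder(torch.cat([cls, token_embeddings], dim=1))
        # Final states of the K CLS positions: (batch, K, hidden)
        cls_states = hidden[:, : self.num_cls]
        logits_per_cls = torch.stack(
            [head(cls_states[:, i]) for i, head in enumerate(self.heads)], dim=1
        )  # (batch, K, num_labels)
        # "Ensemble" within the single model by averaging the K predictions.
        return logits_per_cls.mean(dim=1)


if __name__ == "__main__":
    # Tiny usage example with a stand-in encoder.
    hidden, labels = 32, 3
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
        num_layers=2,
    )
    model = MultiCLSClassifier(encoder, hidden, labels, num_cls=5)
    dummy_tokens = torch.randn(2, 10, hidden)  # (batch, seq_len, hidden)
    print(model(dummy_tokens).shape)  # torch.Size([2, 3])
```

In this sketch the only extra cost over a single-CLS model is the K-1 additional CLS positions and the small per-CLS heads, which is why the approach can approximate a multi-model ensemble at close to single-model cost.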

Haw-Shiuan Chang, Ruei-Yao Sun, Kathryn Ricci, Andrew McCallum • 2022

Related benchmarks

Task                            | Dataset                                    | Result                 | Rank
Natural Language Understanding | GLUE (dev)                                 | --                     | 504
Natural Language Processing    | GLUE 100 samples (dev)                     | Macro Avg Score: 64.24 | 13
Natural Language Processing    | GLUE 1k samples (dev)                      | Macro Avg Score: 76.27 | 13
Natural Language Processing    | SuperGLUE 100 samples, excl. ReCoRD (dev)  | Macro Avg Score: 59.88 | 13
Natural Language Processing    | SuperGLUE 1k samples, excl. ReCoRD (dev)   | Macro Avg Score: 65.84 | 13
Natural Language Processing    | SuperGLUE Full, excl. ReCoRD (dev)         | Macro Avg Score: 70.03 | 13

Other info

Code
