
Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling

About

Many efforts have been made to facilitate natural language processing tasks with pre-trained language models (LMs), bringing significant improvements to various applications. To fully leverage nearly unlimited corpora and capture linguistic information at multiple levels, large LMs are required; for a specific task, however, only part of this information is useful. Such large LMs, even at inference time, incur heavy computation workloads, making them too time-consuming for large-scale applications. Here we propose to compress bulky LMs while preserving the information useful for a specific task. As different layers of the model capture different information, we develop a layer selection method for model pruning using sparsity-inducing regularization. By introducing dense connectivity, we can detach any layer without affecting the others, stretching shallow and wide LMs into deep and narrow ones. During training, LMs are learned with layer-wise dropout for better robustness. Experiments on two benchmark datasets demonstrate the effectiveness of our method.
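The layer selection idea above can be illustrated with a minimal sketch: assign each densely connected layer a scalar gate, apply an L1 (sparsity-inducing) penalty to those gates via its proximal soft-thresholding step, and prune the layers whose gates are driven to (near) zero. The function name, the gate values, and the threshold below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def l1_prune_layers(layer_gates, lam=0.1, threshold=1e-2):
    """Apply one proximal step for an L1 penalty (soft-thresholding)
    to per-layer gate weights, then mark which layers survive pruning.

    layer_gates: scalar gate per layer (hypothetical parameterization)
    lam:         strength of the sparsity-inducing L1 penalty
    threshold:   gates at or below this magnitude are pruned
    """
    w = np.asarray(layer_gates, dtype=float)
    # Proximal operator of lam * ||w||_1: shrink every gate toward zero.
    shrunk = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
    # With dense connectivity, a zeroed layer can be detached
    # without affecting the layers that follow it.
    keep = np.abs(shrunk) > threshold
    return shrunk, keep

# Example: the middle layer's gate is shrunk to zero and pruned.
gates, keep = l1_prune_layers([0.9, 0.05, 0.4], lam=0.1)
```

Soft-thresholding is what makes the penalty sparsity-inducing: unlike an L2 penalty, which only scales gates down, it sets small gates exactly to zero, so pruning reduces to dropping zeroed layers.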

Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, Jiawei Han• 2018

Related benchmarks

Task | Dataset | Result | Rank
Named Entity Recognition | CoNLL 2003 (test) | F1 Score: 92.03 | 539
Language Modeling | One Billion Word Benchmark (test) | Test Perplexity: 45.14 | 108
Chunking | CoNLL 2000 (test) | F1 Score: 96.15 | 88
Named Entity Recognition | CoNLL 2003 Corrected (test) | F1 Score: 92.32 | 12
Named Entity Recognition | CoNLL03 (train) | Latency (s): 4.86 | 4

Other info

Code
