
Lexicon Enhanced Chinese Sequence Labeling Using BERT Adapter

About

Lexicon information and pre-trained models, such as BERT, have been combined to explore Chinese sequence labeling tasks due to their respective strengths. However, existing methods fuse lexicon features only via a shallow, randomly initialized sequence layer and do not integrate them into the bottom layers of BERT. In this paper, we propose Lexicon Enhanced BERT (LEBERT) for Chinese sequence labeling, which integrates external lexicon knowledge directly into BERT layers via a Lexicon Adapter layer. Compared with existing methods, our model facilitates deep lexicon knowledge fusion at the lower layers of BERT. Experiments on ten Chinese datasets covering three tasks, Named Entity Recognition, Word Segmentation, and Part-of-Speech tagging, show that LEBERT achieves state-of-the-art results.
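To make the fusion step concrete, the sketch below shows one plausible shape for a lexicon adapter: word embeddings matched at a character position are projected into BERT's hidden space, attended over from the character's hidden state, and added back residually. This is a simplified NumPy illustration under stated assumptions, not the paper's exact implementation; the names `W_proj` and `W_attn` (and the omission of the paper's nonlinear transform and layer normalization) are illustrative choices.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def lexicon_adapter(h_char, word_embs, W_proj, W_attn):
    """Fuse matched lexicon-word features into one character representation.

    h_char:    (d_c,)    character hidden state from a BERT layer
    word_embs: (m, d_w)  embeddings of the m lexicon words matched
                         at this character position
    W_proj:    (d_w, d_c) projects word embeddings into BERT's space
    W_attn:    (d_c, d_c) bilinear char-to-word attention weights
    """
    v = word_embs @ W_proj            # (m, d_c) words in BERT's space
    scores = v @ (W_attn @ h_char)    # (m,) relevance of each word
    a = softmax(scores)               # attention over matched words
    fused = a @ v                     # (d_c,) weighted word feature
    return h_char + fused             # residual add, adapter-style

rng = np.random.default_rng(0)
d_c, d_w, m = 8, 6, 3
out = lexicon_adapter(rng.normal(size=d_c),
                      rng.normal(size=(m, d_w)),
                      rng.normal(size=(d_w, d_c)) * 0.1,
                      rng.normal(size=(d_c, d_c)) * 0.1)
print(out.shape)  # (8,)
```

Because the adapter returns a vector of the same dimensionality as the character hidden state, it can be inserted between any pair of BERT layers, which is what allows lexicon knowledge to reach the lower layers.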

Wei Liu, Xiyan Fu, Yue Zhang, Wenming Xiao • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Named Entity Recognition | OntoNotes 4.0 (test) | F1 Score | 81.59 | 55 |
| Chinese Word Segmentation | PKU (test) | F1 Score | 96.7 | 32 |
| Chinese Word Segmentation | MSRA (test) | F1 Score | 98.41 | 17 |
| Named Entity Recognition | Finance (test) | F1 Score | 86.47 | 14 |
| Joint Chinese Word Segmentation and Part-of-Speech Tagging | CTB6 (test) | CWS Accuracy | 97.14 | 14 |
| Chinese Word Segmentation | CTB 6.0 (test) | F1 Score | 97.44 | 12 |
| Part-of-Speech Tagging | CTB 6.0 (test) | F1 Score | 94.92 | 11 |
| Part-of-Speech Tagging | UD 2 (test) | F1 Score | 95.42 | 11 |
| Part-of-Speech Tagging | UD1 (test) | F1 Score | 95.49 | 11 |
| Named Entity Recognition | News (test) | F1 Score | 80.29 | 10 |
Showing 10 of 11 rows.
