
Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling

About

Boundary information is critical for various Chinese language processing tasks, such as word segmentation, part-of-speech tagging, and named entity recognition. Previous studies usually resorted to a high-quality external lexicon, whose items can offer explicit boundary information. However, ensuring the quality of such a lexicon requires great human effort, a cost that has generally been ignored. In this work, we suggest unsupervised statistical boundary information instead, and propose an architecture that encodes this information directly into pre-trained language models, resulting in Boundary-Aware BERT (BABERT). We apply BABERT for feature induction on Chinese sequence labeling tasks. Experimental results on ten Chinese sequence labeling benchmarks demonstrate that BABERT provides consistent improvements across all datasets. In addition, our method complements previous supervised lexicon exploration: further improvements can be achieved when BABERT is integrated with external lexicon information.
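The abstract does not spell out which unsupervised statistics are used, but pointwise mutual information (PMI) between adjacent characters is a standard unsupervised boundary signal for Chinese text: a low PMI between two neighboring characters suggests they belong to different words. A minimal sketch of this idea (the function name `adjacent_pmi` is ours, not from the paper):

```python
import math
from collections import Counter

def adjacent_pmi(corpus):
    """Estimate PMI for each adjacent character pair in a corpus of
    raw (unsegmented) Chinese sentences. Pairs with low PMI are
    likelier to straddle a word boundary."""
    uni = Counter()  # single-character counts
    bi = Counter()   # adjacent character-pair counts
    for sent in corpus:
        uni.update(sent)
        bi.update(sent[i:i + 2] for i in range(len(sent) - 1))
    n_uni = sum(uni.values())
    n_bi = sum(bi.values())
    pmi = {}
    for pair, count in bi.items():
        p_xy = count / n_bi
        p_x = uni[pair[0]] / n_uni
        p_y = uni[pair[1]] / n_uni
        pmi[pair] = math.log(p_xy / (p_x * p_y))
    return pmi

# Toy example: "北京" and "大学" are words, so the cross-word pair
# "京大" should score lower than either within-word pair.
scores = adjacent_pmi(["北京大学", "北京", "大学"])
```

In BABERT these statistics are computed over a large raw corpus and injected into BERT during pre-training rather than used directly at inference time; the sketch above only illustrates the boundary signal itself.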

Peijie Jiang, Dingkun Long, Yanzhao Zhang, Pengjun Xie, Meishan Zhang, Min Zhang • 2022

Related benchmarks

Task                       Dataset               Metric  Result  Rank
Named Entity Recognition   OntoNotes 4.0 (test)  F1      82.35   55
Chinese Word Segmentation  PKU (test)            F1      96.84   32
Chinese Word Segmentation  MSRA (test)           F1      98.63   17
Named Entity Recognition   Finance (test)        F1      87.25   14
Chinese Word Segmentation  CTB 6.0 (test)        F1      97.56   12
Part-of-Speech Tagging     CTB 6.0 (test)        F1      95.24   11
Part-of-Speech Tagging     UD1 (test)            F1      95.74   11
Part-of-Speech Tagging     UD2 (test)            F1      95.7    11
Named Entity Recognition   Book (test)           F1      78.36   10
Named Entity Recognition   News (test)           F1      80.86   10
