
FLAT: Chinese NER Using Flat-Lattice Transformer

About

Recently, the character-word lattice structure has proved effective for Chinese named entity recognition (NER) by incorporating word information. However, since the lattice structure is complex and dynamic, most existing lattice-based models struggle to fully exploit GPU parallelism and usually suffer from low inference speed. In this paper, we propose FLAT: Flat-LAttice Transformer for Chinese NER, which converts the lattice structure into a flat structure consisting of spans. Each span corresponds to a character or latent word and its position in the original lattice. With the power of the Transformer and a well-designed position encoding, FLAT can fully leverage the lattice information and has excellent parallelization ability. Experiments on four datasets show that FLAT outperforms other lexicon-based models in both accuracy and efficiency.
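The flattening step can be illustrated with a toy example. Below is a minimal sketch (not the authors' code) of how a character sequence plus a matched lexicon yields flat spans: each character becomes a span with head == tail, and each lexicon word that occurs in the sentence becomes a span whose head and tail are its start and end character positions in the original sequence. The example sentence and lexicon are illustrative.

```python
# Minimal sketch of FLAT-style lattice flattening (illustrative, not the paper's code).
# Each span is (token, head, tail): characters have head == tail; latent words
# span their start..end character positions in the original sentence.

def flatten_lattice(chars, lexicon):
    """Convert a character sequence plus matched lexicon words into flat spans."""
    spans = [(c, i, i) for i, c in enumerate(chars)]  # one span per character
    for start in range(len(chars)):
        for end in range(start + 1, len(chars) + 1):
            word = "".join(chars[start:end])
            if len(word) > 1 and word in lexicon:
                spans.append((word, start, end - 1))  # latent word span
    return spans

chars = list("重庆人和药店")  # "Chongqing Renhe Pharmacy"
lexicon = {"重庆", "人和药店", "药店"}  # assumed toy lexicon
for token, head, tail in flatten_lattice(chars, lexicon):
    print(token, head, tail)
```

The resulting spans (six characters plus the matched words 重庆 at positions 0-1, 人和药店 at 2-5, and 药店 at 4-5) form a flat sequence the Transformer can process in parallel, with the head/tail indices feeding the relative position encoding.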

Xiaonan Li, Hang Yan, Xipeng Qiu, Xuanjing Huang · 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Named Entity Recognition | CoNLL 2003 (test) | F1 Score | 93.07 | 539 |
| Named Entity Recognition | MSRA (test) | F1 Score | 96.09 | 63 |
| Named Entity Recognition | OntoNotes 4.0 (test) | F1 Score | 81.82 | 55 |
| Named Entity Recognition | RESUME | F1 Score | 95.86 | 52 |
| Named Entity Recognition | Weibo (test) | Overall Score | 60.32 | 50 |
| Named Entity Recognition | MSRA | F1 Score | 94.12 | 29 |
| Named Entity Recognition | Resume (test) | F1 Score | 95.86 | 28 |
| Named Entity Recognition | WeiboNER | F1 Score | 68.55 | 27 |
| Named Entity Recognition | Chinese OntoNotes 4.0 (test) | F1 Score | 81.82 | 19 |
| Named Entity Recognition | OntoNotes 4.0 | F1 Score | 76.45 | 18 |

Showing 10 of 14 rows.

Other info

Code
