Sentence-State LSTM for Text Representation
About
Bi-directional LSTMs are a powerful tool for text representation. However, they have been shown to suffer various limitations due to their sequential nature. We investigate an alternative LSTM structure for encoding text, which consists of a parallel state for each word. Recurrent steps are used to perform local and global information exchange between words simultaneously, rather than incrementally reading a sequence of words. Results on various classification and sequence-labelling benchmarks show that the proposed model has strong representation power, giving highly competitive performance compared to stacked BiLSTM models with similar numbers of parameters.
Yue Zhang, Qi Liu, Linfeng Song • 2018
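The core idea above — a parallel state per word, updated in recurrent steps that mix neighbouring word states with a global sentence state — can be sketched as follows. This is a minimal simplification, not the paper's full gated S-LSTM: the gating mechanism is dropped, and the window size, aggregation (mean), and weight shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_state_step(H, g, W, U, V, b):
    """One simplified recurrent step in the S-LSTM spirit.

    Each word state is updated from its left/right neighbours (local
    exchange) and the global sentence state (global exchange), all in
    parallel; the global state then aggregates over all word states.
    The paper's LSTM-style gates are omitted for brevity (assumption).
    """
    n, d = H.shape
    # Zero states stand in for the sentence boundaries.
    left = np.vstack([np.zeros(d), H[:-1]])
    right = np.vstack([H[1:], np.zeros(d)])
    ctx = np.concatenate([left, H, right], axis=1)   # local window, (n, 3d)
    H_new = np.tanh(ctx @ W + g @ U + b)             # local + global exchange
    g_new = np.tanh(H_new.mean(axis=0) @ V)          # global aggregation
    return H_new, g_new

n, d = 5, 8                                          # sentence length, state size
H = rng.standard_normal((n, d))                      # one state per word
g = np.zeros(d)                                      # global sentence state
W = rng.standard_normal((3 * d, d)) * 0.1
U = rng.standard_normal((d, d)) * 0.1
V = rng.standard_normal((d, d)) * 0.1
b = np.zeros(d)

# A fixed, small number of recurrent steps replaces the n sequential
# steps a BiLSTM would need — this is the parallelism the abstract describes.
for _ in range(3):
    H, g = parallel_state_step(H, g, W, U, V, b)

print(H.shape, g.shape)
```

After the loop, `H` holds contextualised word representations and `g` a sentence-level representation, both usable for sequence labelling and classification respectively.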
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Named Entity Recognition | CoNLL 2003 (test) | F1 Score | 91.57 | 539 |
| Named Entity Recognition | CoNLL English 2003 (test) | F1 Score | 91.57 | 135 |
| Natural Language Understanding | Snips (test) | Intent Acc | 98.3 | 27 |
| POS Tagging | PTB (test) | Accuracy | 97.55 | 24 |
| Spoken Language Understanding | ATIS (test) | Slot F1 | 95.65 | 18 |
| Text Classification | movie review dataset (test) | Accuracy | 82.45 | 12 |
| Text Classification | MTL-16 (test) | Average Accuracy | 85.38 | 4 |
| Intent Detection | CAIS | Accuracy | 94.36 | 3 |
| Slot Filling | CAIS | F1 Score | 85.74 | 3 |