Exploiting BERT for End-to-End Aspect-based Sentiment Analysis
About
In this paper, we investigate the modeling power of contextualized embeddings from pre-trained language models, e.g., BERT, on the E2E-ABSA task. Specifically, we build a series of simple yet insightful neural baselines for E2E-ABSA. The experimental results show that, even with a simple linear classification layer, our BERT-based architecture can outperform state-of-the-art works. In addition, we standardize the comparative study by consistently using a hold-out validation set for model selection, a practice largely ignored by previous works. Our work can therefore serve as a BERT-based benchmark for E2E-ABSA.
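E2E-ABSA is typically cast as sequence labeling: each token gets a joint tag encoding both the aspect-term boundary and its sentiment polarity, and a classifier (here, a linear layer over BERT's contextual embeddings) predicts one tag per token. As a minimal illustration, the sketch below converts aspect spans into such unified tags using a simplified B/I/O boundary scheme; the function name and span format are hypothetical, not taken from the paper's code.

```python
# Sketch (assumption, not the paper's implementation): build unified
# E2E-ABSA tags that fuse boundary position (B/I/O) with sentiment
# polarity (POS/NEG/NEU), the label space a token classifier predicts over.

def spans_to_unified_tags(tokens, aspects):
    """Convert (start, end, polarity) aspect spans into per-token tags.

    `aspects` holds half-open token spans: (1, 3, "POS") marks
    tokens[1:3] as one positive aspect term.
    """
    tags = ["O"] * len(tokens)
    for start, end, polarity in aspects:
        tags[start] = f"B-{polarity}"           # first token of the aspect
        for i in range(start + 1, end):
            tags[i] = f"I-{polarity}"           # remaining aspect tokens
    return tags

tokens = ["The", "battery", "life", "is", "great",
          "but", "the", "screen", "flickers"]
aspects = [(1, 3, "POS"), (7, 8, "NEG")]
print(spans_to_unified_tags(tokens, aspects))
# ['O', 'B-POS', 'I-POS', 'O', 'O', 'O', 'O', 'B-NEG', 'O']
```

In the paper's setup, a BERT encoder produces one contextual vector per token and the downstream layer (linear, RNN, self-attention, or CRF variants) scores each token against this unified tag set, so aspect extraction and sentiment classification happen in a single pass.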
Xin Li, Lidong Bing, Wenxuan Zhang, Wai Lam • 2019
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Named Entity Recognition | CoNLL 2003 | F1 Score | 91.76 | 86 |
| Named Entity Recognition | WNUT 2017 | F1 Score | 55.76 | 79 |
| Aspect-Term Sentiment Analysis | LAPTOP SemEval 2014 (test) | Macro-F1 | 61.12 | 69 |
| Named Entity Recognition | WeiboNER | F1 Score | 69.53 | 27 |
| Entity-Level Financial Sentiment Analysis | EFSA | F1 Score | 73.77 | 23 |
| Sequence Labeling | Restaurant 14 | F1 Score | 74 | 20 |
| Sequence Labeling | Restaurant 16 | F1 Score | 71.47 | 20 |
| Sequence Labeling | Restaurant 15 | F1 Score | 61.58 | 20 |
| Sequence Labeling | Laptop 14 | F1 Score | 61.33 | 20 |
| End-to-End Aspect-Based Sentiment Analysis | REST SemEval 2015+2016 (test) | F1 Score | 74.72 | 10 |
*Showing 10 of 13 rows.*