An Empirical Study on Leveraging Position Embeddings for Target-oriented Opinion Words Extraction
About
Target-oriented opinion words extraction (TOWE) (Fan et al., 2019b) is a new subtask of target-oriented sentiment analysis that aims to extract opinion words for a given aspect in text. Current state-of-the-art methods leverage position embeddings to capture the relative position of a word to the target. However, the performance of these methods depends on the ability to incorporate this information into word representations. In this paper, we explore a variety of text encoders based on pretrained word embeddings or language models that leverage part-of-speech and position embeddings, aiming to examine the actual contribution of each component in TOWE. We also adapt a graph convolutional network (GCN) to enhance word representations by incorporating syntactic information. Our experimental results demonstrate that BiLSTM-based models can effectively encode position information into word representations while using a GCN only achieves marginal gains. Interestingly, our simple methods outperform several state-of-the-art complex neural structures.
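The core encoding idea described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes toy embedding tables and concatenates word, part-of-speech, and position embeddings, where each token's position index is its clamped signed distance to the target span.

```python
import numpy as np

# Hypothetical sketch: build per-token representations by concatenating
# word, POS-tag, and position embeddings (all tables are random toys here).
rng = np.random.default_rng(0)
WORD_DIM, POS_DIM, DIST_DIM, MAX_DIST = 8, 4, 4, 5

word_emb = rng.normal(size=(100, WORD_DIM))               # toy word table
pos_emb = rng.normal(size=(20, POS_DIM))                  # toy POS table
dist_emb = rng.normal(size=(2 * MAX_DIST + 1, DIST_DIM))  # position table

def relative_distances(n_tokens, target_start, target_end):
    """Signed distance of each token to the target span, clamped to +/-MAX_DIST.
    Tokens inside the span get distance 0."""
    dists = []
    for i in range(n_tokens):
        if target_start <= i <= target_end:
            d = 0
        elif i < target_start:
            d = i - target_start
        else:
            d = i - target_end
        dists.append(max(-MAX_DIST, min(MAX_DIST, d)))
    return dists

def encode(word_ids, pos_ids, target_start, target_end):
    """Concatenate word, POS, and position embeddings for each token.
    The result would feed a sequence encoder such as a BiLSTM."""
    dists = relative_distances(len(word_ids), target_start, target_end)
    rows = [np.concatenate([word_emb[w], pos_emb[p], dist_emb[d + MAX_DIST]])
            for w, p, d in zip(word_ids, pos_ids, dists)]
    return np.stack(rows)  # shape: (n_tokens, WORD_DIM + POS_DIM + DIST_DIM)

# e.g. "the battery life is great" with target span "battery life" (tokens 1-2)
reps = encode([5, 17, 23, 9, 42], [3, 1, 1, 2, 4], 1, 2)
print(reps.shape)  # (5, 16)
```

In the paper's setting, these concatenated representations are what the BiLSTM (or language-model) encoder consumes; the finding is that the BiLSTM itself can propagate the position signal effectively through the sequence.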
Related benchmarks
| Task | Dataset | F1-score (%) | Rank |
|---|---|---|---|
| Opinion Term Extraction | Res14 | 85.74 | 16 |
| Opinion Term Extraction | Res15 | 80.54 | 14 |
| Opinion Term Extraction | Res16 | 87.35 | 14 |
| Opinion Term Extraction | Lap14 | 78.82 | 14 |