Effective LSTMs for Target-Dependent Sentiment Classification
About
Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence. Different context words have different influences on determining the sentiment polarity of a sentence towards the target. It is therefore desirable to integrate the connections between the target word and its context words when building a learning system. In this paper, we develop two target-dependent long short-term memory (LSTM) models in which target information is automatically taken into account. We evaluate our methods on a benchmark dataset from Twitter. Empirical results show that modeling the sentence representation with a standard LSTM does not perform well, while incorporating target information into the LSTM significantly boosts classification accuracy. The target-dependent LSTM models achieve state-of-the-art performance without using a syntactic parser or external sentiment lexicons.
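The core idea of the target-dependent model (TD-LSTM) is to run two LSTMs toward the target word: one over the preceding context plus the target (left to right), and one over the following context plus the target (right to left), then concatenate their final hidden states for softmax classification. The sketch below illustrates this forward pass with a minimal numpy LSTM cell; the weights are random and untrained, and all names (`LSTMCell`, `td_lstm_forward`, the hidden size of 8) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell (illustrative; weights are random, not trained)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # one stacked weight matrix for the input, forget, output and cell gates
        self.W = rng.normal(0, 0.1, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        hd = self.hidden_dim
        i = sigmoid(z[:hd])           # input gate
        f = sigmoid(z[hd:2 * hd])     # forget gate
        o = sigmoid(z[2 * hd:3 * hd]) # output gate
        g = np.tanh(z[3 * hd:])       # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

def run_lstm(cell, xs):
    """Run the cell over a sequence of input vectors; return final hidden state."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    for x in xs:
        h, c = cell.step(x, h, c)
    return h

def td_lstm_forward(left_with_target, right_with_target, n_classes=3):
    """TD-LSTM sketch: LSTM_L reads [left context .. target] left-to-right,
    LSTM_R reads [target .. right context] right-to-left (i.e. from sentence
    end toward the target); the two final hidden states are concatenated and
    fed to a softmax over the sentiment classes."""
    dim, hid = left_with_target[0].shape[0], 8
    lstm_l = LSTMCell(dim, hid, seed=1)
    lstm_r = LSTMCell(dim, hid, seed=2)
    h_l = run_lstm(lstm_l, left_with_target)
    h_r = run_lstm(lstm_r, right_with_target[::-1])  # reversed: end -> target
    rng = np.random.default_rng(3)
    W_s = rng.normal(0, 0.1, (n_classes, 2 * hid))   # softmax classifier
    logits = W_s @ np.concatenate([h_l, h_r])
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy usage: 6 word embeddings of dimension 5; token index 3 is the target,
# so the left LSTM sees tokens 0..3 and the right LSTM sees tokens 3..5.
emb = np.random.default_rng(0).normal(size=(6, 5))
probs = td_lstm_forward(emb[:4], emb[3:])  # distribution over 3 polarities
```

Because the target word terminates both directional passes, its embedding directly conditions both final hidden states, which is what lets different context words contribute differently for different targets.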
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Aspect-Term Sentiment Analysis | LAPTOP SemEval 2014 (test) | Macro-F1 62.42 | 69 |
| Aspect-level sentiment classification | SemEval Restaurant 2014 (test) | Accuracy 75.6 | 67 |
| Aspect Sentiment Classification | Rest SemEval 2014 (test) | Accuracy 77.97 | 60 |
| Aspect-level sentiment classification | SemEval Laptop 2014 (test) | Accuracy 68.1 | 59 |
| Aspect Sentiment Classification | Laptop (test) | Accuracy 79.31 | 49 |
| Opinion Term Extraction | 14res SemEval 2014 (test) | Precision 67.65 | 37 |
| Target-dependent sentiment classification | Twitter (test) | Accuracy 71.5 | 31 |
| Opinion Term Extraction | SemEval res 2015 (test) | Precision 66.06 | 28 |
| Target-Oriented Opinion Word Extraction | SemEval res 2016 (test) | Precision 73.46 | 27 |
| Target-Oriented Opinion Word Extraction | 14lap SemEval 2014 (test) | Precision 62.45 | 27 |