
Learning Natural Language Inference with LSTM

About

Natural language inference (NLI) is a fundamentally important task in natural language processing with many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for NLI. In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a match-LSTM to perform word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. On the SNLI corpus, our model achieves an accuracy of 86.1%, outperforming the state of the art.
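The core word-by-word matching step can be illustrated in a few lines: for each hypothesis position, attend over all premise states, build an attention-weighted premise summary, and concatenate it with the hypothesis state to form the match-LSTM input. The sketch below is a minimal numpy illustration, not the paper's implementation: it uses dot-product scoring in place of the paper's learned attention layer, and function names are illustrative.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def match_attention(premise_states, hyp_states):
    """Build match-LSTM inputs via word-by-word attention (simplified).

    premise_states: (len_premise, d) encoder states of the premise
    hyp_states:     (len_hyp, d)     encoder states of the hypothesis
    Returns:        (len_hyp, 2*d)   m_k = [a_k; h_k] for each position k,
                    where a_k is the attention-weighted premise summary.
    Dot-product scores stand in for the learned attention MLP of the paper.
    """
    matched = []
    for h_k in hyp_states:
        scores = premise_states @ h_k        # similarity to each premise word
        alpha = softmax(scores)              # attention weights, sum to 1
        a_k = alpha @ premise_states         # weighted premise summary
        matched.append(np.concatenate([a_k, h_k]))
    return np.stack(matched)
```

In the full model, each row of the returned matrix is fed step by step into a second LSTM (the match-LSTM), whose final state is used to classify the pair as entailment, contradiction, or neutral.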

Shuohang Wang, Jing Jiang • 2015

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Inference | SNLI (test) | Accuracy | 86.1 | 681
Natural Language Inference | SNLI (train) | Accuracy | 92 | 154
Response Selection | Douban Conversation Corpus (test) | MAP | 0.5 | 94
Response Selection | E-commerce (test) | Recall@1 (R10) | 0.41 | 81
Natural Language Inference | SNLI (dev) | Accuracy | 86.9 | 71
Multi-turn Response Selection | E-commerce Dialogue Corpus (test) | R@1 (Top 10 Set) | 41 | 70
Multi-turn Response Selection | Douban Conversation Corpus | MAP | 49.8 | 67
Multi-turn Response Selection | Ubuntu Corpus | Recall@1 (R10) | 65.3 | 65
Response Selection | Ubuntu (test) | Recall@1 (Top 10) | 0.653 | 58
Dialogue Response Selection | Ubuntu (test) | R@1 (R10) | 0.653 | 18
(showing 10 of 12 rows)
