
Enhanced LSTM for Natural Language Inference

About

Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is very challenging. With the availability of large annotated data (Bowman et al., 2015), it has recently become feasible to train neural network based inference models, which have been shown to be very effective. In this paper, we present a new state-of-the-art result, achieving an accuracy of 88.6% on the Stanford Natural Language Inference dataset. Unlike previous top models, which use very complicated network architectures, we first demonstrate that carefully designed sequential inference models based on chain LSTMs can outperform all previous models. Based on this, we further show that by explicitly considering recursive architectures in both local inference modeling and inference composition, we achieve additional improvement. In particular, incorporating syntactic parsing information contributes to our best result: it further improves performance even when added to the already very strong model.
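The local inference modeling mentioned in the abstract can be illustrated with a small sketch. This is not the authors' code: it assumes the premise and hypothesis have already been encoded (in the paper, by a BiLSTM) as vector sequences, then shows the soft attention alignment between the two sentences and the commonly described enhancement features (concatenating each encoding with its aligned counterpart, their difference, and their element-wise product).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def local_inference(a, b):
    """Soft-align premise a (len_a, d) with hypothesis b (len_b, d) and
    build enhanced representations [x; x~; x - x~; x * x~]."""
    e = a @ b.T                       # (len_a, len_b) alignment scores
    a_tilde = softmax(e, axis=1) @ b  # each premise token as a weighted sum of hypothesis tokens
    b_tilde = softmax(e, axis=0).T @ a  # and vice versa
    m_a = np.concatenate([a, a_tilde, a - a_tilde, a * a_tilde], axis=-1)
    m_b = np.concatenate([b, b_tilde, b - b_tilde, b * b_tilde], axis=-1)
    return m_a, m_b

# Toy example: 5-token premise, 7-token hypothesis, 8-dim encodings.
rng = np.random.default_rng(0)
m_a, m_b = local_inference(rng.standard_normal((5, 8)),
                           rng.standard_normal((7, 8)))
print(m_a.shape, m_b.shape)  # each row is 4x the encoding dimension
```

In the full model these enhanced sequences are fed to a second ("inference composition") LSTM and pooled before classification; this sketch covers only the alignment-and-enhancement step.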

Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, Diana Inkpen · 2016

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Inference | SNLI (test) | Accuracy | 88.6 | 681
Natural Language Inference | SNLI | Accuracy | 88 | 174
Natural Language Inference | SNLI (train) | Accuracy | 93.5 | 154
Answer Selection | WikiQA (test) | MAP | 0.652 | 149
Natural Language Inference | SciTail (test) | Accuracy | 70.6 | 86
Paraphrase Identification | Quora Question Pairs (test) | Accuracy | 86.98 | 72
Natural Language Inference | SNLI (dev) | Accuracy | 83.39 | 71
Natural Language Inference | MultiNLI matched (test) | Accuracy | 76.8 | 65
Natural Language Inference | MultiNLI mismatched | Accuracy | 75.8 | 60
Natural Language Inference | MultiNLI mismatched (test) | Accuracy | 75.8 | 56

Showing 10 of 25 rows.
