
Neural Natural Language Inference Models Enhanced with External Knowledge

About

Modeling natural language inference is a very challenging task. With the availability of large annotated data, it has recently become feasible to train complex models, such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance. Although relatively large annotated datasets exist, can machines learn all the knowledge needed to perform natural language inference (NLI) from these data? If not, how can neural-network-based NLI models benefit from external knowledge, and how can we build NLI models to leverage it? In this paper, we enrich state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models and achieve state-of-the-art performance on the SNLI and MultiNLI datasets.
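The core idea of enriching a neural NLI model with external knowledge can be illustrated with a small sketch: lexical relations between premise and hypothesis words (e.g., WordNet links such as synonymy or hypernymy) are encoded as a relation-feature matrix and used to bias the soft co-attention between the two sentences. The function below is a minimal, hypothetical illustration of that idea, not the paper's actual architecture; the names, the single scalar weight `lam`, and the binary relation features are all assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def knowledge_enriched_attention(a, b, r, lam=1.0):
    """Align premise token vectors `a` (m x d) with hypothesis token
    vectors `b` (n x d), biasing the alignment with external
    lexical-relation features `r` (m x n) -- e.g., 1.0 when a knowledge
    base links the two words, else 0.0. (Illustrative sketch only.)"""
    e = a @ b.T                  # content-based co-attention scores (m x n)
    e = e + lam * r              # add the knowledge-based bias
    attn = softmax(e, axis=1)    # per premise token: weights over hypothesis
    return attn @ b              # knowledge-aware alignment of b to each a_i
```

With `lam = 0` this reduces to ordinary dot-product co-attention; increasing `lam` lets a known lexical relation pull the alignment toward the related word even when the content-based scores disagree.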

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei • 2017

Related benchmarks

Task                        Dataset                     Result          Rank
Natural Language Inference  SNLI (test)                 Accuracy 89.1   681
Natural Language Inference  MultiNLI matched (test)     Accuracy 77.2   65
Natural Language Inference  MultiNLI mismatched (test)  Accuracy 76.4   56
Natural Language Inference  Glockner 2018 (test)        Accuracy 83.5   4

Other info

Code
