Neural Natural Language Inference Models Enhanced with External Knowledge
About
Modeling natural language inference is a very challenging task. With the availability of large annotated datasets, it has recently become feasible to train complex models such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance. Although relatively large annotated datasets exist, can machines learn all the knowledge needed to perform natural language inference (NLI) from these data alone? If not, how can neural-network-based NLI models benefit from external knowledge, and how should NLI models be built to leverage it? In this paper, we enrich state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models and achieve state-of-the-art performance on the SNLI and MultiNLI datasets.
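One common way to inject lexical knowledge into a neural NLI model is to bias the premise-hypothesis co-attention with a knowledge-based relation term. The sketch below is illustrative only, not the authors' implementation: it assumes unnormalized attention scores of the form e_ij = a_i·b_j + λ·r_ij, where r_ij is a (hypothetical) indicator feature for a lexical relation such as synonymy between premise token i and hypothesis token j, and λ is a scaling weight.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def knowledge_enriched_attention(a, b, r, lam=1.0):
    """Co-attention between premise vectors `a` (m x d) and hypothesis
    vectors `b` (n x d), biased by knowledge relation features `r` (m x n).
    Scores: e_ij = a_i . b_j + lam * r_ij  (a sketch, not the paper's exact form).
    Returns, for each premise token, attention weights over hypothesis tokens."""
    e = a @ b.T + lam * r
    return softmax(e, axis=1)

# Toy example: 2 premise tokens, 3 hypothesis tokens, embedding dim 4.
rng = np.random.default_rng(0)
a = rng.normal(size=(2, 4))
b = rng.normal(size=(3, 4))
r = np.zeros((2, 3))
r[0, 2] = 1.0  # hypothetical lexical relation between token pair (0, 2)
w = knowledge_enriched_attention(a, b, r, lam=2.0)
print(w.shape)  # each row is a distribution over hypothesis tokens
```

With λ > 0, word pairs connected in an external resource receive a larger attention score than their embedding similarity alone would give, which is the intuition behind knowledge-enriched alignment.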
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Inference | SNLI (test) | Accuracy | 89.1 | 681 |
| Natural Language Inference | MultiNLI matched (test) | Accuracy | 77.2 | 65 |
| Natural Language Inference | MultiNLI mismatched (test) | Accuracy | 76.4 | 56 |
| Natural Language Inference | Glockner 2018 (test) | Accuracy | 83.5 | 4 |