
Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference

About

The RepEval 2017 Shared Task evaluates natural language understanding models for sentence representation: a sentence is encoded as a fixed-length vector with neural networks, and the quality of the representation is tested on a natural language inference task. This paper describes our system (alpha), which ranked among the top systems in the Shared Task on both the in-domain test set (74.9% accuracy) and the cross-domain test set (also 74.9% accuracy), demonstrating that the model generalizes well to cross-domain data. Our model is equipped with intra-sentence gated-attention composition, which improves performance. In addition to submitting our model to the Shared Task, we also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5%, the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.
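The intra-sentence gated-attention composition described above can be sketched as follows. This is a minimal, illustrative NumPy version, not the authors' implementation: it assumes per-token hidden states from a sentence encoder and a scalar "gate" score per token (in the paper, gate activations from the recurrent encoder inform the attention), and composes a fixed-length sentence vector by softmax-weighting the tokens. All names here are hypothetical.

```python
import numpy as np

def gated_attention_pooling(hidden_states, gate_scores):
    """Compose a fixed-length sentence vector from per-token hidden
    states, weighting each token by a softmax over its gate score.
    A simplified sketch of intra-sentence gated attention; the names
    and the choice of gate score are illustrative assumptions.

    hidden_states: (T, d) array of per-token encoder outputs
    gate_scores:   (T,) array of scalar gate activations per token
    returns:       (d,) sentence vector
    """
    # Numerically stable softmax over gate scores -> attention weights
    w = np.exp(gate_scores - gate_scores.max())
    w /= w.sum()
    # Attention-weighted sum of hidden states -> fixed-length vector
    return w @ hidden_states

# Toy usage: 4 tokens with 6-dimensional hidden states
H = np.random.randn(4, 6)
g = np.array([0.2, 1.5, 0.3, 0.9])  # e.g. norms of LSTM gate activations
sentence_vector = gated_attention_pooling(H, g)
print(sentence_vector.shape)
```

With uniform gate scores the pooling reduces to mean pooling, which is a useful sanity check; larger gate scores let informative tokens dominate the sentence vector.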

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen • 2017

Related benchmarks

Task | Dataset | Result | Rank
Natural Language Inference | SNLI (test) | Accuracy 85.5 | 681
Natural Language Inference | SNLI (train) | Accuracy 90.5 | 154
Natural Language Inference | MultiNLI matched (test) | Accuracy 74.9 | 65
Natural Language Inference | MultiNLI Mismatched | Accuracy 74.9 | 60
Natural Language Inference | MultiNLI mismatched (test) | Accuracy 74.9 | 56
Natural Language Inference | MultiNLI Matched | Accuracy 74.9 | 49
Natural Language Inference | MultiNLI mismatched (cross-domain) RepEval 2017 (test) | Accuracy 74.9 | 25
Natural Language Inference | MultiNLI (test) | -- | 21
Natural Language Inference | MultiNLI matched (in-domain) RepEval 2017 (test) | Accuracy 74.9 | 18
Natural Language Inference | MultiNLI matched (in-domain) | Accuracy 73.5 | 8

Other info

Code
