
Shortcut-Stacked Sentence Encoders for Multi-Domain Inference

About

We present a simple sequential sentence encoder for multi-domain natural language inference. Our encoder is based on stacked bidirectional LSTM-RNNs with shortcut connections and fine-tuning of word embeddings. The overall supervised model uses the above encoder to encode two input sentences into two vectors, and then uses a classifier over the vector combination to label the relationship between the two sentences as entailment, contradiction, or neutral. Our Shortcut-Stacked sentence encoders achieve strong improvements over existing encoders on matched and mismatched multi-domain natural language inference (top non-ensemble single-model result in the EMNLP RepEval 2017 Shared Task (Nangia et al., 2017)). Moreover, they achieve the new state-of-the-art encoding result on the original SNLI dataset (Bowman et al., 2015).
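To make the architecture concrete, here is a minimal NumPy sketch of the two ideas the abstract describes: shortcut connections (each stacked layer reads the word embeddings plus the outputs of all previous layers, not just the layer below) and a classifier over a combination of the two sentence vectors. The toy linear-plus-tanh layer is a hypothetical stand-in for the paper's bidirectional LSTMs, and the layer sizes and the [v1; v2; |v1 - v2|; v1 * v2] combination are illustrative choices common in SNLI-style encoders, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_rnn_layer(inputs, hidden_dim, rng):
    """Stand-in for one biLSTM layer: a random linear map plus tanh.
    (Hypothetical simplification; the paper uses bidirectional LSTMs.)"""
    W = rng.standard_normal((inputs.shape[-1], hidden_dim)) * 0.1
    return np.tanh(inputs @ W)  # (seq_len, hidden_dim)

def shortcut_stacked_encode(embeddings, layer_dims, rng):
    """Encode a sentence (seq_len, emb_dim) into a fixed-size vector.

    Shortcut connections: the input to layer k is the concatenation of
    the word embeddings and the outputs of ALL previous layers.
    """
    layer_input = embeddings
    outputs = []
    for dim in layer_dims:
        h = toy_rnn_layer(layer_input, dim, rng)
        outputs.append(h)
        layer_input = np.concatenate([embeddings] + outputs, axis=-1)
    # Row-wise max pooling over time steps yields the sentence vector.
    return outputs[-1].max(axis=0)

def match_features(v1, v2):
    """Combine two sentence vectors for the 3-way entailment classifier:
    [v1; v2; |v1 - v2|; v1 * v2]."""
    return np.concatenate([v1, v2, np.abs(v1 - v2), v1 * v2])

# Toy usage: two 6-token "sentences" with 10-dim word embeddings.
s1 = rng.standard_normal((6, 10))
s2 = rng.standard_normal((6, 10))
v1 = shortcut_stacked_encode(s1, [8, 8, 16], rng)
v2 = shortcut_stacked_encode(s2, [8, 8, 16], rng)
m = match_features(v1, v2)  # 4 * 16 = 64-dim classifier input
```

A real implementation would share layer weights across sentences and feed `m` into an MLP with a softmax over the three labels; the sketch only shows the data flow.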

Yixin Nie, Mohit Bansal • 2017

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Inference | SNLI (test) | Accuracy | 86.1 | 681
Natural Language Inference | SNLI (train) | Accuracy | 91 | 154
Natural Language Inference | MultiNLI matched (test) | Accuracy | 74.6 | 65
Natural Language Inference | MultiNLI Mismatched | Accuracy | 73.6 | 60
Natural Language Inference | MultiNLI mismatched (test) | Accuracy | 73.6 | 56
Natural Language Inference | MultiNLI Matched | Accuracy | 74.6 | 49
Natural Language Inference | MultiNLI mismatched (cross-domain) RepEval 2017 (test) | Accuracy | 73.6 | 25
Natural Language Inference | SNLI 1.0 (test) | Accuracy | 86 | 19
Natural Language Inference | SNLI 1.0 (train) | Accuracy | 91 | 9
Natural Language Inference | MultiNLI matched (in-domain) | Accuracy | 74.6 | 8
Showing 10 of 11 rows
