A Decomposable Attention Model for Natural Language Inference

About

We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
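The attend-compare-aggregate structure the abstract describes can be sketched in a few lines. Below is a minimal illustration in PyTorch following the paper's F/G/H notation; the layer sizes, ReLU activations, and the omission of masking, dropout, and the optional intra-sentence attention are our assumptions for brevity, not the authors' released implementation.

```python
# Minimal sketch of decomposable attention (attend, compare, aggregate).
# Hyperparameters and module layout are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F_


class DecomposableAttention(nn.Module):
    def __init__(self, embed_dim: int, hidden_dim: int, num_classes: int = 3):
        super().__init__()
        # F: projects word embeddings before computing attention scores
        self.attend = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # G: compares each word with its soft-aligned counterpart
        self.compare = nn.Sequential(nn.Linear(2 * embed_dim, hidden_dim), nn.ReLU())
        # H: classifies from the aggregated comparison vectors
        self.aggregate = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # a: premise embeddings    (batch, len_a, embed_dim)
        # b: hypothesis embeddings (batch, len_b, embed_dim)
        # Attend: unnormalized alignment scores e_ij = F(a_i) . F(b_j)
        e = torch.bmm(self.attend(a), self.attend(b).transpose(1, 2))
        # Soft-align each sentence against the other; every word is handled
        # independently, which is what makes the model trivially parallelizable
        beta = torch.bmm(F_.softmax(e, dim=2), b)                    # aligned to a
        alpha = torch.bmm(F_.softmax(e, dim=1).transpose(1, 2), a)  # aligned to b
        # Compare: each word concatenated with its aligned counterpart
        v1 = self.compare(torch.cat([a, beta], dim=2))
        v2 = self.compare(torch.cat([b, alpha], dim=2))
        # Aggregate: sum over words (no word-order information used), classify
        return self.aggregate(torch.cat([v1.sum(1), v2.sum(1)], dim=1))
```

Given premise and hypothesis batches of shape (batch, length, embed_dim), the forward pass returns (batch, 3) logits over the entailment, neutral, and contradiction classes.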

Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit · 2016

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Inference | SNLI (test) | Accuracy | 86.8 | 681
Natural Language Inference | SNLI | Accuracy | 86.3 | 174
Natural Language Inference | SNLI (train) | Accuracy | 90.5 | 154
Natural Language Inference | SciTail (test) | Accuracy | 72.3 | 86
Paraphrase Identification | Quora Question Pairs (test) | Accuracy | 87.77 | 72
Question Answering | Natural Questions (NQ) (dev) | -- | -- | 72
Paraphrase Identification | Quora Question Pairs (dev) | Accuracy | 87.8 | 14
Dialogue Disentanglement | Ubuntu IRC (dev) | VI | 0.874 | 9
Commonsense Reasoning | HSWAG Out-of-Domain (test) | Accuracy | 32.48 | 8
Commonsense Reasoning | SWAG In-Domain (test) | Accuracy | 46.8 | 8

Showing 10 of 16 rows.
