
Text Understanding with the Attention Sum Reader Network

About

Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques, which currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context, as opposed to computing the answer from a blended representation of words in the document, as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. An ensemble of our models sets a new state of the art on all evaluated datasets.
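The core idea, "attention sum," can be sketched in a few lines: instead of blending word representations weighted by attention, the model sums the attention weights assigned to each occurrence of a candidate word in the document and picks the candidate with the largest total. The sketch below assumes the per-position attention weights have already been produced by an encoder; the function and variable names are illustrative, not the authors' code.

```python
# Minimal sketch of the attention-sum (pointer-sum) step.
# Assumes a softmax attention distribution over document positions
# has already been computed by some encoder.
from collections import defaultdict

def attention_sum(doc_tokens, attention, candidates):
    """Aggregate per-position attention into per-candidate scores.

    doc_tokens: list of word ids/tokens, one per document position
    attention:  list of floats (softmax weights, same length)
    candidates: set of tokens that are valid answers
    """
    scores = defaultdict(float)
    for tok, w in zip(doc_tokens, attention):
        if tok in candidates:
            scores[tok] += w  # pool weights over ALL occurrences of the word
    return max(scores, key=scores.get)

# Toy example: "mary" appears twice, so its attention mass is pooled
# (0.30 + 0.15 = 0.45), beating "john" (0.40) even though "john" holds
# the single largest per-position weight.
doc = ["mary", "saw", "john", "then", "mary", "left"]
att = [0.30, 0.05, 0.40, 0.05, 0.15, 0.05]
print(attention_sum(doc, att, {"mary", "john"}))  # prints "mary"
```

Pooling over occurrences is what distinguishes this from a plain pointer: a word mentioned many times can accumulate enough attention mass to win even when no single mention dominates.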

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, Jan Kleindienst · 2016

Related benchmarks

Task                           Dataset            Metric      Result  Rank
Machine Comprehension          CNN (val)          Accuracy    74.5    80
Machine Comprehension          CNN (test)         Accuracy    75.4    77
Machine Comprehension          CBT-NE (test)      Accuracy    71.0    56
Machine Comprehension          CBT-CN (test)      Accuracy    68.9    56
Word Prediction                LAMBADA (test)     Accuracy    44.5    53
Question Answering             SearchQA (test)    N-gram F1   22.8    48
Machine Reading Comprehension  Daily Mail (test)  Accuracy    77.7    46
Machine Comprehension          CBT-CN (val)       Accuracy    72.4    37
Machine Comprehension          CBT-NE (val)       Accuracy    76.2    37
Machine Reading Comprehension  Daily Mail (val)   Accuracy    78.7    36

Showing 10 of 25 rows

Other info

Code
