
Dynamic Integration of Background Knowledge in Neural NLU Systems

About

Common-sense and background knowledge are required to understand natural language, but in most neural natural language understanding (NLU) systems, this knowledge must be acquired from training corpora during learning, and then it is static at test time. We introduce a new architecture for the dynamic integration of explicit background knowledge in NLU models. A general-purpose reading module reads background knowledge in the form of free-text statements (together with task-specific text inputs) and yields refined word representations to a task-specific NLU architecture that reprocesses the task inputs with these representations. Experiments on document question answering (DQA) and recognizing textual entailment (RTE) demonstrate the effectiveness and flexibility of the approach. Analysis shows that our model learns to exploit knowledge in a semantically appropriate way.

Dirk Weissenborn, Tomáš Kočiský, Chris Dyer • 2017
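
For intuition, here is a minimal sketch of the mechanism the abstract describes: a shared reading module encodes both the task inputs and retrieved free-text background statements, and each task word's representation is refined through a gated combination with an attention summary of the knowledge encoding, before being handed to a task-specific model that reprocesses the inputs. This is an assumed illustration, not the authors' code; the class name, layer sizes, and attention scheme are all placeholders.

```python
# A minimal sketch (assumptions throughout): a shared BiLSTM "reading module"
# encodes task text and background statements, then refines each task word's
# embedding with a gated update from the knowledge encoding. Names and
# dimensions are illustrative, not taken from the paper's implementation.
import torch
import torch.nn as nn


class KnowledgeReader(nn.Module):
    """Shared reader over task inputs and free-text background statements."""

    def __init__(self, vocab_size: int, embed_dim: int = 100, hidden_dim: int = 100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.reader = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Gate deciding how much knowledge-derived signal to mix into each word.
        self.gate = nn.Linear(embed_dim + 2 * hidden_dim, embed_dim)
        self.proj = nn.Linear(2 * hidden_dim, embed_dim)

    def forward(self, task_ids: torch.Tensor, knowledge_ids: torch.Tensor):
        task_emb = self.embed(task_ids)                        # (B, T, E)
        task_enc, _ = self.reader(task_emb)                    # (B, T, 2H)
        know_enc, _ = self.reader(self.embed(knowledge_ids))   # (B, K, 2H)

        # Attend from each task word to the background statements.
        scores = task_enc @ know_enc.transpose(1, 2)           # (B, T, K)
        gathered = scores.softmax(dim=-1) @ know_enc           # (B, T, 2H)

        # Gated refinement of the original word representations.
        gate = torch.sigmoid(self.gate(torch.cat([task_emb, gathered], dim=-1)))
        refined = gate * task_emb + (1 - gate) * self.proj(gathered)
        return refined  # handed to the task-specific NLU model


if __name__ == "__main__":
    reader = KnowledgeReader(vocab_size=1000)
    task = torch.randint(0, 1000, (2, 12))       # e.g. question + document tokens
    knowledge = torch.randint(0, 1000, (2, 30))  # retrieved free-text statements
    print(reader(task, knowledge).shape)         # torch.Size([2, 12, 100])
```

The gate is the key design choice in this sketch: when the retrieved statements are irrelevant, it can saturate toward the original embeddings, so supplying background knowledge at test time refines rather than overwrites the word representations.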

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Inference | MultiNLI Mismatched | Accuracy | 77 | 60
Natural Language Inference | MultiNLI Matched | Accuracy | 77.8 | 49
Question Answering | NewsQA (test) | F1 | 56.7 | 31
Question Answering | TriviaQA Wiki domain, Verified (dev) | EM | 53.4 | 21
Question Answering | TriviaQA Wikipedia (dev-full) | F1 | 55.1 | 19
Reading Comprehension | SQuAD (dev) | F1 | 79.7 | 15
Question Answering | TriviaQA Web domain, Verified (test) | EM | 63.2 | 11
Machine Comprehension | TriviaQA Wikipedia Verified (test) | EM | 72.8 | 7
Machine Reading Comprehension | TriviaQA Web (Full) | EM | 67.46 | 7
Machine Reading Comprehension | TriviaQA Web (Verified) | EM | 77.63 | 7

(Showing 10 of 14 rows.)
