
Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction

About

Question answering (QA) over textual sources, for purposes such as reading comprehension (RC), has attracted much attention. This study focuses on explainable multi-hop QA, which requires the system to return an answer together with evidence sentences, obtained by reasoning over and gathering disjoint pieces of the reference texts. It proposes the Query Focused Extractor (QFE) model for evidence extraction, trained jointly with the QA model via multi-task learning. QFE is inspired by extractive summarization models; whereas the existing method extracts each evidence sentence independently, QFE extracts evidence sentences sequentially using an RNN with an attention mechanism over the question sentence. This lets QFE account for dependencies among the evidence sentences and cover the important information in the question. Experimental results show that QFE combined with a simple RC baseline model achieves a state-of-the-art evidence extraction score on HotpotQA. Although QFE is designed for RC, it also achieves a state-of-the-art evidence extraction score on FEVER, a recognizing-textual-entailment task over a large textual database.
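The sequential extraction idea can be illustrated with a minimal pure-Python sketch. This is not the paper's implementation: the embeddings, the "RNN" state update, and the scoring function below are all placeholder assumptions, kept only to show the loop structure (attend over the question conditioned on the current state, score remaining sentences, extract the best one, feed it back into the state).

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def qfe_extract(sent_vecs, question_vecs, n_steps):
    """Greedy sequential evidence extraction (QFE-style sketch).

    sent_vecs: candidate sentence embeddings (lists of floats).
    question_vecs: question word embeddings.
    At each step: attend over the question words with the current state,
    score each not-yet-extracted sentence against state + question
    context, extract the argmax, and update the state with it.
    """
    d = len(question_vecs[0])
    state = [0.0] * d
    extracted = []
    for _ in range(n_steps):
        # attention over the question, conditioned on the current state
        att = softmax([dot(state, q) for q in question_vecs])
        context = [sum(a * q[i] for a, q in zip(att, question_vecs))
                   for i in range(d)]
        # score remaining sentences against state + question context
        key = [st + c for st, c in zip(state, context)]
        scores = {j: dot(s, key)
                  for j, s in enumerate(sent_vecs) if j not in extracted}
        best = max(scores, key=scores.get)
        extracted.append(best)
        # toy stand-in for the RNN update: blend the chosen sentence in
        state = [0.5 * st + 0.5 * sv
                 for st, sv in zip(state, sent_vecs[best])]
    return extracted

# Toy example: the third sentence overlaps both question directions,
# so it is extracted first; earlier picks change later scores.
sents = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
question = [[1.0, 0.0], [0.0, 1.0]]
print(qfe_extract(sents, question, 2))  # → [2, 0]
```

Because each pick updates the state before the next scoring pass, the extractor can avoid redundant sentences and cover the question, which is the dependency-aware behavior the independent per-sentence classifier lacks.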

Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-hop Question Answering | HotpotQA fullwiki setting (test) | Answer F1 | 38.1 | 64 |
| Question Answering | HotpotQA distractor (dev) | Answer F1 | 68.7 | 45 |
| Question Answering | HotpotQA distractor setting (test) | Answer F1 | 68.1 | 34 |
| Supporting Fact Prediction | HotpotQA distractor (dev) | F1 Score | 84.7 | 13 |
| Question Answering | HotpotQA Full Wiki hidden (test) | F1 | 38.1 | 12 |
| Supporting Facts Prediction | HotpotQA Full Wiki hidden (test) | F1 Score | 44.4 | 11 |
| Fact Extraction and Verification | FEVER leaderboard March 2019 (test) | Evidence F1 | 77.7 | 8 |
