
Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards

About

The growth of online consumer health questions has created a need for reliable and accurate question answering systems. A recent study showed that manual summarization of consumer health questions brings significant improvement in retrieving relevant answers. However, automatic summarization of long questions is a challenging task due to the lack of training data and the complexity of the related subtasks, such as question focus and type recognition. In this paper, we introduce a reinforcement learning-based framework for abstractive question summarization. We propose two novel rewards obtained from the downstream tasks of (i) question-type identification and (ii) question-focus recognition to regularize the question generation model. These rewards ensure the generation of semantically valid questions and encourage the inclusion of key medical entities/foci in the question summary. We evaluated our proposed method on two benchmark datasets and achieved higher performance than state-of-the-art models. Manual evaluation of the summaries reveals that the generated questions are more diverse and have fewer factual inconsistencies than the baseline summaries.
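The abstract describes regularizing a question generation model with rewards from two downstream tasks. A common way to wire such rewards into training is a REINFORCE-style policy-gradient loss, where a weighted combination of the reward signals scales the negative log-likelihood of a sampled summary. The sketch below is a minimal, hypothetical illustration of that pattern, not the paper's actual implementation; the function name, the mixing weight `alpha`, and the toy inputs are all assumptions for illustration.

```python
def reinforce_loss(log_probs, r_type, r_focus, alpha=0.5):
    """REINFORCE-style loss combining two semantic rewards.

    log_probs -- per-token log-probabilities of a sampled question summary
    r_type    -- reward from a (hypothetical) question-type classifier
    r_focus   -- reward from a (hypothetical) question-focus recognizer
    alpha     -- illustrative mixing weight between the two rewards
    """
    # Combine the two task-specific rewards into a single scalar.
    reward = alpha * r_type + (1.0 - alpha) * r_focus
    # Log-probability of the whole sampled sequence.
    seq_log_prob = sum(log_probs)
    # Minimizing -reward * log p(y) increases the probability of
    # summaries that score well on both downstream tasks.
    return -reward * seq_log_prob

# Toy usage: two sampled tokens, both reward models fairly confident.
loss = reinforce_loss([-0.1, -0.2], r_type=0.8, r_focus=0.6)
```

In practice such a loss is usually combined with a maximum-likelihood term and a baseline (e.g. self-critical sequence training) to reduce gradient variance, but the scalar form above captures how the two rewards jointly steer generation.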

Shweta Yadav, Deepak Gupta, Asma Ben Abacha, Dina Demner-Fushman• 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Abstractive Question Summarization | MEQSUM 1.0 (test) | ROUGE-1 | 45.52 | 19 |
| Abstractive Question Summarization | MATINF English translated subset (test) | ROUGE-1 | 47.73 | 18 |
