
Few-Shot Generative Conversational Query Rewriting

About

Conversational query rewriting aims to reformulate a concise conversational query to a fully specified, context-independent query that can be effectively handled by existing information retrieval systems. This paper presents a few-shot generative approach to conversational query rewriting. We develop two methods, based on rules and self-supervised learning, to generate weak supervision data using large amounts of ad hoc search sessions, and to fine-tune GPT-2 to rewrite conversational queries. On the TREC Conversational Assistance Track, our weakly supervised GPT-2 rewriter improves the state-of-the-art ranking accuracy by 12%, only using very limited amounts of manual query rewrites. In the zero-shot learning setting, the rewriter still gives a comparable result to previous state-of-the-art systems. Our analyses reveal that GPT-2 effectively picks up the task syntax and learns to capture context dependencies, even for hard cases that involve group references and long-turn dependencies.
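The rule-based weak-supervision idea can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: it replaces anaphoric pronouns in the current turn with the most recent capitalized entity mention from earlier turns, turning a context-dependent query into a self-contained one.

```python
# Minimal sketch of rule-based conversational query rewriting.
# Hypothetical simplification of the paper's rule-based weak supervision:
# replace pronouns in the current turn with the latest capitalized entity
# mention from earlier turns in the session.
import re

PRONOUNS = {"it", "its", "they", "them", "their", "he", "she", "this", "that"}


def last_entity(history):
    """Return the most recent capitalized span mentioned in the history."""
    for query in reversed(history):
        spans = re.findall(r"[A-Z][\w-]*(?:\s+[A-Z][\w-]*)*", query)
        # Drop spans that are just the sentence-initial word.
        spans = [s for s in spans if query.find(s) != 0 or " " in s]
        if spans:
            return spans[-1]
    return None


def rewrite(history, query):
    """Expand pronouns in `query` using the latest entity in `history`."""
    entity = last_entity(history)
    if entity is None:
        return query
    out = []
    for tok in query.split():
        bare = tok.strip("?.,!").lower()
        if bare in PRONOUNS:
            out.append(tok.replace(tok.strip("?.,!"), entity))
        else:
            out.append(tok)
    return " ".join(out)


history = ["Tell me about the Neverending Story film."]
print(rewrite(history, "Who directed it?"))
# → Who directed Neverending Story?
```

Pairs produced this way (original turn, rewritten turn) can serve as weak supervision for fine-tuning a generative rewriter such as GPT-2; the paper's actual rules and self-supervised method are considerably more sophisticated than this toy heuristic.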

Shi Yu, Jiahua Liu, Jingqin Yang, Chenyan Xiong, Paul Bennett, Jianfeng Gao, Zhiyuan Liu • 2020

Related benchmarks

Task                              Dataset           Metric      Result   Rank
Conversational Retrieval          QReCC (test)      Recall@10   53.1     43
Conversational Search Retrieval   TopiOCQA (test)   MRR         12.6     21
Conversational Search             CAsT 20           MRR         37.5     14
Conversational Search             CAsT 19           MRR         66.5     14
Dense Retrieval                   CAsT 20           MRR         37.5     7
Dense Retrieval                   CAsT 19           MRR         66.5     7
