Few-Shot Generative Conversational Query Rewriting
About
Conversational query rewriting aims to reformulate a concise conversational query into a fully specified, context-independent query that can be effectively handled by existing information retrieval systems. This paper presents a few-shot generative approach to conversational query rewriting. We develop two methods, based on rules and self-supervised learning, to generate weak supervision data from large amounts of ad hoc search sessions, and use them to fine-tune GPT-2 to rewrite conversational queries. On the TREC Conversational Assistance Track, our weakly supervised GPT-2 rewriter improves the state-of-the-art ranking accuracy by 12%, using only a very limited amount of manual query rewrites. In the zero-shot learning setting, the rewriter still achieves results comparable to previous state-of-the-art systems. Our analyses reveal that GPT-2 effectively picks up the task syntax and learns to capture context dependencies, even for hard cases that involve group references and long-turn dependencies.
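To make the task framing concrete, the sketch below shows how a conversational session can be serialized into a single sequence that a generative model such as GPT-2 is fine-tuned on: previous turns plus the current query on the input side, the context-independent rewrite as the generation target. The separator tokens and the helper name `build_rewrite_example` are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical serialization for generative query rewriting.
# The [SEP] / [REWRITE] markers are assumed, not taken from the paper.

def build_rewrite_example(history, query, rewrite=None):
    """Serialize a conversation into a GPT-2-style training string.

    history : list of earlier queries in the session
    query   : the current, context-dependent query
    rewrite : the target context-independent rewrite (omit at inference time)
    """
    context = " [SEP] ".join(history + [query])
    prompt = context + " [REWRITE] "
    # During training the rewrite is appended as the continuation to learn;
    # at inference the model generates the text after [REWRITE].
    return prompt + rewrite if rewrite is not None else prompt

# Example: "it" in the current query refers back to the first turn.
history = ["What is the Neverending Story about?"]
prompt = build_rewrite_example(
    history, "Who wrote it?",
    rewrite="Who wrote the Neverending Story?")
print(prompt)
```

At inference time, calling the helper without `rewrite` yields the input prefix; the fine-tuned model then generates the fully specified query as the continuation.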
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Conversational Retrieval | QReCC (test) | Recall@10 | 53.1 | 43 |
| Conversational Search Retrieval | TopiOCQA (test) | MRR | 12.6 | 21 |
| Conversational Search | CAsT 20 | MRR | 37.5 | 14 |
| Conversational Search | CAsT 19 | MRR | 66.5 | 14 |
| Dense Retrieval | CAsT 20 | MRR | 37.5 | 7 |
| Dense Retrieval | CAsT 19 | MRR | 66.5 | 7 |