ConvFiT: Conversational Fine-Tuning of Pretrained Language Models

About

Transformer-based language models (LMs) pretrained on large text collections have been shown to store a wealth of semantic knowledge. However, 1) they are not effective as sentence encoders when used off-the-shelf, and 2) they thus typically lag behind conversationally pretrained encoders (e.g., pretrained via response selection) on conversational tasks such as intent detection (ID). In this work, we propose ConvFiT, a simple and efficient two-stage procedure that turns any pretrained LM into a universal conversational encoder (after Stage 1 ConvFiT-ing) and a task-specialised sentence encoder (after Stage 2). We demonstrate that 1) full-blown conversational pretraining is not required, and that LMs can be quickly transformed into effective conversational encoders with much smaller amounts of unannotated data; and 2) pretrained LMs can be fine-tuned into task-specialised sentence encoders, optimised for the fine-grained semantics of a particular task. Such specialised sentence encoders allow ID to be treated as a simple semantic similarity task based on interpretable nearest-neighbour retrieval. We validate the robustness and versatility of the ConvFiT framework with such similarity-based inference on standard ID evaluation sets: ConvFiT-ed LMs achieve state-of-the-art ID performance across the board, with particular gains in the most challenging few-shot setups.

Ivan Vulić, Pei-Hao Su, Sam Coope, Daniela Gerz, Paweł Budzianowski, Iñigo Casanueva, Nikola Mrkšić, Tsung-Hsien Wen • 2021
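
The similarity-based inference described in the abstract lends itself to a short illustration. Below is a minimal sketch, assuming utterances have already been embedded by a ConvFiT-ed sentence encoder: the random vectors, the `knn_intent` helper, and the choice of k and majority voting are illustrative assumptions rather than the paper's exact inference procedure. The key point it demonstrates is only that ID reduces to nearest-neighbour retrieval in the encoder's embedding space.

```python
import numpy as np

def cosine_sims(query: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and each row of `matrix`."""
    q = query / np.linalg.norm(query)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return m @ q

def knn_intent(query_vec: np.ndarray,
               example_vecs: np.ndarray,
               example_labels: list[str],
               k: int = 5) -> str:
    """Predict an intent by majority vote over the k most similar
    labelled examples (an assumed voting scheme, for illustration)."""
    sims = cosine_sims(query_vec, example_vecs)
    top_k = np.argsort(-sims)[:k]
    votes = [example_labels[i] for i in top_k]
    return max(set(votes), key=votes.count)

# Illustrative usage: random vectors stand in for embeddings that a
# ConvFiT-ed encoder would produce for labelled training utterances.
rng = np.random.default_rng(0)
example_vecs = rng.normal(size=(6, 768))          # 6 labelled utterances
example_labels = ["balance", "balance", "card_lost",
                  "card_lost", "transfer", "transfer"]
query_vec = rng.normal(size=768)                  # embedded user utterance
print(knn_intent(query_vec, example_vecs, example_labels, k=3))
```

Because inference is a retrieval over labelled examples, the predicted intent is directly interpretable: one can inspect exactly which neighbours drove the decision.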

Related benchmarks

Task                   Dataset    Result (Accuracy)   Rank
Intent Classification  Banking77  94.16               70
Intent Classification  CLINC150   97.34               17
Intent Classification  HWU64      92.42               17
