
When to Speak, When to Abstain: Contrastive Decoding with Abstention

About

Large Language Models (LLMs) demonstrate exceptional performance across diverse tasks by leveraging pre-trained (i.e., parametric) and external (i.e., contextual) knowledge. While substantial efforts have been made to enhance the utilization of both forms of knowledge, situations in which models lack relevant information remain underexplored. To investigate this challenge, we first present a controlled testbed featuring four distinct knowledge access scenarios, including the aforementioned edge case, revealing that conventional LLM usage exhibits insufficient robustness in handling all instances. Addressing this limitation, we propose Contrastive Decoding with Abstention (CDA), a novel training-free decoding method that allows LLMs to generate responses when relevant knowledge is available and to abstain otherwise. CDA estimates the relevance of both knowledge sources for a given input, adaptively deciding which type of information to prioritize and which to exclude. Through extensive experiments, we demonstrate that CDA can effectively perform accurate generation and abstention simultaneously, enhancing reliability and preserving user trust.
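The core idea of contrastive decoding with abstention can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's actual implementation: the function name `cda_step`, the relevance inputs `rel_param`/`rel_ctx`, the threshold `tau`, and the contrastive weighting are all hypothetical stand-ins for whatever CDA uses internally.

```python
# Hypothetical sketch of contrastive decoding with abstention.
# Not the paper's implementation; all names and thresholds are assumptions.
import numpy as np

def cda_step(logits_param, logits_ctx, rel_param, rel_ctx, tau=0.3):
    """Pick the next token from two next-token logit vectors:
    one from a parametric-only pass and one from a context-augmented
    pass, weighted by estimated relevance of each knowledge source.
    Returns None (abstain) when neither source looks relevant."""
    if max(rel_param, rel_ctx) < tau:
        return None  # abstain: no relevant knowledge available
    # Prioritize the more relevant source and contrast it against the other,
    # amplifying tokens the preferred source favors over the excluded one.
    if rel_ctx >= rel_param:
        combined = logits_ctx + rel_ctx * (logits_ctx - logits_param)
    else:
        combined = logits_param + rel_param * (logits_param - logits_ctx)
    return int(np.argmax(combined))
```

In this toy setup, a low relevance score for both sources triggers abstention, while a high contextual-relevance score shifts token selection toward what the retrieved context supports.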

Hyuhng Joon Kim, Youna Kim, Sang-goo Lee, Taeuk Kim • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Retrieval-Augmented Generation (RAG) | HotpotQA | Reliability Score (RS) | 51.8 | 52
Retrieval-Augmented Generation (RAG) | NQ | Reliability Score (RS) | 54.33 | 52
Retrieval-Augmented Generation (RAG) | TriviaQA | Reliability Score (RS) | 80.67 | 52
Question Answering | NQ | Flans Score | 73.15 | 4
Question Answering | HotpotQA | Flans Score | 79.32 | 4
Question Answering | TriviaQA | Flans Score | 80.93 | 4
