
R-Tuning: Instructing Large Language Models to Say `I Don't Know'

About

Large language models (LLMs) have revolutionized numerous domains with their impressive performance, but they still face challenges. A predominant issue is their propensity to generate non-existent facts, a concern termed hallucination. Our research is motivated by the observation that previous instruction tuning methods force the model to complete a sentence regardless of whether it knows the relevant knowledge. When a question falls outside the model's parametric knowledge, it tends to fabricate an answer and fails to indicate that it lacks the knowledge. In this paper, we present a new approach called Refusal-Aware Instruction Tuning (R-Tuning). This approach is formalized by first identifying the disparity between the knowledge encoded in the pre-trained parameters and that covered by the instruction tuning data. Then, we construct refusal-aware data based on the knowledge intersection to tune LLMs to refrain from responding to questions beyond their parametric knowledge. Experimental results demonstrate that R-Tuning effectively improves a model's ability to answer known questions and to refrain from answering unknown questions. Furthermore, when tested on out-of-domain datasets, the refusal ability proves to be a meta-skill that generalizes to other tasks. Further analysis surprisingly finds that learning uncertainty results in better calibration and a better ability to estimate uncertainty than uncertainty-based testing. Our code is available at https://github.com/shizhediao/R-Tuning.
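The refusal-aware data construction described above can be sketched as follows. This is a minimal illustration, not the authors' released implementation: `model_answer` is a hypothetical callable standing in for querying the pre-trained model, and the "I am sure" / "I am unsure" suffixes are one plausible phrasing of the certainty expressions the tuning data would carry.

```python
def build_refusal_aware_data(qa_pairs, model_answer):
    """Split instruction data by whether the pre-trained model already
    answers each question correctly, then append a certainty expression.

    qa_pairs: iterable of (question, gold_answer) pairs.
    model_answer: hypothetical helper that returns the pre-trained
        model's answer to a question (an assumption for this sketch).
    """
    refusal_aware = []
    for question, answer in qa_pairs:
        prediction = model_answer(question)
        if prediction.strip().lower() == answer.strip().lower():
            # Known question: parametric knowledge covers it,
            # so the tuning target expresses certainty.
            target = f"{answer}. I am sure."
        else:
            # Unknown question: the tuning target teaches the
            # model to express uncertainty instead of fabricating.
            target = f"{answer}. I am unsure."
        refusal_aware.append((question, target))
    return refusal_aware
```

Fine-tuning on data built this way is what lets the model learn, from the intersection of its parametric knowledge and the instruction data, when to answer and when to refuse.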

Hanning Zhang, Shizhe Diao, Yong Lin, Yi R. Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, Tong Zhang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Open-domain Question Answering | NaturalQuestions (NQ) | SubEM | 46.2 | 40 |
| Multi-hop Question Answering | HotpotQA | SubEM | 28.36 | 40 |
| Open-domain Question Answering | TriviaQA | SubEM | 57.1 | 40 |
| Factual Question Answering | SciQ (ID) | Precision | 71.38 | 24 |
| Factual Question Answering | TVQA (ID) | Precision | 72.93 | 24 |
| Factual Question Answering | LSQA (OOD) | Precision | 71.54 | 24 |
| Factual Question Answering | ID Datasets Average | Precision | 64.04 | 24 |
| Factual Question Answering | NQ-Open (ID) | Precision | 47.81 | 24 |
| Multi-hop Question Answering | MuSiQue (Full) | C Score | 65.8 | 22 |
| Multi-hop Question Answering | HotpotQA (Full) | C (Correctness) | 64.8 | 22 |

Showing 10 of 12 rows.
