
Inference-Time Intervention: Eliciting Truthful Answers from a Language Model

About

We introduce Inference-Time Intervention (ITI), a technique designed to enhance the "truthfulness" of large language models (LLMs). ITI operates by shifting model activations during inference, following a set of directions across a limited number of attention heads. This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark. On an instruction-finetuned LLaMA called Alpaca, ITI improves its truthfulness from 32.5% to 65.1%. We identify a tradeoff between truthfulness and helpfulness and demonstrate how to balance it by tuning the intervention strength. ITI is minimally invasive and computationally inexpensive. Moreover, the technique is data efficient: while approaches like RLHF require extensive annotations, ITI locates truthful directions using only a few hundred examples. Our findings suggest that LLMs may have an internal representation of the likelihood of something being true, even as they produce falsehoods on the surface.
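The core mechanic described above, shifting activations of selected attention heads along fixed "truthful" directions at inference time, can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: the function name, array shapes, and the per-head standard-deviation scaling are assumptions for demonstration, and in the actual method the directions come from linear probes trained on a few hundred labeled examples.

```python
import numpy as np

def apply_iti(head_activations, directions, top_heads, alpha=15.0):
    """Hypothetical sketch of Inference-Time Intervention (ITI).

    head_activations: (num_heads, head_dim) activations at the current token.
    directions: (num_heads, head_dim) candidate truthful directions,
        e.g. learned by probes on a small labeled set (assumed given here).
    top_heads: indices of the few heads selected for intervention.
    alpha: intervention strength; larger values push harder toward
        truthfulness at some cost to helpfulness.
    """
    shifted = head_activations.copy()
    for h in top_heads:
        # Normalize to a unit direction, then shift the head's activation
        # along it, scaled by alpha and the head's activation spread.
        d = directions[h] / np.linalg.norm(directions[h])
        sigma = head_activations[h].std()
        shifted[h] = shifted[h] + alpha * sigma * d
    return shifted

# Toy usage: shift heads 1 and 3 out of 4, leaving the rest untouched.
rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 8))
dirs = rng.normal(size=(4, 8))
out = apply_iti(acts, dirs, top_heads=[1, 3], alpha=2.0)
```

Because only a handful of heads receive a constant additive shift, the intervention adds negligible compute per token, which is why the abstract can describe ITI as minimally invasive and inexpensive.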

Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy | 72 | 1891 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 66.7 | 770 |
| Language Modeling | WikiText | PPL | 11 | 732 |
| Multitask Language Understanding | MMLU | Accuracy | 72.72 | 413 |
| Question Answering | BoolQ | Accuracy | 74.1 | 317 |
| Question Answering | TruthfulQA | Accuracy | 52.68 | 152 |
| Logical Reasoning | LogiQA (test) | Accuracy | 37.3 | 151 |
| Language Understanding | MMLU 5-shot | -- | -- | 132 |
| Question Answering | WinoGrande (WG) | Accuracy | 52.8 | 124 |
| Massive Multitask Language Understanding | MMLU | Accuracy | 60.1 | 117 |
Showing 10 of 99 rows
...
