
UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models

About

Despite demonstrating impressive capabilities, Large Language Models (LLMs) still often struggle to accurately express the factual knowledge they possess, especially when their knowledge boundaries are ambiguous. To improve LLMs' factual expressions, we propose the UAlign framework, which leverages Uncertainty estimations to represent knowledge boundaries and then explicitly incorporates these representations as input features into prompts for LLMs to Align with factual knowledge. First, we construct a dataset of knowledge question-answering (QA) samples by calculating two uncertainty estimations, the confidence score and the semantic entropy, to represent the knowledge boundaries of LLMs. Subsequently, using the prepared dataset, we train a reward model that incorporates uncertainty estimations and then employ the Proximal Policy Optimization (PPO) algorithm for factuality alignment on LLMs. Experimental results indicate that, by integrating uncertainty representations into LLM alignment, the proposed UAlign significantly enhances the LLMs' capacity to confidently answer known questions and refuse unknown questions on both in-domain and out-of-domain tasks, showing reliability improvements and good generalizability over various prompt- and training-based baselines.
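The first step described in the abstract, estimating a confidence score and a semantic entropy over sampled answers, can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the sample-agreement confidence proxy and the exact-match equivalence check (`are_equivalent`) are simplifying assumptions, where the paper would typically use model likelihoods and an entailment-based equivalence test.

```python
import math

def confidence_score(sampled_answers, reference_answer):
    """Proxy confidence: fraction of sampled answers that match the
    reference answer (assumption: exact match after normalization)."""
    matches = sum(a.strip().lower() == reference_answer.strip().lower()
                  for a in sampled_answers)
    return matches / len(sampled_answers)

def semantic_entropy(sampled_answers, are_equivalent=None):
    """Cluster sampled answers into semantic-equivalence classes, then
    compute Shannon entropy over the cluster frequencies."""
    if are_equivalent is None:
        # Illustrative equivalence check; a real pipeline would use an
        # NLI/entailment model to decide semantic equivalence.
        are_equivalent = lambda a, b: a.strip().lower() == b.strip().lower()
    clusters = []
    for ans in sampled_answers:
        for cluster in clusters:
            if are_equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    n = len(sampled_answers)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Five hypothetical sampled answers to "What is the capital of France?"
samples = ["Paris", "paris", "Paris", "Lyon", "Paris"]
print(confidence_score(samples, "Paris"))   # 0.8
print(semantic_entropy(samples))            # ≈ 0.50 (clusters: Paris x4, Lyon x1)
```

Both scalars could then be serialized into the prompt (e.g. `"Confidence: 0.80, Entropy: 0.50"`) as the input features the reward model conditions on.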

Boyang Xue, Fei Mi, Qi Zhu, Hongru Wang, Rui Wang, Sheng Wang, Erxin Yu, Xuming Hu, Kam-Fai Wong • 2024

Related benchmarks

| Task | Dataset | Precision | Rank |
|---|---|---|---|
| Factual Question Answering | TVQA (ID) | 82.1 | 24 |
| Factual Question Answering | SciQ (ID) | 76.44 | 24 |
| Factual Question Answering | ID Datasets Average | 70.82 | 24 |
| Factual Question Answering | LSQA (OOD) | 79.56 | 24 |
| Factual Question Answering | NQ-Open (ID) | 56.68 | 24 |

Other info

Code
