Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation

About

LLM self-evaluation relies on the LLM's own ability to estimate response correctness, which can greatly improve its deployment reliability. In this research track, we propose the Chain-of-Embedding (CoE) in the latent space to enable LLMs to perform output-free self-evaluation. CoE consists of all progressive hidden states produced during inference time, which can be treated as the latent thinking path of the LLM. We find that the CoE features differ when LLMs respond correctly versus incorrectly, and these discrepancies help us estimate LLM response correctness. Experiments across four diverse domains and seven LLMs demonstrate the effectiveness of our method. Meanwhile, its label-free design, which requires no training, and its millisecond-level computational cost ensure real-time feedback in large-scale scenarios. More importantly, we provide interesting insights into LLM response correctness from the perspective of hidden-state changes inside LLMs.
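To make the idea concrete, here is a minimal sketch of what a CoE-style trajectory feature could look like. It is an illustrative assumption, not the paper's exact formulation: we take one embedding per layer (e.g. mean-pooled hidden states), treat the layer-to-layer differences as steps along the latent "thinking path", and summarize the path by the average step magnitude and the average turning angle between consecutive steps. The function name `coe_features` and the toy inputs are hypothetical.

```python
import numpy as np

def coe_features(hidden_states):
    """Summarize an LLM's latent 'thinking path' with simple trajectory stats.

    hidden_states: array of shape (num_layers, hidden_dim), one
    (e.g. mean-pooled) embedding per layer for a single response.
    Returns (mean step magnitude, mean angle change between steps).
    """
    hs = np.asarray(hidden_states, dtype=float)
    steps = hs[1:] - hs[:-1]                 # layer-to-layer movement
    mags = np.linalg.norm(steps, axis=1)     # magnitude of each step
    # cosine of the angle between consecutive steps (1e-12 avoids /0)
    cos = np.sum(steps[:-1] * steps[1:], axis=1) / (
        np.linalg.norm(steps[:-1], axis=1)
        * np.linalg.norm(steps[1:], axis=1) + 1e-12
    )
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return mags.mean(), angles.mean()

# Toy usage: a smooth (straight) trajectory vs. an erratic one.
rng = np.random.default_rng(0)
smooth = np.cumsum(np.full((8, 4), 0.1), axis=0)   # straight latent path
erratic = rng.normal(size=(8, 4))                  # noisy latent path
m_s, a_s = coe_features(smooth)
m_e, a_e = coe_features(erratic)
print(a_s < a_e)  # a straight path turns less than a noisy one
```

The intuition behind such features is the one stated above: correct and incorrect responses tend to trace differently shaped hidden-state trajectories, so scalar summaries of the trajectory can be thresholded or scored (e.g. via AUROC) without any labels or extra training.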

Yiming Wang, Pei Zhang, Baosong Yang, Derek F. Wong, Rui Wang• 2024

Related benchmarks

Task                         | Dataset      | Metric           | Result | Rank
-----------------------------|--------------|------------------|--------|-----
Mathematical Reasoning       | GSM8K        | EM               | 33     | 123
Hallucination Detection      | CSQA         | AUROC            | 66.89  | 107
Hallucination Detection      | GSM8K        | AUROC            | 75.5   | 93
Mathematical Reasoning       | GSM-Symbolic | GSM-Sym Accuracy | 25.9   | 73
Reasoning                    | MATH         | AUROC            | 0.7668 | 46
Self-evaluation              | ViLP         | AUROC            | 60.5   | 36
Self-evaluation              | VisualCoT    | AUROC            | 61.4   | 36
Self-evaluation              | MMVet        | AUROC            | 0.608  | 36
Self-evaluation              | CVBench      | AUROC            | 0.536  | 36
Reasoning Quality Assessment | Social-IQA   | AUROC            | 74.86  | 34

Showing 10 of 21 rows.
