
Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation

About

LLM self-evaluation relies on the LLM's own ability to estimate the correctness of its responses, which can greatly improve its deployment reliability. In this research track, we propose Chain-of-Embedding (CoE) in the latent space to enable LLMs to perform output-free self-evaluation. CoE consists of all the progressive hidden states produced during inference, which can be treated as the latent thinking path of the LLM. We find that CoE features differ between correct and incorrect responses, and these discrepancies help us estimate LLM response correctness. Experiments across four diverse domains and seven LLMs demonstrate the effectiveness of our method. Meanwhile, its label-free design, which requires no training, and its millisecond-level computational cost ensure real-time feedback in large-scale scenarios. More importantly, we provide interesting insights into LLM response correctness from the perspective of hidden-state changes inside LLMs.
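The idea of treating the per-layer hidden states as a latent trajectory can be sketched as follows. This is a minimal numpy illustration, not the paper's exact scoring formula: `coe_features` summarizes a layer-wise hidden-state trajectory by the magnitude of each layer-to-layer step and the turn angle between consecutive steps, and `coe_score` is a hypothetical aggregation of those features into a single confidence-style number.

```python
import numpy as np

def coe_features(hidden_states: np.ndarray):
    """Summarize a layer-wise hidden-state trajectory (a "chain of embedding").

    hidden_states: array of shape (num_layers, hidden_dim), one hidden
    vector per layer for a given generation step.
    Returns (step magnitudes, turn angles between consecutive steps).
    """
    steps = np.diff(hidden_states, axis=0)          # h_{l+1} - h_l
    mags = np.linalg.norm(steps, axis=1)            # how far each layer moves
    # cosine of the angle between consecutive steps, then the angle itself
    dots = np.einsum("ld,ld->l", steps[:-1], steps[1:])
    norms = np.linalg.norm(steps[:-1], axis=1) * np.linalg.norm(steps[1:], axis=1)
    cosines = dots / np.clip(norms, 1e-12, None)
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))
    return mags, angles

def coe_score(hidden_states: np.ndarray) -> float:
    # Hypothetical aggregation for illustration only: mean step magnitude
    # plus mean turn angle. The paper defines its own combination.
    mags, angles = coe_features(hidden_states)
    return float(mags.mean() + angles.mean())
```

In practice the per-layer hidden states would come from the model itself (e.g., a forward pass that exposes all layer outputs); a score like this could then be thresholded, or fed to AUROC evaluation, to separate correct from incorrect responses.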

Yiming Wang, Pei Zhang, Baosong Yang, Derek F. Wong, Rui Wang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | EM | 33 | 115 |
| Hallucination Detection | CSQA | AUROC | 66.89 | 55 |
| Hallucination Detection | GSM8K | AUROC | 75.5 | 53 |
| Mathematical Reasoning | GSM-Symbolic | GSM-Sym Accuracy | 25.9 | 43 |
| Self-evaluation | ViLP | AUROC | 60.5 | 36 |
| Self-evaluation | VisualCoT | AUROC | 61.4 | 36 |
| Self-evaluation | MMVet | AUROC | 0.608 | 36 |
| Self-evaluation | CVBench | AUROC | 0.536 | 36 |
| Hallucination Detection | AQUA | AUROC | 0.7213 | 31 |
| Reasoning | MATH | AUROC | 0.7668 | 14 |

Showing 10 of 16 rows.
