
HoT: Highlighted Chain of Thought for Referencing Supporting Facts from Inputs

About

An Achilles heel of Large Language Models (LLMs) is their tendency to hallucinate non-factual statements. A response that mixes factual and non-factual statements is hard for humans to verify and to base decisions on accurately. To combat this problem, we propose Highlighted Chain-of-Thought Prompting (HoT), a technique for prompting LLMs to generate responses with XML tags that ground facts to those provided in the question. That is, given an input question, LLMs first re-format the question to add XML tags highlighting key facts, and then generate a response with highlights over the facts referenced from the input. Compared to vanilla chain-of-thought prompting (CoT), HoT reduces the rate of hallucination and consistently improves LLM accuracy on over 22 tasks, ranging from arithmetic and reading comprehension to logical reasoning. When asked to verify LLM responses, time-limited participants recognize more accurately and efficiently when LLMs are correct, thanks to the highlights. Yet, surprisingly, when LLMs are wrong, HoTs tend to fool users into believing that an answer is correct.
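The tagging-and-grounding idea can be sketched in a few lines of Python. The example below is hypothetical (the question, answer, and `<factN>` tag names are illustrative, not the authors' exact prompt format): the re-formatted question wraps key facts in XML tags, the answer re-uses the same tags, and a simple check confirms that every fact the answer highlights also appears in the tagged question.

```python
import re

# Hypothetical HoT-style example: the question is re-formatted with XML
# tags around key facts, and the answer references those same tags.
tagged_question = (
    "Alice has <fact1>3 apples</fact1> and buys "
    "<fact2>5 more</fact2>. How many apples does she have?"
)
tagged_answer = (
    "Alice starts with <fact1>3 apples</fact1> and adds "
    "<fact2>5 more</fact2>, so she has 3 + 5 = 8 apples."
)

def extract_facts(text: str) -> dict:
    """Map each factN tag to the span it highlights."""
    return dict(re.findall(r"<(fact\d+)>(.*?)</\1>", text))

q_facts = extract_facts(tagged_question)
a_facts = extract_facts(tagged_answer)

# Every fact highlighted in the answer should be grounded in the question.
grounded = all(q_facts.get(tag) == span for tag, span in a_facts.items())
print(grounded)  # True: both highlighted spans match the question's tags
```

A verification step like `grounded` is only a sanity check on tag consistency; the paper's human study concerns whether such highlights help people judge the answer itself.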

Tin Nguyen, Logan Bolton, Mohammad Reza Taesiri, Trung Bui, Anh Totti Nguyen • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| General Knowledge Reasoning | MMLU CF | Accuracy | 70.4 | 55 |
| Long-context Reasoning | LongBench | Accuracy (LongBench) | 58.6 | 45 |
| Grade School Math Word Problems | GSM8K | Accuracy | 0.92 | 42 |
| Multi-hop Question Answering | MuSiQue | Accuracy | 37.2 | 24 |
| Reasoning | 17 Reasoning Benchmarks Aggregate (test) | Accuracy | 90.71 | 21 |
| Reasoning | r-GSM, Seven Objects, and Date | Accuracy | 91.87 | 18 |
| Advanced Mathematical Reasoning | OlympiadBench | Accuracy | 12 | 18 |
| General Reasoning | BBH | Accuracy | 80.4 | 18 |
| General Reasoning | BBH | Relative Cost | 1.23 | 14 |
| General Reasoning | MMLU CF | Relative Cost | 1.12 | 14 |
Showing 10 of 16 rows
