
Evaluating Chain-of-Thought Reasoning through Reusability and Verifiability

About

In multi-agent IR pipelines for tasks such as search and ranking, LLM-based agents exchange intermediate reasoning in the form of Chain-of-Thought (CoT) traces. Current CoT evaluation focuses narrowly on target-task accuracy, a metric that fails to assess the quality or utility of the reasoning process itself. To address this limitation, we introduce two novel measures: reusability and verifiability. We decouple CoT generation from execution using a Thinker-Executor framework: reusability measures how easily an Executor can reuse the Thinker's CoT, and verifiability measures how often an Executor can match the Thinker's answer using that CoT. We evaluated four Thinker models against a committee of ten Executor models across five benchmarks. Our results reveal that reusability and verifiability do not correlate with standard accuracy, exposing a blind spot in current accuracy-based leaderboards for reasoning capability. Surprisingly, we find that CoTs from specialized reasoning models are not consistently more reusable or verifiable than those from general-purpose LLMs such as Llama and Gemma.
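The two measures can be read as committee statistics over the Executors. The following is a minimal sketch, not the paper's implementation: the `ExecutorResult` structure, its `reused_cot` signal, and the function names are hypothetical placeholders for however one elicits and scores Executor behavior.

```python
from dataclasses import dataclass

@dataclass
class ExecutorResult:
    answer: str        # Executor's final answer after reading the Thinker's CoT
    reused_cot: bool   # hypothetical signal: could the Executor reuse the CoT?

def verifiability(thinker_answer: str, results: list[ExecutorResult]) -> float:
    """Fraction of Executors whose answer matches the Thinker's, given its CoT."""
    return sum(r.answer == thinker_answer for r in results) / len(results)

def reusability(results: list[ExecutorResult]) -> float:
    """Fraction of Executors that could reuse the Thinker's CoT."""
    return sum(r.reused_cot for r in results) / len(results)

# Toy committee of ten Executors, mirroring the paper's committee size.
committee = [ExecutorResult("42", True)] * 7 + [ExecutorResult("41", False)] * 3
print(verifiability("42", committee))  # 0.7
print(reusability(committee))          # 0.7
```

Under this reading, a CoT can score high on one measure and low on the other, which is why neither is recoverable from end-task accuracy alone.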

Shashank Aggarwal, Ram Vikas Mishra, Amit Awekar • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Strategy-based Question Answering | StrategyQA | - | 16 |
| Commonsense Question Answering | Commonsense QA | - | 12 |
| Mathematical Reasoning | GSM8K | - | 12 |
| Mathematical Reasoning | SVAMP | - | 12 |
| Mathematical Reasoning | GSM8K | - | 12 |
| Mathematical Reasoning | SVAMP | - | 12 |
| Science Question Answering | ARC | - | 12 |
| Science Question Answering | ARC | - | 12 |
| Strategic Question Answering | StrategyQA | - | 12 |
| Mathematical Reasoning | GSM8K | - | 4 |

Showing 10 of 12 rows.
