
Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching

About

Recent advances in large language models (LLMs) have enabled strong reasoning capabilities through Chain-of-Thought (CoT) prompting, which elicits step-by-step problem solving, but often at the cost of excessive verbosity in intermediate outputs, leading to increased computational overhead. We propose Sketch-of-Thought (SoT), a prompting framework that integrates cognitively inspired reasoning paradigms with linguistic constraints to reduce token usage while preserving reasoning accuracy. SoT is designed as a flexible, modular approach and is instantiated with three paradigms (Conceptual Chaining, Chunked Symbolism, and Expert Lexicons), each tailored to distinct reasoning tasks and selected dynamically at test time by a lightweight routing model. Across 18 reasoning datasets spanning multiple domains, languages, and modalities, SoT achieves token reductions of up to 84% with minimal accuracy loss. In tasks such as mathematical and multi-hop reasoning, it even improves accuracy while shortening outputs.

Simon A. Aytes, Jinheon Baek, Sung Ju Hwang • 2025
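
To make the route-then-prompt flow described in the abstract concrete, here is a minimal Python sketch of how paradigm selection and prompt assembly could be wired together. It is an illustration under stated assumptions, not the paper's implementation: the router is stubbed with keyword heuristics in place of the trained lightweight routing model, and the instruction strings are paraphrases rather than SoT's actual prompts or exemplars.

```python
# Minimal sketch of Sketch-of-Thought-style paradigm routing and prompt assembly.
# Assumptions: the keyword heuristics below stand in for the lightweight routing
# model, and the system instructions are illustrative paraphrases of the three
# paradigms, not the paper's exact prompts.

PARADIGM_INSTRUCTIONS = {
    "chunked_symbolism": (
        "Reason in compact symbolic steps: define variables, write equations, "
        "and keep each line to the minimal math needed."
    ),
    "conceptual_chaining": (
        "Reason as a short chain of linked concepts (A -> B -> C), omitting filler prose."
    ),
    "expert_lexicons": (
        "Reason using terse domain shorthand and abbreviations an expert would use."
    ),
}


def route_paradigm(question: str) -> str:
    """Stand-in for the routing model: pick one paradigm per question."""
    q = question.lower()
    if any(tok in q for tok in ("how many", "calculate", "sum", "%", "cost")):
        return "chunked_symbolism"      # arithmetic / quantitative questions
    if any(tok in q for tok in ("dose", "diagnosis", "voltage", "protocol")):
        return "expert_lexicons"        # specialized technical domains
    return "conceptual_chaining"        # default for commonsense / multi-hop questions


def build_prompt(question: str) -> list[dict]:
    """Assemble a chat-style prompt carrying the selected concise-reasoning instruction."""
    paradigm = route_paradigm(question)
    return [
        {"role": "system", "content": PARADIGM_INSTRUCTIONS[paradigm]},
        {"role": "user", "content": question},
    ]


if __name__ == "__main__":
    demo = "A book costs $12 and a pen costs $3. How many pens can I buy with $27?"
    for msg in build_prompt(demo):
        print(f"[{msg['role']}] {msg['content']}")
```

The key design point the sketch tries to show is that the paradigm is chosen per question at inference time, so the same LLM can be steered toward whichever compact reasoning style fits the task, rather than using one verbose CoT prompt for everything.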

Related benchmarks

Task                         | Dataset  | Metric   | Result | Rank
Mathematical Reasoning       | GSM8K    | Accuracy | 43.42  | 983
Mathematical Reasoning       | MATH     | Accuracy | 46.92  | 643
Fact Verification            | FEVER    | Accuracy | 0.522  | 67
Multi-hop Question Answering | HotpotQA | F1       | 59.04  | 48
Question Answering           | StrQA    | Accuracy | 59.8   | 24
Question Answering           | ComQA    | Accuracy | 64.25  | 18
