
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models

About

We introduce Buffer of Thoughts (BoT), a novel and versatile thought-augmented reasoning approach for enhancing the accuracy, efficiency, and robustness of large language models (LLMs). Specifically, we propose a meta-buffer to store a series of informative high-level thoughts, namely thought-templates, distilled from the problem-solving processes across various tasks. Then, for each problem, we retrieve a relevant thought-template and adaptively instantiate it with specific reasoning structures to conduct efficient reasoning. To guarantee scalability and stability, we further propose a buffer-manager to dynamically update the meta-buffer, thus enhancing its capacity as more tasks are solved. We conduct extensive experiments on 10 challenging reasoning-intensive tasks and achieve significant performance improvements over previous SOTA methods: 11% on Game of 24, 20% on Geometric Shapes, and 51% on Checkmate-in-One. Further analysis demonstrates the superior generalization ability and model robustness of our BoT, while requiring only 12% of the cost of multi-query prompting methods (e.g., tree/graph of thoughts) on average. Notably, we find that our Llama3-8B+BoT has the potential to surpass the Llama3-70B model. Our project is available at: https://github.com/YangLing0818/buffer-of-thought-llm

Ling Yang, Zhaochen Yu, Tianjun Zhang, Shiyi Cao, Minkai Xu, Wentao Zhang, Joseph E. Gonzalez, Bin Cui • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | Game of 24 | Accuracy | 83.7 | 62 |
| Code Generation | LiveCodeBench | Pass@1 | 1.04e+3 | 37 |
| Reasoning | Multi-Step Arithmetic | Accuracy | 96.8 | 28 |
| Visual Reasoning | Geometric Shapes | Accuracy | 90.7 | 28 |
| Question Answering | GPQA Diamond | Solve Rate | 16.67 | 27 |
| Math Problem Solving | GSM8K | Solve Rate | 39.73 | 27 |
| Reasoning | Word Sorting | Accuracy | 99.6 | 24 |
| Reasoning | Checkmate-in-One | Accuracy | 88.3 | 24 |
| Reasoning | Python Puzzles | Accuracy | 52.8 | 24 |
| Reasoning | MGSM | Accuracy | 87.9 | 24 |

(showing 10 of 12 rows)
