
Efficient Tool Use with Chain-of-Abstraction Reasoning

About

To achieve faithful reasoning that aligns with human expectations, large language models (LLMs) need to ground their reasoning in real-world knowledge (e.g., web facts, math and physical rules). Tools help LLMs access this external knowledge, but challenges remain for fine-tuning LLM agents (e.g., Toolformer) to invoke tools in multi-step reasoning problems, where interconnected tool calls require holistic and efficient tool usage planning. In this work, we propose a new method for LLMs to better leverage tools in multi-step reasoning. Our method, Chain-of-Abstraction (CoA), trains LLMs to first decode reasoning chains with abstract placeholders, and then call domain tools to reify each reasoning chain by filling in specific knowledge. This planning with abstract chains enables LLMs to learn more general reasoning strategies, which are robust to shifts of domain knowledge (e.g., math results) relevant to different reasoning questions. It also allows LLMs to perform decoding and calling of external tools in parallel, which avoids the inference delay caused by waiting for tool responses. In mathematical reasoning and Wiki QA domains, we show that our method consistently outperforms previous chain-of-thought and tool-augmented baselines on both in-distribution and out-of-distribution test sets, with an average ~6% absolute QA accuracy improvement. LLM agents trained with our method also show more efficient tool use, with inference speed being on average ~1.4x faster than baseline tool-augmented LLMs.
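To illustrate the reification step described above, the sketch below parses an abstract reasoning chain containing placeholders (e.g., [y1], [y2]) and fills them in sequentially with a tool call. This is a minimal, hypothetical example, not the paper's implementation: the chain format, placeholder syntax, and the use of Python's eval as a stand-in for a real math tool are all assumptions made for illustration.

```python
import re

def reify_chain(abstract_chain: str) -> str:
    """Fill abstract placeholders like [y1] in a reasoning chain.

    Each step is assumed to have the form '<expr> = [yN]', where <expr>
    may reference placeholders resolved in earlier steps. This mirrors
    the CoA idea: plan first with abstract variables, then resolve them
    with a domain tool.
    """
    values = {}  # placeholder name -> computed value
    for lhs, name in re.findall(r"([^=;]+)=\s*\[(y\d+)\]", abstract_chain):
        expr = lhs
        for ph, val in values.items():  # substitute earlier results
            expr = expr.replace(f"[{ph}]", str(val))
        # eval() stands in for an external calculator tool call
        values[name] = eval(expr)
    # Write the resolved values back into the chain
    chain = abstract_chain
    for ph, val in values.items():
        chain = chain.replace(f"[{ph}]", str(val))
    return chain

print(reify_chain("20 + 35 = [y1]; [y1] * 2 = [y2]"))
# → 20 + 35 = 55; 55 * 2 = 110
```

Because the abstract chain is decoded in full before any tool is called, the tool calls for independent steps could in principle be dispatched in parallel, which is the source of the inference speedup the paper reports.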

Silin Gao, Jane Dwivedi-Yu, Ping Yu, Xiaoqing Ellen Tan, Ramakanth Pasunuru, Olga Golovneva, Koustuv Sinha, Asli Celikyilmaz, Antoine Bosselut, Tianlu Wang • 2024

Related benchmarks

| Task                   | Dataset                | Metric            | Result | Rank |
|------------------------|------------------------|-------------------|--------|------|
| Question Answering     | ARC Challenge          | Accuracy          | 51     | 749  |
| Reasoning              | BBH                    | Accuracy          | 42.9   | 507  |
| Mathematical Reasoning | ASDIV                  | Accuracy          | 0.951  | 221  |
| Mathematical Reasoning | MAWPS                  | Accuracy          | 98.3   | 219  |
| Mathematical Reasoning | MATH                   | Accuracy          | 83.7   | 162  |
| Mathematical Reasoning | CollegeMATH            | Accuracy          | 47.2   | 161  |
| Mathematical Reasoning | TabMWP                 | Accuracy          | 92.9   | 157  |
| Mathematical Reasoning | AQUA                   | Accuracy          | 72.8   | 132  |
| Mathematical Reasoning | SAT Math               | SAT Math Accuracy | 90     | 44   |
| Mathematical Reasoning | GSM-Symbolic Vary Num. | Accuracy          | 75.06  | 36   |

(Showing 10 of 18 rows)
