
Automatic Model Selection with Large Language Models for Reasoning

About

Chain-of-Thought (CoT) and Program-Aided Language Models (PAL) represent two distinct reasoning methods, each with its own strengths. CoT employs natural language, offering flexibility and interpretability, while PAL uses programming language, yielding more structured and rigorous logic. We introduce a model selection method that combines the best of both worlds by employing a large language model (LLM) to dynamically select between them. Our theoretical analysis underscores the feasibility of this method, which is further corroborated by empirical results. Our proposed method demonstrates significant performance improvements across eight reasoning datasets with Codex, ChatGPT, and GPT-4. Additionally, our method is complementary to self-consistency; when integrated, it can further enhance performance while significantly reducing computation costs. Moreover, we achieve new state-of-the-art results on GSM8K and SVAMP, with respective accuracies of 96.8% and 93.7%. Our code, data, and prompts are available at https://github.com/XuZhao0/Model-Selection-Reasoning
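The selection procedure described above — generate a CoT solution and a PAL solution, then ask an LLM to pick between them — can be sketched as follows. This is a minimal illustration, not the authors' implementation: `call_llm`, `cot_solve`, and `pal_solve` are hypothetical stand-ins for real model calls (e.g., to Codex, ChatGPT, or GPT-4), and the selection prompt is only an approximation of the paper's prompts.

```python
def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call; a real implementation
    # would send the prompt to a model and return its completion.
    # Here we deterministically prefer the program-aided solution.
    return "(B)"

def cot_solve(question: str) -> str:
    """Chain-of-Thought: free-form natural-language reasoning (stubbed)."""
    return "Six groups of seven items make 6 * 7 = 42. The answer is 42."

def pal_solve(question: str) -> str:
    """Program-Aided: generate Python code, then execute it for the answer."""
    generated_code = "answer = 6 * 7"   # stand-in for model-generated code
    scope: dict = {}
    exec(generated_code, scope)          # in practice, sandbox generated code
    return f"The program computes: {scope['answer']}"

def select_answer(question: str) -> str:
    """Ask the LLM to choose between the two candidate solutions."""
    cot = cot_solve(question)
    pal = pal_solve(question)
    prompt = (
        f"Question: {question}\n"
        f"(A) Chain-of-Thought solution: {cot}\n"
        f"(B) Program-aided solution: {pal}\n"
        "Which solution is correct? Answer (A) or (B)."
    )
    choice = call_llm(prompt)
    return cot if "(A)" in choice else pal

print(select_answer("What is 6 times 7?"))
```

Because both candidate answers are produced before selection, the scheme composes naturally with self-consistency: each method can be sampled multiple times and the selector applied to the aggregated candidates.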

James Xu Zhao, Yuxi Xie, Kenji Kawaguchi, Junxian He, Michael Qizhe Xie • 2023

Related benchmarks

Task                    Dataset             Metric    Result  Rank
Mathematical Reasoning  GSM8K (test)        Accuracy  90.1    900
Mathematical Reasoning  GSM8K (test)        Accuracy  80.8    770
Mathematical Reasoning  SVAMP (test)        Accuracy  93.7    262
Arithmetic Reasoning    MultiArith          Accuracy  99.7    229
Arithmetic Reasoning    GSM8K               Accuracy  95.6    173
Arithmetic Reasoning    GSM8K (test)        Accuracy  96.8    129
Arithmetic Reasoning    ADDSUB              Accuracy  95.7    123
Mathematical Reasoning  CollegeMath (test)  Accuracy  46.7    89
Mathematical Reasoning  MAWPS (test)        Accuracy  95.3    87
Arithmetic Reasoning    MultiArith (test)   Accuracy  99.0    67

Showing 10 of 21 rows
