FireAct: Toward Language Agent Fine-tuning
About
Recent efforts have augmented language models (LMs) with external tools or environments, leading to the development of language agents that can reason and act. However, most of these agents rely on few-shot prompting techniques with off-the-shelf LMs. In this paper, we investigate and argue for the overlooked direction of fine-tuning LMs to obtain language agents. Using a setup of question answering (QA) with a Google search API, we explore a variety of base LMs, prompting methods, fine-tuning data, and QA tasks, and find language agents are consistently improved after fine-tuning their backbone LMs. For example, fine-tuning Llama2-7B with 500 agent trajectories generated by GPT-4 leads to a 77% HotpotQA performance increase. Furthermore, we propose FireAct, a novel approach to fine-tuning LMs with trajectories from multiple tasks and prompting methods, and show having more diverse fine-tuning data can further improve agents. Along with other findings regarding scaling effects, robustness, generalization, efficiency and cost, our work establishes comprehensive benefits of fine-tuning LMs for agents, and provides an initial set of experimental designs, insights, as well as open questions toward language agent fine-tuning.
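The fine-tuning data described above consists of ReAct-style trajectories, where the LM alternates Thought and Action steps and a search tool returns Observations until a final answer is produced. Below is a minimal sketch of such an agent loop, with hypothetical stand-ins (`fake_lm`, `fake_search`) in place of the backbone LM and the Google search API; this illustrates the trajectory format, not the paper's actual implementation.

```python
def fake_search(query: str) -> str:
    # Stand-in for the Google search API used in the paper's QA setup.
    return {"capital of France": "Paris is the capital of France."}.get(
        query, "No results."
    )

def fake_lm(prompt: str) -> str:
    # Stand-in for the (possibly fine-tuned) backbone LM; scripted for the demo.
    if "Observation" not in prompt:
        return "Thought: I should look this up.\nAction: search[capital of France]"
    return "Thought: The answer is in the observation.\nAction: finish[Paris]"

def run_agent(question: str, max_steps: int = 5) -> tuple[str, str]:
    """Run the Thought/Action/Observation loop; return (answer, trajectory)."""
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_lm(prompt)
        prompt += step + "\n"
        action = step.split("Action: ")[-1]
        if action.startswith("finish["):
            # Final answer emitted; the full prompt is the trajectory that
            # would be collected (e.g. from GPT-4) as fine-tuning data.
            return action[len("finish["):-1], prompt
        if action.startswith("search["):
            obs = fake_search(action[len("search["):-1])
            prompt += f"Observation: {obs}\n"
    return "", prompt

answer, trajectory = run_agent("What is the capital of France?")
```

Trajectories collected this way from a strong teacher model can then serve as supervised fine-tuning targets for a smaller backbone LM such as Llama2-7B.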
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-task Language Understanding | MMLU | Accuracy | 65.8 | 842 |
| Multi-hop Question Answering | HotpotQA | -- | -- | 221 |
| Commonsense Reasoning | StrategyQA | Accuracy | 72.9 | 125 |
| Interactive Decision-making | AlfWorld | PICK | 85.71 | 52 |
| Multi-hop Question Answering | Bamboogle | Accuracy | 50.4 | 52 |
| Interactive Reasoning | ScienceWorld Seen | Success Rate | 57.21 | 31 |
| Question Answering | HotpotQA v1.1 (test) | Easy Score | 50.82 | 26 |
| Science Question Answering | ScienceQA v1.0 (test) | Accuracy (G1-4) | 72.5 | 26 |
| Interactive Decision-making | ScienceWorld Unseen (test) | Success Rate | 50.33 | 24 |
| Web Task | Webshop | Average Reward | 59.3 | 24 |