
Toolformer: Language Models Can Teach Themselves to Use Tools

About

Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q&A system, two different search engines, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.
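The paper represents tool use as plain text inserted into the token stream, with calls written in the form "[Tool(args)→result]". The sketch below shows, under illustrative assumptions, how embedded Calculator calls in a model's draft text could be found and executed; the regex, the restricted-eval calculator, and the two-decimal rounding are assumptions for this example, not the paper's exact implementation.

```python
import re

# Matches inline calls like "[Calculator(400/1400)]" in generated text.
# (Illustrative pattern, not taken from the paper's code.)
CALL_PATTERN = re.compile(r"\[Calculator\(([^)]*)\)\]")

def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression (sketch: restricted eval)."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError(f"unsupported expression: {expression!r}")
    # Rounding to two decimals mirrors the examples shown in the paper.
    return str(round(eval(expression, {"__builtins__": {}}), 2))

def fill_api_calls(text: str) -> str:
    """Replace each [Calculator(expr)] with [Calculator(expr)→result]."""
    def _execute(match: re.Match) -> str:
        expr = match.group(1)
        return f"[Calculator({expr})→{calculator(expr)}]"
    return CALL_PATTERN.sub(_execute, text)

draft = "Out of 1400 participants, 400 [Calculator(400/1400)] passed the test."
print(fill_api_calls(draft))
# → Out of 1400 participants, 400 [Calculator(400/1400)→0.29] passed the test.
```

In the actual system, call sites and arguments are proposed and filtered in a self-supervised way during training; at inference, decoding pauses when a call is emitted, the API result is appended, and generation continues conditioned on it.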

Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | SVAMP | Accuracy | 29.4 | 403 |
| Multi-hop Question Answering | 2WikiMultihopQA | -- | -- | 387 |
| Mathematical Reasoning | GSM8K | Accuracy | 40 | 312 |
| Mathematical Reasoning | SVAMP (test) | Accuracy | 29.4 | 262 |
| Multi-hop Question Answering | HotpotQA (test) | -- | -- | 255 |
| Mathematical Reasoning | ASDIV | Accuracy | 0.404 | 245 |
| Question Answering | TriviaQA | Accuracy | 48.8 | 238 |
| Mathematical Reasoning | MAWPS | Accuracy | 44 | 234 |
| Instruction Following | AlpacaEval | Win Rate | 56.7 | 227 |
| Multi-hop Question Answering | MuSiQue | -- | -- | 185 |

Showing 10 of 35 rows.
