
AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model

About

In this work, we demonstrate that multilingual large-scale sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners than decoder-only models on various tasks. In particular, we train a 20 billion parameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B) and show that it achieves state-of-the-art (SOTA) performance on 1-shot summarization tasks, outperforming a much larger 540B PaLM decoder model. AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially for low-resource languages, across almost all language pairs supported by the model (Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu) on the Flores-101 dataset. We also show that, in the zero-shot setting, AlexaTM 20B outperforms GPT-3 (175B) on the SuperGLUE and SQuADv2 datasets and provides SOTA performance on multilingual tasks such as XNLI, XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling case for seq2seq models as a powerful alternative to decoder-only models for Large-scale Language Model (LLM) training.
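The abstract's central idea is pre-training on a mixture of two objectives: denoising (the encoder sees a corrupted document and the decoder reconstructs it) and CLM (the decoder continues a prefix). The sketch below illustrates how such mixed training examples can be built from tokenized documents; it is a minimal illustration, not the paper's actual recipe, and the span-corruption details, `[CLM]` marker, and mixture fraction are assumptions.

```python
import random

def make_denoising_example(tokens, mask_ratio=0.15, seed=0):
    """Denoising sketch: replace a contiguous span with a single <mask>
    token; the target is the original, uncorrupted sequence.
    (Assumption: the paper's exact corruption scheme may differ.)"""
    rng = random.Random(seed)
    span_len = max(1, int(len(tokens) * mask_ratio))
    start = rng.randrange(0, len(tokens) - span_len + 1)
    corrupted = tokens[:start] + ["<mask>"] + tokens[start + span_len:]
    return {"input": corrupted, "target": list(tokens)}

def make_clm_example(tokens, prefix_ratio=0.2):
    """CLM sketch: the encoder receives a short prefix (tagged with a
    hypothetical [CLM] marker) and the decoder predicts the continuation."""
    cut = max(1, int(len(tokens) * prefix_ratio))
    return {"input": ["[CLM]"] + tokens[:cut], "target": tokens[cut:]}

def mixture_batch(docs, clm_fraction=0.2, seed=0):
    """Mix the two objectives per document; the 20% CLM fraction here is
    illustrative, not taken from the paper."""
    rng = random.Random(seed)
    batch = []
    for doc in docs:
        if rng.random() < clm_fraction:
            batch.append(make_clm_example(doc))
        else:
            batch.append(make_denoising_example(doc, seed=rng.randrange(10**6)))
    return batch
```

Mixing a generative objective (CLM) into an otherwise denoising-trained seq2seq model is what lets the decoder be used for in-context few-shot prompting, the setting evaluated throughout the paper.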

Saleh Soltan, Shankar Ananthakrishnan, Jack FitzGerald, Rahul Gupta, Wael Hamza, Haidar Khan, Charith Peris, Stephen Rawls, Andy Rosenbaum, Anna Rumshisky, Chandana Satya Prakash, Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, Prem Natarajan • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Summarization | XSum (test) | -- | -- | 231 |
| Arithmetic Reasoning | MultiArith | Accuracy | 6 | 181 |
| Natural Language Inference | XNLI (test) | -- | -- | 167 |
| Question Answering | SQuAD v2.0 (dev) | F1 | 74.29 | 158 |
| Natural Language Understanding | SuperGLUE (dev) | Average Score | 69.16 | 91 |
| Machine Translation | FLORES-101 (devtest) | French (fr) Score | 50.7 | 30 |
| Summarization | XSum | ROUGE-2 | 24.16 | 14 |
| Summarization | MLSUM German | ROUGE-2 | 33.73 | 14 |
| Machine Translation | WMT en-fr 14 | BLEU | 38.38 | 14 |
| Machine Translation | WMT en-de 16 (test) | BLEU | 35.23 | 13 |

Showing 10 of 31 rows
