
Sailor: Open Language Models for South-East Asia

About

We present Sailor, a family of open language models ranging from 0.5B to 7B parameters, tailored for South-East Asian (SEA) languages. These models are continually pre-trained from Qwen1.5, a strong language model for multilingual use cases. Starting from Qwen1.5, Sailor models are trained on a further 200B to 400B tokens, primarily covering English, Chinese, Vietnamese, Thai, Indonesian, Malay, and Lao. The training leverages several techniques, including BPE dropout to improve model robustness, aggressive data cleaning and deduplication, and small proxy models to optimize the data mixture. Experimental results on four typical task types show that Sailor models perform strongly across benchmarks for commonsense reasoning, question answering, reading comprehension, and examination. Embracing the open-source spirit, we share our insights through this report to spark wider interest in developing large language models for multilingual use cases.
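BPE dropout (Provilkov et al., 2020), one of the techniques mentioned above, regularizes a model by randomly skipping BPE merge operations during encoding, so the same word is sometimes split into different subword sequences. The sketch below illustrates the idea on a toy, hypothetical merge table; it is not Sailor's actual tokenizer, which would use a full learned vocabulary.

```python
import random

# Toy merge table (hypothetical, for illustration only):
# ordered pairs learned during BPE training.
MERGES = [("l", "o"), ("lo", "w"), ("e", "r")]

def bpe_encode(word, dropout=0.0, rng=random):
    """Encode `word` by applying BPE merges in order, skipping each
    applicable merge with probability `dropout` (BPE dropout)."""
    tokens = list(word)
    for a, b in MERGES:
        merged, i = [], 0
        while i < len(tokens):
            if (i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b
                    and rng.random() >= dropout):
                merged.append(a + b)   # merge kept
                i += 2
            else:
                merged.append(tokens[i])  # merge skipped or not applicable
                i += 1
        tokens = merged
    return tokens

# dropout=0.0 reproduces standard deterministic BPE:
print(bpe_encode("lower"))            # ['low', 'er']
# dropout>0.0 yields varied segmentations of the same word:
random.seed(0)
print(bpe_encode("lower", dropout=0.5))
```

Exposing the model to these alternative segmentations during pre-training makes it more robust to rare words and noisy tokenizations, which is especially useful for lower-resource SEA languages.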

Longxu Dou, Qian Liu, Guangtao Zeng, Jia Guo, Jiahui Zhou, Wei Lu, Min Lin • 2024

Related benchmarks

Task                        Dataset   Result          Rank
Natural Language Inference  XNLI      Accuracy 44.59  111
Paraphrase Identification   PAWS-X    Accuracy 64.53  57
Commonsense Reasoning       XCOPA     Accuracy 74.5   24
Machine Translation         En-XX     chrF 31.09      15
Machine Translation         XX-En     --              10
