
SEA-LION: Southeast Asian Languages in One Network

About

Recently, Large Language Models (LLMs) have dominated much of the artificial intelligence scene with their ability to process and generate natural language. However, the majority of LLM research and development remains English-centric, leaving low-resource languages such as those of the Southeast Asian (SEA) region under-represented. To address this representation gap, we introduce Llama-SEA-LION-v3-8B-IT and Gemma-SEA-LION-v3-9B-IT, two cutting-edge multilingual LLMs designed for SEA languages. The SEA-LION family of LLMs supports 11 SEA languages, namely English, Chinese, Indonesian, Vietnamese, Malay, Thai, Burmese, Lao, Filipino, Tamil, and Khmer. Our work leverages large-scale multilingual continued pre-training with a comprehensive post-training regime involving multiple stages of instruction fine-tuning, alignment, and model merging. Evaluation results on multilingual benchmarks indicate that our models achieve state-of-the-art performance among LLMs supporting SEA languages. We open-source the models to benefit the wider SEA community.
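The post-training regime above ends with model merging. The paper does not specify the merging method here, but in its simplest common form it is a (weighted) average of the parameters of several fine-tuned checkpoints. A minimal sketch, using plain dicts of floats to stand in for parameter tensors:

```python
def merge_models(state_dicts, weights=None):
    """Merge checkpoints by weighted parameter averaging.

    state_dicts: list of {param_name: value} mappings with identical keys
                 (toy stand-ins for real model state dicts).
    weights:     per-checkpoint mixing weights; defaults to a uniform average.
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged


# Uniform average of two checkpoints: each parameter is the mean of the two.
merged = merge_models([{"layer.w": 2.0}, {"layer.w": 4.0}])
```

With real models, the same averaging would be applied tensor-by-tensor over matching state-dict keys; more elaborate schemes (e.g. task-vector or spherical interpolation merges) replace the plain weighted sum but keep this key-wise structure.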

Raymond Ng, Thanh Ngan Nguyen, Yuli Huang, Ngee Chia Tai, Wai Yi Leong, Wei Qi Leong, Xianbin Yong, Jian Gang Ngui, Yosephine Susanto, Nicholas Cheng, Hamsawardhini Rengarajan, Peerat Limkonchotiwat, Adithya Venkatadri Hulagadri, Kok Wai Teng, Yeo Yeow Tong, Bryan Siow, Wei Yi Teo, Wayne Lau, Choon Meng Tan, Brandon Ong, Zhi Hao Ong, Jann Railey Montalan, Adwin Chan, Sajeban Antonyrex, Ren Lee, Esther Choa, David Ong Tat-Wee, Bing Jie Darius Liu, William Chandra Tjhi, Erik Cambria, Leslie Teo • 2025

Related benchmarks

| Task                                  | Dataset          | Result            | Rank |
|---------------------------------------|------------------|-------------------|------|
| Natural Language Inference            | XNLI             | Accuracy: 46.11   | 111  |
| Paraphrase Identification             | PAWS-X           | Accuracy: 63.58   | 66   |
| Chinese Language Understanding        | C-Eval           | Accuracy: 50.89   | 56   |
| Question Answering                    | TriviaQA (TQA)   | --                | 56   |
| Mathematical Reasoning                | GSM8K            | Accuracy: 64.29   | 42   |
| Commonsense Reasoning                 | XCOPA            | Accuracy: 70.37   | 32   |
| Machine Translation                   | En-XX            | chrF: 25.37       | 15   |
| Language Understanding                | MMLU             | MMLU Score: 63.84 | 12   |
| Close-ended Spoken Question Answering | Audio-MLQA       | Score (EN): 4.34  | 10   |
| Open-ended Instruction Following      | AlpacaEval Audio | EN Score: 4.59    | 10   |

Showing 10 of 23 rows.
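The machine-translation row reports chrF, a character n-gram F-score. As a rough illustration of what that number measures (not the exact scorer used here; a faithful evaluation would use a standard implementation such as sacrebleu), a minimal pure-Python sketch of character-level chrF with the usual defaults, n-gram orders 1 to 6 and β = 2, might look like:

```python
from collections import Counter

def char_ngrams(text, n):
    """Character n-gram counts; whitespace is ignored, as in standard chrF."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Toy chrF: average char n-gram precision/recall, combined as F_beta (0-100)."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        hyp_total, ref_total = sum(hyp.values()), sum(ref.values())
        if hyp_total == 0 or ref_total == 0:
            continue  # no n-grams of this order in one of the strings
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / hyp_total)
        recalls.append(overlap / ref_total)
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return 100 * (1 + beta**2) * p * r / (beta**2 * p + r)
```

An identical hypothesis and reference score 100; disjoint strings score 0; partial overlap falls in between, which is the regime the 25.37 En-XX figure in the table lives in.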
