
OpenSeal: Good, Fast, and Cheap Construction of an Open-Source Southeast Asian LLM via Parallel Data

About

Large language models (LLMs) have proven to be effective tools for a wide range of natural language processing (NLP) applications. Although many LLMs are multilingual, most remain English-centric and perform poorly on low-resource languages. Recently, several Southeast Asia-focused LLMs have been developed, but none are truly open source, as they do not publicly disclose their training data. Truly open-source models are important for transparency and for enabling a deeper and more precise understanding of LLM internals and development, including biases, generalization, and multilinguality. Motivated by recent advances demonstrating the effectiveness of parallel data in improving multilingual performance, we conduct controlled and comprehensive experiments to study the effectiveness of parallel data in continual pretraining of LLMs. Our findings show that using only parallel data is the most effective way to extend an LLM to new languages. Using just 34.7B tokens of parallel data and 180 hours on 8x NVIDIA H200 GPUs, we built OpenSeal, the first truly open Southeast Asian LLM that rivals the performance of existing models of similar size.
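The abstract describes continual pretraining on parallel (translation-pair) data only. As a minimal sketch of one common way to turn parallel sentence pairs into plain-text examples for causal-LM continual pretraining — the language-tag template below is an assumption for illustration, not OpenSeal's disclosed format:

```python
# Hypothetical formatting of parallel sentence pairs into single
# training strings for continual pretraining of a causal LM.
# The "<lang>" tag template is an illustrative assumption; the
# actual OpenSeal data format is not specified on this page.

def format_parallel_pair(src_text: str, tgt_text: str,
                         src_lang: str, tgt_lang: str) -> str:
    """Concatenate a translation pair into one training example."""
    return f"<{src_lang}> {src_text}\n<{tgt_lang}> {tgt_text}"

# Toy English-Indonesian pairs (Indonesian is one Southeast Asian
# target language such a model might cover).
pairs = [
    ("I like coffee.", "Saya suka kopi.", "en", "id"),
    ("Good morning.", "Selamat pagi.", "en", "id"),
]
corpus = [format_parallel_pair(s, t, sl, tl) for s, t, sl, tl in pairs]
```

Each element of `corpus` is a single document the LM sees in one context window, so the model learns to associate the two languages directly.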

Tan Sang Nguyen, Muhammad Reza Qorib, Hwee Tou Ng • 2026

Related benchmarks

Task                        Dataset  Result          Rank
Natural Language Inference  XNLI     Accuracy 45.4   111
Paraphrase Identification   PAWS-X   Accuracy 64.7   57
Commonsense Reasoning       XCOPA    Accuracy 70.2   24
Machine Translation         En-XX    chrF 31.12      15
Machine Translation         XX-En    --              10
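The machine translation rows are scored with chrF, a character n-gram F-score. As a simplified sentence-level sketch of how such a score is computed (following Popović's chrF definition: character n-grams up to order 6, recall weighted by beta = 2; production evaluations typically use a library such as sacreBLEU rather than this minimal version):

```python
# Simplified sentence-level chrF sketch: average character n-gram
# precision and recall over orders 1..6, combined as an F-beta score
# with beta = 2 (recall weighted four times as much as precision).
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    # chrF strips whitespace before extracting character n-grams.
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis: str, reference: str,
         max_order: int = 6, beta: float = 2.0) -> float:
    precisions, recalls = [], []
    for n in range(1, max_order + 1):
        hyp = char_ngrams(hypothesis, n)
        ref = char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        if sum(hyp.values()) > 0:
            precisions.append(overlap / sum(hyp.values()))
        if sum(ref.values()) > 0:
            recalls.append(overlap / sum(ref.values()))
    p = sum(precisions) / len(precisions) if precisions else 0.0
    r = sum(recalls) / len(recalls) if recalls else 0.0
    if p + r == 0:
        return 0.0
    return 100 * (1 + beta**2) * p * r / (beta**2 * p + r)
```

An identical hypothesis and reference score 100; a hypothesis sharing no character n-grams with the reference scores 0. The corpus-level chrF reported in the table aggregates statistics over a whole test set.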
