
TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking

About

In this paper, we present TituLLMs, the first large pretrained Bangla LLMs, available in 1B and 3B parameter sizes. Due to computational constraints during both training and inference, we focused on smaller models. To train TituLLMs, we collected a pretraining dataset of approximately 37 billion tokens. We extended the Llama-3.2 tokenizer to incorporate language- and culture-specific knowledge, which also enables faster training and inference. Benchmarking datasets for evaluating Bangla LLMs were largely unavailable, so we developed five new benchmarking datasets to address this gap. We benchmarked various LLMs, including TituLLMs, and showed that TituLLMs outperforms its initial multilingual versions, though not in every case, highlighting the complexities of language adaptation. Our work lays the groundwork for adapting existing multilingual open models to other low-resource languages. To facilitate broader adoption and further research, we have made the TituLLMs models and benchmarking datasets publicly available (https://huggingface.co/collections/hishab/titulm-llama-family-6718d31fc1b83529276f490a).
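The abstract notes that extending the Llama-3.2 tokenizer speeds up training and inference. The mechanism is lower tokenizer "fertility": a tokenizer without Bangla-specific vocabulary splits Bangla words into many small fallback pieces, while an extended vocabulary covers common words with single tokens, so each sentence costs fewer tokens. The following toy sketch illustrates this with a hypothetical greedy longest-match tokenizer and made-up vocabularies; it is not the actual Llama-3.2 tokenizer or the paper's extension procedure.

```python
def tokenize_greedy(text, vocab):
    """Greedy longest-match tokenization over a given vocabulary,
    falling back to single characters for uncovered spans."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # character-level fallback
            i += 1
    return tokens

# Hypothetical vocabularies: the "base" one has no Bangla coverage,
# the "extended" one adds two common Bangla words as whole tokens.
base_vocab = {" "}
extended_vocab = {" ", "বাংলা", "ভাষা"}

text = "বাংলা ভাষা"
print(len(tokenize_greedy(text, base_vocab)))      # 10 tokens (all fallback chars)
print(len(tokenize_greedy(text, extended_vocab)))  # 3 tokens
```

Fewer tokens per sentence means shorter sequences for the same text, which directly reduces compute per training step and per generated sentence at inference time.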

Shahriar Kabir Nahin, Rabindra Nath Nandi, Sagor Sarker, Quazi Sarwar Muhtaseem, Md Kowsher, Apu Chandraw Shill, Md Ibrahim, Mehadi Hasan Menon, Tareq Al Muntasir, Firoj Alam • 2025

Related benchmarks

Task                                | Dataset                                 | Metric                     | Result | Rank
Commonsense Reasoning               | PIQA 1.0 (test)                         | Accuracy                   | 60     | 48
Commonsense Reasoning               | CommonsenseQA (CSQA) v1.0 (test)        | Accuracy                   | 33     | 46
Open-Book Question Answering        | OpenBookQA 1.0 (test)                   | Accuracy                   | 35     | 33
Yes/No Reading Comprehension        | BoolQ 1.0 (test)                        | Normalized Accuracy        | 54     | 33
Multiple-choice Question Answering  | Bangla MMLU 1.0 (test)                  | Accuracy                   | 25     | 33
Machine Translation                 | Bangla conversational text              | BLEU                       | 57     | 5
Translation                         | Bangla Human Evaluation Set 1.0 (test)  | Business Domain Score (H1) | 4.8    | 5
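The benchmark results above report both "Accuracy" and "Normalized Accuracy". A common scoring convention for multiple-choice benchmarks (as in lm-evaluation-harness, for example; we assume the convention here, the paper may differ) is to score each answer choice by the model's log-likelihood, either raw or divided by the choice's token count so longer answers are not penalized. A minimal sketch of that distinction:

```python
def pick_choice(logprobs, token_counts, normalized=False):
    """Return the index of the best-scoring answer choice.

    logprobs: total log-likelihood the model assigns to each choice.
    token_counts: number of tokens in each choice.
    normalized: if True, score by per-token log-likelihood instead.
    """
    scores = [
        lp / n if normalized else lp
        for lp, n in zip(logprobs, token_counts)
    ]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy example: choice 0 is short; choice 1 is longer but better per token.
logprobs = [-10.0, -12.0]
token_counts = [2, 6]

print(pick_choice(logprobs, token_counts))                   # raw score picks 0
print(pick_choice(logprobs, token_counts, normalized=True))  # normalized picks 1
```

Accuracy is then the fraction of questions where the picked choice matches the gold answer; the two metrics can disagree whenever answer choices differ substantially in length.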
