# Konkani LLM: Multi-Script Instruction Tuning and Evaluation for a Low-Resource Indian Language

## About
Large Language Models (LLMs) consistently underperform in low-resource linguistic contexts such as Konkani. This performance deficit stems from acute training-data scarcity, compounded by high script diversity across the Devanagari, Romi, and Kannada orthographies. To address this gap, we introduce Konkani-Instruct-100k, a comprehensive synthetic instruction-tuning dataset generated with Gemini 3. We establish rigorous baseline benchmarks by evaluating leading open-weights architectures, including Llama 3.1, Qwen2.5, and Gemma 3, alongside proprietary closed-source models. Our primary contribution is Konkani LLM, a series of fine-tuned models optimized for regional nuances. Furthermore, we are developing the Multi-Script Konkani Benchmark to facilitate cross-script linguistic evaluation. In machine translation, Konkani LLM delivers consistent gains over the corresponding base models and is competitive with, and in several settings surpasses, proprietary baselines.
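The exact record layout of Konkani-Instruct-100k is not specified in this summary; a plausible Alpaca-style instruction record, extended with a script tag for Konkani's multi-script setting, might look like the following sketch (all field names and values are hypothetical illustrations, not the published schema):

```python
# Hypothetical layout of one synthetic instruction-tuning record;
# the actual Konkani-Instruct-100k schema may differ.
record = {
    "instruction": "Translate the following sentence into Konkani.",  # task prompt
    "input": "The weather in Goa is pleasant today.",  # optional context for the task
    "output": "...",  # gold Konkani response (elided here)
    "script": "Devanagari",  # one of: "Devanagari", "Romi", "Kannada"
}

# A tiny validity check over the assumed fields.
assert record["script"] in {"Devanagari", "Romi", "Kannada"}
```

Tagging each record with its target script would let a single dataset cover translation, transliteration, and generation tasks across all three orthographies.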
## Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Translation | Konkani-Bench (200 test samples) | BLEU | 50.26 | 22 |
| Transliteration | Konkani-Bench Devanagari to Kannada | BLEU | 38.71 | 21 |
| Transliteration | Konkani-Bench Devanagari to Romi | BLEU | 50.52 | 21 |
| Transliteration | Konkani-Bench Kannada to Devanagari | BLEU | 56.03 | 21 |
| Transliteration | Konkani-Bench Kannada to Romi | BLEU | 55.26 | 21 |
| Transliteration | Konkani-Bench Romi to Devanagari | BLEU | 55.35 | 21 |
| Transliteration | Konkani-Bench Romi to Kannada | BLEU | 43.95 | 21 |
| Creative Writing | Konkani (Romi script) | LLM Judge Score | 4.2 | 6 |
| Creative Writing | Konkani (Devanagari script) | LLM Judge Score | 3.7 | 6 |
| Creative Writing | Konkani (Kannada script) | LLM Judge Score | 3.6 | 6 |
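The translation and transliteration rows above are scored with BLEU, which compares n-gram overlap between a system output and a reference. The following is a minimal stdlib sketch of sentence-level BLEU under simple whitespace tokenization, for intuition only; the leaderboard scores would come from a standard implementation (e.g. sacreBLEU) with its own tokenization and smoothing:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified single-reference BLEU (0-100): geometric mean of
    1..max_n n-gram precisions times a brevity penalty. No smoothing,
    so any zero precision yields 0."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        overlap = sum((ngrams(cand, n) & ngrams(ref, n)).values())
        total = max(sum(ngrams(cand, n).values()), 1)
        if overlap == 0:
            return 0.0
        precisions.append(overlap / total)
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n) * 100
```

Under this sketch an exact match scores 100.0 and a fully disjoint output scores 0.0; real corpus-level BLEU aggregates n-gram counts over the whole test set before taking precisions.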