
Konkani LLM: Multi-Script Instruction Tuning and Evaluation for a Low-Resource Indian Language

About

Large Language Models (LLMs) consistently underperform in low-resource linguistic contexts such as Konkani. This performance deficit stems from acute training-data scarcity, compounded by high script diversity across the Devanagari, Romi, and Kannada orthographies. To address this gap, we introduce Konkani-Instruct-100k, a comprehensive synthetic instruction-tuning dataset generated with Gemini 3. We establish rigorous baseline benchmarks by evaluating leading open-weights architectures, including Llama 3.1, Qwen2.5, and Gemma 3, alongside proprietary closed-source models. Our primary contribution is Konkani LLM, a series of fine-tuned models optimized for regional nuances. We are also developing the Multi-Script Konkani Benchmark to facilitate cross-script linguistic evaluation. In machine translation, Konkani LLM delivers consistent gains over the corresponding base models and is competitive with, and in several settings surpasses, proprietary baselines.
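The abstract describes a synthetic instruction-tuning dataset spanning three scripts. The actual Konkani-Instruct-100k schema is not given here; the sketch below shows one plausible JSONL record layout for multi-script instruction data, with every field name and the example strings being illustrative assumptions rather than the dataset's real format.

```python
import json

# Hypothetical record layout for a multi-script instruction-tuning set.
# Field names and example text are illustrative only; the real
# Konkani-Instruct-100k schema is not specified in the abstract.
record = {
    "instruction": "Translate the following sentence into Konkani.",
    "input": "Good morning, how are you?",
    "output": "<Konkani translation in the target script>",
    "script": "Devanagari",  # assumed tag: one of Devanagari, Romi, Kannada
}

# Serialize one record as a JSONL line and check it round-trips.
line = json.dumps(record, ensure_ascii=False)
assert json.loads(line) == record
```

Tagging each record with its script, as sketched above, is one simple way a dataset could support the cross-script transliteration and evaluation settings the paper describes.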

Reuben Chagas Fernandes, Gaurang S. Patkar • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Translation | Konkani-Bench, 200 samples (test) | BLEU 50.26 | 22 |
| Transliteration | Konkani-Bench, Devanagari to Kannada | BLEU 38.71 | 21 |
| Transliteration | Konkani-Bench, Devanagari to Romi | BLEU 50.52 | 21 |
| Transliteration | Konkani-Bench, Kannada to Devanagari | BLEU 56.03 | 21 |
| Transliteration | Konkani-Bench, Kannada to Romi | BLEU 55.26 | 21 |
| Transliteration | Konkani-Bench, Romi to Devanagari | BLEU 55.35 | 21 |
| Transliteration | Konkani-Bench, Romi to Kannada | BLEU 43.95 | 21 |
| Creative Writing | Konkani, Romi script | LLM Judge Score 4.2 | 6 |
| Creative Writing | Konkani, Devanagari script | LLM Judge Score 3.7 | 6 |
| Creative Writing | Konkani, Kannada script | LLM Judge Score 3.6 | 6 |

Showing 10 of 18 rows.
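The translation and transliteration rows above are scored with BLEU. The exact scorer used by the benchmark is not stated; as a reference point, the sketch below is a minimal stdlib implementation of corpus-level BLEU-4 (clipped n-gram precisions, geometric mean, brevity penalty) for whitespace-tokenized text with one reference per hypothesis. Production evaluation would normally use an established tool such as sacreBLEU instead.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU-4 on a 0-100 scale, single reference per hypothesis."""
    matches = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # candidate n-gram counts, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            # Counter intersection gives the clipped match count.
            matches[n - 1] += sum((ngrams(h, n) & ngrams(r, n)).values())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(matches) == 0:
        return 0.0  # some n-gram order had no matches at all
    # Geometric mean of the n-gram precisions.
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    # Brevity penalty for hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100 * bp * math.exp(log_prec)
```

Note that whitespace tokenization is a simplification: for Devanagari- and Kannada-script text, the tokenization scheme chosen can shift BLEU substantially, which is one reason standardized scorers matter when comparing ranked results like those above.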
