
Making Large Language Models Speak Tulu: Structured Prompting for an Extremely Low-Resource Language

About

Can large language models converse in languages virtually absent from their training data? We investigate this question through a case study on Tulu, a Dravidian language with over 2 million speakers but minimal digital presence. Rather than fine-tuning an LLM, we examine whether structured prompting alone can elicit basic conversational ability. We systematically address the challenges posed by the absence of Tulu training data by combining explicit grammar documentation, negative constraints to suppress high-probability tokens from related languages, romanization standardization, and quality-controlled synthetic data generation via self-play. Evaluated on a manually curated held-out set across three LLMs (Gemini 2.0 Flash, GPT-4o, Llama 3.1 70B) and validated by native speakers, our approach reduces vocabulary contamination from 80% to 5% while achieving 85% grammatical accuracy. Cross-model analysis reveals that negative constraints provide consistent improvements (12–18 percentage points), while grammar documentation effects vary by model architecture (8–22 points).
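The recipe has three prompt-side components: grammar documentation, negative constraints, and romanization standardization. The sketch below (Python) shows one way these might be assembled into a single system prompt. It is a minimal illustration under stated assumptions: the grammar note, the placeholder forbidden tokens, and the romanization scheme are all hypothetical stand-ins, not the paper's actual prompt material.

```python
import textwrap

# Hypothetical sketch of the prompt-assembly step. All section contents,
# names, and example tokens are illustrative placeholders, not the
# paper's actual grammar notes or constraint lists.

# (1) Explicit grammar documentation, supplied verbatim in the prompt.
GRAMMAR_NOTES = textwrap.dedent("""\
    Tulu is a Dravidian SOV language. Verbs agree with the subject
    in person, number, and gender.
""")

# (2) Negative constraints: explicitly forbid high-probability tokens
# that leak in from related languages such as Kannada or Malayalam.
# Placeholder tokens; a real list would come from contamination analysis.
FORBIDDEN_TOKENS = ["<kannada_token_1>", "<kannada_token_2>", "<malayalam_token_1>"]

# (3) Romanization standardization: pin one transliteration scheme so
# outputs are consistent and mechanically checkable.
ROMANIZATION_RULE = (
    "Write Tulu in Latin script using one fixed scheme: "
    "double long vowels (aa, ii, uu)."
)


def build_system_prompt() -> str:
    """Combine grammar docs, negative constraints, and romanization rules."""
    negatives = "\n".join(
        f"- Never output the non-Tulu token {tok}." for tok in FORBIDDEN_TOKENS
    )
    return (
        "You are a conversation partner who replies only in Tulu.\n\n"
        "GRAMMAR REFERENCE\n" + GRAMMAR_NOTES + "\n"
        "NEGATIVE CONSTRAINTS\n" + negatives + "\n\n"
        "ROMANIZATION\n" + ROMANIZATION_RULE
    )


if __name__ == "__main__":
    print(build_system_prompt())
```

The constraints are written as explicit per-token prohibitions because, per the abstract, the goal is to suppress specific high-probability vocabulary from related languages rather than to discourage them implicitly.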

Prathamesh Devadiga, Paras Chopra • 2026

Related benchmarks

Task            | Dataset | Result                | Rank
Tulu generation | Tulu    | Grammar Accuracy: 85% | 12
