
RomanSetu: Efficiently unlocking multilingual capabilities of Large Language Models via Romanization

About

This study addresses the challenge of extending Large Language Models (LLMs) to non-English languages that use non-Roman scripts. We propose an approach that uses the romanized form of text as an interface for LLMs, hypothesizing that its frequent informal use and its shared tokens with English enhance cross-lingual alignment. Our approach involves continual pretraining of an English LLM such as Llama 2 on romanized text of non-English, non-Roman-script languages, followed by instruction tuning on romanized data. The results indicate that romanized text not only reduces token fertility (the number of subword tokens produced per word) by 2x-4x but also matches or outperforms native-script representation across various NLU, NLG, and MT tasks. Moreover, embeddings computed on romanized text align more closely with their English translations than those computed on the native script. Our approach presents a promising direction for leveraging the power of English LLMs in languages traditionally underrepresented in NLP. Our code is available at https://github.com/AI4Bharat/romansetu.
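The token-fertility claim above can be made concrete with a small sketch. Fertility is simply the average number of subword tokens a tokenizer emits per word, so a 2x-4x reduction means the romanized input consumes far less of the model's context window. The token counts below are illustrative placeholders, not measurements from the paper:

```python
def token_fertility(num_tokens: int, num_words: int) -> float:
    """Average number of subword tokens produced per whitespace word."""
    return num_tokens / num_words

# Hypothetical example: a 4-word sentence that an English-centric BPE
# tokenizer splits into 16 tokens in the native script, but only 6
# tokens once romanized (counts are illustrative, not real data).
native_fertility = token_fertility(16, 4)       # 4.0 tokens/word
romanized_fertility = token_fertility(6, 4)     # 1.5 tokens/word

# Fertility reduction factor; the paper reports 2x-4x across languages.
reduction = native_fertility / romanized_fertility
print(native_fertility, romanized_fertility, round(reduction, 2))
```

With a real tokenizer you would replace the hard-coded counts with `len(tokenizer.encode(text))` for the native and romanized versions of the same sentence.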

Jaavid Aktar Husain, Raj Dabre, Aswanth Kumar, Jay Gala, Thanmay Jayakumar, Ratish Puduppully, Anoop Kunchukuttan • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Machine Translation | En-XX | chrF | 46.87 | 15 |
| Causal Reasoning | IndicCOPA (IndicXTREME, test) | Average F1 | 45.45 | 10 |
| Machine Translation | XX-En | chrF | 50.75 | 10 |
| Natural Language Inference | IndicXNLI (IndicXTREME, test) | F1 | 0.423 | 10 |
| Sentiment Analysis | IndicSentiment (IndicXTREME, test) | F1 | 92.82 | 10 |
| Headline Generation | IndicHeadline | ROUGE-L | 18.92 | 6 |
| Question Answering | IndicQA with context (IndicXTREME, test) | F1 | 27.25 | 6 |
| Summarization | XLSum | ROUGE-L | 12.56 | 6 |
