
MuRIL: Multilingual Representations for Indian Languages

About

India is a multilingual society with 1369 rationalized languages and dialects spoken across the country (INDIA, 2011). Of these, the 22 scheduled languages have a staggering total of 1.17 billion speakers, and 121 languages have more than 10,000 speakers (INDIA, 2011). India also has the second largest (and ever growing) digital footprint (Statista, 2020). Despite this, today's state-of-the-art multilingual systems perform suboptimally on Indian (IN) languages. This can be explained by the fact that multilingual language models (LMs) are often trained on 100+ languages together, leading to a small representation of IN languages in their vocabulary and training data. Multilingual LMs are substantially less effective in resource-lean scenarios (Wu and Dredze, 2020; Lauscher et al., 2020), as limited data does not capture the various nuances of a language.

One also commonly observes IN language text transliterated to Latin script or code-mixed with English, especially in informal settings (for example, on social media platforms) (Rijhwani et al., 2017). This phenomenon is not adequately handled by current state-of-the-art multilingual LMs.

To address these gaps, we propose MuRIL, a multilingual LM built specifically for IN languages. MuRIL is trained on large amounts of IN text corpora only. We explicitly augment monolingual text corpora with both translated and transliterated document pairs, which serve as supervised cross-lingual signals during training. MuRIL significantly outperforms multilingual BERT (mBERT) on all tasks in the challenging cross-lingual XTREME benchmark (Hu et al., 2020). We also present results on transliterated (native to Latin script) test sets of the chosen datasets and demonstrate the efficacy of MuRIL in handling transliterated data.
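The augmentation described above can be sketched as follows. This is a minimal illustration of combining monolingual documents with translated and transliterated document pairs into a single training set; the `[SEP]`-concatenation scheme, the helper name, and the toy data are assumptions for illustration, not the published MuRIL pipeline.

```python
def build_training_examples(monolingual, translated, transliterated):
    """Assemble MLM training examples from monolingual text plus
    supervised cross-lingual pairs (a sketch, not the actual pipeline).

    monolingual:    list of documents in a native IN script
    translated:     list of (native_doc, english_translation) pairs
    transliterated: list of (native_doc, latin_transliteration) pairs
    """
    examples = []
    for doc in monolingual:
        # Plain monolingual example for masked language modeling.
        examples.append(doc)
    for native, english in translated:
        # Translated pair concatenated so both languages share one
        # sequence, giving a supervised cross-lingual signal.
        examples.append(native + " [SEP] " + english)
    for native, latin in transliterated:
        # Transliterated pair: same content, native vs. Latin script.
        examples.append(native + " [SEP] " + latin)
    return examples


if __name__ == "__main__":
    # Toy inputs (illustrative only).
    exs = build_training_examples(
        monolingual=["नमस्ते दुनिया"],
        translated=[("नमस्ते दुनिया", "hello world")],
        transliterated=[("नमस्ते दुनिया", "namaste duniya")],
    )
    for e in exs:
        print(e)
```

Concatenating the paired documents lets a standard masked-LM objective learn cross-script and cross-lingual alignments without any change to the model architecture.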

Simran Khanuja, Diksha Bansal, Sarvesh Mehtani, Savya Khosla, Atreyee Dey, Balaji Gopalan, Dilip Kumar Margam, Pooja Aggarwal, Rajiv Teja Nagipogu, Shachi Dave, Shruti Gupta, Subhash Chandra Bose Gali, Vish Subramanian, Partha Talukdar • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hate Speech Detection | HateXplain (test) | Macro F1 Score | 83.85 | 24 |
| Hate Speech Classification and Explainability | HateXplain (test) | IOU F1 | 0.2121 | 22 |
| Text Classification | Telugu | Accuracy | 87.4 | 14 |
| Explainability | Telugu | Token-F1 | 56.08 | 14 |
| Classification and Explainability | Hindi | Accuracy | 83.57 | 14 |
| Named Entity Recognition | Naamapadam | F1 Score | 74.3 | 9 |
| Paraphrase Detection | IndicXPara | Accuracy | 60.8 | 9 |
| Question Answering | IndicQA | F1 Score | 48.3 | 9 |
| Natural Language Inference | IndicXNLI | Accuracy | 72.4 | 9 |
| Commonsense Reasoning | IndicCOPA | Accuracy | 58.9 | 9 |

Showing 10 of 14 rows.
