
Parameter-Efficient Finetuning for Robust Continual Multilingual Learning

About

We introduce and study the problem of Continual Multilingual Learning (CML), where a previously trained multilingual model is periodically updated using new data arriving in stages. If the new data is present only in a subset of languages, we find that the resulting model shows improved performance only on the languages included in the latest update (and a few closely related languages), while its performance on all the remaining languages degrades significantly. We address this challenge by proposing LAFT-URIEL, a parameter-efficient finetuning strategy which aims to increase the number of languages on which the model improves after an update, while reducing the magnitude of loss in performance for the remaining languages. LAFT-URIEL uses linguistic knowledge to balance overfitting and knowledge sharing across languages, allowing an additional 25% of task languages to see an improvement in performance after an update, while also reducing the average magnitude of losses on the remaining languages by 78% relative.
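To make the core idea concrete, the sketch below shows one way linguistic knowledge could weight how strongly an update made in one language is shared with the remaining task languages, using URIEL syntactic distances via the lang2vec package. This is a rough illustration under stated assumptions, not the paper's implementation: the function share_weights, the softmax-over-negative-distance scheme, and the temperature parameter are all hypothetical.

```python
# Illustrative sketch only: weighting cross-lingual knowledge sharing with
# URIEL syntactic distances (via the lang2vec package, assumed installed).
# Language codes are ISO 639-3. The weighting scheme is hypothetical and
# stands in for LAFT-URIEL's actual update rule.
import math

import lang2vec.lang2vec as l2v

def share_weights(update_lang, task_langs, temperature=1.0):
    """Give linguistically closer languages a larger share of an update
    that arrived only in `update_lang` (hypothetical scheme)."""
    dists = {t: l2v.distance("syntactic", update_lang, t) for t in task_langs}
    exps = {t: math.exp(-d / temperature) for t, d in dists.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

# Example: new training data arrives only in Hindi; compute how strongly
# each remaining task language should absorb the parameter update.
weights = share_weights("hin", ["eng", "ben", "deu", "tam"])
for lang, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{lang}: {w:.3f}")
```

Closer languages get weights nearer to 1, so they absorb more of the update, while distant languages are shielded from it; this mirrors the paper's goal of balancing knowledge sharing against overfitting to the update languages.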

Kartikeya Badola, Shachi Dave, Partha Talukdar • 2022

Related benchmarks

Task                      | Dataset  | Metric                | Result | Rank
Named Entity Recognition  | PAN-X    | Macro Avg Score       | 0.8165 | 16
Intent Classification     | MTOP     | Max Ratio             | 12.04  | 4
Next Slot Prediction      | MTOP NSP | Max Ratio             | 1.26   | 4
Part-of-Speech Tagging    | UDPOS    | Macro Avg Loss        | 12.3   | 4
Next Sentence Prediction  | MTOP NSP | Macro Avg Performance | 61.16  | 3
