No Train but Gain: Language Arithmetic for training-free Language Adapters enhancement
About
Modular deep learning is the state-of-the-art solution for lifting the curse of multilinguality, preventing the impact of negative interference and enabling cross-lingual performance in Multilingual Pre-trained Language Models. However, a trade-off of this approach is the reduction in positive transfer from closely related languages. In response, we introduce a novel method called language arithmetic, which enables training-free post-processing to address this limitation. Extending the task arithmetic framework, we apply learning via addition to the language adapters, transitioning the framework from a multi-task to a multilingual setup. The effectiveness of the proposed solution is demonstrated on three downstream tasks in a MAD-X-based set of cross-lingual schemes, acting as a post-processing procedure. Language arithmetic consistently improves the baselines with significant gains, especially in the most challenging case of zero-shot application. Our code and models are available at https://github.com/mklimasz/language-arithmetic.
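The core idea, "learning via addition" applied to language adapters, can be sketched as a training-free weighted combination of adapter parameters. This is a minimal illustration under stated assumptions: adapter weights are represented as plain dicts of scalars standing in for tensors, and the function name `language_arithmetic` and scaling factor `lam` are hypothetical, not the repository's actual API.

```python
def language_arithmetic(target, related, lam=0.5):
    """Combine a target-language adapter with a related-language adapter.

    Following the task-arithmetic idea of learning via addition, the
    related adapter's weights are mixed into the target adapter's via a
    scaled sum, with no gradient updates (training-free post-processing).
    """
    return {
        name: (1.0 - lam) * target[name] + lam * related[name]
        for name in target
    }


# Toy example: scalar "weights" stand in for the adapter's tensors.
target_adapter = {"down.weight": 1.0, "up.weight": 2.0}
related_adapter = {"down.weight": 3.0, "up.weight": 4.0}

merged = language_arithmetic(target_adapter, related_adapter, lam=0.5)
print(merged)  # {'down.weight': 2.0, 'up.weight': 3.0}
```

In practice the same arithmetic would be applied elementwise to each weight tensor in the adapter's state dict; `lam` controls how strongly the related language contributes, with `lam=0` recovering the original target adapter.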
Related benchmarks
| Task | Dataset | Score | Rank |
|---|---|---|---|
| Part-of-Speech Tagging | POS (80 languages) | 46.9 | 5 |
| Multilingual Natural Language Processing | Aggregate (234 languages) | 50.1 | 5 |
| Named Entity Recognition | NER (136 languages) | 49.3 | 5 |
| Question Answering | QA (12 languages) | 72.5 | 5 |
| Topic Classification | SIB (176 languages) | 61.6 | 5 |
| Choice of Plausible Alternatives | COPA (11 languages) | 50.3 | 5 |