Cross-lingual QA: A Key to Unlocking In-context Cross-lingual Performance
About
Multilingual large language models (MLLMs) have demonstrated significant cross-lingual capabilities through in-context learning. Existing approaches typically construct monolingual in-context examples in either the source or the target language. However, translating entire in-context examples into the target language may compromise contextual integrity and is costly for long-context passages. To address this, we introduce Cross-lingual QA, a cross-lingual prompting method that translates only the question and answer parts of each example, thereby reducing translation costs. Experiments on four typologically diverse multilingual benchmarks show that Cross-lingual QA prompting effectively elicits models' cross-lingual knowledge, outperforming prior monolingual prompting approaches. Furthermore, we show that prompting open-source MLLMs with cross-lingual in-context examples yields larger gains as model scale increases.
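As a rough illustration of the prompt construction described above (a minimal sketch, not the authors' released code), the snippet below builds cross-lingual in-context examples: each demonstration's passage stays in its source language, while only the question and answer are translated into the target language. The `translate` callable, the dictionary keys, and the `Passage/Question/Answer` template are assumptions standing in for whatever MT system and prompt format one actually uses.

```python
from typing import Callable, Dict, List

def build_crosslingual_example(
    example: Dict[str, str],
    translate: Callable[[str, str], str],
    target_lang: str,
) -> str:
    """Format one in-context example for Cross-lingual QA prompting.

    The passage is kept in its original (source) language; only the
    question and answer are translated into the target language,
    avoiding the cost of translating long-context passages.
    """
    question = translate(example["question"], target_lang)
    answer = translate(example["answer"], target_lang)
    return (
        f"Passage: {example['passage']}\n"
        f"Question: {question}\n"
        f"Answer: {answer}"
    )

def build_prompt(
    demos: List[Dict[str, str]],
    test_passage: str,
    test_question: str,
    translate: Callable[[str, str], str],
    target_lang: str,
) -> str:
    """Concatenate the cross-lingual demonstrations with the test instance."""
    blocks = [
        build_crosslingual_example(d, translate, target_lang) for d in demos
    ]
    # The test instance is already in the target language; the model is
    # expected to produce the answer in the target language as well.
    blocks.append(
        f"Passage: {test_passage}\nQuestion: {test_question}\nAnswer:"
    )
    return "\n\n".join(blocks)
```

Because only the short question and answer strings pass through the MT system, the per-example translation cost is independent of passage length.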
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Inference | XNLI | Accuracy | 80.7 | 111 |
| Question Answering | TyDiQA | Exact Match | 51.29 | 28 |
| Natural Language Understanding | NusaX | Macro F1 | 80.12 | 28 |