
Cross-lingual QA: A Key to Unlocking In-context Cross-lingual Performance

About

Multilingual large language models (MLLMs) have demonstrated significant cross-lingual capabilities through in-context learning. Existing approaches typically construct monolingual in-context examples, either in the source or target language. However, translating entire in-context examples into the target language might compromise contextual integrity and be costly in the case of long-context passages. To address this, we introduce Cross-lingual QA, a cross-lingual prompting method that translates only the question and answer parts, thus reducing translation costs. Experiments on four typologically diverse multilingual benchmarks show that Cross-lingual QA prompting effectively stimulates models to elicit their cross-lingual knowledge, outperforming prior monolingual prompting approaches. Furthermore, we show that prompting open-source MLLMs with cross-lingual in-context examples enhances performance as the model scale increases.
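To make the idea concrete, here is a minimal sketch of how a cross-lingual in-context example might be assembled: the (possibly long) passage stays in its source language, while only the short question and answer are translated into the target language. The `translate` function and the prompt template are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of Cross-lingual QA prompt construction.
# ASSUMPTIONS: `translate` stands in for any MT system, and the prompt
# template below is illustrative; the paper's exact format may differ.

def translate(text: str, target_lang: str) -> str:
    # Placeholder translation table; a real system would call an MT model.
    demo = {
        ("What is the capital of France?", "ko"): "프랑스의 수도는 어디입니까?",
        ("Paris", "ko"): "파리",
    }
    return demo.get((text, target_lang), text)

def build_crosslingual_example(passage: str, question: str, answer: str,
                               target_lang: str) -> str:
    # Keep the passage in its source language to avoid translation cost;
    # translate only the short question and answer parts.
    q = translate(question, target_lang)
    a = translate(answer, target_lang)
    return f"Passage: {passage}\nQuestion: {q}\nAnswer: {a}"

def build_prompt(examples, test_passage: str, test_question: str,
                 target_lang: str) -> str:
    # Concatenate cross-lingual demonstrations, then append the test query.
    demos = "\n\n".join(build_crosslingual_example(*ex, target_lang)
                        for ex in examples)
    test_q = translate(test_question, target_lang)
    return f"{demos}\n\nPassage: {test_passage}\nQuestion: {test_q}\nAnswer:"
```

Compared with translating the full example, this keeps contextual integrity of the passage while still cueing the model to answer in the target language.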

Sunkyoung Kim, Dayeon Ki, Yireun Kim, Jinsik Lee • 2023

Related benchmarks

| Task                           | Dataset | Metric      | Result | Rank |
|--------------------------------|---------|-------------|--------|------|
| Natural Language Inference     | XNLI    | Accuracy    | 80.7   | 111  |
| Question Answering             | TyDiQA  | Exact Match | 51.29  | 28   |
| Natural Language Understanding | NusaX   | Macro F1    | 80.12  | 28   |
