
Enhancing Cross-lingual Prompting with Dual Prompt Augmentation

About

Prompting shows promising results in few-shot scenarios. However, its strength for multilingual/cross-lingual problems has not been fully exploited. Zhao and Schütze (2021) made initial explorations in this direction by showing that cross-lingual prompting outperforms cross-lingual finetuning. In this paper, we conduct an empirical exploration of the effect of each component in cross-lingual prompting and derive language-agnostic Universal Prompting, which helps alleviate the discrepancies between source-language training and target-language inference. Based on this, we propose DPA, a dual prompt augmentation framework, aimed at relieving the data scarcity issue in few-shot cross-lingual prompting. Notably, on XNLI, our method achieves 46.54% accuracy with only 16 English training examples per class, significantly better than the 34.99% of finetuning. Our code is available at https://github.com/DAMO-NLP-SG/DPA.
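The abstract describes cloze-style cross-lingual prompting: a language-agnostic template wraps the input around a mask slot, and a verbalizer maps each label to a filler word the model predicts. The sketch below illustrates that setup for NLI; the template, verbalizer words, and function names are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of cloze-style prompting for NLI (illustrative, not the
# authors' code). A language-agnostic template keeps the prompt pattern
# identical across languages, so a model trained on English prompts can be
# applied to target-language inputs without translating the template.

MASK = "<mask>"  # placeholder token filled in by a masked language model

def build_prompt(premise: str, hypothesis: str) -> str:
    """Wrap an NLI pair into a cloze-style prompt with a single mask slot.

    The template itself (punctuation plus mask) is hypothetical; the point
    is that it contains no language-specific words.
    """
    return f"{premise} ? {MASK} , {hypothesis}"

# Hypothetical verbalizer: maps each NLI label to a filler word for the mask.
VERBALIZER = {
    "entailment": "yes",
    "neutral": "maybe",
    "contradiction": "no",
}

def label_from_filler(filler: str) -> str:
    """Invert the verbalizer: recover the label from the predicted filler."""
    inverse = {word: label for label, word in VERBALIZER.items()}
    return inverse[filler.lower()]

prompt = build_prompt("A man is playing guitar.", "A person makes music.")
```

At inference, the masked language model scores each verbalizer word at the mask position, and the highest-scoring word determines the predicted label.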

Meng Zhou, Xin Li, Yue Jiang, Lidong Bing • 2022

Related benchmarks

Task                         | Dataset         | Metric        | Result | Rank
Sentence-pair classification | XNLI 1.1 (test) | Accuracy (EN) | 67.97  | 14
Paraphrase Identification   | PAWS-X (test)   | Accuracy (en) | 84.97  | 13
