
Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning

About

Large multilingual pretrained language models such as mBERT and XLM-RoBERTa have been found to be surprisingly effective for cross-lingual transfer of syntactic parsing models (Wu and Dredze 2019), but only between related languages. However, source and training languages are rarely related when parsing truly low-resource languages. To close this gap, we adopt a method from multi-task learning, which relies on automated curriculum learning, to dynamically optimize for parsing performance on outlier languages. We show that this approach is significantly better than uniform and size-proportional sampling in the zero-shot setting.
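The core idea of worst-case aware sampling can be illustrated with a minimal sketch: instead of sampling training languages uniformly or in proportion to data size, sampling probabilities are skewed toward the languages the model currently performs worst on. The function name, the temperature parameter, and the per-language loss values below are all hypothetical illustrations, not the paper's actual implementation.

```python
import math

def worst_case_sampler(languages, losses, temperature=1.0):
    """Return sampling probabilities that favor high-loss (outlier) languages.

    A softmax over per-language losses: the worse the current loss on a
    language, the more often it is sampled in the next training round.
    """
    weights = [math.exp(losses[lang] / temperature) for lang in languages]
    total = sum(weights)
    return {lang: w / total for lang, w in zip(languages, weights)}

# Toy per-language dev losses (made-up numbers for illustration).
langs = ["ar", "ta", "wo"]
losses = {"ar": 0.4, "ta": 0.9, "wo": 1.5}
probs = worst_case_sampler(langs, losses)
# The highest-loss language receives the largest sampling probability.
assert probs["wo"] > probs["ta"] > probs["ar"]
```

By contrast, the uniform baseline would assign each language probability 1/N, and the size-proportional baseline would weight by treebank size; both ignore how the model is actually doing on each language, which is what the curriculum-based approach exploits.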

Miryam de Lhoneux, Sheng Zhang, Anders Søgaard • 2022

Related benchmarks

Task: Dependency Parsing
Dataset: 30 unseen languages split (test)
Result: Average LAS 42.3
Rank: 12

Other info

Code
