MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension
About
We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six for development, and the remaining six were held out for final evaluation. Ten teams submitted systems that explored various ideas, including data sampling, multi-task learning, adversarial training, and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets (the six development sets and the six hidden test sets), 10.7 absolute points higher than our initial baseline based on BERT.
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen • 2019
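The unified data format released for the shared task is gzipped JSONL, one passage per line with its attached questions. As a rough illustration, here is a minimal Python reader; the field names (`context`, `qas`, `question`, `answers`) follow the MRQA 2019 codebase, but the exact schema should be treated as an assumption and checked against the official repository.

```python
import gzip
import json

def read_mrqa(path: str):
    """Yield (context, question, gold_answers) triples from an MRQA-format
    .jsonl.gz file. Field names assume the schema used in the MRQA 2019
    repository; verify against the official data before relying on them."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        # The first line is a header record describing the source dataset.
        header = json.loads(f.readline())
        assert "header" in header
        for line in f:
            example = json.loads(line)
            context = example["context"]
            for qa in example["qas"]:
                yield context, qa["question"], qa["answers"]

# Example usage (hypothetical file name):
# for context, question, answers in read_mrqa("SQuAD.jsonl.gz"):
#     print(question, answers)
```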
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Named Entity Recognition | CoNLL 03 | -- | -- | 102 |
| Named Entity Recognition | MIT Movie | Entity F1 | 66.26 | 57 |
| Named Entity Recognition | MIT Restaurant | Micro-F1 | 68.68 | 50 |
| Relation Extraction | CoNLL 04 | F1 | 66.23 | 39 |
| Question Answering | RelExt (MRQA out-of-domain evaluation) | EM | 72.46 | 37 |
| Question Answering | TextbookQA (MRQA out-of-domain evaluation) | EM | 32.4 | 37 |
| Extractive Question Answering | SQuAD 2.0 | F1 | 66.22 | 34 |
| Question Answering | MRQA 2019 (dev) | -- | -- | 32 |
| Question Answering | DuoRC (MRQA out-of-domain evaluation) | EM | 35.51 | 23 |
| Question Answering | DROP (MRQA out-of-domain evaluation) | EM | 34.46 | 23 |
Showing 10 of 20 rows.
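The Metric column above mixes exact match (EM) and token-level F1. Both are the standard SQuAD-style answer-comparison metrics, which the MRQA evaluation also follows: answers are normalized (lowercased, punctuation and articles stripped), then compared either exactly or by token overlap. A minimal sketch:

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Standard SQuAD-style normalization: lowercase, strip punctuation
    and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def metric_max_over_golds(metric, prediction: str, golds: list) -> float:
    """A question may have several acceptable gold answers; score against
    the best-matching one."""
    return max(metric(prediction, g) for g in golds)

print(f1_score("the Eiffel Tower", "Eiffel Tower"))  # 1.0 after normalization
```

Reported scores are these per-question values averaged over a dataset and multiplied by 100, which is why the table entries fall on a 0-100 scale.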