QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
About
Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A architecture called QANet, which does not require recurrent networks: its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving accuracy equivalent to recurrent models. The speed-up allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. On the SQuAD dataset, our single model, trained with augmented data, achieves an 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8.
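The core design idea — convolution for local interactions, self-attention for global ones, composed inside a single encoder block — can be sketched as follows. This is a minimal, illustrative NumPy version, not the paper's implementation: the weights are random, and details such as multi-head attention, positional encodings, and the paper's exact block layout (repeated convolutions, feed-forward sublayer) are omitted.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position's feature vector (no learned scale/bias here).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def depthwise_separable_conv(x, depth_w, point_w):
    # Local interactions: each channel is convolved independently over a
    # small window (depthwise), then a 1x1 conv mixes channels (pointwise).
    # x: (seq_len, d); depth_w: (k, d); point_w: (d, d)
    k, _ = depth_w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        out[t] = (xp[t:t + k] * depth_w).sum(axis=0)
    return out @ point_w

def self_attention(x, wq, wk, wv):
    # Global interactions: every position attends to every other position.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

def encoder_block(x, rng):
    # One QANet-style block: convolution sublayer, then self-attention
    # sublayer, each with pre-layernorm and a residual connection.
    d = x.shape[-1]
    depth_w = rng.standard_normal((3, d)) * 0.1
    point_w = rng.standard_normal((d, d)) * 0.1
    x = x + depthwise_separable_conv(layer_norm(x), depth_w, point_w)
    wq, wk, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    x = x + self_attention(layer_norm(x), wq, wk, wv)
    return x
```

Because neither sublayer carries state across timesteps, the whole block can process all positions in parallel, which is the source of the training and inference speed-ups over RNN-based encoders.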
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | SQuAD v1.1 (dev) | F1 Score | 32.6 | 375 |
| Question Answering | SQuAD v1.1 (test) | F1 Score | 90.5 | 260 |
| Question Answering | NewsQA (dev) | F1 Score | 60.21 | 101 |
| Sequence Classification | IMDB | Micro F1 | 89.86 | 64 |
| Sequence Classification | ATIS | Micro F1 | 97.07 | 64 |
| Sequence Classification | MASSIVE | Micro F1 | 78.48 | 64 |
| Sequence Classification | Yahoo | Micro F1 | 55.77 | 64 |
| Sequence Classification | Huffpost low-resource (test) | Micro F1 | 80.71 | 64 |
| Reading Comprehension | DROP (dev) | F1 Score | 30.44 | 63 |
| Reading Comprehension | DROP (test) | F1 Score | 28.36 | 61 |