Natural Language Processing with Small Feed-Forward Networks
About
We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget.
Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan McDonald, Slav Petrov · 2017
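To make the kind of model the paper studies concrete, below is a minimal sketch of a small feed-forward classifier over hashed character n-gram features. It is not the authors' implementation: the bucket count, embedding and hidden sizes, bigram order, and CRC32 hashing are illustrative assumptions; training is omitted.

```python
import zlib
import numpy as np

def ngram_features(text, n=2, num_buckets=1000):
    """Hash character n-grams into a fixed number of buckets.

    Feature hashing (an assumption here, in the spirit of compact models)
    caps the input vocabulary, so the embedding matrix stays small no
    matter how many distinct n-grams occur in the data.
    """
    counts = np.zeros(num_buckets, dtype=np.float32)
    padded = f"^{text}$"  # boundary markers
    for i in range(len(padded) - n + 1):
        gram = padded[i:i + n]
        # zlib.crc32 is deterministic, unlike Python's built-in hash().
        counts[zlib.crc32(gram.encode("utf-8")) % num_buckets] += 1.0
    total = counts.sum()
    return counts / total if total > 0 else counts

class SmallFeedForwardClassifier:
    """Hashed n-gram counts -> averaged embedding -> one ReLU layer -> softmax.

    Inference-only sketch; training (e.g. SGD on cross-entropy) is omitted.
    """

    def __init__(self, num_buckets=1000, embed_dim=16, hidden_dim=64,
                 num_classes=10, seed=0):
        rng = np.random.default_rng(seed)
        # The embedding table dominates the memory footprint
        # (num_buckets * embed_dim floats), so shrinking either dimension
        # is the main lever for fitting a small memory budget.
        self.E = rng.normal(0.0, 0.1, (num_buckets, embed_dim)).astype(np.float32)
        self.W1 = rng.normal(0.0, 0.1, (embed_dim, hidden_dim)).astype(np.float32)
        self.b1 = np.zeros(hidden_dim, dtype=np.float32)
        self.W2 = rng.normal(0.0, 0.1, (hidden_dim, num_classes)).astype(np.float32)
        self.b2 = np.zeros(num_classes, dtype=np.float32)

    def predict_proba(self, text):
        feats = ngram_features(text, num_buckets=self.E.shape[0])
        x = feats @ self.E                        # frequency-weighted average embedding
        h = np.maximum(x @ self.W1 + self.b1, 0)  # single shallow hidden layer
        logits = h @ self.W2 + self.b2
        exp = np.exp(logits - logits.max())       # numerically stable softmax
        return exp / exp.sum()

model = SmallFeedForwardClassifier()
print(model.predict_proba("bonjour tout le monde"))  # probabilities over 10 classes
```

Counting parameters in this sketch shows why the allocation tradeoff matters: the embedding table holds 1000 × 16 = 16,000 weights, versus roughly 1,700 in the hidden and output layers combined, so most of a fixed budget goes to input representations rather than depth.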
Related benchmarks
| Task | Dataset | Model | Metric | Result | Rank |
|---|---|---|---|---|---|
| Language Identification | UDHR | CLD3 1.0 | F1 Score | 0.922 | 8 |
| Language Identification | FLORES-200 | CLD3 1.0 | F1 Score | 95.2 | 8 |