Severing the Edge Between Before and After: Neural Architectures for Temporal Ordering of Events
About
In this paper, we propose a neural architecture and a set of training methods for ordering events by predicting temporal relations. Our proposed models receive a pair of events within a span of text as input and identify the temporal relation (Before, After, Equal, Vague) between them. Since a key challenge of this task is the scarcity of annotated data, our models rely on a combination of pretrained representations (i.e., RoBERTa, BERT, or ELMo), transfer and multi-task learning (leveraging complementary datasets), and self-training techniques. Experiments on the MATRES dataset of English documents establish a new state-of-the-art on this task.
Miguel Ballesteros, Rishita Anubhai, Shuai Wang, Nima Pourdamghani, Yogarshi Vyas, Jie Ma, Parminder Bhatia, Kathleen McKeown, Yaser Al-Onaizan• 2020
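To make the setup concrete, here is a minimal illustrative sketch of a pairwise temporal relation classifier of the kind the abstract describes. It is not the authors' architecture: the class name, dimensions, and MLP scorer are assumptions for illustration, and precomputed event vectors stand in for the pretrained RoBERTa/BERT/ELMo encodings so the example stays self-contained.

```python
import torch
import torch.nn as nn

# The four MATRES relation labels named in the abstract.
LABELS = ["BEFORE", "AFTER", "EQUAL", "VAGUE"]


class PairwiseTemporalClassifier(nn.Module):
    """Toy stand-in for a pairwise event relation classifier.

    In the paper, event representations come from a pretrained encoder
    (RoBERTa/BERT/ELMo); here we accept precomputed event vectors so the
    sketch does not depend on any pretrained model.
    """

    def __init__(self, event_dim: int = 64, hidden_dim: int = 32):
        super().__init__()
        # Concatenate the two event vectors and score the four labels
        # with a small MLP (an illustrative choice, not the paper's).
        self.scorer = nn.Sequential(
            nn.Linear(2 * event_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, len(LABELS)),
        )

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        return self.scorer(torch.cat([e1, e2], dim=-1))


def predict_relation(model: nn.Module, e1: torch.Tensor, e2: torch.Tensor) -> str:
    """Return the highest-scoring label for an event pair."""
    with torch.no_grad():
        logits = model(e1, e2)
    return LABELS[int(logits.argmax(dim=-1))]


if __name__ == "__main__":
    torch.manual_seed(0)
    model = PairwiseTemporalClassifier()
    # Random vectors standing in for encoder outputs of the two events.
    e1, e2 = torch.randn(64), torch.randn(64)
    print(predict_relation(model, e1, e2))
```

An untrained model like this would of course predict arbitrarily; the point is only the input/output contract: two event representations in, a distribution over the four temporal labels out.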
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Event TEMPREL extraction | MATRES | F1 | 77.2 | 31 |
| Relation Extraction | TB-DENSE | F1 | 62.2 | 10 |
| Temporal relation extraction | TDDAuto | F1 | 62.3 | 7 |
| Temporal relation extraction | TDDMan | F1 | 37.5 | 7 |