
Transformer Embeddings of Irregularly Spaced Events and Their Participants

About

The neural Hawkes process (Mei & Eisner, 2017) is a generative model of irregularly spaced sequences of discrete events. To handle complex domains with many event types, Mei et al. (2020a) further consider a setting in which each event in the sequence updates a deductive database of facts (via domain-specific pattern-matching rules); future events are then conditioned on the database contents. They show how to convert such a symbolic system into a neuro-symbolic continuous-time generative model, in which each database fact and each possible event has a time-varying embedding that is derived from its symbolic provenance. In this paper, we modify both models, replacing their recurrent LSTM-based architectures with flatter attention-based architectures (Vaswani et al., 2017), which are simpler and more parallelizable. This does not appear to hurt our accuracy, which is comparable to or better than that of the original models as well as (where applicable) previous attention-based methods (Zuo et al., 2020; Zhang et al., 2020a).

Chenghao Yang, Hongyuan Mei, Jason Eisner • 2021
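
To make the general recipe concrete, below is a minimal PyTorch sketch of an attention-based continuous-time event model in this spirit: each past event's type and timestamp are embedded, causal self-attention over strictly earlier events produces a history representation at any query time, and a softplus readout yields a nonnegative intensity for each event type. This is an illustrative sketch only, not the paper's exact architecture; the class name, layer sizes, and the sinusoidal time encoding are all assumptions made for the example.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveEventModel(nn.Module):
    """Hypothetical sketch of an attention-based continuous-time event model.

    Not the authors' architecture. It only illustrates the general recipe:
    embed (type, time) pairs, attend over the strictly earlier history, and
    read out a positive intensity lambda_k(t) per event type via softplus.
    """

    def __init__(self, num_types: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.type_emb = nn.Embedding(num_types, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.readout = nn.Linear(d_model, num_types)
        self.d_model = d_model

    def time_encoding(self, t: torch.Tensor) -> torch.Tensor:
        # Sinusoidal encoding of continuous timestamps, one row per event.
        half = self.d_model // 2
        freqs = torch.exp(
            torch.arange(half, dtype=torch.float32, device=t.device)
            * (-math.log(10000.0) / half)
        )
        ang = t.unsqueeze(-1) * freqs  # (batch, seq, half)
        return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)

    def intensities(self, types: torch.Tensor, times: torch.Tensor,
                    query_times: torch.Tensor) -> torch.Tensor:
        """Evaluate lambda_k(t) at each query time t, conditioned on history.

        types:       (batch, seq) integer event types
        times:       (batch, seq) event timestamps, increasing along seq
        query_times: (batch, q)   times at which to evaluate intensities;
                     each must be preceded by at least one event
        returns:     (batch, q, num_types) nonnegative intensities
        """
        keys = self.type_emb(types) + self.time_encoding(times)
        queries = self.time_encoding(query_times)
        # Causality: mask out events at or after each query time (True = blocked).
        mask = times.unsqueeze(1) >= query_times.unsqueeze(2)  # (batch, q, seq)
        mask = mask.repeat_interleave(self.attn.num_heads, dim=0)
        h, _ = self.attn(queries, keys, keys, attn_mask=mask)
        return F.softplus(self.readout(h))


# Toy usage: 3 observed events, intensities queried at two later times.
model = AttentiveEventModel(num_types=5)
types = torch.tensor([[0, 2, 1]])
times = torch.tensor([[0.5, 1.3, 2.9]])
lam = model.intensities(types, times, query_times=torch.tensor([[1.0, 3.5]]))
print(lam.shape)  # torch.Size([1, 2, 5])
```

Because the attention layer sees the whole history at once rather than stepping through it as an LSTM does, intensities at many query times can be computed in parallel, which is the parallelizability advantage the abstract refers to.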

Related benchmarks

Task                                          Dataset            Result (RMSE)  Rank
Event Prediction                              StackOverflow      1.37           42
Event Prediction                              Retweet            22.28          18
Event Forecasting                             taxi               0.37           16
Next event prediction                         AMAZON             0.612          14
Multimodal Temporal Point Process prediction  DanmakuTPP (test)  5.4384         9
Multimodal Temporal Point Process prediction  TAXI-PRO (test)    0.4049         9
