
Transformer Embeddings of Irregularly Spaced Events and Their Participants

About

The neural Hawkes process (Mei & Eisner, 2017) is a generative model of irregularly spaced sequences of discrete events. To handle complex domains with many event types, Mei et al. (2020a) further consider a setting in which each event in the sequence updates a deductive database of facts (via domain-specific pattern-matching rules); future events are then conditioned on the database contents. They show how to convert such a symbolic system into a neuro-symbolic continuous-time generative model, in which each database fact and each possible event has a time-varying embedding that is derived from its symbolic provenance. In this paper, we modify both models, replacing their recurrent LSTM-based architectures with flatter attention-based architectures (Vaswani et al., 2017), which are simpler and more parallelizable. This does not appear to hurt our accuracy, which is comparable to or better than that of the original models as well as (where applicable) previous attention-based methods (Zuo et al., 2020; Zhang et al., 2020a).
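
The core idea of an attention-based temporal point process can be sketched compactly: embed each past event from its type and (continuous) timestamp, attend over the history from the query time, and map the resulting history embedding to positive per-type intensities. Below is a minimal, self-contained numpy sketch of this pattern; all parameter names (`E`, `Wq`, `Wk`, `Wv`, `w_out`), the single attention head, the sinusoidal time encoding, and the random initialization are illustrative assumptions, not the paper's actual architecture or trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3   # number of event types (hypothetical)
D = 8   # embedding dimension (hypothetical)

# Randomly initialized parameters, for illustration only
E = rng.normal(size=(K, D))            # event-type embeddings
Wq = rng.normal(size=(D, D))           # query projection
Wk = rng.normal(size=(D, D))           # key projection
Wv = rng.normal(size=(D, D))           # value projection
w_out = rng.normal(size=(K, D))        # per-type intensity weights

def time_encoding(t, d=D):
    """Sinusoidal encoding of a continuous event time t."""
    i = np.arange(d // 2)
    freq = 1.0 / (10000.0 ** (2 * i / d))
    return np.concatenate([np.sin(t * freq), np.cos(t * freq)])

def softplus(x):
    """Numerically stable softplus, keeps intensities positive."""
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def intensities(times, types, t_query):
    """Intensity lambda_k(t_query) for each event type k, given the history."""
    # Embed each past event from its type and timestamp
    X = np.stack([E[k] + time_encoding(t) for t, k in zip(times, types)])
    q = time_encoding(t_query) @ Wq    # query built from the prediction time
    keys = X @ Wk
    vals = X @ Wv
    scores = keys @ q / np.sqrt(D)     # scaled dot-product attention
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    h = attn @ vals                    # history embedding at t_query
    return softplus(w_out @ h)         # one positive intensity per type

# Usage: three past events, query the intensities at a later time
times = [0.5, 1.2, 2.0]
types = [0, 2, 1]
lam = intensities(times, types, t_query=2.5)
```

In the actual models, the embeddings and projections are learned, attention is multi-head and multi-layer, and (in the database variant) fact embeddings are derived from symbolic provenance rather than a flat type table; the sketch only shows why the attention formulation parallelizes across the sequence where an LSTM cannot.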

Chenghao Yang, Hongyuan Mei, Jason Eisner• 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Event Prediction | StackOverflow | RMSE | 1.37 | 42 |
| Event Forecasting | Taxi | RMSE | 0.37 | 23 |
| Marked Temporal Point Process | StackOverflow (test) | RMSE | 1.402 | 20 |
| Event Prediction | Retweet | -- | -- | 18 |
| Next Event Prediction | Amazon | RMSE | 0.612 | 14 |
| Marked Temporal Point Process Prediction | Retweet (test) | RMSE | 21.748 | 10 |
| Marked Temporal Point Process Prediction | Earthquake (test) | RMSE | 2.117 | 10 |
| Marked Temporal Point Process Prediction | Taxi (test) | RMSE | 0.394 | 10 |
| Marked Temporal Point Process Prediction | Amazon (test) | RMSE | 0.652 | 10 |
| Multimodal Temporal Point Process Prediction | DanmakuTPP (test) | RMSE | 5.4384 | 9 |

(Showing 10 of 15 rows.)
