
Self-attention with Functional Time Representation Learning

About

Sequential modelling with self-attention has achieved state-of-the-art performance in natural language processing. With advantages in model flexibility, computational complexity and interpretability, self-attention is gradually becoming a key component in event sequence models. However, like most other sequence models, self-attention does not account for the time span between events and thus captures sequential signals rather than temporal patterns. Without relying on recurrent network structures, self-attention recognizes event orderings via positional encoding. To bridge the gap between modelling time-independent and time-dependent event sequences, we introduce a functional feature map that embeds time spans into a high-dimensional space. By constructing the associated translation-invariant time kernel function, we reveal the functional forms of the feature map using classical results from functional analysis, namely Bochner's Theorem and Mercer's Theorem. We propose several models to learn the functional time representation and its interactions with event representations. These methods are evaluated on real-world datasets under various continuous-time event sequence prediction tasks. The experiments reveal that the proposed methods compare favorably to baseline models while also capturing useful time-event interactions.
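To make the Bochner-based construction concrete, here is a minimal numpy sketch of such a functional time feature map. It is an illustration under stated assumptions, not the paper's implementation: the frequencies are sampled from a standard normal spectral density (which corresponds to a Gaussian time kernel), whereas the paper learns the spectral distribution; the class name and its interface are invented for this example.

```python
import numpy as np

class BochnerTimeEncoding:
    """Sketch of a Bochner-style functional time embedding.

    By Bochner's theorem, a translation-invariant kernel
    K(t1, t2) = psi(t1 - t2) equals the expectation of cos(w * (t1 - t2))
    under a spectral distribution p(w). Drawing d/2 frequencies w_i from
    p(w) yields the finite-dimensional feature map
        Phi(t) = sqrt(2/d) * [cos(w_1 t), ..., sin(w_1 t), ...],
    whose inner product Phi(t1) . Phi(t2) approximates K(t1, t2).
    """

    def __init__(self, dim, rng=None):
        assert dim % 2 == 0, "need cos/sin pairs"
        rng = rng or np.random.default_rng(0)
        # Assumption: standard-normal spectral density, i.e. the Gaussian
        # kernel exp(-(t1 - t2)^2 / 2). The paper learns these frequencies.
        self.freqs = rng.standard_normal(dim // 2)
        self.dim = dim

    def __call__(self, t):
        # t: scalar or array of time spans -> (n, dim) feature matrix
        angles = np.outer(np.atleast_1d(t), self.freqs)        # (n, dim/2)
        feats = np.concatenate([np.cos(angles), np.sin(angles)], axis=-1)
        return np.sqrt(2.0 / self.dim) * feats
```

With enough frequencies, the dot product of two embeddings recovers the kernel value for the corresponding time span; these embeddings can then be combined with event embeddings inside a self-attention layer in place of positional encodings.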

Da Xu, Chuanwei Ruan, Sushant Kumar, Evren Korpeoglu, Kannan Achan • 2019

Related benchmarks

Task                           Dataset               Result (LL)  Rank
Marked Temporal Point Process  MIMIC (test)          -1.11        10
Marked Temporal Point Process  Neonate (test)        -2.646       10
Marked Temporal Point Process  Traffic (test)         0.372       10
Marked Temporal Point Process  Synthetic (test)      -0.619       10
Marked Temporal Point Process  StackOverflow (test)  -2.404       10
Marked Temporal Point Process  BookOrder (test)      -1.11        10
