
Position-Aware Sequential Attention for Accurate Next Item Recommendations

About

Sequential self-attention models usually rely on additive positional embeddings, which inject positional information into item representations at the input. In the absence of positional signals, the attention block is permutation-equivariant over sequence positions and thus has no intrinsic notion of temporal order beyond causal masking. We argue that additive positional embeddings make the attention mechanism only superficially sensitive to sequence order: positional information is entangled with item embedding semantics, propagates weakly in deep architectures, and limits the ability to capture rich sequential patterns. To address these limitations, we introduce a kernelized self-attention mechanism, where a learnable positional kernel operates purely in the position space, disentangled from semantic similarity, and directly modulates attention weights. When applied per attention block, this kernel enables adaptive multi-scale sequential modeling. Experiments on standard next-item prediction benchmarks show that our positional kernel attention consistently improves over strong competing baselines.
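To make the idea concrete, below is a minimal sketch of self-attention whose weights are modulated by a learnable positional kernel, in the spirit of the abstract. The paper does not specify the kernel's exact form here, so this sketch assumes a per-head learnable kernel over relative distances that is added to the attention logits; the class and parameter names (e.g. PositionalKernelAttention, pos_kernel) are hypothetical and not taken from the authors' implementation.

```python
# Illustrative sketch only: a causal self-attention block where a learnable
# positional kernel, living purely in position space, modulates the attention
# scores. The kernel parameterization (per-head bias over relative distance)
# is an assumption, not the paper's exact method.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionalKernelAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, max_len: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Learnable positional kernel: one value per head and per relative
        # distance 0..max_len-1. It is never mixed into the item embeddings,
        # so positional and semantic information stay disentangled.
        self.pos_kernel = nn.Parameter(torch.zeros(n_heads, max_len))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) item embeddings, with no additive
        # positional embeddings at the input.
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(b, n, self.n_heads, self.d_head).transpose(1, 2)

        # Content-based similarity between queries and keys.
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)

        # Positional modulation: add the kernel value for each (query, key)
        # relative distance to the attention logits before the softmax.
        pos = torch.arange(n, device=x.device)
        rel = (pos[:, None] - pos[None, :]).clamp(min=0)  # (n, n) distances
        scores = scores + self.pos_kernel[:, rel]         # broadcast over batch

        # Causal mask: an item attends only to itself and earlier items.
        causal = torch.tril(torch.ones(n, n, dtype=torch.bool, device=x.device))
        scores = scores.masked_fill(~causal, float("-inf"))

        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(out)
```

Because each attention block would carry its own pos_kernel, different layers can learn different effective ranges over the sequence, which is one plausible reading of the "adaptive multi-scale sequential modeling" claim.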

Timur Nabiev, Evgeny Frolov • 2026

Related benchmarks

Task                        Dataset     Metric     Result    Rank
Sequential Recommendation   Gowalla     NDCG@10    0.0485    45
Sequential Recommendation   Y-likes     NDCG@10    1.53      6
Sequential Recommendation   zvuk        NDCG@10    0.0106    6
Sequential Recommendation   Beauty      NDCG@10    3.94      5
Sequential Recommendation   Y-listens   NDCG@10    0.0123    5
