
Higher-Order Modular Attention: Fusing Pairwise and Triadic Interactions for Protein Sequences

About

Transformer self-attention computes pairwise token interactions, yet protein sequence-to-phenotype relationships often involve cooperative dependencies among three or more residues that dot-product attention does not capture explicitly. We introduce Higher-Order Modular Attention (HOMA), a unified attention operator that fuses pairwise attention with an explicit triadic interaction pathway. To make triadic attention practical on long sequences, HOMA employs block-structured, windowed triadic attention. We evaluate on three TAPE benchmarks: secondary structure, fluorescence, and stability. HOMA yields consistent improvements across all three tasks compared with standard self-attention and efficient variants such as block-wise attention and Linformer. These results suggest that explicit triadic terms provide complementary representational capacity for protein sequence prediction at a controllable additional computational cost.
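The page does not give HOMA's exact operator, so the following is only a minimal PyTorch sketch of the general idea: full pairwise dot-product attention fused with a block-wise triadic term. The trilinear score q_i · (k_j ⊙ k_k) over residue pairs inside each block, the elementwise pair-value fusion, and the learned sigmoid gate are all illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HigherOrderModularAttentionSketch(nn.Module):
    """Illustrative sketch: pairwise attention + block-wise triadic attention.

    Hypothetical choices (not from the paper): trilinear scoring with an
    elementwise key product, elementwise value pairing, sigmoid-gated fusion.
    """

    def __init__(self, d_model: int, block_size: int = 16):
        super().__init__()
        self.d = d_model
        self.w = block_size
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.gate = nn.Parameter(torch.tensor(0.0))  # learned fusion weight
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (B, L, d), L assumed divisible by block_size
        B, L, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        # Standard pairwise (dot-product) attention over the full sequence.
        attn2 = F.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
        pair_out = attn2 @ v  # (B, L, d)

        # Block-structured triadic attention: each query attends to residue
        # pairs (j, k) within its own block, O(L * w^2) instead of O(L^3).
        w = self.w
        nb = L // w
        qb = q.view(B, nb, w, d)
        kb = k.view(B, nb, w, d)
        vb = v.view(B, nb, w, d)

        # Trilinear score s_ijk = <q_i, k_j * k_k> (elementwise key product).
        kk = kb.unsqueeze(3) * kb.unsqueeze(2)              # (B, nb, w, w, d)
        scores = torch.einsum('bnid,bnjkd->bnijk', qb, kk) / d ** 0.5
        attn3 = F.softmax(scores.flatten(-2), dim=-1).view_as(scores)

        # Pair value: elementwise product of the two attended values.
        vv = vb.unsqueeze(3) * vb.unsqueeze(2)              # (B, nb, w, w, d)
        tri_out = torch.einsum('bnijk,bnjkd->bnid', attn3, vv).reshape(B, L, d)

        # Fuse the two pathways with a learned sigmoid gate.
        g = torch.sigmoid(self.gate)
        return self.out(g * tri_out + (1 - g) * pair_out)

# Example usage on a toy batch:
# y = HigherOrderModularAttentionSketch(128, block_size=16)(torch.randn(2, 64, 128))
```

Restricting the triadic pathway to blocks is what keeps the extra cost controllable: the pairwise term stays O(L^2) while the triadic term grows only linearly in sequence length for a fixed block size.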

Shirin Amiraslani, Xin Gao • 2026

Related benchmarks

Task                           | Dataset      | Metric               | Result | Rank
Regression                     | Stability    | Spearman Correlation | 0.7152 | 12
Fluorescence prediction        | Fluorescence | Spearman's rho (ρ)   | 0.7388 | 6
Secondary Structure Prediction | CASP 12      | F1 Score             | 63.38  | 6
Secondary Structure Prediction | CB513        | F1 Score             | 63.36  | 6
Secondary Structure Prediction | TS115        | F1 Score             | 65.65  | 6
