
Decentralized Attention Fails Centralized Signals: Rethinking Transformers for Medical Time Series

About

Accurate analysis of medical time series (MedTS) data, such as electroencephalography (EEG) and electrocardiography (ECG), plays a pivotal role in healthcare applications, including the diagnosis of brain and heart diseases. MedTS data typically exhibit two critical patterns: temporal dependencies within individual channels and channel dependencies across multiple channels. While recent advances in deep learning have leveraged Transformer-based models to effectively capture temporal dependencies, they often struggle with modeling channel dependencies. This limitation stems from a structural mismatch: MedTS signals are inherently centralized, whereas the Transformer's attention mechanism is decentralized, making it less effective at capturing global synchronization and unified waveform patterns. To address this mismatch, we propose CoTAR (Core Token Aggregation-Redistribution), a centralized MLP-based module designed to replace decentralized attention. Instead of allowing all tokens to interact directly, as in standard attention, CoTAR introduces a global core token that serves as a proxy to facilitate inter-token interactions, thereby enforcing a centralized aggregation and redistribution strategy. This design not only better aligns with the centralized nature of MedTS signals but also reduces computational complexity from quadratic to linear. Experiments on five benchmarks validate the superiority of our method in both effectiveness and efficiency, achieving up to a 12.13% improvement on the APAVA dataset, while using only 33% of the memory and 20% of the inference time compared to the previous state of the art. Code and all training scripts are available at https://github.com/Levi-Ackman/TeCh.
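The core idea described above, replacing all-pairs token interaction with a single global core token that first aggregates and then redistributes information, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); the function name, the pooling choice, and the weight matrices here are illustrative assumptions. It shows why the interaction cost is linear in the number of tokens: each token touches only the core token, never the other n-1 tokens.

```python
import numpy as np

def cotar_block(tokens, W_agg, W_red):
    """Sketch of a centralized aggregation-redistribution step.

    tokens: (n, d) array of token embeddings.
    Instead of an n x n attention map, one core token summarizes
    all tokens (aggregation) and is broadcast back to update each
    token (redistribution) -- O(n) interactions instead of O(n^2).
    """
    # Aggregation: pool all tokens into a single global core token.
    core = np.tanh(np.mean(tokens, axis=0) @ W_agg)   # shape (d,)
    # Redistribution: every token interacts only with the core.
    update = np.tanh(core @ W_red)                    # shape (d,)
    return tokens + update                            # residual update, (n, d)

rng = np.random.default_rng(0)
n, d = 6, 8
x = rng.standard_normal((n, d))
out = cotar_block(x, rng.standard_normal((d, d)), rng.standard_normal((d, d)))
print(out.shape)
```

Because the redistribution step broadcasts the same core-derived update to every token, the per-token change is identical across the sequence in this toy version; the actual module is MLP-based and learned, but the centralized interaction pattern is the same.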

Guoqi Yu, Juncheng Wang, Chen Yang, Jing Qin, Angelica I. Aviles-Rivero, Shujun Wang • 2026

Related benchmarks

Task                                   Dataset                     Metric     Result   Rank
Human Activity Recognition             FLAAP 10-Classes (test)     Accuracy   80.6     11
Human Activity Recognition             UCI-HAR 6-Classes (test)    Accuracy   94.15    11
Medical Time Series Classification     ADFTD 3-Classes (test)      Accuracy   54.54    11
Medical Time Series Classification     APAVA 2-Classes (test)      Accuracy   86.86    11
Medical Time Series Classification     TDBrain 2-Classes (test)    Accuracy   93.21    11
Medical Time Series Classification     PTB 2-Classes (test)        Accuracy   0.8596   11
Medical Time Series Classification     PTB-XL 5-Classes (test)     Accuracy   0.7353   11
Medical Time Series Classification     PTB-XL                      F1-Score   62.44    7
Medical Time Series Classification     APAVA                       Accuracy   86.86    2
Medical Time Series Classification     TDBrain                     Accuracy   93.21    2

(Showing 10 of 12 rows.)
