CA-TCN: A Causal-Anticausal Temporal Convolutional Network for Direct Auditory Attention Decoding

About

A promising approach for steering auditory attention in complex listening environments relies on Auditory Attention Decoding (AAD), which aims to identify the attended speech stream in a multi-speaker scenario from neural recordings. Entrainment-based AAD approaches typically assume access to clean speech sources alongside electroencephalography (EEG) signals, exploiting low-frequency correlations between the neural response and the attended stimulus. In this study, we propose CA-TCN, a Causal-Anticausal Temporal Convolutional Network that directly classifies the attended speaker from the EEG signal. The proposed architecture integrates several best practices from convolutional neural networks for sequence processing tasks. Importantly, it explicitly aligns auditory stimuli and neural responses by employing separate causal and anticausal convolutions, whose distinct receptive fields operate in opposite temporal directions. Experimental results, obtained through comparisons with three baseline AAD models, demonstrate that CA-TCN consistently improves decoding accuracy across datasets and decision windows, with gains ranging from 0.5% to 3.2% for subject-independent models and from 0.8% to 2.9% for subject-specific models relative to the next best-performing model, AADNet. Moreover, these improvements were statistically significant in four of the six evaluated settings when comparing Minimum Expected Switch Duration (MESD) distributions. Beyond accuracy, the model demonstrated spatial robustness, with its EEG spatial filters exhibiting stable patterns across datasets. Overall, this work introduces an accurate, unified AAD model that outperforms existing methods while offering practical benefits for online processing scenarios. These findings advance the state of AAD and its applicability to real-world systems.
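The causal/anticausal mechanic described above can be sketched in a few lines. The exact CA-TCN architecture is not given on this page, so everything below is an illustrative assumption rather than the authors' implementation: the class names (`CausalConv1d`, `AnticausalConv1d`, `CATCNBlock`), the layer sizes, and the toy classification head are all made up for demonstration. The idea shown is that a causal branch pads on the left so each output depends only on past samples, while an anticausal branch applies the same convolution to the time-reversed signal so each output depends only on future samples.

```python
# Minimal sketch of causal vs. anticausal temporal convolutions (PyTorch).
# All names, sizes, and the classification head are illustrative assumptions,
# not the published CA-TCN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConv1d(nn.Module):
    """1D convolution that sees only past samples (left padding)."""

    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # pad on the left only
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))


class AnticausalConv1d(nn.Module):
    """Causal convolution applied to the time-reversed signal, so each
    output depends only on future samples."""

    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.causal = CausalConv1d(in_ch, out_ch, kernel_size, dilation)

    def forward(self, x):
        return self.causal(x.flip(-1)).flip(-1)  # reverse, convolve, reverse


class CATCNBlock(nn.Module):
    """Toy block combining one causal and one anticausal branch."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.causal = CausalConv1d(in_ch, out_ch, kernel_size)
        self.anticausal = AnticausalConv1d(in_ch, out_ch, kernel_size)

    def forward(self, x):
        return torch.relu(self.causal(x) + self.anticausal(x))


# Example: classify the attended speaker (e.g. left vs. right) from a 1 s EEG
# window with 64 channels at 64 Hz -- numbers chosen purely for illustration.
eeg = torch.randn(8, 64, 64)  # (batch, channels, time)
block = CATCNBlock(in_ch=64, out_ch=16)
head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))
logits = head(block(eeg))  # (batch, 2) speaker logits
print(logits.shape)
```

Running the example yields a (batch, 2) tensor of speaker logits; in a real model, such blocks would typically be stacked with increasing dilation to enlarge the receptive fields in both temporal directions.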

Iñigo García-Ugarte, Rubén Eguinoa, Ricardo San Martín, Daniel Paternain, Carmen Vidaurre • 2026

Related benchmarks

Task                        | Dataset                            | Accuracy (1s) | Rank
Auditory Attention Decoding | Jaulab Subject-Specific            | 60.4          | 4
Auditory Attention Decoding | DTU Subject-Specific               | 60.5          | 4
Auditory Attention Decoding | KULeuven Subject-Specific          | 59.1          | 4
Auditory Attention Decoding | Jaulab Subject-Independent (val)   | 58            | 4
Auditory Attention Decoding | DTU Subject-Independent (val)      | 58.3          | 4
Auditory Attention Decoding | KULeuven Subject-Independent (val) | 56.8          | 4
