EEG-CLIP: Learning EEG representations from natural language descriptions

About

Deep networks for electroencephalogram (EEG) decoding are often trained to solve only one specific task, such as pathology or age decoding. A more general, task-agnostic approach is to train deep networks to match a (clinical) EEG recording to its corresponding textual medical report and vice versa. This approach was pioneered in the computer vision domain for matching images to their text captions, and it subsequently enabled successful zero-shot decoding using textual class prompts. In this work, we follow this approach and develop EEG-CLIP, a contrastive learning framework that aligns EEG time series and their corresponding clinical text descriptions in a shared embedding space. We investigate its potential for versatile EEG decoding, evaluating performance in a range of few-shot and zero-shot settings. Overall, we show that EEG-CLIP manages to align text and EEG representations non-trivially. Our work presents a promising approach to learning general EEG representations, which could enable easier analysis of diverse decoding questions through zero-shot decoding or training task-specific models from fewer examples. The code for reproducing our results is available at https://github.com/tidiane-camaret/EEGClip
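The contrastive alignment described above is, in CLIP-style frameworks, typically implemented as a symmetric InfoNCE objective over a batch of paired embeddings. The following is a minimal NumPy sketch of that objective; the function name, temperature value, and NumPy implementation are illustrative assumptions, not the paper's actual training code (the repository uses PyTorch).

```python
import numpy as np

def clip_style_loss(eeg_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    eeg_emb, text_emb: (batch, dim) arrays, where row i of each array
    corresponds to the same recording/report pair.
    """
    # L2-normalise so dot products are cosine similarities
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    # (batch, batch) similarity matrix; matching pairs sit on the diagonal
    logits = eeg @ txt.T / temperature
    labels = np.arange(len(logits))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the EEG-to-text and text-to-EEG directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimising this loss pulls each EEG embedding toward its own report's embedding and pushes it away from the other reports in the batch, which is what makes the shared space usable for zero-shot decoding via text prompts.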

Tidiane Camaret Ndir, Robin Tibor Schirrmeister, Tonio Ball • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Retrieval | TUAB | Pathological Retrieval Score | 79.94 | 8 |
| Seizure Detection | CHB-MIT | Balanced Accuracy | 68.43 | 8 |
| Major Depressive Disorder Classification | Mumtaz 2016 | Balanced Accuracy | 65.82 | 8 |
| Age Classification | TUAB | Accuracy | 68.4 | 2 |
| Age Classification | TUAB (test) | Accuracy | 43.95 | 2 |
| Gender Classification | TUAB (test) | Accuracy | 53.6 | 2 |
| Pathological Classification | TUAB | Accuracy | 81.48 | 2 |
| Pathological Classification | TUAB (test) | Accuracy | 44.67 | 2 |
| Text-based Classification | TUAB | Pathological Score | 79.05 | 2 |
