
Learning Cross-Joint Attention for Generalizable Video-Based Seizure Detection

About

Automated seizure detection from long-term clinical videos can substantially reduce manual review time and enable real-time monitoring. However, existing video-based methods often struggle to generalize to unseen subjects due to background bias and reliance on subject-specific appearance cues. We propose a joint-centric attention model that focuses exclusively on body dynamics to improve cross-subject generalization. For each video segment, body joints are detected and joint-centered clips are extracted, suppressing background context. These joint-centered clips are tokenized using a Video Vision Transformer (ViViT), and cross-joint attention is learned to model spatial and temporal interactions between body parts, capturing coordinated movement patterns characteristic of seizure semiology. Extensive cross-subject experiments show that the proposed method consistently outperforms state-of-the-art CNN-, graph-, and transformer-based approaches on unseen subjects.
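The cross-joint attention step described above can be sketched as a single scaled dot-product attention pass over per-joint tokens. This is a minimal illustrative sketch, not the authors' implementation: it assumes each of the J joint-centered clips has already been tokenized (e.g., by a ViViT encoder) into a d-dimensional embedding, and the function and variable names are hypothetical.

```python
# Hedged sketch of cross-joint attention over per-joint video tokens.
# Assumes J joint-centered clips were already tokenized into d-dim
# embeddings; names, shapes, and projections are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_joint_attention(tokens, Wq, Wk, Wv):
    """Scaled dot-product attention across joint tokens.

    tokens: (J, d) array, one token per body joint.
    Returns a (J, d) array in which each joint's representation is a
    weighted mixture over all joints, modeling their interactions.
    """
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (J, J) joint-to-joint affinities
    attn = softmax(scores, axis=-1)          # each row sums to 1
    return attn @ V

rng = np.random.default_rng(0)
J, d = 17, 64  # e.g., 17 COCO-style body joints (an assumption)
tokens = rng.standard_normal((J, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
out = cross_joint_attention(tokens, Wq, Wk, Wv)
print(out.shape)  # (17, 64)
```

In a full model this attention layer would sit inside a transformer block (with multiple heads, residual connections, and normalization), letting the classifier weigh coordinated movements across body parts rather than any single joint in isolation.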

Omar Zamzam, Takfarinas Medani, Chinmay Chinara, Richard Leahy • 2026

Related benchmarks

Task: Seizure Detection
Dataset: Public Seizure Dataset, held-out subjects (test)
Result: 88.9 (Accuracy)
Rank: 8
