
Ultrasound Video Transformers for Cardiac Ejection Fraction Estimation

About

Cardiac ultrasound imaging is used to diagnose various heart diseases. Common analysis pipelines involve manual processing of the video frames by expert clinicians. This suffers from intra- and inter-observer variability. We propose a novel approach to ultrasound video analysis using a transformer architecture based on a Residual Auto-Encoder Network and a BERT model adapted for token classification. This enables videos of any length to be processed. We apply our model to the task of End-Systolic (ES) and End-Diastolic (ED) frame detection and the automated computation of the left ventricular ejection fraction. We achieve an average frame distance of 3.36 frames for the ES and 7.17 frames for the ED on videos of arbitrary length. Our end-to-end learnable approach can estimate the ejection fraction with a MAE of 5.95 and $R^2$ of 0.52 in 0.15s per video, showing that segmentation is not the only way to predict ejection fraction. Code and models are available at https://github.com/HReynaud/UVT.
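The reason ES/ED frame detection matters for this task is the standard clinical definition of the left ventricular ejection fraction: EF (%) = (EDV − ESV) / EDV × 100, where EDV and ESV are the left-ventricular volumes at the End-Diastolic and End-Systolic frames. Note that the paper's model regresses EF end-to-end rather than via volumes; the sketch below only illustrates the clinical formula, and the function name is ours, not from the codebase.

```python
def ejection_fraction(edv: float, esv: float) -> float:
    """Clinical ejection fraction: EF (%) = (EDV - ESV) / EDV * 100.

    edv: left-ventricular volume at the End-Diastolic frame (mL)
    esv: left-ventricular volume at the End-Systolic frame (mL)
    """
    if edv <= 0 or esv < 0 or esv > edv:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV and EDV > 0")
    return (edv - esv) / edv * 100.0

# A typical healthy heart ejects a bit over half its end-diastolic volume:
print(round(ejection_fraction(120.0, 50.0), 1))  # → 58.3
```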

Hadrien Reynaud, Athanasios Vlontzos, Benjamin Hou, Arian Beqiri, Paul Leeson, Bernhard Kainz • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Ejection Fraction Prediction | EchoNet-Dynamic (test) | R2 | 0.64 | 44 |
| LVEF estimation | EchoNet-Pediatric | MAE | 7.91 | 17 |
| LVEF estimation | CAMUS (test) | MAE | 9.42 | 7 |
| Ejection Fraction Estimation | EchoNet-Dynamic (test) | R2 | 0.52 | 5 |
| Cardiac Phase Detection | Adult Echocardiography (EchoNet-Dynamic), 1276 videos (test) | MAE Frames ED | 7.2 | 5 |
| ES/ED Frame Detection | EchoNet-Dynamic (test) | ES Average Frame Difference | 2.86 | 3 |
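The frame-detection rows above report an average frame distance. A minimal sketch of how such a score can be computed, assuming the metric is the mean absolute difference between predicted and ground-truth frame indices (the function name and this exact definition are our assumption, not taken from the paper's evaluation code):

```python
def average_frame_distance(predicted: list[int], ground_truth: list[int]) -> float:
    """Mean absolute difference between predicted and reference frame indices,
    e.g. detected vs. annotated End-Systolic frames, one pair per video."""
    if len(predicted) != len(ground_truth) or not predicted:
        raise ValueError("need equally sized, non-empty index lists")
    return sum(abs(p - g) for p, g in zip(predicted, ground_truth)) / len(predicted)

# Two videos: predictions off by 2 frames each.
print(average_frame_distance([10, 42], [12, 40]))  # → 2.0
```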
