
Learning Longitudinal Health Representations from EHR and Wearable Data

About

Foundation models trained on electronic health records (EHRs) show strong performance on many clinical prediction tasks but are limited by sparse, irregular documentation. Wearable devices provide dense, continuous physiological signals but lack semantic grounding. Existing methods usually model these data sources separately or combine them through late fusion. We propose a multimodal foundation model that jointly represents EHR and wearable data as a continuous-time latent process. The model uses modality-specific encoders and a shared temporal backbone pretrained with self-supervised and cross-modal objectives. This design produces representations that are temporally coherent and clinically grounded. Across forecasting, physiological-estimation, and risk-modeling tasks, the model outperforms strong EHR-only and wearable-only baselines, especially at long horizons and under missing data. These results show that joint EHR and wearable pretraining yields more faithful representations of longitudinal health.
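To make the idea concrete, here is a minimal toy sketch of the two-part design the abstract describes: modality-specific encoders map sparse EHR events and dense wearable samples into a shared latent space, and a continuous-time state carries information across irregular gaps. Everything below (the latent dimension, the exponential-decay update, the encoder functions) is a hypothetical illustration, not the authors' implementation.

```python
import math

LATENT_DIM = 4   # hypothetical latent size for illustration
DECAY = 0.1      # hypothetical decay rate per hour

def encode_ehr(code_id):
    # Toy modality-specific encoder: map a clinical code id into the latent space.
    return [math.sin(code_id * (i + 1)) for i in range(LATENT_DIM)]

def encode_wearable(heart_rate, steps):
    # Toy encoder for a wearable sample (heart rate in bpm, step count).
    return [heart_rate / 100.0, steps / 1000.0, 0.0, 0.0][:LATENT_DIM]

def update_state(state, obs, dt_hours):
    # Continuous-time update: the old state decays over the elapsed gap
    # dt_hours, and the new observation is blended in. Long gaps mean the
    # new observation dominates; short gaps preserve more history.
    w = math.exp(-DECAY * dt_hours)
    return [w * s + (1.0 - w) * o for s, o in zip(state, obs)]

# Interleaved timeline of both modalities: (hours since start, modality, payload).
events = [
    (0.0, "wearable", (72, 500)),
    (1.5, "ehr", (4280,)),          # e.g. an integer diagnosis-code id
    (26.0, "wearable", (88, 9200)),
]

state = [0.0] * LATENT_DIM
t_prev = 0.0
for t, modality, payload in events:
    obs = encode_ehr(*payload) if modality == "ehr" else encode_wearable(*payload)
    state = update_state(state, obs, t - t_prev)
    t_prev = t

print([round(s, 3) for s in state])
```

The single state that both encoders write into is what lets the downstream heads treat sparse clinical events and dense sensor streams as one coherent timeline; the decay term is a stand-in for whatever learned temporal backbone the paper actually uses.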

Yuanyun Zhang, Han Zhou, Li Feng, Yilin Hong, Shi Li • 2026

Related benchmarks

Task                                 Dataset              Metric        Result   Rank
Clinical event forecasting          UK Biobank (30d)      AUROC         0.789    7
Clinical event forecasting          UK Biobank (90d)      AUROC         0.782    7
Clinical event forecasting          UK Biobank (180d)     AUROC         0.771    7
Clinical event forecasting          UK Biobank (365d)     AUROC         0.756    7
Activity regularity estimation      UK Biobank            RMSE          0.114    6
Heart-rate-variability estimation   UK Biobank            RMSE          0.109    6
Longitudinal risk modeling          UK Biobank            AUC (365d)    0.773    6
Sleep efficiency prediction         UK Biobank            RMSE          0.096    6
