
Align and Adapt: Multimodal Multiview Human Activity Recognition under Arbitrary View Combinations

About

Multimodal multiview learning seeks to integrate information from diverse sources to enhance task performance. Existing approaches often struggle with flexible view configurations, including arbitrary view combinations, numbers of views, and heterogeneous modalities. Focusing on human activity recognition, we propose AliAd, a model that combines multiview contrastive learning with a mixture-of-experts module to support arbitrary view availability during both training and inference. Instead of attempting to reconstruct missing views, an adjusted center contrastive loss is used for self-supervised representation learning and view alignment, mitigating the impact of missing views on multiview fusion. This loss formulation allows for the integration of view weights to account for view quality. Additionally, it reduces computational complexity from $O(V^2)$ to $O(V)$, where $V$ is the number of views. To address residual discrepancies not captured by contrastive learning, we employ a mixture-of-experts module with a specialized load balancing strategy, tasked with adapting to arbitrary view combinations. We also highlight the geometric relationship among the components of our model and how they complement one another in the latent space. AliAd is validated on four datasets encompassing inertial and human pose modalities, with the number of views ranging from three to nine, demonstrating its performance and flexibility.
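To see where the $O(V^2) \to O(V)$ reduction comes from, here is a minimal sketch of a center-based contrastive loss: each view is contrasted against a weighted centroid of all view embeddings, so only $V$ view-to-center comparisons are needed instead of $V(V-1)/2$ view pairs. The function name, the weighting scheme, and the exact cross-entropy formulation are illustrative assumptions, not the paper's definition of its "adjusted" loss.

```python
import numpy as np

def center_contrastive_loss(view_embs, view_weights=None, temperature=0.1):
    """Hypothetical center contrastive loss sketch (not the paper's exact loss).

    view_embs: list of V arrays, each (batch, dim) -- one embedding per view.
    view_weights: optional (V,) array of per-view quality weights.
    Each view is contrasted with the weighted center: O(V) comparisons.
    """
    V = len(view_embs)
    # L2-normalize every view's embeddings, stacked to shape (V, B, D).
    embs = np.stack([e / np.linalg.norm(e, axis=-1, keepdims=True)
                     for e in view_embs])
    if view_weights is None:
        view_weights = np.ones(V)
    w = view_weights / view_weights.sum()

    # Weighted centroid across views, renormalized: shape (B, D).
    center = (w[:, None, None] * embs).sum(axis=0)
    center /= np.linalg.norm(center, axis=-1, keepdims=True)

    B = embs.shape[1]
    loss = 0.0
    for v in range(V):  # one pass per view -> O(V), not O(V^2)
        logits = embs[v] @ center.T / temperature  # (B, B) similarities
        # Softmax cross-entropy; the positive pair is the same sample index,
        # i.e. the diagonal of the similarity matrix.
        logits -= logits.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        loss += w[v] * (-log_probs[np.arange(B), np.arange(B)].mean())
    return loss
```

Missing views can be handled in this formulation by simply dropping the absent views (and their weights) from the list, since the center is defined over whichever views are present.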

Duc-Anh Nguyen, Nhien-An Le-Khac • 2026

Related benchmarks

Task                          Dataset     Metric    Result  Rank
Human Activity Recognition    REALDISP    F1 Score  96.74   94
Human Activity Recognition    DailySport  F1 Score  93.64   78
Human Activity Recognition    UP-Fall     F1 Score  92.17   78
Human Activity Recognition    CMDFall     F1 Score  81.75   36
