
EgoLM: Multi-Modal Language Model of Egocentric Motions

About

With the increasing prevalence of wearable devices, learning egocentric motions has become essential to developing contextual AI. In this work, we present EgoLM, a versatile framework that tracks and understands egocentric motions from multi-modal inputs, e.g., egocentric videos and motion sensors. EgoLM exploits rich contexts to disambiguate egomotion tracking and understanding, which are ill-posed under single-modality conditions. To facilitate this versatile, multi-modal framework, our key insight is to model the joint distribution of egocentric motions and natural language using large language models (LLMs). Multi-modal sensor inputs are encoded and projected into the joint latent space of the language model, and are used to prompt motion generation for egomotion tracking or text generation for motion understanding. Extensive experiments on a large-scale multi-modal human motion dataset validate the effectiveness of EgoLM as a generalist model for universal egocentric learning.
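The core idea above, encoding each sensor modality and projecting it into the language model's token-embedding space so it can serve as a prompt, can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: all dimensions, encoder outputs, and projection weights are placeholders, and the learned encoders/LLM are stubbed with random arrays.

```python
import numpy as np

# Hypothetical sketch of multi-modal prompting: per-modality features are
# linearly projected into a shared LLM embedding space and prepended to
# text tokens. Names and dimensions are illustrative assumptions.

rng = np.random.default_rng(0)

D_VIDEO, D_IMU, D_LLM = 512, 64, 768  # modality feature dims, LLM embed dim

# Learned linear projections (random placeholders here)
W_video = rng.normal(size=(D_VIDEO, D_LLM)) * 0.02
W_imu = rng.normal(size=(D_IMU, D_LLM)) * 0.02

def project(features, W):
    """Map per-frame modality features into the LLM latent space."""
    return features @ W

video_feats = rng.normal(size=(10, D_VIDEO))   # e.g., 10 video frames
imu_feats = rng.normal(size=(30, D_IMU))       # e.g., 30 motion-sensor samples

video_tokens = project(video_feats, W_video)
imu_tokens = project(imu_feats, W_imu)

# Text tokens would come from the LLM's own embedding table (placeholder)
text_tokens = rng.normal(size=(5, D_LLM))

# The multi-modal prompt: sensor tokens followed by text tokens, all in
# the same D_LLM-dimensional joint space, ready to condition generation.
prompt = np.concatenate([video_tokens, imu_tokens, text_tokens], axis=0)
print(prompt.shape)
```

The same projected prompt could then condition either motion-token generation (tracking) or text generation (understanding), which is what makes the single-model, multi-task setup possible.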

Fangzhou Hong, Vladimir Guzov, Hyo Jin Kim, Yuting Ye, Richard Newcombe, Ziwei Liu, Lingni Ma • 2024

Related benchmarks

Task                  | Dataset            | Metric     | Result | Rank
Motion Tracking       | Nymeria            | Full Error | 73.38  | 8
Motion Understanding  | Nymeria 1.0 (test) | BERT Score | 19.97  | 8

Other info

Code
