
On-device Large Multi-modal Agent for Human Activity Recognition

About

Human Activity Recognition (HAR) has been an active area of research, with applications ranging from healthcare to smart environments. Recent advancements in Large Language Models (LLMs) have opened new possibilities for leveraging their capabilities in HAR, enabling not just activity classification but also interpretability and human-like interaction. In this paper, we present a Large Multi-Modal Agent designed for HAR, which integrates the power of LLMs to enhance both performance and user engagement. The proposed framework not only delivers activity classification but also bridges the gap between technical outputs and user-friendly insights through its reasoning and question-answering capabilities. We conduct extensive evaluations using widely adopted HAR datasets, including HHAR, Shoaib, and MotionSense, to assess the performance of our framework. The results demonstrate that our model achieves high classification accuracy comparable to state-of-the-art methods while significantly improving interpretability through its reasoning and Q&A capabilities.

Md Shakhrul Iman Siam, Ishtiaque Ahmed Showmik, Guanqun Song, Ting Zhu • 2025
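
The page itself includes no code for the agent. The sketch below only illustrates, under assumptions, the general pattern the abstract describes: a window of inertial sensor data is summarized into a natural-language prompt, and an on-device LLM is asked for an activity label plus a short explanation. All names here (summarize_window, build_prompt, query_llm, the activity list, and the prompt wording) are hypothetical, and the model call is a stub, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): summarize a window of
# inertial sensor data into a text prompt and ask a stubbed on-device LLM
# for an activity label plus a short rationale.
import numpy as np

ACTIVITIES = ["walking", "sitting", "standing", "jogging", "upstairs", "downstairs"]

def summarize_window(acc: np.ndarray, gyro: np.ndarray) -> str:
    """Reduce (T, 3) accelerometer/gyroscope windows to simple per-axis statistics."""
    def stats(name: str, x: np.ndarray) -> str:
        return (f"{name} mean={x.mean(axis=0).round(2).tolist()}, "
                f"std={x.std(axis=0).round(2).tolist()}")
    return stats("accel", acc) + "; " + stats("gyro", gyro)

def build_prompt(acc: np.ndarray, gyro: np.ndarray) -> str:
    """Compose the question the agent would put to the language model."""
    return (
        "You are an activity-recognition assistant.\n"
        f"Sensor summary: {summarize_window(acc, gyro)}\n"
        f"Pick one activity from {ACTIVITIES} and briefly explain your choice."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for the on-device LLM call; replace with a real model."""
    return "walking - periodic acceleration with moderate variance on the vertical axis."

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acc = rng.normal(0.0, 1.0, size=(128, 3))    # fake 128-sample accelerometer window
    gyro = rng.normal(0.0, 0.5, size=(128, 3))   # fake gyroscope window
    print(build_prompt(acc, gyro))
    print(query_llm(build_prompt(acc, gyro)))
```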

Related benchmarks

Task                      Dataset                F1 Score   Rank
Activity Classification   HHAR (unseen)          75         5
Activity Classification   MotionSense (unseen)   65         5
Activity Classification   Shoaib (unseen)        71         5
Activity Classification   HHAR (seen)            83         4
Activity Classification   MotionSense (seen)     77         4
Activity Classification   Shoaib (seen)          79         4
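
All results above are F1 scores on a 0-100 scale. For reference, the snippet below shows a minimal way to compute a multi-class F1 score with scikit-learn; the macro averaging and the toy labels are assumptions, since the page does not say how the reported scores are averaged.

```python
# Toy F1-score computation for multi-class activity labels (illustrative only;
# macro averaging is an assumption, the page does not specify the scheme).
from sklearn.metrics import f1_score

y_true = ["walk", "walk", "sit", "stand", "sit", "walk"]   # ground-truth activities
y_pred = ["walk", "sit",  "sit", "stand", "sit", "walk"]   # model predictions

# Per-class F1 is averaged across classes, then scaled to the 0-100 range
# used in the table above.
print(round(100 * f1_score(y_true, y_pred, average="macro"), 1))  # prints 86.7
```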
