
Diving Deep into the Motion Representation of Video-Text Models

About

Videos are more informative than images because they capture the dynamics of a scene, and representing that motion lets us model dynamic activities. In this work, we introduce GPT-4-generated descriptions that capture fine-grained motion in activities and apply them to three action datasets. We evaluate several video-text models on the task of retrieving these motion descriptions and find that they fall far behind human expert performance on two action datasets, raising the question of whether video-text models understand motion in videos. To address this, we introduce a method for improving motion understanding in video-text models by utilizing motion descriptions, which proves effective on two action datasets for the motion description retrieval task. The results draw attention to the need for quality captions with fine-grained motion information in existing datasets and demonstrate the effectiveness of the proposed pipeline for understanding fine-grained motion during video-text retrieval.
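The motion description retrieval evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding model is abstracted away, and the names (`video_embs`, `text_embs`, `retrieval_accuracy`) are hypothetical. It assumes each video's ground-truth description shares its index in the candidate set, so correct retrievals lie on the diagonal of the similarity matrix.

```python
import numpy as np

def retrieval_accuracy(video_embs, text_embs):
    """Top-1 retrieval accuracy: for each video, is the candidate motion
    description with the highest cosine similarity the matching one?"""
    # L2-normalize so the dot product equals cosine similarity.
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = v @ t.T                          # (n_videos, n_descriptions)
    predicted = sims.argmax(axis=1)         # best-matching description per video
    correct = predicted == np.arange(len(v))  # ground truth is the diagonal
    return correct.mean()

# Toy example: 3 videos, 3 candidate motion descriptions whose embeddings
# are near-perfect matches, so top-1 accuracy should be 1.0.
rng = np.random.default_rng(0)
video_embs = rng.normal(size=(3, 8))
text_embs = video_embs + 0.01 * rng.normal(size=(3, 8))
print(retrieval_accuracy(video_embs, text_embs))
```

In practice the embeddings would come from the video and text encoders of the model under evaluation; the metric itself is independent of the encoder.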

Chinmaya Devaraj, Cornelia Fermuller, Yiannis Aloimonos • 2024

Related benchmarks

Task                          Dataset         Result           Rank
Video Retrieval               UCF101 (test)   Top-1 Acc 58.46  55
Motion Description Retrieval  HMDB-51 (test)  Accuracy 0.3924  12
Motion Description Retrieval  HMDB-51         Accuracy 0.3924  12

Other info

Code
