Video + CLIP Baseline for Ego4D Long-term Action Anticipation
About
In this report, we introduce our adaptation of image-text models for long-term action anticipation. Our Video + CLIP framework makes use of a large-scale pre-trained paired image-text model (CLIP) and a video encoder (a SlowFast network). The CLIP embedding provides a fine-grained understanding of the objects relevant to an action, whereas the SlowFast network models the temporal information within a video clip of a few frames. We show that the features obtained from the two encoders are complementary to each other, and thus outperform the Ego4D baseline on the task of long-term action anticipation. Our code is available at github.com/srijandas07/clip_baseline_LTA_Ego4d.
Srijan Das, Michael S. Ryoo · 2022
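
To make the fusion concrete, below is a minimal PyTorch sketch of the idea described above: concatenating a frame-level CLIP image embedding with a clip-level SlowFast feature and decoding verb/noun predictions for Z future actions. The feature dimensions (512 for CLIP ViT-B/32, 2304 for SlowFast), the MLP head, and the class counts are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Minimal sketch (not the authors' code) of fusing CLIP and SlowFast
# features for long-term action anticipation. All dimensions, class
# counts, and the concatenation-based fusion are assumptions.
import torch
import torch.nn as nn


class VideoClipFusion(nn.Module):
    """Fuses a CLIP image embedding with a SlowFast clip feature and
    predicts verb/noun logits for each of Z future actions."""

    def __init__(self, clip_dim=512, slowfast_dim=2304,
                 num_verbs=115, num_nouns=478, z_future=20):
        super().__init__()
        self.z_future = z_future
        self.num_verbs = num_verbs
        self.num_nouns = num_nouns
        self.proj = nn.Sequential(
            nn.Linear(clip_dim + slowfast_dim, 1024),
            nn.ReLU(),
        )
        # One linear head per modality, reshaped into Z future steps below.
        self.verb_head = nn.Linear(1024, z_future * num_verbs)
        self.noun_head = nn.Linear(1024, z_future * num_nouns)

    def forward(self, clip_feat, slowfast_feat):
        # clip_feat:     (B, clip_dim)      e.g. a CLIP ViT-B/32 embedding
        # slowfast_feat: (B, slowfast_dim)  e.g. a pooled SlowFast feature
        fused = torch.cat([clip_feat, slowfast_feat], dim=-1)
        x = self.proj(fused)
        verb_logits = self.verb_head(x).view(-1, self.z_future, self.num_verbs)
        noun_logits = self.noun_head(x).view(-1, self.z_future, self.num_nouns)
        return verb_logits, noun_logits


# Toy usage: random tensors stand in for the two encoders' outputs.
model = VideoClipFusion()
clip_feat = torch.randn(2, 512)       # placeholder for CLIP features
slowfast_feat = torch.randn(2, 2304)  # placeholder for SlowFast features
verbs, nouns = model(clip_feat, slowfast_feat)
print(verbs.shape, nouns.shape)       # (2, 20, 115) and (2, 20, 478)
```

Z=20 here mirrors the ED@Z=20 metric reported in the benchmarks below, which measures the edit distance over the next 20 predicted actions.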
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-term Action Anticipation | Ego4D v1 (test) | ED@Z=20 (Verb) | 0.715 | 31 |
| Long Term Anticipation | Ego4D LTA v1 (test) | ED@Z=20 (Verb) | 0.739 | 18 |
| Long-Term Anticipation (LTA) | Ego4D (test) | Verb Anticipation Accuracy | 74 | 9 |
| Long Term Anticipation | Ego4D (test) | Verb ED | 0.7389 | 6 |