Just a Glimpse: Rethinking Temporal Information for Video Continual Learning
About
Class-incremental learning is one of the most important settings for the study of continual learning, as it closely resembles real-world application scenarios. With constrained memory sizes, catastrophic forgetting arises as the number of classes/tasks increases. Studying continual learning in the video domain poses even more challenges, as videos contain a large number of frames, which places a higher burden on the replay memory. The current common practice is to sub-sample frames from the video stream and store them in the replay memory. In this paper, we propose SMILE, a novel replay mechanism for effective video continual learning based on individual (single) frames. Through extensive experimentation, we show that under extreme memory constraints, video diversity plays a more significant role than temporal information. Therefore, our method focuses on learning from a small number of frames that represent a large number of unique videos. On three representative video datasets, Kinetics, UCF101, and ActivityNet, the proposed method achieves state-of-the-art performance, outperforming the previous state-of-the-art by up to 21.49%.
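The core idea is that, under a tight frame budget, covering many distinct videos with one frame each is more valuable than covering a few videos with many frames. The sketch below illustrates that single-frame replay idea with a minimal buffer; the class name `SingleFrameReplayMemory`, the middle-frame selection heuristic, and the class-balanced eviction rule are illustrative assumptions, not the paper's exact procedure.

```python
import random
from collections import defaultdict

class SingleFrameReplayMemory:
    """Illustrative replay buffer that stores ONE frame per video.

    Under a fixed budget of `capacity` frames, keeping a single frame per
    video maximizes the number of distinct videos (and hence the visual
    diversity) held in memory, which is the intuition behind SMILE.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity   # total number of frames we may keep
        self.memory = {}           # video_id -> (frame, label)

    def add_video(self, video_id, frames, label, frame_selector=None):
        """Keep one representative frame from `frames` for this video.

        `frame_selector` returns the index of the frame to store; by default
        we take the middle frame as a cheap, content-agnostic heuristic.
        """
        idx = len(frames) // 2 if frame_selector is None else frame_selector(frames)
        self.memory[video_id] = (frames[idx], label)
        self._enforce_budget()

    def _enforce_budget(self):
        """Evict frames so that classes stay roughly balanced in memory."""
        while len(self.memory) > self.capacity:
            per_class = defaultdict(list)
            for vid, (_, label) in self.memory.items():
                per_class[label].append(vid)
            # Drop one video from the most-represented class.
            largest = max(per_class.values(), key=len)
            self.memory.pop(random.choice(largest))

    def sample(self, batch_size: int):
        """Draw a mini-batch of (frame, label) pairs for rehearsal."""
        items = random.sample(list(self.memory.values()),
                              k=min(batch_size, len(self.memory)))
        frames, labels = zip(*items)
        return list(frames), list(labels)
```

During incremental training, such a buffer would be populated with one frame per seen video and mixed into each mini-batch of the current task; the class-balanced eviction shown here is only one of several plausible budget-enforcement strategies.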
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Temporal action segmentation | 50Salads | Accuracy | 71.6 | 106 |
| Temporal action segmentation | GTEA | F1 Score @ 10% Threshold | 82.1 | 99 |
| Temporal action segmentation | Breakfast | Accuracy | 52.2 | 96 |
| Action Segmentation | Breakfast 10 tasks (test) | Acc | 18.4 | 16 |
| Action Segmentation | YouTube Instructional 5 tasks (test) | Accuracy | 0.308 | 8 |
| Action Segmentation | Breakfast blurry task boundary | Acc | 25 | 8 |
| Action Segmentation | Breakfast 5 tasks (test) | Accuracy | 32.5 | 8 |