
Visual Representation Learning with Stochastic Frame Prediction

About

Self-supervised learning of image representations by predicting future frames is a promising direction but remains challenging. This is because frame prediction is inherently under-determined: multiple potential futures can arise from a single current frame. To tackle this challenge, in this paper, we revisit the idea of stochastic video generation, which learns to capture uncertainty in frame prediction, and explore its effectiveness for representation learning. Specifically, we design a framework that trains a stochastic frame prediction model to learn temporal information between frames. Moreover, to learn dense information within each frame, we introduce an auxiliary masked image modeling objective along with a shared decoder architecture. We find that this architecture allows for combining both objectives in a synergistic and compute-efficient manner. We demonstrate the effectiveness of our framework on a variety of tasks from video label propagation and vision-based robot learning domains, such as video segmentation, pose tracking, vision-based robotic locomotion, and manipulation tasks. Code is available on the project webpage: https://sites.google.com/view/2024rsp.
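The abstract describes two objectives trained together: a stochastic frame prediction loss (reconstruct the next frame while regularizing a stochastic latent that models multiple possible futures) and an auxiliary masked image modeling loss. The sketch below illustrates how such a combined loss might look; all function names, toy tensors, and loss weights here are hypothetical and not taken from the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def frame_prediction_loss(pred_next, true_next, mu, logvar, beta=1e-4):
    """Stochastic frame prediction: reconstruct the next frame while
    regularizing the stochastic latent that captures future uncertainty."""
    recon = np.mean((pred_next - true_next) ** 2)
    return recon + beta * kl_to_standard_normal(mu, logvar)

def masked_image_modeling_loss(pred_patches, true_patches, mask):
    """Auxiliary MIM objective: MSE on the masked patches only (mask == 1)."""
    per_patch = np.mean((pred_patches - true_patches) ** 2, axis=-1)
    return np.sum(per_patch * mask) / max(mask.sum(), 1)

# Toy stand-ins for decoder outputs and targets (illustrative shapes only).
true_next = rng.normal(size=(8, 8))                # "next frame"
pred_next = true_next + 0.1 * rng.normal(size=(8, 8))
mu, logvar = rng.normal(size=4), rng.normal(size=4)

true_patches = rng.normal(size=(16, 32))           # 16 patches, 32-dim each
pred_patches = true_patches + 0.1 * rng.normal(size=(16, 32))
mask = (rng.random(16) < 0.75).astype(float)       # high mask ratio, MAE-style

lam = 0.5  # illustrative weight on the auxiliary objective
total = frame_prediction_loss(pred_next, true_next, mu, logvar) \
      + lam * masked_image_modeling_loss(pred_patches, true_patches, mask)
print(total)
```

In the paper's framework, both objectives share a single decoder, which is what makes the combination compute-efficient; the additive weighting above is only one plausible way to join them.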

Huiwon Jang, Dongyoung Kim, Junsu Kim, Jinwoo Shin, Pieter Abbeel, Younggyo Seo • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Object Segmentation | DAVIS 2017 (val) | J mean | 57.4 | 1130 |
| Video Object Segmentation | DAVIS 2017 | Jaccard Index (J) | 57.8 | 42 |
| Video Instance Parsing | VIP (val) | mIoU | 33.8 | 20 |
| Human Pose Estimation | JHMDB (val) | PCK@0.1 | 44.6 | 19 |
| wet-AMD conversion prediction | HARBOR 12-month window (test) | AUROC | 0.54 | 19 |
| wet-AMD conversion prediction | HARBOR 6-month window (test) | AUROC | 0.58 | 19 |
| Human Pose Estimation | JHMDB | PCK@0.1 | 46 | 12 |
| AD conversion prediction | ADNI (1-year window) | AUROC | 0.785 | 8 |
| AD conversion prediction | ADNI (3-year window) | AUROC | 73.8 | 8 |
| Video Part Segmentation | VIP | mIoU | 0.34 | 6 |
(10 of 11 benchmark rows shown.)
