
OmniVec: Learning robust representations with cross modal sharing

About

The majority of research in learning-based methods has focused on designing and training networks for specific tasks. However, many learning-based tasks, across modalities, share commonalities and could potentially be tackled in a joint framework. We present an approach in this direction: learning multiple tasks, in multiple modalities, with a unified architecture. The proposed network is composed of task-specific encoders, a common trunk in the middle, followed by task-specific prediction heads. We first pre-train it with self-supervised masked training, followed by sequential training on the different tasks. We train the network on all major modalities, e.g. visual, audio, text and 3D, and report results on 22 diverse and challenging public benchmarks. We demonstrate empirically that using a joint network to train across modalities leads to meaningful information sharing, which allows us to achieve state-of-the-art results on most of the benchmarks. We also show generalization of the trained network to cross-modal tasks as well as unseen datasets and tasks.
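The abstract describes a routing pattern: each input passes through a modality/task-specific encoder, then a trunk shared across all tasks and modalities, then a task-specific prediction head. As a rough illustration of that data flow only, here is a minimal Python sketch. All class names and the toy callables are hypothetical stand-ins; the actual OmniVec model uses transformer networks for each component, not these placeholder functions.

```python
# Toy sketch of the encoder -> shared trunk -> head routing described
# in the abstract. Hypothetical names; the real model uses transformer
# modules for encoders, trunk, and heads.

class OmniVecSketch:
    def __init__(self):
        # One encoder per modality (maps raw input into a shared space).
        self.encoders = {
            "image": lambda x: [v * 0.5 for v in x],
            "audio": lambda x: [v + 1.0 for v in x],
            "text":  lambda x: [float(len(str(v))) for v in x],
        }
        # Common trunk shared by every task and modality.
        self.trunk = lambda h: [sum(h) / len(h)] * len(h)
        # One prediction head per task.
        self.heads = {
            "classify": lambda h: int(h[0] > 0),
            "retrieve": lambda h: sorted(h),
        }

    def forward(self, x, modality, task):
        h = self.encoders[modality](x)   # modality-specific encoding
        h = self.trunk(h)                # cross-modal shared processing
        return self.heads[task](h)       # task-specific prediction

model = OmniVecSketch()
print(model.forward([1.0, 2.0, 3.0], modality="image", task="classify"))  # -> 1
```

Because the trunk is shared, gradients from every task update the same middle parameters, which is where the cross-modal information sharing claimed in the abstract would occur; the paper trains the tasks sequentially after masked pre-training.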

Siddharth Srivastava, Gaurav Sharma • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Action Recognition | Kinetics-400 | Top-1 Acc | 91.1 | 413 |
| Audio Classification | ESC-50 | Accuracy | 98.4 | 325 |
| Image Classification | iNaturalist 2018 | Top-1 Accuracy | 93.8 | 287 |
| Action Recognition | HMDB51 | 3-Fold Accuracy | 91.6 | 191 |
| Semantic Segmentation | NYUD v2 (test) | mIoU | 60.8 | 187 |
| Video Action Classification | Something-Something v2 | Top-1 Acc | 85.4 | 139 |
| Text-to-Video Retrieval | YouCook2 | Recall@10 | 70.8 | 117 |
| 3D Point Cloud Classification | ScanObjectNN | Accuracy | 96.1 | 76 |
| Semantic Segmentation | NYU V2 | mIoU | 60.8 | 74 |
| Video Recognition | Kinetics-400 | Top-1 Acc | 91.1 | 54 |
Showing 10 of 19 benchmark results.
