
OmniVec2 -- A Novel Transformer based Network for Large Scale Multimodal and Multitask Learning

About

We present a novel multimodal multitask network and an associated training algorithm. The method can ingest data from approximately 12 different modalities, namely image, video, audio, text, depth, point cloud, time series, tabular, graph, X-ray, infrared, IMU, and hyperspectral. The proposed approach uses modality-specialized tokenizers, a shared transformer architecture, and cross-attention mechanisms to project data from the different modalities into a unified embedding space. It addresses multimodal and multitask scenarios by incorporating modality-specific task heads for the different tasks within each modality. We propose a novel pretraining strategy with iterative modality switching to initialize the network, and a training algorithm that trades off fully joint training over all modalities against training on pairs of modalities at a time. We provide a comprehensive evaluation across 25 datasets from 12 modalities and show state-of-the-art performance, demonstrating the effectiveness of the proposed architecture, pretraining strategy, and adapted multitask training.
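The pipeline described above (modality-specific tokenizers projecting inputs into a unified embedding space, a shared backbone, and modality-specific task heads) can be sketched as follows. This is an illustrative, dependency-free sketch only: the function names, the toy 4-dimensional embedding, and the stand-in pooling "backbone" are assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch: modality-specific tokenizers feed a shared backbone,
# whose output goes to a task-specific head. All components here are toy
# stand-ins for the real transformer-based modules.
EMBED_DIM = 4

def image_tokenizer(pixels):
    # Toy stand-in: average groups of 4 pixels into one token each.
    return [[sum(pixels[i:i + 4]) / 4.0] * EMBED_DIM
            for i in range(0, len(pixels), 4)]

def text_tokenizer(words):
    # Toy stand-in: hash each word into a fixed-size embedding.
    return [[(hash(w) % 100) / 100.0] * EMBED_DIM for w in words]

TOKENIZERS = {"image": image_tokenizer, "text": text_tokenizer}

def shared_backbone(tokens):
    # Stand-in for the shared transformer: mean-pool tokens per dimension.
    n = len(tokens)
    return [sum(tok[d] for tok in tokens) / n for d in range(EMBED_DIM)]

def classification_head(features):
    # Stand-in task head: a single score from the pooled features.
    return sum(features)

def forward(modality, raw_input):
    tokens = TOKENIZERS[modality](raw_input)   # modality-specific
    features = shared_backbone(tokens)         # shared across modalities
    return classification_head(features)       # task/modality-specific

score = forward("image", [0.1] * 8)
```

The point of the structure is that only the tokenizers and heads know about a given modality; everything between them is shared, which is what lets one backbone serve many modalities and tasks.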

Siddharth Srivastava, Gaurav Sharma • 2025
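The training recipe in the abstract trades off fully joint training over all modalities against training on pairs of modalities at a time. A minimal sketch of such a pairwise schedule, assuming a simple round-robin over all unordered modality pairs (the function name, step count, and modality list are illustrative, not from the paper):

```python
from itertools import combinations, cycle

# Hypothetical pairwise training schedule: instead of one joint pass over all
# modalities, each training step sees exactly two modalities, cycling through
# every unordered pair in turn.
MODALITIES = ["image", "video", "audio", "text", "depth", "point_cloud"]

def make_pair_schedule(modalities, num_steps):
    """Return one (modality_a, modality_b) pair per training step,
    cycling round-robin over all unordered modality pairs."""
    pairs = cycle(combinations(modalities, 2))
    return [next(pairs) for _ in range(num_steps)]

schedule = make_pair_schedule(MODALITIES, 4)
# e.g. early steps pair 'image' with each other modality in turn
```

A schedule like this keeps each step's batch construction simple (two modalities at a time) while still exposing the shared backbone to every modality combination over the course of training.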

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | Kinetics-400 | Top-1 Acc | 93.6 | 413 |
| Audio Classification | ESC-50 | Accuracy | 99.1 | 325 |
| Text-to-Video Retrieval | MSR-VTT | -- | -- | 313 |
| Image Classification | iNaturalist 2018 | Top-1 Accuracy | 94.6 | 287 |
| Action Recognition | HMDB51 | 3-Fold Accuracy | 92.1 | 191 |
| Video Action Classification | Something-Something v2 | Top-1 Acc | 86.1 | 139 |
| Text-to-Video Retrieval | YouCook2 | Recall@10 | 69.9 | 117 |
| Natural Language Understanding | GLUE (test dev) | MRPC Accuracy | 85.8 | 81 |
| 3D Point Cloud Classification | ScanObjectNN | Accuracy | 97.2 | 76 |
| Semantic Segmentation | NYU V2 | mIoU | 63.6 | 74 |
Showing 10 of 23 rows
