
4M: Massively Multimodal Masked Modeling

About

Current machine learning models for vision are often highly specialized and limited to a single modality and task. In contrast, recent large language models exhibit a wide range of capabilities, hinting at a possibility for similarly versatile models in computer vision. In this paper, we take a step in this direction and propose a multimodal training scheme called 4M. It consists of training a single unified Transformer encoder-decoder using a masked modeling objective across a wide range of input/output modalities - including text, images, geometric, and semantic modalities, as well as neural network feature maps. 4M achieves scalability by unifying the representation space of all modalities through mapping them into discrete tokens and performing multimodal masked modeling on a small randomized subset of tokens. 4M leads to models that exhibit several key capabilities: (1) they can perform a diverse set of vision tasks out of the box, (2) they excel when fine-tuned for unseen downstream tasks or new input modalities, and (3) they can function as a generative model that can be conditioned on arbitrary modalities, enabling a wide variety of expressive multimodal editing capabilities with remarkable flexibility. Through experimental analyses, we demonstrate the potential of 4M for training versatile and scalable foundation models for vision tasks, setting the stage for further exploration in multimodal learning for vision and other domains.
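The core idea above — unify all modalities as discrete tokens, then train on a small randomized subset of them — can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the modality names, token counts, and budget parameters below are hypothetical, and the actual 4M pipeline uses learned tokenizers (e.g. VQ-VAEs for images) and a Transformer encoder-decoder on top of this sampling step.

```python
import random

# Hypothetical sketch of 4M-style multimodal masked-modeling sampling.
# Each modality is assumed to already be tokenized into discrete token ids;
# names and budgets here are illustrative, not the paper's exact values.

def sample_input_target(tokenized_modalities, input_budget=128,
                        target_budget=128, rng=None):
    """Pool tokens from all modalities, then draw two disjoint random
    subsets: one as encoder input, one as decoder prediction targets."""
    rng = rng or random.Random(0)
    pool = [(name, idx, tok)
            for name, toks in tokenized_modalities.items()
            for idx, tok in enumerate(toks)]
    rng.shuffle(pool)
    inputs = pool[:input_budget]
    targets = pool[input_budget:input_budget + target_budget]
    return inputs, targets

# Toy example: three "modalities" with placeholder discrete tokens.
modalities = {
    "rgb":     list(range(196)),  # e.g. VQ tokens of an image
    "caption": list(range(16)),   # text tokens
    "depth":   list(range(196)),  # tokens of a depth map
}
inp, tgt = sample_input_target(modalities, input_budget=8, target_budget=8)
```

Because inputs and targets are small fixed-size subsets regardless of how many modalities are present, the per-step compute stays constant as modalities are added — this is the scalability property the abstract refers to.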

David Mizrahi, Roman Bachmann, Oğuzhan Fatih Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, Amir Zamir • 2023

Related benchmarks

| Task                  | Dataset              | Metric                            | Result | Rank |
|-----------------------|----------------------|-----------------------------------|--------|------|
| Object Detection      | COCO 2017 (val)      | -                                 | -      | 2454 |
| Instance Segmentation | COCO 2017 (val)      | -                                 | -      | 1144 |
| Semantic Segmentation | ADE20K               | mIoU                              | 53.4   | 936  |
| Depth Estimation      | NYU v2 (test)        | Threshold Accuracy (delta < 1.25) | 94.4   | 423  |
| Image Classification  | ImageNet-1k (test)   | Top-1 Accuracy                    | 0.866  | 191  |
| Semantic Segmentation | COCO (val)           | mIoU                              | 46.5   | 135  |
| Depth Estimation      | ScanNet              | AbsRel                            | 0.065  | 94   |
| Depth Estimation      | KITTI                | AbsRel                            | 0.105  | 92   |
| Semantic Segmentation | ADE20K v1 (val)      | mIoU                              | 53.4   | 76   |
| Depth Estimation      | DIODE                | Delta-1 Accuracy                  | 73.4   | 62   |

Showing 10 of 17 rows.

Other info

Code
