
VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset

About

In this paper, we propose a Vision-Audio-Language Omni-peRception pretraining model (VALOR) for multimodal understanding and generation. Unlike widely studied vision-language pretraining models, VALOR jointly models relationships among vision, audio, and language in an end-to-end manner. It contains three separate encoders for single-modality representations and a decoder for multimodal conditional text generation. We design two pretext tasks to pretrain VALOR: Multimodal Grouping Alignment (MGA) and Multimodal Grouping Captioning (MGC). MGA projects vision, language, and audio into the same common space, building vision-language, audio-language, and audiovisual-language alignment simultaneously. MGC learns to generate text tokens conditioned on vision, audio, or both. To promote vision-audio-language pretraining research, we construct a large-scale, high-quality tri-modality dataset named VALOR-1M, which contains one million audible videos with human-annotated audiovisual captions. Extensive experiments show that VALOR learns strong multimodal correlations and generalizes to various downstream tasks (e.g., retrieval, captioning, and question answering) with different input modalities (e.g., vision-language, audio-language, and audiovisual-language). VALOR achieves new state-of-the-art performance on a series of public cross-modality benchmarks. Code and data are available at the project page: https://casia-iva-group.github.io/projects/VALOR.
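To make the MGA idea concrete, here is a minimal, illustrative sketch of a grouping-alignment objective: text embeddings are contrasted against vision, audio, and a fused audiovisual group in one shared space via a symmetric InfoNCE loss. This is not the paper's implementation; the mean-style fusion of vision and audio, the temperature value, and all function names are assumptions made for illustration.

```python
import numpy as np

def _l2_normalize(x):
    # Project embeddings onto the unit sphere so dot products are cosines.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def _logsumexp(x, axis=1):
    # Numerically stable log-sum-exp along one axis.
    m = x.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def _info_nce(logits):
    # Symmetric InfoNCE: sample i of one modality should match sample i
    # of the other; all other pairs in the batch act as negatives.
    diag = np.diag(logits)
    loss_ab = -np.mean(diag - _logsumexp(logits, axis=1))
    loss_ba = -np.mean(diag - _logsumexp(logits.T, axis=1))
    return (loss_ab + loss_ba) / 2

def grouping_alignment_loss(text, vision, audio, temperature=0.07):
    """Toy MGA-style objective (assumed form, not VALOR's actual code):
    align text with vision, audio, and a fused audiovisual group."""
    t = _l2_normalize(text)
    groups = [
        _l2_normalize(vision),
        _l2_normalize(audio),
        _l2_normalize(vision + audio),  # simple additive fusion (assumption)
    ]
    losses = [_info_nce(t @ g.T / temperature) for g in groups]
    return float(np.mean(losses))
```

Averaging the three group losses is what lets a single text encoder align with vision-only, audio-only, and audiovisual inputs at once, which mirrors how the model is later used with different input modalities downstream.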

Jing Liu, Sihan Chen, Xingjian He, Longteng Guo, Xinxin Zhu, Weining Wang, Jinhui Tang • 2023

Related benchmarks

| Task                      | Dataset           | Metric   | Result | Rank |
|---------------------------|-------------------|----------|--------|------|
| Video Question Answering  | MSRVTT-QA         | Accuracy | 49.2   | 481  |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 78.62  | 466  |
| Text-to-Video Retrieval   | DiDeMo (test)     | R@1      | 61.5   | 376  |
| Text-to-Video Retrieval   | DiDeMo            | R@1      | 0.576  | 360  |
| Video Question Answering  | MSVD-QA           | Accuracy | 60     | 340  |
| Video Question Answering  | ActivityNet-QA    | Accuracy | 48.6   | 319  |
| Text-to-Video Retrieval   | MSR-VTT           | R@1      | 54.4   | 313  |
| Text-to-Video Retrieval   | LSMDC (test)      | R@1      | 34.2   | 225  |
| Text-to-Video Retrieval   | ActivityNet       | R@1      | 0.634  | 197  |
| Video-to-Text Retrieval   | MSR-VTT           | R@1      | 57.6   | 157  |

Showing 10 of 59 rows.
