
V$^2$Dial: Unification of Video and Visual Dialog via Multimodal Experts

About

We present V$^2$Dial - a novel expert-based model specifically geared towards simultaneously handling image and video input data for multimodal conversational tasks. Current multimodal models primarily focus on simpler tasks (e.g., VQA, VideoQA, video-text retrieval) and often neglect the more challenging conversational counterparts, such as video and visual/image dialog. Moreover, work on these two conversational tasks has evolved separately despite their apparent similarities, limiting their applicability. To this end, we propose to unify both tasks with a single model that, for the first time, jointly learns the spatial and temporal features of images and videos by routing them through dedicated experts and aligns them using matching and contrastive learning techniques. Furthermore, we systematically study the domain shift between the two tasks by investigating whether and to what extent these seemingly related tasks can mutually benefit from their respective training data. Extensive evaluations on the widely used video and visual dialog datasets of AVSD and VisDial show that our model achieves new state-of-the-art results across four benchmarks in both zero-shot and fine-tuning settings.
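The abstract describes two core ingredients: routing image and video features through dedicated spatial and temporal experts, and aligning visual and text representations with contrastive learning. The sketch below illustrates both ideas in a minimal, generic form; the function names, the single-layer experts, and the frame-difference input to the temporal expert are illustrative assumptions, not the paper's actual architecture or loss.

```python
import numpy as np

def expert(x, W, b):
    # Toy single-layer expert: linear projection + ReLU (stand-in for a real expert module).
    return np.maximum(x @ W + b, 0.0)

def route_features(frames, W_s, b_s, W_t, b_t):
    """Route visual features through dedicated experts (hypothetical routing rule).

    frames: (T, D) array of per-frame features; T == 1 for a still image.
    The spatial expert sees every frame; the temporal expert only fires
    when there is more than one frame, so images and videos share one path.
    """
    spatial = expert(frames, W_s, b_s).mean(axis=0)            # (H,) pooled spatial features
    if frames.shape[0] > 1:                                    # video: add temporal expert
        temporal = expert(np.diff(frames, axis=0), W_t, b_t).mean(axis=0)
        return spatial + temporal
    return spatial                                             # image: spatial expert only

def contrastive_loss(vis, txt, tau=0.07):
    """Generic InfoNCE-style visual-text alignment over a batch (not the paper's exact loss)."""
    vis = vis / np.linalg.norm(vis, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = vis @ txt.T / tau                                 # (B, B) pairwise similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                         # pull matched pairs together
```

Because the temporal expert is skipped when only one frame is present, the same routing code handles both visual dialog (images) and video dialog (clips), mirroring the unification the paper argues for.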

Adnen Abdessaied, Anna Rohrbach, Marcus Rohrbach, Andreas Bulling • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Dialog | VisDial 1.0 (val) | MRR | 0.532 | 65 |
| Video Dialogue | AVSD DSTC8 (test) | BLEU-4 | 47.5 | 24 |
| Video-grounded Dialogue | DSTC7 (test) | BLEU-4 | 47.4 | 24 |
| Video Dialog | AVSD DSTC10 | BLEU-1 | 0.546 | 6 |
| Video Dialog | AVSD DSTC7 | BLEU-1 | 55.5 | 6 |
| Video Dialogue | AVSD DSTC10 (test) | CIDEr | 103.3 | 6 |
| Video Dialogue | AVSD DSTC7 (test) | BLEU-1 | 78.9 | 6 |

Other info

Code
