Multi-Modal Video Dialog State Tracking in the Wild

About

We present MST-MIXER, a novel video dialog model operating over a generic multi-modal state tracking scheme. Current models that claim to perform multi-modal state tracking fall short in two major respects: (1) they track only one modality (mostly the visual input), or (2) they target synthetic datasets that do not reflect the complexity of real-world, in-the-wild scenarios. Our model addresses both limitations in an attempt to close this crucial research gap. Specifically, MST-MIXER first tracks the most important constituents of each input modality. It then predicts the missing underlying structure of the selected constituents of each modality by learning local latent graphs with a novel multi-modal graph structure learning method. Subsequently, the learned local graphs and features are parsed together to form a global graph operating on the mix of all modalities, which further refines its structure and node embeddings. Finally, the fine-grained graph node features are used to enhance the hidden states of the backbone Vision-Language Model (VLM). MST-MIXER achieves new state-of-the-art results on five challenging benchmarks.
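The pipeline described above can be sketched in a few lines. This is a minimal, hypothetical illustration in NumPy, not the authors' implementation: the saliency scoring, similarity-based latent adjacency, and averaging-style message passing are simplified stand-ins for the learned modules in MST-MIXER, and all function names and sizes are assumptions.

```python
import numpy as np

def top_k_constituents(feats, k):
    """Keep the k most salient feature rows, scored here by L2 norm
    (a stand-in for MST-MIXER's learned importance tracking)."""
    idx = np.argsort(np.linalg.norm(feats, axis=1))[::-1][:k]
    return feats[idx]

def local_latent_graph(feats):
    """Infer a latent adjacency from pairwise cosine similarity
    (a hypothetical stand-in for the graph structure learning module)."""
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    adj = normed @ normed.T
    np.fill_diagonal(adj, 0.0)       # no self-loops
    return np.maximum(adj, 0.0)      # keep only positive affinities

def refine(feats, adj, steps=2):
    """Refine node embeddings by degree-normalized message passing."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-8
    for _ in range(steps):
        feats = 0.5 * feats + 0.5 * (adj @ feats) / deg
    return feats

rng = np.random.default_rng(0)
visual = rng.normal(size=(10, 16))   # e.g. per-frame video features
textual = rng.normal(size=(8, 16))   # e.g. dialog token features

# 1) track the most important constituents of each modality
v_sel = top_k_constituents(visual, 4)
t_sel = top_k_constituents(textual, 4)
# 2) learn a local latent graph per modality and refine locally
v_ref = refine(v_sel, local_latent_graph(v_sel))
t_ref = refine(t_sel, local_latent_graph(t_sel))
# 3) mix all modalities into one global graph and refine jointly
mixed = np.vstack([v_ref, t_ref])
global_feats = refine(mixed, local_latent_graph(mixed))
# global_feats would then be fused into the backbone VLM's hidden states
```

In the actual model each of these steps is parameterized and trained end-to-end; the sketch only shows the data flow from per-modality selection, through local graphs, to a single mixed global graph.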

Adnen Abdessaied, Lei Shi, Andreas Bulling • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Dialog | VisDial 1.0 (val) | MRR | 0.477 | 65
Video-grounded Dialogue | DSTC7 (test) | BLEU-4 | 47.1 | 24
Video Dialogue | AVSD DSTC8 (test) | BLEU-4 | 47.1 | 24
Video Dialogue | AVSD DSTC7 (test) | BLEU-1 | 78.4 | 6
Video Dialogue | AVSD DSTC10 (test) | CIDEr | 91.2 | 6
Video Dialog | AVSD DSTC10 | BLEU-1 | 0.001 | 6
Video Dialog | AVSD DSTC7 | BLEU-1 | 0.2 | 6
