
ABMAMBA: Multimodal Large Language Model with Aligned Hierarchical Bidirectional Scan for Efficient Video Captioning

About

In this study, we focus on video captioning with fully open multimodal large language models (MLLMs). Understanding visual sequences is challenging because of their intricate temporal dependencies and substantial length. The attention mechanism at the core of existing Transformer-based approaches scales quadratically with sequence length, making long-video processing computationally prohibitive. To address these limitations, we propose Aligned Hierarchical Bidirectional Scan Mamba (ABMamba), a fully open MLLM with linear computational complexity that enables scalable processing of video sequences. ABMamba extends deep state space models as its language backbone, replacing costly quadratic attention, and employs a novel Aligned Hierarchical Bidirectional Scan module that processes videos across multiple temporal resolutions. On standard video captioning benchmarks such as VATEX and MSR-VTT, ABMamba achieves performance competitive with typical MLLMs at approximately three times the throughput.
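The paper's Aligned Hierarchical Bidirectional Scan module is not specified in this summary. The sketch below is a minimal, hypothetical illustration of the general idea: a linear-time recurrence (standing in for a learned Mamba/SSM block) is run forward and backward over the frame sequence at several temporal resolutions, and the coarser outputs are aligned back to the original length before fusion. All function names, the fixed `decay` constant, and the averaging fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def selective_scan(x, decay=0.9):
    # Toy linear-time recurrence h[t] = decay * h[t-1] + x[t].
    # A real Mamba/SSM block learns input-dependent state dynamics;
    # this fixed-decay scan only mirrors its O(T) cost and structure.
    h = np.zeros_like(x[0])
    out = np.empty_like(x)
    for t in range(len(x)):
        h = decay * h + x[t]
        out[t] = h
    return out

def bidirectional_scan(x):
    # Forward scan plus time-reversed scan, fused by averaging,
    # so every position sees both past and future context.
    fwd = selective_scan(x)
    bwd = selective_scan(x[::-1])[::-1]
    return 0.5 * (fwd + bwd)

def hierarchical_bidirectional_scan(frames, levels=3):
    # Scan the frame features at full rate, 1/2 rate, 1/4 rate, ...,
    # then upsample coarse outputs back to length T and average.
    T, D = frames.shape
    fused = np.zeros_like(frames)
    for lvl in range(levels):
        stride = 2 ** lvl
        if stride > 1:
            # average-pool non-overlapping windows of `stride` frames
            pooled = frames[:T - T % stride].reshape(-1, stride, D).mean(axis=1)
        else:
            pooled = frames
        scanned = bidirectional_scan(pooled)
        up = np.repeat(scanned, stride, axis=0)
        if len(up) < T:  # pad tail if pooling truncated a remainder
            up = np.concatenate([up, np.repeat(up[-1:], T - len(up), axis=0)])
        fused += up / levels
    return fused

features = np.random.default_rng(0).normal(size=(16, 8)).astype(np.float32)
out = hierarchical_bidirectional_scan(features)
print(out.shape)  # (16, 8)
```

Each level costs O(T) in sequence length, so the whole module stays linear, in contrast to the O(T^2) cost of full self-attention over the same frames.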

Daichi Yashima, Shuhei Kurita, Yusuke Oda, Shuntaro Suzuki, Seitaro Otsuki, Komei Sugiura • 2026

Related benchmarks

Task                          Dataset                      Metric               Result    Rank
Video Captioning              MSR-VTT (test)               CIDEr                27.3      128
Video Captioning              VATEX (test)                 CIDEr                44.4      66
Video Question Answering      Video-MME without subtitles  Accuracy (Overall)   29.4      34
Video-Language Understanding  MSR-VTT                      Initial Memory (MB)  7.09e+3   6
