
Towards Long-window Anchoring in Vision-Language Model Distillation

About

While large vision-language models (VLMs) demonstrate strong long-context understanding, their widely deployed small counterparts struggle with vision-language alignment over long inputs because of their limited window sizes. We find that knowledge distillation, as a complement to Rotary Position Embeddings (RoPE), can extend a student's effective window size by anchoring long-range behavior from the large model. Building on this insight, we propose LAid, which directly targets the transfer of long-range attention mechanisms through two complementary components: (1) a progressive distance-weighted attention matching that dynamically emphasizes longer position differences during training, and (2) a learnable RoPE response gain modulation that selectively amplifies position sensitivity where needed. Extensive experiments across multiple model families demonstrate that LAid-distilled models achieve up to 3.2 times longer effective context windows than baseline small models while maintaining or improving performance on standard VL benchmarks. Spectral analysis further suggests that LAid preserves crucial low-frequency attention components that conventional methods fail to transfer. Our work not only provides practical techniques for building more efficient long-context VLMs but also offers theoretical insight into how positional understanding emerges and transfers during distillation.
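The two components can be illustrated with a minimal sketch. The function names, the weighting scheme, and the gain parameterization below are hypothetical illustrations (the paper's actual implementation is not given here): the first function up-weights attention discrepancies at larger position offsets, and the second applies RoPE with a per-frequency gain that can amplify position sensitivity.

```python
import math

def distance_weighted_attention_loss(teacher, student, alpha=1.0):
    """Toy distance-weighted attention matching (hypothetical sketch).

    teacher, student: n x n attention maps as nested lists.
    The weight (|i - j| / (n - 1)) ** alpha grows with the distance
    between query and key positions, so mismatches at long range
    dominate the loss. In LAid the emphasis would be scheduled
    ("progressive") over training; here alpha is a fixed knob.
    """
    n = len(teacher)
    total = 0.0
    for i in range(n):
        for j in range(n):
            w = (abs(i - j) / max(n - 1, 1)) ** alpha
            total += w * (teacher[i][j] - student[i][j]) ** 2
    return total / n

def gated_rope(vec, pos, base=10000.0, gain=None):
    """RoPE with a per-frequency response gain (hypothetical sketch).

    vec: even-length embedding vector; pos: integer position.
    gain[k] scales the rotation angle of frequency band k, so a
    learned gain > 1 makes that band more position-sensitive
    (gain=None recovers standard RoPE).
    """
    d = len(vec)
    out = [0.0] * d
    for k in range(d // 2):
        freq = base ** (-2.0 * k / d)         # standard RoPE frequency
        g = 1.0 if gain is None else gain[k]  # learnable response gain
        theta = pos * freq * g
        c, s = math.cos(theta), math.sin(theta)
        x1, x2 = vec[2 * k], vec[2 * k + 1]
        out[2 * k] = x1 * c - x2 * s          # 2-D rotation per band
        out[2 * k + 1] = x1 * s + x2 * c
    return out
```

With identical maps the loss is zero, and a mismatch between distant positions is penalized more heavily than the same mismatch between adjacent ones; at position 0 the gated rotation is the identity regardless of the gain.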

Haoyi Zhou, Shuo Li, Tianyu Chen, Qi Song, Chonghan Gao, Jianxin Li • 2025

Related benchmarks

Task                                    Dataset                               Metric                Result  Rank
Long-context Visual Question Answering  Visual HayStack Long Window (test)    Accuracy (50 images)  67.04   11
Long-context Visual Question Answering  Visual HayStack Short Window (test)   Accuracy (1 image)    96.83   11
