
VoCo-LLaMA: Towards Vision Compression with Large Language Models

About

Vision-Language Models (VLMs) have achieved remarkable success in various multi-modal tasks, but they are often bottlenecked by the limited context window and high computational cost of processing high-resolution image inputs and videos. Vision compression can alleviate this problem by reducing the vision token count. Previous approaches compress vision tokens with external modules and force LLMs to understand the compressed ones, leading to visual information loss. However, these methods do not fully exploit how LLMs themselves understand vision tokens during compression learning. We propose VoCo-LLaMA, the first approach to compress vision tokens using LLMs. By introducing Vision Compression tokens during the vision instruction tuning phase and leveraging attention distillation, our method distills how LLMs comprehend vision tokens into their processing of VoCo tokens. VoCo-LLaMA facilitates effective vision compression and improves the computational efficiency during the inference stage. Specifically, our method achieves minimal performance loss with a compression ratio of 576$\times$, resulting in up to 94.8$\%$ fewer FLOPs and 69.6$\%$ acceleration in inference time. Furthermore, through continuous training on time-series compressed token sequences of video frames, VoCo-LLaMA demonstrates the ability to understand temporal correlations, outperforming previous methods on popular video question-answering benchmarks. Our approach presents a promising way to unlock the full potential of VLMs' contextual window, enabling more scalable multi-modal applications. The project page, along with the associated code, can be accessed via https://yxxxb.github.io/VoCo-LLaMA-page/.
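The core idea described above, having the LLM itself distill vision tokens into a small set of compression tokens, can be illustrated with an attention mask: text tokens are blocked from attending directly to raw vision tokens, so any visual information they use must flow through the VoCo tokens. A minimal NumPy sketch of that masking pattern follows; the function name, token counts, and `[vision | voco | text]` ordering are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def voco_attention_mask(n_vision: int, n_voco: int, n_text: int) -> np.ndarray:
    """Boolean attention mask (True = may attend) sketching the
    VoCo-style compression idea. Assumed token order: [vision | voco | text].
    VoCo tokens see the vision tokens; text tokens do not, so visual
    information must be routed through the VoCo tokens."""
    n = n_vision + n_voco + n_text
    mask = np.tril(np.ones((n, n), dtype=bool))  # standard causal mask
    text_start = n_vision + n_voco
    # Block text tokens (rows after vision + voco) from raw vision tokens.
    mask[text_start:, :n_vision] = False
    return mask

mask = voco_attention_mask(n_vision=4, n_voco=1, n_text=3)
assert mask[4, :4].all()        # the VoCo token still attends to all vision tokens
assert not mask[5:, :4].any()   # text tokens cannot reach vision tokens directly
```

With a real image encoder producing 576 vision tokens and a single VoCo token, this masking yields the 576$\times$ compression ratio quoted above, since the text portion of the sequence only ever attends to one token in place of 576.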

Xubing Ye, Yukang Gan, Xiaoke Huang, Yixiao Ge, Yansong Tang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 | Accuracy | 76.9 | 1165 |
| Visual Question Answering | GQA | Accuracy | 59.8 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy | 81.5 | 935 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 75.4 | 664 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 59.1 | 496 |
| Visual Question Answering | GQA | Accuracy | 57.4 | 374 |
| Multimodal Understanding | MMBench | Accuracy | 57.9 | 367 |
| Referring Expression Comprehension | RefCOCO+ (val) | Accuracy | 80.02 | 345 |
| Referring Expression Comprehension | RefCOCO (val) | Accuracy | 85.17 | 335 |
| Referring Expression Comprehension | RefCOCO (testA) | Accuracy | 0.8892 | 333 |

(10 of 39 rows shown)
