
Towards Semantic Equivalence of Tokenization in Multimodal LLM

About

Multimodal Large Language Models (MLLMs) have demonstrated exceptional capabilities in processing vision-language tasks. A crux of MLLMs lies in vision tokenization, which involves efficiently transforming input visual signals into the feature representations most beneficial for LLMs. However, existing vision tokenizers, essential for semantic alignment between vision and language, remain problematic: they aggressively fragment visual input, corrupting the semantic integrity of the image. To address this, this paper proposes a novel dynamic Semantic-Equivalent Vision Tokenizer (SeTok), which groups visual features into semantic units via a dynamic clustering algorithm, flexibly determining the number of tokens based on image complexity. The resulting vision tokens effectively preserve semantic integrity and capture both low-frequency and high-frequency visual features. The proposed MLLM (Setokim), equipped with SeTok, demonstrates significantly superior performance across various tasks, as evidenced by our experimental results. The project page is at https://chocowu.github.io/SeTok-web/.
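The core idea — letting the number of vision tokens adapt to image complexity rather than fixing it — can be illustrated with a minimal sketch. This is not the paper's actual algorithm (see the project page for that); it is a hypothetical greedy cosine-distance clustering where a similarity threshold, not a preset token count, decides how many semantic groups the patch features form, and each group is mean-pooled into one vision token.

```python
import numpy as np

def dynamic_cluster_tokens(patch_feats: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Group patch features into a variable number of semantic clusters.

    Hypothetical sketch: each L2-normalized patch feature joins the nearest
    existing cluster centroid if its cosine distance is below `threshold`,
    otherwise it seeds a new cluster. The cluster count therefore grows
    with feature diversity (a proxy for image complexity).
    """
    feats = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    centroids, members = [], []
    for f in feats:
        if centroids:
            sims = np.array([c @ f for c in centroids])
            best = int(sims.argmax())
            if 1.0 - sims[best] < threshold:
                # assign to nearest cluster and refresh its centroid
                members[best].append(f)
                c = np.mean(members[best], axis=0)
                centroids[best] = c / np.linalg.norm(c)
                continue
        # too dissimilar to every cluster: start a new semantic unit
        centroids.append(f)
        members.append([f])
    # mean-pool each cluster into one vision token
    return np.stack([np.mean(m, axis=0) for m in members])

rng = np.random.default_rng(0)
# two well-separated synthetic feature groups -> two tokens expected
a = rng.normal(loc=5.0, scale=0.1, size=(8, 16))
b = rng.normal(loc=-5.0, scale=0.1, size=(8, 16))
tokens = dynamic_cluster_tokens(np.vstack([a, b]), threshold=0.5)
print(tokens.shape)  # → (2, 16)
```

A simpler image (fewer distinct feature groups) yields fewer tokens under this scheme, while a cluttered one yields more — the flexibility the abstract attributes to SeTok.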

Shengqiong Wu, Hao Fei, Xiangtai Li, Jiayi Ji, Hanwang Zhang, Tat-Seng Chua, Shuicheng Yan • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy | 89.1 | 1455 |
| Visual Question Answering | VQA v2 | Accuracy | 78.5 | 1362 |
| Visual Question Answering | GQA | Accuracy | 65.6 | 1249 |
| Multimodal Evaluation | MME | -- | -- | 658 |
| Multimodal Capability Evaluation | MM-Vet | Score | 45.2 | 345 |
| Referring Expression Segmentation | RefCOCO+ (testA) | cIoU | 72.4 | 230 |
| Referring Expression Segmentation | RefCOCO+ (val) | cIoU | 68.0 | 223 |
| Referring Expression Segmentation | RefCOCO+ (testB) | cIoU | 61.2 | 210 |
| Text-to-Image Generation | MS-COCO | FID | 8.5 | 131 |
| Referring Expression Segmentation | RefCOCOg (val (U)) | cIoU | 71.3 | 89 |

Showing 10 of 20 rows.
