DeepSeek-VL: Towards Real-World Vision-Language Understanding

About

We present DeepSeek-VL, an open-source Vision-Language (VL) model designed for real-world vision and language understanding applications. Our approach is structured around three key dimensions:

Data construction. We strive to ensure our data is diverse, scalable, and extensively covers real-world scenarios, including web screenshots, PDFs, OCR, charts, and knowledge-based content, aiming for a comprehensive representation of practical contexts. We further derive a use-case taxonomy from real user scenarios and construct an instruction-tuning dataset accordingly; fine-tuning on this dataset substantially improves the model's user experience in practical applications.

Model architecture. Considering efficiency and the demands of most real-world scenarios, DeepSeek-VL incorporates a hybrid vision encoder that efficiently processes high-resolution (1024 × 1024) images while maintaining relatively low computational overhead. This design ensures the model can capture both critical semantic and fine-grained detail information across diverse visual tasks.

Training strategy. We posit that a proficient vision-language model should, first and foremost, possess strong language abilities. To preserve LLM capabilities during pretraining, we investigate an effective VL pretraining strategy that integrates LLM training from the beginning and carefully manages the competitive dynamics observed between the vision and language modalities.

The DeepSeek-VL family (1.3B and 7B models) delivers a superior user experience as a vision-language chatbot in real-world applications, achieving state-of-the-art or competitive performance across a wide range of vision-language benchmarks at the same model size while maintaining robust performance on language-centric benchmarks. We have made both the 1.3B and 7B models publicly available to foster innovation on this foundation model.
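To make the hybrid-encoder idea concrete, here is a minimal PyTorch sketch of the fusion pattern described above: a low-resolution branch for global semantics, a high-resolution branch for fine detail, and an MLP adaptor into the LLM's embedding space. All names (HybridVisionEncoder, sem_backbone, det_backbone) and dimensions are illustrative assumptions, not the released implementation; where the paper pairs pretrained encoders (a SigLIP-style semantic branch and a SAM-style high-resolution branch), this sketch uses stand-in conv stems.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridVisionEncoder(nn.Module):
    # Illustrative sketch, not DeepSeek-VL's actual code: a low-res branch
    # captures global semantics, a high-res branch keeps fine detail, and a
    # small MLP projects the fused tokens into the LLM's embedding space.
    def __init__(self, sem_dim=1024, det_dim=256, llm_dim=2048, grid=24):
        super().__init__()
        self.grid = grid  # both branches are pooled to a grid x grid token map
        # Stand-in backbones (a real system would use pretrained ViTs here,
        # e.g. a SigLIP-style encoder at 384px and a SAM-style one at 1024px).
        self.sem_backbone = nn.Conv2d(3, sem_dim, kernel_size=16, stride=16)  # 384 -> 24x24
        self.det_backbone = nn.Conv2d(3, det_dim, kernel_size=16, stride=16)  # 1024 -> 64x64
        self.projector = nn.Sequential(                                       # 2-layer MLP adaptor
            nn.Linear(sem_dim + det_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_1024: torch.Tensor) -> torch.Tensor:
        # Low-res branch sees a downsampled view; high-res branch sees the full image.
        image_384 = F.interpolate(image_1024, size=384, mode="bilinear", align_corners=False)
        sem = self.sem_backbone(image_384)           # (B, sem_dim, 24, 24)
        det = self.det_backbone(image_1024)          # (B, det_dim, 64, 64)
        det = F.adaptive_avg_pool2d(det, self.grid)  # align token grids: (B, det_dim, 24, 24)
        fused = torch.cat([sem, det], dim=1)         # channel-wise fusion
        tokens = fused.flatten(2).transpose(1, 2)    # (B, 576, sem_dim + det_dim)
        return self.projector(tokens)                # (B, 576, llm_dim) vision tokens for the LLM

enc = HybridVisionEncoder()
vision_tokens = enc(torch.randn(1, 3, 1024, 1024))
print(vision_tokens.shape)  # torch.Size([1, 576, 2048])
```

Note that both branches are pooled to the same token grid before fusion, so the 1024 × 1024 input adds detail without inflating the number of vision tokens handed to the LLM; that fixed token budget is where the low computational overhead comes from.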

Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, Yaofeng Sun, Chengqi Deng, Hanwei Xu, Zhenda Xie, Chong Ruan • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy | 68.4 | 1460
Visual Question Answering | VQA v2 | Accuracy | 88.1 | 1165
Visual Question Answering | TextVQA | -- | -- | 1117
Mathematical Reasoning | GSM8K | Accuracy | 55 | 983
Object Hallucination Evaluation | POPE | Accuracy | 88.1 | 935
Language Understanding | MMLU | Accuracy | 52.4 | 756
Text-based Visual Question Answering | TextVQA | Accuracy | 64.7 | 496
Multimodal Understanding | MM-Vet | MM-Vet Score | 41.5 | 418
Multimodal Understanding | MMBench | Accuracy | 64.6 | 367
Mathematical Reasoning | MathVista | Score | 36.9 | 322

Showing 10 of 91 rows.

Other info

Code: https://github.com/deepseek-ai/DeepSeek-VL
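The released 1.3B and 7B checkpoints are hosted on Hugging Face (e.g. deepseek-ai/deepseek-vl-7b-chat). Below is a hedged quick-start sketch in the spirit of the repository's README; the import paths (deepseek_vl.models.VLChatProcessor, deepseek_vl.utils.io.load_pil_images) and method names (prepare_inputs_embeds) are recalled from that README and should be verified against the current repo.

```python
import torch
from transformers import AutoModelForCausalLM
from deepseek_vl.models import VLChatProcessor   # package provided by the DeepSeek-VL repo
from deepseek_vl.utils.io import load_pil_images

model_path = "deepseek-ai/deepseek-vl-7b-chat"   # the small model is "deepseek-ai/deepseek-vl-1.3b-chat"
processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = processor.tokenizer

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = model.to(torch.bfloat16).cuda().eval()

# One user turn with an image slot; the image path here is a placeholder.
conversation = [
    {"role": "User", "content": "<image_placeholder>Describe this image.", "images": ["./example.png"]},
    {"role": "Assistant", "content": ""},
]

pil_images = load_pil_images(conversation)
inputs = processor(conversations=conversation, images=pil_images, force_batchify=True).to(model.device)

# Encode the image(s) and splice the vision tokens into the text embedding stream.
inputs_embeds = model.prepare_inputs_embeds(**inputs)

outputs = model.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True,
)
print(tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True))
```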
