
VisionPangu: A Compact and Fine-Grained Multimodal Assistant with 1.7B Parameters

About

Large Multimodal Models (LMMs) have achieved strong performance in vision-language understanding, yet many existing approaches rely on large-scale architectures and coarse supervision, which limits their ability to generate detailed image captions. In this work, we present VisionPangu, a compact 1.7B-parameter multimodal model designed to improve detailed image captioning through efficient multimodal alignment and high-quality supervision. Our model combines an InternVL-derived vision encoder with the OpenPangu-Embedded language backbone via a lightweight MLP projector and adopts an instruction-tuning pipeline inspired by LLaVA. By incorporating dense human-authored descriptions from the DOCCI dataset, VisionPangu improves semantic coherence and descriptive richness without relying on aggressive model scaling. Experimental results demonstrate that compact multimodal models can achieve competitive performance while producing more structured and detailed captions. The code and model weights will be publicly available at https://www.modelscope.cn/models/asdfgh007/visionpangu.
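The connector described above follows the common LLaVA recipe: patch features from the vision encoder are mapped into the language model's embedding space by a small MLP and prepended to the text tokens before the backbone. The PyTorch sketch below illustrates that pattern only; the class names, hidden sizes, and two-layer GELU design are illustrative assumptions, not the released VisionPangu implementation.

```python
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    """Two-layer MLP mapping vision-encoder features into the LM embedding space.

    The dimensions below are illustrative assumptions, not VisionPangu's actual sizes.
    """
    def __init__(self, vision_dim: int = 1024, lm_dim: int = 2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim)
        return self.proj(patch_features)  # (batch, num_patches, lm_dim)


def multimodal_embed(patch_features, text_embeds, projector):
    """LLaVA-style input assembly (sketch): project visual tokens and
    concatenate them in front of the text embeddings fed to the backbone."""
    visual_tokens = projector(patch_features)               # (B, P, lm_dim)
    return torch.cat([visual_tokens, text_embeds], dim=1)   # (B, P + T, lm_dim)
```

In this setup only the lightweight projector bridges the two pretrained components, which is what keeps the alignment stage cheap relative to retraining either the encoder or the 1.7B language backbone.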

Jiaxin Fan, Wenpo Song • 2026

Related benchmarks

Task | Dataset | Result | Rank
Object Hallucination Evaluation | POPE | -- | 1455
Multimodal Perception and Cognition | MME | -- | 182
Multimodal Reasoning | MMBench | Overall Score: 62.5 | 78
Multimodal Understanding | MMMU | MMMU Score: 36.5 | 69
Image Captioning | COCO 2017 (val), 600-image subset | BLEU: 28.59 | 5
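The captioning entry reports corpus-level BLEU, which scores n-gram overlap between generated captions and human references. A minimal sketch using NLTK is shown below; the example sentences are hypothetical, and the official COCO caption evaluation toolkit may apply slightly different smoothing than this snippet.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical data: one model caption with two human references.
# A real evaluation would iterate over the COCO 2017 val subset instead.
generated = ["a small brown dog runs across a grassy park".split()]
references = [[
    "a brown dog running through a grassy field".split(),
    "a small dog runs across the park lawn".split(),
]]

# Corpus-level BLEU-4 with smoothing, comparable in spirit to the
# BLEU figure reported in the benchmark table above.
score = corpus_bleu(
    references,
    generated,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU-4: {score:.4f}")
```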
