
Open-Source Multimodal Moxin Models with Moxin-VLM and Moxin-VLA

About

Recently, Large Language Models (LLMs) have undergone a significant transformation, marked by a rapid rise in both popularity and capability. Leading this evolution are proprietary LLMs such as GPT-4 and GPT-o1, which have captured widespread attention in the AI community for their remarkable performance and versatility. At the same time, open-source LLMs such as LLaMA and Mistral have contributed greatly to the growing popularity of LLMs because they are easy to customize and deploy across diverse applications. Moxin 7B is a fully open-source LLM developed in accordance with the Model Openness Framework, which moves beyond simply sharing model weights to embrace complete transparency in training, datasets, and implementation details, fostering a more inclusive and collaborative research environment that can sustain a healthy open-source ecosystem. To further equip Moxin with capabilities for different tasks, we develop three variants based on Moxin: Moxin-VLM, Moxin-VLA, and Moxin-Chinese, which target vision-language, vision-language-action, and Chinese-language capabilities, respectively. Experiments show that our models achieve superior performance across a range of evaluations. We adopt open-source frameworks and open data for training, and we release our models along with the data and code used to derive them.

Pu Zhao, Arash Akbari, Xuan Shen, Zhenglun Kong, Yixin Shen, Sung-En Chang, Timothy Rupprecht, Lei Lu, Enfu Nan, Changdi Yang, Yumei He, Weiyan Shi, Xingchen Xu, Yu Huang, Wei Jiang, Wei Wang, Yue Chen, Yong He, Yanzhi Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Robot Manipulation | LIBERO | Goal Achievement | 95 | 494 |
| Visual Question Answering | GQA (test-dev) | Accuracy | 64.88 | 178 |
| Hallucination Evaluation | POPE | Accuracy | 87.3 | 132 |
| Localization | RefCOCO+ (val) | Accuracy | 71.3 | 32 |
| Chinese Language Understanding | CMMLU (test) | CMMLU Score | 0.45 | 13 |
| Counting | TallyQA (val) | Accuracy | 66 | 6 |
| Localization | OCID-Ref (val) | Accuracy | 48.4 | 6 |
| Spatial Reasoning | VSR zero-shot (test) | Accuracy (zero-shot) | 60.8 | 6 |
| Open-Ended Visual Question Answering | VizWiz (val) | Accuracy | 54.08 | 6 |
