OmniFusion Technical Report
About
Last year, multimodal architectures drove a revolution in AI-based approaches and solutions, extending the capabilities of large language models (LLMs). We propose *OmniFusion*, a model built on a pretrained LLM with adapters for the visual modality. We evaluated and compared several architecture design principles for better coupling of text and visual data: MLP and transformer adapters, various CLIP-ViT-based encoders (SigLIP, InternViT, etc.) and approaches to fusing them, the image encoding method (whole-image or tile encoding), and two 7B LLMs (a proprietary one and the open-source Mistral). Experiments on eight visual-language benchmarks (VizWiz, POPE, MM-Vet, ScienceQA, MMBench, TextVQA, VQAv2, and MMMU) show that the best OmniFusion setup achieves top scores on various VQA tasks compared with open-source LLaVA-like solutions. We also present a variety of scenarios in which OmniFusion provides highly detailed answers across domains: housekeeping, sightseeing, culture, medicine, recognition of handwritten and scanned equations, and more. The Mistral-based OmniFusion model is an open-source solution, with weights, training, and inference scripts available at https://github.com/AIRI-Institute/OmniFusion.
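The adapter-based coupling described above can be sketched in PyTorch as a small projection module that maps frozen vision-encoder features into the LLM embedding space. This is an illustrative sketch of the MLP-adapter design principle only: the layer sizes, hidden dimension, and class name are assumptions, not the released OmniFusion implementation.

```python
import torch
import torch.nn as nn


class MLPAdapter(nn.Module):
    """Projects vision-encoder patch features into the LLM token-embedding space.

    Dimensions are illustrative (e.g., SigLIP-like 1152-d features into a
    7B-LLM 4096-d embedding space); the released weights may differ.
    """

    def __init__(self, vision_dim: int = 1152, llm_dim: int = 4096, hidden_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, llm_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim) from a frozen ViT encoder
        return self.proj(image_features)


adapter = MLPAdapter()
features = torch.randn(1, 729, 1152)  # dummy patch features for one image
visual_tokens = adapter(features)     # ready to be concatenated with text embeddings
print(visual_tokens.shape)
```

The resulting visual tokens are prepended or interleaved with the text-token embeddings before being fed to the LLM; a transformer adapter would replace the MLP with attention layers but plays the same bridging role.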
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 | Accuracy | 80.94 | 1165 |
| Visual Question Answering | GQA | Accuracy | 65.72 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy | 87.21 | 935 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 39.4 | 418 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score | 39.4 | 281 |
| Multimodal Understanding | MMMU | Accuracy | 36.9 | 275 |
| Multi-discipline Multimodal Understanding | MMMU | -- | -- | 266 |
| Science Question Answering | ScienceQA (SQA-IMG) | Accuracy | 69.2 | 114 |
| Visual Question Answering | SciQA-IMG | Accuracy | 71.29 | 53 |
| Multimodal Understanding | MMBench | Score | 69 | 30 |