
Enhancing Alignment for Unified Multimodal Models via Semantically-Grounded Supervision

About

Unified Multimodal Models (UMMs) have emerged as a promising paradigm that integrates multimodal understanding and generation within a single modeling framework. However, current generative training paradigms suffer from two inherent limitations: a granularity mismatch between sparse text prompts and dense visual targets, and redundancy in the supervision signal. We present Semantically-Grounded Supervision (SeGroS), a fine-tuning framework designed to resolve this granularity mismatch and supervisory redundancy in UMMs. At its core, we propose a novel visual grounding map from which we construct two complementary supervision signals. First, we formulate semantic Visual Hints to compensate for the sparsity of text prompts. Second, we generate a semantically-grounded Corrupted Input that explicitly strengthens the supervision of masking-based UMMs by restricting the reconstruction loss to core text-aligned regions. Extensive evaluations on GenEval, DPGBench, and CompBench demonstrate that SeGroS significantly improves generation fidelity and cross-modal alignment across various UMM architectures.
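The second supervision signal restricts a masked reconstruction loss to regions the grounding map marks as text-aligned. A minimal sketch of that idea, assuming the grounding map is a per-pixel text-image alignment score in [0, 1] (the function name and threshold are illustrative, not the paper's implementation):

```python
import numpy as np

def grounded_reconstruction_loss(pred, target, grounding_map, threshold=0.5):
    """Hypothetical sketch: restrict an MSE reconstruction loss to
    core text-aligned regions selected by a visual grounding map.

    pred, target   -- (H, W) arrays of reconstructed / original values
    grounding_map  -- (H, W) alignment scores in [0, 1]
    threshold      -- scores above this mark text-aligned regions
    """
    # Binary mask of semantically grounded (text-aligned) pixels.
    mask = grounding_map > threshold
    if not mask.any():
        return 0.0
    # Mean squared error computed over grounded pixels only, so
    # background regions contribute no supervisory signal.
    return float(np.mean((pred[mask] - target[mask]) ** 2))
```

Pixels outside the grounded region are excluded entirely, so the model is not penalized for reconstruction errors in areas the text prompt does not describe.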

Jiyeong Kim, Yerim So, Hyesong Choi, Uiwon Hwang, Dongbo Min• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | GQA | Accuracy | 58.7 | 1249 |
| Text-to-Image Generation | GenEval | Overall Score | 88.66 | 506 |
| Vision Understanding | MMMU | Accuracy | 36 | 65 |
| Visual Understanding | MME | MME Score | 1220 | 54 |
| Text-to-Image Generation | CompBench | Overall Score | 88.08 | 33 |
| Text-to-Image Generation | DPGBench | Overall Score | 86.58 | 24 |
| Vision Understanding | SEED | Accuracy | 65.5 | 15 |
| Visual Understanding | POPE | Accuracy | 84.5 | 4 |
