Enhancing Alignment for Unified Multimodal Models via Semantically-Grounded Supervision
About
Unified Multimodal Models (UMMs) have emerged as a promising paradigm that integrates multimodal understanding and generation within a single modeling framework. However, current generative training paradigms suffer from two inherent limitations: a granularity mismatch between sparse text prompts and dense visual targets, and supervisory redundancy from reconstructing regions irrelevant to the prompt. We present Semantically-Grounded Supervision (SeGroS), a fine-tuning framework designed to resolve both issues. At its core, we propose a novel visual grounding map used to construct two complementary supervision signals. First, we formulate semantic Visual Hints to compensate for the sparsity of text prompts. Second, we generate a semantically-grounded Corrupted Input that strengthens the supervision of masking-based UMMs by restricting the reconstruction loss to core text-aligned regions. Extensive evaluations on GenEval, DPGBench, and CompBench demonstrate that SeGroS significantly improves generation fidelity and cross-modal alignment across various UMM architectures.
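To make the two signals concrete, here is a minimal sketch, assuming the grounding map is realized as per-token alignment weights in [0, 1] over the visual token grid; the function name `segros_losses` and all tensor shapes are illustrative assumptions, not the paper's actual interface:

```python
import torch
import torch.nn.functional as F


def segros_losses(pred_tokens, target_tokens, mask, grounding_map):
    """Sketch of the two SeGroS supervision signals (shapes assumed).

    pred_tokens, target_tokens: (B, N, D) predicted / ground-truth visual
        tokens of a masking-based UMM.
    mask: (B, N) bool, True where the input token was corrupted (masked).
    grounding_map: (B, N) weights in [0, 1] scoring how strongly each
        token aligns with the text prompt.
    """
    # Semantically-grounded Corrupted Input: restrict the reconstruction
    # loss to masked tokens inside core text-aligned regions, dropping
    # redundant supervision on background tokens.
    weights = mask.float() * grounding_map                          # (B, N)
    per_token = F.mse_loss(pred_tokens, target_tokens,
                           reduction="none").mean(dim=-1)           # (B, N)
    recon_loss = (weights * per_token).sum() / weights.sum().clamp(min=1.0)

    # Semantic Visual Hint: pool ground-truth features over the grounded
    # regions to obtain a dense conditioning vector that compensates for
    # the sparsity of the text prompt.
    hint = (grounding_map.unsqueeze(-1) * target_tokens).sum(dim=1)
    hint = hint / grounding_map.sum(dim=1, keepdim=True).clamp(min=1.0)
    return recon_loss, hint
```

In this reading, the grounding map plays both roles at once: it gates which masked tokens contribute to the reconstruction loss and it pools the features that form the Visual Hint; the paper may compute either signal differently.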
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | GQA | Accuracy | 58.7 | 1249 |
| Text-to-Image Generation | GenEval | Overall Score | 88.66 | 506 |
| Visual Understanding | MMMU | Accuracy | 36.0 | 65 |
| Visual Understanding | MME | MME Score | 1220 | 54 |
| Text-to-Image Generation | CompBench | Overall Score | 88.08 | 33 |
| Text-to-Image Generation | DPGBench | Overall Score | 86.58 | 24 |
| Visual Understanding | SEED | Accuracy | 65.5 | 15 |
| Visual Understanding | POPE | Accuracy | 84.5 | 4 |