Trifuse: Enhancing Attention-Based GUI Grounding via Multimodal Fusion
About
GUI grounding maps natural language instructions to the correct interface elements, serving as the perception foundation for GUI agents. Existing approaches predominantly rely on fine-tuning multimodal large language models (MLLMs) on large-scale GUI datasets to predict target element coordinates, which is data-intensive and generalizes poorly to unseen interfaces. Recent attention-based alternatives exploit localization signals in MLLMs' attention mechanisms without task-specific fine-tuning, but suffer from low reliability due to the lack of explicit and complementary spatial anchors in GUI images. To address this limitation, we propose Trifuse, an attention-based grounding framework that explicitly integrates complementary spatial anchors. Trifuse fuses attention, OCR-derived textual cues, and icon-level caption semantics via a Consensus-SinglePeak (CS) fusion strategy that enforces cross-modal agreement while retaining sharp localization peaks. Extensive evaluations on four grounding benchmarks demonstrate that Trifuse achieves strong performance without task-specific fine-tuning, substantially reducing the reliance on expensive annotated data. Moreover, ablation studies reveal that incorporating OCR and caption cues consistently improves attention-based grounding performance across different backbones, highlighting the effectiveness of Trifuse as a general framework for GUI grounding.
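The abstract does not specify the exact form of the Consensus-SinglePeak fusion, so the following is only a minimal sketch of the general idea it describes: three spatial score maps (attention, OCR, icon captions) are combined so that only regions where the modalities agree survive, and the sharpest surviving peak is taken as the grounding target. The function name `cs_fuse` and the product-based consensus are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cs_fuse(attn_map, ocr_map, caption_map, eps=1e-8):
    """Illustrative Consensus-SinglePeak fusion (assumed form, not the paper's).

    Each input is a 2D non-negative score map over the screenshot.
    Consensus: normalize each map and take their elementwise product,
    so a location scores high only if all modalities support it.
    SinglePeak: return the argmax of the consensus map as (x, y).
    """
    maps = [m / (m.sum() + eps) for m in (attn_map, ocr_map, caption_map)]
    consensus = np.prod(maps, axis=0)  # high only where all three agree
    row, col = np.unravel_index(np.argmax(consensus), consensus.shape)
    return (col, row), consensus  # (x, y) in image coordinates

# Usage: a peak shared by all three maps wins over a peak in only one map.
attn = np.full((8, 8), 0.01); attn[2, 5] = 1.0; attn[6, 1] = 0.9  # distractor
ocr = np.full((8, 8), 0.01); ocr[2, 5] = 1.0
cap = np.full((8, 8), 0.01); cap[2, 5] = 0.8
(x, y), _ = cs_fuse(attn, ocr, cap)  # → (5, 2)
```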
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| GUI Grounding | ScreenSpot v2 | Avg Accuracy | 93.2 | 203 |
| GUI Grounding | OSWorld-G | Average Score | 58.4 | 74 |
| GUI Grounding | OSWorld-G (test) | Element Accuracy | 58.4 | 52 |
| GUI Grounding | ScreenSpot-Pro (test) | Element Accuracy | 51.3 | 43 |
| GUI Grounding | ScreenSpot v1 (test) | Mobile Text Acc | 98.2 | 25 |
| GUI Grounding | ScreenSpot (test) | Element Accuracy | 90.6 | 13 |
| GUI Grounding | ScreenSpot v2 (test) | Element Accuracy | 93.2 | 9 |