UI-Zoomer: Uncertainty-Driven Adaptive Zoom-In for GUI Grounding
About
GUI grounding, which localizes interface elements from screenshots given natural language queries, remains challenging for small icons and dense layouts. Test-time zoom-in methods improve localization by cropping and re-running inference at higher resolution, but they apply cropping uniformly across all instances with fixed crop sizes, ignoring whether the model is actually uncertain on each case. We propose **UI-Zoomer**, a training-free adaptive zoom-in framework that treats both the trigger and the scale of zoom-in as a prediction uncertainty quantification problem. A confidence-aware gate fuses spatial consensus among stochastic candidates with token-level generation confidence to selectively trigger zoom-in only when localization is uncertain. When triggered, an uncertainty-driven crop sizing module decomposes prediction variance into inter-sample positional spread and intra-sample box extent, deriving a per-instance crop radius via the law of total variance. Extensive experiments on ScreenSpot-Pro, UI-Vision, and ScreenSpot-v2 demonstrate consistent improvements over strong baselines across multiple model architectures, achieving gains of up to +13.4%, +10.3%, and +4.2% respectively, with no additional training required.
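The two components above can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function names, thresholds, and the assumption that each predicted box contributes a uniform-distribution variance of (half-extent)²/3 are choices made here for concreteness, not details specified by the paper.

```python
import numpy as np

def should_zoom(centers, token_conf, spread_thresh=0.05, conf_thresh=0.8):
    """Illustrative confidence-aware gate: trigger zoom-in when stochastic
    candidates disagree spatially OR token-level confidence is low.
    Thresholds are hypothetical, on coordinates normalized to [0, 1]."""
    centers = np.asarray(centers, dtype=float)   # (N, 2) candidate click points
    spread = centers.std(axis=0).mean()          # spatial-consensus proxy
    return bool(spread > spread_thresh or token_conf < conf_thresh)

def crop_radius(centers, half_extents, scale=2.0):
    """Illustrative crop sizing via the law of total variance:
    total variance = Var(box centers)            [inter-sample spread]
                   + E[variance within a box]    [intra-sample extent],
    modeling each box as a uniform distribution over its extent,
    so its per-axis variance is half_extent**2 / 3."""
    centers = np.asarray(centers, dtype=float)       # (N, 2) box centers
    half = np.asarray(half_extents, dtype=float)     # (N, 2) half width/height
    inter = centers.var(axis=0)                      # inter-sample positional spread
    intra = (half ** 2 / 3.0).mean(axis=0)           # mean intra-sample variance
    return scale * np.sqrt(inter + intra)            # per-instance radius (x, y)
```

With three tightly clustered candidates and high token confidence, the gate skips zoom-in; widely scattered candidates or low confidence would trigger it, and `crop_radius` then yields a larger crop when either variance term grows.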
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| GUI Grounding | ScreenSpot Pro | Average Score: 67.8 | 307 |
| GUI Grounding | ScreenSpot Desktop V2 | Text Accuracy: 99 | 55 |
| GUI Grounding | ScreenSpot Web V2 | Text Accuracy: 95.7 | 55 |
| GUI Grounding | ScreenSpot Mobile V2 | Text Accuracy: 99.3 | 55 |
| UI Element Grounding | ScreenSpot Overall v2 | Overall Accuracy (Avg): 94.9 | 26 |
| UI Grounding | UI-Vision | -- | 24 |