AutoGUI: Scaling GUI Grounding with Automatic Functionality Annotations from LLMs
About
User interface understanding with vision-language models (VLMs) has received much attention due to its potential for enhancing software automation. However, existing datasets used to build UI-VLMs either contain only large-scale context-free element annotations or provide contextualized functional descriptions for elements at a small scale. In this work, we propose the **AutoGUI** pipeline for automatically annotating UI elements with detailed functionality descriptions at scale. Specifically, we leverage large language models (LLMs) to infer element functionality by comparing UI state changes before and after simulated interactions. To improve annotation quality, we propose LLM-aided rejection and verification, eliminating invalid annotations without human labor. We construct the high-quality AutoGUI-704k dataset using the proposed pipeline, featuring diverse and detailed functionality annotations that are rarely provided by previous datasets. Human evaluation shows that we achieve annotation correctness comparable to that of a trained human annotator. Extensive experiments show that our dataset markedly enhances VLMs' UI grounding capabilities and exhibits significant scaling effects. We also demonstrate the potential of our dataset in UI agent tasks. Please visit our project at https://autogui-project.github.io/.
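The annotation loop described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the function names (`annotate_element`, `verify_annotation`), the prompts, and the stub LLM are all assumptions made for clarity.

```python
# Hypothetical sketch of the AutoGUI pipeline: an LLM infers an element's
# functionality from the UI state change around a simulated interaction,
# then a second LLM-aided pass rejects inconsistent annotations.

def annotate_element(before_state: str, after_state: str, llm) -> str:
    """Infer element functionality from the before/after UI states."""
    prompt = (
        "UI state BEFORE interaction:\n" + before_state +
        "\n\nUI state AFTER interaction:\n" + after_state +
        "\n\nDescribe the functionality of the interacted element."
    )
    return llm(prompt)

def verify_annotation(annotation: str, before_state: str,
                      after_state: str, llm) -> bool:
    """LLM-aided rejection: ask whether the annotation matches the change."""
    prompt = (
        f"Annotation: {annotation}\n"
        f"State change: {before_state} -> {after_state}\n"
        "Answer YES if the annotation matches the state change, else NO."
    )
    return llm(prompt).strip().upper().startswith("YES")

if __name__ == "__main__":
    # Stub LLM for demonstration; a real pipeline would query an actual model.
    def stub(prompt: str) -> str:
        if prompt.startswith("Annotation:"):
            return "YES"
        return "Opens the search results page for the typed query."

    before = "<input id='q'>"
    after = "<ul class='results'></ul>"
    ann = annotate_element(before, after, stub)
    print(verify_annotation(ann, before, after, stub))
```

In the real pipeline the states would be rendered UI snapshots or accessibility trees collected by a crawler, and rejected annotations are simply dropped rather than corrected.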
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Element Grounding | FuncPred (test) | Grounding Accuracy | 65 | 13 |
| Action Grounding | VWB AG (test) | Grounding Accuracy | 70.9 | 13 |
| Element Grounding | VWB EG (test) | Grounding Accuracy | 0.903 | 13 |
| Element Grounding | ScreenSpot (test) | Grounding Accuracy | 80 | 13 |
| Element Grounding | ScreenSpot v2 (test) | Grounding Accuracy | 83.2 | 13 |
| Element Grounding | MoTIF (test) | Grounding Accuracy | 67 | 13 |
| GUI agent task planning | AITW | General Step Accuracy | 36.34 | 4 |