
GUI-Shift: Enhancing VLM-Based GUI Agents through Self-supervised Reinforcement Learning

About

Training effective Vision-Language Models (VLMs) for GUI agents typically depends on large-scale annotated datasets, whose collection is both labor-intensive and error-prone. We introduce K-step GUI Transition, a self-supervised inverse dynamics task in which VLMs learn GUI dynamics by predicting the initial action that causes a transition between two GUI states. This approach eliminates the need for natural language instructions and enables scalable dataset construction from existing GUI trajectories or automated exploration. Building on this task, we propose GUI-Shift, a reinforcement learning (RL) framework that combines rule-based optimization with data filtering to improve VLM performance. We conduct extensive experiments using multiple VLM backbones across four benchmarks, spanning GUI task automation (AndroidControl, GUI Odyssey) and GUI grounding (ScreenSpot-v2, ScreenSpot-Pro). Our results show that training on GUI-Shift generalizes well to both GUI automation and grounding tasks, yielding up to an 11.2% increase in GUI automation accuracy. This study underscores the potential of self-supervised RL to leverage unlabeled GUI trajectories and offers a scalable alternative to training with annotated samples.
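To make the self-supervised setup concrete, here is a minimal sketch of how K-step transition samples might be mined from an unlabeled GUI trajectory, with a simple rule-based reward. The function names (`build_kstep_samples`, `rule_based_reward`) and the exact-match reward are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of the K-step GUI Transition task described above.
# A recorded trajectory is a list of (screenshot, action) pairs. Each sample
# pairs a start state s_t with the state s_{t+k} reached k steps later; the
# model must predict a_t, the first action that caused the transition.
# No natural-language instruction is needed -- labels come from the log itself.

def build_kstep_samples(trajectory, k):
    """trajectory: list of (state, action) pairs; returns self-supervised samples."""
    states = [s for s, _ in trajectory]
    actions = [a for _, a in trajectory]
    samples = []
    for t in range(len(trajectory) - k):
        samples.append({
            "start_state": states[t],
            "end_state": states[t + k],
            "target_action": actions[t],  # label obtained for free from the trajectory
        })
    return samples

def rule_based_reward(predicted_action, target_action):
    """Toy rule-based reward: 1.0 on exact match, else 0.0 (an assumption;
    real rewards could score action type and coordinates separately)."""
    return 1.0 if predicted_action == target_action else 0.0

# Example: a 5-step trajectory with k=2 yields 3 training samples.
traj = [(f"screen_{i}", f"action_{i}") for i in range(5)]
samples = build_kstep_samples(traj, k=2)
```

Because the samples require no human annotation, the same procedure applies to existing trajectory logs or to states gathered by automated exploration, which is what makes the dataset construction scalable.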

Longxi Gao, Li Zhang, Pengzhi Gao, Wei Liu, Jian Luan, Mengwei Xu• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| GUI Task Execution | AITZ | Success Rate | 46.78 | 20 |
| GUI Navigation | AITW | Overall Success Rate | 54.38 | 19 |
| GUI Navigation | AC Low | Goal Rate | 94.22 | 12 |
| GUI Navigation | OmniAct-D | Goal Rate (GR) | 80.01 | 12 |
| GUI Navigation | AC High | Goal Rate (GR) | 73.41 | 12 |
| GUI Navigation | GuiAct-W | Success Rate (GR) | 89.49 | 12 |
| GUI Navigation | OmniAct-W | Goal Rate (GR) | 82.93 | 12 |
| GUI Navigation | GuiAct-P | Goal Rate (GR) | 59.1 | 12 |
| GUI Navigation | Llamatouch | Goal Rate | 75.08 | 12 |
| GUI Navigation | Odyssey | Grounding Rate (GR) | 62.54 | 11 |
