
MobileGUI-RL: Advancing Mobile GUI Agent through Reinforcement Learning in Online Environment

About

Recently, there has been a surge of vision-based GUI agents designed to automate everyday mobile and web tasks. These agents interpret raw GUI screenshots and autonomously decide where to click, scroll, or type, bypassing handcrafted rules and app-specific APIs. However, most existing methods train GUI agents in offline environments on pre-collected trajectories. This approach limits scalability, causes overfitting to specific UI templates, and yields brittle policies when faced with unseen environments. We present MobileGUI-RL, a scalable framework that trains GUI agents in online environments. MobileGUI-RL contains two key components. It (i) synthesizes a curriculum of learnable tasks through self-exploration and filtering, and (ii) adapts GRPO to GUI navigation with trajectory-aware advantages and composite rewards that balance task success and execution efficiency. Experiments on three online mobile-agent benchmarks show consistent gains, validating the effectiveness of our approach.
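The abstract's second component — GRPO with composite rewards and group-relative advantages — can be illustrated with a minimal sketch. The reward shaping below (a +1 success bonus minus a penalty proportional to the fraction of the step budget consumed, with a hypothetical `efficiency_weight`) is an assumption for illustration, not the paper's exact formulation; the group-normalized advantage is the core idea of GRPO.

```python
import statistics

def composite_reward(success: bool, steps: int, max_steps: int,
                     efficiency_weight: float = 0.1) -> float:
    """Hypothetical composite reward: task success balanced against
    execution efficiency (fewer steps => smaller penalty)."""
    success_term = 1.0 if success else 0.0
    efficiency_term = efficiency_weight * (steps / max_steps)
    return success_term - efficiency_term

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages: normalize each trajectory's reward
    by the mean and std of its sampled group (the GRPO baseline)."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mean) / std for r in rewards]

# Example: a group of 4 rollouts sampled for the same task.
group = [
    composite_reward(True, 5, 20),    # succeeded quickly
    composite_reward(True, 12, 20),   # succeeded slowly
    composite_reward(False, 20, 20),  # failed, used full budget
    composite_reward(False, 20, 20),
]
advs = grpo_advantages(group)
```

In this sketch, successful trajectories receive positive advantages and failed ones negative, with the faster success rewarded slightly more — matching the stated goal of balancing task success and execution efficiency.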

Yucheng Shi, Wenhao Yu, Zaitang Li, Yonglin Wang, Hongming Zhang, Ninghao Liu, Haitao Mi, Dong Yu • 2025

Related benchmarks

Task                     Dataset               Result                         Rank
GUI Agent Task           AndroidWorld          Success Rate: 30               104
Mobile Task Automation   AndroidWorld (test)   Average Success Rate: 0.448    75
