
ProRe: A Proactive Reward System for GUI Agents via Reasoner-Actor Collaboration

About

Reward is critical to the evaluation and training of large language models (LLMs). However, existing rule-based or model-based reward methods struggle to generalize to GUI agents, where access to ground-truth trajectories or application databases is often unavailable, and static trajectory-based LLM-as-a-Judge approaches suffer from limited accuracy. To address these challenges, we propose ProRe, a proactive reward system that leverages a general-purpose reasoner and domain-specific evaluator agents (actors). The reasoner schedules targeted state probing tasks, which the evaluator agents then execute by actively interacting with the environment to collect additional observations. This enables the reasoner to assign more accurate and verifiable rewards to GUI agents. Empirical results on over 3K trajectories demonstrate that ProRe improves reward accuracy and F1 score by up to 5.3% and 19.4%, respectively. Furthermore, integrating ProRe with state-of-the-art policy agents yields a success rate improvement of up to 22.4%. The source code is available at https://github.com/V-Droid-Agent/ProRe.
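The reasoner-actor collaboration described in the abstract can be sketched roughly as follows. This is a minimal illustration only: every class and method name here (`Reasoner`, `EvaluatorActor`, `schedule_probes`, and so on) is a hypothetical stand-in, not ProRe's actual API, and the probe/reward logic is a toy stub.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One piece of evidence an actor collects from the environment."""
    description: str

class Reasoner:
    """Hypothetical general-purpose reasoner: decides which environment
    states to probe, then assigns a reward from the collected evidence."""
    def schedule_probes(self, trajectory):
        # In ProRe's framing, probes are targeted state-checking tasks,
        # e.g. "verify the alarm the agent was asked to set now exists".
        return [f"verify outcome of step {i}" for i in range(len(trajectory))]

    def assign_reward(self, trajectory, observations):
        # Toy rule: reward 1.0 only if every probe confirmed success.
        return 1.0 if all("ok" in o.description for o in observations) else 0.0

class EvaluatorActor:
    """Hypothetical domain-specific actor: executes one probing task by
    interacting with the GUI environment (stubbed out here)."""
    def execute(self, probe):
        # A real actor would drive the UI and read back the resulting state.
        return Observation(description=f"{probe}: ok")

def proactive_reward(trajectory):
    """End-to-end loop: schedule probes, gather observations, score."""
    reasoner, actor = Reasoner(), EvaluatorActor()
    probes = reasoner.schedule_probes(trajectory)
    observations = [actor.execute(p) for p in probes]
    return reasoner.assign_reward(trajectory, observations)
```

The key design point the sketch mirrors is that the reward is grounded in fresh observations gathered *after* the trajectory ends, rather than judged statically from the trajectory log alone.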

Gaole Dai, Shiqi Jiang, Ting Cao, Yuqing Yang, Yuanchun Li, Rui Tan, Mo Li, Lili Qiu• 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Reward Prediction | V-Droid trajectories | Accuracy: 93.1 | 9 |
| Reward Prediction | M3A trajectories | Accuracy: 91.4 | 9 |
| Reward Prediction | UI-TARS-7B trajectories | Accuracy: 96.5 | 9 |
| Reward Prediction | OSWorld | Reward Accuracy: 92.0 | 5 |
| Reward Prediction | OSWorld Chrome | Reward Accuracy: 93.5 | 5 |
