
SSL: Sweet Spot Learning for Differentiated Guidance in Agentic Optimization

About

Reinforcement learning with verifiable rewards has emerged as a powerful paradigm for training intelligent agents. However, existing methods typically employ binary rewards that fail to capture quality differences among trajectories achieving identical outcomes, thereby overlooking potential diversity within the solution space. Inspired by the "sweet spot" concept in tennis (the racket's core region that produces optimal hitting effects), we introduce Sweet Spot Learning (SSL), a novel framework that provides differentiated guidance for agent optimization. SSL follows a simple yet effective principle: progressively amplified, tiered rewards guide policies toward the sweet-spot region of the solution space. This principle naturally adapts across diverse tasks: visual perception tasks leverage distance-tiered modeling to reward proximity, while complex reasoning tasks reward incremental progress toward promising solutions. We theoretically demonstrate that SSL preserves optimal solution ordering and enhances the gradient signal-to-noise ratio, thereby fostering more directed optimization. Extensive experiments across GUI perception, short/long-term planning, and complex reasoning tasks show consistent improvements over strong baselines on 12 benchmarks, achieving up to 2.5x sample-efficiency gains and effective cross-task transferability. Our work establishes SSL as a general principle for training capable and robust agents.
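To make the "distance-tiered" idea concrete, here is a minimal sketch of what a tiered reward for a visual-grounding task might look like: predictions closer to the target center fall into higher-reward tiers instead of receiving a flat 0/1 signal. The tier thresholds, reward values, and the function name are illustrative assumptions, not the paper's actual settings.

```python
import math

def tiered_reward(pred, target_center,
                  tiers=(0.05, 0.15, 0.30),      # assumed distance thresholds (normalized coords)
                  rewards=(1.0, 0.5, 0.2)):      # assumed amplified rewards near the sweet spot
    """Hypothetical distance-tiered reward: the closer a predicted point is
    to the target center, the higher the tier it lands in; far misses get 0."""
    d = math.dist(pred, target_center)
    for threshold, r in zip(tiers, rewards):
        if d <= threshold:
            return r
    return 0.0

# A direct hit lands in the innermost tier; a near miss still earns partial credit.
print(tiered_reward((0.50, 0.50), (0.50, 0.50)))  # 1.0
print(tiered_reward((0.50, 0.60), (0.50, 0.50)))  # 0.5
print(tiered_reward((0.00, 0.00), (1.00, 1.00)))  # 0.0
```

Compared with a binary reward, this scheme still ranks an exact hit above all misses (preserving the optimal ordering the abstract mentions) while giving the policy a graded signal to climb toward the sweet-spot region.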

Jinyang Wu, Changpeng Yang, Yuhao Shen, Fangzhi Xu, Bolin Ni, Chonghua Liao, Yuchen Liu, Hongzhen Wang, Shuai Nie, Shuai Zhang, Haoran Luo, Jiaming Xu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| GUI Grounding | ScreenSpot Pro | Average Score | 2.91e+3 | 169 |
| GUI Grounding | ScreenSpot | Avg Acc | 83 | 76 |
| Short-Term Planning | GUI-Act-Web | Type Success Rate | 94.54 | 16 |
| Short-Term Planning | OmniAct-Web | Type Success Rate | 95.39 | 16 |
| Short-Term Planning | OmniAct Desktop | Type Success Rate | 92.38 | 16 |
| Short-Term Planning | AndroidControl Low | Type | 85.17 | 16 |
| Long-Term Planning | AndroidControl High | Type Rate | 71.79 | 14 |
| Long-Term Planning | GUI-Odyssey | Type Success Rate | 65.9 | 14 |
| Sudoku Solving | Sudoku | Success Rate (pass@1) | 45.4 | 10 |
| Abstraction and Reasoning | ARC-AGI | ARC-1 Score | 58.2 | 6 |

Showing 10 of 11 rows.
