
Latent Poincaré Shaping for Agentic Reinforcement Learning

About

We propose LaPha, a method for training AlphaZero-like LLM agents in a Poincaré latent space. Under LaPha, the search process can be visualized as a tree rooted at the prompt and growing outward from the origin toward the boundary of the Poincaré ball, where negative curvature provides exponentially increasing capacity with radius. Using hyperbolic geodesic distance to rule-verified correctness, we define a node potential and assign dense process rewards by potential differences. We further attach a lightweight value head on the same shared latent space, enabling self-guided test-time scaling with almost no additional overhead. On MATH-500, LaPha improves Qwen2.5-Math-1.5B from 66.0% to 88.2%. With value-head-guided search, LaPha-1.5B reaches 56.7% accuracy on AIME'24, and LaPha-7B further achieves 60.0% on AIME'24 and 53.3% on AIME'25.
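The potential-difference reward described above can be sketched concretely. This is a minimal illustration, not the paper's implementation: it assumes the node potential is the negative Poincaré-ball geodesic distance from a node's latent to the latent of the rule-verified correct answer (here called `z_star`, a hypothetical name), so that each tree edge earns a dense reward equal to the change in potential.

```python
import math

def poincare_distance(u, v):
    """Geodesic distance in the unit Poincare ball:
    d(u, v) = arccosh(1 + 2||u-v||^2 / ((1-||u||^2)(1-||v||^2)))."""
    diff2 = sum((a - b) ** 2 for a, b in zip(u, v))
    nu = sum(a * a for a in u)
    nv = sum(b * b for b in v)
    return math.acosh(1.0 + 2.0 * diff2 / ((1.0 - nu) * (1.0 - nv)))

def potential(z, z_star):
    # Assumed potential: higher when the latent is geodesically
    # closer to the verified-correct latent z_star.
    return -poincare_distance(z, z_star)

def process_reward(z_parent, z_child, z_star):
    # Dense process reward for one tree edge: the potential difference.
    # Positive if the step moved the latent toward the correct answer.
    return potential(z_child, z_star) - potential(z_parent, z_star)
```

For example, with the prompt at the origin and the verified answer at radius 0.8, a step from `[0.0, 0.0]` to `[0.4, 0.0]` moves closer to the answer and receives a positive reward, while a step away would be penalized by the same shaping.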

Hanchen Xia, Baoyou Chen, Zelin Zang, Yutang Ge, Guojiang Zhao, Siyu Zhu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | MATH 500 | Accuracy | 92 | 155 |
| Mathematical Reasoning | AIME 24 | Accuracy | 60 | 35 |
| Mathematical Reasoning | OlympiadBench | Accuracy | 58 | 30 |
| Mathematical Reasoning | AIME 25 | Accuracy | 53.3 | 22 |
| Mathematical Reasoning | Gaokao En 23 | Accuracy | 74.3 | 18 |
