How Far Are LLMs from Professional Poker Players? Revisiting Game-Theoretic Reasoning with Agentic Tool Use
About
As Large Language Models (LLMs) are increasingly applied in high-stakes domains, their ability to reason strategically under uncertainty becomes critical. Poker provides a rigorous testbed, requiring not only strong actions but also principled, game-theoretic reasoning. In this paper, we conduct a systematic study of LLMs on multiple realistic poker tasks, evaluating both gameplay outcomes and reasoning traces. Our analysis reveals that LLMs fail to compete against traditional algorithms and identifies three recurring flaws: reliance on heuristics, factual misunderstandings, and a "knowing-doing" gap where actions diverge from reasoning. An initial attempt with behavior cloning and step-level reinforcement learning improves reasoning style but remains insufficient for accurate game-theoretic play. Motivated by these limitations, we propose ToolPoker, a tool-integrated reasoning framework that combines external solvers, which supply GTO-consistent actions, with precise, professional-style explanations. Experiments demonstrate that ToolPoker achieves state-of-the-art gameplay while producing reasoning traces that closely reflect game-theoretic principles.
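The "knowing-doing" gap above can be made concrete: for each decision, compare the action the model's reasoning trace recommends with the action it actually submitted. A minimal sketch (the function name and trace format are illustrative assumptions, not the paper's code):

```python
# Illustrative sketch of measuring a "knowing-doing" gap.
# Each trace pairs the action recommended in the model's reasoning
# with the action it actually played; the gap rate is the fraction
# of decisions where the two diverge.

def knowing_doing_gap(traces):
    """traces: list of (recommended_action, taken_action) pairs."""
    if not traces:
        return 0.0
    mismatches = sum(1 for rec, taken in traces if rec != taken)
    return mismatches / len(traces)

# Example: the model's stated reasoning disagrees with its play twice.
traces = [("raise", "raise"), ("fold", "call"),
          ("call", "call"), ("raise", "fold")]
print(knowing_doing_gap(traces))  # 0.5
```

A gap rate of zero means the model always plays the action its own reasoning endorses; higher values indicate actions diverging from reasoning.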
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Poker Gameplay | Leduc Hold'em (test) | -- | -- | 8 |
| Poker Gameplay | Limit Texas Hold'em (test) | -- | -- | 8 |
| Reasoning Evaluation | Leduc Hold'em | -- | -- | 6 |
| Reasoning Evaluation | Limit Texas Hold'em | -- | -- | 6 |
| Poker Gameplay Performance | Limit Texas Hold'em | NFSP Performance | 60.5 | 5 |
| Poker Gameplay Performance | Leduc Hold'em | NFSP | 11.5 | 5 |
| Poker Gameplay Performance | 3-player Leduc Hold'em | Gameplay Performance Score | 30.8 | 3 |
| Reasoning Quality Evaluation | 3-player Leduc Hold'em (test) | Hit Rate (HR) | 193 | 3 |
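One of the reasoning-quality metrics above is a Hit Rate (HR). As a hedged sketch, assuming HR counts agreement between an agent's chosen actions and an external solver's GTO-consistent recommendations (the exact definition in the benchmark may differ):

```python
# Illustrative sketch of a solver-agreement hit rate.
# Assumption: HR is the fraction (here reported as a rate) of decisions
# where the agent's action matches the solver's recommended action.

def hit_rate(agent_actions, solver_actions):
    """Fraction of decisions where agent and solver actions agree."""
    assert len(agent_actions) == len(solver_actions), "one action per decision"
    if not agent_actions:
        return 0.0
    hits = sum(a == s for a, s in zip(agent_actions, solver_actions))
    return hits / len(agent_actions)

agent = ["raise", "call", "fold", "raise"]
solver = ["raise", "fold", "fold", "raise"]
print(hit_rate(agent, solver))  # 0.75
```

A tool-integrated agent that queries the solver directly, as ToolPoker does, would by construction score near the maximum on such a metric.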