# Table-R1: Inference-Time Scaling for Table Reasoning

## About
In this work, we present the first study to explore inference-time scaling on table reasoning tasks. We develop and evaluate two post-training strategies to enable inference-time scaling: distillation from frontier-model reasoning traces and reinforcement learning with verifiable rewards (RLVR). For distillation, we introduce a large-scale dataset of reasoning traces generated by DeepSeek-R1, which we use to fine-tune LLMs into the Table-R1-SFT model. For RLVR, we propose task-specific verifiable reward functions and apply the GRPO algorithm to obtain the Table-R1-Zero model. We evaluate our Table-R1-series models across diverse table reasoning tasks, including short-form QA, fact verification, and free-form QA. Notably, the Table-R1-Zero model matches or exceeds the performance of GPT-4.1 and DeepSeek-R1 while using only a 7B-parameter LLM, and it also generalizes well to out-of-domain datasets. Extensive ablation and qualitative analyses reveal the benefits of instruction tuning, model architecture choices, and cross-task generalization, as well as the emergence of essential table reasoning skills during RL training.
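To make the RLVR setup concrete, the sketch below shows what a task-specific verifiable reward for short-form table QA might look like: the final answer is extracted from the model response and compared to the gold answer after light normalization, yielding a binary reward. This is an illustrative assumption, not the paper's exact reward implementation; the `<answer>` tag convention and the normalization rules are hypothetical.

```python
# Hypothetical sketch of a verifiable reward for short-form table QA
# under RLVR (assumed format; not the paper's exact implementation).
import re

def extract_answer(response: str) -> str:
    """Pull the text inside a final <answer>...</answer> tag, if present."""
    match = re.search(r"<answer>(.*?)</answer>", response, re.DOTALL)
    return match.group(1) if match else response

def normalize(text: str) -> str:
    """Lowercase and strip commas/whitespace so '1,000.0' matches '1000'."""
    text = re.sub(r"[,\s]", "", text.strip().lower())
    try:
        # Compare numeric answers by value rather than surface form.
        return str(float(text))
    except ValueError:
        return text

def accuracy_reward(response: str, gold: str) -> float:
    """Binary verifiable reward: 1.0 on normalized exact match, else 0.0."""
    return 1.0 if normalize(extract_answer(response)) == normalize(gold) else 0.0

print(accuracy_reward("The total is <answer>1,000</answer>", "1000.0"))  # 1.0
print(accuracy_reward("<answer>Paris</answer>", "London"))               # 0.0
```

A binary reward like this is directly checkable against the dataset label, which is what makes it usable as a GRPO training signal without a learned reward model; free-form QA would instead need a softer metric (e.g. a ROUGE-style overlap score).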
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Table Question Answering | WikiTQ (test) | Accuracy | 81.7 | 92 |
| Text-to-SQL | Spider | -- | -- | 57 |
| Structure Comprehending | RealHitBench | Exact Match (EM) | 28.5 | 49 |
| Chart Generation | RealHitBench | ECR | 16 | 49 |
| Data Analysis | RealHitBench | GPT Score | 36.24 | 49 |
| Fact Checking | RealHitBench | Exact Match | 0.00 | 49 |
| Text-to-SQL | Bird | Total Execution Accuracy | 50.98 | 22 |
| Numerical Reasoning | RealHitBench | Exact Match (EM) | 0.00 | 21 |
| Symbolic Chain of Thought Reasoning | TableBench | ROUGE | 28.89 | 13 |
| Agent-based Data Analysis | InfiAgent-DABench | Accuracy | 70.82 | 13 |