
†DAGGER: Distractor-Aware Graph Generation for Executable Reasoning in Math Problems

About

Chain-of-Thought (CoT) prompting is widely adopted for mathematical problem solving, including in low-resource languages, yet its behavior under irrelevant context remains underexplored. To systematically study this challenge, we introduce DISTRACTMATH-BN, a Bangla benchmark that augments MGSM and MSVAMP with semantically coherent but computationally irrelevant information. Evaluating seven models ranging from 3B to 12B parameters, we observe substantial performance degradation under distractors: standard models drop by up to 41 points, while reasoning-specialized models decline by 14 to 20 points despite consuming five times more tokens. We propose †DAGGER, which reformulates mathematical problem solving as executable computational graph generation with explicit modeling of distractor nodes. Training Gemma-3 models with supervised fine-tuning followed by Group Relative Policy Optimization achieves comparable weighted accuracy on augmented benchmarks while using 89 percent fewer tokens than reasoning models. Importantly, this robustness emerges without explicit training on distractor-augmented examples. Our results suggest that enforcing structured intermediate representations improves robustness and inference efficiency in mathematical reasoning compared to free-form approaches, particularly in noisy, low-resource settings.
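To make the core idea concrete, here is a minimal sketch of what "executable computational graph generation with explicit distractor nodes" could look like. The graph schema, node names, and example problem are illustrative assumptions, not the paper's actual output format: the point is that a distractor quantity is represented in the graph but excluded from the executed computation.

```python
# Hypothetical sketch of a DAGGER-style intermediate representation:
# the model emits a small computational graph whose nodes carry an
# operation plus an is_distractor flag; execution evaluates only the
# nodes reachable from the answer, so distractor facts never affect it.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    op: str                                    # "const", "add", "sub", "mul", "div"
    args: list = field(default_factory=list)   # literals for "const", child names otherwise
    is_distractor: bool = False                # parsed from the problem, but irrelevant

def execute(graph: dict, target: str) -> float:
    """Recursively evaluate `target`; distractor-only nodes are never reached."""
    node = graph[target]
    if node.op == "const":
        return float(node.args[0])
    vals = [execute(graph, a) for a in node.args]
    if node.op == "add": return sum(vals)
    if node.op == "sub": return vals[0] - vals[1]
    if node.op == "mul": return vals[0] * vals[1]
    if node.op == "div": return vals[0] / vals[1]
    raise ValueError(f"unknown op: {node.op}")

# Toy problem: "Rina buys 3 pens at 5 taka each. Her brother is 12
# years old. How much does she spend?" The brother's age is a distractor.
graph = {
    "pens":  Node("pens", "const", [3]),
    "price": Node("price", "const", [5]),
    "age":   Node("age", "const", [12], is_distractor=True),  # irrelevant fact
    "total": Node("total", "mul", ["pens", "price"]),
}
print(execute(graph, "total"))  # 15.0
```

Because the answer is produced by deterministic graph execution rather than free-form token generation, the distractor node can be present in the representation without contaminating the arithmetic, which is one plausible reading of why the structured format resists irrelevant context.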

Zabir Al Nazi, Shubhashis Roy Dipta, Sudipta Kar • 2026

Related benchmarks

Task                    Dataset        Metric               Result  Rank
Mathematical Reasoning  MSVAMP Bangla  Accuracy (Original)  78.8    13
Mathematical Reasoning  MGSM Bangla    Accuracy (Original)  0.784   13
Mathematical Reasoning  MGSM Thai      Score                87.6    5
Mathematical Reasoning  MGSM Telugu    Accuracy             78.8    2
