Reasoning or Retrieval? A Study of Answer Attribution on Large Reasoning Models
About
Large reasoning models (LRMs) exhibit unprecedented capabilities in solving complex problems through Chain-of-Thought (CoT) reasoning. However, recent studies reveal that their final answers often contradict their own reasoning traces. We hypothesize that this inconsistency stems from two competing mechanisms for generating answers: CoT reasoning and memory retrieval. To test this hypothesis, we conduct controlled experiments that challenge LRMs with misleading cues during reasoning and/or corrupted answers during retrieval. Our results across models and datasets confirm that both mechanisms operate simultaneously, with their relative dominance influenced by multiple factors: problem domain, model scale, and fine-tuning approach (e.g., reinforcement learning vs. distillation). The findings reveal a critical limitation in current reasoning fine-tuning paradigms: models can exploit the retrieval mechanism as a shortcut, effectively "hacking" the reward signal and undermining genuine reasoning development. To address this challenge, we introduce FARL, a novel fine-tuning framework that integrates memory unlearning with reinforcement learning. By suppressing retrieval shortcuts during fine-tuning, FARL promotes reasoning-dominant behavior and enhances generalizable reasoning capabilities. The code is available at https://github.com/ZJUWYH/FARL.
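The abstract describes FARL as alternating two objectives: unlearning memorized (retrieved) answers and reinforcing reasoning with a reward signal. A minimal toy sketch of such an alternating loop is shown below; the function names, the gradient-ascent unlearning rule, and the update schedule are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of a FARL-style fine-tuning loop. All names and update
# rules here are assumptions for illustration; see the repository for the
# actual implementation.

def unlearning_step(params, memorized_grad, lr=0.01):
    # Gradient *ascent* on the loss of memorized answers: a common way to
    # express unlearning, here assumed to suppress the retrieval shortcut.
    return [p + lr * g for p, g in zip(params, memorized_grad)]

def rl_step(params, reward_grad, lr=0.01):
    # Policy-gradient-style update that reinforces reasoning traces whose
    # final answers earn reward.
    return [p + lr * g for p, g in zip(params, reward_grad)]

def farl_finetune(params, batches, lr=0.01):
    # Alternate memory unlearning with reinforcement learning, mirroring the
    # abstract's description: first suppress retrieval, then reward reasoning.
    for memorized_grad, reward_grad in batches:
        params = unlearning_step(params, memorized_grad, lr)
        params = rl_step(params, reward_grad, lr)
    return params
```

With toy scalars, `farl_finetune([0.0], [([1.0], [2.0])], lr=0.1)` applies the two updates in sequence, illustrating only the alternation structure, not real model training.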
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | Mathematics out-of-domain (test) | Accuracy | 72.4 | 30 |
| Mathematical Reasoning | MMLU Out of Domain | MTL Score | 1.90e+3 | 4 |
| Mathematical Reasoning | MMLU Math&Logic (train) | R-PSR | 19.7 | 4 |
| Reasoning | Mathematical Reasoning (train) | MTL (Loss) | 1.66e+3 | 4 |
| Reasoning Robustness | Mathematical Reasoning Perturbation Experiments | Robustness Perturbation Success Rate (R-PSR) | 29.5 | 4 |