MM-ReCoder: Advancing Chart-to-Code Generation with Reinforcement Learning and Self-Correction
About
Multimodal Large Language Models (MLLMs) have recently demonstrated promising capabilities in multimodal coding tasks such as chart-to-code generation. However, existing methods rely primarily on supervised fine-tuning (SFT), which teaches the model code patterns from chart-code pairs but never exposes it to a code execution environment. Moreover, while self-correction driven by execution feedback offers a potential route to higher coding quality, even state-of-the-art MLLMs have been shown to struggle with effective self-correction. In this work, we introduce MM-ReCoder, a chart-to-code generation model trained with reinforcement learning (RL) and equipped with self-correction ability. We propose a two-stage multi-turn self-correction RL strategy based on Group Relative Policy Optimization (GRPO). The first stage enhances the model's self-correction ability by rolling out a shared first turn, while the second stage improves coding capability with full-trajectory optimization. MM-ReCoder learns to produce more accurate and executable code through interaction with the execution environment and by iteratively correcting its own outputs. Our results on three chart-to-code benchmarks demonstrate the state-of-the-art performance of MM-ReCoder.
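To make the GRPO component concrete, below is a minimal sketch of the group-relative advantage computation that GRPO applies to a group of rollouts sharing the same prompt (and, in our first stage, the same shared first turn). This is an illustrative assumption about the standard GRPO formulation, not MM-ReCoder's actual training code; the function name and reward values are hypothetical.

```python
def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize each rollout's scalar reward
    against the mean and standard deviation of its own group.

    `rewards` is the list of scalar rewards (e.g. execution/similarity
    scores) for G rollouts sampled from the same prompt. Rollouts that
    beat their group's average get positive advantage; the rest get
    negative advantage. `eps` guards against a zero-variance group.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]


# Toy group of 4 second-turn rollouts branching from one shared first turn:
# one fully executable correction, one failed, two partial.
advantages = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because advantages are centered within each group, no separate value network is needed: the group itself serves as the baseline, which is what makes GRPO attractive for multi-turn rollouts where many trajectories share a common prefix.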
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Plot-to-code generation | Plot2Code | Pass Rate | 98.5 | 47 |
| Chart2Code | ChartMimic | Execution Rate | 97.5 | 30 |
| Chart2Code | ChartX | GPT-4o Score | 2.32 | 16 |