
MM-ReCoder: Advancing Chart-to-Code Generation with Reinforcement Learning and Self-Correction

About

Multimodal Large Language Models (MLLMs) have recently demonstrated promising capabilities in multimodal coding tasks such as chart-to-code generation. However, existing methods rely primarily on supervised fine-tuning (SFT), which teaches the model code patterns from chart-code pairs but never exposes it to a code execution environment. Moreover, while self-correction through execution feedback offers a potential route to better code quality, even state-of-the-art MLLMs have been shown to struggle with effective self-correction. In this work, we introduce MM-ReCoder, a chart-to-code generation model trained with reinforcement learning (RL) and equipped with self-correction ability. We propose a two-stage multi-turn self-correction RL strategy based on Group Relative Policy Optimization (GRPO): the first stage strengthens the model's self-correction ability by rolling out from a shared first turn, while the second stage improves coding capability through full-trajectory optimization. MM-ReCoder learns to produce more accurate and executable code through interaction with the execution environment and by iteratively correcting its own outputs. Results on three chart-to-code benchmarks demonstrate the state-of-the-art performance of MM-ReCoder.
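The multi-turn self-correction described above can be sketched as a simple loop: generate code, execute it, and on failure feed the error message back to the model for another attempt. The sketch below is illustrative only; `generate` stands in for any MLLM call, and the prompt format and turn limit are assumptions, not the paper's implementation.

```python
# Hedged sketch of an execution-feedback self-correction loop.
# `generate` is a placeholder for an MLLM call; the retry prompt
# wording and max_turns value are illustrative assumptions.
import subprocess
import sys
import tempfile

def run_code(code: str) -> tuple[bool, str]:
    """Execute candidate code in a subprocess; return (success, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )
    return proc.returncode == 0, proc.stderr

def self_correct(generate, prompt: str, max_turns: int = 3) -> str:
    """Multi-turn loop: regenerate with the error message until the code runs."""
    code = generate(prompt)
    for _ in range(max_turns):
        ok, err = run_code(code)
        if ok:
            break
        # Feed the execution error back to the model for the next turn.
        code = generate(f"{prompt}\n\nPrevious code failed with:\n{err}\nFix it.")
    return code
```

In an RL setup, trajectories collected from such a loop can then be scored by whether the final code executes and how closely its output matches the target chart.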
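GRPO, which the training strategy builds on, replaces a learned value baseline with a group-relative one: each rollout's reward is normalized against the mean and standard deviation of its group. A minimal sketch of that advantage computation, with illustrative names and not the paper's code:

```python
# Minimal sketch of the group-relative advantage used in GRPO
# (Group Relative Policy Optimization). Function and variable
# names are illustrative assumptions.
from statistics import mean, pstdev

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Normalize each rollout's reward against its group's statistics."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    if sigma == 0:
        # Identical rewards across the group carry no learning signal.
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

# Example: a group of 4 rollouts scored by execution feedback.
print(grpo_advantages([1.0, 0.0, 1.0, 0.5]))
```

In the paper's two-stage setting, the first stage would apply this over continuations sharing a common first turn, so the group comparison isolates the quality of the correction step.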

Zitian Tang, Xu Zhang, Jianbo Yuan, Yang Zou, Varad Gunjal, Songyao Jiang, Davide Modolo • 2026

Related benchmarks

Task                     Dataset     Metric          Result  Rank
Plot-to-code generation  Plot2Code   Pass Rate       98.5    47
Chart2Code               ChartMimic  Execution Rate  97.5    30
Chart2Code               ChartX      GPT-4o Score    2.32    16
