
VisRefiner: Learning from Visual Differences for Screenshot-to-Code Generation

About

Screenshot-to-code generation aims to translate user interface screenshots into executable frontend code that faithfully reproduces the target layout and style. Existing multimodal large language models perform this mapping directly from screenshots but are trained without observing the visual outcomes of their generated code. In contrast, human developers iteratively render their implementation, compare it with the design, and learn how visual differences relate to code changes. Inspired by this process, we propose VisRefiner, a training framework that enables models to learn from visual differences between rendered predictions and reference designs. We construct difference-aligned supervision that associates visual discrepancies with corresponding code edits, allowing the model to understand how appearance variations arise from implementation changes. Building on this, we introduce a reinforcement learning stage for self-refinement, where the model improves its generated code by observing both the rendered output and the target design, identifying their visual differences, and updating the code accordingly. Experiments show that VisRefiner substantially improves single-step generation quality and layout fidelity, while also endowing models with strong self-refinement ability. These results demonstrate the effectiveness of learning from visual differences for advancing screenshot-to-code generation.
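The self-refinement loop described above (render the prediction, compare it with the target, identify visual differences, and edit the code accordingly) can be sketched as follows. This is a toy illustration, not the paper's implementation: `render` and `visual_diff` are hypothetical stand-ins for a real browser renderer and a multimodal difference model, and code is reduced to a bag of element tokens.

```python
# Illustrative sketch of a render-compare-refine loop.
# All functions are hypothetical stand-ins: a real system would render
# HTML to a screenshot and query a multimodal model for differences.

def render(code):
    # Stand-in renderer: treats each whitespace-separated token in the
    # code as one visible UI element.
    return set(code.split())

def visual_diff(rendered, target):
    # Returns (elements missing from the rendering, extra elements).
    return target - rendered, rendered - target

def refine(code, target, max_steps=3):
    # Iteratively edit the code until its rendering matches the target
    # design, mirroring the loop described in the abstract.
    for _ in range(max_steps):
        missing, extra = visual_diff(render(code), target)
        if not missing and not extra:
            break  # rendering matches the reference design
        kept = [t for t in code.split() if t not in extra]
        code = " ".join(kept + sorted(missing))
    return code

target_design = {"header", "nav", "button"}
draft = "header footer"
final = refine(draft, target_design)
assert render(final) == target_design
```

In the actual framework, the edit step is performed by the trained model conditioned on both images; here it is replaced by a trivial token repair so the loop structure is runnable.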

Jie Deng, Kaichun Yao, Libo Zhang • 2026

Related benchmarks

Task                            Dataset            Result            Rank
Screenshot-to-code generation   Design2Code        Block Match 91.4  20
Screenshot-to-code generation   Design2Code HARD   Block Match 80.8  14
Screenshot-to-code generation   VisDiffUI (test)   Block Match 96.6  8
