
VegaChat: A Robust Framework for LLM-Based Chart Generation and Assessment

About

Natural-language-to-visualization (NL2VIS) systems based on large language models (LLMs) have substantially improved the accessibility of data visualization. However, their further adoption is hindered by two coupled challenges: (i) the absence of standardized evaluation metrics makes it difficult to assess progress in the field and compare different approaches; and (ii) natural language descriptions are inherently underspecified, so multiple visualizations may be valid for the same query. To address these issues, we introduce VegaChat, a framework for generating, validating, and assessing declarative visualizations from natural language. We propose two complementary metrics: Spec Score, a deterministic metric that measures specification-level similarity without invoking an LLM, and Vision Score, a library-agnostic, image-based metric that leverages a multimodal LLM to assess chart similarity and prompt compliance. We evaluate VegaChat on the NLV Corpus and on the annotated subset of ChartLLM. VegaChat achieves near-zero rates of invalid or empty visualizations, while Spec Score and Vision Score exhibit strong correlation with human judgments (Pearson 0.65 and 0.71, respectively), indicating that the proposed metrics support consistent, cross-library comparison. The code and evaluation artifacts are available at https://zenodo.org/records/17062309.
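To make the idea of a deterministic, specification-level metric concrete, here is a minimal sketch of how two declarative chart specs could be compared without invoking an LLM. This is an illustration only, not the paper's actual Spec Score definition: the flattening scheme and the Jaccard similarity rule are assumptions, and the Vega-Lite specs are toy examples.

```python
import json

def flatten(spec, prefix=""):
    """Flatten a nested spec into a set of 'path=value' strings."""
    items = set()
    if isinstance(spec, dict):
        for key, value in sorted(spec.items()):
            items |= flatten(value, f"{prefix}{key}.")
    elif isinstance(spec, list):
        for i, value in enumerate(spec):
            items |= flatten(value, f"{prefix}{i}.")
    else:
        items.add(f"{prefix.rstrip('.')}={json.dumps(spec)}")
    return items

def spec_similarity(spec_a, spec_b):
    """Jaccard similarity over flattened spec entries; 1.0 means identical specs."""
    a, b = flatten(spec_a), flatten(spec_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two toy Vega-Lite specs that differ only in mark type.
predicted = {"mark": "bar", "encoding": {"x": {"field": "year"}, "y": {"field": "sales"}}}
reference = {"mark": "line", "encoding": {"x": {"field": "year"}, "y": {"field": "sales"}}}
print(spec_similarity(predicted, reference))  # shared encodings, differing mark → 0.5
```

A metric of this shape is deterministic and cheap, but it is tied to one spec grammar; that is the gap the complementary, image-based Vision Score is described as filling for cross-library comparison.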

Marko Hostnik, Rauf Kurbanov, Yaroslav Sokolov, Artem Trofimov • 2026

Related benchmarks

Task                            | Dataset                                 | Result                     | Rank
Chart Generation                | NLV (non-sequential)                    | Vision Score: 85.1         | 4
Chart Generation                | ChartLLM                                | Vision Score: 56.7         | 4
Correlation with human judgment | NLV and ChartLLM (171 sampled examples) | Pearson Correlation: 0.71  | 4
