Beyond LLM-as-a-Judge: Deterministic Metrics for Multilingual Generative Text Evaluation

About

While Large Language Models (LLMs) are increasingly adopted as automated judges for evaluating generated text, their outputs are often costly and highly sensitive to prompt design, language, and aggregation strategy, which severely limits reproducibility. To address these challenges, we propose OmniScore, a family of complementary, deterministic learned metrics built on small (<1B parameter) models. OmniScore approximates LLM-judge behavior while preserving the low latency and consistency of traditional model-based scoring. We trained the models with large-scale synthetic supervision (~564k instances in 107 languages) and evaluated them on 8,617 manually annotated instances. The OmniScore family supports reliable, multi-dimensional scores across a variety of settings, including reference-based, source-grounded, and hybrid evaluation. We evaluate these models on question answering (QA), translation, and summarization in 6 languages. Our results demonstrate that lightweight, deterministic learned metrics provide a practical and scalable alternative to frontier LLMs. Our models and datasets can be found at https://huggingface.co/collections/QCRI/omniscore

Firoj Alam, Gagan Bhatia, Sahinur Rahman Laskar, Shammur Absar Chowdhury • 2026
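The abstract does not specify the scoring interface, so the sketch below is only a rough illustration of the general approach: a small encoder with a scalar regression head scoring a candidate under the three settings named above (source-grounded, reference-based, hybrid). The checkpoint name and the input template are hypothetical placeholders, not OmniScore's actual format.

```python
# Minimal sketch of deterministic learned-metric scoring with a small (<1B)
# encoder and a regression head. The checkpoint name and the input template
# below are illustrative assumptions, NOT the published OmniScore interface.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "QCRI/omniscore-base"  # hypothetical name; see the HF collection

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=1)
model.eval()

def score(candidate: str, source: str | None = None, reference: str | None = None) -> float:
    """Return a scalar quality score; deterministic given fixed weights."""
    # Assumed template covering the three settings from the abstract:
    # source-grounded (source only), reference-based (reference only),
    # and hybrid (both).
    parts = []
    if source is not None:
        parts.append(f"source: {source}")
    if reference is not None:
        parts.append(f"reference: {reference}")
    parts.append(f"candidate: {candidate}")
    inputs = tokenizer(" ".join(parts), return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

# Hybrid evaluation: both the source and a gold reference are available.
print(score("Le chat dort.", source="The cat is sleeping.", reference="Le chat dort."))
```

Because the score is a forward pass through fixed weights rather than sampled LLM output, repeated calls on the same input return the same value, which is the determinism property the paper emphasizes.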

Related benchmarks

Task | Dataset | Result | Rank
Score Prediction | OmniScore-Bench (test) | MAE 0.78 | 18
Headline Generation | OmniScore Evaluation Set | MAE 0.60 | 5
Paraphrase | OmniScore Evaluation Set | MAE 0.86 | 5
Subjective Rubric-based Scoring | OmniScore overall (test) | MAE 0.78 | 5
Translation | OmniScore Evaluation Set | MAE 0.68 | 5
Summarization | OmniScore Evaluation Set | MAE 1.09 | 5
Multi-task Scoring | OmniScore Evaluation Set | Average MAE 0.84 | 5
Question Answering | OmniScore Evaluation Set | MAE 0.66 | 5
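All results above are mean absolute error (MAE) between the metric's predicted score and the human annotation, so lower is better. A minimal reference implementation:

```python
# Mean absolute error between metric predictions and human annotations,
# the quantity reported in the table above (lower is better).
def mean_absolute_error(predicted: list[float], gold: list[float]) -> float:
    assert len(predicted) == len(gold) and predicted, "need paired, non-empty lists"
    return sum(abs(p - g) for p, g in zip(predicted, gold)) / len(predicted)

# Toy example with hypothetical quality ratings (not data from the paper).
print(mean_absolute_error([4.2, 3.1, 4.8], [4.0, 3.0, 5.0]))  # ≈ 0.167
```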
