Efficient Benchmarking of AI Agents

About

Evaluating AI agents on comprehensive benchmarks is expensive because each evaluation requires interactive rollouts with tool use and multi-step reasoning. We study whether small task subsets can preserve agent rankings at substantially lower cost. Unlike static language model benchmarks, agent evaluation is subject to scaffold-driven distribution shift, since performance depends on the framework wrapping the underlying model. Across eight benchmarks, 33 agent scaffolds, and 70+ model configurations, we find that absolute score prediction degrades under this shift, while rank-order prediction remains stable. Exploiting this asymmetry, we propose a simple optimization-free protocol: evaluate new agents only on tasks with intermediate historical pass rates (30-70%). This mid-range difficulty filter, motivated by Item Response Theory, reduces the number of evaluation tasks by 44-70% while maintaining high rank fidelity under scaffold and temporal shifts. It provides more reliable rankings than random sampling, which exhibits high variance across seeds, and outperforms greedy task selection under distribution shift. These results suggest that reliable leaderboard ranking does not require full-benchmark evaluation.

Franck Ndzomga • 2026
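For concreteness, below is a minimal sketch of the mid-range difficulty filter described in the abstract, assuming prior results are available as a binary task-by-agent pass matrix. The function names, the 0.30-0.70 default thresholds, and the use of SciPy's spearmanr for rank fidelity are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import spearmanr

def select_midrange_tasks(pass_matrix, low=0.30, high=0.70):
    """Return indices of tasks whose historical pass rate falls in [low, high].

    pass_matrix: (n_tasks, n_historical_agents) array of 0/1 outcomes
    from previously evaluated agents.
    """
    pass_rates = pass_matrix.mean(axis=1)
    return np.flatnonzero((pass_rates >= low) & (pass_rates <= high))

def rank_fidelity(full_scores, subset_scores):
    """Spearman rank correlation between full-benchmark and subset
    scores for the same set of agents (higher = better rank preservation)."""
    rho, _ = spearmanr(full_scores, subset_scores)
    return rho

# Toy example: 12 tasks, 6 historical agents, 4 new agents (random data).
rng = np.random.default_rng(0)
historical = rng.integers(0, 2, size=(12, 6))
keep = select_midrange_tasks(historical)

new_agents = rng.integers(0, 2, size=(12, 4))   # outcomes of the new agents
full = new_agents.mean(axis=0)                  # full-benchmark accuracy
subset = new_agents[keep].mean(axis=0)          # accuracy on filtered tasks only
print(f"kept {keep.size}/12 tasks, Spearman rho = {rank_fidelity(full, subset):.3f}")
```

New agents are scored only on the retained subset; rank fidelity against the full benchmark is then measured with Spearman's rho, which is the metric reported in the table below.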

Related benchmarks

Task                   Dataset                    Mean Spearman Rho   Rank
Ranking Preservation   GAIA (test)                0.946               5
Ranking Preservation   USACO (test)               0.938               5
Ranking Preservation   tau-bench Airline (test)   0.944               5
Ranking Preservation   CoreBench Hard (test)      0.901               5
Ranking Preservation   MIND2WEB ONLINE (test)     0.921               5
Ranking Preservation   TerminalBench (test)       0.98                5
Ranking Preservation   SWE-bench Mini (test)      0.92                5
Ranking Preservation   SciCode (test)             0.736               4
