
A Framework for Efficient Model Evaluation through Stratification, Sampling, and Estimation

About

Model performance evaluation is a critical and expensive task in machine learning and computer vision. Without clear guidelines, practitioners often estimate model accuracy using a one-time, completely random selection of the data. However, by employing tailored sampling and estimation strategies, one can obtain more precise estimates and reduce annotation costs. In this paper, we propose a statistical framework for model evaluation that includes stratification, sampling, and estimation components. We examine the statistical properties of each component and evaluate their efficiency (precision). One key result of our work is that stratification via k-means clustering based on accurate predictions of model performance yields efficient estimators. Our experiments on computer vision datasets show that this method consistently provides more precise accuracy estimates than traditional simple random sampling, with efficiency gains that can reach 10x. We also find that model-assisted estimators, which leverage predictions of model accuracy on the unlabeled portion of the dataset, are generally more efficient than traditional estimators that rely solely on the labeled data.
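The pipeline described above can be sketched in a few lines: stratify the population by k-means clustering on a per-example prediction of model correctness, sample within strata under the annotation budget, and combine the stratified estimate with a model-assisted (difference) estimator that uses the predictions on the unlabeled portion. This is a minimal illustration on synthetic data, not the authors' implementation; the Beta-distributed `pred_acc` scores, the proportional allocation, and all variable names are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic population: a per-example predicted probability that the model is
# correct (e.g., calibrated confidence), and the true 0/1 correctness, which
# is correlated with the prediction. In practice `correct` is unknown until
# an example is annotated.
N = 10_000
pred_acc = rng.beta(8, 2, size=N)
correct = (rng.random(N) < pred_acc).astype(float)

# 1) Stratification: k-means clustering on the predicted model performance.
K = 5
strata = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(
    pred_acc.reshape(-1, 1)
)

# 2) Sampling within strata (proportional allocation) and
# 3) the stratified estimate of overall accuracy.
n = 500  # total annotation budget
est = 0.0
for k in range(K):
    idx = np.flatnonzero(strata == k)
    n_k = max(1, round(n * len(idx) / N))
    sample = rng.choice(idx, size=n_k, replace=False)
    est += (len(idx) / N) * correct[sample].mean()

# Model-assisted (difference) estimator: start from the mean prediction over
# the whole population and correct it with the labeled residuals.
labeled = rng.choice(N, size=n, replace=False)
diff_est = pred_acc.mean() + (correct[labeled] - pred_acc[labeled]).mean()

# Baseline: simple random sampling with the same budget.
srs = correct[rng.choice(N, size=n, replace=False)].mean()
print(f"true={correct.mean():.3f}  stratified={est:.3f}  "
      f"model-assisted={diff_est:.3f}  SRS={srs:.3f}")
```

Repeating this over many random draws would show the variance reduction relative to simple random sampling; the reported efficiency gains compare exactly these estimator variances at a fixed budget.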

Riccardo Fogliato, Pratik Patil, Mathew Monfort, Pietro Perona • 2024

Related benchmarks

| Task | Dataset | Relative Error vs. Random | Rank |
|---|---|---|---|
| Classification Model Monitoring | MNIST | 0.95 | 10 |
| Classification Model Monitoring | CIFAR10 | 1.01 | 6 |
| Classification Model Monitoring | Proprietary | 2.43 | 6 |
| Classification Model Monitoring | BCW | 1.24 | 5 |
| Classification Model Monitoring | Credit default | 1.11 | 3 |
