
Label-Efficient Monitoring of Classification Models via Stratified Importance Sampling

About

Monitoring the performance of classification models in production is critical yet challenging due to strict labeling budgets, one-shot batch acquisition of labels, and extremely low error rates. We propose a general framework based on Stratified Importance Sampling (SIS) that directly addresses these constraints in model monitoring. While SIS has previously been applied in specialized domains, our theoretical analysis establishes its broad applicability to the monitoring of classification models. Under mild conditions, SIS yields unbiased estimators with strict finite-sample mean squared error (MSE) improvements over both importance sampling (IS) and stratified random sampling (SRS). The framework does not rely on optimally defined proposal distributions or strata: even with noisy proxies and sub-optimal stratification, SIS can improve estimator efficiency compared to IS or SRS individually, though extreme proposal mismatch may limit these gains. Experiments across binary and multiclass tasks demonstrate consistent efficiency improvements under fixed label budgets, underscoring SIS as a principled, label-efficient, and operationally lightweight methodology for post-deployment model monitoring.
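To make the idea concrete, here is a minimal sketch of a stratified importance sampling estimator of a model's error rate under a fixed label budget. All names, constants, and the proportional budget allocation are illustrative assumptions, not the paper's exact procedure: the proxy score stands in for any noisy predictor of misclassification (e.g. model confidence), strata are formed by quantiles of that score, and within each stratum labels are drawn with probability proportional to the proxy and reweighted by standard importance weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic monitoring population (illustrative, not the paper's data) ---
N = 100_000
proxy = rng.beta(0.5, 10, size=N)                # noisy predicted error probability per example
errors = (rng.random(N) < proxy).astype(float)   # 0/1 misclassification labels (unobserved in practice)
true_rate = errors.mean()

budget = 500  # one-shot labeling budget

# --- Stratified Importance Sampling estimate of the error rate ---
K = 5  # number of strata, formed by proxy-score quantiles
edges = np.quantile(proxy, np.linspace(0.0, 1.0, K + 1))
strata = np.digitize(proxy, edges[1:-1])         # stratum index in {0, ..., K-1} per example

est = 0.0
for k in range(K):
    idx = np.flatnonzero(strata == k)            # sorted indices of this stratum's members
    W_k = len(idx) / N                           # stratum weight (population fraction)
    n_k = max(1, round(budget * W_k))            # proportional allocation (one simple choice)

    q = proxy[idx] + 1e-6                        # within-stratum proposal ∝ proxy score
    q /= q.sum()
    sampled = rng.choice(idx, size=n_k, replace=True, p=q)

    # Importance weights p/q, with target p uniform over the stratum.
    pos = np.searchsorted(idx, sampled)          # positions of sampled items within idx
    w = (1.0 / len(idx)) / q[pos]

    est += W_k * np.mean(w * errors[sampled])    # unbiased within-stratum error estimate

print(f"SIS estimate: {est:.4f}  (true rate: {true_rate:.4f})")
```

Because each within-stratum term is an unbiased importance sampling estimate of that stratum's error rate, the weighted sum is unbiased for the overall rate; the stratification caps the damage a poor proposal can do to any single stratum, which is the intuition behind the robustness claims above.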

Lupo Marsigli, Angel Lopez de Haro • 2026

Related benchmarks

| Task                            | Dataset        | Relative Error vs. Random | Rank |
|---------------------------------|----------------|---------------------------|------|
| Classification Model Monitoring | MNIST          | 8.97                      | 10   |
| Classification Model Monitoring | CIFAR10        | 1.59                      | 6    |
| Classification Model Monitoring | Proprietary    | 4.53                      | 6    |
| Classification Model Monitoring | BCW            | 1.65                      | 5    |
| Classification Model Monitoring | Digits         | 2.2                       | 4    |
| Classification Model Monitoring | Credit default | 1.42                      | 3    |
