Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning
About
Data Shapley has recently been proposed as a principled framework to quantify the contribution of individual data points in machine learning. It can effectively identify helpful or harmful data points for a learning algorithm. In this paper, we propose Beta Shapley, a substantial generalization of Data Shapley. Beta Shapley arises naturally by relaxing the efficiency axiom of the Shapley value, which is not critical for machine learning settings. Beta Shapley unifies several popular data valuation methods and includes Data Shapley as a special case. Moreover, we prove that Beta Shapley has several desirable statistical properties and propose efficient algorithms to estimate it. We demonstrate that Beta Shapley outperforms state-of-the-art data valuation methods on several downstream ML tasks, such as 1) detecting mislabeled training data, 2) learning with subsamples, and 3) identifying points whose addition or removal has the largest positive or negative impact on the model.
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Binary Classification | Heart | -- | 17 |
| Label Noise Identification | MNIST (train) | AUC 0.845 | 15 |
| Faithfulness Evaluation | AG-News | Rate of Label Changes 20 | 12 |
| Faithfulness Evaluation | SST-2 | Rate of Label Changes 28 | 12 |
| Faithfulness Evaluation | IMDB | Rate of Label Changes 34 | 12 |
| High-value data removal | CIFAR10 binarized (test) | -- | 11 |
| Multiclass Classification | Wine | R500 68 | 9 |
| Regression | DIABETES subsampled to 300 (train) | R500 0.99 | 9 |
| Regression | AMES HOUSING subsampled to 300 (train) | R500 0.99 | 9 |
| Multiclass Classification | DIGITS subsampled to 100 (train) | R500 75 | 9 |