Predicting Neural Network Accuracy from Weights
About
We show experimentally that the accuracy of a trained neural network can be predicted surprisingly well by looking only at its weights, without evaluating it on input data. We motivate this task and introduce a formal setting for it. Even when using simple statistics of the weights, the predictors are able to rank neural networks by their performance with very high accuracy (R² score above 0.98). Furthermore, the predictors are able to rank networks trained on different, unobserved datasets and with different architectures. We release a collection of 120k convolutional neural networks trained on four different datasets to encourage further research in this area, with the goal of understanding network training and performance better.
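The core idea can be sketched in a few lines: summarize each layer's weights with simple statistics (e.g. mean, standard deviation, quartiles) and use the concatenated statistics as the feature vector for a regressor that predicts test accuracy. A minimal sketch of the feature-extraction step, using made-up toy weights (the paper trains a learned predictor on 120k networks; the function name and the exact statistic set here are illustrative assumptions):

```python
import statistics

def weight_features(layers):
    """Summarize each layer's flat weight list with simple statistics.

    The feature vector concatenates per-layer mean, standard deviation,
    and rough quartiles -- a simplified stand-in for the weight
    statistics described in the paper.
    """
    feats = []
    for w in layers:
        ws = sorted(w)
        n = len(ws)
        feats.extend([
            statistics.fmean(w),     # mean
            statistics.pstdev(w),    # population standard deviation
            ws[n // 4],              # ~25th percentile
            ws[n // 2],              # median
            ws[(3 * n) // 4],        # ~75th percentile
        ])
    return feats

# toy example: two "layers" of weights
layers = [[0.1, -0.2, 0.05, 0.3], [0.01, 0.02, -0.01]]
print(weight_features(layers))  # 5 statistics per layer -> 10 features
```

In the paper's setting, vectors like these (one per trained network) are paired with the networks' measured test accuracies to fit the accuracy predictor.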
Thomas Unterthiner, Daniel Keysers, Sylvain Gelly, Olivier Bousquet, Ilya Tolstikhin • 2020
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Adapter Retrieval | Mixed Adapter Pool (ARC-C, BoolQ, GSM8K, MBPP) LoRA (test) | Average Score | 27.82 | 6 |
| Attribute Classification | CelebA-LoRA SD1.5 | Macro F1 | 11.51 | 6 |
| Generalization prediction | SmallCNN Zoo CIFAR-10-GS ReLU (test) | Kendall's Tau | 0.914 | 6 |
| Generalization prediction | SmallCNN Zoo SVHN-GS ReLU (test) | Kendall's Tau | 0.8463 | 6 |
| Generalization prediction | SmallCNN Zoo CIFAR-10-GS Tanh (test) | Kendall's Tau | 0.914 | 6 |
| Generalization prediction | SmallCNN Zoo SVHN-GS Tanh (test) | Kendall's Tau | 0.844 | 6 |
| Generalization prediction | SmallCNN Zoo CIFAR-10-GS both activations (ReLU, Tanh) (test) | Kendall's Tau | 0.915 | 6 |
| Attribute Classification | GoEmotions LoRA | Macro F1 | 2.87 | 5 |
| Attribute Classification | CUB-LoRA | Macro F1 | 7.44 | 5 |
| Attribute Classification | CelebA-LoRA | Macro F1 | 7.73 | 5 |
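Several of the benchmarks above score predictors by Kendall's Tau, a rank correlation in [-1, 1] that counts concordant minus discordant pairs between the predicted and the true ordering. A minimal sketch of the tie-free variant, on made-up accuracy values (real evaluations typically use a library implementation such as `scipy.stats.kendalltau`):

```python
def kendall_tau(x, y):
    """Kendall rank correlation: (concordant - discordant) pairs,
    normalized by the total number of pairs. No tie correction."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in x and y
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# predicted vs. true test accuracies of five hypothetical networks
pred = [0.61, 0.72, 0.55, 0.80, 0.68]
true = [0.60, 0.70, 0.58, 0.82, 0.69]
print(kendall_tau(pred, true))  # 1.0: the predicted ranking is exact
```

A tau of 0.914, as in the CIFAR-10-GS rows, means the vast majority of network pairs are ranked in the correct relative order by the predictor.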