AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning

About

Deep neural networks have seen great success in recent years; however, training a deep model is often challenging because its performance heavily depends on the hyper-parameters used. In addition, finding the optimal hyper-parameter configuration, even with state-of-the-art (SOTA) hyper-parameter optimization (HPO) algorithms, can be time-consuming, requiring multiple training runs over the entire dataset for different candidate sets of hyper-parameters. Our central insight is that using an informative subset of the dataset for the model training runs involved in hyper-parameter optimization allows us to find the optimal hyper-parameter configuration significantly faster. In this work, we propose AUTOMATA, a gradient-based subset selection framework for hyper-parameter tuning. We empirically evaluate the effectiveness of AUTOMATA through several experiments on real-world datasets in the text, vision, and tabular domains. Our experiments show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times, with speedups of 3×–30×, while achieving performance comparable to the hyper-parameters found using the entire dataset.
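
To make the idea concrete, here is a minimal, hypothetical sketch of gradient-matching subset selection inside an HPO loop. It uses a toy logistic-regression problem so per-example gradients have a closed form; the helper names (`per_example_grads`, `select_subset`, `train`), the greedy matching rule, and the one-shot (static) subset are illustrative simplifications, not the paper's actual algorithm.

```python
# Sketch: pick a small subset whose gradients approximate the full-data
# gradient, then run every hyper-parameter trial on that subset only.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: logistic regression, so per-example gradients are closed-form.
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def per_example_grads(w, X, y):
    """Per-example gradients of the logistic loss w.r.t. w, shape (n, d)."""
    z = np.clip(X @ w, -30.0, 30.0)          # avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))
    return (p - y)[:, None] * X

def select_subset(grads, k):
    """Greedily pick k examples whose mean gradient best matches the
    full-data mean gradient (a simplified stand-in for gradient-matching
    selection; the real method uses weighted, adaptive selection)."""
    target = grads.mean(axis=0)
    chosen, residual = [], target.copy()
    for _ in range(k):
        scores = grads @ residual            # alignment with the residual
        scores[chosen] = -np.inf             # never pick twice
        chosen.append(int(np.argmax(scores)))
        residual = target - grads[chosen].mean(axis=0)
    return np.array(chosen)

def train(w0, Xs, ys, lr, steps=200):
    """Plain full-batch gradient descent on the given (sub)set."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * per_example_grads(w, Xs, ys).mean(axis=0)
    return w

# HPO over the learning rate: each trial trains on a 10% subset only.
w0 = np.zeros(d)
subset = select_subset(per_example_grads(w0, X, y), k=n // 10)
best_lr, best_acc = None, -1.0
for lr in [0.01, 0.1, 1.0]:
    w = train(w0, X[subset], y[subset], lr)
    acc = np.mean((X @ w > 0) == y)          # validation proxy
    if acc > best_acc:
        best_lr, best_acc = lr, acc
print(f"best lr found on subset: {best_lr} (acc {best_acc:.3f})")
```

Because each trial touches only the 10% subset, every training run in the search loop does roughly a tenth of the gradient work, which is where turnaround-time gains of the kind reported above come from.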

Krishnateja Killamsetty, Guttu Sai Abhishek, Aakriti, Alexandre V. Evfimievski, Lucian Popa, Ganesh Ramakrishnan, Rishabh Iyer • 2022

Related benchmarks

Task                         | Dataset                       | Result          | Rank
Image Classification        | FMNIST                        | Speedup 5.24    | 21
Image Classification        | CIFAR10                       | Speedup 2.2     | 18
Neural Architecture Search  | NAS-Bench-101 CIFAR-10 (test) | --              | 18
Image Classification        | Tiny-ImageNet                 | Speedup 2.08    | 14
Image Classification        | CIFAR100                      | Speedup 0.84    | 11
Subset Selection             | fMNIST (train)                | Speedup 5.24    | 10
Image Classification        | Caltech-256                   | Speedup 2.07    | 9
Hyper-parameter optimization | CIFAR10 (test)                | Test Error 3.39 | 8
