Randomized Sharpness-Aware Training for Boosting Computational Efficiency in Deep Learning
About
By driving models toward flat minima, sharpness-aware learning algorithms such as SAM have achieved state-of-the-art performance. However, these algorithms generally incur an extra forward-backward pass at every training iteration, which substantially increases computation, especially for large models. To this end, we propose a simple yet efficient training scheme called Randomized Sharpness-Aware Training (RST). At each iteration, the RST optimizer performs a Bernoulli trial to choose randomly between a base algorithm (SGD) and a sharpness-aware algorithm (SAM), with a probability set by a predefined scheduling function. Because base steps are mixed in, the total number of forward-backward pairs can be greatly reduced. We also provide a theoretical analysis of the convergence of RST. We then empirically study the computation cost and effect of various types of scheduling functions, and offer guidance on choosing appropriate ones. Finally, we extend RST to a general framework (G-RST), in which the degree of sharpness regularization can be adjusted freely for any scheduling function. We show that G-RST outperforms SAM in most cases while saving 50% of the extra computation cost.
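The per-iteration Bernoulli choice between an SGD step and a SAM step can be sketched on a toy objective. This is a minimal illustration, not the paper's implementation: the quadratic loss, the linear `schedule`, and the hyperparameters `lr` and `rho` are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective f(w) = 0.5 * ||w||^2, whose gradient is simply w.
def grad(w):
    return w

def sgd_step(w, lr):
    # Plain SGD: one gradient evaluation (one forward-backward pair).
    return w - lr * grad(w)

def sam_step(w, lr, rho):
    # SAM: ascend to the worst-case point within radius rho, then
    # descend using the gradient there (two gradient evaluations).
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return w - lr * grad(w + eps)

def schedule(t, T):
    # Hypothetical linear schedule: the probability of taking a SAM
    # step grows from 0 to 1 over the course of training.
    return t / T

def rst_train(w, T=100, lr=0.1, rho=0.05):
    sam_count = 0
    for t in range(T):
        # Bernoulli trial: pick SAM with probability schedule(t, T).
        if rng.random() < schedule(t, T):
            w = sam_step(w, lr, rho)
            sam_count += 1
        else:
            w = sgd_step(w, lr)
    return w, sam_count

w0 = np.array([3.0, -2.0])
w_final, n_sam = rst_train(w0.copy())
```

With a linear schedule, roughly half the iterations take the cheap SGD step, which is where the saving in extra forward-backward pairs comes from.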
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 70.33 | 1460 |
| Question Answering | OpenBookQA | Accuracy | 36.2 | 465 |
| Natural Language Inference | RTE | Accuracy | 72.56 | 367 |
| Boolean Question Answering | BoolQ | Accuracy | 79.24 | 307 |
| Science Question Answering | ARC Challenge | Accuracy | 43.94 | 234 |
| Natural Language Understanding | GLUE (test dev) | MRPC Accuracy | 92.39 | 81 |
| Multiple-choice Question Answering | MMLU | STEM Accuracy | 50.52 | 13 |
| Linguistic Acceptability | COLA | Max Memory (MB) | 3.32e+3 | 5 |
| Natural Language Inference | MNLI | Max Memory (MB) | 8.08e+3 | 5 |
| Fine-tuning | Open-Platypus | Max Memory (MB) | 5.13e+4 | 4 |