
Exploiting Explainable Metrics for Augmented SGD

About

Explaining the generalization characteristics of deep learning is an emerging topic in advanced machine learning. There are several unanswered questions about how learning under stochastic optimization really works and why certain strategies are better than others. In this paper, we address the following question: can we probe intermediate layers of a deep neural network to identify and quantify the learning quality of each layer? With this question in mind, we propose new explainability metrics that measure the redundant information in a network's layers using a low-rank factorization framework and quantify a complexity measure that is highly correlated with the generalization performance of a given optimizer, network, and dataset. We subsequently exploit these metrics to augment the Stochastic Gradient Descent (SGD) optimizer by adaptively adjusting the learning rate in each layer to improve generalization performance. Our augmented SGD -- dubbed RMSGD -- introduces minimal computational overhead compared to SOTA methods and outperforms them by exhibiting strong generalization characteristics across applications, architectures, and datasets.
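The core idea -- probing each layer's redundancy via low-rank factorization and scaling that layer's learning rate accordingly -- can be illustrated with a minimal sketch. This is not the authors' RMSGD implementation; the energy threshold, the flattening of convolutional kernels, and the learning-rate mapping are all simplifying assumptions chosen for illustration.

```python
import numpy as np

def low_rank_energy(weight, energy_threshold=0.99):
    """Fraction of singular values needed to capture most of a layer's
    spectral energy -- a rough proxy for how non-redundant the layer is."""
    # Flatten conv kernels (out_ch, in_ch, k, k) into a 2-D matrix.
    mat = weight.reshape(weight.shape[0], -1)
    s = np.linalg.svd(mat, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    rank_needed = int(np.searchsorted(energy, energy_threshold) + 1)
    return rank_needed / len(s)  # near 1.0 => little redundancy

def per_layer_lr(base_lr, layer_weights, min_scale=0.1):
    """Assign each layer its own learning rate: layers whose weights are
    already effectively low-rank (redundant) take smaller steps."""
    return [base_lr * max(min_scale, low_rank_energy(w))
            for w in layer_weights]

# Example: a dense layer and a conv layer with random weights.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 8)),        # dense: 16x8
           rng.standard_normal((4, 4, 3, 3))]   # conv: 4 out, 4 in, 3x3
lrs = per_layer_lr(0.1, weights)
```

In a training loop, these per-layer rates would be recomputed periodically (e.g. each epoch) and applied to the corresponding parameter groups of an SGD optimizer.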

Mahdi S. Hosseini, Mathieu Tuli, Konstantinos N. Plataniotis • 2022

Related benchmarks

Task                  Dataset                          Result                   Rank
Image Classification  CIFAR10 (test)                   Accuracy 96.42           585
Image Classification  CIFAR100 (test)                  Top-1 Accuracy 80.36     377
Image Classification  ImageNet (test)                  --                       235
Image Classification  CIFAR100 (test)                  Test Accuracy 80.36      147
Image Classification  CIFAR100 without Cutout (test)   Accuracy 79.59           45
Image Classification  CIFAR10 without Cutout (test)    Accuracy 95.71           45
Image Classification  MHIST (test)                     Accuracy 94.27           36
Image Classification  ADP (test)                       Accuracy 82.58           18
