
Regularizing Neural Networks by Penalizing Confident Output Distributions

About

We systematically explore regularizing neural networks by penalizing low-entropy output distributions. We show that this penalty, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect the maximum-entropy-based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on six common benchmarks: image classification (MNIST and CIFAR-10), language modeling (Penn Treebank), machine translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.
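For concreteness, below is a minimal PyTorch sketch of the two regularizers the abstract compares; the function names and the beta/eps values are illustrative choices, not taken from the paper. The confidence penalty subtracts beta times the entropy H(p_theta) of the model's output distribution from the negative log-likelihood. Since H(p_theta) = log K - KL(p_theta || u) for the uniform distribution u over K classes, this is equivalent (up to a constant) to adding beta * KL(p_theta || u), whereas label smoothing adds eps * KL(u || p_theta): the same divergence with its arguments reversed, which is the KL-direction connection the abstract refers to.

```python
import torch
import torch.nn.functional as F

def confidence_penalty_loss(logits, targets, beta=0.1):
    # NLL minus beta * H(p_theta): low-entropy (overconfident)
    # output distributions are penalized. beta=0.1 is illustrative.
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p, targets)
    entropy = -(log_p.exp() * log_p).sum(dim=-1).mean()
    return nll - beta * entropy

def label_smoothing_loss(logits, targets, eps=0.1):
    # Cross-entropy against targets mixed with the uniform
    # distribution u: q = (1 - eps) * one_hot + eps * u.
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p, targets)
    uniform_ce = -log_p.mean(dim=-1).mean()  # cross-entropy with u
    return (1.0 - eps) * nll + eps * uniform_ce

# Usage on random data (hypothetical shapes):
logits = torch.randn(32, 10)           # batch of 32, 10 classes
targets = torch.randint(0, 10, (32,))
print(confidence_penalty_loss(logits, targets))
print(label_smoothing_loss(logits, targets))
```

In both sketches the extra term pulls p_theta toward the uniform distribution; only the direction of the implied KL term distinguishes them.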

Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, Geoffrey Hinton • 2017

Related benchmarks

Task | Dataset | Metric | Result | Rank
Semantic Segmentation | PASCAL VOC 2012 (val) | Mean IoU | 71.16 | 2040
Fine-grained Image Classification | CUB200 2011 (test) | Accuracy | 73.51 | 536
Fine-grained Image Classification | Stanford Cars (test) | Accuracy | 73.78 | 348
Image Classification | ImageNet LT | Top-1 Accuracy | 37.69 | 251
Machine Translation | IWSLT De-En 2014 (test) | BLEU | 34.2 | 146
Fine-grained Image Classification | Stanford Dogs (test) | Accuracy | 74.41 | 117
Out-of-Distribution Detection | CIFAR-10 vs CIFAR-100 (test) | -- | -- | 93
Machine Translation | IWSLT En-De 2014 (test) | BLEU | 27.9 | 92
Text Classification | 20 Newsgroups (test) | Accuracy | 66.48 | 71
Object Detection | PASCAL VOC to Water Color (test) | mAP | 39.4 | 64

Showing 10 of 40 rows.
