
Improved Regularization of Convolutional Neural Networks with Cutout

About

Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code is available at https://github.com/uoguelph-mlrg/Cutout
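The core idea is simple: during training, zero out a fixed-size square patch at a random location in each input image. A minimal NumPy sketch of this masking step is below; the function name `cutout` and the choice to sample the patch center uniformly over the image (so the mask may be partially clipped at the borders, as described in the paper) are illustrative, not the authors' exact code:

```python
import numpy as np

def cutout(image, mask_size, rng=None):
    """Zero out a square region of `image` at a random location.

    The patch center is sampled uniformly over the image, so the
    mask may extend past the borders and be clipped; this keeps the
    expected fraction of masked pixels lower near the edges.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    cy = int(rng.integers(0, h))  # patch center, y
    cx = int(rng.integers(0, w))  # patch center, x
    half = mask_size // 2
    y1, y2 = max(0, cy - half), min(h, cy + half)
    x1, x2 = max(0, cx - half), min(w, cx + half)
    out = image.copy()
    out[y1:y2, x1:x2] = 0  # mask the square region
    return out

# Example: apply a 16x16 cutout to a 32x32 RGB image (CIFAR-sized)
img = np.ones((32, 32, 3), dtype=np.float32)
augmented = cutout(img, mask_size=16, rng=np.random.default_rng(0))
```

In practice this would be applied per-image as a data-augmentation step after normalization, alongside standard augmentations such as random crops and horizontal flips.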

Terrance DeVries, Graham W. Taylor • 2017

Related benchmarks

Task                  Dataset                      Result                  Rank
Image Classification  CIFAR-100 (test)             Accuracy: 82.33         3518
Image Classification  CIFAR-10 (test)              Accuracy: 97.44         3381
Image Classification  ImageNet-1k (val)            --                      1453
Image Classification  ImageNet (val)               --                      1206
Image Classification  CIFAR-10 (test)              Accuracy: 96.92         906
Object Detection      PASCAL VOC 2007 (test)       mAP: 75.1               821
Image Classification  CIFAR-10                     --                      471
Image Classification  ImageNet ILSVRC-2012 (val)   Top-1 Accuracy: 76.52   405
Image Classification  SVHN (test)                  --                      362
Image Classification  STL-10 (test)                Accuracy: 54.32         357

Showing 10 of 113 rows.
