
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

About

Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on our PReLU networks (PReLU-nets), we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66%). To our knowledge, our result is the first to surpass human-level performance (5.1%, Russakovsky et al.) on this visual recognition challenge.
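The two ideas in the abstract are compact enough to sketch directly. Below is a minimal NumPy illustration of (a) the PReLU activation, f(x) = x for x > 0 and f(x) = a·x otherwise, with a learnable slope a, and (b) the paper's rectifier-aware weight initialization, which scales the Gaussian standard deviation by sqrt(2 / ((1 + a²)·fan_in)). The function names and the fixed random seed are choices made here for illustration, not from the paper.

```python
import numpy as np

def prelu(x, a):
    # PReLU: identity for positive inputs, slope `a` for negative inputs.
    # a = 0 recovers ReLU; a is learned per channel in the paper.
    return np.where(x > 0, x, a * x)

def he_init(fan_in, fan_out, a=0.0, rng=None):
    # Rectifier-aware Gaussian initialization:
    #   std = sqrt(2 / ((1 + a^2) * fan_in))
    # With a = 0 this is the ReLU case, std = sqrt(2 / fan_in).
    rng = rng if rng is not None else np.random.default_rng(0)
    std = np.sqrt(2.0 / ((1.0 + a * a) * fan_in))
    return rng.normal(0.0, std, size=(fan_in, fan_out))
```

For example, `prelu(np.array([-2.0, 3.0]), 0.25)` yields `[-0.5, 3.0]`: the positive input passes through unchanged while the negative one is scaled by the slope.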

Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun • 2015

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | Accuracy 81.4 | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy 97.27 | 3381 |
| Semantic Segmentation | ADE20K (val) | mIoU 38.8 | 2731 |
| Object Detection | COCO 2017 (val) | -- | 2454 |
| Image Classification | ImageNet-1k (val) | -- | 1453 |
| Image Classification | ImageNet (val) | -- | 1206 |
| Instance Segmentation | COCO 2017 (val) | APm 0.364 | 1144 |
| Graph Classification | PROTEINS | Accuracy 76.7 | 742 |
| Graph Classification | MUTAG | Accuracy 91.7 | 697 |
| Graph Classification | NCI1 | Accuracy 82.9 | 460 |

Showing 10 of 42 rows.
