
Deep Learning using Rectified Linear Units (ReLU)

About

The Rectified Linear Unit (ReLU) is a foundational activation function in artificial neural networks. Recent literature frequently misattributes its origin to the 2018 (initial) version of this paper, which exclusively investigated ReLU at the classification layer. This paper formally corrects the citation record by tracing the mathematical lineage of piecewise linear functions from early biological models to their definitive integration into deep learning by Nair & Hinton (2010). Alongside this historical rectification, we present a comprehensive empirical comparison of the ReLU, Hyperbolic Tangent (Tanh), and Logistic (Sigmoid) activation functions across image classification, text classification, and image reconstruction tasks. To ensure statistical robustness, we evaluated these functions over 10 independent randomized trials and assessed significance with the non-parametric Kruskal-Wallis $H$ test. The empirical data validates the theoretical limitations of saturating functions: Sigmoid failed to converge in deep convolutional vision tasks due to the vanishing gradient problem, yielding accuracies no better than chance, whereas ReLU and Tanh exhibited stable convergence. ReLU achieved the highest mean accuracy and F1-score on the image classification and text classification tasks, while Tanh yielded the highest peak signal-to-noise ratio (PSNR) in image reconstruction. Ultimately, this study confirms a statistically significant performance variance among activations, reaffirming the necessity of non-saturating functions in deep architectures, and restores proper historical attribution to prior literature.
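The vanishing-gradient behavior the abstract describes can be checked numerically. The sketch below (not from the paper, just an illustration using the standard definitions) compares the local derivatives of ReLU, Sigmoid, and Tanh as the pre-activation grows: the saturating functions' gradients collapse toward zero, while ReLU's stays at 1 for any positive input.

```python
import math

def sigmoid(x):
    """Logistic function: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def relu_grad(x):
    """ReLU derivative: 1 for x > 0, else 0 -- never shrinks for active units."""
    return 1.0 if x > 0 else 0.0

def sigmoid_grad(x):
    """Sigmoid derivative s(x) * (1 - s(x)); peaks at 0.25, vanishes as |x| grows."""
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh_grad(x):
    """Tanh derivative 1 - tanh(x)^2; also vanishes as |x| grows."""
    return 1.0 - math.tanh(x) ** 2

for x in (0.5, 5.0, 10.0):
    print(f"x={x:5.1f}  relu'={relu_grad(x):.1f}  "
          f"sigmoid'={sigmoid_grad(x):.2e}  tanh'={tanh_grad(x):.2e}")
```

In a deep network these local derivatives are multiplied layer by layer during backpropagation, so a sigmoid gradient of ~4.5e-05 at x = 10 shrinks the error signal by that factor at every such layer, which is why the paper observes Sigmoid failing to converge in the deeper convolutional models.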

Abien Fred Agarap · 2018

Related benchmarks

Task                  Dataset                             Result             Rank
Language Modeling     WikiText-103 (test)                 Perplexity 15.73   579
Image Classification  CIFAR100-LT (test)                  --                 45
Image Classification  CIFAR-100 LT (50:1 ratio) (test)    Loss 4.448         15
Image Classification  CIFAR-100 LT (500:1 ratio) (test)   Loss 5.711         15
Image Classification  CIFAR-100 LT (10:1 ratio) (test)    Loss 2.823         15
Image Classification  CIFAR-100-LT (100:1 ratio) (test)   Loss 5.436         15
