
Full-Capacity Unitary Recurrent Neural Networks

About

Recurrent neural networks are powerful models for processing sequential data, but they are generally plagued by vanishing and exploding gradient problems. Unitary recurrent neural networks (uRNNs), which use unitary recurrence matrices, have recently been proposed as a means to avoid these issues. However, in previous experiments, the recurrence matrices were restricted to be a product of parameterized unitary matrices, and an open question remains: when does such a parameterization fail to represent all unitary matrices, and how does this restricted representational capacity limit what can be learned? To address this question, we propose full-capacity uRNNs that optimize their recurrence matrix over all unitary matrices, leading to significantly improved performance over uRNNs that use a restricted-capacity recurrence matrix. Our contribution consists of two main components. First, we provide a theoretical argument to determine if a unitary parameterization has restricted capacity. Using this argument, we show that a recently proposed unitary parameterization has restricted capacity for hidden state dimension greater than 7. Second, we show how a complete, full-capacity unitary recurrence matrix can be optimized over the differentiable manifold of unitary matrices. The resulting multiplicative gradient step is very simple and does not require gradient clipping or learning rate adaptation. We confirm the utility of our claims by empirically evaluating our new full-capacity uRNNs on both synthetic and natural data, achieving superior performance compared to both LSTMs and the original restricted-capacity uRNNs.
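The multiplicative gradient step mentioned above can be sketched with a Cayley-transform retraction, which maps a skew-Hermitian direction back onto the unitary group so the recurrence matrix stays exactly unitary after every update. The snippet below is a minimal NumPy illustration of this idea, not the authors' implementation; the function name `full_capacity_update` and the learning rate are hypothetical.

```python
import numpy as np

def full_capacity_update(W, G, lr=0.01):
    """One multiplicative gradient step on the unitary manifold.

    W: current unitary recurrence matrix (N x N, complex).
    G: Euclidean gradient of the loss with respect to W.
    Returns an updated matrix that is exactly unitary (up to
    floating-point error), so no clipping or projection is needed.
    """
    # Skew-Hermitian descent direction: A^H = -A by construction.
    A = G @ W.conj().T - W @ G.conj().T
    I = np.eye(W.shape[0], dtype=W.dtype)
    # Cayley transform: (I + (lr/2) A)^{-1} (I - (lr/2) A) is unitary
    # whenever A is skew-Hermitian, so the product below stays unitary.
    return np.linalg.solve(I + (lr / 2) * A, (I - (lr / 2) * A) @ W)
```

Because the Cayley factor is unitary for any step size, the update preserves unitarity without gradient clipping or learning-rate adaptation, which is the practical advantage the abstract highlights.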

Scott Wisdom, Thomas Powers, John R. Hershey, Jonathan Le Roux, Les Atlas • 2016

Related benchmarks

Task | Dataset | Metric | Result | Rank
Character-level Language Modeling | Penn Treebank (test) | BPC | 1.33 | 113
Pixel-by-pixel Image Classification | Permuted Sequential MNIST (pMNIST) (test) | Accuracy | 94.1 | 79
Image Classification | permuted MNIST (pMNIST) (test) | Accuracy | 94.1 | 63
Image Classification | pixel-by-pixel MNIST (test) | Accuracy | 96.9 | 28
Permuted Pixel-by-Pixel MNIST Classification | Permuted MNIST (pMNIST) pixel-by-pixel (test) | Accuracy (Clean) | 94.1 | 25
pixel-by-pixel classification | MNIST unpermuted pixel-by-pixel (test) | Accuracy (Test) | 98.7 | 18
Log-magnitude STFT prediction | TIMIT 8kHz (val) | MSE | 14.41 | 15
Pixel-by-pixel Image Classification | MNIST ordered | Accuracy | 96.9 | 14
Pixel-by-pixel Image Classification | MNIST permuted | Accuracy | 94.1 | 14
Speech prediction | TIMIT (val) | MSE | 14.41 | 13

(Showing 10 of 18 rows.)
