
Factorization tricks for LSTM networks

About

We present two simple ways of reducing the number of parameters and accelerating the training of large Long Short-Term Memory (LSTM) networks: the first is "matrix factorization by design" of the LSTM matrix into the product of two smaller matrices, and the second is partitioning of the LSTM matrix, its inputs, and its states into independent groups. Both approaches allow us to train large LSTM networks significantly faster to near state-of-the-art perplexity while using significantly fewer RNN parameters.
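Both tricks lend themselves to a compact illustration. Below is a minimal PyTorch sketch of the two ideas, a cell whose gate matrix is the product of two smaller matrices and a cell that runs independent LSTMs over disjoint groups of the input and state (in the paper these variants are called F-LSTM and G-LSTM). The class names, the rank of 128, and the group count of 4 are illustrative choices, not taken from the authors' code.

```python
import torch
import torch.nn as nn


class FactorizedLSTMCell(nn.Module):
    """Sketch of "matrix factorization by design": the usual 4h x (i+h)
    LSTM gate matrix W is replaced by the product W2 @ W1 with inner rank r,
    cutting the weight count from 4h*(i+h) to r*(i+h) + 4h*r."""

    def __init__(self, input_size: int, hidden_size: int, rank: int):
        super().__init__()
        self.w1 = nn.Linear(input_size + hidden_size, rank, bias=False)
        self.w2 = nn.Linear(rank, 4 * hidden_size, bias=True)

    def forward(self, x, state):
        h, c = state
        # One low-rank affine map produces all four gate pre-activations.
        gates = self.w2(self.w1(torch.cat([x, h], dim=-1)))
        i, f, g, o = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)


class GroupLSTMCell(nn.Module):
    """Sketch of the partitioning trick: inputs and states are split into
    k disjoint groups, each processed by its own small LSTM cell, so the
    groups never mix and can be computed in parallel."""

    def __init__(self, input_size: int, hidden_size: int, groups: int):
        super().__init__()
        assert input_size % groups == 0 and hidden_size % groups == 0
        self.groups = groups
        self.cells = nn.ModuleList(
            [nn.LSTMCell(input_size // groups, hidden_size // groups)
             for _ in range(groups)]
        )

    def forward(self, x, state):
        h, c = state
        xs = x.chunk(self.groups, dim=-1)
        hs = h.chunk(self.groups, dim=-1)
        cs = c.chunk(self.groups, dim=-1)
        new_h, new_c = [], []
        for cell, xg, hg, cg in zip(self.cells, xs, hs, cs):
            hg, cg = cell(xg, (hg, cg))
            new_h.append(hg)
            new_c.append(cg)
        h, c = torch.cat(new_h, dim=-1), torch.cat(new_c, dim=-1)
        return h, (h, c)


# Example: one step of each cell on a batch of 8 vectors.
x = torch.randn(8, 512)
state = (torch.zeros(8, 512), torch.zeros(8, 512))
out_f, state_f = FactorizedLSTMCell(512, 512, rank=128)(x, state)
out_g, state_g = GroupLSTMCell(512, 512, groups=4)(x, state)
```

In both cases the rank r or group count k is a knob that trades parameter count and speed against perplexity.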

Oleksii Kuchaiev, Boris Ginsburg • 2017

Related benchmarks

Task              | Dataset                           | Result                | Rank
Language Modeling | One Billion Word Benchmark (test) | Test Perplexity: 23.3 | 108
Language Modeling | One Billion Word Benchmark        | Perplexity: 36        | 10

Other info

Code
