
Ultimate tensorization: compressing convolutional and FC layers alike

About

Convolutional neural networks excel in image recognition tasks, but this comes at the cost of high computational and memory complexity. To tackle this problem, [1] developed a tensor factorization framework to compress fully-connected layers. In this paper, we focus on compressing convolutional layers. We show that while the direct application of the tensor framework [1] to the 4-dimensional kernel of convolution does compress the layer, we can do better. We reshape the convolutional kernel into a tensor of higher order and factorize it. We combine the proposed approach with the previous work to compress both convolutional and fully-connected layers of a network and achieve 80x network compression rate with 1.1% accuracy drop on the CIFAR-10 dataset.
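The core idea of the abstract — reshape the 4-dimensional convolution kernel into a higher-order tensor and factorize it — can be sketched with a truncated TT-SVD. This is a minimal illustration, not the paper's exact procedure: the kernel shape (3×3, 64 input/output channels), the channel factorization 64 = 4·4·4, and the rank cap of 8 are all assumed here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
# A standard conv kernel: (height, width, C_in, C_out). Values are
# random stand-ins for trained weights.
kernel = rng.standard_normal((3, 3, 64, 64))

# Raise the tensor order by factorizing each channel dimension
# (64 = 4 * 4 * 4), as the abstract describes.
tensor = kernel.reshape(3, 3, 4, 4, 4, 4, 4, 4)

def tt_svd(t, max_rank):
    """Decompose tensor t into TT cores via sequential truncated SVDs."""
    shape = t.shape
    cores, r_prev = [], 1
    mat = t
    for n in shape[:-1]:
        # Unfold: (previous rank * current mode) x (remaining modes).
        mat = mat.reshape(r_prev * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))          # truncate to the rank cap
        cores.append(u[:, :r].reshape(r_prev, n, r))
        mat = s[:r, None] * vt[:r]         # carry the residue forward
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

cores = tt_svd(tensor, max_rank=8)
compressed = sum(c.size for c in cores)
print(compressed, kernel.size)  # the TT cores hold far fewer parameters
```

With no rank truncation the decomposition is exact; compression comes from capping the TT ranks, trading a small approximation error for a large drop in parameter count.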

Timur Garipov, Dmitry Podoprikhin, Alexander Novikov, Dmitry Vetrov • 2016

Related benchmarks

Task                 | Dataset          | Result                | Rank
Image Classification | CIFAR-10 (test)  | -                     | 906
Image Classification | MNIST (test)     | Accuracy: 99.07       | 882
Image Classification | CIFAR-100 (test) | Top-1 Acc: 62.9       | 275
Image Classification | ImageNet (val)   | Top-5 Accuracy: 85.64 | 11
