Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks

About

Unstructured pruning reduces the memory footprint of deep neural networks (DNNs). Recently, researchers have proposed different types of structural pruning that aim to also reduce computational complexity. In this work, we first suggest a new measure called mask-diversity, which correlates with the expected accuracy of the different types of structural pruning. We focus on the recently suggested N:M fine-grained block sparsity mask, in which each block of M weights contains at least N zeros. While N:M fine-grained block sparsity allows acceleration on modern hardware, it can be used only to accelerate the inference phase. To allow similar acceleration in the training phase, we suggest a novel transposable fine-grained sparsity mask, where the same mask can be used for both forward and backward passes. Our transposable mask guarantees that both the weight matrix and its transpose follow the same sparsity pattern; thus, the matrix multiplication required for passing the error backward can also be accelerated. We formulate the problem of finding the optimal transposable mask as a minimum-cost flow problem. Additionally, to speed up the minimum-cost flow computation, we introduce a fast linear-time approximation that can be used when the masks change dynamically during training. Our experiments suggest a 2x speed-up in the matrix multiplications with no accuracy degradation on vision and language models. Finally, to address switching between different structure constraints, we suggest a method to convert a pre-trained model with unstructured sparsity to an N:M fine-grained block sparsity model with little to no training. A reference implementation can be found at https://github.com/papers-submission/structured_transposable_masks.
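For readers who want a concrete picture of the transposable constraint, the sketch below illustrates it for the common 2:4 case: a mask is transposable when every length-M group of every row and every column of the weight matrix keeps at most a fixed number of nonzeros, so the same mask remains a valid N:M mask for the transpose. This is a minimal NumPy illustration only; the function names and the greedy per-block heuristic are assumptions for exposition and are not the paper's minimum-cost flow algorithm or its linear-time approximation (see the linked repository for the reference implementation).

```python
# Minimal sketch (illustrative only, not the paper's min-cost flow solver).
# Assumes matrix dimensions are divisible by m. For the 2:4 case shown here,
# "keep at most 2 nonzeros per block of 4" coincides with the abstract's
# "at least N zeros per block of M" phrasing.
import numpy as np


def is_transposable_nm(mask: np.ndarray, n: int = 2, m: int = 4) -> bool:
    """True if every length-m group of every row AND every column of `mask`
    contains at most n nonzeros, i.e. the mask is N:M-valid for W and W^T."""
    rows, cols = mask.shape
    assert rows % m == 0 and cols % m == 0
    row_ok = (mask.reshape(rows, cols // m, m).sum(axis=-1) <= n).all()
    col_ok = (mask.T.reshape(cols, rows // m, m).sum(axis=-1) <= n).all()
    return bool(row_ok and col_ok)


def greedy_transposable_mask(w: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Cheap greedy stand-in for the optimal method: within each m x m block,
    keep entries in decreasing magnitude order while their row and column in
    the block still have capacity n. Always yields a transposable mask, but
    with no optimality guarantee."""
    rows, cols = w.shape
    mask = np.zeros_like(w, dtype=np.int8)
    for bi in range(0, rows, m):
        for bj in range(0, cols, m):
            block = np.abs(w[bi:bi + m, bj:bj + m])
            order = np.argsort(block, axis=None)[::-1]  # largest magnitude first
            row_cnt = np.zeros(m, dtype=int)
            col_cnt = np.zeros(m, dtype=int)
            for flat in order:
                r, c = divmod(int(flat), m)
                if row_cnt[r] < n and col_cnt[c] < n:
                    mask[bi + r, bj + c] = 1
                    row_cnt[r] += 1
                    col_cnt[c] += 1
    return mask


if __name__ == "__main__":
    w = np.random.randn(8, 8).astype(np.float32)
    mask = greedy_transposable_mask(w, n=2, m=4)
    assert is_transposable_nm(mask, n=2, m=4)  # same mask valid for W and W^T
    print("kept fraction:", mask.mean())       # roughly n / m = 0.5
```

The point of the per-block row-and-column capacity check is that masking W this way automatically masks W^T with the same N:M structure, which is what lets both the forward pass and the backward (error-propagation) matrix multiplication use the sparse hardware path.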

Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Seffi Naor, Daniel Soudry • 2021

Related benchmarks

Task                 | Dataset          | Metric         | Result | Rank
Image Classification | ImageNet (val)   | Top-1 Acc      | 77.4   | 1206
Object Detection     | COCO (val)       | mAP            | 37.84  | 613
Language Modeling    | PTB (test)       | Perplexity     | 41.6   | 471
Image Classification | ImageNet         | Top-1 Accuracy | 77.1   | 429
Language Modeling    | C4 (val)         | PPL            | 28.56  | 392
Image Classification | ImageNet         | Top-1 Accuracy | 75.53  | 324
Image Classification | ImageNet (val)   | Accuracy       | 74.75  | 300
Object Detection     | COCO             | AP50 (Box)     | 66     | 190
Question Answering   | SQuAD v1.1       | F1             | 87.12  | 79
Question Answering   | SQuAD v1.1 (val) | F1 Score       | 88.38  | 70
Showing 10 of 13 rows

Other info

Code: https://github.com/papers-submission/structured_transposable_masks
