
CondConv: Conditionally Parameterized Convolutions for Efficient Inference

About

Convolutional layers are one of the basic building blocks of modern deep neural networks. One fundamental assumption is that convolutional kernels should be shared for all examples in a dataset. We propose conditionally parameterized convolutions (CondConv), which learn specialized convolutional kernels for each example. Replacing normal convolutions with CondConv enables us to increase the size and capacity of a network, while maintaining efficient inference. We demonstrate that scaling networks with CondConv improves the performance and inference cost trade-off of several existing convolutional neural network architectures on both classification and detection tasks. On ImageNet classification, our CondConv approach applied to EfficientNet-B0 achieves state-of-the-art performance of 78.3% accuracy with only 413M multiply-adds. Code and checkpoints for the CondConv Tensorflow layer and CondConv-EfficientNet models are available at: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/condconv.

Brandon Yang, Gabriel Bender, Quoc V. Le, Jiquan Ngiam • 2019
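The core idea is that each example gets its own convolutional kernel, computed as a routed mixture of a small set of expert kernels; mixing the kernels first and then convolving once is mathematically equivalent to running every expert convolution and mixing the outputs, but far cheaper at inference. A minimal NumPy sketch of this for the 1x1-convolution case (function and parameter names here are illustrative, not the paper's code; the routing follows the paper's global-average-pool, linear, sigmoid recipe):

```python
import numpy as np

def condconv_1x1(x, experts, w_route):
    """Conditionally parameterized 1x1 convolution (illustrative sketch).

    x:        (batch, height, width, in_ch) input activations
    experts:  (num_experts, in_ch, out_ch) expert kernels
    w_route:  (in_ch, num_experts) routing-function weights
    """
    # Routing: global average pool -> linear -> sigmoid, per example.
    pooled = x.mean(axis=(1, 2))                       # (batch, in_ch)
    alpha = 1.0 / (1.0 + np.exp(-pooled @ w_route))    # (batch, num_experts)

    # Mix expert kernels into one specialized kernel per example.
    kernels = np.einsum('be,eio->bio', alpha, experts)  # (batch, in_ch, out_ch)

    # Apply each example's kernel: a 1x1 conv is a matmul over channels.
    return np.einsum('bhwi,bio->bhwo', x, kernels)
```

Because convolution is linear in the kernel, this single mixed convolution gives the same result as evaluating all `num_experts` convolutions and combining their outputs with the same routing weights, which is what makes the extra capacity cheap at inference time.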

Related benchmarks

Task                  Dataset                              Result                   Rank
Object Detection      COCO 2017 (val)                      -                        2454
Image Classification  ImageNet-1k (val)                    -                        1453
Object Detection      COCO (val)                           mAP 36.3                 613
Image Classification  ImageNet                             Top-1 Accuracy 77.6      429
Object Detection      COCO (minival)                       mAP 22.4                 184
Image Classification  ImageNet (val)                       Top-1 Accuracy 74.6      118
Face Identification   MegaFace 1M distractors 1.0 (test)   Rank-1 Accuracy 94.8     40
Image Classification  ImageNet-1k (val)                    Top-1 Accuracy 78.6      25
Image Classification  ImageNet (val)                       Top-1 Accuracy 79.9      6
