
Scaling Vision with Sparse Mixture of Experts

About

Sparsely-gated Mixture of Experts networks (MoEs) have demonstrated excellent scalability in Natural Language Processing. In Computer Vision, however, almost all performant networks are "dense", that is, every input is processed by every parameter. We present a Vision MoE (V-MoE), a sparse version of the Vision Transformer that is scalable and competitive with the largest dense networks. When applied to image recognition, V-MoE matches the performance of state-of-the-art networks while requiring as little as half of the compute at inference time. Further, we propose an extension to the routing algorithm that can prioritize subsets of each input across the entire batch, leading to adaptive per-image compute. This allows V-MoE to smoothly trade off performance and compute at test time. Finally, we demonstrate the potential of V-MoE to scale vision models, and train a 15B-parameter model that attains 90.35% on ImageNet.
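To make the "sparse" part concrete, below is a minimal sketch of top-k token routing, the mechanism an MoE layer uses in place of a dense MLP: a learned router scores each patch token against every expert, and the token is processed only by its top-k experts. This is a toy NumPy illustration; the shapes, expert count, k, and the single-linear-map "experts" are assumptions for readability, not the paper's actual configuration.

```python
# Toy sketch of sparsely-gated top-k routing over image patch tokens.
# All sizes and the expert definition are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

num_tokens, dim = 8, 16      # patch tokens from one image (toy sizes)
num_experts, k = 4, 2        # each token is sent to its top-k experts only

tokens = rng.normal(size=(num_tokens, dim))
w_gate = rng.normal(size=(dim, num_experts))                          # router weights
experts = [rng.normal(size=(dim, dim)) for _ in range(num_experts)]  # toy "experts": one linear map each

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Router: per-token distribution over experts; keep only the top-k entries.
gates = softmax(tokens @ w_gate)                 # shape (num_tokens, num_experts)
topk = np.argsort(gates, axis=-1)[:, -k:]        # indices of each token's chosen experts

# Sparse dispatch: each token is processed by its k experts and the outputs are
# combined with renormalized gate weights; the remaining experts never see it.
out = np.zeros_like(tokens)
for t in range(num_tokens):
    chosen = topk[t]
    weights = gates[t, chosen] / gates[t, chosen].sum()
    for e, w in zip(chosen, weights):
        out[t] += w * (tokens[t] @ experts[e])

print(out.shape)  # (8, 16): same shape as the input, at roughly k/num_experts of the dense compute
```

Roughly speaking, the batch-prioritized extension mentioned in the abstract additionally ranks tokens across the whole batch by their routing weight and discards the lowest-priority ones when expert capacity is exceeded, which is what lets the model spend less compute on uninformative patches and trade off accuracy against compute at test time.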

Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, Neil Houlsby • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-10 (test) | Accuracy | 98.5 | 906
Image Classification | ImageNet-1k (val) | Top-1 Acc | 90.35 | 706
Image Classification | ImageNet ILSVRC-2012 (val) | Top-1 Accuracy | 77.9 | 405
Image Classification | ImageNet-1K V1 | Top-1 Acc | 90.35 | 35
Click-Through Rate Prediction | Douyin Search | QAUC | 43 | 8
Finish-rate Prediction | Douyin Search | QAUC | 0.25 | 8
Few-shot Image Classification | ImageNet 5-shot | Accuracy (5-shot) | 78.21 | 6
Image Classification | JFT (test) | Precision@1 (JFT) | 60.62 | 6
Image Classification | JFT 300M (test) | Top-1 Accuracy | 60.62 | 4
