
One weird trick for parallelizing convolutional neural networks

About

I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.
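The trick the paper describes is hybrid parallelism: convolutional layers are data-parallel (each GPU processes a shard of the batch with replicated weights), while the fully connected layers are model-parallel (each GPU holds a shard of the weights and computes on the full batch). Below is a minimal numpy sketch of that dataflow on simulated workers; the worker count, layer sizes, and the stand-in "conv" stage (a plain matmul) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers = 2
batch, d_in, d_conv, d_fc = 8, 16, 32, 10

# Conv-stage weights are replicated on every worker (data parallelism);
# FC weights are split column-wise across workers (model parallelism).
W_conv = rng.standard_normal((d_in, d_conv))
W_fc_shards = np.split(rng.standard_normal((d_conv, d_fc)), n_workers, axis=1)

x = rng.standard_normal((batch, d_in))

# --- data-parallel "conv" stage: each worker handles a batch shard ---
x_shards = np.split(x, n_workers, axis=0)
conv_out_shards = [shard @ W_conv for shard in x_shards]

# --- exchange activations so every worker sees the full batch ---
conv_out = np.concatenate(conv_out_shards, axis=0)

# --- model-parallel FC stage: each worker computes its output columns ---
fc_out_shards = [conv_out @ W for W in W_fc_shards]
fc_out = np.concatenate(fc_out_shards, axis=1)

# Sanity check against a single-device forward pass.
ref = (x @ W_conv) @ np.concatenate(W_fc_shards, axis=1)
assert np.allclose(fc_out, ref)
```

The point of the split is communication cost: conv layers have few weights but large activations, so shipping batch shards is cheap, while FC layers have many weights but small activations, so exchanging activations at the boundary is cheaper than replicating or synchronizing the FC weights.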

Alex Krizhevsky • 2014

Related benchmarks

| Task                  | Dataset             | Metric          | Result | Rank |
|-----------------------|---------------------|-----------------|--------|------|
| Image Classification  | ImageNet 1k (train) | Top-1 Accuracy  | 57.1   | 58   |
| Perceptual Similarity | BAPPS (val)         | 2AFC (Overall)  | 68.9   | 39   |
