One weird trick for parallelizing convolutional neural networks
About
I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.
Alex Krizhevsky • 2014
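The paper's core idea is a hybrid scheme: run the convolutional layers data-parallel (each GPU processes a slice of the batch with the same weights) and the fully connected layers model-parallel (each GPU holds a slice of the weight matrix). The toy NumPy sketch below illustrates that split on a two-stage network; the shapes, names, and simulated "workers" are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Hypothetical sketch of hybrid data/model parallelism on a toy two-stage
# network, with "workers" simulated as list entries rather than real GPUs.
rng = np.random.default_rng(0)
batch, feat_in, feat_out, workers = 8, 16, 12, 4

x = rng.standard_normal((batch, feat_in))
conv_w = rng.standard_normal((feat_in, feat_in))  # stand-in for conv filters
fc_w = rng.standard_normal((feat_in, feat_out))   # fully connected weights

# Data-parallel stage: every worker applies the SAME conv weights to its
# own shard of the batch, then activations are gathered back together.
shards = np.array_split(x, workers)
conv_out = np.concatenate([s @ conv_w for s in shards])

# Model-parallel stage: each worker holds a column slice of the FC weight
# matrix and computes its slice of the output for the FULL batch.
w_slices = np.array_split(fc_w, workers, axis=1)
fc_out = np.concatenate([conv_out @ w for w in w_slices], axis=1)

# The hybrid result matches a single-device forward pass exactly.
reference = (x @ conv_w) @ fc_w
assert np.allclose(fc_out, reference)
```

The point of the split is communication cost: conv layers have few weights but large activations (cheap to replicate, expensive to shuffle), while fully connected layers are the opposite, so each stage uses the parallelism that moves less data.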
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet 1k (train) | Top-1 Accuracy | 57.1 | 58 |
| Perceptual Similarity | BAPPS (val) | 2AFC (Overall) | 68.9 | 39 |