Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction
About
We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task -- predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.
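To make the architecture concrete, here is a minimal numpy sketch of the split-brain idea: the input channels are divided into two disjoint subsets, each sub-network maps one subset to a feature volume (trained, in the paper, to predict the other subset), and the full representation is the concatenation of both sub-networks' features. The per-pixel linear `subnet` is a hypothetical stand-in for the real CNN sub-networks; shapes and seeds are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" batch: 8 samples, 3 channels (e.g. one luminance + two color
# channels, as in an Lab split), 4x4 pixels.
x = rng.normal(size=(8, 3, 4, 4))
x1, x2 = x[:, :1], x[:, 1:]  # two disjoint channel subsets

def subnet(inp, out_ch, seed):
    """Hypothetical sub-network: a per-pixel (1x1) linear map standing in
    for a real CNN. Each sub-network sees only its own channel subset."""
    w = np.random.default_rng(seed).normal(size=(out_ch, inp.shape[1]))
    return np.einsum('oc,bchw->bohw', w, inp)

# Each sub-network would be trained (training loop not shown) on a
# cross-channel prediction task: predict the other subset from its own.
f1 = subnet(x1, 16, seed=1)  # features from the x1 -> x2 prediction branch
f2 = subnet(x2, 16, seed=2)  # features from the x2 -> x1 prediction branch

# The representation for downstream transfer is the concatenation of both
# branches, so together the sub-networks cover the entire input signal.
features = np.concatenate([f1, f2], axis=1)
print(features.shape)  # (8, 32, 4, 4)
```

Concatenating the two branches is what distinguishes this from a single cross-channel predictor: neither branch alone sees the whole input, but their combined features do.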
Richard Zhang, Phillip Isola, Alexei A. Efros • 2016
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Segmentation | PASCAL VOC 2012 (val) | Mean IoU | 36 | 2040 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 35.4 | 1453 |
| Semantic Segmentation | PASCAL VOC 2012 (test) | mIoU | 36 | 1342 |
| Object Detection | PASCAL VOC 2007 (test) | mAP | 46.7 | 821 |
| Classification | PASCAL VOC 2007 (test) | mAP (%) | 67.1 | 217 |
| Semantic Segmentation | PASCAL VOC 2012 | mIoU | 36 | 187 |
| Semantic Segmentation | Pascal VOC | mIoU | 0.36 | 172 |
| Scene Classification | Places 205 categories (test) | Top-1 Acc | 34.1 | 150 |
| Image Classification | STL-10 | -- | -- | 109 |
| Scene Classification | Places-205 (val) | Top-1 Acc | 34.1 | 97 |
*Showing 10 of 30 benchmark rows.*