Cross-stitch Networks for Multi-task Learning
About
Multi-task learning in Convolutional Networks has shown remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, which do not generalize. In this paper, we propose a principled approach to learning shared representations in ConvNets using multi-task learning. Specifically, we propose a new sharing unit: the "cross-stitch" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.
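The core idea of a cross-stitch unit is a learned linear combination of same-shaped activation maps from two task networks. A minimal sketch of this recombination, with fixed (rather than learned) mixing weights purely for illustration; the class name and parameter names here are assumptions, not the authors' implementation:

```python
import numpy as np

class CrossStitchUnit:
    """Linearly recombines activations from two task networks A and B.

    The 2x2 alpha matrix weights the inputs: diagonal entries keep
    task-specific information, off-diagonal entries share information
    across tasks. In the paper these weights are learned end-to-end;
    here they are fixed constants for demonstration.
    """

    def __init__(self, alpha_same=0.9, alpha_other=0.1):
        self.alpha = np.array([[alpha_same, alpha_other],
                               [alpha_other, alpha_same]])

    def forward(self, x_a, x_b):
        # x_a, x_b: activation maps of identical shape from networks A and B.
        out_a = self.alpha[0, 0] * x_a + self.alpha[0, 1] * x_b
        out_b = self.alpha[1, 0] * x_a + self.alpha[1, 1] * x_b
        return out_a, out_b
```

With `alpha_same=1.0` and `alpha_other=0.0` the unit reduces to two independent networks; intermediate values interpolate between fully shared and fully task-specific representations.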
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic segmentation | Cityscapes (test) | mIoU | 40.3 | 1145 |
| Depth Estimation | NYU v2 (test) | -- | -- | 423 |
| Semantic segmentation | NYU v2 (test) | mIoU | 40.5 | 248 |
| Image Classification | Fashion MNIST | -- | -- | 225 |
| Surface Normal Estimation | NYU v2 (test) | Mean Angle Distance (MAD) | 15.9 | 206 |
| Semantic segmentation | NYUD v2 (test) | mIoU | 36.34 | 187 |
| Depth Estimation | NYU Depth V2 | RMSE | 0.629 | 177 |
| Semantic segmentation | NYU Depth V2 (test) | mIoU | 36.34 | 172 |
| Surface Normal Prediction | NYU V2 | Mean Error | 14.8 | 100 |
| Semantic segmentation | NYUD v2 | mIoU | 36.34 | 96 |