PolyViT: Co-training Vision Transformers on Images, Videos and Audio
About
Can we train a single transformer model capable of processing multiple modalities and datasets, whilst sharing almost all of its learnable parameters? We present PolyViT, a model trained on images, audio and video that answers this question. By co-training on different tasks of a single modality, we are able to improve the accuracy of each individual task and achieve state-of-the-art results on 5 standard video- and audio-classification datasets. Co-training PolyViT on multiple modalities and tasks leads to a model that is even more parameter-efficient and learns representations that generalize across multiple domains. Moreover, we show that co-training is simple and practical to implement: we do not need to tune hyperparameters for each combination of datasets, but can simply adapt those from standard, single-task training.
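The co-training setup described above can be sketched as a single shared transformer encoder with modality-specific tokenizers and one lightweight classification head per task, where each training step samples one task. The sketch below is a minimal illustration of that idea, not the authors' code: the class names, dataset list, input shapes, class counts and the uniform task-sampling schedule are all illustrative assumptions.

```python
# Minimal co-training sketch (illustrative, not PolyViT's implementation):
# one shared transformer encoder, per-modality tokenizers, per-task heads,
# and a loop that samples a task at each optimization step.
import random
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Transformer encoder whose parameters are shared across all tasks."""
    def __init__(self, dim=256, depth=4, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, tokens):                       # tokens: (B, N, dim)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, tokens], dim=1))
        return out[:, 0]                              # CLS representation

class PolyViTSketch(nn.Module):
    def __init__(self, num_classes_per_task, dim=256):
        super().__init__()
        self.encoder = SharedEncoder(dim)
        # Modality-specific tokenizers: image patches, video tubelets,
        # audio-spectrogram patches (shapes are illustrative).
        self.tokenizers = nn.ModuleDict({
            "image": nn.Conv2d(3, dim, kernel_size=16, stride=16),
            "video": nn.Conv3d(3, dim, kernel_size=(2, 16, 16), stride=(2, 16, 16)),
            "audio": nn.Conv2d(1, dim, kernel_size=16, stride=16),
        })
        # One small linear head per task; everything else is shared.
        self.heads = nn.ModuleDict({
            task: nn.Linear(dim, n) for task, n in num_classes_per_task.items()
        })

    def forward(self, x, modality, task):
        tokens = self.tokenizers[modality](x).flatten(2).transpose(1, 2)
        return self.heads[task](self.encoder(tokens))

# task -> (modality, num_classes, input shape); values are illustrative only.
tasks = {
    "imagenet": ("image", 1000, (3, 64, 64)),
    "kinetics400": ("video", 400, (3, 8, 64, 64)),
    "vggsound": ("audio", 309, (1, 64, 64)),
}
model = PolyViTSketch({t: n for t, (_, n, _) in tasks.items()})
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(3):  # a few dummy steps with random data
    task = random.choice(list(tasks))                # task-sampling schedule
    modality, num_classes, shape = tasks[task]
    x = torch.randn(2, *shape)                       # stand-in minibatch
    y = torch.randint(num_classes, (2,))
    loss = loss_fn(model(x, modality, task), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: task={task} loss={loss.item():.3f}")
```

Because gradients from every dataset flow through the same encoder, the shared parameters are updated by all tasks, while only the tokenizers and heads remain task- or modality-specific.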
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Action Recognition | Kinetics-400 | Top-1 Accuracy: 82.4% | 447 |
| Audio Classification | VGGSound | Top-1 Accuracy: 51.7% | 83 |
| Action Recognition | Moments in Time | Top-1 Accuracy: 38.6% | 53 |
| Audio-Visual Classification | VGGSound | -- | 33 |