On Efficient Transformer-Based Image Pre-training for Low-Level Vision
About
Pre-training has produced numerous state-of-the-art results in high-level computer vision, yet few attempts have been made to investigate how pre-training acts in image processing systems. In this paper, we tailor transformer-based pre-training regimes that boost various low-level tasks. To comprehensively diagnose the influence of pre-training, we design a whole set of principled evaluation tools that uncover its effects on internal representations. The observations demonstrate that pre-training plays strikingly different roles across low-level tasks. For example, pre-training introduces more local information to higher layers in super-resolution (SR), yielding significant performance gains, whereas it hardly affects internal feature representations in denoising, resulting in limited gains. Further, we explore different methods of pre-training, revealing that multi-related-task pre-training is more effective and data-efficient than the alternatives. Finally, we extend our study to varying data scales and model sizes, as well as comparisons between transformer- and CNN-based architectures. Based on the study, we successfully develop state-of-the-art models for multiple low-level tasks. Code is released at https://github.com/fenglinglwb/EDT.
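The idea behind multi-related-task pre-training is that one shared backbone receives gradient signal from several related low-level tasks, each with its own lightweight head. The toy NumPy sketch below illustrates only that structural idea; it is not the EDT implementation (which uses a transformer backbone), and all names (`W_shared`, `heads`, the identity-target data) are invented for illustration.

```python
# Toy sketch of multi-related-task pre-training: a single shared backbone
# (here, one linear layer) is updated by gradients flowing back from
# several task-specific heads. Hypothetical setup, NOT the paper's model.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
W_shared = rng.normal(scale=0.1, size=(dim, dim))    # shared backbone weights
heads = {t: rng.normal(scale=0.1, size=(dim, dim))   # one head per related task
         for t in ("sr", "denoise", "derain")}
lr = 0.05

def step(task, x, y):
    """One SGD step on MSE loss; the gradient updates both the task head
    and the shared backbone, so every task shapes W_shared."""
    global W_shared
    h = x @ W_shared                     # shared representation
    pred = h @ heads[task]               # task-specific prediction
    err = pred - y                       # d(loss)/d(pred), up to a constant
    grad_head = h.T @ err / len(x)
    grad_shared = x.T @ (err @ heads[task].T) / len(x)
    heads[task] -= lr * grad_head
    W_shared -= lr * grad_shared
    return float(np.mean(err ** 2))      # MSE before the update

# Alternate mini-batches from the related tasks (identity targets stand in
# for real restoration data); each task's loss decreases while all tasks
# share and refine the same backbone.
x = rng.normal(size=(32, dim))
losses = {t: [step(t, x, x) for _ in range(200)] for t in heads}
assert all(hist[-1] < hist[0] for hist in losses.values())
```

In the real setting the shared part is the transformer body and the heads are small task-specific input/output layers, but the gradient flow pictured here is the same.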
Related benchmarks
| Task | Dataset | PSNR (dB) | Rank |
|---|---|---|---|
| Super-Resolution | Set5 | 38.63 | 751 |
| Image Super-resolution | Manga109 | 40.37 | 656 |
| Super-Resolution | Urban100 | 33.80 | 603 |
| Super-Resolution | Set14 | 34.57 | 586 |
| Image Super-resolution | Set5 (test) | 38.63 | 544 |
| Image Super-resolution | Set5 | 38.63 | 507 |
| Single Image Super-Resolution | Urban100 | 34.27 | 500 |
| Image Super-resolution | Set14 | 34.80 | 329 |
| Super-Resolution | BSD100 | 32.52 | 313 |
| Super-Resolution | Manga109 | 39.93 | 298 |
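The metric in the table above is PSNR (peak signal-to-noise ratio), defined as 10·log10(MAX² / MSE). A minimal NumPy sketch of the formula follows; the benchmark numbers were of course not produced by this snippet (standard SR evaluation adds details such as Y-channel conversion and border cropping, which vary by protocol).

```python
# Minimal PSNR computation for two images of equal shape.
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a uniform error of 1 gray level on 8-bit images gives ~48.13 dB.
a = np.zeros((16, 16), dtype=np.uint8)
b = np.ones((16, 16), dtype=np.uint8)
print(round(psnr(a, b), 2))   # → 48.13
```

Higher is better: a 0.1 dB gap on Set5 or Manga109 is already a meaningful difference between competing SR models.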