
SAT: Selective Aggregation Transformer for Image Super-Resolution

About

Transformer-based approaches have revolutionized image super-resolution by modeling long-range dependencies. However, the quadratic computational complexity of vanilla self-attention poses significant challenges, often forcing a compromise between efficiency and global context. Recent window-based attention methods mitigate this by localizing computation, but at the cost of restricted receptive fields. To address these limitations, we propose the Selective Aggregation Transformer (SAT), which efficiently captures long-range dependencies and enlarges the model's receptive field by selectively aggregating the key-value matrices (reducing the number of tokens by 97%) via our Density-driven Token Aggregation algorithm, while maintaining the full resolution of the query matrix. This design significantly reduces computational cost, enabling scalable global interaction without compromising reconstruction fidelity. SAT represents each cluster with a single aggregation token, using density and isolation metrics to ensure that critical high-frequency details are preserved. Experimental results demonstrate that SAT outperforms the state-of-the-art method PFT by up to 0.22 dB while reducing total FLOPs by up to 27%.
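The core idea above can be illustrated with a minimal sketch: full-resolution queries attend to a small set of representative key-value tokens selected by density-peak-style scoring (density = neighbor count within a radius, isolation = distance to the nearest denser token). This is a hypothetical NumPy toy, not the paper's Density-driven Token Aggregation implementation; all function names and the radius parameter are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def density_peak_select(tokens, k, radius=1.0):
    """Pick k representative tokens (a toy density/isolation criterion):
    density  = number of neighbors within `radius`
    isolation = distance to the nearest token of strictly higher density
    Tokens with high density * isolation act as cluster representatives."""
    d = np.linalg.norm(tokens[:, None] - tokens[None, :], axis=-1)
    density = (d < radius).sum(axis=1)
    n = len(tokens)
    isolation = np.empty(n)
    for i in range(n):
        higher = density > density[i]
        isolation[i] = d[i, higher].min() if higher.any() else d[i].max()
    score = density * isolation
    return tokens[np.argsort(-score)[:k]]

def aggregated_attention(x, k):
    """Queries stay at full resolution (n tokens); keys/values are the
    k aggregated tokens, so the attention map is (n, k) not (n, n)."""
    kv = density_peak_select(x, k)           # (k, c) aggregation tokens
    scale = 1.0 / np.sqrt(x.shape[-1])
    attn = softmax(x @ kv.T * scale)         # (n, k) attention weights
    return attn @ kv                         # (n, c) full-resolution output

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 8))            # 256 tokens, channel dim 8
y = aggregated_attention(x, k=8)             # 8/256 KV tokens ~ 97% fewer
print(y.shape)                               # (256, 8)
```

With k fixed, the attention cost scales linearly in the number of query tokens rather than quadratically, which is the efficiency gain the abstract describes.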

Dinh Phu Tran, Thao Do, Saad Wazir, Seongah Kim, Seon Kwon Kim, Daeyoung Kim• 2026

Related benchmarks

Task                    Dataset   Metric  Result  Rank
Image Super-resolution  Manga109  PSNR    40.7    821
Image Super-resolution  Set5      PSNR    38.74   692
Image Super-resolution  Set14     PSNR    35.07   506
Image Super-resolution  Urban100  PSNR    34.92   406
Image Super-resolution  B100      PSNR    32.71   101
