Hierarchical Neural Operator Transformer with Learnable Frequency-aware Loss Prior for Arbitrary-scale Super-resolution

About

In this work, we present an arbitrary-scale super-resolution (SR) method to enhance the resolution of scientific data, which often involves complex challenges such as continuity, multi-scale physics, and the intricacies of high-frequency signals. Grounded in operator learning, the proposed method is resolution-invariant. The core of our model is a hierarchical neural operator that leverages a Galerkin-type self-attention mechanism, enabling efficient learning of mappings between function spaces. Sinc filters are used to facilitate information transfer across different levels in the hierarchy, thereby ensuring representation equivalence in the proposed neural operator. Additionally, we introduce a learnable prior structure derived from the spectral resizing of the input data. This loss prior is model-agnostic and dynamically adjusts the weighting of pixel contributions, thereby balancing gradients effectively across the model. We conduct extensive experiments on diverse datasets from different domains and demonstrate consistent improvements over strong baselines comprising state-of-the-art SR methods.
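To make the core mechanism concrete, below is a minimal PyTorch sketch of Galerkin-type self-attention in the spirit of Cao (2021), on which the abstract builds: keys and values are layer-normalized and the K^T V product is formed first, so the cost grows linearly with the number of grid points rather than quadratically. Module and parameter names here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GalerkinAttention(nn.Module):
    """Softmax-free, linear-complexity attention (Galerkin-type); a sketch."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.dk = heads, dim // heads
        self.to_qkv = nn.Linear(dim, 3 * dim, bias=False)
        # Galerkin variant: layer-normalize keys and values instead of softmax.
        self.norm_k = nn.LayerNorm(self.dk)
        self.norm_v = nn.LayerNorm(self.dk)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_points, dim) -- function values sampled on a grid.
        b, n, _ = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # Split heads: (b, heads, n, dk).
        q, k, v = (t.view(b, n, self.heads, self.dk).transpose(1, 2)
                   for t in (q, k, v))
        k, v = self.norm_k(k), self.norm_v(v)
        # Compute (K^T V) first: O(n * dk^2) instead of O(n^2 * dk),
        # i.e. linear in the number of grid points n.
        context = torch.einsum("bhnd,bhne->bhde", k, v) / n
        out = torch.einsum("bhnd,bhde->bhne", q, context)
        return self.proj(out.transpose(1, 2).reshape(b, n, -1))
```

This linear scaling is what makes attention affordable on dense scientific grids, where the number of sampled points can reach the hundreds of thousands.

The sinc filtering and the loss prior both rest on spectral resizing: on a uniform grid, ideal sinc interpolation is equivalent to zero-padding (or truncating) the centered Fourier spectrum. The sketch below pairs such a resize with a hypothetical learnable gating (`alpha`, `beta`) that turns the high-frequency residual the resize cannot recover into per-pixel loss weights; the paper's exact parameterization of the learnable prior may differ.

```python
import torch
import torch.nn as nn

def spectral_resize(x: torch.Tensor, size: tuple[int, int]) -> torch.Tensor:
    """Resize (b, c, h, w) -> (b, c, H, W) by zero-padding the centered FFT.

    On a uniform grid this realizes ideal sinc interpolation, the same
    operation the hierarchy's sinc filters perform (assumes upsampling,
    i.e. H >= h and W >= w).
    """
    b, c, h, w = x.shape
    H, W = size
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    big = torch.zeros(b, c, H, W, dtype=spec.dtype, device=x.device)
    top, left = (H - h) // 2, (W - w) // 2
    big[..., top:top + h, left:left + w] = spec
    out = torch.fft.ifft2(torch.fft.ifftshift(big, dim=(-2, -1))).real
    return out * (H * W) / (h * w)  # rescale so amplitude is preserved

class FrequencyAwareLoss(nn.Module):
    """Per-pixel L1 loss reweighted by a prior from the spectrally resized input."""

    def __init__(self):
        super().__init__()
        # Hypothetical learnable gates controlling how strongly high-frequency
        # residuals are emphasized; not the paper's exact parameterization.
        self.alpha = nn.Parameter(torch.zeros(1))
        self.beta = nn.Parameter(torch.ones(1))

    def forward(self, pred, target, lr):
        base = spectral_resize(lr, pred.shape[-2:])
        # What spectral resizing cannot recover is mostly high-frequency
        # detail; its magnitude serves as the per-pixel weighting prior.
        residual = (target - base).abs()
        weight = 1.0 + torch.sigmoid(self.alpha) * residual.pow(self.beta.clamp(min=0.1))
        return (weight * (pred - target).abs()).mean()
```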

Xihaier Luo, Xiaoning Qian, Byung-Jun Yoon • 2024

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
|------|---------|--------|-------|------|
| Image Super-resolution | Set5 | PSNR (dB) | 39.01 | 692 |
| Image Super-resolution | Urban100 | PSNR (dB) | 34.03 | 406 |
| Image Super-resolution | BSD100 | PSNR (dB) | 33.02 | 271 |
| Image Super-resolution | Set14 | PSNR (dB) | 35.02 | 115 |
| Super-Resolution | DIV2K 1.0 (val) | PSNR (dB) | 35.29 | 100 |
| Super-Resolution | Navier-Stokes (NS) 2x from PDEBench native grid (test) | RMSE | 0.0803 | 20 |
| Super-Resolution | ERA5 reanalysis 4x SR (test) | RMSE | 0.212 | 10 |
| Super-Resolution | Global Ocean Surface Velocity 4x SR (test) | RMSE | 0.0199 | 10 |
