DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training

About

Pre-training has been investigated to improve the efficiency and performance of training neural operators in data-scarce settings. However, it remains largely in its infancy due to the inherent complexity and diversity of partial differential equation (PDE) data, such as long trajectories, multiple scales, and varying dimensions. In this paper, we present a new auto-regressive denoising pre-training strategy that enables more stable and efficient pre-training on PDE data and generalizes to various downstream tasks. Moreover, by designing a flexible and scalable model architecture based on Fourier attention, we can easily scale up the model for large-scale pre-training. We train our PDE foundation model with up to 0.5B parameters on 10+ PDE datasets comprising more than 100k trajectories. Extensive experiments show that we achieve SOTA on these benchmarks and validate the strong generalizability of our model, which significantly enhances performance on diverse downstream PDE tasks such as 3D data. Code is available at https://github.com/thu-ml/DPOT.
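The core idea of auto-regressive denoising pre-training can be sketched in a few lines: corrupt the input state of a trajectory with small Gaussian noise and train the model to predict the clean next state, which tends to stabilize later auto-regressive rollouts. The sketch below is a hypothetical, minimal NumPy illustration, not the DPOT implementation: a linear least-squares "operator" stands in for the transformer, and all names (`A_hat`, `noise_scale`, the toy data) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trajectories": batches of 1D states evolving under a known linear operator.
n_traj, T, n = 32, 10, 16
A_true = np.eye(n) + 0.05 * rng.standard_normal((n, n))
traj = np.empty((n_traj, T, n))
traj[:, 0] = rng.standard_normal((n_traj, n))
for t in range(1, T):
    traj[:, t] = traj[:, t - 1] @ A_true.T

# Auto-regressive denoising objective (sketch): perturb the input frame with
# small Gaussian noise, regress the *clean* next frame.
noise_scale = 0.05
inputs = traj[:, :-1].reshape(-1, n)
targets = traj[:, 1:].reshape(-1, n)
noisy_inputs = inputs + noise_scale * rng.standard_normal(inputs.shape)

# Fit a linear stand-in "operator" by least squares (a real model would be
# trained by gradient descent on the same pairs).
A_hat, *_ = np.linalg.lstsq(noisy_inputs, targets, rcond=None)

# Auto-regressive rollout from a fresh initial condition.
u = rng.standard_normal(n)
for _ in range(5):
    u = u @ A_hat
```

The denoising perturbation acts like a form of data augmentation: the learned operator must map a neighborhood of each state to the correct next state, so small rollout errors do not compound as quickly.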

Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, Jun Zhu • 2024
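The abstract's "Fourier attention" refers to mixing spatial tokens in frequency space. As a rough intuition, the sketch below shows FFT-based token mixing in the spirit of Fourier neural operator layers; it is a hypothetical illustration, not the DPOT architecture, and the function name `fourier_mixing` and parameter `n_modes` are assumptions.

```python
import numpy as np

def fourier_mixing(u, n_modes=8):
    """Spectral token mixing (illustrative sketch): transform the 2D field to
    frequency space, keep a simple low-mode mask (ignoring negative
    frequencies along axis 0 for brevity), and transform back."""
    u_hat = np.fft.rfft2(u)              # field -> Fourier coefficients
    mask = np.zeros_like(u_hat)
    mask[:n_modes, :n_modes] = 1.0       # retain a block of low-frequency modes
    return np.fft.irfft2(u_hat * mask, s=u.shape)

field = np.random.default_rng(1).standard_normal((32, 32))
smoothed = fourier_mixing(field)
```

In a real layer, the binary mask would be replaced by learned per-mode weights, so every output token depends on every input token at the cost of an FFT rather than quadratic attention.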

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| PDE Prediction | PDEBench 2D Shallow Water Equations (SWE) (test) | Prediction Error | 0.0017 | 10 |
| Solving Diffusion Equation | PDEBench DIFF 2D (test) | Test Error | 0.0073 | 10 |
| PDE Operator Learning | CE-RPUI | EG | 53.6 | 10 |
| PDE Operator Learning | NS-PwC | EG | 17 | 10 |
| PDE Operator Learning | NS-SL | EG | 2.1 | 10 |
| PDE Operator Learning | FNS-KF | EG Score | 0.00e+0 | 10 |
| PDE Solving | PDEBench | CNS | 0.0285 | 5 |
| PDE Simulation | FNO V4 | L2 Relative Error (%) | 8 | 5 |
| PDE Simulation | PB-CNSL | L2RE (%) | 1.32 | 5 |
| PDE Simulation | PB-SWE | L2RE (%) | 5.39 | 5 |

Showing 10 of 26 rows.
