Multiple Physics Pretraining for Physical Surrogate Models

About

We introduce multiple physics pretraining (MPP), an autoregressive, task-agnostic pretraining approach for physical surrogate modeling of spatiotemporal systems with transformers. In MPP, rather than training one model on a specific physical system, we train a backbone model to predict the dynamics of multiple heterogeneous physical systems simultaneously, learning features that are broadly useful across systems and facilitate transfer. To learn effectively in this setting, we introduce a shared embedding and normalization strategy that projects the fields of multiple systems into a single shared embedding space. We validate the efficacy of our approach on both pretraining and downstream tasks over a broad fluid mechanics-oriented benchmark. We show that a single MPP-pretrained transformer matches or outperforms task-specific baselines on all pretraining sub-tasks without finetuning. For downstream tasks, we demonstrate that finetuning MPP-trained models yields more accurate multi-step predictions on systems with previously unseen physical components, or on higher-dimensional systems, than training from scratch or finetuning pretrained video foundation models. We open-source our code and model weights, trained at multiple scales, for reproducibility.
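To make the shared embedding idea above concrete, here is a minimal, illustrative sketch, not the authors' released implementation: fields from heterogeneous systems are normalized per field, projected by learned field-type vectors into one shared latent space, and passed through a single transformer backbone that predicts the next time step. The class names, dimensions, and the `SharedFieldEmbedding`/`MPPSketch` structure are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SharedFieldEmbedding(nn.Module):
    """Embeds heterogeneous physical fields into one shared latent space."""
    def __init__(self, num_field_types: int, embed_dim: int):
        super().__init__()
        # One learned vector per field type (e.g. density, vx, vy, pressure);
        # a hypothetical stand-in for the paper's shared embedding strategy.
        self.field_vectors = nn.Embedding(num_field_types, embed_dim)

    def forward(self, fields: torch.Tensor, field_ids: torch.Tensor) -> torch.Tensor:
        # fields: (batch, n_fields, n_points); field_ids: (n_fields,) integer ids
        # Per-field normalization so systems with very different scales coexist.
        mean = fields.mean(dim=-1, keepdim=True)
        std = fields.std(dim=-1, keepdim=True) + 1e-6
        normed = (fields - mean) / std
        # Scale each field-type vector by the field's value and sum over fields,
        # yielding one token per spatial point in the shared embedding space.
        vecs = self.field_vectors(field_ids)              # (n_fields, embed_dim)
        return torch.einsum("bfp,fd->bpd", normed, vecs)  # (batch, n_points, embed_dim)

class MPPSketch(nn.Module):
    """A single shared backbone trained to predict the next time step."""
    def __init__(self, num_field_types: int = 8, embed_dim: int = 64):
        super().__init__()
        self.embed = SharedFieldEmbedding(num_field_types, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, fields: torch.Tensor, field_ids: torch.Tensor) -> torch.Tensor:
        tokens = self.embed(fields, field_ids)        # shared latent tokens
        h = self.backbone(tokens)                     # (batch, n_points, embed_dim)
        # Decode shared features back onto each field type for the next step.
        vecs = self.embed.field_vectors(field_ids)    # (n_fields, embed_dim)
        return torch.einsum("bpd,fd->bfp", h, vecs)   # (batch, n_fields, n_points)

# Autoregressive usage: feed the state at time t, predict t+1, then roll out.
model = MPPSketch()
state_t = torch.randn(2, 3, 128)      # 2 samples, 3 fields, 128 grid points
field_ids = torch.tensor([0, 1, 2])   # e.g. density, vx, vy
state_t1 = model(state_t, field_ids)  # predicted fields at t + 1
```

Because every system's fields pass through the same normalization and embedding, one backbone can be trained on batches drawn from different physical systems, which is the setting the abstract describes.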

Michael McCabe, Bruno Régaldo-Saint Blancard, Liam Holden Parker, Ruben Ohana, Miles Cranmer, Alberto Bietti, Michael Eickenberg, Siavash Golkar, Géraud Krawezik, François Lanusse, Mariel Pettee, Tiberiu Tesileanu, Kyunghyun Cho, Shirley Ho • 2023

Related benchmarks

Task | Dataset | Result | Rank
PDE Operator Learning | NS-SL | EG: 0.3 | 10
PDE Operator Learning | NS-PwC | EG: 0.74 | 10
PDE Operator Learning | FNS-KF | EG Score: 2 | 10
PDE Operator Learning | CE-RPUI | EG: 0.00e+0 | 10
Downstream Task Evaluation | 15 Downstream Tasks (summary) | Median EG: 2 | 7
Operator Learning for PDEs | CE-RM (downstream) | EG: 0.00e+0 | 6
Operator Learning for PDEs | GCE-RT (downstream) | EG: 0.00e+0 | 6
Operator Learning for PDEs | ACE (downstream) | EG: 0.00e+0 | 6
Operator Learning for PDEs | NS-SVS (downstream) | EG: 34.8 | 6
Operator Learning for PDEs | SE-AF (downstream) | EG Score: 2.2 | 6

Showing 10 of 17 rows.
