
Transformer for Partial Differential Equations' Operator Learning

About

Data-driven learning of partial differential equations' solution operators has recently emerged as a promising paradigm for approximating the underlying solutions. The solution operators are usually parameterized by deep learning models that are built upon problem-specific inductive biases. An example is a convolutional or graph neural network that exploits the local grid structure where the functions' values are sampled. The attention mechanism, on the other hand, provides a flexible way to implicitly exploit the patterns within inputs and, furthermore, the relationships between arbitrary query locations and inputs. In this work, we present an attention-based framework for data-driven operator learning, which we term Operator Transformer (OFormer). Our framework is built upon self-attention, cross-attention, and a set of point-wise multilayer perceptrons (MLPs), and thus makes few assumptions about the sampling pattern of the input function or the query locations. We show that the proposed framework is competitive on standard benchmark problems and can flexibly be adapted to randomly sampled input.
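The pipeline described in the abstract — encode irregularly sampled input points with a point-wise MLP, mix them with self-attention, then use cross-attention to read out values at arbitrary query locations — can be sketched in a few lines of NumPy. This is an illustrative skeleton with random (untrained) weights, not the paper's actual OFormer implementation; all weight shapes and helper names here are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # point-wise two-layer MLP applied independently to each point (last axis)
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def attention(q, k, v):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def rand_mlp(din, dout, dh=32):
    # random illustrative weights; a real model would train these
    return (rng.normal(size=(din, dh)) * 0.1, np.zeros(dh),
            rng.normal(size=(dh, dout)) * 0.1, np.zeros(dout))

d = 16
enc_w = rand_mlp(1 + 1, d)  # input token: (coordinate, function value)
qry_w = rand_mlp(1, d)      # query token: coordinate only
dec_w = rand_mlp(d, 1)      # decode latent -> predicted solution value

# 100 randomly sampled input points of a 1D function, 50 arbitrary query points
x_in = np.sort(rng.uniform(0.0, 1.0, size=(100, 1)), axis=0)
u_in = np.sin(2 * np.pi * x_in)
x_q = rng.uniform(0.0, 1.0, size=(50, 1))

tokens = mlp(np.concatenate([x_in, u_in], axis=-1), *enc_w)  # encode samples
tokens = tokens + attention(tokens, tokens, tokens)          # self-attention
queries = mlp(x_q, *qry_w)                                   # encode queries
latent = attention(queries, tokens, tokens)                  # cross-attention
out = mlp(latent, *dec_w)                                    # point-wise decode

print(out.shape)  # (50, 1): one predicted value per query location
```

Because the input points only enter through attention and point-wise MLPs, nothing in this structure assumes a regular grid — which is why such a model can handle randomly sampled inputs and arbitrary query locations.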

Zijie Li, Kazem Meidani, Amir Barati Farimani • 2022

Related benchmarks

Task                   Dataset                                     Metric             Result   Rank
PDE solving            1D Burgers' equation (test)                 Relative Error     0.0492   85
Forward PDE solving    Plasticity                                  Relative L2 Error  0.0017   21
Forward PDE solving    Airfoil                                     Relative L2 Error  1.83     21
Forward PDE solving    Pipe                                        Relative L2 Error  0.0168   20
Forward PDE solving    Elasticity                                  Relative L2 Error  0.0183   19
PDE solving            Navier-Stokes Regular Grid (test)           Relative L2 Error  0.1705   16
PDE solving            Darcy Regular Grid (test)                   Relative L2 Error  0.0124   16
Operator learning      Plasticity Structured Mesh (test)           Relative L2 Error  0.0017   15
PDE solving            Navier-Stokes Point-wise (25% test ratio)   Relative L2 Error  0.2079   15
Temporal Extrapolation Navier-Stokes 1 × 10^-3 (In-t)              MSE                0.0078   15

Showing 10 of 39 rows
