
Transformer for Partial Differential Equations' Operator Learning

About

Data-driven learning of partial differential equations' solution operators has recently emerged as a promising paradigm for approximating the underlying solutions. The solution operators are usually parameterized by deep learning models that are built upon problem-specific inductive biases. An example is a convolutional or a graph neural network that exploits the local grid structure where functions' values are sampled. The attention mechanism, on the other hand, provides a flexible way to implicitly exploit the patterns within inputs, and, furthermore, the relationship between arbitrary query locations and inputs. In this work, we present an attention-based framework for data-driven operator learning, which we term Operator Transformer (OFormer). Our framework is built upon self-attention, cross-attention, and a set of point-wise multilayer perceptrons (MLPs), and thus it makes few assumptions on the sampling pattern of the input function or the query locations. We show that the proposed framework is competitive on standard benchmark problems and can be flexibly adapted to randomly sampled inputs.
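The key architectural idea in the abstract, decoding at arbitrary query locations via cross-attention between embedded query coordinates and latent encodings of the input points, can be illustrated with a minimal sketch. This is not the OFormer implementation; it is a toy single-head cross-attention in NumPy, where `z` stands in for the self-attention encoder's output and `q_coords` for query coordinates already embedded by a point-wise MLP (both assumed here as random placeholders).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # queries: (m, d) embedded query locations
    # keys/values: (n, d) latent encodings of the n input sample points
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (m, n) similarity scores
    weights = softmax(scores, axis=-1)       # each query attends over all inputs
    return weights @ values                  # (m, d) features at query locations

# Toy example: n input points, m arbitrary query points, latent width d.
rng = np.random.default_rng(0)
n, m, d = 64, 10, 16
z = rng.normal(size=(n, d))         # placeholder for encoder output
q_coords = rng.normal(size=(m, d))  # placeholder for MLP-embedded query coordinates
out = cross_attention(q_coords, z, z)
print(out.shape)  # (10, 16)
```

Because each query row attends over the full set of input points, nothing in this decoding step assumes the inputs lie on a regular grid, which is what lets the framework handle randomly sampled inputs and query locations.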

Zijie Li, Kazem Meidani, Amir Barati Farimani · 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| PDE solving | 1D Burgers' equation (test) | Relative Error | 0.0492 | 85 |
| PDE solving | Darcy-Flow 2D (test) | Relative MSE | 9.84e-4 | 33 |
| Spatiotemporal Field Reconstruction | Navier-Stokes 10% Subset | CRPS | 0.9052 | 30 |
| Spatiotemporal Field Reconstruction | Navier-Stokes (Full) | CRPS | 0.5645 | 30 |
| PDE solving | Navier-Stokes Regular Grid (test) | Relative L2 Error | 0.1705 | 25 |
| PDE solving | Darcy Regular Grid (test) | Relative L2 Error | 0.0124 | 25 |
| PDE solving | Airfoil Structured Mesh (test) | Relative L2 Error | 0.0183 | 23 |
| PDE solving | Pipe Structured Mesh (test) | Relative L2 Error | 0.0168 | 23 |
| Forward PDE solving | Plasticity | Relative L2 Error | 0.0017 | 21 |
| Forward PDE solving | Airfoil | Relative L2 Error | 1.83 | 21 |

Showing 10 of 51 rows.
