Variable-Input Deep Operator Networks
About
Existing architectures for operator learning require that the number and locations of sensors (where the input functions are evaluated) remain the same across all training and test samples, significantly restricting their range of applicability. We address this issue by proposing a novel operator learning framework, termed Variable-Input Deep Operator Network (VIDON), which allows for random sensors whose number and locations can vary across samples. VIDON is invariant to permutations of the sensor locations and is proved to be universal in approximating a class of continuous operators. We also prove that VIDON can efficiently approximate operators arising in PDEs. Numerical experiments with a diverse set of PDEs are presented to illustrate the robust performance of VIDON in learning operators.
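The key structural property described above — an encoding of the input function that is invariant to sensor permutations and tolerant of a varying number of sensors — can be illustrated with a minimal sketch. This is not the paper's architecture (VIDON uses trained, attention-based aggregation); it is a toy mean-pooled encoder with hypothetical fixed random weights, showing only why shared per-sensor embedding plus pooling gives permutation and size invariance:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_mlp(dim_in, dim_out):
    # Hypothetical fixed random weights standing in for a trained network.
    W = rng.standard_normal((dim_in, dim_out)) / np.sqrt(dim_in)
    b = np.zeros(dim_out)
    return lambda z: np.tanh(z @ W + b)

embed = shared_mlp(2, 16)  # shared encoder applied to each (location, value) pair

def encode(xs, us):
    """Permutation-invariant encoding of a variable-size sensor set."""
    pairs = np.stack([xs, us], axis=-1)  # (num_sensors, 2)
    h = embed(pairs)                     # (num_sensors, 16), one row per sensor
    return h.mean(axis=0)                # pooling: invariant to order and count

# Two samples of u(x) = sin(x) with different sensor counts and locations.
x1 = rng.uniform(size=7);  u1 = np.sin(x1)
x2 = rng.uniform(size=12); u2 = np.sin(x2)

c1, c2 = encode(x1, u1), encode(x2, u2)      # both encodings have shape (16,)
perm = rng.permutation(7)
print(np.allclose(c1, encode(x1[perm], u1[perm])))  # shuffling sensors changes nothing
```

Because the same MLP is applied to every sensor and the pooling is symmetric, reordering the sensors or changing their number leaves the encoder well-defined, which is the property that lets a single network handle samples with different sensor sets.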
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Operator learning | Derivative Fixed | Relative L2 Error | 2.91 | 6 |
| Operator learning | Integral Fixed | Relative L2 Error | 2.07 | 6 |
| Operator learning | Elastic Plate Fixed | Relative L2 Error | 5.24 | 6 |
| Operator learning | Darcy 1D Fixed | Relative L2 Error (Test) | 991 | 6 |
| Diffraction | Point-cloud Diffraction | Relative L2 Error | 1.01 | 5 |
| Operator learning | Darcy 1D Drop-off | Relative L2 Error (Test) | 1.05 | 5 |
| Optimal Transport | Point-cloud Optimal Transport | Relative L2 Error | 0.0541 | 5 |
| Heat conduction | Point-cloud Heat, M=30 | Relative L2 Error | 2.2 | 5 |
| Operator learning | Integral Variable | Relative L2 Error (Test) | 9.71 | 5 |
| Advection Diffusion | Point-cloud Advection Diffusion | Relative L2 Error | 8.23 | 5 |