
Manifold-Orthogonal Dual-spectrum Extrapolation for Parameterized Physics-Informed Neural Networks

About

Physics-informed neural networks (PINNs) have achieved notable success in modeling dynamical systems governed by partial differential equations (PDEs). To avoid computationally expensive retraining under new physical conditions, parameterized PINNs (P$^2$INNs) commonly adapt pre-trained operators using singular value decomposition (SVD) for out-of-distribution (OOD) regimes. However, SVD-based fine-tuning often suffers from rigid subspace locking and truncation of important high-frequency spectral modes, limiting its ability to capture complex physical transitions. While parameter-efficient fine-tuning (PEFT) methods appear to be promising alternatives, applying conventional adapters such as LoRA to P$^2$INNs introduces a severe Pareto trade-off, as additive updates increase parameter overhead and disrupt the structured physical manifolds inherent in operator representations. To address these limitations, we propose Manifold-Orthogonal Dual-spectrum Extrapolation (MODE), a lightweight micro-architecture designed for physics operator adaptation. MODE decomposes physical evolution into complementary mechanisms including principal-spectrum dense mixing that enables cross-modal energy transfer within frozen orthogonal bases, residual-spectrum awakening that activates high-frequency spectral components through a single trainable scalar, and affine Galilean unlocking that explicitly isolates spatial translation dynamics. Experiments on challenging PDE benchmarks including the 1D Convection--Diffusion--Reaction equation and the 2D Helmholtz equation demonstrate that MODE achieves strong out-of-distribution generalization while preserving the minimal parameter complexity of native SVD and outperforming existing PEFT-based baselines.
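The spectral decomposition described above can be illustrated with a small numerical sketch. The function below is a hypothetical reconstruction, not the authors' implementation: it splits a frozen pretrained weight matrix into a principal spectrum, remixed by a small trainable matrix `M` inside the frozen orthogonal SVD bases, and a residual high-frequency spectrum gated by a single trainable scalar `alpha` (the Galilean-unlocking component is omitted). All names and shapes are assumptions for illustration.

```python
import numpy as np

def mode_adapt(W, r, M, alpha):
    """Hypothetical sketch of MODE-style dual-spectrum adaptation.

    W     : frozen pretrained weight matrix, shape (m, n)
    r     : rank split between principal and residual spectrum
    M     : (r, r) trainable dense mixing matrix over principal modes
    alpha : single trainable scalar "awakening" the residual spectrum
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Principal-spectrum dense mixing: cross-modal energy transfer
    # within the frozen orthogonal bases U[:, :r] and Vt[:r].
    principal = U[:, :r] @ (M @ np.diag(s[:r])) @ Vt[:r]
    # Residual-spectrum awakening: the high-frequency modes that
    # rank-r SVD truncation would discard are kept and re-scaled
    # by one trainable scalar instead of being zeroed out.
    residual = alpha * (U[:, r:] @ np.diag(s[r:]) @ Vt[r:])
    return principal + residual
```

With `M` initialized to the identity and `alpha = 1`, the adapted matrix reproduces the pretrained weight exactly, so training starts from the frozen operator and only the mixing matrix and scalar move, matching the minimal parameter budget claimed for the method.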

Zhangyong Liang, Ji Zhang • 2026

Related benchmarks

Task         | Dataset                                                  | Result                    | Rank
Fine-tuning  | 1D Convection-Diffusion-Reaction (CDR) Equation (train)  | Train Loss 2.34           | 14
Fine-tuning  | 1D Convection-Diffusion-Reaction (CDR) Equation (test)   | Test Loss 2.5             | 14
PDE solving  | CDR Equation (beta=1, nu=1, rho=1)                       | Relative L2 Error 1.19e+3 | 12
PDE solving  | CDR Equation (beta=3, nu=1, rho=1)                       | Relative L2 Error 1.21e+3 | 12
PDE solving  | CDR Equation (beta=5, nu=1, rho=1)                       | Relative L2 Error 1.23e+3 | 12
