
KD-PINN: Knowledge-Distilled PINNs for ultra-low-latency real-time neural PDE solvers

About

This work introduces Knowledge-Distilled Physics-Informed Neural Networks (KD-PINN), a framework that transfers the predictive accuracy of a high-capacity teacher model to a compact student through a continuous adaptation of the Kullback-Leibler divergence. To assess its generality across dynamics and dimensionalities, the framework is evaluated on a representative set of partial differential equations (PDEs). Across the considered benchmarks, the student model achieves inference speedups ranging from 4.8× (Navier-Stokes) to 6.9× (Burgers) while preserving accuracy; with proper tuning, accuracy even improves on the order of 1%. The distillation process also exhibits a regularizing effect. With an average inference latency of 5.3 ms on CPU, the distilled models enter the ultra-low-latency real-time regime, defined here as sub-10 ms inference. Finally, the study examines how knowledge distillation reduces inference latency in PINNs, contributing to the development of accurate ultra-low-latency neural PDE solvers.
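The abstract summarizes the method but does not spell out the training objective. As a rough illustration only, the sketch below shows one way such teacher-student distillation could be wired up for a PINN, assuming a PyTorch implementation on the 1D viscous Burgers' benchmark. The network sizes, the weight lambda_kd, the collocation sampling, and the reduction of a continuous KL divergence to a scaled squared error (exact for Gaussian predictive densities with a shared fixed variance) are illustrative assumptions, not the authors' implementation.

```python
# Minimal KD-PINN-style sketch (illustrative assumptions, not the paper's code):
# a compact student is trained against both the Burgers' PDE residual and a
# distillation term toward a high-capacity teacher.
import torch
import torch.nn as nn

def mlp(width, depth, in_dim=2, out_dim=1):
    """Plain tanh MLP taking (x, t) and returning u(x, t)."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

teacher = mlp(width=64, depth=6)   # high-capacity teacher (assumed pre-trained;
                                   # weight loading omitted for brevity)
student = mlp(width=16, depth=3)   # compact student for low-latency inference

nu = 0.01 / torch.pi               # common Burgers' benchmark viscosity
lambda_kd = 1.0                    # distillation weight (illustrative value)

def burgers_residual(model, x, t):
    """PDE residual u_t + u*u_x - nu*u_xx via automatic differentiation."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(torch.cat([x, t], dim=1))
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(10_000):
    x = torch.rand(256, 1) * 2 - 1            # collocation points in [-1, 1]
    t = torch.rand(256, 1)                    # times in [0, 1]
    loss_pde = (burgers_residual(student, x, t) ** 2).mean()

    with torch.no_grad():
        u_teacher = teacher(torch.cat([x, t], dim=1))
    u_student = student(torch.cat([x, t], dim=1))
    # Treating both outputs as Gaussians with a shared fixed variance, the
    # continuous KL divergence reduces (up to a constant factor) to the
    # squared difference of means; this stands in for the paper's
    # "continuous adaptation of the KL divergence", whose exact form is
    # not given in the abstract.
    loss_kd = ((u_student - u_teacher) ** 2).mean()

    loss = loss_pde + lambda_kd * loss_kd
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the extra term pulls the student toward the teacher's smooth predictions rather than toward data alone, it behaves like a regularizer on the student, which is consistent with the regularizing effect the abstract reports.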

Karim Bounja, Lahcen Laayouni, Abdeljalil Sakat • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Solving nonlinear PDE | Allen-Cahn (test) | – | 5 |
| Partial Differential Equation Solving | Black-Scholes PDE, European options (in-domain grid summary) | RMSE 0.0023 | 2 |
| Solving nonlinear PDE | Navier-Stokes (test) | RMSE (T) 0.131 | 2 |
| Solving nonlinear PDE | Burgers' (test) | RMSE (T) 0.0349 | 1 |
