
Neural Abstractions

About

We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics. Neural networks have previously been used extensively as approximators; in this work, we go a step further and use them for the first time as abstractions. For a given dynamical model, our method synthesises a neural network that overapproximates its dynamics by ensuring an arbitrarily tight, formally certified bound on the approximation error. For this purpose, we employ a counterexample-guided inductive synthesis procedure. We show that this produces a neural ODE with non-deterministic disturbances that constitutes a formal abstraction of the concrete model under analysis. This guarantees a fundamental property: if the abstract model is safe, i.e., free from any initialised trajectory that reaches an undesirable state, then the concrete model is also safe. By using neural ODEs with ReLU activation functions as abstractions, we cast the safety verification problem for nonlinear dynamical models into that of hybrid automata with affine dynamics, which we verify using SpaceEx. We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models. We additionally demonstrate that it is effective on models that do not exhibit local Lipschitz continuity, which are out of reach of existing technologies.
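To illustrate the counterexample-guided inductive synthesis (CEGIS) loop described above, here is a minimal one-dimensional sketch. It is not the paper's implementation: the concrete dynamics `f`, the error bound `eps`, and the domain are toy assumptions, the learner is a piecewise-linear interpolant (exactly the class of functions a ReLU network can represent) rather than a trained network, and a dense grid search stands in for the paper's formal verifier of the error bound. The loop structure is the point: verify, extract a counterexample where the certified bound fails, refine, repeat.

```python
import bisect

def f(x):
    # toy stand-in for the concrete nonlinear dynamics
    return x ** 3

def pwl(points, x):
    # Piecewise-linear interpolant through sorted (x, f(x)) samples;
    # any such function is exactly representable by a ReLU network.
    xs = [p[0] for p in points]
    i = min(max(bisect.bisect_right(xs, x), 1), len(points) - 1)
    (x0, y0), (x1, y1) = points[i - 1], points[i]
    return y0 + (x - x0) / (x1 - x0) * (y1 - y0)

def verify(points, eps, lo=-1.0, hi=1.0, n=2001):
    # Grid-search "verifier": stands in for the formal (SMT-based)
    # check of the approximation-error bound used in the paper.
    worst_x, worst_e = None, 0.0
    for k in range(n):
        x = lo + (hi - lo) * k / (n - 1)
        e = abs(pwl(points, x) - f(x))
        if e > worst_e:
            worst_x, worst_e = x, e
    return (worst_x, worst_e) if worst_e > eps else None

def cegis(eps=1e-3, lo=-1.0, hi=1.0, max_iters=200):
    # CEGIS loop: propose an abstraction, ask the verifier for a
    # counterexample to the error bound, refine on it, repeat.
    points = [(lo, f(lo)), (hi, f(hi))]
    for _ in range(max_iters):
        cex = verify(points, eps, lo, hi)
        if cex is None:
            return points  # certified: |pwl - f| <= eps on the grid
        points.append((cex[0], f(cex[0])))
        points.sort()
    raise RuntimeError("no certificate within iteration budget")

abstraction = cegis()
```

In the paper, the certified bound `eps` becomes the magnitude of a non-deterministic disturbance added to the neural ODE, so that the abstraction's trajectories overapproximate those of the concrete model.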

Alessandro Abate, Alec Edwards, Mirco Giacobbe • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Safety verification | Jet Engine | Metric t: 215 | 2 |
| Safety verification | Exponential | Time: 308 | 2 |
| Safety verification | Steam Governor | Verification Time: 219 | 2 |
| Safety verification | Water Tank | Time Placeholder: 49 | 1 |
| Safety verification | Non-Lipschitz 1 | Parameter t: 19 | 1 |
| Safety verification | Non-Lipschitz 2 | Generic Variable t: 59 | 1 |

Other info

Code
