Reachable Set Computation and Safety Verification for Neural Networks with ReLU Activations

About

Neural networks have been widely used to solve complex real-world problems. Due to the complicated, nonlinear, non-convex nature of neural networks, formal safety guarantees for their output behaviors are crucial for applications in safety-critical systems. In this paper, the output reachable set computation and safety verification problems for a class of neural networks with Rectified Linear Unit (ReLU) activation functions are addressed. A layer-by-layer approach is developed to compute the output reachable set. The computation is formulated as a set of manipulations on a union of polyhedra, which can be carried out efficiently with the aid of polyhedron computation tools. Based on the output reachable set, safety verification for a ReLU neural network reduces to checking whether the unsafe regions intersect the output reachable set, described as a union of polyhedra. A numerical example of a randomly generated ReLU neural network is provided to show the effectiveness of the approach developed in this paper.

Weiming Xiang, Hoang-Dung Tran, Taylor T. Johnson • 2017
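
To make the safety-verification idea concrete, below is a minimal Python sketch (not the authors' implementation, and all function and variable names are illustrative). Instead of explicitly constructing the output reachable set as a union of polyhedra with a polyhedron computation tool, it answers the same intersection question by case-splitting over ReLU activation patterns: within one fixed pattern the network is affine, so each case reduces to a single feasibility LP asking whether some point in the input polyhedron can reach the unsafe polyhedron. The enumeration is exponential in the number of neurons, so it is only meant for tiny networks.

```python
# Illustrative sketch only: hypothetical helper, not the paper's tool-based
# polyhedral construction. Assumes every layer (including the last) is ReLU.
import itertools

import numpy as np
from scipy.optimize import linprog


def relu_net_reaches_unsafe(A_in, b_in, weights, biases, C_unsafe, d_unsafe):
    """True iff some x with A_in @ x <= b_in is mapped by the ReLU network
    a_l = max(0, W_l @ a_{l-1} + c_l) into the set {y : C_unsafe @ y <= d_unsafe}.

    Exhaustive split over activation patterns; each fixed pattern makes the
    network affine, so each case is one feasibility LP over the stacked
    variables [x | a_1 | ... | a_L]."""
    n_in = A_in.shape[1]
    sizes = [W.shape[0] for W in weights]
    starts = [n_in]                       # start index of a_l in the stacked vector
    for n in sizes[:-1]:
        starts.append(starts[-1] + n)
    dim = n_in + sum(sizes)

    def seg(l):                           # slice of layer l's post-activation (l = 0: input)
        return slice(0, n_in) if l == 0 else slice(starts[l - 1], starts[l - 1] + sizes[l - 1])

    for pattern in itertools.product((0, 1), repeat=sum(sizes)):
        A_ub, b_ub, A_eq, b_eq = [], [], [], []

        blk = np.zeros((A_in.shape[0], dim))         # input polyhedron A_in x <= b_in
        blk[:, seg(0)] = A_in
        A_ub.append(blk); b_ub.append(b_in)

        p = 0
        for l, (W, c) in enumerate(zip(weights, biases), start=1):
            for i in range(W.shape[0]):
                pre = np.zeros(dim)                  # row encoding (W a_{l-1} + c)_i
                pre[seg(l - 1)] = W[i]
                out = np.zeros(dim)
                out[seg(l).start + i] = 1.0          # coefficient of a_l[i]
                if pattern[p]:                       # active: pre >= 0 and a_l[i] = pre
                    A_ub.append(-pre[None]); b_ub.append([c[i]])
                    A_eq.append((out - pre)[None]); b_eq.append([c[i]])
                else:                                # inactive: pre <= 0 and a_l[i] = 0
                    A_ub.append(pre[None]); b_ub.append([-c[i]])
                    A_eq.append(out[None]); b_eq.append([0.0])
                p += 1

        blk = np.zeros((C_unsafe.shape[0], dim))     # unsafe region on the final output
        blk[:, seg(len(weights))] = C_unsafe
        A_ub.append(blk); b_ub.append(d_unsafe)

        res = linprog(np.zeros(dim),
                      A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                      A_eq=np.vstack(A_eq), b_eq=np.concatenate(b_eq),
                      bounds=[(None, None)] * dim)
        if res.status == 0:                          # feasible: an unsafe output is reachable
            return True
    return False


# Toy usage (illustrative numbers): 2-2-1 ReLU net, box input [-1, 1]^2,
# unsafe region y >= 1.5. Expected to print True (e.g., x = (1, -1) gives y = 2).
W1, c1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
W2, c2 = np.array([[1.0, 1.0]]), np.zeros(1)
A_box, b_box = np.vstack([np.eye(2), -np.eye(2)]), np.ones(4)
print(relu_net_reaches_unsafe(A_box, b_box, [W1, W2], [c1, c2],
                              np.array([[-1.0]]), np.array([-1.5])))
```

The paper's layer-by-layer approach additionally yields the output reachable set itself as an explicit union of polyhedra; the sketch above only decides the emptiness of its intersection with the unsafe region, which is the verification question the abstract describes.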

Related benchmarks

Task: Robustness Verification
Dataset: Iris dataset (test)
Result: 0.00e+0 (Vulnerable Samples)
Rank: 90
