SnareNet: Flexible Repair Layers for Neural Networks with Hard Constraints
About
Neural networks are increasingly used as surrogate solvers and control policies, but unconstrained predictions can violate physical, operational, or safety requirements. We propose SnareNet, a feasibility-controlled architecture for learning mappings whose outputs must satisfy input-dependent nonlinear constraints. SnareNet appends a differentiable repair layer that navigates in the constraint map's range space, steering iterates toward feasibility and producing a repaired output that satisfies the constraints to a user-specified tolerance. To stabilize end-to-end training, we introduce adaptive relaxation, which constructs a relaxed feasible set that snares the neural network's outputs at initialization and is progressively shrunk into the true feasible set, permitting early exploration while enforcing strict feasibility later in training. On optimization-learning and trajectory-planning benchmarks, SnareNet consistently attains better objective quality and more reliable constraint satisfaction than prior methods.
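As a rough illustration of the two ingredients, one natural reading of the repair layer is a short, unrolled sequence of damped Gauss-Newton corrections whose minimum-norm steps lie in the range space of the constraint Jacobian's transpose, paired with a feasibility tolerance that starts loose and shrinks over training. The sketch below follows that reading only; it is not the paper's implementation, and the names (`repair`, `relaxed_tol`), the damping term, and the geometric tolerance schedule are all assumptions.

```python
import torch

def repair(y, h, n_steps=10, tol=1e-6, damping=1e-8):
    """Drive h(y) -> 0 with unrolled, differentiable Gauss-Newton-style steps.

    Hypothetical sketch: h maps an n-vector to an m-vector of equality
    residuals; gradients flow through the unrolled loop via create_graph.
    """
    for _ in range(n_steps):
        r = h(y)                                  # residual, shape (m,)
        if r.norm() <= tol:                       # feasible to tolerance: stop
            break
        J = torch.autograd.functional.jacobian(h, y, create_graph=True)  # (m, n)
        # Minimum-norm correction; the step lies in the range space of J^T.
        step = J.T @ torch.linalg.solve(J @ J.T + damping * torch.eye(len(r)), r)
        y = y - step
    return y

def relaxed_tol(step, total_steps, tol_init=1e-1, tol_final=1e-6):
    """Assumed adaptive-relaxation schedule: geometrically shrink the
    feasibility tolerance from a loose value toward the target."""
    frac = min(step / total_steps, 1.0)
    return tol_init * (tol_final / tol_init) ** frac

# Toy usage: repair a raw prediction onto {y : sum(y) = 1, y[0] = y[1]}.
h = lambda y: torch.stack([y.sum() - 1.0, y[0] - y[1]])
y_raw = torch.tensor([0.7, 0.1, 0.5], requires_grad=True)
y_fix = repair(y_raw, h, tol=relaxed_tol(step=900, total_steps=1000))
print(h(y_fix))  # residuals near zero; gradients still reach y_raw
```

Under this reading, the loose early tolerance lets the optimizer explore without fighting a hard projection, and the shrinking schedule hands off to strict feasibility as training converges.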
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Non-Convex Programming Optimization | NCP (test) | Count Equality Violations | 0.00e+0 | 8 |
| QCQP Optimization | QCQP (test) | # Equality Violations | 0.00e+0 | 7 |
| Constrained Optimization | QCQP 100 inequality constraints (test) | Count of Equality Violations | 0.00e+0 | 4 |
| Constrained Optimization | QCQP with 10 inequality constraints (test) | # Eq Violations | 0.00e+0 | 4 |
| Quadratically Constrained Quadratic Program | QCQP 50 inequality constraints (test) | Number of Equality Violations | 0.00e+0 | 4 |