nnenum: The Neural Network Enumeration Tool

Given an unsafe set and a neural network, nnenum checks whether the set overlaps with the output of the network.

Application domain/field

Machine learning (neural networks)

Type of tool

Neural network verifier

Expected input

A neural network (fully-connected, feedforward, with ReLU activations) together with a safety specification: an input set and an unsafe output set.

Expected output

Safe or unsafe. nnenum checks whether the output of the neural network (for the given input set) overlaps with the provided unsafe set. If they do not overlap, the network is considered safe.
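The safe/unsafe semantics above amount to a set-overlap test. A minimal illustration using axis-aligned boxes (a simplification for exposition; `boxes_overlap` is a hypothetical helper, not part of nnenum, and nnenum's actual reachable sets are not boxes):

```python
# Sketch of the safe/unsafe semantics with axis-aligned boxes.
# Each set is a list of (low, high) bounds, one pair per output dimension.
# Hypothetical helper for illustration only, not part of nnenum.

def boxes_overlap(output_box, unsafe_box):
    """Return True if the two boxes intersect in every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(output_box, unsafe_box))

output_box = [(0.0, 1.0), (2.0, 3.0)]   # reachable outputs (example values)
unsafe_box = [(0.5, 2.0), (2.5, 4.0)]   # unsafe region (example values)

verdict = "unsafe" if boxes_overlap(output_box, unsafe_box) else "safe"
print(verdict)  # the boxes intersect, so this prints "unsafe"
```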

Internals

nnenum focuses on the verification of fully-connected, feedforward neural networks with ReLU activation functions. It is based on geometric path enumeration over star sets, accelerated with zonotope overapproximation and parallelized case splitting (see the CAV 2020 paper linked below).
Neural network
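To give a feel for what such a verifier computes, the sketch below propagates interval bounds through a small fully-connected ReLU network. This is a much coarser overapproximation than nnenum's exact star-set enumeration (it is named here only as a stand-in technique), and the network weights are hand-picked illustrative values:

```python
import numpy as np

# Interval bound propagation through a fully-connected ReLU network.
# A coarse overapproximation sketch, NOT nnenum's star-set method.

def propagate(layers, lo, hi):
    """layers: list of (W, b) pairs; lo/hi: per-input lower/upper bounds."""
    for i, (W, b) in enumerate(layers):
        pos, neg = np.maximum(W, 0), np.minimum(W, 0)
        # Sound bounds for an affine layer on a box input.
        lo, hi = pos @ lo + neg @ hi + b, pos @ hi + neg @ lo + b
        if i < len(layers) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# Tiny 2-2-1 network with hand-picked weights (illustrative values).
layers = [(np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.0])),
          (np.array([[1.0, 1.0]]), np.array([0.0]))]
lo, hi = propagate(layers, np.array([0.0, 0.0]), np.array([1.0, 1.0]))
# If the output box [lo, hi] does not intersect the unsafe set, the
# network is safe; an intersection may be a false alarm, since interval
# propagation overapproximates the true reachable set.
print(lo, hi)  # prints [0.] [2.]
```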

Links

Repository: https://github.com/stanleybak/nnenum

Last commit date

15 June 2021

Related papers

https://doi.org/10.1007/978-3-030-53288-8_4 (CAV 2020)

Last publication date

14 July 2020

Related tools

Other tools for the verification of neural networks: Marabou, Neurify, NNV.


ProVerB is a part of SLEBoK. Last updated: July 2022.