Network
Model
JuMP.Model — Type
Model
A mathematical model of an optimization problem.
Network
ModelVerification.Network — Type
Network
A vector of layers.
Fields
layers (Vector{Layer}): Layers of the network, including the output layer.
See also: Layer
ModelVerification.Layer — Type
Layer{F<:ActivationFunction, N<:Number}
Consists of weights and bias for linear mapping, and activation for nonlinear mapping.
Fields
weights::Matrix{N}
bias::Vector{N}
activation::F
See also: Network
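As an illustration, a small two-layer network can be assembled directly from these types. This is a minimal sketch; the positional Layer(weights, bias, activation) and Network(layers) constructors are assumed from the field definitions above.

using ModelVerification
W1 = [1.0 -1.0; 0.5 2.0]              # 2×2 hidden-layer weights
b1 = [0.0, 0.0]                       # hidden-layer bias
W2 = [1.0 1.0]                        # 1×2 output-layer weights
b2 = [0.0]                            # output-layer bias
net = Network([Layer(W1, b1, ReLU()), # hidden layer with ReLU activation
               Layer(W2, b2, Id())])  # linear output layer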
Activation Functions
ModelVerification.ActivationFunction — Type
ActivationFunction
Function that calculates the output of the node. Supported activation functions are:
- ReLU (ReLU)
- Max (Max)
- Identity (Id)
- Sigmoid (Sigmoid)
- Tanh (Tanh)
ModelVerification.GeneralAct — Type
GeneralAct <: ActivationFunction
Wrapper type for a general activation function.
Usage
act = GeneralAct(tanh)
act(0) == tanh(0) # true
act(10.0) == tanh(10.0) # true

act = GeneralAct(x->tanh.(x))
julia> act(-2:2)
5-element Array{Float64,1}:
-0.9640275800758169
-0.7615941559557649
0.0
0.7615941559557649
0.9640275800758169

ModelVerification.Id — Type
Id <: ActivationFunction
Identity operator
(Id())(x) -> x
ModelVerification.Max — Type
Max <: ActivationFunction
(Max())(x) -> max(maximum(x), 0)
ModelVerification.PiecewiseLinear — Type
PiecewiseLinear <: ActivationFunction
Activation function that uses linear interpolation between supplied knots. An extrapolation condition can be set for values outside the set of knots. Default is Line().
PiecewiseLinear(knots_x, knots_y, [extrapolation = Line()])
Usage
kx = [0.0, 1.2, 1.7, 3.1]
ky = [0.0, 0.5, 1.0, 1.5]
act = PiecewiseLinear(kx, ky)
act(first(kx)) == first(ky) == 0.0
act(last(kx)) == last(ky) == 1.5
act(1.0) # 0.4166666666666667
act(-102) # -42.5

act = PiecewiseLinear(kx, ky, Flat())
act(-102) # 0.0
act(Inf) # 1.5
Extrapolations
- Flat()
- Line()
- constant (supply a number as the argument)
- Throw() (throws bounds error)
PiecewiseLinear uses Interpolations.jl.
ModelVerification.ReLU — Type
ReLU <: ActivationFunction
(ReLU())(x) -> max.(x, 0)
ModelVerification.Sigmoid — Type
Sigmoid <: ActivationFunction
(Sigmoid())(x) -> 1 ./ (1 .+ exp.(-x))
ModelVerification.Tanh — Type
Tanh <: ActivationFunction
(Tanh())(x) -> tanh.(x)
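Each activation function is a callable struct, so an instance can be applied directly to a scalar or vector, following the signatures above:

(ReLU())([-1.0, 2.0])   # [0.0, 2.0]
(Sigmoid())([0.0])      # [0.5]
(Tanh())([0.0])         # [0.0]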
Helper Functions
Below are the helper functions regarding network loading (from file) & dumping (to file), property-related, activation function operations, gradient-related operations, and bound & specification related operations.
Network loading and dumping
ModelVerification.onnx_parse — Method
onnx_parse(onnx_model_path)
Creates the Model from the onnx_model_path. First, the computational graph of the ONNX model is created. Then, the Model is created using the information retrieved from the computational graph.
Arguments
onnx_model_path: String path to the ONNX model.
Returns
model_info (Model): Contains network information retrieved from the computational graph of the ONNX model.
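Typical usage is a single call (sketch; the path is hypothetical):

model_info = onnx_parse("path/to/model.onnx")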
ModelVerification.read_nnet — Method
read_nnet(fname::String; last_layer_activation = Id())
Read in a neural net from a .nnet file and return a Network struct. The .nnet format is borrowed from NNet. The format assumes all hidden layers have ReLU activation. Keyword argument last_layer_activation sets the activation of the last layer, and defaults to Id() (i.e. a linear output layer).
Arguments
fname (String): String path to the .nnet file.
last_layer_activation: Keyword argument that sets the activation of the last layer; defaults to Id().
Returns
- A vector of layers saved as a Network.
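For example, loading a network with a sigmoid output layer (sketch; the path is hypothetical):

net = read_nnet("path/to/model.nnet")   # ReLU hidden layers, Id() output
net_sigmoid = read_nnet("path/to/model.nnet"; last_layer_activation = Sigmoid())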
ModelVerification.read_layer — Function
read_layer(output_dim::Int64, f::IOStream, act = ReLU())
Read in a layer from a .nnet file and return a Layer containing its weights and biases. The optional argument act sets the activation function for the layer.
Arguments
output_dim (Int64): Output dimension of the layer.
f (IOStream): IO stream of the .nnet file.
act: Optional argument specifying the activation function of the layer. Defaults to ReLU().
Returns
Layer containing the weights and bias values (and the activation function of the layer).
ModelVerification.to_comment — Method
to_comment(txt)
Prepend // to each line of a string.
ModelVerification.print_layer — Method
print_layer(file::IOStream, layer)
Print to file an object implementing weights(layer) and bias(layer).
Arguments
file (IOStream): IO stream of the target .nnet file.
layer: Layer to be transcribed to file.
ModelVerification.print_header — Method
print_header(file::IOStream, network; header_text="")
The NNet format has a particular header containing information about the network size and training data. print_header does not take training-related information into account (subject to change).
Arguments
file (IOStream): IO stream of the target .nnet file.
network: Network to be transcribed to file.
header_text: Optional header text that comes before the network information. Defaults to an empty string.
ModelVerification.write_nnet — Method
write_nnet(filename, network; header_text)
Write network to filename.nnet. Note: Does not perform safety checks on inputs, so use with caution.
Based on python code at https://github.com/sisl/NNet/blob/master/utils/writeNNet.py and follows .nnet format given here: https://github.com/sisl/NNet.
Arguments
outfile: String name of the .nnet file.
network: Network to be transcribed to outfile.nnet.
header_text: Optional header text that comes before the network information.
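A round trip with read_nnet might look as follows (sketch; file names are hypothetical, and the .nnet extension is appended as described above):

net = read_nnet("in.nnet")
write_nnet("out", net; header_text = "exported by ModelVerification")   # writes out.nnet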
ModelVerification.build_flux_model — Method
build_flux_model(onnx_model_path)
Builds a Flux.Chain from the given ONNX model path.
Arguments
onnx_model_path: String path to the ONNX model in a .onnx file.
Returns
model: Flux.Chain constructed from the .onnx file.
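For example (sketch; the path is hypothetical):

flux_model = build_flux_model("path/to/model.onnx")   # returns a Flux.Chain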
ModelVerification.get_chain — Method
get_chain(vertex)
Returns a Flux.Chain constructed from the given vertex. This is a helper function for build_flux_model.
Arguments
vertex: Vertex from the NaiveNASflux computation graph.
Returns
model: Flux.Chain constructed from the given vertex.
curr_vertex: The last vertex in the chain.
ModelVerification.purify_flux_model — Method
purify_flux_model(model::Chain)
Removes the starting Flatten layer from the model. This is a wrapper function for remove_flux_start_flatten.
Arguments
model (Chain): Flux.Chain model.
Returns
model (Chain): Flux.Chain model with the starting Flatten layer removed.
ModelVerification.remove_flux_start_flatten — Method
remove_flux_start_flatten(model::Chain)
Removes the starting Flatten layer from the model.
Arguments
model (Chain): Flux.Chain model.
Returns
model (Chain): Flux.Chain model with the starting Flatten layer removed.
ModelVerification.build_onnx_model — Method
build_onnx_model(path, model::Chain, input::InputSpec)
Builds an ONNX model from the given Flux.Chain model and input specification. The ONNX model is saved to the given path.
Arguments
path (String): Path to save the ONNX model.
model (Chain): Flux.Chain model.
input (InputSpec): Input specification.
Returns
path (String): Path to the saved ONNX model.
Network properties
ModelVerification.get_act — Method
get_act(l)
Returns the activation function of the node l if it exists.
Arguments
l: node
Returns
- Activation function of the node if it exists.
- Otherwise, returns nothing.
ModelVerification.n_nodes — Method
n_nodes(L::Layer)
Returns the number of neurons in a layer.
Arguments
L (Layer): Layer of a network.
Returns
n (Int): Number of nodes in the layer L, which equals the number of biases in the layer.
ModelVerification.get_sub_model — Method
get_sub_model(model_info, end_node)
ModelVerification.compute_output — Method
compute_output(nnet::Network, input)
Propagates a given vector through a Network and computes the output.
Arguments
nnet (Network): Network to be propagated.
input: Vector to be propagated through nnet.
Returns
Vector of the output of nnet given input.
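A sketch using the two-layer net from the Network section above:

y = compute_output(net, [1.0, -1.0])   # ReLU([2.0, -1.5]) = [2.0, 0.0], so y = [2.0]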
Activation function operations
ModelVerification.approximate_act_map — Method
approximate_act_map(act::ActivationFunction, input::Hyperrectangle)
Returns a Hyperrectangle overapproximation of the activation map of the input. act must be monotonic.
Arguments
act (ActivationFunction): Activation function to be propagated.
input (Hyperrectangle): Input set to be propagated through act.
Returns
Hyperrectangle overapproximation of the activation map of the input.
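Since act is monotonic, the bounding box can be obtained from the low and high corners of the input box. A usage sketch (Hyperrectangle comes from LazySets):

using LazySets
box = Hyperrectangle(low = [-1.0, 0.0], high = [1.0, 2.0])
out = approximate_act_map(Sigmoid(), box)
# low(out) ≈ (Sigmoid())(low(box)) and high(out) ≈ (Sigmoid())(high(box))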
ModelVerification.approximate_act_map — Method
approximate_act_map(layer::Layer, input::Hyperrectangle)
Returns a Hyperrectangle overapproximation of the activation map of the input for the given layer. The activation function of the layer must be monotonic.
Arguments
layer (Layer): Layer to be propagated for the activation map.
input (Hyperrectangle): Input set to be propagated through layer.
Returns
Hyperrectangle overapproximation of the activation map of the input.
Gradient operations
ModelVerification.get_gradient — Method
get_gradient(nnet::Network, x::Vector)
Given a network, find the gradient for the input x. The function propagates through the layers of the network, and computes the gradient at each layer. The gradient at each layer is computed by multiplying the gradient at the previous layer with the gradient of the current layer.
Arguments
nnet (Network): Network to be propagated.
x (Vector): Vector to be propagated through nnet.
Returns
Matrix of the gradient for the input x after propagating through nnet.
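A usage sketch with the two-layer net from the Network section:

G = get_gradient(net, [1.0, -1.0])   # 1×2 Jacobian; here [1.0 -1.0], since only the first ReLU is active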
ModelVerification.act_gradient — Method
act_gradient(act::ReLU, z_hat::Vector)
Compute the gradient of a ReLU activation function at the point z_hat. For each element in z_hat, if z_hat[i] > 0, then act_gradient[i] = 1, else act_gradient[i] = 0.
Arguments
act (ReLU): ReLU activation function.
z_hat (Vector): Vector to be propagated through act.
Returns
Vector of the gradient of act at z_hat. Each element in the vector corresponds to the gradient of a particular neuron.
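For example:

act_gradient(ReLU(), [-1.5, 0.3, 2.0])   # [0, 1, 1]: elementwise indicator of z_hat .> 0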
ModelVerification.act_gradient — Method
act_gradient(act::Id, z_hat::Vector)
Compute the gradient of an Identity activation function at the point z_hat. For each element in z_hat, act_gradient[i] = 1.
Arguments
act (Id): Identity activation function.
z_hat (Vector): Vector to be propagated through act.
Returns
Vector of the gradient of act at z_hat. Each element in the vector corresponds to the gradient of a particular neuron.
ModelVerification.relaxed_relu_gradient — Method
relaxed_relu_gradient(l::Real, u::Real)
Return the relaxed slope of a ReLU activation based on its lower and upper bounds. The relaxed ReLU function allows for a smooth approximation of the gradient of the ReLU function. The relaxed ReLU gradient is defined as follows:
- f'(x) = 0 if u <= 0 (the unit is always inactive),
- f'(x) = 1 if l >= 0 (the unit is always active),
- f'(x) = u/(u-l) otherwise, which is the slope of the line connecting the points (l, ReLU(l)) and (u, ReLU(u)).
This provides a differentiable approximation of the ReLU function within the interval [l, u].
Arguments
l (Real): Lower-bound of the input to the ReLU activation function.
u (Real): Upper-bound of the input to the ReLU activation function.
Returns
0.0 if u <= 0.0, 1.0 if l >= 0.0, u/(u-l) otherwise.
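Worked example, matching the three cases above:

relaxed_relu_gradient(-1.0, 3.0)   # 0.75, i.e. u/(u-l) = 3.0/4.0
relaxed_relu_gradient(-2.0, -0.5)  # 0.0 (always inactive)
relaxed_relu_gradient(0.5, 4.0)    # 1.0 (always active)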
ModelVerification.act_gradient_bounds — Method
act_gradient_bounds(nnet::Network, input::AbstractPolytope)
Compute the bit-wise bounds on the gradient post activation operation given an input set. Currently only the ReLU activation function is supported. It first calculates the bounds of the input for each layer using the get_bounds function (which propagates through each layer and computes the bounds of each layer). Then, it computes the bit-wise lower and upper gradient for each layer using act_gradient. The final output is a tuple of two vectors, where each vector is a vector of bit-vectors. Each element in the vector corresponds to the gradient of a particular neuron, which can be either 0 (not activated) or 1 (activated).
Arguments
nnet (Network): Network to be propagated.
input (AbstractPolytope): Input set to be propagated through nnet.
Returns
LΛ, UΛ (NTuple{2, Vector{BitVector}}): Lower and upper bit-wise bounds on the activation gradients for each layer.
ModelVerification.get_gradient_bounds — Method
get_gradient_bounds(nnet::Network, input::AbstractPolytope)
Get lower and upper bounds on the network gradient for given gradient bounds on activations, or given an input set. It first calculates the bit-wise lower and upper gradient bounds for each layer using act_gradient_bounds. Then, it computes the gradient bounds of the entire network for the weights using get_gradient_bounds.
Arguments
nnet (Network): Network to be propagated.
input (AbstractPolytope): Input set to be propagated through nnet.
Returns
(LG, UG): NTuple{2, Matrix} of the lower and upper gradient bounds of the entire network.
ModelVerification.get_gradient_bounds — Method
get_gradient_bounds(nnet::Network, LΛ::Vector{AbstractVector}, UΛ::Vector{AbstractVector})
Given bit-wise lower and upper gradient bounds for each layer (LΛ and UΛ), compute the lower and upper gradient bounds of the entire network for the weights of the layers.
Arguments
nnet (Network): Network to be propagated.
LΛ (Vector{AbstractVector}): Vector of bit-wise lower gradient bounds for each layer.
UΛ (Vector{AbstractVector}): Vector of bit-wise upper gradient bounds for each layer.
Returns
(LG, UG): NTuple{2, Matrix} of the lower and upper gradient bounds of the entire network.
Bound & Specification operations
ModelVerification.interval_map — Method
interval_map(W::Matrix, l::AbstractVecOrMat, u::AbstractVecOrMat)
Simple linear mapping of intervals through the weights:
L, U := ([W]₊ * l + [W]₋ * u), ([W]₊ * u + [W]₋ * l)
where [W]₊ and [W]₋ denote the elementwise positive and negative parts of W.
Arguments
W (AbstractMatrix): Matrix of weights.
l (AbstractVecOrMat): Vector or matrix of lower bounds.
u (AbstractVecOrMat): Vector or matrix of upper bounds.
Returns
(l_new, u_new): New bounds after the linear mapping.
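Worked example: with W = [1 -1], the positive entry propagates the matching bound and the negative entry the opposite one.

W = [1.0 -1.0]
l, u = [0.0, 0.0], [1.0, 1.0]
l_new, u_new = interval_map(W, l, u)
# l_new = [W]₊*l + [W]₋*u = 0 - 1 = [-1.0]
# u_new = [W]₊*u + [W]₋*l = 1 - 0 = [1.0]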
ModelVerification.get_bounds — Method
get_bounds(nnet::Network, input; before_act::Bool = false)
Computes node-wise bounds given an input set. The optional keyword argument determines whether the bounds are pre- or post-activation.
Arguments
nnet (Network): Network to be propagated.
input (AbstractPolytope): Input set to be propagated through nnet.
before_act: Optional argument that determines whether the bounds are pre- or post-activation. Defaults to false.
Returns
Vector{Hyperrectangle}: Bounds for all nodes. bounds[1] is the input set overapproximated with a Hyperrectangle.
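A usage sketch with the two-layer net from the Network section and a box input set (Hyperrectangle from LazySets):

using LazySets
input = Hyperrectangle(low = [-1.0, -1.0], high = [1.0, 1.0])
bounds = get_bounds(net, input)                   # post-activation bounds per layer
pre = get_bounds(net, input; before_act = true)   # pre-activation bounds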
ModelVerification.get_bounds — Method
get_bounds(problem::Problem; kwargs...)
Compute node-wise bounds for a given Problem using get_bounds(nnet, input).
Arguments
problem (Problem): Problem to be propagated.
kwargs: Keyword arguments to be passed to get_bounds(nnet, input), such as the optional boolean argument before_act.
Returns
Vector{Hyperrectangle}: Bounds for all nodes. bounds[1] is the input set.
ModelVerification.isbounded — Method
isbounded(input)
Check if the input set is bounded. If the input is of type HPolytope, then LazySets.isbounded converts the HPolytope to an HPolyhedron and checks if that is bounded. Otherwise, LazySets.isbounded is called directly.
Arguments
input: Input set to be checked for boundedness.
Returns
true if input is bounded, false otherwise.
ModelVerification.is_hypercube — Method
is_hypercube(set::Hyperrectangle)
Check if set is a hypercube. This is done by checking if all the radii of the Hyperrectangle are equal in all directions.
Arguments
set (Hyperrectangle): Set to be checked for hypercube-ness.
Returns
true if set is a hypercube, false otherwise.
ModelVerification.is_halfspace_equivalent — Method
is_halfspace_equivalent(set)
Check if set is halfspace equivalent. This is done by checking if the number of constraints in the set is equal to 1.
Arguments
set: Set to be checked for halfspace equivalence.
Returns
true if set is halfspace equivalent, false otherwise.
ModelVerification.UnboundedInputError — Type
UnboundedInputError <: Exception
Exception thrown when an input set is unbounded.
Fields
msg (String): Error message.