Network

Model

Network

ModelVerification.Layer (Type)
Layer{F<:ActivationFunction, N<:Number}

A Layer consists of weights and a bias for the linear mapping, and an activation function for the nonlinear mapping.

Fields

  • weights::Matrix{N}
  • bias::Vector{N}
  • activation::F

See also: Network
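
For illustration, here is a minimal sketch that constructs a single Layer and applies it by hand, assuming the default constructor takes the fields in the order listed above:

using ModelVerification

# A 2-in, 2-out layer: affine map followed by ReLU.
W = [1.0 -1.0; 0.5 2.0]
b = [0.0, -1.0]
layer = Layer(W, b, ReLU())

# Applying the layer by hand: ReLU(W*x + b), elementwise.
x = [1.0, 2.0]
ẑ = layer.weights * x .+ layer.bias   # [-1.0, 3.5]
y = max.(ẑ, 0.0)                      # [0.0, 3.5]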


Activation Functions

ModelVerification.ActivationFunction (Type)
ActivationFunction

Function that calculates the output of the node. Supported activation functions are:

  • ReLU (ReLU)
  • Max (Max)
  • Identity (Id)
  • Sigmoid (Sigmoid)
  • Tanh (Tanh)
ModelVerification.GeneralAct (Type)
GeneralAct <: ActivationFunction

Wrapper type for a general activation function.

Usage

act = GeneralAct(tanh)

act(0) == tanh(0)           # true
act(10.0) == tanh(10.0)     # true
act = GeneralAct(x->tanh.(x))

julia> act(-2:2)
5-element Array{Float64,1}:
 -0.9640275800758169
 -0.7615941559557649
  0.0
  0.7615941559557649
  0.9640275800758169
ModelVerification.PiecewiseLinear (Type)
PiecewiseLinear <: ActivationFunction

Activation function that uses linear interpolation between supplied knots. An extrapolation condition can be set for values outside the set of knots. The default is Line().

PiecewiseLinear(knots_x, knots_y, [extrapolation = Line()])

Usage

kx = [0.0, 1.2, 1.7, 3.1]
ky = [0.0, 0.5, 1.0, 1.5]
act = PiecewiseLinear(kx, ky)

act(first(kx)) == first(ky) == 0.0
act(last(kx))  == last(ky)  == 1.5

act(1.0)    # 0.4166666666666667
act(-102)   # -42.5
act = PiecewiseLinear(kx, ky, Flat())

act(-102)   # 0.0
act(Inf)    # 1.5

Extrapolations

  • Flat()
  • Line()
  • constant (supply a number as the argument)
  • Throw() (throws bounds error)

PiecewiseLinear uses Interpolations.jl.
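
For reference, roughly the same behavior can be reproduced directly with Interpolations.jl; this is a sketch of the underlying mechanism, not the package's internal code:

using Interpolations

kx = [0.0, 1.2, 1.7, 3.1]
ky = [0.0, 0.5, 1.0, 1.5]

# Gridded linear interpolation with linear extrapolation beyond the knots.
itp = extrapolate(interpolate((kx,), ky, Gridded(Linear())), Interpolations.Line())

itp(1.0)    # 0.4166666666666667, matches act(1.0) above
itp(-102)   # -42.5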


Helper Functions

Below are the helper functions for network loading (from file) and dumping (to file), network properties, activation function operations, gradient operations, and bound & specification operations.

Network loading and dumping

ModelVerification.onnx_parse (Method)
onnx_parse(onnx_model_path)

Creates the Model from the onnx_model_path. First, the computational graph of the ONNX model is created. Then, the Model is created using the information retrieved from the computational graph.

Arguments

  • onnx_model_path: String path to the ONNX model.

Returns

  • model_info (Model): Contains network information retrieved from the computational graph of the ONNX model.
ModelVerification.read_nnet (Method)
read_nnet(fname::String; last_layer_activation = Id())

Read in a neural network from a .nnet file and return a Network struct. The .nnet format is borrowed from NNet. The format assumes all hidden layers have ReLU activation. The keyword argument last_layer_activation sets the activation of the last layer and defaults to Id() (i.e., a linear output layer).

Arguments

  • fname (String): String path to the .nnet file.
  • last_layer_activation: Keyword argument that sets the activation of the last layer which defaults to Id().

Returns

  • Network struct containing the vector of layers read from the file.
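
Hypothetical usage (the path is a placeholder):

nnet = read_nnet("path/to/model.nnet")                             # linear (Id) output layer
nnet = read_nnet("path/to/model.nnet"; last_layer_activation = ReLU())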
ModelVerification.read_layer (Function)
read_layer(output_dim::Int64, f::IOStream, act = ReLU())

Read in layer from .nnet file and return a Layer containing its weights & biases. Optional argument act sets the activation function for the layer.

Arguments

  • output_dim (Int64): Output dimension of the layer.
  • f (IOStream): IO stream of the .nnet file.
  • act: Optional argument specifying the activation function of the layer. Defaults to ReLU().

Returns

  • Layer containing the weights and bias values (and the activation function of the layer).
ModelVerification.print_layer (Method)
print_layer(file::IOStream, layer)

Print to file an object implementing weights(layer) and bias(layer).

Arguments

  • file (IOStream): IO stream of the target .nnet file.
  • layer: Layer to be transcribed to file.
ModelVerification.print_header (Method)
print_header(file::IOStream, network; header_text="")

The NNet format has a particular header containing information about the network size and training data. print_header does not take training-related information into account (subject to change).

Arguments

  • file (IOStream): IO stream of the target .nnet file.
  • network: Network to be transcribed to file.
  • header_text: Optional header text that comes before the network information. Defaults to an empty string.
ModelVerification.write_nnet (Method)
write_nnet(filename, network; header_text)

Write network to filename.nnet. Note: Does not perform safety checks on inputs, so use with caution.

Based on python code at https://github.com/sisl/NNet/blob/master/utils/writeNNet.py and follows .nnet format given here: https://github.com/sisl/NNet.

Arguments

  • filename: String name of the .nnet file.
  • network: Network to be transcribed to filename.nnet.
  • header_text: Optional header text that comes before the network information.
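
A hypothetical round trip, assuming nnet is a Network as returned by read_nnet and that the .nnet extension is appended as described above:

write_nnet("path/to/copy", nnet; header_text = "exported network")
nnet2 = read_nnet("path/to/copy.nnet")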
ModelVerification.build_flux_model (Method)
build_flux_model(onnx_model_path)

Builds a Flux.Chain from the given ONNX model path.

Arguments

  • onnx_model_path: String path to ONNX model in .onnx file.

Returns

  • model: Flux.Chain constructed from the .onnx file.
ModelVerification.get_chain (Method)
get_chain(vertex)

Returns a Flux.Chain constructed from the given vertex. This is a helper function for build_flux_model.

Arguments

  • vertex: Vertex from the NaiveNASflux computation graph.

Returns

  • model: Flux.Chain constructed from the given vertex.
  • curr_vertex: The last vertex in the chain.
ModelVerification.purify_flux_model (Method)
purify_flux_model(model::Chain)

Removes the starting Flatten layer from the model. This is a wrapper function for remove_flux_start_flatten.

Arguments

  • model (Chain): Flux.Chain model.

Returns

  • model (Chain): Flux.Chain model with the starting Flatten layer removed.
ModelVerification.remove_flux_start_flatten (Method)
remove_flux_start_flatten(model::Chain)

Removes the starting Flatten layer from the model.

Arguments

  • model (Chain): Flux.Chain model.

Returns

  • model (Chain): Flux.Chain model with the starting Flatten layer removed.
ModelVerification.build_onnx_model (Method)
build_onnx_model(path, model::Chain, input::InputSpec)

Builds an ONNX model from the given Flux.Chain model and input specification. The ONNX model is saved to the given path.

Arguments

  • path (String): Path to save the ONNX model.
  • model (Chain): Flux.Chain model.
  • input (InputSpec): Input specification.

Returns

  • path (String): Path to the saved ONNX model.

Network properties

ModelVerification.get_act (Method)
get_act(l)

Returns the activation function of the node l if it exists.

Arguments

  • l: node

Returns

  • Activation function of the node if it exists.
  • Otherwise, returns nothing.
ModelVerification.n_nodes (Method)
n_nodes(L::Layer)

Returns the number of neurons in a layer.

Arguments

  • L (Layer): Layer of a network.

Returns

  • n (Int): Number of nodes in the layer L, which equals the number of biases in the layer.
ModelVerification.compute_output (Method)
compute_output(nnet::Network, input)

Propagates a given vector through a Network and computes the output.

Arguments

  • nnet (Network): Network to be propagated.
  • input: Vector to be propagated through nnet.

Returns

  • Vector of the output of nnet given input.
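
Conceptually, this is a plain forward pass. A minimal sketch, assuming Network stores its layers in a layers field and that activation functors are callable on vectors (as shown for GeneralAct above):

# Forward pass: apply each layer's affine map, then its activation.
function forward(nnet, x)
    for layer in nnet.layers
        x = layer.activation(layer.weights * x .+ layer.bias)
    end
    return x
end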

Activation function operations

ModelVerification.approximate_act_map (Method)
approximate_act_map(act::ActivationFunction, input::Hyperrectangle)

Returns a Hyperrectangle overapproximation of the activation map of the input. act must be monotonic.

Arguments

  • act (ActivationFunction): Activation function to be propagated.
  • input (Hyperrectangle): Input set to be propagated through act.

Returns

  • Hyperrectangle overapproximation of the activation map of the input.
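
For a monotonic activation, the image of a Hyperrectangle can be bounded by mapping its lower and upper corners. A sketch of the idea (not the package's internal code):

using LazySets

# For a monotonically increasing f, the image of [lo, hi] lies in [f.(lo), f.(hi)].
act_map_sketch(f, input::Hyperrectangle) =
    Hyperrectangle(low = f.(low(input)), high = f.(high(input)))

act_map_sketch(tanh, Hyperrectangle(low = [-1.0], high = [1.0]))   # [tanh(-1), tanh(1)]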
ModelVerification.approximate_act_map (Method)
approximate_act_map(layer::Layer, input::Hyperrectangle)

Returns a Hyperrectangle overapproximation of the activation map of the input for the given layer. The activation function of the layer must be monotonic.

Arguments

  • layer (Layer): Layer to be propagated for the activation map.
  • input (Hyperrectangle): Input set to be propagated through layer.

Returns

  • Hyperrectangle overapproximation of the activation map of the input.

Gradient operations

ModelVerification.get_gradient (Method)
get_gradient(nnet::Network, x::Vector)

Given a network, computes the gradient of the output with respect to the input x. The function propagates through the layers of the network, computing the gradient at each layer; by the chain rule, the gradient of the current layer is multiplied with the accumulated gradient of the previous layers.

Arguments

  • nnet (Network): Network to be propagated.
  • x (Vector): Vector to be propagated through nnet.

Returns

  • Matrix of the gradient for the input x after propagating through nnet.
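
A conceptual sketch of this chain-rule accumulation for a ReLU network (a hypothetical helper, not the exported method; assumes a layers field as above):

using LinearAlgebra

function gradient_sketch(nnet, x)
    J = Matrix{Float64}(I, length(x), length(x))
    for layer in nnet.layers
        ẑ = layer.weights * x .+ layer.bias
        g = Diagonal(Float64.(ẑ .> 0))   # ReLU gradient at ẑ
        J = g * layer.weights * J        # chain rule
        x = max.(ẑ, 0.0)
    end
    return J
end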
ModelVerification.act_gradient (Method)
act_gradient(act::ReLU, z_hat::Vector)

Computes the gradient of the ReLU activation function at the point z_hat. For each element of z_hat: if z_hat[i] > 0, then act_gradient[i] = 1; otherwise act_gradient[i] = 0.

Arguments

  • act (ReLU): ReLU activation function.
  • z_hat (Vector): Vector to be propagated through act.

Returns

  • Vector of the gradient of act at z_hat. Each element in the vector corresponds to the gradient of a particular neuron.
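
Equivalently, as a one-line sketch (hypothetical helper, not the exported method):

relu_gradient(ẑ) = ẑ .> 0            # BitVector: 1 where ẑ[i] > 0

relu_gradient([-1.0, 0.0, 2.5])      # [0, 0, 1]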
ModelVerification.act_gradient (Method)
act_gradient(act::Id, z_hat::Vector)

Computes the gradient of the Identity activation function at the point z_hat. For each element of z_hat, act_gradient[i] = 1.

Arguments

  • act (Id): Identity activation function.
  • z_hat (Vector): Vector to be propagated through act.

Returns

  • Vector of the gradient of act at z_hat. Each element in the vector corresponds to the gradient of a particular neuron.
ModelVerification.relaxed_relu_gradient (Method)
relaxed_relu_gradient(l::Real, u::Real)

Return the relaxed slope of a ReLU activation based on the lower and upper bounds of its input. This is a linear relaxation of the ReLU gradient over the interval [l, u]:

  • f'(x) = 0 if u ≤ 0 (the neuron is always inactive),
  • f'(x) = 1 if l ≥ 0 (the neuron is always active),
  • f'(x) = u/(u-l) if l < 0 < u, which is the slope of the line connecting the points (l, ReLU(l)) and (u, ReLU(u)).

This provides a differentiable approximation of the ReLU gradient within the interval [l, u].

Arguments

  • l (Real): Lower-bound of the input to the ReLU activation function.
  • u (Real): Upper-bound of the input to the ReLU activation function.

Returns

  • 0.0 if u <= 0.0,
  • 1.0 if l >= 0.0,
  • u/(u-l) otherwise.
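
A direct transcription of these cases (a hypothetical helper mirroring relaxed_relu_gradient):

relaxed_slope(l, u) = u <= 0 ? 0.0 : (l >= 0 ? 1.0 : u / (u - l))

relaxed_slope(-2.0, -1.0)   # 0.0   (always inactive)
relaxed_slope(1.0, 3.0)     # 1.0   (always active)
relaxed_slope(-1.0, 3.0)    # 0.75  (unstable: slope of the chord)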
ModelVerification.act_gradient_bounds (Method)
act_gradient_bounds(nnet::Network, input::AbstractPolytope)

Compute bit-wise bounds on the post-activation gradients given an input set. Currently only the ReLU activation function is supported. The function first calculates the bounds of the input to each layer using the get_bounds function, which propagates the input set through each layer. It then computes the bit-wise lower and upper gradients for each layer using act_gradient. The final output is a tuple of two vectors of BitVectors, where each element corresponds to the gradient of a particular neuron: either 0 (not activated) or 1 (activated).

Arguments

  • nnet (Network): Network to be propagated.
  • input (AbstractPolytope): Input set to be propagated through nnet.

Returns

  • LΛ, UΛ::NTuple{2, Vector{BitVector}}: Lower and upper bit-wise bounds on the activation gradients for each layer.
ModelVerification.get_gradient_bounds (Method)
get_gradient_bounds(nnet::Network, input::AbstractPolytope)

Get lower and upper bounds on the network gradient, given either gradient bounds on the activations or an input set. It first calculates the bit-wise lower and upper gradient bounds for each layer using act_gradient_bounds. It then computes the gradient bounds of the entire network using the method of get_gradient_bounds that takes activation gradient bounds.

Arguments

  • nnet (Network): Network to be propagated.
  • input (AbstractPolytope): Input set to be propagated through nnet.

Returns

  • (LG, UG): NTuple{2, Matrix{FloatType[]}} of the lower and upper bounds on the gradient of the entire network.
ModelVerification.get_gradient_bounds (Method)
get_gradient_bounds(nnet::Network, LΛ::Vector{AbstractVector}, 
                    UΛ::Vector{AbstractVector})

Given bit-wise lower and upper gradient bounds for each layer (LΛ and UΛ), compute the lower and upper bounds of the entire network for the weights of the layers.

Arguments

  • nnet (Network): Network to be propagated.
  • LΛ (Vector{AbstractVector}): Vector of bit-wise lower gradient bounds for each layer.
  • UΛ (Vector{AbstractVector}): Vector of bit-wise upper gradient bounds for each layer.

Returns

  • (LG, UG): NTuple{2, Matrix{FloatType[]}} of the lower and upper bounds on the gradient of the entire network.

Bound & Specification operations

ModelVerification.interval_map (Method)
interval_map(W::Matrix, l::AbstractVecOrMat, u::AbstractVecOrMat)

Simple linear mapping of the weights on intervals: L, U := ([W]₊l + [W]₋u, [W]₊u + [W]₋l), where [W]₊ and [W]₋ denote the positive and negative parts of W, respectively.

Arguments

  • W (AbstractMatrix): Matrix of weights.
  • l (AbstractVecOrMat): Vector or matrix of lower bounds.
  • u (AbstractVecOrMat): Vector or matrix of upper bounds.

Returns

  • (l_new, u_new): New bounds after the linear mapping.
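
A sketch of this mapping, splitting W into its positive and negative parts:

function interval_map_sketch(W, l, u)
    Wp, Wm = max.(W, 0), min.(W, 0)    # [W]₊ and [W]₋
    (Wp * l .+ Wm * u, Wp * u .+ Wm * l)
end

W = [1.0 -2.0; 0.5 1.0]
interval_map_sketch(W, [-1.0, -1.0], [1.0, 1.0])   # ([-3.0, -1.5], [3.0, 1.5])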
ModelVerification.get_bounds (Method)
get_bounds(nnet::Network, input; before_act::Bool = false)

Computes node-wise bounds given an input set. The optional keyword argument before_act determines whether the bounds are pre-activation or post-activation; it defaults to false (post-activation).

Arguments

  • nnet (Network): Network to be propagated.
  • input (AbstractPolytope): Input set to be propagated through nnet.
  • before_act: Optional argument that determines whether the bounds are pre- or post-activation. Defaults to false.

Returns

  • Vector{Hyperrectangle}: bounds for all nodes. bounds[1] is the input set overapproximated with a Hyperrectangle.
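
Hypothetical usage, with nnet a Network and a box input set:

using LazySets

input = Hyperrectangle(low = [-1.0, -1.0], high = [1.0, 1.0])
bounds = get_bounds(nnet, input; before_act = true)   # pre-activation bounds
bounds[1]                                             # input set as a Hyperrectangle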
ModelVerification.get_bounds (Method)
get_bounds(problem::Problem; kwargs...)

Compute node-wise bounds for a given Problem using get_bounds(nnet, input).

Arguments

  • problem (Problem): Problem to be propagated.
  • kwargs: Keyword arguments to be passed to get_bounds(nnet, input) such as the optional boolean argument before_act.

Returns

  • Vector{Hyperrectangle}: bounds for all nodes. bounds[1] is the input set.
ModelVerification.isbounded (Method)
isbounded(input)

Check if the input set is bounded. If the input is of type HPolytope, then LazySets.isbounded converts the HPolytope to an HPolyhedron and checks whether that is bounded. Otherwise, LazySets.isbounded is called directly.

Arguments

  • input: Input set to be checked for boundedness.

Returns

  • true if input is bounded, false otherwise.
ModelVerification.is_hypercube (Method)
is_hypercube(set::Hyperrectangle)

Check if set is a hypercube. This is done by checking whether the radii of the Hyperrectangle are equal in all directions.

Arguments

  • set (Hyperrectangle): Set to be checked for hypercube-ness.

Returns

  • true if set is a hypercube, false otherwise.
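
A sketch of the check, assuming LazySets' radius_hyperrectangle accessor:

using LazySets

function is_cube_sketch(set::Hyperrectangle)
    r = radius_hyperrectangle(set)
    all(r .== first(r))
end

is_cube_sketch(Hyperrectangle(zeros(2), [1.0, 1.0]))   # true
is_cube_sketch(Hyperrectangle(zeros(2), [1.0, 2.0]))   # false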
ModelVerification.is_halfspace_equivalent (Method)
is_halfspace_equivalent(set)

Check if set is halfspace equivalent. This is done by checking if the number of constraints in the set is equal to 1.

Arguments

  • set: Set to be checked for halfspace equivalence.

Returns

  • true if set is halfspace equivalent, false otherwise.