Attacks

ModelVerification.APGDMethod
APGD(model, loss, x, y; ϵ = 10, step_size = 0.1, iters = 100, clamp_range = (0, 1))

Auto Projected Gradient Descent (APGD) is a variant of PGD that automatically adapts its step size during the attack. It was proposed by Croce and Hein 2020 (https://arxiv.org/pdf/2003.01690.pdf).

Arguments:

  • model: The model to base the attack upon.
  • loss: The loss function to use. This assumes that the loss function includes the prediction function, i.e. loss(x, y) = crossentropy(model(x), y).
  • x: The input to be perturbed.
  • step_size: The step size used for each FGSM step (the ϵ value of a single FGSM update).
  • rho: The PGD success-rate threshold used to decide when to reduce the step size.
  • a: The momentum coefficient.
  • iters: The maximum number of iterations to run the algorithm for.
source
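
A minimal usage sketch of APGD, assuming a Flux model: the toy Chain, input, and label below are made up for illustration, and the keyword values simply repeat the defaults from the signature above. Depending on how the model is defined, x may need to be batched.

using Flux, ModelVerification

model = Chain(Dense(2 => 16, relu), Dense(16 => 2), softmax)  # toy classifier, illustrative only
loss(x, y) = Flux.crossentropy(model(x), y)                   # loss wraps the prediction, as required

x = rand(Float32, 2)     # input to perturb
y = Flux.onehot(1, 1:2)  # label of x

x_adv = ModelVerification.APGD(model, loss, x, y;
                               ϵ = 10, step_size = 0.1, iters = 100, clamp_range = (0, 1))
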
ModelVerification.FGSMMethod
FGSM(model, loss, x, y; ϵ = 0.1, clamp_range = (0, 1))

Fast Gradient Sign Method (FGSM) creates adversarial examples by pushing the input in the direction of the sign of the gradient, with the perturbation bounded by the ϵ parameter.

This method was proposed by Goodfellow et al. 2014 (https://arxiv.org/abs/1412.6572).

Arguments:

  • model: The model to base the attack upon.
  • loss: The loss function to use. This assumes that the loss function includes the predict function, i.e. loss(x, y) = crossentropy(model(x), y).
  • x: The input to be perturbed by the FGSM algorithm.
  • ϵ: The amount of perturbation to apply.
source
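
The core FGSM update can be written in a few lines of Flux. The sketch below illustrates the step described above and is not necessarily the package's exact implementation; the toy model and data are made up.

using Flux

model = Chain(Dense(4 => 8, relu), Dense(8 => 3), softmax)  # toy classifier, illustrative only
loss(x, y) = Flux.crossentropy(model(x), y)

x = rand(Float32, 4)
y = Flux.onehot(2, 1:3)

ϵ, clamp_range = 0.1f0, (0.0f0, 1.0f0)
∇x = Flux.gradient(x̃ -> loss(x̃, y), x)[1]           # gradient of the loss w.r.t. the input
x_adv = clamp.(x .+ ϵ .* sign.(∇x), clamp_range...)  # step of size ϵ along the gradient's sign, then clamp
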
ModelVerification.PGDMethod
PGD(model, loss, x, y; ϵ = 10, step_size = 0.1, iters = 100, clamp_range = (0, 1))

Projected Gradient Descent (PGD) is an iterative variant of FGSM that starts from a random point near the input. At every step, an FGSM update moves the input in the direction of the sign of the gradient, and the perturbation is kept bounded in the l∞ norm. This method was proposed by Madry et al. 2017 (https://arxiv.org/pdf/1706.06083.pdf).

Arguments:

  • model: The model to base the attack upon.
  • loss: The loss function to use. This assumes that the loss function includes the prediction function, i.e. loss(x, y) = crossentropy(model(x), y).
  • x: The input to be perturbed.
  • step_size: The step size used for each FGSM step (the ϵ value of a single FGSM update).
  • iters: The maximum number of iterations to run the algorithm for.
source
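
For intuition, the sketch below reimplements the loop described above: a random start inside the ϵ-ball, repeated FGSM steps, projection back onto the l∞ ball, and clamping to the valid input range. It is an illustrative reimplementation under those assumptions, not the package's own code; the helper name pgd_sketch is made up, and the model argument is only kept to mirror the documented signature, since the loss already closes over the model.

using Flux

function pgd_sketch(model, loss, x, y; ϵ = 10, step_size = 0.1, iters = 100, clamp_range = (0, 1))
    δ = (rand(Float32, size(x)) .- 0.5f0) .* 2 .* ϵ     # random start inside the ϵ-ball
    x_adv = clamp.(x .+ δ, clamp_range...)
    for _ in 1:iters
        ∇x = Flux.gradient(x̃ -> loss(x̃, y), x_adv)[1]  # gradient w.r.t. the current iterate
        x_adv = x_adv .+ step_size .* sign.(∇x)          # FGSM step
        x_adv = clamp.(x_adv, x .- ϵ, x .+ ϵ)            # project back onto the l∞ ball around x
        x_adv = clamp.(x_adv, clamp_range...)            # keep the input in the valid range
    end
    return x_adv
end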