This is a guest post by Alex Shepherd.
There is a growing body of research literature concerning the potential threat of physical-world adversarial attacks against machine-vision models. By applying adversarial perturbations to physical objects, an attacker can cause machine-vision models to misclassify images containing those objects. The potential impact is significant: physical-world attacks have been identified as a risk area for autonomous vehicles and military UAVs.
For this Three Paper Thursday, we examine the following papers exploring the potential threat of physical-world adversarial attacks, with a focus on the implications for autonomous vehicles.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv:1607.02533 (2016)
In this seminal paper, Kurakin et al. report the results of an experiment in which photographs of printed adversarial images, taken with a phone camera, were fed as input to a pre-trained ImageNet Inception v3 image classifier. Their methodology assumed a white-box threat model: the adversarial images were crafted from the ImageNet validation dataset using the Inception v3 model itself.
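The attacks studied in the paper are built on the fast gradient sign method (FGSM) and its iterative variants. As a rough illustration of the white-box setting, here is a minimal PyTorch sketch of single-step FGSM against a pre-trained Inception v3; the epsilon value, the dummy input, and the omission of ImageNet normalisation are simplifying assumptions on my part, not the paper's exact experimental setup.

```python
import torch
import torchvision.models as models

# Load a pre-trained Inception v3, as used in the paper's experiments.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.007):
    """Craft an adversarial image with single-step FGSM: perturb each
    pixel by +/- epsilon in the direction that increases the model's
    loss on the true label (requires white-box gradient access)."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return adversarial.clamp(0, 1).detach()

# Illustrative usage with a dummy 299x299 input (Inception v3's expected
# size) and an assumed ground-truth class index.
x = torch.rand(1, 3, 299, 299)
y = torch.tensor([0])
x_adv = fgsm_attack(x, y)
```

In the paper's physical-world pipeline, images like `x_adv` would then be printed, photographed with a phone camera, and passed back through the classifier to test whether the perturbation survives the print-and-photograph transformation.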