Very thought-provoking talk by Justin Gilmer at the #ICML2020 UDL workshop. Adversarial examples are just one case of out-of-distribution error. There is no particular reason to defend against the nearest OOD error (i.e., an L-infty adversarial example) 1/
Instead, we need to work on the more general problem of OOD generalization. Furthermore, real adversaries don't care about finding the nearest error. They often make huge image changes (he gave an example of image spam) 2/
For more, see "A Fourier Perspective on Model Robustness in Computer Vision" by Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, Ekin D. Cubuk, and Justin Gilmer: https://arxiv.org/abs/1906.08988 end/