On the Adversarial Robustness of Vision Transformers
Rulin Shao, Zhouxing Shi, et al.
NeurIPS 2022
Prior literature on adversarial attacks has mainly focused on attacking with, and defending against, a single threat model, e.g., perturbations bounded in an Lp-norm ball. However, multiple threat models can be combined into composite perturbations. One such approach, composite adversarial attack (CAA), not only expands the perturbable space of the image but may also go undetected by current modes of robustness evaluation. This paper demonstrates how the order of attacks in CAA affects the resulting image, and provides real-time inference results from different models, letting users configure attack-level parameters and rapidly evaluate model predictions. A leaderboard to benchmark adversarial robustness against CAA is also introduced.
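A minimal sketch of the order-dependence the abstract describes, assuming a toy grayscale image in [0, 1] and two illustrative perturbation types (brightness and contrast). The function names and parameters below are hypothetical stand-ins, not the paper's actual attack implementation or API.

```python
import numpy as np

def perturb_brightness(x, delta=0.1):
    """Additive brightness shift, clipped to the valid pixel range."""
    return np.clip(x + delta, 0.0, 1.0)

def perturb_contrast(x, factor=1.3):
    """Contrast rescaling around the mid-gray point."""
    return np.clip((x - 0.5) * factor + 0.5, 0.0, 1.0)

def composite_attack(x, attacks):
    """Apply a sequence of perturbations; the order is part of the attack."""
    for attack in attacks:
        x = attack(x)
    return x

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# The same two perturbations applied in different orders yield different
# images, illustrating why CAA's attack order affects the result.
a = composite_attack(img, [perturb_brightness, perturb_contrast])
b = composite_attack(img, [perturb_contrast, perturb_brightness])
print("max pixel difference between orders:", np.abs(a - b).max())
```

Even without clipping, brightness-then-contrast shifts the image by delta * factor while the reverse order shifts it by delta, so the two compositions genuinely differ; a composite attacker can therefore treat the ordering itself as a searchable attack parameter.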
Alex Mathai, Sambaran Bandyopadhyay, et al.
IJCAI 2022
Takayuki Katsuki, Kohei Miyaguchi, et al.
IJCAI 2022
Mayank Mishra, Dhiraj Madan, et al.
IJCAI 2022