Adversarial Attacks on Fairness of Graph Neural Networks
Binchi Zhang, Yushun Dong, et al.
ICLR 2024
Recent works have developed several methods for defending neural networks against adversarial attacks with certified guarantees. However, these techniques can be computationally costly due to the use of certification during training. We develop a new regularizer that is both more efficient than existing certified defenses, requiring only one additional forward propagation through the network, and able to train networks to similar certified accuracy. Through experiments on MNIST and CIFAR-10, we demonstrate improvements in training speed and comparable certified accuracy relative to state-of-the-art certified defenses.
Gururaj Saileshwar, Prashant J. Nair, et al.
HPCA 2018
Kristjan Greenewald, Yuancheng Yu, et al.
NeurIPS 2024
Chulin Xie, Keli Huang, et al.
ICLR 2020