Sarath Sreedharan, Tathagata Chakraborti, et al.
AAAI 2020
This work studies the sensitivity of neural networks to weight perturbations, corresponding to a newly developed threat model that perturbs the network parameters. We propose an efficient approach to compute a certified robustness bound on weight perturbations, within which neural networks will not produce the erroneous outputs desired by the adversary. In addition, we identify a useful connection between our certification method and the problem of weight quantization, a popular model compression technique for deep neural networks (DNNs) and a 'must-try' step in the design of DNN inference engines on resource-constrained computing platforms such as mobile devices, FPGAs, and ASICs. Specifically, we study weight quantization - weight perturbation in the non-adversarial setting - through the lens of certified robustness, and we demonstrate significant improvements in the generalization ability of quantized networks through our robustness-aware quantization scheme.
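To illustrate how a certified bound can interact with quantization, the Python sketch below picks the smallest uniform bit-width whose worst-case rounding error stays inside a given certified L-infinity radius, so that quantization behaves as a benign weight perturbation. The function name, the uniform quantizer, and the radius eps are assumptions made for illustration; this is not the paper's actual certification or quantization procedure.

import numpy as np

def robustness_aware_quantize(weights, eps, max_bits=16):
    """Hypothetical sketch: choose the smallest uniform bit-width whose
    worst-case quantization error (half a quantization step) fits inside
    an assumed certified L-infinity perturbation radius eps, so the
    quantized weights remain within the certified region."""
    w_min, w_max = float(weights.min()), float(weights.max())
    w_range = w_max - w_min
    for bits in range(2, max_bits + 1):
        step = w_range / (2 ** bits - 1)   # uniform quantization step size
        if step / 2.0 <= eps:              # worst-case rounding error within the radius
            q = np.round((weights - w_min) / step) * step + w_min
            return q, bits
    return weights, None                   # eps too tight for <= max_bits quantization

# Example usage with a random weight matrix and an assumed certified radius.
w = np.random.randn(256, 128).astype(np.float32)
q, bits = robustness_aware_quantize(w, eps=0.01)

In this view, the certified radius acts as a budget that the quantization error must respect, which is the intuition behind the robustness-aware quantization scheme described in the abstract.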
Sijia Liu, Parikshit Ram, et al.
AAAI 2020
Binchi Zhang, Yushun Dong, et al.
ICLR 2024
Gururaj Saileshwar, Prashant J. Nair, et al.
HPCA 2018