Publication
ICASSP 2024
Conference paper
Adversarial Robustness of Convolutional Models Learned in the Frequency Domain
Abstract
This paper presents an extensive comparison of the noise robustness of standard Convolutional Neural Networks (CNNs) trained on image inputs and those trained in the frequency domain. We investigate the robustness of CNNs to small adversarial perturbations in the RGB input space and show that CNNs trained on Discrete Cosine Transform (DCT) inputs exhibit significantly better robustness to both adversarial perturbations and common spatial transformations than standard CNNs trained on RGB/grayscale inputs. Our results suggest that frequency-domain learning of convolutional models may disentangle the frequencies corresponding to semantic features from those corresponding to adversarial features, resulting in improved adversarial robustness. This research highlights the potential of frequency-domain learning to improve neural network robustness to test-time noise and warrants further investigation in this area.
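The abstract describes training CNNs on DCT inputs rather than raw pixels. As a minimal sketch of what such a frequency-domain input might look like (this is an illustration, not the paper's actual pipeline; the block size and orthonormal DCT variant are assumptions), one can apply a block-wise 2-D DCT to an image, JPEG-style:

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Apply a 2-D type-II DCT to each non-overlapping block x block tile.

    A hypothetical preprocessing step: the resulting coefficient map could
    be fed to a CNN in place of raw pixel intensities.
    """
    h, w = img.shape
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    out = np.empty((h, w), dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = img[i:i + block, j:j + block].astype(np.float64)
            # A 2-D DCT is a 1-D DCT along rows followed by one along columns.
            out[i:i + block, j:j + block] = dct(
                dct(tile, axis=0, norm="ortho"), axis=1, norm="ortho"
            )
    return out

# Example: transform a random 32x32 "image" into DCT coefficients.
img = np.random.default_rng(0).integers(0, 256, size=(32, 32))
coeffs = blockwise_dct(img)
print(coeffs.shape)  # (32, 32)
```

With the orthonormal DCT, the top-left coefficient of each tile is its DC component (the tile's total intensity scaled by the block size), while the remaining coefficients capture progressively higher spatial frequencies, which is what allows a frequency-domain model to treat low- and high-frequency content separately.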