Publication
CCS 2023
Workshop paper

Certified Robustness of Static Deep Learning-based Malware Detectors against Patch and Append Attacks

Abstract

Machine learning-based (ML) malware detectors have been shown to be susceptible to adversarial malware examples. Given the vulnerability of deep learning detectors to small changes to the input file, we propose a practical and certifiable defense against patch and append attacks on malware detection. Our defense is inspired by the concept of (de)randomized smoothing, a certifiable defense against patch attacks on image classifiers, which we adapt by: (1) presenting a novel chunk-based smoothing scheme that operates on subsequences of bytes within an executable; (2) deriving a certificate that measures the robustness against patch attacks and append attacks. Our approach works as follows: (i) during the training phase, a base classifier is trained to make classifications on a subset of contiguous bytes, or chunk of bytes, from an executable; (ii) at test time, an executable is divided into non-overlapping chunks of fixed size and our detection system classifies the original executable as the majority vote over the predicted classes of the chunks. Leveraging the fact that patch and append attacks can only influence a certain number of chunks, we derive meaningfully large robustness certificates against both attacks. To demonstrate the suitability of our approach, we have trained a classifier with our chunk-based scheme on the BODMAS dataset. We show that the proposed chunk-based smoothed classifier is more robust against the benign injection attack and state-of-the-art evasion attacks than a non-smoothed classifier.
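To make the test-time procedure concrete, the following is a minimal sketch of chunk-based majority-vote classification as described in the abstract. The chunk size, the binary label convention, and the `base_classifier` callable are assumptions for illustration, not the paper's actual implementation details.

```python
import numpy as np

CHUNK_SIZE = 512  # hypothetical chunk length in bytes; the paper's value may differ


def classify_smoothed(exe_bytes: bytes, base_classifier) -> int:
    """Chunk-based smoothed classification (sketch).

    Splits the executable into non-overlapping fixed-size chunks,
    classifies each chunk with the base classifier, and returns the
    majority vote (0 = benign, 1 = malicious, assumed labels).
    """
    # Divide the byte sequence into non-overlapping chunks of CHUNK_SIZE bytes.
    chunks = [exe_bytes[i:i + CHUNK_SIZE]
              for i in range(0, len(exe_bytes), CHUNK_SIZE)]

    # Classify each chunk independently with the base classifier
    # (assumed to map a chunk of bytes to a class label).
    votes = [base_classifier(chunk) for chunk in chunks]

    # The final prediction is the majority vote over the per-chunk predictions.
    counts = np.bincount(votes, minlength=2)
    return int(np.argmax(counts))
```

Because a patch or append attack can only alter a bounded number of chunks, the vote margin between the two classes directly bounds how many chunk predictions an attacker would need to flip, which is the intuition behind the robustness certificate.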
