Publication
SPIE DCS 2019
Conference paper
Model poisoning attacks against distributed machine learning systems
Abstract
Future military coalition operations will increasingly rely on machine learning (ML) methods to improve situational awareness. The coalition context presents unique challenges for ML: the tactical environment imposes significant computing and communications limitations, and it also contains an adversarial presence. Further, coalition operations must be conducted in a distributed manner while coping with the constraints posed by the operational environment. Envisioned ML deployments in military assets must be resilient to these challenges. Here, we focus on the susceptibility of ML models to being poisoned (during training) or fooled (after training) by adversarial inputs. We review recent work on distributed adversarial ML, and present new results from our own investigations into model poisoning attacks on distributed learning systems without a central parameter aggregation node.
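The abstract refers to model poisoning during training in a distributed learning system with no central parameter aggregation node. As a rough illustration only, the Python sketch below simulates a ring of peers that average parameters with their neighbors each round, while one malicious node injects a boosted update toward an attacker-chosen target; the ring topology, linear model, and boosting factor are illustrative assumptions, not the paper's actual experimental setup.

```python
# Hypothetical sketch of a model poisoning attack in a fully decentralized
# learning setup (no central parameter server). Topology, model, and attack
# scaling are illustrative assumptions, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)

NUM_NODES = 5
DIM = 10
ROUNDS = 20
LR = 0.1
MALICIOUS_NODE = 0             # assumed single adversarial participant
ATTACK_TARGET = np.ones(DIM)   # attacker-chosen parameter vector
BOOST = 5.0                    # scaling used so the poison survives averaging

# Each node holds local data drawn from the same benign linear model w_true.
w_true = rng.normal(size=DIM)
local_data = []
for _ in range(NUM_NODES):
    X = rng.normal(size=(50, DIM))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    local_data.append((X, y))

# All nodes start from identical initial parameters.
models = [np.zeros(DIM) for _ in range(NUM_NODES)]

def local_sgd_step(w, X, y, lr):
    """One gradient step on the local least-squares objective."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

for _ in range(ROUNDS):
    # 1) Local update phase.
    updated = []
    for i, (X, y) in enumerate(local_data):
        w = local_sgd_step(models[i], X, y, LR)
        if i == MALICIOUS_NODE:
            # Poisoning: replace the honest update with a boosted step toward
            # the attacker's target parameters.
            w = models[i] + BOOST * (ATTACK_TARGET - models[i])
        updated.append(w)

    # 2) Decentralized aggregation: each node averages with its ring neighbors;
    #    no central aggregator collects or filters the updates.
    models = [
        np.mean([updated[i],
                 updated[(i - 1) % NUM_NODES],
                 updated[(i + 1) % NUM_NODES]], axis=0)
        for i in range(NUM_NODES)
    ]

# Measure how far honest nodes were pulled toward the attacker's target.
for i in range(NUM_NODES):
    print(f"node {i}: ||w - w_true|| = {np.linalg.norm(models[i] - w_true):.2f}, "
          f"||w - target|| = {np.linalg.norm(models[i] - ATTACK_TARGET):.2f}")
```

Running the sketch shows the poisoned parameters spreading through peer averaging to nodes that never trained on malicious data, which is the qualitative effect the abstract describes for aggregator-free distributed learning.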