Distributed analytics for audio sensing applications
Abstract
A wide array of military and commercial applications rely on the collection and processing of audio data. One approach to performing analytics and machine learning on such data is to upload and process it at a central server (e.g., the cloud), which offers abundant processing resources and the ability to run sophisticated machine learning models and analytics on the audio data. This approach can be inefficient due to the low bandwidth and energy limitations of mobile devices, as well as intermittent connectivity to a central collection point such as the cloud. It is also problematic because audio data are often highly sensitive and subject to privacy constraints. An alternative approach is to perform audio analytics at the edge of the network, where the data are generated. The challenge in this approach is the need to perform analytics subject to resource constraints that limit the performance and accuracy of predictive analytics. In this paper, we present a system for performing predictive analytics on audio data, in which training is executed in the cloud and classification can be executed at the edge. We present the design principles and architecture of the system, and quantify the performance tradeoff of executing analytics on contemporary edge devices versus in the cloud.