Hardware design and the competency awareness of a neural network
Abstract
The ability to estimate the uncertainty of predictions made by a neural network is essential when applying neural networks to tasks such as medical diagnosis and autonomous driving. This capability is of particular relevance when deploying networks on devices with limited hardware resources, yet existing competency-aware neural networks largely ignore resource constraints. Here we examine the relationship between hardware platforms and the competency awareness of a neural network. We highlight the impact of two key areas of hardware development — the increasing memory capacity of accelerator architectures and device-to-device variation in the emerging devices used for in-memory computing — on the quality of uncertainty estimation. We also consider the challenges that developments in uncertainty estimation methods impose on hardware design. Finally, we explore the innovations required in hardware, software, and hardware–software co-design to build future competency-aware neural networks.