Publication
IBM J. Res. Dev.
Paper
Visual saliency on networks of neurosynaptic cores
Abstract
Identifying interesting or salient regions in an image plays an important role in multimedia search, object tracking, active vision, segmentation, and classification. Existing saliency extraction algorithms are implemented using the conventional von Neumann computational model. We propose a bottom-up model of visual saliency, inspired by the primate visual cortex, which is compatible with TrueNorth, a low-power, brain-inspired neuromorphic substrate that runs large-scale spiking neural networks in real time. Our model uses color, motion, luminance, and shape to identify salient regions in video sequences. For a three-color-channel video with 240 × 136 pixels per frame and 30 frames per second, we demonstrate a model utilizing ∼3 million neurons, which achieves competitive detection performance on a publicly available dataset while consuming ∼200 mW.
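The abstract describes a bottom-up saliency pipeline that combines color, motion, and luminance channels. A minimal non-spiking sketch of that idea, assuming Itti-style feature channels (red-green and blue-yellow color opponency, frame-difference motion, box-filter center-surround luminance contrast) normalized and averaged into one map; all function and parameter names here are illustrative, not from the paper:

```python
import numpy as np

def normalize(m):
    # Scale a feature map to [0, 1]; a flat map contributes nothing.
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def saliency(prev_frame, frame, k=9):
    # frame, prev_frame: H x W x 3 RGB arrays with values in [0, 1].
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    luminance = (r + g + b) / 3.0

    # Color opponency channels (assumed Itti-style red-green, blue-yellow).
    rg = np.abs(r - g)
    by = np.abs(b - (r + g) / 2.0)

    # Motion channel: per-pixel luminance change between consecutive frames.
    motion = np.abs(luminance - prev_frame.mean(axis=-1))

    # Center-surround luminance contrast via a crude k x k box-blur surround.
    pad = np.pad(luminance, k // 2, mode="edge")
    H, W = luminance.shape
    surround = np.empty_like(luminance)
    for i in range(H):
        for j in range(W):
            surround[i, j] = pad[i:i + k, j:j + k].mean()
    contrast = np.abs(luminance - surround)

    # Combine normalized channels into a single saliency map in [0, 1].
    channels = [contrast, rg, by, motion]
    return sum(normalize(c) for c in channels) / len(channels)
```

On TrueNorth each of these channels would instead be computed by populations of spiking neurons, but the channel-extract, normalize, and sum structure is the same.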