Exploiting local and generic features for accurate skin lesions classification using clinical and dermoscopy imaging
Abstract
Similarity in appearance between various skin diseases often makes it challenging for clinicians to identify the type of skin condition, and diagnostic accuracy is highly reliant on the clinician's level of expertise. There is also a great degree of subjectivity and inter-/intra-observer variability in clinical practice. In this paper, we propose a method for automatic skin disease recognition that combines two different types of deep convolutional neural network features. We hold the hypothesis that it is equally important to capture global features, such as color and lesion shape, and local features, such as local patterns within the lesion area. The proposed method leverages a deep residual network to represent global information, and a bilinear pooling technique to extract local features that differentiate between skin conditions with subtle visual differences in local regions. We have evaluated our proposed method on the MoleMap dataset with 32,195 skin images and the ISBI-2016 challenge dataset with 1,279 skin images. Without any lesion localisation or segmentation, our proposed method achieves state-of-the-art results on the large-scale MoleMap dataset with 15 disease categories and multiple imaging modalities, and compares favorably with the best method on the ISBI-2016 Melanoma challenge dataset.
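To illustrate the idea of combining a global residual-network descriptor with bilinear-pooled local features, the following is a minimal sketch, not the exact architecture described in this paper. The ResNet-50 backbone, the 1x1 channel-reduction convolution, the branch dimensions, and the 15-class output are illustrative assumptions.

```python
# Minimal sketch: global (pooled ResNet) + local (bilinear-pooled) feature fusion.
# Backbone choice, channel reduction, and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class GlobalLocalNet(nn.Module):
    def __init__(self, num_classes=15, reduced_channels=256):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Convolutional trunk only (drop the final avgpool and fc layers).
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        c = 2048                                          # channels of the last feature map
        self.reduce = nn.Conv2d(c, reduced_channels, 1)   # shrink channels before the outer product
        self.global_fc = nn.Linear(c, 512)                # global branch
        self.local_fc = nn.Linear(reduced_channels ** 2, 512)  # local (bilinear) branch
        self.classifier = nn.Linear(512 + 512, num_classes)

    def forward(self, x):
        f = self.features(x)                              # (B, 2048, H, W) feature maps
        b, c, h, w = f.shape

        # Global branch: spatial average pooling summarises color/shape statistics.
        g = F.relu(self.global_fc(f.mean(dim=(2, 3))))

        # Local branch: bilinear pooling, i.e. the outer product of local descriptors
        # averaged over spatial locations, capturing second-order local patterns.
        r = self.reduce(f).view(b, -1, h * w)             # (B, D, HW)
        bl = torch.bmm(r, r.transpose(1, 2)) / (h * w)    # (B, D, D)
        bl = bl.flatten(1)
        bl = torch.sign(bl) * torch.sqrt(bl.abs() + 1e-10)  # signed square root
        bl = F.normalize(bl, dim=1)                       # L2 normalisation
        l = F.relu(self.local_fc(bl))

        # Concatenate global and local descriptors for classification.
        return self.classifier(torch.cat([g, l], dim=1))

# Usage: logits = GlobalLocalNet()(torch.randn(2, 3, 224, 224))  -> shape (2, 15)
```

In this sketch the signed square root and L2 normalisation follow common practice for bilinear CNN features; the 1x1 reduction keeps the bilinear feature dimensionality manageable and is an assumption rather than a detail taken from the abstract.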