Body part and imaging modality classification for a general radiology cognitive assistant
Abstract
Decision support systems built for radiologists must cover a wide range of image types and route each image to the relevant algorithm. Training such networks also requires large datasets and significant effort in image curation. When the DICOM tags of an image are unavailable or unreliable, a classifier that automatically detects both the body part depicted in the image and the imaging modality is necessary. Previous work has used imaging and textual features to distinguish between imaging modalities. In this work, as part of a larger effort to create a cognitive assistant for radiologists, we present a model for the simultaneous classification of body part and imaging modality, which to our knowledge has not been done before. The classifier, built on a VGG architecture using transfer learning to learn generic features, distinguishes 10 classes and achieves an accuracy of 94.8%.
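The routing role the classifier plays can be sketched as follows. This is a minimal illustrative assumption, not the paper's implementation: the class labels, the route table, and the handler names are all hypothetical, and the VGG-based classifier is replaced by a stub.

```python
# Hypothetical sketch of routing an image to a downstream algorithm based on
# the predicted (body part, modality) class. The labels and pipeline names
# below are illustrative; the paper's actual 10 classes are not listed here.

def classify(image):
    """Stub standing in for the VGG-based body part / modality classifier;
    returns a fixed label purely for demonstration."""
    return ("chest", "xray")

# Map each (body part, modality) class to the algorithm that handles it.
ROUTES = {
    ("chest", "xray"): "chest_xray_pipeline",
    ("brain", "mri"): "brain_mri_pipeline",
}

def route(image):
    label = classify(image)
    # Fall back to manual review when no downstream algorithm covers the class.
    return ROUTES.get(label, "manual_review")

print(route(None))  # -> chest_xray_pipeline
```

Used when DICOM tags are missing or unreliable, the predicted class replaces the tag as the routing key.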