Conference paper
Body part and imaging modality classification for a general radiology cognitive assistant
Abstract
Decision support systems built for radiologists need to cover a fairly wide range of image types and must be able to route each image to the relevant algorithm. Furthermore, training such networks requires building large datasets with significant image-curation effort. In situations where the DICOM tag of an image is unavailable or unreliable, a classifier that can automatically detect both the body part depicted in the image and the imaging modality is necessary. Previous work has used imaging and textual features to distinguish between imaging modalities. In this work, as part of a larger effort to create a cognitive assistant for radiologists, we present a model for the simultaneous classification of body part and imaging modality, which to our knowledge has not been done before. The classifier distinguishes 10 classes and is built on a VGG network architecture, using transfer learning to learn generic features. It achieves an accuracy of 94.8%.
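The abstract describes a 10-class classifier built on a VGG backbone with transfer learning. A minimal sketch of that general approach (not the authors' actual implementation; the head architecture, input size, and hyperparameters below are assumptions) might look like this in Keras:

```python
# Hypothetical sketch of a 10-class body-part/imaging-modality classifier
# on a frozen VGG16 backbone. Details (head layers, input size) are
# illustrative assumptions, not the paper's actual configuration.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

def build_classifier(num_classes=10, input_shape=(224, 224, 3)):
    # Transfer learning as in the paper would start from pretrained
    # generic features (weights="imagenet"); weights=None here keeps
    # the sketch self-contained without a large weight download.
    base = VGG16(weights=None, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the generic feature extractor
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # 10 joint classes
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
```

Note that the 10 output classes here jointly encode body part and modality (e.g. a "chest / X-ray" class), so a single softmax head suffices rather than two separate classifiers.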