Localizing tuberculosis in chest radiographs with deep learning
Abstract
Chest radiography (CXR) is an effective tool for screening tuberculosis (TB). Because radiological expertise is scarce in resource-constrained regions, automatic analysis of CXRs is appealing as a "first reader". Beyond screening a CXR for disease, it is critical to highlight the locations of disease in abnormal CXRs. In this paper, we focus on the more challenging task of locating TB regions within CXRs. The method applies a convolutional neural network (CNN) to classify superpixels generated from the lung area and consists of four major components: lung ROI extraction, superpixel segmentation, multi-scale patch generation/labeling, and patch classification. TB regions are located by identifying superpixels whose corresponding patches the CNN classifies as abnormal. The method is tested on a publicly available TB CXR dataset containing 336 TB images showing various manifestations of TB, with the TB regions marked by radiologists. To evaluate the method, the images are split into training, validation, and test sets such that every manifestation is represented in each set. Performance is evaluated at both the patch level and the image level: the classification accuracy on the patch test set is 72.8%, and the average Dice index over the test images is 0.67. Factors that may contribute to misclassification are discussed, and directions for future work are outlined.
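The abstract reports localization performance as an average Dice index over superpixel-level predictions. As a minimal sketch of how such an evaluation can be computed (the function and variable names here are illustrative, not the paper's implementation), the following NumPy snippet builds a binary TB mask from a hypothetical set of superpixel ids flagged abnormal by a classifier and scores it against a radiologist-marked mask with the Dice index:

```python
import numpy as np

def dice_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def mask_from_superpixels(labels: np.ndarray, abnormal_ids: set) -> np.ndarray:
    """Mark every pixel whose superpixel id was classified abnormal.
    `labels` is a per-pixel map of superpixel ids (hypothetical input)."""
    return np.isin(labels, list(abnormal_ids))

# Toy 4x4 image with two superpixels (ids 0 and 1); id 1 flagged abnormal.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 1, 1]])
pred = mask_from_superpixels(labels, {1})

# Ground-truth region overlapping half of superpixel 1.
truth = np.zeros((4, 4), dtype=bool)
truth[:, 1:3] = True

print(round(dice_index(pred, truth), 3))  # → 0.5
```

In practice the superpixel label map would come from an algorithm such as SLIC applied inside the segmented lung ROI, and the abnormal ids from the CNN's per-patch decisions; the Dice computation itself is unchanged.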