Lung Cancer Detection and Segmentation Using Deep Learning
For those of us accustomed to life’s modern automated conveniences, diagnostic radiology can seem shockingly unsophisticated. Lung cancer screening via computed tomography (CT) is a common radiological procedure that is critical to detecting cancers early, when patients have the best chance of receiving timely treatment. Yet the screening procedure itself remains an entirely manual affair. A lung CT exam is a three-dimensional volume of data composed of a stack of hundreds of 2D slices. During screening, clinicians manually scroll through the data slice by slice, searching for tiny nodules that can be indistinguishable from blood vessels and other structures under most viewing conditions. Not only is this process time-consuming and tedious, but inter-reader variability among clinicians means that most patients do not receive the best possible care.

Building on the successes of our previous deep learning-based tools in cardiac MRI, we have developed a deep learning-based system that can automatically detect and segment lung nodules in CT exams. Using the open LIDC-IDRI data set of detected and segmented lung nodules in 1018 thoracic CT exams, we built a pipeline of three connected models: a nodule proposal system (a 2D U-Net-based segmentation network), a nodule classification system (a 2.5D ResNet-based classifier), and a nodule segmentation system (a 3D ENet-based segmentation network). Together, these models form a complete lung nodule detection and segmentation system with the potential to greatly improve the speed and effectiveness of lung cancer screening. For nodules larger than 6 mm in diameter (the lower limit for clinical significance), our detection model achieves 94% recall at four false positives per scan.
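The three-stage pipeline above can be sketched as a chain of functions, one per model. This is an illustrative stand-in, not the Arterys implementation: every function name and threshold here is invented, and trivial intensity heuristics take the place of the actual U-Net, ResNet, and ENet networks, purely to show how the stages hand candidates to one another.

```python
import numpy as np

def _patch_box(center, size=3):
    """Small 3D box around a candidate center (numpy slices clip at bounds)."""
    z, y, x = center
    return (slice(max(z - 1, 0), z + 2),
            slice(max(y - size, 0), y + size + 1),
            slice(max(x - size, 0), x + size + 1))

def propose_nodules(volume):
    """Stage 1 (2D U-Net stand-in): scan each axial slice for bright
    candidate regions and return their (z, y, x) centers."""
    centers = []
    for z, slab in enumerate(volume):
        ys, xs = np.where(slab > 0.5)  # toy threshold in place of the U-Net
        if ys.size:
            centers.append((z, int(ys.mean()), int(xs.mean())))
    return centers

def classify_candidate(volume, center):
    """Stage 2 (2.5D ResNet stand-in): accept the candidate if the patch
    around it is bright enough on average (toy score)."""
    return float(volume[_patch_box(center)].mean()) > 0.1

def segment_nodule(volume, center):
    """Stage 3 (3D ENet stand-in): binary mask of bright voxels in the
    patch around an accepted candidate."""
    mask = np.zeros(volume.shape, dtype=bool)
    box = _patch_box(center)
    mask[box] = volume[box] > 0.5
    return mask

def detect_and_segment(volume):
    """Chain the three stages over a CT volume: propose -> classify -> segment."""
    return [segment_nodule(volume, c) for c in propose_nodules(volume)
            if classify_candidate(volume, c)]

# Toy 8-slice "CT volume" with one bright synthetic nodule.
vol = np.zeros((8, 32, 32))
vol[3:5, 10:14, 10:14] = 1.0
masks = detect_and_segment(vol)
```

The design point the sketch preserves is that the cheap 2D proposal stage runs over every slice, while the more expensive classification and segmentation stages only run on the small set of surviving candidates.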
For nodule segmentation, our model achieves a mean Dice coefficient of 0.83±0.10, comparable to the 0.79±0.09 achieved by expert radiologists. Both models operate with clinicians in the loop, requiring that clinicians review and optionally modify the initial automated results before accepting them. These deep learning-based models form the backbone of our FDA-cleared, cloud-based Oncology DL software product. In this talk, we will discuss details of the deep learning technologies behind our lung nodule detection and segmentation system. We will also discuss the method by which we demonstrated that our system is as accurate as expert radiologists in order to obtain FDA clearance.
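The Dice coefficient used to report segmentation quality measures the voxel-wise overlap between a predicted mask and a reference mask, ranging from 0 (no overlap) to 1 (perfect agreement). A minimal sketch of the standard formula, 2|A∩B| / (|A| + |B|) (the function name is ours, not the product code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / total if total else 1.0

# Two partially overlapping 10x10 square masks on a 32x32 grid.
a = np.zeros((32, 32), dtype=bool); a[5:15, 5:15] = True
b = np.zeros((32, 32), dtype=bool); b[7:17, 7:17] = True
print(round(dice_coefficient(a, b), 2))  # → 0.64
```

Here the intersection is an 8x8 square (64 pixels) and each mask has 100 pixels, so Dice = 2·64 / 200 = 0.64.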
Dan is the Director of Machine Learning at Arterys, a startup focused on streamlining the practice of medical image interpretation and post-processing. After receiving a PhD in Electrical Engineering from Stanford, he stayed for a postdoc, focusing on using machine learning to predict outcomes and disease characteristics in cancer patients. From there, he joined CellScope, where he founded a machine learning team that applied then-nascent deep learning techniques to diagnose ear disease and streamline the process of recording ear exams at home. He moved to Arterys to found their machine learning team in 2015.