The Key to Adoption of Deep Learning for Medical Imaging: Interpretability and Comprehensiveness
During the past two years, many deep learning–based algorithms that detect critical conditions in various medical images have been approved by the Food and Drug Administration for use in clinical practice. However, many clinicians remain skeptical that these algorithms will reduce their workload and ensure patient safety as many optimists claim. For deep learning algorithms to be rapidly adopted and accepted by clinicians, they need to be interpretable and comprehensive. We will share our experience with, and approach to, developing clinically relevant deep learning algorithms.
Dr. Yune is Research Translation Director of the Laboratory of Medical Imaging and Computation at Massachusetts General Hospital. She oversees all of the lab's projects, which include the development of machine learning models for medical images, natural language processing tools for analysis of electronic health records (EHR), and a blockchain-based platform for health information exchange. She also actively works on developing new projects and implementing artificial intelligence technologies in the clinical workflow as clinical decision support tools.