Fairness in Medical Algorithms: Threats and Opportunities
The year 2020 brought into focus a second pandemic, one of social injustice and systemic bias, reflected in the disproportionate deaths of minority patients infected with COVID-19. As development and adoption of AI for medical care accelerate, we observe variable model performance on previously unseen datasets, as well as bias when outcome proxies such as healthcare costs are used. Despite growing maturity in AI development, with increased availability of large open-source datasets and regulatory guidelines, operationalizing fairness is difficult and remains largely unexplored. In this talk, we review the background and context for FAIR and UNFAIR sequelae of AI algorithms in healthcare, describe practical approaches to FAIR medical AI, and issue a grand challenge with open, unanswered questions.
*Overall, there is a lack of governance and regulation for ensuring fairness in medical algorithms
*Existing clinical systems that deploy AI under the guise of clinical decision support tools add another layer of opacity to medical AI
*AI techniques applied to new proxy metrics can narrow the disparities gap
Dr. Gichoya is a multidisciplinary researcher, trained as both an informatician and a clinically active radiologist. She is an assistant professor at Emory University, working in Interventional Radiology and Informatics. She has been funded by Grand Challenges Canada, NIBIB, and NSF ECCS. Her career focuses on validating machine learning models for health in real clinical settings and exploring explainability and fairness, with a specific focus on how algorithms fail. She has worked on the curation of datasets for the SIIM (Society for Imaging Informatics in Medicine) hackathon and ML committee. She volunteers on the ACR and RSNA machine learning committees to support the AI ecosystem and advance the development and use of AI in medicine. She is currently working on the sociotechnical context of AI explainability for radiology, especially the dimensions of human factors that govern user perceptions and preferences of XAI systems.