Why Interpretable and Explainable Do Not Equal Understandable: Discovery of an IBD Escalation Biomarker Using an Interpretable AI model
Inflammatory Bowel Disease (IBD) is an autoimmune disease affecting ~1.3% of the US population; it requires lifelong treatment and can substantially reduce a patient’s quality of life. We therefore performed an in-depth exploration of the previously published but dormant IBD-BIOM datasets. Using an inherently interpretable AI model, we identified a non-invasive biomarker that significantly outperformed the clinical standard, C-reactive protein (CRP), with a hazard ratio of 25.91 vs. 9.0. However, while patenting the invention it became apparent that an interpretable and/or explainable model is not per se understandable.
3 Key Takeaways:
*The majority of clinical and pharmaceutical datasets have not been fully explored, leading to significant financial and scientific loss.
*The use of an inherently interpretable model does not guarantee that the customer (or other stakeholders) will be able to understand the model or its consequences.
*Significant research is still required to improve this ‘understandability’ aspect of AI.
Bas has long pursued multidisciplinary research combining clinical biochemistry and computer science, as illustrated by his degrees in computer science, biology and life sciences, and a PhD in glyco-bioinformatics. He has worked at contract research organizations in Oxford and Zagreb, where he focused on method development using laboratory automation and artificial intelligence. He has also led various biomarker discovery projects, e.g., a study identifying a blood-glycomics-based biomarker. Recently, he became head of life sciences at a growing AI company intent on providing explainable and transparent AI to the life sciences, pharma, and healthcare.