• AI IN HEALTHCARE & PHARMA VIRTUAL SUMMIT

  • Times in EDT

  • 10:00

    Welcome Note

  • AI IN HEALTHCARE: AN INTRODUCTION

  • 10:05
    Indra Joshi

    The NHS AI Lab - Accelerating the Safe Adoption of AI in Health and Care 

    Indra Joshi - Director of AI - NHSX

    Dr Indra Joshi will outline how the NHS AI Lab is enabling the safe, ethical and effective development, adoption and use of AI-driven technologies in the UK health and care system.

    Dr Indra Joshi is the Director of AI at NHSX and runs the NHS AI Lab - a £250m programme to accelerate the safe and ethical adoption of AI into health and care. Indra has a unique portfolio, with experience stretching across digital health, data and AI strategy and delivery, whilst remaining true to her professional training as an emergency medic.
    She is a Founding Member of One HealthTech – a network which campaigns for the need and importance of better inclusion of all backgrounds, skillsets and disciplines in health technology. Alongside this, she is an associate editor for BMJ Leader, a member of the WHO digital health expert group, a consultant on digital health, and most importantly a mum to two wonderful little munchkins.

  • 10:30
    Sadid Hasan

    AI for Care Planning Support

    Sadid Hasan - Senior Director of Artificial Intelligence - CVS Health

    Effective care planning requires care managers to understand patient health status and needs to deliver appropriate patient support. The proliferation of healthcare data, including massive volumes of clinical free-text documents, creates a significant challenge for care managers, but a major opportunity for advanced clinical analytics. Novel Artificial Intelligence (AI)-driven solutions can help optimize care planning, reducing inefficiency and increasing focus on the most salient information, leading to improved patient outcomes. This talk will focus on various deep learning-based clinical natural language processing use cases developed as part of our advanced care planning initiatives.

    Key Takeaways:

    *Effective care planning requires care managers to understand patient health status and needs to deliver appropriate support

    *The clinical domain has unique challenges such as massive volumes of structured/unstructured data, redundancy, limited interoperability, and widespread use of acronyms

    *AI-augmented solutions can help optimize care planning, reducing inefficiency and increasing focus on the most salient information leading to improved patient outcomes

    Dr. Sadid Hasan is a Senior Director of AI at CVS Health, leading the team responsible for AI-enabled clinical care plan initiatives at Aetna. His recent work involves solving problems related to clinical information extraction, paraphrase generation, natural language inference, and clinical question answering using deep learning. Sadid has over 60 peer-reviewed publications in top NLP/machine learning venues including ACL, IJCAI, EMNLP, NeurIPS, ICML, COLING, NAACL, AMIA, MLHC, MEDINFO, ICLR, ClinicalNLP, TKDE, and JAIR, where he also regularly serves as a program committee member and area chair.

  • AI FOR DIAGNOSTICS

  • 10:55
    Andrew Soltan

    CURIAL - Rapid Screening for COVID-19 in Emergency Departments

    Andrew Soltan - Academic Clinician - University of Oxford

    Limitations of COVID-19 diagnostic tests include prolonged result times and imperfect sensitivity, posing operational and infection-control challenges in hospitals. In this talk, we discuss how a cross-disciplinary team at the University of Oxford rapidly developed and deployed CURIAL, a screening test for COVID-19 that uses data routinely collected during the first hour of a patient arriving in hospital. In a two-week test period, CURIAL correctly predicted the COVID-19 status of 92% of patients attending two emergency departments in Oxfordshire.

    The collaborative team, comprising hospital clinicians and Professor David Clifton’s AI-for-Healthcare lab, is now investigating whether similar methodology is readily applicable alongside novel near-patient blood analysis to rule-out COVID-19 within 10 minutes of a patient attending hospital. A study to evaluate the new ’10 minute’ screening test is ongoing at the John Radcliffe Hospital’s Emergency Department.
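
    As an illustration of the general recipe rather than the CURIAL model itself, the sketch below trains and evaluates a screening classifier on tabular first-hour data; the synthetic features stand in for routinely collected measurements such as blood tests and vital signs, and the model choice is an assumption.

```python
# Hedged sketch: a screening classifier on routinely collected first-hour data.
# Synthetic features stand in for blood tests and vital signs; this is not the
# CURIAL model, only an illustration of the general approach.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 6))                       # e.g. CRP, lymphocytes, vitals
y = ((X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=1.0, size=n)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUROC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2))
```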

    3 Key Takeaways:

    *An example of an interdisciplinary collaboration rapidly developing a clinical decision support tool to support the pandemic response

    *Safety-first design can be achieved by understanding a changing clinical scenario

    *Advances in near-patient diagnostics and machine learning approaches show promise for future improvements in clinical pathways

    Dr Andrew Soltan is a clinician and machine learning researcher at the University of Oxford. Alongside his clinical practice, his research applies machine learning techniques for early detection and prediction of disease. During the COVID-19 pandemic, he has led a programme to rapidly develop and deploy AI tools to screen patients for infection within the first hour of arriving in hospital. Dr Soltan has also worked in industry as Medical Officer at Visulytix, which partnered with Orbis International to make its AI tool for detecting treatable eye disease accessible to doctors in low-resource countries across the world. Since graduating with Distinction in Medicine from the University of Cambridge, Dr Soltan has received awards including an NIHR Academic Clinical Fellowship and a Translational Research award funded by the Wellcome Trust.

  • 11:20

    COFFEE BREAK: 1:1 Speed Networking Session

  • 11:35
    Julie Vaughn

    Deconstructing Bias in Medical AI

    Julie Vaughn - Research Assistant - MIT CSAIL

    Bias is a common term used to describe medical AI. But what does it really mean? In this talk, we will delve into understanding different types of bias, and how we may begin to address them.

    Julie is a Master’s student at MIT studying computer science and conducting research in NLP and fairness in healthcare in the Medical Decision Making Group at MIT CSAIL. She is also a teaching assistant for MIT’s Intro to ML (6.036) course.

  • AI AIDED DRUG DISCOVERY

  • 12:00
    Dalton Sakthivadivel

    Creating Personalised Neuromedicine Using Artificial Intelligence and Brain Modelling

    Dalton Sakthivadivel - Biomedical Engineering - Stony Brook University

    Today, the utility of data in medicine is rapidly increasing due to increased precision and availability. To maximise the impact this has, clinicians and researchers are applying unique analysis methods to these data and translating the results into patient care. One example of this is in personalising medicine, which entails learning about and responding to a patient’s unique condition. In clinical neurosciences, we can apply modelling insights to patient care, on an individual level, by using artificial intelligence and machine learning. Through the intelligent analysis of diagnostic data, we can learn about a patient’s brain, and then simulate a patient by building a personal brain model. This enables clear and correct diagnosis, investigation of treatments, and prediction of outcomes. We will discuss a couple of key case studies to explore recent advances in the field of personalising medicine using computational neurodiagnostics, and how they have been performed. Theory, methods, and concrete results will be examined.

    3 Key Takeaways:

    *The new horizon of effective psychiatry and neurology is in personalised, data-driven methods.

    *Machine learning and artificial intelligence provide superior methods for accomplishing this.

    *Concrete implementations of this paradigm already exist in the lab, and are ready for translation into clinical settings.

    Dalton is a neuroscientist and mathematician affiliated with Stony Brook University's Renaissance School of Medicine, in the Department of Biomedical Engineering. He applies mathematical methods, primarily based on brain modelling, to answer clinical questions in neuroscience and psychiatry.

  • 12:25
    Jonathan Stokes

    Machine Learning for Antibiotic Discovery

    Jonathan Stokes - Banting Fellow - Broad Institute of MIT and Harvard

    To address the antibiotic-resistance crisis, we trained a deep neural network to predict new antibiotics. We performed predictions on multiple chemical libraries and discovered a molecule from the Drug Repurposing Hub – halicin – that is structurally divergent from conventional antibiotics and displays activity against a wide spectrum of pathogens. Halicin also effectively treated Clostridioides difficile and Acinetobacter baumannii infections in mice. Deep learning approaches have utility in expanding our antibiotic arsenal.
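
    For readers who want a concrete picture of the workflow, the sketch below virtually screens a small chemical library with a fingerprint-based classifier; this is a simplified stand-in for the message passing neural network used in the actual work, and the molecules and activity labels are toy placeholders.

```python
# Simplified stand-in for the antibiotic-discovery workflow: featurize molecules,
# train a classifier on known activity labels, then rank a candidate library.
# Toy data; the real work used a message passing neural network, not fingerprints.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles, n_bits=1024):
    """Morgan fingerprint as a numpy vector (toy molecular representation)."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Placeholder training set: SMILES strings with made-up activity labels (1 = active).
train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
train_labels = [0, 1, 1, 0]
X = np.vstack([featurize(s) for s in train_smiles])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, train_labels)

# "Virtual screen" of a candidate library, ranked by predicted activity.
library = ["CCOC(=O)c1ccccc1", "Oc1ccc(Cl)cc1", "NCCc1ccccc1"]
scores = clf.predict_proba(np.vstack([featurize(s) for s in library]))[:, 1]
for smi, score in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{smi}\t{score:.2f}")
```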

    3 Key Takeaways:

    *We leveraged a message passing neural network to predict structurally novel antibiotics

    *We discovered halicin, which displayed bactericidal efficacy against a broad spectrum of bacterial pathogens

    *Machine learning allows us to explore vast chemical spaces in the search for new medicines

    Jonathan Stokes is a Banting Fellow in the laboratory of James Collins at the Broad Institute of MIT and Harvard. He received his BHSc in 2011, graduating summa cum laude, and his PhD in antimicrobial chemical biology in 2016, both from McMaster University. His research applies a combination of chemical biology, systems biology, and machine learning approaches to develop novel antibacterial therapies with expanded capabilities over conventional antibiotics. Dr. Stokes is the recipient of numerous awards, including the Canadian Institutes of Health Research Master’s Award, the Colin James Lyne Lock Doctoral Award, and was ranked first of just 23 postdoctoral scholars to be awarded the prestigious Banting Fellowship.

  • 12:50

    ROUNDTABLE DISCUSSIONS

  • Mathieu Galtier

    ROUNDTABLE: Accelerating Drug Discovery by Competitive Cooperation

    Mathieu Galtier - Chief Product Officer - Owkin

    How Federated Learning Puts Patient Privacy First in Healthcare

    Accessing the volume and diversity of data required for robust and precise machine learning is currently one of the biggest limiting factors to the use of artificial intelligence in healthcare. Health data is private, sensitive, often confidential, and can only be processed in compliance with strict institutional, national and federal regulations. This presentation explores how federated learning technology can be used to overcome this challenge and allow developers to access healthcare data for use in machine learning algorithms with full regulatory compliance. Owkin brings this new learning paradigm to all healthcare stakeholders, unlocking the potential for safer, better and more effective medical research with its federated learning platform Owkin Connect.
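
    As a conceptual illustration of the paradigm (not Owkin Connect's actual API or architecture), federated averaging can be sketched as follows: each site trains on its own data, and only model weights are shared and averaged.

```python
# Minimal federated averaging (FedAvg) sketch: each "hospital" trains locally and
# only model weights - never patient data - are sent to the aggregator.
# Purely illustrative; this is not Owkin Connect.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """A few steps of logistic-regression gradient descent on one site's data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0, 0.5])
sites = []                                        # three sites with private data
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = ((X @ true_w + rng.normal(scale=0.5, size=200)) > 0).astype(float)
    sites.append((X, y))

global_w = np.zeros(3)
for _round in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_weights, axis=0)     # server averages weights only
print("federated model weights:", np.round(global_w, 2))
```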

    3 Key Takeaways:

    *Federated Learning is a new learning paradigm that will help artificial intelligence reach its full potential and, ultimately, make the transition from research to clinical practice. It is a powerful solution for the future of digital health.

    *Owkin Connect is a federated learning platform that unlocks AI, breaks data silos, and protects data privacy across healthcare applications. Its distributed architecture and federated learning capabilities allow data scientists to securely connect to decentralised, multi-party datasets and train AI models without having to pool data.

    *Adoption of federated learning across hospitals and pharma companies is expected to lead to models trained on datasets of unprecedented size and diversity and as such have a catalytic impact on precision medicine.

    Mathieu is the Chief Product Officer at Owkin, where he leads the design of Owkin’s product: a collaboration platform between medical researchers and data scientists powered by federated learning. Mathieu graduated from ENS and Mines ParisTech and completed his PhD in Machine Learning applied to Neuroscience at Oxford and Inria. He started his career at Dreem, a neurotech startup, where he directed research and algorithms and led the Morpheo project. He is devoted to deploying AI in a responsible way.

  • Christophe Aubry

    ROUNDTABLE: Hybrid AI and Beyond in Healthcare: Demo & Discussion

    Christophe Aubry - Head of Sector Strategy, Healthcare - Expert.AI

    Hybrid AI Approach to Knowledge Discovery

    Global biomedical content represents critical data for healthcare organizations, but it cannot be easily handled by business applications because it is unstructured and varies across different data sources. With speed and accuracy being necessities in the medical field, organizations must find a way to overcome this barrier to understanding language. A hybrid approach to AI, combining the strengths of both natural language understanding and machine learning, provides an ideal solution that mimics the human-like comprehension of biomedical content such as clinical trials, real-world data, medical reports, literature, and social media. This capability can help to accelerate drug discovery and development, innovate faster and increase access to healthcare.

    Key Takeaways:

    • How Hybrid AI can accurately transform clinical trials data, real world data and scientific literature into knowledge and insight.

    • How Hybrid AI can overcome the need for the exhaustive training data sets required by purely machine learning-based approaches

    • How Hybrid AI can be explainable by design and overcome the black box phenomenon associated with machine learning

    Christophe Aubry leads business activities in specialized markets for expert.ai, the leading provider of AI-based Natural Language Understanding solutions. As a strategic technology leader focused on delivering solutions that solve customer pain points, Christophe has dedicated the last 20 years of his career to creating business value for clients worldwide. He helped his company establish its presence in North America, earning and consolidating the trust of major clients in the Media, Publishing, Life Sciences, and Government sectors. Christophe started his career as a Product Manager at IBM in the early stages of data and text mining. As a co-founder and Vice President of Professional Services at TEMIS for more than 10 years, he nurtured talent to help them reach their highest potential while supervising customer deployments in all geographies and leading strategic service activities. His favorite quote is “A company’s employees are its greatest assets and your people are your product”. Christophe has a deep understanding of, and passion for, AI technologies. He holds a Master’s Degree in Applied Mathematics and Computer Sciences.

  • 13:15

    COFFEE BREAK: Explore the Expo Area

  • 13:30
    Shrey Sukhadia

    A Robust AI-based Software Platform for Effective Integration of Radiomic and Omics Data of Tumor Patients

    Shrey Sukhadia - Bioinformatics Scientist - Phoenix Children’s Hospital

    The potential for radiomics to support oncology decision-making has grown substantially in recent years, as tumor imaging techniques such as Magnetic Resonance Imaging (MRI), Computerized Tomography (CT) and Positron Emission Tomography (PET) offer unique information about tumor phenotype and microenvironment, complementing the information derived from omics data. Radiomic and omics data can be correlated statistically and modelled together using machine learning techniques to yield valuable information regarding associations between them. Radiogenomically informed biopsies have the potential to improve pathological outcomes and inform optimal treatment strategies for cancer patients. Currently the field of radiogenomics lacks a unified and robust software platform that can effectively integrate radiomic and omics (e.g. genomics, proteomics) data to build robust AI models able to predict individual omics profiles of tumors from their radiological images. Here we report the development of a comprehensive AI-based platform for effective integration of radiomic and omics data of cancer patients that has the potential to be validated and utilized in clinical settings for effective monitoring, diagnosis, and treatment of cancer patients.
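
    To make the integration step concrete, here is a hedged sketch of one common radiogenomic analysis: correlating imaging-derived (radiomic) features with omics measurements across patients and controlling the false discovery rate. The data and feature counts are placeholders, not the platform described above.

```python
# Illustrative radiomic-omics association analysis (not the reported platform):
# correlate each radiomic feature with each omics feature across patients,
# then control the false discovery rate over all tested pairs.
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_patients = 60
radiomics = rng.normal(size=(n_patients, 5))      # e.g. texture/shape features
omics = rng.normal(size=(n_patients, 50))         # e.g. gene expression levels

pvals, pairs = [], []
for i in range(radiomics.shape[1]):
    for j in range(omics.shape[1]):
        r, p = pearsonr(radiomics[:, i], omics[:, j])
        pvals.append(p)
        pairs.append((i, j, r))

reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
hits = [(pair, q) for pair, q, keep in zip(pairs, qvals, reject) if keep]
print(f"{len(hits)} radiomic-omics associations pass FDR < 0.05")
```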

    Key Takeaways:

    *A novel and comprehensive radiogenomic software platform that combines the power of statistics and AI to yield informative reports on associations between radiology images and omics data for patient tumors. Histopathology image data is also on the radar.

    *Easy-to-use software that accommodates a range of users (novice to expert).

    *Welcoming collaborations from researchers and clinicians worldwide to test the software on multiple tumor types. Potential for clinical deployment in near future.

    Shrey Sukhadia leads clinical bioinformatics efforts at Phoenix Children’s Hospital, Phoenix. He has previously led oncology bioinformatics efforts at the University of Pennsylvania and participated in research activities at the University of Maryland and the University of Miami. He is currently pursuing a PhD in Bioinformatics at Queensland University of Technology, Brisbane, Australia, and holds a Master’s degree from the University of the Sciences, Philadelphia. His expertise includes Genomic data analysis (somatic and germline), Precision Medicine, Statistics, Artificial intelligence, Radiogenomics, Transcriptomics, Copy number alterations and Software Engineering. Through his PhD research in Radiogenomics he has developed a robust AI-based software platform for effective integration of radiomic and omics data of cancer patients that has the potential to be validated and utilized in clinical settings for effective monitoring, diagnosis, and treatment of cancer patients. The software has been tested on Glioblastoma Multiforme and Non-Small Cell Lung Cancer datasets and could be made available for testing on multiple cancer types. He is inviting collaborations from researchers and clinicians globally.

  • 13:55

    PANEL: The Importance of Machine Learning in Diagnosing & Treating Cancer

  • Sandhya Prabhakaran

    Moderator

    Sandhya Prabhakaran - Research Scientist - Moffitt Cancer Center

    Dr. Sandhya Prabhakaran is a Research Scientist at the Integrated Mathematical Oncology department, Moffitt Cancer Center, Florida. Before that she was a Research Scientist at Memorial Sloan Kettering Cancer Center and Columbia University. Her Ph.D. in Computer Science is from the University of Basel and her Masters in Intelligent Systems (Robotics) is from the University of Edinburgh. Her research deals with developing statistical theory, mechanistic mathematical models and Bayesian inference models, particularly applied to problems in Cancer Biology and Computer Vision. Prior to academia, she was an Assembler programmer working with the Mainframe Operating System (z/OS) at IBM Software Laboratories and has developed Mainframe applications. She has completed 4 out of the 6 World Marathon Majors.

  • Kyung Sung

    Panellist

    Kyung Sung - Associate Professor of Radiology - University of California, Los Angeles (UCLA)

    Dr. Sung is an Associate Professor of Radiology at UCLA, where his research primarily focuses on the development of novel medical imaging methods and artificial intelligence using magnetic resonance imaging (MRI). He received a Ph.D. degree in Electrical Engineering from the University of Southern California, Los Angeles, in 2008, and from 2008 to 2012 he completed his postdoctoral training at Stanford in the Department of Radiology. He joined the University of California, Los Angeles (UCLA) Department of Radiological Sciences in 2012. His research interest is to develop fast and reliable magnetic resonance imaging (MRI) techniques that can provide improved diagnostic contrast and useful information. In particular, his research group (https://mrrl.ucla.edu/sunglab/) is currently focused on developing advanced deep learning algorithms and quantitative MRI techniques for early diagnosis, treatment guidance, and therapeutic response assessment for oncologic applications. Such developments can offer more robust and reproducible measures of biologic markers associated with human cancers.

  • Krzysztof Jerzy Geras

    Panellist

    Krzysztof Jerzy Geras - Assistant Professor - NYU School of Medicine

    Krzysztof is an assistant professor at NYU School of Medicine and an affiliated faculty at NYU Center for Data Science. His main interests are in unsupervised learning with neural networks, model compression, transfer learning, evaluation of machine learning models and applications of these techniques to medical imaging. He previously did a postdoc at NYU with Kyunghyun Cho, a PhD at the University of Edinburgh with Charles Sutton and an MSc as a visiting student at the University of Edinburgh with Amos Storkey. His BSc is from the University of Warsaw. He also did industrial internships in Microsoft Research (Redmond, working with Rich Caruana and Abdel-rahman Mohamed), Amazon (Berlin, Ralf Herbrich's group), Microsoft (Bellevue) and J.P. Morgan (London).

  • Roushanak Rahmat

    Panellist

    Roushanak Rahmat - Deep Learning Research Scientist - Institute of Cancer Research

    Dr Roushanak Rahmat is a researcher at The Institute of Cancer Research, London, with research interests in the fields of deep learning modeling in medical image analysis and computer vision. Her goal is to develop new deep-learning tools for predicting patient survival and treatment toxicity from radiotherapy. In particular, she is interested in combining imaging data with other sources of information, including patient demographics (age, sex, weight etc.), to improve treatment schedule efficiency and reliability. Before joining the ICR, she was a Research Associate at the University of Cambridge working on deep learning for automatic segmentation of glioma before treatment and prediction of progression patterns after treatment, using conventional structural and diffusion tensor imaging.

  • 14:30
    1:1 Speed Networking

    1:1 SPEED NETWORKING

    Join a 1-to-1 Speed Networking session to be randomly paired with others with a similar interest for short video calls to expand your network and connect with others.

  • 15:00

    END OF DAY 1

  • AI IN HEALTHCARE & PHARMA VIRTUAL SUMMIT

  • Times in EDT

  • 10:00

    Welcome Note

  • NLP

  • 10:05
    Amir Tahmasebi

    Natural Language Processing for Healthcare

    Amir Tahmasebi - Director of Deep Learning - Enlitic

    With recent advancements in Deep Learning followed by successful deployment in natural language processing (NLP) applications such as language understanding, modeling, and translation, the general hope was to achieve yet another success in the healthcare domain. Given the vast amount of healthcare data captured in Electronic Medical Records (EMR) in an unstructured fashion, there is an immediate high demand for NLP to facilitate automatic extraction and structuring of clinical data for decision support. Nevertheless, the performance of off-the-shelf NLP on healthcare data has been disappointing. Recently, tremendous efforts have been dedicated by NLP research pioneers to adapting general-language NLP to the healthcare domain. This talk reviews the current challenges researchers face, as well as some of the most recent success stories.
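
    As a small, hedged example of adapting general-purpose NLP to the clinical domain (not Enlitic's pipeline), a common starting point is to swap in a transformer pre-trained on clinical notes; the checkpoint name below is one publicly available option and is an assumption, not the speaker's choice.

```python
# Sketch: embed an abbreviation-heavy clinical note with a transformer
# pre-trained on clinical text. The checkpoint is an assumed public example;
# this is not the pipeline described in the talk.
import torch
from transformers import AutoModel, AutoTokenizer

checkpoint = "emilyalsentzer/Bio_ClinicalBERT"    # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

note = "Pt c/o SOB and CP, r/o MI. Hx of HTN and DM2."
inputs = tokenizer(note, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings into a note-level vector that a downstream
# decision-support classifier could consume.
note_vector = outputs.last_hidden_state.mean(dim=1)
print(note_vector.shape)                          # (1, hidden_size)
```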

    3 Key Takeaways:

    *General overview of state-of-the-art NLP

    *How to build a domain-specific NLP pipeline for life science applications

    *Review of a few successful applications of NLP in life sciences and how the future will/should look

    Amir Tahmasebi is the director of Deep Learning at Enlitic, San Francisco, CA. Before joining Enlitic, Amir was the senior director of Machine Learning and AI at CODAMETRIX, Boston, MA. He also served as a lecturer at MIT, Northeastern University, Boston University, and Columbia University. Prior to CODAMETRIX, Dr. Tahmasebi was a Principal Research Engineer at PHILIPS HealthTech, Cambridge, MA. Dr. Tahmasebi’s research is focused on innovating computer vision and natural language processing solutions for patient clinical context extraction and modeling, clinical outcome analytics and clinical decision support. Dr. Tahmasebi received his PhD degree in Computer Science from the School of Computing, Queen's University, Canada. He is the recipient of the IEEE Best PhD Thesis award and Tanenbaum Post-doctoral Research Fellowship award. He has been serving as area chair for MICCAI and IPCAI conferences. Dr. Tahmasebi has published and presented his work in a number of conferences and journals including NeurIPS, NAACL, MICCAI, IPCAI, IEEE TMI, SPIE, and RSNA. He has also been granted more than 15 patent awards.

  • MEDICAL IMAGING

  • 10:30
    Saeed Hassanpour

    AI and Histopathological Characterisation of Microscopy Images

    Saeed Hassanpour - Associate Professor - Hassanpour Lab, Geisel School of Medicine, Dartmouth

    With the recent expansions of whole-slide digital scanning, archiving, and high-throughput tissue banks, the field of digital pathology is primed to benefit significantly from deep learning technology. This talk will cover several applications of deep learning for characterizing histologic patterns on high-resolution microscopy images for cancerous and precancerous lesions. Also, recent advances and future directions for developing and evaluating deep learning models for pathology image analysis will be discussed.

    3 Key Takeaways:

    *The recent progress in AI has created new opportunities in digital pathology.

    *Deep learning models are capable of assisting pathologists with the accurate characterization of whole-slide histology images.

    *The widespread use of these tools in clinical practice depends on establishing clinicians’ trust in AI.

    Dr. Saeed Hassanpour is an Associate Professor in the Departments of Biomedical Data Science, Computer Science, and Epidemiology at Dartmouth College. His research is focused on the use of artificial intelligence in healthcare. Dr. Hassanpour’s research laboratory has built novel machine learning and deep learning models for medical image analysis and clinical text mining to improve diagnosis, prognosis, and personalized therapies. Before joining Dartmouth, he worked as a Research Engineer at Microsoft. Dr. Hassanpour received his Ph.D. in Electrical Engineering with a minor in Biomedical Informatics from Stanford University and a Master of Math in Computer Science from the University of Waterloo in Canada.

  • 10:55
    Dale Webster

    Current Challenges in Deep Learning for Medical Image Interpretation

    Dale Webster - Research Director - Google

    Deep Learning models can be used to diagnose melanoma, breast cancer lymph node metastases and diabetic retinopathy from medical images with comparable accuracy to human experts. This talk covers work in applying deep learning to imaging for diabetic retinopathy and cancer screening & diagnosis, including recent work using different reference standards and techniques to improve explainability. It will also cover how deep learning can be leveraged to make novel predictions such as cardiovascular risk factors and disease progression.
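
    As a hedged sketch of the general recipe (not Google's models), an image-interpretation model of this kind is typically built by fine-tuning an ImageNet-pretrained backbone on labelled medical images; the dataset layout and the two-class framing below are assumptions.

```python
# Sketch: fine-tune an ImageNet-pretrained backbone for a two-class retinal
# screening task. Dataset path/layout and label framing are assumptions;
# this is not the model described in the talk.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Expects fundus_images/{referable,non_referable}/*.png (hypothetical layout).
dataset = datasets.ImageFolder("fundus_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)     # replace the classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                     # one epoch, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```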

    3 Key Takeaways:

    *'More data' is not sufficient to generate a better model

    *Accurate models are not enough to create a useful product

    *A good product is not sufficient to realize clinical impact for patients

    Dale Webster is Director of Research at Google Health working to improve patient outcomes in healthcare using Deep Learning and Medical Imaging. His recent work leverages AI to screen for Diabetic Retinopathy in India and Thailand, predict cardiovascular health factors from fundus photos, and support differential diagnosis of skin disease. Prior to Google he was a Software Engineer at Pacific Biosciences working on direct sequencing of methylation state and rapid sequencing and assembly of microbial pathogens during global outbreaks. His PhD work in Bioinformatics at the University of California San Francisco focused on viral evolution, and he received his Bachelor of Science in Computer Science from Rice University.

  • 11:20

    COFFEE BREAK: 1:1 Speed Networking Session

  • APPLICATIONS OF AI IN HEALTHCARE

  • 11:35
    Jessie Li

    Transfer Learning for the Prediction of Atrial Fibrillation and Sleep Apnea

    Jessie Li - Global Head of Data Science (VP) - all.health

    Inexpensive and highly portable medical sensing technology has the potential to deliver savings for healthcare providers, through screening and early detection. However, the data necessary to support machine learning models on these newly developed sensor modalities - sensor streams together with time-aligned, expert-annotated labels - is often difficult and expensive to obtain. By contrast, high-quality data suitable for machine learning on better-established modalities, such as electrocardiogram (ECG), is abundant and often free. We propose a method allowing for the use of such easy-to-come-by data in building models on photoplethysmography (PPG) sensor modalities. With this method, data requirements are reduced to streams of PPG data time-aligned with readings from ECG together with labelled outcome for ECG. We demonstrate the method by building models for atrial fibrillation and sleep apnea based on data from a wrist-worn PPG sensor, with the only labels coming from publicly available ECG data. We find that the models developed using the transfer learning approach outperform models trained directly on the PPG sensor and are competitive with state-of-the-art ECG-based models.
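
    A minimal sketch of the label-transfer idea, under the assumption that waveform-derived feature vectors are already available: a model trained on labelled public ECG data labels the ECG channel of paired ECG+PPG recordings, and a student model is then trained on the PPG features alone.

```python
# Teacher-student label-transfer sketch: an ECG-trained "teacher" labels the
# paired recordings, and a "student" learns from the PPG features only.
# Synthetic features stand in for real waveform-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# (1) Public ECG dataset with expert labels.
latent_pub = rng.normal(size=2000)                # underlying cardiac state
ecg_pub = latent_pub[:, None] + 0.3 * rng.normal(size=(2000, 4))
y_pub = (latent_pub > 0).astype(int)

# (2) Paired wrist-device recordings: time-aligned ECG and PPG, no expert labels.
latent_pair = rng.normal(size=2000)
ecg_pair = latent_pair[:, None] + 0.3 * rng.normal(size=(2000, 4))
ppg_pair = latent_pair[:, None] + 0.8 * rng.normal(size=(2000, 4))
y_hidden = (latent_pair > 0).astype(int)          # used only for evaluation below

teacher = LogisticRegression().fit(ecg_pub, y_pub)            # labelled ECG
pseudo_labels = teacher.predict(ecg_pair)                      # label the paired ECG
student = LogisticRegression().fit(ppg_pair, pseudo_labels)    # PPG-only model

auc = roc_auc_score(y_hidden, student.predict_proba(ppg_pair)[:, 1])
print(f"student AUROC on PPG: {auc:.2f}")
```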

    3 Key Takeaways:

    *Deep learning outperforms traditional machine learning on heart rate variability features in predicting the presence/absence of atrial fibrillation in real-time.

    *For sleep apnea, the pre-trained model’s predictions provide a substantially better learning signal than the clinician-provided labels, and this teacher-student technique significantly outperforms both a naive application of supervised deep learning and a label-supervised version of domain adaptation.

    *These applications demonstrate that our wrist-worn device can provide close to clinical-grade accuracy for the real-time prediction of atrial fibrillation and sleep apnea.

    Dr Jessie Li received her DPhil in Computational Genomics from the University of Oxford. After that she worked at a university spin-off as a statistical geneticist and at two healthcare technology startups as a data scientist, most recently serving as Head of Data Science at all.health. She contributed to the understanding of the genetic cause of major depressive disorder during her academic career, which could potentially shed light on new diagnostic and treatment solutions. At all.health, she has led the development of a number of machine learning models for disease detection using photoplethysmography data from the company’s proprietary wrist-worn device. This provides close to clinical-grade accuracy on a number of conditions using continuously monitored patient data.

  • 12:00
    Bas Jansen

    Why Interpretable and Explainable Do Not Equal Understandable: Discovery of an IBD Escalation Biomarker Using an Interpretable AI model

    Bas Jansen - Head of Life Sciences - Omina Technologies

    Inflammatory Bowel Disease (IBD) is an auto-immune disease affecting ~1.3% of the US population, which requires lifelong treatment and can have a big impact on a patient’s quality of life. Therefore, we performed an in-depth exploration of previously published and dormant IBD-BIOM datasets. Using an inherently interpretable AI model, a non-invasive biomarker was identified that significantly outperformed the clinical standard (CRP), with a hazard ratio of 25.91 vs. 9.0. However, while patenting our invention it became apparent that an interpretable and/or explainable model is not per se understandable.
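
    For readers unfamiliar with the hazard-ratio comparison quoted above, here is a hedged sketch of how a candidate biomarker and CRP might be compared in a time-to-escalation model; the data are synthetic and the column names are placeholders, not the IBD-BIOM analysis.

```python
# Illustrative time-to-escalation analysis: fit a Cox proportional hazards model
# and read off hazard ratios for a candidate biomarker and CRP.
# Synthetic data; not the IBD-BIOM analysis itself.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
biomarker = rng.normal(size=n)
crp = rng.normal(size=n)
risk = 1.2 * biomarker + 0.4 * crp
time_to_escalation = rng.exponential(scale=np.exp(-risk))
observed = (rng.random(n) < 0.8).astype(int)      # some patients are censored

df = pd.DataFrame({"biomarker": biomarker, "crp": crp,
                   "T": time_to_escalation, "E": observed})
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print(cph.hazard_ratios_)                         # per-unit hazard ratio per covariate
```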

    3 Key Takeaways:

    *The majority of clinical and pharmaceutical datasets have not been fully explored, leading to a significant financial and scientific loss.

    *The use of an inherently interpretable model does not guarantee that the customer (or other stakeholders) will be able to understand the model or its consequences.

    *Significant research is still required to improve this ‘understandability’ aspect of AI.

    Bas has strived towards multidisciplinary research combining clinical biochemistry and computer sciences, as illustrated by his degrees in computer sciences, biology and life sciences and a PhD in glyco-bioinformatics. He has worked at contract research organizations in Oxford and Zagreb where he focussed on method development using laboratory automation and artificial intelligence. Additionally, he has led various biomarker discovery projects, e.g., a study identifying a blood glycomics based biomarker. Recently, he became the head of life sciences at a growing AI company, intent on providing explainable and transparent AI to life sciences, pharma and healthcare.

  • 12:25
    Łukasz Kidziński

    Clinical Motion Lab in Your Pocket

    Łukasz Kidziński - Research Associate - Stanford University

    Many neurological and musculoskeletal diseases impair movement, which limits people’s function and social participation. Quantitative assessment of motion is critical to medical decision-making but is currently possible only with expensive motion capture systems and highly trained personnel. We developed AI-based algorithms for quantifying gait pathology using commodity cameras. Our methods increase access to quantitative motion analysis in clinics and at home and enable researchers to conduct studies of neurological and musculoskeletal disorders at an unprecedented scale.
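
    To illustrate the kind of computation such a pipeline performs downstream of pose estimation, the sketch below estimates cadence from an ankle-keypoint trajectory; the keypoints themselves are assumed to come from an off-the-shelf pose model, and the signal here is synthetic.

```python
# Sketch: estimate cadence (steps per minute) from a vertical ankle-keypoint
# trajectory produced by an off-the-shelf pose estimator. The pose-estimation
# step is assumed; the trajectory below is synthetic.
import numpy as np
from scipy.signal import find_peaks

fps = 30.0
t = np.arange(0, 10, 1 / fps)                     # 10 s of video at 30 fps
step_freq_hz = 1.8                                # ~108 steps/min ground truth
rng = np.random.default_rng(0)
ankle_y = np.sin(2 * np.pi * step_freq_hz * t) + 0.1 * rng.normal(size=t.size)

# Each oscillation peak of the ankle trajectory is treated as one step.
peaks, _ = find_peaks(ankle_y, distance=fps / 3)  # refractory period ~0.33 s
cadence = len(peaks) / (t[-1] - t[0]) * 60.0
print(f"estimated cadence: {cadence:.0f} steps/min")
```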

    3 Key Takeaways:

    *Quantitative assessment of movement enables diagnostics and treatment of many neurological disorders

    *Existing methods for quantitative analysis of movement require very expensive equipment

    *Deep learning models can predict common gait metrics using a mobile phone camera

    Łukasz Kidziński is a co-founder of Saliency and a research associate in the Neuromuscular Biomechanics Lab at Stanford University, applying state-of-the-art computer vision and reinforcement learning algorithms for improving clinical decisions and treatments. Previously he was a researcher in the CHILI group, Computer-Human Interaction in Learning and Instruction, at the EPFL in Switzerland, where he was developing methods for measuring and improving engagement of users in massive online open courses. He obtained a Ph.D. degree at Université Libre de Bruxelles in mathematical statistics, working on frequency-domain methods for dimensionality reduction in time series.

  • 12:50

    ROUNDTABLE DISCUSSIONS

  • Steve Ardire

    ROUNDTABLE: Contextual AI for Digital Behavioral Health from SignalAction.AI

    Steve Ardire - CEO & Co-Founder - SignalAction.AI

    COVID-19 has fueled the mental health crisis, especially for the world’s youth: depression rates tripled during the pandemic, and 1 in 4 people in the 18-24 age bracket have seriously considered suicide. Human behavior is messy, and most brain activity is nonconscious, so the best way to address this is multimodal analysis, i.e., spoken language, emotional, facial, and behavioral inputs with human-like understanding, to reveal intent, nuanced perceptions, and anxieties for more meaningful insights. If fear and anxiety are detected, depression may be starting to manifest, so it would be incredibly helpful for therapists to have real-time access to session data that shows behavioral and emotional states in granular detail to analyze the situation better.

    Steve is the Co-Founder of SignalAction.AI, which provides Contextual AI for Digital Behavioral Health. He built his personal brand as an AI startup ‘force multiplier’ (advising 25 AI startups over the past 7 years) and quintessential ‘Merchant of Light’, shaping serendipity to connect and illuminate the dots that matter, and leveraging deep relationship capital with incisive business strategy to deliver the best results.

  • Xian Zhang

    ROUNDTABLE: Deep Learning for Biomedical Imaging

    Xian Zhang - Scientist - Novartis

    Biomedical imaging, such as cellular imaging, tissue imaging, medical imaging and organism imaging, is a gold mine for artificial intelligence and computer vision. Deep learning methods, including convolutional neural networks, generative adversarial networks, and autoencoders, have shown early success in segmentation, classification, and regression applications, as well as potential in tasks such as registration, in silico labelling and gaining biological insights. This roundtable discussion aims to engage the audience to share opinions about the current status and future directions of deep learning for biomedical imaging.

    Key Takeaways

    *Which types of biomedical imaging data are available and relevant

    *What are the current approaches and successes

    *What are the future challenges and opportunities

    Xian Zhang leads a data science group at Novartis. Working with diverse biomedical imaging and sequencing data types, he and his team focus on deep learning research and applications in various segments of early drug discovery. Xian obtained his PhD from the University of Rochester and completed his postdoc at the German Cancer Research Center.

  • 13:15
     Vivek Natarajan

    ROUNDTABLE: Building Better Medical AI for Clinical Deployment at Scale

    Vivek Natarajan - Researcher - Google

    In recent years, we have seen several research breakthroughs demonstrating the potential of AI in healthcare settings. However, we are yet to see AI have any impact in the real world and improve patient outcomes. In this discussion, I will lay out some of the key challenges of developing and deploying AI at scale in clinical settings and introduce some of my work at Google towards addressing them. We will then have an open discussion on how we can accelerate solving these challenges and realize the potential of AI in clinical settings.

    Key Takeaways

    *Why is AI yet to have real-world patient impact, and what are the key technical and non-technical challenges we need to address for this to happen?

    *How we can address those challenges systematically, drawing upon examples from my work at Google to illustrate this

    *We have all the key ingredients to address these issues, and if we can make systematic progress, we can very soon realize patient impact at scale with AI

    Vivek is currently working at the intersection of Artificial Intelligence and Healthcare at Google. His work aims at accelerating the translation of state-of-the-art AI/ML to health products and real-world clinical impact. His current research spans improving accuracy, data efficiency, robustness, generalization, fairness, privacy and safety of AI models in healthcare, with applications in dermatology, mammography and radiology.

    Previously, he worked on Artificial Intelligence-based assistant systems at Facebook, improving their ability to understand multimodal data like images, text and speech and interact better with users at scale.

  • 13:15

    COFFEE BREAK: Explore the Expo Area

  • 13:30
    Judy Gichoya

    Fairness in Medical Algorithms: Threats and Opportunities

    Judy Gichoya - NIH Data Scholar - Fogarty International Center at the National Institutes of Health

    The year 2020 has brought into focus a second pandemic of social injustice and systemic bias with the disproportionate deaths observed for minority patients infected with COVID. As we observe an increase in development and adoption of AI for medical care, we note variable performance of the models when tested on previously unseen datasets, and also bias when the outcome proxies such as healthcare costs are utilized. Despite progressive maturity in AI development with increased availability of large open source datasets and regulatory guidelines, operationalizing fairness is difficult and remains largely unexplored. In this talk, we review the background/context for FAIR and UNFAIR sequelae of AI algorithms in healthcare, describe practical approaches to FAIR Medical AI, and issue a grand challenge with open/unanswered questions.
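
    One concrete way to begin operationalizing fairness, offered here as an assumed example rather than the speaker's method, is to audit a fixed model's error rates across patient subgroups:

```python
# Sketch of a subgroup audit: compare true-positive and false-positive rates of
# a fixed model across two patient groups (an equalized-odds style check).
# Synthetic data with a deliberately less sensitive "model" for group 1.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)                # e.g. two demographic groups
y_true = rng.integers(0, 2, size=n)
p_detect = np.where(group == 0, 0.9, 0.7)         # sensitivity differs by group
y_pred = np.where(y_true == 1,
                  (rng.random(n) < p_detect).astype(int),
                  (rng.random(n) < 0.1).astype(int))

for g in (0, 1):
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask]).ravel()
    print(f"group {g}: TPR={tp / (tp + fn):.2f}  FPR={fp / (fp + tn):.2f}")
```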

    Key Takeaways:

    *Overall there is a lack of governance and regulation for ensuring fairness in medical algorithms

    *Existing clinical systems deploying AI under the guise of clinical decision support tools add another layer of the black box to medical AI

    *AI techniques on new proxy metrics can narrow the disparities gap

    Dr. Gichoya is a multidisciplinary researcher, trained as both an informatician and a clinically active radiologist. She is an assistant professor at Emory University, and works in Interventional Radiology and Informatics. She has been funded through Grand Challenges Canada, NIBIB and NSF ECCS. Her career focus is on validating machine learning models for health in real clinical settings, exploring explainability and fairness, with a specific focus on how algorithms fail. She has worked on the curation of datasets for the SIIM (Society for Imaging Informatics in Medicine) hackathon and ML committee. She volunteers on the ACR and RSNA machine learning committees to support the AI ecosystem and advance the development and use of AI in medicine. She is currently working on the sociotechnical context for AI explainability in radiology, especially the dimensions of human factors that govern user perceptions and preferences of XAI systems.

  • 13:55

    PANEL: The Future of AI in the Health and Pharmaceutical Industry

  • 14:30
    Bill Fox J.D.

    Panellist

    Bill Fox J.D. - Healthcare and Life Sciences Lead - SambaNova Systems

    Bill leads healthcare and life sciences business development at SambaNova Systems. He has over 20 years of experience in healthcare technology and is an internationally recognized thought leader on digital transformation in healthcare and life sciences. He is the former SVP of AI at Change Healthcare. Prior to that he led the global healthcare and life sciences vertical at MarkLogic. Prior to MarkLogic he held senior positions at Booz Allen, LexisNexis and Maximus.

  • Sahab Aslam

    Panellist

    Sahab Aslam - Associate Director, Data Science Capabilities - Merck

    Sahab Aslam received her Master’s in Information & Data Science from the University of California, Berkeley. Sahab has unique and diverse experience ranging across data science, digital health, product development, software engineering, and human-centric design in start-ups and Fortune 100 companies. Sahab started her digital health journey 9 years ago, providing digital health solutions via SMS and voice recording technologies in underserved populations. Today, she utilizes data science to develop solutions to improve patients' lives. Sahab also holds a Master of Science in Mathematics and a Bachelor’s in Liberal Arts and Sciences. Outside of work, she spends her time advising start-ups and mentoring data science students.

  • Shubha Chaudhari

    Panellist

    Shubha Chaudhari - Head of Digital Transformation - Novartis

    Shubha is Head of Digital Transformation at Novartis and earlier in her career held positions at BMS and Merck across a variety of disciplines: R&D, Commercial, Supply Chain, and corporate functions such as Procurement and Finance Informatics. She has deep knowledge of software product integrations and transformation initiatives in Fortune 500 companies. Shubha earned an MS in Bio-Medical Informatics from Nova Southeastern University College of Pharmacy; an MS in MIS from NJIT; Berkeley Executive Education on Digital Transformation at the Haas School of Business; and a BS in Engineering from India. In 2015, Shubha was recognized with a Tribute to Women in Industry award.

  • Shameer Khader

    Panellist

    Shameer Khader - Senior Director of Data Science & AI - AstraZeneca

    Dr. Shameer Khader is currently working as a Senior Director of Data Science and Artificial Intelligence at AstraZeneca, USA. He leads a global team that focuses on leveraging trans-disciplinary (biomedical, healthcare, and clinical) big data and machine intelligence to accelerate drug discovery and development. He has more than a decade of experience in building and leading bioinformatics and data science teams in both academia and industry. He obtained his Ph.D. in computational biology from the National Center for Biological Sciences in India. He completed his post-doctoral training in computational genomics and precision medicine at Mayo Clinic, Rochester, MN. He has published more than 70 peer-reviewed research papers in the areas of healthcare data science, bioinformatics, drug discovery, and precision medicine. His work was featured in media outlets including Forbes, Fast Company, Bloomberg News, and Times of India. He has received multiple awards for his research contributions; his work on developing an open catalog of drug repositioning won the Swiss Institute of Bioinformatics' Bioinformatics Resource Innovation Award in 2017. Recently, he was recognized as one of the 100 Artificial Intelligence Leaders in Drug Discovery & Healthcare (DKI Global and Forbes).

  • 14:30
    1:1 Speed Networking

    1:1 SPEED NETWORKING

    Join a 1-to-1 Speed Networking session to be randomly paired with others with a similar interest for short video calls to expand your network and connect with others.

  • 15:00

    END OF EVENT
