REGISTRATION & LIGHT BREAKFAST
Maxine Mackintosh - UCL/Alan Turing Institute
Maxine is a PhD student at the Alan Turing Institute and University College London working at the intersection of data science and dementia. Her PhD involves mining medical records for new predictors of dementia. She is passionate about understanding how we might make better use of routinely collected data to improve our cognitive health. Alongside this, she is the co-founder of One HealthTech – a community which champions and supports underrepresented groups, particularly women, to be the future leaders in health innovation. Her professional work has led her to the Royal Society, Roche, L’Oreal, the Department for International Development, and NHS England. She is part of a number of communities and committees including the World Economic Forum’s Global Shapers, the British Computer Society (Health Exec) and DeepMind Health’s Independent Review Panel.
DEEP LEARNING & HEALTHCARE IN PRACTICE
Sarah Culkin - NHS England
AI in the NHS
AI and other innovative uses of data have the potential to revolutionise healthcare, but perhaps not in the ways people assume. This session will reflect on emerging NHS England policy on how AI could work well in the NHS. It will also touch on some of the preliminary research and development projects being explored by our own data scientists.
Following a PhD in Chemistry, Sarah joined the Government Operational Research Service and held a range of analytical and research positions both at the Department of Health and in the NHS. In 2015 she established and led a Data Science Unit within the Department of Health where she developed, amongst other things, a project that used Google search patterns to successfully predict hospital admissions for pneumonia and the corresponding pressure on A&E over winter. Sarah is currently Strategic Data Lead at NHS England, where she is involved in establishing and supporting Data Science capacity, and is a member of a working party developing policy for AI in the NHS.
Danielle Belgrave - Microsoft Research
Machine Learning for Subphenotype Discovery
Machine learning advances are opening new routes to more precise healthcare, from the discovery of disease subtypes for stratified interventions to the development of personalised interactions supporting self-care between clinic visits. In this talk, I will present a flexible framework for endo-phenotype discovery through the application of probabilistic modelling to disambiguate diseases where there are heterogeneous phenomena. This strategy enables us to develop a more personalised approach to healthcare whereby information can be aggregated from multiple sources within a unified modelling framework.
Danielle Belgrave is a Machine Learning Researcher in the Healthcare AI Division at Microsoft Research Cambridge. She also holds a tenured Research Fellowship at Imperial College London and received a Medical Research Council Career Development Award in Biostatistics (2015–2018). Her research focuses on integrating expert scientific knowledge to develop statistical machine learning models to understand disease progression over time, with the goal of identifying personalized disease management strategies. She has experience of applied machine learning for personalized health in both the pharmaceutical industry and academia.
DEEP LEARNING IN MEDICAL IMAGING
Peter Mountney - Siemens Healthineers
Deep Learning for Cardiac Interventions
Heart failure affects over 40 million people globally and accounts for 1–2% of total healthcare spend. Yet some treatments, such as Cardiac Resynchronization Therapy, have non-responder rates of 30–50%. This talk will discuss how deep learning can play an important role in planning and guiding the delivery of cardiac therapy. I will address some of the challenges of using deep learning with limited data, and when ground truth data is not available, and show how we are using imitation learning to help doctors perform challenging tasks in stressful situations.
Dr Peter Mountney is a Program Manager and Senior Key Expert Scientist at Siemens Healthineers. His research interests lie in the fields of machine learning, medical imaging and quantum technologies. His research focuses on developing deep technology and translating it into applications. Peter carried out his PhD and post-doctoral work at Imperial College London. He is a Visiting Lecturer at Kings College London in the Department of Biomedical Engineering and the Royal Society Entrepreneur in Residence at UCL in AI and Quantum.
Ahmed Serag - Philips
AI & Deep Learning for Cancer Detection
We are moving rapidly into an era where next-generation pathology is becoming a reality with the advent of digital pathology. Paradigm shifts are being witnessed in cancer care, with precision medicine and personalized treatments advancing by the day. Computational pathology was created to help improve the pathology ecosystem and, beyond that, to support a range of laboratory activities and goals, including improving diagnostic accuracy, optimizing patient care, and reducing costs through better laboratory efficiency. In this talk, we will outline how deep learning could help pathologists work faster and make more accurate diagnoses for patients. Its application to prostate cancer detection will be demonstrated.
Ahmed Serag is a Research Scientist at Philips. He has over twelve years of experience in turning data into knowledge for top-tier firms and academic institutions in Europe and the USA. His research focuses on developing data analysis and decision support tools using machine learning, deep learning and big data. Ahmed earned a PhD in Computer Science from Imperial College London.
Yinyin Yuan - The Institute of Cancer Research
Deep Learning the Ecological Niches of Cancer Cells for Combating Treatment Resistance
Tumours consist not only of cancer cells, but also normal cells such as immune cells that can be critical in eliminating cancer cells. These different types of cells co-exist in different parts of the same tumour, with profound clinical implications. Just as in ecology, where the spatial organisation of animals, their predators and their habitats is central to understanding an ecosystem and making predictions, it is becoming increasingly evident that we need a similar spatial approach to evaluate tumour heterogeneity.
My team at the Institute of Cancer Research develops machine learning and deep learning approaches to identify different types of cells in digital pathological images of tumour sections based on their differences in appearance. Such automated image analysis allows us to map their spatial distribution within the tumour of a patient. The next step is to quantify spatial variability of these cells, usually in the order of millions, using spatial statistics.
Our recent studies on breast cancer and lung cancer underscored the importance of examining the spatial heterogeneity of a tumour. We studied how immune cells are spatially arranged within tumours and detected so-called immune hotspots: tumour regions that contain spatial clustering of immune cells. This uses a spatial statistical method called Getis-Ord hotspot analysis, which is commonly used for detecting crime hotspots in cities. A high number of immune hotspots, but not the overall number of immune cells, correlates with a high probability of cancer recurrence. This work provides a new way to predict patient prognosis, and opens the door to new therapeutic opportunities using immunotherapy across cancer types.
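The Gi* statistic behind Getis-Ord hotspot analysis has a compact form: for each location it compares a locally weighted sum of the variable against its global mean, yielding a z-score in which large positive values flag spatial clusters of high values. A minimal NumPy sketch, assuming simple binary weights within a fixed radius (the team's actual weighting scheme and image-analysis pipeline are not described here):

```python
import numpy as np

def getis_ord_gi_star(values, coords, radius):
    """Getis-Ord Gi* statistic for each point: a z-score where large
    positive values mark spatial clusters of high values (hotspots).

    values: (n,) measurements (e.g. immune-cell counts per tumour region)
    coords: (n, 2) spatial positions
    radius: neighbourhood distance; Gi* includes the point itself
    """
    values = np.asarray(values, dtype=float)
    coords = np.asarray(coords, dtype=float)
    n = len(values)
    # Binary spatial weights: 1 if within `radius` (self included for Gi*)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = (dists <= radius).astype(float)
    x_bar = values.mean()
    s = np.sqrt((values ** 2).mean() - x_bar ** 2)   # population std dev
    wi = w.sum(axis=1)                                # sum of weights per point
    lag = w @ values                                  # locally weighted sum
    num = lag - x_bar * wi
    den = s * np.sqrt((n * (w ** 2).sum(axis=1) - wi ** 2) / (n - 1))
    return num / den
```

On a toy grid where a few adjacent regions carry high cell counts, the clustered cells score well above the usual 1.96 significance threshold while distant background cells score below zero.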
Yinyin Yuan joined the ICR in 2012 as the leader of the Computational Pathology and Integrative Genomics team. Currently, her team is part of the Centre for Evolution and Cancer and the Division of Molecular Pathology. Yinyin was trained in computer science and bioinformatics. She obtained her degrees in computer science from the University of Science and Technology of China (BSc, 2003) and the University of Warwick (MSc by research, 2005, in computer vision and steganography; PhD, 2009, in machine learning and bioinformatics).
AI IN DRUG DISCOVERY & DEVELOPMENT
Shahar Harel - Technion - Israel Institute of Technology
Prototype-Based Drug Discovery using Deep Generative Models
Designing a new drug is an expensive and lengthy process. The first stage is drug discovery, in which potential drugs are identified before selecting a candidate drug to progress to clinical trials. As the space of potential molecules is very large (10²³–10⁶⁰), a common technique during drug discovery is to start from a molecule which already has some of the desired properties. An interdisciplinary team of scientists then generates hypotheses about the required changes to the prototype. We call this process prototype-driven hypothesis generation. In this talk, we present an algorithmic unsupervised approach for prototype-driven hypothesis generation. Our method is inspired by the known analogy between a chemist’s understanding of a compound and a language speaker’s understanding of a word (“Atoms are letters, molecules are the words, supramolecular entities are the sentences and the chapters” [Jean-Marie Lehn, 1995]), which motivates the potential of Natural Language Processing for Computational Chemistry. More formally, we design a conditional deep generative model for molecule generation with diversity attention. The model operates on a given molecule prototype and generates various candidate molecules, which should be novel while sharing desired properties with the prototype. We show that the molecules generated by the system are valid, have a strong connection to the prototype, and are nonetheless novel. Among the compounds generated by the system, we identified 35 FDA-approved drugs. As an example, our system generated Isoniazid, one of the main drugs used to treat tuberculosis.
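The "atoms are letters" analogy becomes concrete once a molecule's SMILES string is split into chemically meaningful tokens that a sequence model can treat like words. A minimal tokeniser sketch, as my own illustration of that first step; the authors' actual vocabulary and model are not specified in this abstract:

```python
import re

# Minimal SMILES tokeniser: split a molecule string into chemically
# meaningful units (atoms, bonds, branches, ring labels) that a sequence
# model can consume like words. Hypothetical helper, not the speaker's
# actual pipeline.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]"      # bracket atoms, e.g. [NH3+]
    r"|Br|Cl"           # two-letter organic-subset atoms
    r"|[BCNOPSFI]"      # one-letter atoms
    r"|[bcnops]"        # aromatic atoms
    r"|[=#\-+\\/()]"    # bonds and branches
    r"|%\d{2}|\d)"      # ring-closure labels
)

def tokenize_smiles(smiles):
    """Tokenise a SMILES string, raising if any character is unrecognised."""
    tokens = SMILES_TOKEN.findall(smiles)
    if "".join(tokens) != smiles:
        raise ValueError(f"untokenisable SMILES: {smiles!r}")
    return tokens
```

For example, paracetamol's SMILES "CC(=O)Nc1ccc(O)cc1" splits into aliphatic atoms, aromatic atoms, branch symbols and ring labels, ready for a character-level generative model.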
Shahar Harel is a graduate student in the Computer Science Department at the Technion - Israel Institute of Technology under the guidance of Dr. Kira Radinsky and Professor Shaul Markovitch, and a Research Scientist at SparkBeyond, an innovative AI start-up based in Israel. In his research, Shahar is developing machine learning methods for the generation of novel chemical compounds with unique characteristics, mainly for drug discovery and development. His research has recently been published in top-tier data science conferences and pharmaceutical science journals. Shahar is currently working closely with pharmaceutical companies to further analyse additional generated molecules with high potential for therapeutic effects.
Polina Mamoshina - Insilico Medicine
Deep Learning for Drug Discovery: Applying Deep Adversarial Networks for New Molecule Development
Neural networks and other machine learning models have recently been applied to many biological problems, including drug discovery. Applications of deep neural networks combined with domain expertise can help design de novo drug-like compounds and generate large virtual chemical libraries, which can be screened more efficiently for in silico drug discovery purposes. This presentation will describe applications of deep adversarial networks and reinforcement learning for molecular de novo design. It will also briefly cover the Insilico Medicine drug discovery pipeline and how machine learning can be applied at each step.
Polina Mamoshina is a senior research scientist at Insilico Medicine, Inc., a Baltimore-based bioinformatics and deep learning company focused on reinventing drug discovery and biomarker development, and a member of the computational biology team of the Oxford University Computer Science Department. Polina graduated from the Department of Genetics of Moscow State University. She was one of the winners of GeneHack, a Russian nationwide 48-hour hackathon on bioinformatics at the Moscow Institute of Physics and Technology attended by hundreds of young bioinformaticians. Polina is involved in multiple deep learning projects at the Pharmaceutical Artificial Intelligence division of Insilico Medicine, working on the drug discovery engine and developing biochemistry, transcriptome, and cell-free nucleic acid-based biomarkers of aging and disease. She has recently co-authored seven academic papers in peer-reviewed journals.
DEEP LEARNING IN DIAGNOSTICS
Mark Gooding - Mirada Medical
Deploying AI in the Clinic: Thinking About the Box
As machine learning scientists working in healthcare, we get very excited about both the potential of AI technology and the results that can be achieved with it currently. However, good performance does not guarantee clinical use. In this talk, I will present some considerations that must be addressed in translating technical research into clinical products. While many of the challenges remain the same regardless of the technology used, I will focus specifically on the impact that AI has on reaching the clinic, giving examples from our experience at Mirada in commercialising deep learning-based autocontouring.
Dr Mark Gooding, Chief Scientist at Mirada Medical, obtained his DPhil in Medical Imaging from University of Oxford in 2004. He was employed as a postdoctoral researcher both in university and NHS settings, where his focus was largely around women’s health. In 2009, he joined Mirada Medical, motivated by a desire to see technical innovation translated into clinical practice. While there, he has worked on a broad spectrum of clinical applications, developing algorithms and products for both diagnostic and therapeutic purposes. If given a free choice of research topic, his passion is for improving image segmentation, but in practice he is keen to address any technical challenge. Dr Gooding now leads the research team at Mirada, where in addition to the commercial work he continues to collaborate both clinically and academically.
Dr Gooding has been responsible for leading the research and development of DLCExpert™ technology, which uses AI (Artificial Intelligence) to learn a clinician’s contouring preferences and automatically apply them to images. This technology demonstrates that AI is not just about huge technological leaps forward: it can be applied rapidly to everyday tasks to make incremental improvements to the effectiveness of radiotherapy treatment planning, saving time for oncologists and potentially improving patient care.
Spiros Denaxas - Institute of Health Informatics, University College London
Data-driven tools for disease prognosis
Disease prediction tools enable clinicians to identify patients at higher risk of a particular health outcome, such as a diagnosis or complications associated with a disease, being hospitalized (or rehospitalized), or dying from a specific condition. The majority of tools, however, do not fully exploit the richness and resolution of available data, as they tend to use a small set of manually curated clinical features and traditional statistical modelling approaches. This talk will illustrate and critically appraise the use of machine learning approaches, including supervised learning algorithms and neural network representations of clinical concepts, for developing and evaluating risk prediction tools using all available data on millions of patients.
Spiros is an Associate Professor in Biomedical Informatics based in the Institute of Health Informatics at University College London. His background is in computer science, information systems engineering and bioinformatics. His research lab (http://denaxaslab.org) operates at the intersection between health research and computer science and focuses on creating and evaluating data-driven methods for transforming electronic health records into research-ready datasets and answering clinically meaningful questions.
Nikolas Pontikos - Researcher & Data Scientist - UCL & Moorfields Eye Hospital
Eye2Gene: A Web App to Assist Genetic Diagnosis of Inherited Retinal Disease with Artificial Intelligence
Inherited Retinal Diseases (IRDs) are a group of genetic conditions which cause progressive and bilateral deterioration of the retina, the light-sensitive tissue at the back of the eye. It is estimated that 1 in 3,000 people have an IRD, and it is the leading cause of blindness in the UK working-age population. Mutations in over 200 genes are known to cause IRD, and identifying the causal mutation (a genetic diagnosis) is a significant step towards managing, and potentially treating, people's sight loss. However, there are very few sites and specialists around the world able to achieve these diagnoses. The goal of the Eye2Gene (www.eye2gene.com) web app is to make this specialist service more widely available by using an AI trained on expertly curated datasets from specialist centres around the world. Through the uploading of retinal images, genetic data and basic patient information, Eye2Gene is already able to efficiently reach a computer-assisted genetic diagnosis for a number of IRDs. Eye2Gene promises to encourage collaboration and data-sharing in the IRD community which will, in turn, motivate funding for the development of further IRD treatments.
Dr Nikolas Pontikos is a researcher at the UCL Institute of Ophthalmology and a data scientist at Moorfields Eye Hospital. His background is in computer science, bioinformatics and genetics. He specialises in the analysis of genetic and retinal imaging data to identify genetic mutations causing retinal visual impairment. His project Eye2Gene (www.eye2gene.com), aims to make this service globally accessible. He also works on uncovering gene to phenotype correlations in rare disease with his project Phenopolis (www.phenopolis.org) in collaboration with Genomics England.
Daniel Leightley - King’s Centre for Military Health Research
InDEx: Managing Alcohol Misuse by Automation
Technological advances within smartphone devices are creating new innovative routes to improve monitoring, delivery and effectiveness of clinical interventions. In this talk, I will present InDEx, a smartphone app designed to reduce alcohol misuse in veterans through the application of machine learning and behavioural change theory. This combination enables us to personalise both the content of InDEx and messaging to promote healthy lifestyle changes in the armed forces community.
Daniel Leightley is a Post-Doctoral Research Associate at the King’s Centre for Military Health Research. His research focuses on the interface between machine learning and mobile health technologies, specifically focused on diagnosis, treatment, intervention and management of physical and mental health conditions in the Armed Forces community.
Graeme Rimmer - Google
Cardiovascular Signals in Photoplethysmography: Heart Health from the Wrist
Wearable sensors provide the opportunity for continuous passive monitoring of health indicators. In particular, heart-rate sensors can extract the pulse waveform, and that waveform can be mined for heart-health indicators. At present, wearable devices mine only the heart rate, but there is a lot of cardiovascular activity that affects the waveform’s shape.
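Mining the pulse waveform starts with something as simple as beat detection. A toy sketch of heart-rate estimation from a photoplethysmography trace, for illustration only: production wearables use far more robust pipelines (band-pass filtering, motion-artefact rejection, beat templates) than this naive peak counter.

```python
import numpy as np

def estimate_heart_rate(ppg, fs):
    """Estimate beats per minute from a PPG waveform by counting
    systolic peaks.

    ppg: 1-D signal
    fs:  sampling rate in Hz
    """
    x = ppg - np.mean(ppg)
    # Local maxima above the baseline, at least 0.3 s apart (~200 bpm cap)
    peaks = []
    min_gap = int(0.3 * fs)
    for i in range(1, len(x) - 1):
        if x[i] > 0 and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    if len(peaks) < 2:
        return 0.0
    mean_interval = np.mean(np.diff(peaks)) / fs   # seconds per beat
    return 60.0 / mean_interval
```

On a synthetic 1.2 Hz pulse this recovers roughly 72 bpm; the richer cardiovascular signals the talk alludes to live in the waveform shape between those peaks.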
Graeme is engineering lead on Google Fit as it pivots into the health space. His team build apps for Android, iOS and wearable devices as well as a platform for third party fitness and health developers. One particular passion is researching digital biomarkers that can be inferred from both mobile and wearable devices.
CONVERSATION & DRINKS
Lydia Nicholas - Independent
Lydia is a digital anthropologist who researches, writes and communicates about issues where data, culture and bodies meet. Formerly a senior researcher in Nesta's Futures Team and Health Lab she is now pursuing a PhD with UCL's Interaction Centre and Great Ormond Street Hospital. She has worked with the UK Government, the Science Museum, the Wellcome Trust, and more, appeared on BBC Worldwide and Radio 4, writes about science and culture for the New Scientist and regularly performs stand-up comedy about futures & science.
Tímea Polgár - HubScience
Let The Computer Read
HubScience is a scientific software development and service provider company founded in 2015. A team with over 12 years of experience in the development of commercial databases and innovative decision-making solutions, with a primary focus on the pharmaceutical and biotechnology industries, ensures high-quality technology. HubScience offers versatile technology to accelerate knowledge extraction from the scientific literature, enabling researchers to utilise existing scientific achievements and to generate machine-readable facts for further analytics. Users can train a built-in AI to recognise pre-defined information categories and automatically analyse large bodies of text within a given topic. By supporting cooperation, the AI-training process and knowledge-sharing are facilitated.
Timea received her Ph.D. in pharmaceutical sciences, with emphasis on chemical engineering and computational approaches for early-phase drug discovery, from the Budapest University of Technology and Economics, graduating summa cum laude. Her postdoctoral training was in biochemistry, genetics and molecular biology at Albert Szent-Györgyi Medical University in Hungary. She also holds an M.Sc. in chemistry with emphasis on computational approaches and an M.Sc. in molecular biology. Her broad experience includes strategic business development and research and development, with over 18 years in pharmaceutical research and development, biotechnology and scientific software development. She has all-encompassing technical and scientific expertise in drug discovery, with specialties in network pharmacology, ligand-based and structure-based drug design, virtual screening, chemical and biopharmaceutical database management, high-content data analysis and genetics. She has published around 30 peer-reviewed journal papers and book chapters.
Finn Catling - Decode Healthcare
Towards automated clinical coding
Manual clinical coding is expensive, unstandardised and incompatible with real-time data interpretation. Traditional automated systems fail to capture much of the information contained in clinical notes. Automated clinical coding is challenging due to the large number of clinical codes relative to the amount of training data available, and the rarity of many important diseases. Our system uses deep learning to produce rich representations of clinical notes, allowing more accurate coding. We use the structure of medical knowledge to learn more efficiently from training data and to allow better coding of rare diseases.
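One common way to use the structure of medical knowledge is to let each code share statistical strength with its ancestors in the coding hierarchy, so rare leaf codes benefit from their frequent parents. A minimal ICD-10-style sketch of that idea, as my own illustration; the speaker's actual system is not described at this level of detail:

```python
def icd10_ancestors(code):
    """Return an ICD-10 code together with its ancestors, most general
    first. ICD-10 codes have a three-character category (e.g. "J18")
    followed by optional further characters of specificity ("J18.9").
    Codes are returned undotted for simplicity.
    """
    flat = code.replace(".", "")
    return [flat[:i] for i in range(3, len(flat) + 1)]

def expand_labels(codes):
    """Expand a note's assigned codes into the full ancestor set, e.g.
    as a multi-hot training target over the hierarchy, so a classifier
    earns partial credit on rare codes via their common parents."""
    expanded = set()
    for code in codes:
        expanded.update(icd10_ancestors(code))
    return sorted(expanded)
```

Training on the expanded label set means a note coded "J18.9" (pneumonia, unspecified) still teaches the model something about the whole "J18" category, even if that exact leaf code is rare in the data.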
Finn Catling is a doctor, machine learning researcher and entrepreneur. He is the founder of Decode Healthcare, a startup which uses AI to drive new insights, better outcomes and improved efficiency for hospitals and GP practices. His recent research focuses on automated clinical coding, prediction of ventilator-associated pneumonia, prediction of demand on emergency services and generation of synthetic radiotherapy data. Previously, he co-founded the medical education startup T-Log and the Data Science for Doctors courses.
Eduardo W. Jorgensen - MedicSen
Predictive Algorithms and their Impact in Chronic Diseases Management: Diabetes Case
Eduardo will share his vision of the major role that artificial intelligence can play in the digital transformation of the healthcare sector. As a medical doctor and CEO of a medical devices company, he has negotiated with major stakeholders and has clear insights into the needs of both the industry and the patient. Personalized self-care, preventive diagnosis, individual treatments and patient support programs: all made possible by predictive models and easy interfaces.
Eduardo W. Jorgensen graduated in Medicine from UAM in 2015, having entered medical school in 2009 and built several medical research teams during college. He is passionate about medicine and new technologies, especially in the areas of neuroscience and perception, and his goal is to change the way we live with chronic diseases. During his studies he was selected for StartupsMansion, a New York City entrepreneurship programme, founded two companies, Livinplans (a surprise travel agency) and MedicSen (a non-invasive artificial pancreas for diabetes), and was accelerated by TURN8 in Dubai. He aims to keep learning as much as possible about the world and to surround himself with highly innovative people, forming multidisciplinary teams capable of genuine disruption.
DEEP LEARNING APPLIED IN HEALTHCARE
Ben Glocker - Imperial College London
Deep Learning in Medical Imaging - Successes and Challenges
Machines capable of analysing and interpreting medical scans with super-human performance are within reach. Deep learning, in particular, has emerged as a promising tool in our work on automatically detecting brain damage. But getting from the lab into clinical practice comes with great challenges. How do we know when the machine gets it wrong? Can we predict failure, and can we make the machine robust to changes in the clinical data? We will discuss some of our most recent work that aims to address these critical issues and demonstrate our latest results on deep learning for analysing medical scans.
Ben Glocker is Senior Lecturer in Medical Image Computing at the Department of Computing at Imperial College London, and one of three academics leading the Biomedical Image Analysis Group. He also leads the HeartFlow-Imperial Research Team and is scientific advisor for London-based start-up Kheiron Medical Technologies. His research is at the intersection of medical image analysis and artificial intelligence aiming to build computational tools for improving diagnosis, therapy and intervention. He has received several awards including a Philips Impact Award and the Francois Erbsmann Prize. He is a member of the Young Scientists Community of the World Economic Forum. His ERC Starting Grant MIRA is devoted to developing the next generation machine intelligence for medical image representation and analysis.
Maithra Raghu - Google Brain
Explainability Considerations for AI Design
Many AI explainability techniques focus on considerations around AI deployment. But another crucial challenge is the complex AI design process, spanning data, model choices and learning algorithms. In this discussion, we give an overview of some of the important considerations for explainability in AI design. What might explainability in the design process be defined as? What are some of the approaches being developed, and what are their practical takeaways? What are the key open questions looking forward?
Maithra Raghu is a Senior Research Scientist at Google Brain; she completed her PhD in Computer Science at Cornell University. Her research broadly focuses on enabling effective collaboration between humans and AI, from design to deployment. Specifically, her work develops algorithms to gain insights into deep neural network representations and uses these insights to inform the design of AI systems and their interaction with human experts at deployment. Her work has been featured in many press outlets including The Washington Post, WIRED and Quanta Magazine. She has been named one of the Forbes 30 Under 30 in Science, a 2020 STAT Wunderkind, and a Rising Star in EECS.
Steven Finkbeiner - Director, Center for Systems and Therapeutics and the Taube/Koret Center for Neurodegenerative disease - Gladstone Institutes
Applications of Deep Learning to Neurotherapeutics Development
In this talk, we will outline some of the major obstacles to therapeutics development for neurological and psychiatric diseases and how deep learning might be used to address them. We developed robotic microscopes and patient stem cell models of disease that we use to generate large training sets for deep learning networks to develop algorithms that could help with patient stratification, model development, diagnosis, target identification and drug discovery. In one particular example, we collaborated with engineers from Google and recently developed a new technology called in silico tagging in which deep learning networks were trained to accurately predict cell structures, cell states and cell types from unlabeled images obviating the need to perform labeling. This technology will enable investigators to glean much more information from their data than previously possible for almost no additional cost. We see tremendous opportunities for applications of deep learning to contribute to the development of treatments for some of the most devastating diseases known.
Dr. Finkbeiner trained at Yale University, UCSF and Harvard University before joining the faculty at the Gladstone Institutes and UCSF in 1999. Since then, he has been promoted to his current position as a Director at the Gladstone Institutes and a Professor of Neurology and Physiology at UCSF. He directs the Center for Systems and Therapeutics and the Taube/Koret Center for Neurodegenerative Disease. His research has focused on basic science and disease-related questions in neuroscience, particularly fundamental questions related to learning and memory, and on elucidating mechanisms of neurodegenerative disease and mental illness. In 2009, the Taube/Koret Center was established to catalyze the development of neurotherapeutics, leveraging discoveries and technology from the academic laboratory. Early on, Dr. Finkbeiner developed robotic microscopy, a high-throughput longitudinal single-cell imaging and analysis approach. It provides a way for scientists to quantify the prognostic value of cellular and molecular changes during a cell’s lifetime for some important future event. It helps to overcome limitations in sensitivity and observer bias inherent to conventional approaches based on single snapshots in time, and it has proven very valuable for developing a systems understanding of biology and pathobiology, for developing disease models, particularly those based on induced pluripotent stem cells, and for finding putative therapeutics. In turn, this technology has been useful for generating data at a scope and scale that is ideally suited for deep learning networks to develop powerful predictive algorithms and to make important unbiased discoveries in large complex datasets. His laboratory has applied the approach in studies of Parkinson’s disease, Huntington’s disease, ALS, Alzheimer’s disease, frontotemporal dementia, autism and schizophrenia.
ETHICS & REGULATION OF DEEP LEARNING IN PRACTICE
PANEL: Regulation And Global Policy - AI and Autonomous Systems
Jade Leung - Governance AI Program, Future of Humanity Institute
Jade is a researcher with the Governance of Artificial Intelligence Program (GovAI) at the Future of Humanity Institute (University of Oxford). Her research focuses on the governance of emerging dual-use technologies, with a specific focus on firm-government relations in the US and China with respect to advanced artificial intelligence. Jade has a background in engineering, international law, and policy design and evaluation.
Alison Hall - PHG Foundation/University of Cambridge
Alison leads the Humanities work at the PHG Foundation, a health policy think tank which is part of University of Cambridge. Her research focuses on the regulation and governance of genomic data for clinical care and research, the impact of automated processing and artificial intelligence on existing legal and ethical frameworks, and the challenges and opportunities associated with delivering personalised healthcare. Alison has professional qualifications in law and nursing and a masters qualification in healthcare ethics.
Andrea Renda - Centre for European Policy Studies
Andrea Renda is an Italian social scientist whose research lies at the crossroads of economics, law, technology and public policy. He is Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy at the Centre for European Policy Studies (CEPS). Since September 2017, he has held the Chair for Digital Innovation at the College of Europe in Bruges (Belgium), where he has also led the course “Regulatory Impact Assessment for Business” since 2007. He is also a non-resident fellow at Duke University's Kenan Institute for Ethics. Over the past two decades, he has provided academic advice to several institutions, including the European Commission, the European Parliament, the OECD, the World Bank and several national governments around the world. An expert in technology policy and better regulation, he is a member of the ESIR (Economic and Social Impacts of Research) expert group of the European Commission and a member of the EU Blockchain Observatory and Forum. He is also a member of the Editorial Boards of the international peer-reviewed journals “Telecommunication Policy” (Elsevier) and the “European Journal of Risk Regulation” (Lexxion), a member of the Scientific Board of the International Telecommunications Society (ITS) and Chair of the Scientific Board of European Communications Policy Research (EuroCPR). He holds a Ph.D. in Law and Economics from Erasmus University Rotterdam.
Loubna Bouarfa - OKRA Technologies
Dr Loubna Bouarfa is a machine learning scientist turned entrepreneur. She is the founder and CEO of OKRA Technologies, an artificial intelligence data analytics company for healthcare. OKRA allows healthcare professionals to combine all their data in one place and generate actionable, evidence-based insights in real time, to save and improve human lives. Loubna is currently a member of the European Union High-Level Expert Group on Artificial Intelligence, where she is particularly focused on healthcare and achieving competitive business impact with AI. She was named an MIT Technology Review Top Innovator Under 35 and one of the Forbes 50 Top Women In Tech, and has won several prizes, including CEO of the Year 2019 at the Cambridge Independent Science and Technology Awards and Best Female-Led Startup at the StartUp Europe Awards. On a personal level, she is a strong advocate for diversity and women in tech, and for challenging the status quo.
Matthew Fenech - Future Advocacy
Dr Matthew Fenech is an artificial intelligence policy consultant, with expertise in developing and advocating for policies that maximise the opportunities and minimise the risks of these technologies. His main interest is in the ethics and practicalities of the use of AI in healthcare, a field to which he brings his 10 years of experience working as a hospital doctor and clinical academic. He has also authored reports about AI and other emerging technologies in low- & middle-income countries, and on the impact of automation on the future of work. He regularly speaks about these topics in lectures and in the media.
PANEL: Ethically Handling Data - What is Your Responsibility and What Should be the Next Step?
Alice Piterova - Hazy
Alice reviews Hazy's product features for AI ethics, data privacy and compliance, and helps define Hazy's core message to the world. Prior to joining Hazy, Alice coordinated the cross-party parliamentary group on AI (APPG AI), helping the UK Government address ethical implications and design new standards for applying machine learning in commercial, political and social areas.
Alice has over 10 years of experience in policy, research, product management and marketing, with a particular focus on artificial intelligence, big data and tech for good. Having worked in national and international public- and private-sector organisations, social enterprises and NGOs, Alice has a proven track record of delivering strategic vision and demonstrating impact to a wide range of stakeholders.
Aimee Van Wynsberghe - TU Delft
Aimee van Wynsberghe has been working in ICT and robotics since 2004. She began her career as part of a research team working with surgical robots in Canada at CSTAR (Canadian Surgical Technologies and Advanced Robotics). She is Assistant Professor in Ethics and Technology at TU Delft in the Netherlands. She is co-founder and co-director of the Foundation for Responsible Robotics, on the board of the Institute for Accountability in a Digital Age, and an advisory board member for the AI & Intelligent Automation Network. Aimee also serves as a member of the European Commission's High-Level Expert Group on AI and is a founding board member of the Netherlands AI Alliance. Aimee has been named one of the Netherlands' top 400 influential women under 38 by VIVA and one of the 25 ‘women in robotics you need to know about’. She is author of the book Healthcare Robots: Ethics, Design, and Implementation and has been awarded an NWO personal research grant to study how we can responsibly design service robots. She has been interviewed by the BBC, Quartz, the Financial Times, and other international news media on the topic of ethics and robots, and is often invited to speak at international conferences and summits.
Caryn Tan - Accenture
Caryn is an Analytics Strategist operating at the intersection of applied analytics and law/ethics.
She advises senior decision-makers on analytics strategy, target operating model and analytics business case and manages technical teams to operationalise and realise these strategies. She also manages Accenture’s Responsible AI practice in the UK where she helps clients confidently deploy responsible AI models with technical, organisational, governance and brand considerations. This involves working with multidisciplinary teams, industry experts and academic institutes.
Caryn graduated from London Business School and holds a law degree from BPP University, both as a merit scholar.
END OF SUMMIT