Last week, RE•WORK held the second annual Deep Learning in Healthcare Summit in London. The summit hosted 30 speakers and 200 attendees over 2 days to explore the recent developments in deep learning and how these are being applied within the healthcare sector. Amongst the lineup of academic influencers and industry leaders, Nils Hammerla, the Machine Learning Lead at Babylon Health presented 'Deep Learning in Health – It's Not All Diagnostics'. Here, Nils gives a little more insight into his work.
Founded in 2013, Babylon Health has won several awards, featured in the Apple Health and
Fitness App Store, and was recently included in Wired’s Top 100 Start-Ups in
Europe. Our vision is to combine the latest technology with the knowledge and
experience of amazing doctors to make healthcare simpler, better, and more
accessible and affordable for people everywhere. Our GP consultations are available in the UK,
Ireland and Rwanda, our app is accessible worldwide, and we have ambitious
plans to expand the reach and range of our services.
In order to achieve this ambition, we need to use technology to closely replicate what your GP does. Suppose you look at the problem through the eyes of a machine: what does your GP actually do when you have a consultation? Easy, right? You tell the doctor what you think is wrong with you and she understands your concern; she asks you a series of questions, examines you, and finally tells you what she thinks – which could be a diagnosis, that you need some specific tests, or that you should see a specialist. From a machine’s point of view, even the most basic of these skills, such as holding a conversation, are difficult problems that are far from being solved.
Research in digital health tends to focus on the last step in this process – the diagnosis. While diagnosis is a crucial part of what makes a doctor, it is still only one of many tasks a doctor performs, and focusing on this single aspect will not solve the problem. Don’t get me wrong, we at Babylon are very interested in diagnosis and invest considerable resources into developing the best possible automated diagnostic system. However, we also focus on the simple things, the things that we as humans take for granted because they come so easily to us.
One major interest of ours is language: how can we make the machine understand what you are saying? Not just recognise that you are talking about your shoulder, but really understand how “shoulder pain” and “I have had shoulder pain for years and nobody can help me” refer to similar concepts yet convey different emotions and problems. Clearly, we are not the only ones who want to solve this problem. Nearly every application of machine learning to text faces essentially the same challenge – and that is a great thing for us! Very clever people at big companies like Google and Facebook are working hard on these problems, and to our benefit they openly publish many of their insights. Given these advances in other fields, the question really becomes one of transfer learning: how do we take the amazing results somebody else has achieved in their domain and apply them to our own problems?
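To give a flavour of what “similar concepts” means to a machine: one common approach is to represent text as vectors, so that related phrases land close together. The word vectors below are made up, four-dimensional toys for illustration only; real systems use embeddings pre-trained on huge corpora.

```python
import numpy as np

# Toy word vectors (hypothetical; real systems use pre-trained
# embeddings such as word2vec or GloVe, with hundreds of dimensions).
vectors = {
    "shoulder": np.array([0.9, 0.1, 0.0, 0.2]),
    "pain":     np.array([0.1, 0.9, 0.1, 0.0]),
    "knee":     np.array([0.8, 0.0, 0.1, 0.3]),
    "ache":     np.array([0.2, 0.8, 0.1, 0.1]),
    "holiday":  np.array([0.0, 0.1, 0.9, 0.0]),
}

def sentence_vector(words):
    """Represent a phrase as the average of its word vectors."""
    return np.mean([vectors[w] for w in words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = sentence_vector(["shoulder", "pain"])
b = sentence_vector(["knee", "ache"])
c = sentence_vector(["holiday"])

# Phrases about bodily pain land close together; unrelated text does not.
print(cosine(a, b) > cosine(a, c))  # True
```

In this vector space, “shoulder pain” and “knee ache” are neighbours even though they share no words, which is a first step towards the kind of understanding described above.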
It turns out that this is exactly the sort of problem deep learning excels at! Why? It is basically the whole reason why neural nets work so well in the first place. Without delving too deep into the technical details, any neural network that performs classification consists of two parts: i) a sequence of complicated transformations of the network’s input (for example an image), and ii) a relatively simple decision (say, whether it shows a cat). When we “train” the network, we show it thousands of examples of input and desired output, and put it under a lot of stress, as it can’t do the task well initially. For the network, the only way to relieve the stress is to become better at the task. As the decision part is so simple, the network has only one choice: it has to transform the input in a way that makes the decision easy. There are loads of nicely written descriptions of the details, so if you are interested, take a look at this awesome post on Colah’s blog.
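To make the two-part view concrete, here is a minimal numpy sketch on a toy task. The task, network size, and learning rate are illustrative choices of mine, not anything Babylon uses: the point is only that part (ii) is a simple linear decision, so relieving the training “stress” forces part (i) to transform the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in task: is a 2-D point inside the unit circle? The simple
# (linear) decision part cannot solve this on the raw input alone.
X = rng.uniform(-2, 2, size=(200, 2))
y = (np.sum(X ** 2, axis=1) < 1).astype(float).reshape(-1, 1)

W1, b1 = rng.normal(scale=0.5, size=(2, 16)), np.zeros((1, 16))  # part (i)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros((1, 1))   # part (ii)

losses = []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                  # complicated transformation
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # simple decision
    losses.append(-np.mean(y * np.log(p + 1e-9)
                           + (1 - y) * np.log(1 - p + 1e-9)))
    d = (p - y) / len(X)                      # the "stress" on the network
    dh = d @ W2.T * (1 - h ** 2)              # push the stress into part (i)
    W2 -= 0.3 * (h.T @ d); b2 -= 0.3 * d.sum(0, keepdims=True)
    W1 -= 0.3 * (X.T @ dh); b1 -= 0.3 * dh.sum(0, keepdims=True)

# The only way to relieve the stress is to reshape the transformation
# until the simple decision becomes easy: the loss falls over training.
print(losses[0], losses[-1])
```

The decision layer never gets any smarter; all the progress comes from the transformation learning to present the input in a decision-friendly form.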
While neural nets are really good at predicting, it is this transformation of the input that is most interesting about them. You can think of the activations of the hidden layers as a form of representation: a vector, basically a list of numbers, with one such representation for each input. Training the neural network arranges this space so that inputs the network considers similar end up in similar locations. The better the network is at arranging inputs in this representation, the better it will be at predicting the correct output. And the more hidden layers we add, the more freedom the network has to arrange inputs in this new representation, which is basically why deep neural nets work so well.
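A hand-crafted toy example makes this tangible. With suitably chosen hidden-layer weights (picked by hand here rather than learned, purely for illustration), the four XOR inputs, which no straight line in input space can separate, are rearranged into a representation where the two positive examples share the same location and a simple linear readout suffices.

```python
import numpy as np

# XOR inputs: no straight line through input space separates the classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# One hidden layer with hand-picked weights (a trained net would learn
# something equivalent): unit 0 fires on AND, unit 1 fires on OR.
W = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([-1.5, -0.5])

H = (X @ W + b > 0).astype(float)   # the hidden "representation"
print(H)
# [[0. 0.]   <- (0,0)
#  [0. 1.]   <- (0,1)  } the two positive examples now sit at the
#  [0. 1.]   <- (1,0)  } same location in representation space
#  [1. 1.]]  <- (1,1)

# In this representation a simple linear readout solves the task:
print(H[:, 1] - H[:, 0])            # [0. 1. 1. 0.] = XOR
```

Inputs (0,1) and (1,0) are far apart at the input but identical in the representation: exactly the rearrangement that training normally has to discover for itself.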
We can simply take the representations learned on one task, say identifying thousands of different objects in millions of images, and apply them to a medical problem, say the classification of images of potential melanoma. Effectively, the neural network already understands how to do “vision”, and we only need to adjust it slightly to identify whatever interests us. If this sounds familiar, it is the approach used by Esteva et al. in their recent work published in Nature, “Dermatologist-level classification of skin cancer with deep neural networks”. They would not have been able to gather sufficient domain-specific data (images) to train a network from scratch that performs nearly as well!
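As a sketch of this fine-tuning idea (and only a sketch: the “pre-trained” extractor below is simulated by a frozen random transformation and the data is synthetic; a real pipeline like Esteva et al.'s would load an actual vision network), the domain-specific data is used only to train the small decision layer on top of frozen features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a network pre-trained on millions of natural images
# (hypothetical: a fixed transformation that we keep frozen throughout;
# in practice you would load real pre-trained weights).
W_pre = rng.normal(size=(64, 32))
def pretrained_features(x):
    return np.tanh(x @ W_pre)            # frozen: never updated below

# Small domain-specific dataset (simulated images and labels).
X = rng.normal(size=(100, 64))
v = rng.normal(size=(32,))
y = (pretrained_features(X) @ v > 0).astype(float)

# Only the small decision layer on top is adjusted with the new data.
w, b = np.zeros(32), 0.0
losses = []
for _ in range(500):
    f = pretrained_features(X)
    p = 1 / (1 + np.exp(-(f @ w + b)))
    losses.append(-np.mean(y * np.log(p + 1e-9)
                           + (1 - y) * np.log(1 - p + 1e-9)))
    g = (p - y) / len(X)
    w -= 0.5 * (f.T @ g)
    b -= 0.5 * g.sum()

# The frozen features do the heavy lifting; the small head adapts fast.
print(losses[0], losses[-1])
```

Because only a handful of parameters are trained, a modest domain-specific dataset goes a long way, which is the whole point of reusing representations.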
We in digital health need to become experts at adapting advances in these basic skills, such as language understanding or computer vision, to our problem domain. We need to make sure that our domain-specific data, which took tremendous effort to obtain, is used sensibly – we should not spend it teaching the machine the most basic skills. As this is one of the main strengths of deep learning, I foresee a bright future for these techniques in digital health: they are an incredible tool for distilling massive datasets to their basic constituents.
In case you missed the summit, you will soon be able to watch Nils’ presentation, along with all the others, on the RE•WORK Video Hub. Register here for on-demand access or contact Chloe on firstname.lastname@example.org to receive an additional discount for multiple-event access.
The next Deep Learning in Healthcare Summit will take place in Boston on 25-26 May. Early Bird passes are on sale until 31 March. Hear from speakers such as David Plans, CEO of BioBeats, Christhian Potes, Senior Scientist at Philips Research, and Muyinatu Bell, Assistant Professor at Johns Hopkins University. Register here.
25-26 May 2017, Boston
The Deep Learning in Healthcare Summit will explore recent breakthroughs in technical advancements and healthcare applications, from algorithms that learn to recognise complex patterns within rich medical data, to analysing real-world evidence for personalised medicine, to discovering the sequence specificities of DNA-binding proteins and how these can aid genome diagnostics.
21 September 2017, London
The next generation of predictive intelligence: anticipating user & business needs to alert & advise on logical next steps to increase efficiency. The summit will showcase the opportunities of advancing trends in VAs & their impact on business & society. What impact will predictive intelligence have on business efficiency & personal organisation?
10 October 2017, London
Leading minds in healthcare and machine intelligence will come together for an evening of networking and keynote presentations around tools & techniques set to revolutionise healthcare applications, medicine & diagnostics. Join us for a three-course meal to support and showcase women in Healthcare and Machine Intelligence.