Democratising Deep Learning: Q&A With Prof. Neil Lawrence

The widespread success of deep learning in a variety of domains is being hailed as a new revolution in artificial intelligence. It has taken 20 years to go from defeating Kasparov at Chess to Lee Sedol at Go, but what have the real advances been across this time?

The fundamental change has been in data availability and compute availability; the underlying technology has not changed much in the last 20 years. So what does that mean for areas like medicine and health? Significant challenges remain: improving the data efficiency of these algorithms, and striking the balance between individual privacy and the predictive power of the models. At the Deep Learning Summit in London, Professor Neil Lawrence will review these challenges and propose how they can be solved moving forward.

Neil is Professor of Machine Learning and Computational Biology at the University of Sheffield, where his main research interest is machine learning through probabilistic models, with a particular focus on applications in personalized health and the developing world. I asked him a few questions to learn more about his work, as well as what we can expect from the deep learning field in the future.

What motivated you to start your work in deep learning?
Geoff Hinton gave a talk at the Newton Institute in 1997 on a model called the "Hierarchical Community of Experts"; his explanation of the need for hierarchies was a significant influence on my thinking. Yann LeCun had spoken at the same conference on convolutional neural networks for digits, which was another really impressive demonstration, but the ideas that were closest to my own work (and remain close today) were those that Hinton expressed in his talk. I also met my wife at that meeting, so it turned out to be quite important for me!

What are the key factors that have enabled recent advancements in deep learning?
The recent advances in deep learning have all been about supervised learning on very large data sets which are 'weakly structured', such as images or sequence data. The methodologies themselves haven't advanced much from what Yann spoke about at the Newton Institute in 1997, but what has changed is the massive availability of data and practical compute (through GPUs). Back in the late 1980s Tony Robinson had been experimenting with recurrent neural network speech recognisers trained on Transputers, but the company making Transputers collapsed, closing that line of research. Modern GPUs have a lot of similarities to Transputers, but the companies that make them are far from collapsing!

What are the main types of problems now being addressed in the field?
I call these problems 'cognitive problems': ones that humans are already naturally good at. Speech, vision, language. That's because these are the only domains where we can easily access the vast amounts of data needed to train these systems. Even then, an enormous amount of human labor is required to deliver results.

What are the practical applications of your work, and what sectors are most likely to be affected?
My own work is much more focused on unsupervised learning, multiview learning, and making methods data efficient. Our ambition is to affect all sectors where data is an important component. But in the meantime we are focusing on those sectors where it's clear that the current generation of neural-network-based supervised learning techniques don't cut the mustard. Our particular focus is on personalized health and data in the developing world.

What developments can we expect to see in deep learning in the next 5 years?
I hope that the field becomes more diverse. We've seen a rapid influx of people who have learned only one thing, and that thing is something we understood a great deal about 20 years ago. This is in severe danger of eclipsing the important aspects of machine learning that the community learnt so much about across the last two decades. So I hope we see a settling of this enormous expansion, and a return to a more thoughtful approach to research.

What advancements excite you most in the field?
One of the things I admire most is the imagination of researchers in deploying the new generation of techniques. We see this particularly in the vision community and the natural language processing community. It is enabling a new generation of devices that don't require the advances in machine learning I've referred to above, and opening new challenges in dialogue systems and human-computer interaction. By pressing forward in these directions, and discovering the limitations of these methods, I think we are likely to learn a lot more about ourselves and our humanity. Call me soppy, but I think that's pretty cool.

Neil Lawrence will be speaking at the Deep Learning Summit in London on 22-23 September. This year's event will also feature breakout sessions on Chatbots and FinTech. Previous events have sold out, so book early to avoid disappointment. For more information and to register, please visit the website here.

We are holding events focused on AI, Deep Learning and Machine Intelligence in London, Amsterdam, Singapore, Hong Kong, New York, San Francisco and Boston - see all upcoming summits here.

