Interpretability: the Next Deep Learning Challenge

Deep learning researchers are increasingly focusing on a big problem in the field: interpretability.

While supervised neural nets trained on huge datasets can achieve impressive performance on tasks such as computer vision and speech recognition, they are often criticized because their internal representations lack interpretability. To address some of these concerns, work by research scientist Charlie Tang proposes models that add domain-specific knowledge, in the form of structured latent variables, to standard deep learning methods, yielding good results in one-shot face recognition under illumination variations.

Charlie Tang is a Research Scientist at Apple, with research interests in deep learning, computer vision, neuroscience and robotics. At the Deep Learning Summit in Boston, on 25-26 May, he will present 'Deep Learning with Structure', exploring how neural nets can leverage domain-specific knowledge in computer vision. I asked him a few questions ahead of the summit to learn more.

Can you tell us a bit about yourself and your work? Plus, give us a teaser of your session at the summit?
I'm currently a research scientist at Apple; previously, I obtained my PhD from the University of Toronto. I do research in machine learning, deep learning, and reinforcement learning, with applications to vision and robotics.

I would like to present my research on how to do deep learning with structure by leveraging domain-specific knowledge in computer vision.

What started your work in deep learning?
I was interested in how the human brain learns to form concepts from sensory stimuli. At the time, Boltzmann Machines were trained using Hebbian learning, which is inspired by how the real synapses of neurons adapt and learn.

What are the key factors that have enabled recent advancements in deep learning?
I think the availability of compute in the form of GPUs, the availability of datasets and simulators, and the confidence of researchers in the deep learning community have led to the recent advancements.

Which industries do you think deep learning will benefit the most and why?
I think every industry, from finance to e-commerce to biotech, will benefit, for two main reasons. The first is the ability of deep learning to bring more automation and efficiency to various tasks, services, and jobs. The second is the increase in the predictive capabilities of algorithms trained on large amounts of data.

What advancements in deep learning would you hope to see in the next 3 years?
I want to see deep-learning-powered applications, startups, and services that help make people's lives easier and perhaps even help save lives. These would require not only advancements in AI algorithms but also overcoming engineering challenges.

Charlie Tang will be speaking at the annual Deep Learning Summit in Boston on 25-26 May. Speakers at the summit include Spyros Matsoukas, Principal Scientist at Amazon; Carl Vondrick, PhD Student at MIT; Dilip Krishnan, Research Scientist at Google; Sanja Fidler, Assistant Professor at University of Toronto; Andrew Tulloch, Research Engineer at Facebook, and more.

Early Bird tickets are available until 31 March for the summits in Boston. Register your place here.

Discounted tickets for the Deep Learning Summit and Deep Learning in Finance Summit in Singapore end this week on Friday 3 March! Join us there to explore advances in deep learning and smart artificial intelligence from the world's leading innovators. Book your place now.
