Deep Learning for AI: Takeaways from Yoshua Bengio

At the Deep Learning Summit, Boston 2016, Yoshua Bengio took us through the evolution of Artificial Intelligence (AI): how it started with supervised learning, progressed to Machine Learning with speech recognition and computer vision, and is now approaching human-level processing. Co-author of Deep Learning (Adaptive Computation and Machine Learning Series) and Full Professor at Université de Montréal, Yoshua began researching artificial neural networks in the mid-80s. This became the gateway to his career in AI and Deep Learning (DL).

At the heart of artificial intelligence is data.

AI needs knowledge. Early models of AI could only use formalised knowledge – rules and facts programmed into machines so they could "make decisions". The problem? Much human knowledge is implicit. Take intuition, for example: how would you explain an intuitive choice or an innate human response? The impossibility of fully translating human knowledge into explicit rules set these early models up for failure in general AI. This is how machine learning was born: rather than being given rules, machines were given predictive models that let them learn on their own.

Empowered by increasing computational power and improved algorithms, Machine Learning gave way to DL. Inspired by the human brain, DL uses multi-level representation, or abstraction, in artificial neural networks. Each layer of "neurons" extracts a different element from a data set, interprets it, and feeds it into a deeper layer. Such networks function on the basis that each element of a data set has particular mathematical properties. Neural networks can also operate bidirectionally: bidirectional recurrent neural networks, for example, have enabled contextual language generation, wherein machines can generate grammatical sentences to describe an image.
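The layered-representation idea can be sketched as a minimal forward pass, where each layer transforms its input and hands the result to the next, deeper layer. This is an illustrative toy only: the layer sizes, random weights, and ReLU activation are arbitrary choices, not anything specific to Bengio's talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common nonlinearity; lets each layer compute more than a linear map.
    return np.maximum(0.0, x)

# Hypothetical layer sizes: a 4-dimensional input, two hidden layers,
# and a 2-dimensional output representation.
layer_sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(weights, biases):
        # Each layer of "neurons" extracts a new representation of its input
        # and feeds it into the next, deeper layer.
        h = relu(h @ W + b)
    return h

x = rng.standard_normal(4)  # one toy input vector
out = forward(x)            # the deepest, most abstract representation
```

In a trained network the weights would be learned from data rather than drawn at random; the point here is only the repeated transform-and-feed-deeper structure.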

The applications of DL are endless. We are looking forward to practical applications in healthcare, robotics and human-computer interaction – just to name a few. Moving forward, the field of DL still faces the challenges of achieving fully unsupervised learning, producing better models for semantics and reaching higher levels of abstraction. Despite this, Yoshua is confident that this is "just the beginning of a complete change".

Yoshua's keynote, Deep Learning Frameworks, took place at the RE•WORK Deep Learning Summit, Boston 2016. More video presentations and interviews can be found on the RE•WORK video hub. Don't miss your chance to meet Yoshua at the Deep Learning Summit, Montreal 2017.

Hear more from Yoshua in his exclusive interview with RE•WORK: 

Yoshua Bengio (PhD in CS, McGill University, 1991), post-docs at M.I.T. (Michael Jordan) and AT&T Bell Labs (Yann LeCun), CS professor at Université de Montréal, Canada Research Chair in Statistical Learning Algorithms, NSERC Chair, CIFAR Fellow, member of NIPS foundation board and former program/general chair, co-created ICLR conference, authored two books and over 300 publications, the most cited being in the areas of Deep Learning, recurrent networks, probabilistic learning, natural language and manifold learning. He is among the most cited Canadian computer scientists and is or has been associate editor of the top journals in machine learning and neural networks. 

View our full events list for summits and dinners focused on AI, DL and Machine Intelligence taking place in San Francisco, London, Amsterdam, Boston, New York, Singapore, Hong Kong, and Canada!

