How Could Machines Learn as Efficiently as Animals and Humans?
Deep learning has caused revolutions in computer perception and natural language understanding, but almost all of these successes rely on supervised learning, which requires human-annotated data. In game AI, most systems use reinforcement learning, which requires too many trials to be practical in the real world. Yet animals and humans seem to learn vast amounts of knowledge about how the world works through mere observation and occasional actions. Good predictive world models are an essential component of intelligent behavior: with them, one can predict outcomes and plan courses of action. One could argue that good predictive models are the basis of "common sense", allowing us to fill in missing information: to predict the future from the past and present, the past from the present, or the state of the world from noisy percepts. I will review some principles and methods for predictive learning, and discuss how such models can learn hierarchical representations of the world and deal with uncertainty.
Yann has been the Director of AI Research at Facebook since December 2013, and is Silver Professor at New York University on a part-time basis, mainly affiliated with the NYU Center for Data Science and the Courant Institute of Mathematical Sciences. He received the EE Diploma from Ecole Supérieure d'Ingénieurs en Electrotechnique et Electronique (ESIEE Paris), and a PhD in CS from Université Pierre et Marie Curie (Paris). After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories in Holmdel, NJ. He became head of the Image Processing Research Department at AT&T Labs-Research in 1996, and joined NYU as a professor in 2003, after a brief period as a Fellow of the NEC Research Institute in Princeton. He is the co-director of the Neural Computation and Adaptive Perception Program of CIFAR, and co-lead of the Moore-Sloan Data Science Environments for NYU. He received the 2014 IEEE Neural Network Pioneer Award.