Learning with Little Data
The current successes of deep neural networks have largely come on classification problems, trained on datasets containing hundreds of labeled examples per category. Humans, by contrast, can learn new words or classes of visual objects from very few examples. A fundamental question is how to adapt learning systems to accommodate new classes not seen during training, given only a few examples of each. I will discuss recent advances in this area and present ongoing work by my group on various aspects of this problem.
Richard Zemel is a Professor of Computer Science at the University of Toronto and the Research Director at the new Vector Institute for Artificial Intelligence. Prior to that he was on the faculty at the University of Arizona and a Postdoctoral Fellow at the Salk Institute and at CMU. He received a B.Sc. in History & Science from Harvard and a Ph.D. in Computer Science from the University of Toronto. His awards and honors include a Young Investigator Award from the ONR and a US Presidential Scholar award. He is a Senior Fellow of the Canadian Institute for Advanced Research, an NVIDIA Pioneer of AI, and a member of the NIPS Advisory Board. His recent research interests include learning with weak labels, models of images and text, and fairness.