Why Unsupervised (Deep) Learning is Important
Deep learning is mostly associated with supervised learning, that is, learning a mapping from features to labels. Acquiring labels is often expensive, however, and restricting ourselves to labeled data leaves large amounts of cheaply available unlabeled data unused. In this talk I will show how to train (deep) fully probabilistic "auto-encoder" models that use both labeled and unlabeled data, and discuss how these can be extended to incorporate certain invariances. Finally, I will discuss why these models matter for applications of deep learning in domains such as healthcare, where the number of data cases is often much smaller than the number of measured features.
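The probabilistic auto-encoders referred to here are variational auto-encoders, of which Welling is a co-author. As a minimal sketch of the core computation (the linear encoder/decoder, weight names, and toy dimensions below are illustrative assumptions, not the talk's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Linear encoder: map input to mean and log-variance of q(z|x).
    # (A real model would use a deep network here.)
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps with eps ~ N(0, I): sampling stays
    # differentiable with respect to the encoder parameters.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    # Linear decoder: reconstruct x from the latent code z.
    return z @ W_dec

def negative_elbo(x, x_hat, mu, logvar):
    # Reconstruction term (Gaussian likelihood with unit variance) ...
    recon = 0.5 * np.sum((x - x_hat) ** 2)
    # ... plus the closed-form KL divergence KL(q(z|x) || N(0, I)).
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

# Toy data: 4 cases, 8 features, a 2-dimensional latent space.
x = rng.standard_normal((4, 8))
W_mu = rng.standard_normal((8, 2)) * 0.1
W_logvar = rng.standard_normal((8, 2)) * 0.1
W_dec = rng.standard_normal((2, 8)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
x_hat = decode(z, W_dec)
loss = negative_elbo(x, x_hat, mu, logvar)  # scalar objective to minimize
```

Because the objective is a likelihood bound rather than a label-dependent loss, unlabeled data contributes directly; semi-supervised variants add a classifier term for the labeled cases.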
Max Welling is a Professor of Computer Science at the University of Amsterdam and the University of California, Irvine. In the past he held postdoctoral positions at Caltech ('98-'00), UCL ('00-'01) and the University of Toronto ('01-'03). He received his PhD in '98 under the supervision of Nobel laureate Prof. G. 't Hooft. Max Welling serves as associate editor-in-chief of IEEE TPAMI, one of the highest-impact journals in AI (impact factor 4.8). He serves on the editorial boards of JMLR and JML and was an associate editor for Neurocomputing, JCGS and TPAMI. In 2009 he was conference chair for AISTATS, in 2013 he was program chair for NIPS (the largest and most prestigious conference in machine learning), in 2014 he was general chair for NIPS, and in 2016 he will be a program chair at ECCV. He has received multiple grants from NSF, NIH, ONR and NWO, among which an NSF CAREER grant in 2005. He is the recipient of the ECCV Koenderink Prize in 2010 and the best paper award at ICML 2012. Welling is currently the director of the master's program in artificial intelligence at the UvA and a member of the advisory board of the newly opened Amsterdam Data Science Center. He is also a member of the Neural Computation and Adaptive Perception program at the Canadian Institute for Advanced Research. Welling's research focuses on large-scale statistical learning. He has made contributions in Bayesian learning, approximate inference in graphical models, deep learning and visual object recognition, and has over 150 academic publications.