Combining Directed & Undirected Generative Models
In this talk I will present a new method for training deep models for unsupervised and semi-supervised learning. The models consist of two neural networks, each with multiple layers of stochastic latent units. The first network supports fast approximate inference given some observed data; the other network is trained to approximately model the observed data in terms of higher-level concepts and causes. The learning method is based on a new bound on the log-likelihood, and the trained models are automatically regularized so that each network makes the other's task as easy as possible.
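To make the two-network setup concrete, here is a minimal sketch of the general idea: an inference network q(z|x) proposes latent states for an observation, a generative network p(z)p(x|z) models the data, and samples from q are reweighted to estimate a lower bound on log p(x). This is a generic importance-sampling bound, not the speaker's specific bound, and all dimensions, weights, and names are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def bernoulli_logpmf(v, p):
    # log probability of binary vector v under factorized Bernoulli(p)
    return np.sum(v * np.log(p) + (1 - v) * np.log(1 - p))

# Toy dimensions (hypothetical, for illustration only)
D, H = 8, 4                                       # observed / latent size
x = rng.integers(0, 2, size=D).astype(float)      # one binary observation

# Inference network: q(z|x), a single stochastic Bernoulli layer
W_q = rng.normal(scale=0.1, size=(H, D))
b_q = np.zeros(H)

# Generative network: p(z) p(x|z)
W_p = rng.normal(scale=0.1, size=(D, H))
b_p = np.zeros(D)
prior_p = np.full(H, 0.5)                         # factorized prior p(z)

# Importance-sampling estimate: draw z ~ q(z|x) and weight each
# sample by p(x, z) / q(z|x).
K = 1000
q = sigmoid(W_q @ x + b_q)
log_w = np.empty(K)
for k in range(K):
    z = (rng.random(H) < q).astype(float)         # sample z ~ q(z|x)
    log_q = bernoulli_logpmf(z, q)                # log q(z|x)
    log_prior = bernoulli_logpmf(z, prior_p)      # log p(z)
    px = sigmoid(W_p @ z + b_p)
    log_lik = bernoulli_logpmf(x, px)             # log p(x|z)
    log_w[k] = log_prior + log_lik - log_q

# log-mean-exp of the weights: a stochastic lower bound that tightens
# toward log p(x) as K grows (by Jensen's inequality)
bound = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
print(f"estimated log-likelihood bound: {bound:.3f}")
```

Training would then adjust both networks' weights to raise this bound, which implicitly pushes the inference and generative models toward agreement.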
Jorg Bornschein is a Global Scholar with the Canadian Institute for Advanced Research (CIFAR) and a postdoctoral researcher in Yoshua Bengio’s machine learning lab at the University of Montreal. He currently concentrates on unsupervised and semi-supervised learning using deep architectures. Before moving to Montreal, Jorg obtained his PhD from the University of Frankfurt, working on large-scale Bayesian inference for non-linear sparse coding, with a focus on building maintainable and massively parallel implementations for HPC clusters. Jorg was also chair and one of the founders of the German hackerspace “Das Labor”, which received an award from the German federal government in 2005 for promoting STEM programs to prospective students.