Generative Models And The Imperative Incompetence Of Deconvolution
Generative models -- models that generate entirely new data that "look like" training examples -- are among the most promising approaches to understanding the world in an unsupervised, or weakly supervised, manner. The popularity of Generative Adversarial Networks (GANs) has motivated a body of work that successfully generates realistic natural scenes, human faces, and artistic images. However, we recently found that they struggle to represent disjoint, discrete object sets, and, worse, to create structure among them. This can be attributed to a crucial but often-overlooked impotence of deconvolution, which is widely adopted in GANs. Toward understanding how generative processes work, we take a deep look into the deconvolution process and the curse of discreteness.
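To make the deconvolution issue concrete, here is a minimal, illustrative sketch (not code from the talk): a naive 1-D transposed convolution, the operation commonly called "deconvolution" in GAN generators. Even with a uniform input and a uniform kernel, the output coverage is uneven whenever the kernel size is not a multiple of the stride, which is one well-known source of structured artifacts in deconvolution-based generators. The function name and parameters are hypothetical, chosen for illustration.

```python
import numpy as np

def conv_transpose_1d(x, k, stride=2):
    # Naive 1-D transposed convolution ("deconvolution"):
    # each input element scatters a scaled copy of the kernel
    # into the output at stride-spaced positions.
    out = np.zeros(stride * (len(x) - 1) + len(k))
    for i, v in enumerate(x):
        out[i * stride : i * stride + len(k)] += v * k
    return out

# Uniform input, uniform kernel: the output is still uneven,
# because kernel size 3 overlaps itself under stride 2 --
# every other output position receives two contributions.
x = np.ones(5)
k = np.ones(3)
print(conv_transpose_1d(x, k))
# -> [1. 1. 2. 1. 2. 1. 2. 1. 2. 1. 1.]
```

The alternating 1/2 pattern in the output is the one-dimensional analogue of the checkerboard artifacts often seen in images produced by deconvolutional generators.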
Dr. Rosanne Liu is a Research Scientist and a founding member of Uber AI Labs. She received her PhD in Computer Science from Northwestern University. Her research interests include neural network interpretability, object recognition and detection, generative models, and adversarial attacks and defenses in neural networks.