Rosanne Liu

Generative Models And The Imperative Incompetence Of Deconvolution

Generative models -- models that generate entirely new data that "look like" training examples -- are one of the most promising approaches to understanding the world in an unsupervised, or weakly supervised, manner. The popularity of Generative Adversarial Networks (GANs) has motivated a body of work that successfully generates realistic natural scenes, human faces, and artistic images. However, we recently found that GANs struggle to represent disjoint, discrete object sets and, worse, to create structure among them. This can be attributed to a crucial but often-overlooked incompetence of deconvolution, which is widely adopted in GANs. Towards understanding how generative processes work, we take a deep look into the deconvolution process and the curse of discreteness.
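For readers unfamiliar with the deconvolution (transposed-convolution) layers the abstract refers to, below is a minimal sketch of a DCGAN-style generator in PyTorch. It is purely illustrative -- the layer sizes and the `Generator` class are assumptions for this example, not code from the talk -- but it shows how stacked transposed convolutions upsample a latent vector into an image, which is the generative pathway whose limitations the talk examines.

```python
import torch
import torch.nn as nn

# Minimal DCGAN-style generator built from transposed convolutions
# ("deconvolutions"). Layer sizes are illustrative, not from the talk.
class Generator(nn.Module):
    def __init__(self, z_dim=100, channels=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # 1x1 latent -> 4x4 feature map
            nn.ConvTranspose2d(z_dim, feat * 4, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(inplace=True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(feat * 4, feat * 2, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(inplace=True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(feat * 2, feat, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(feat),
            nn.ReLU(inplace=True),
            # 16x16 -> 32x32 output image
            nn.ConvTranspose2d(feat, channels, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        # Reshape the latent vector to a 1x1 spatial map, then upsample
        return self.net(z.view(z.size(0), z.size(1), 1, 1))

# Example: generate a batch of 8 images from random noise
g = Generator()
images = g(torch.randn(8, 100))  # shape: (8, 3, 32, 32)
```

Because every output pixel is produced by overlapping, spatially shared kernels, such a generator has no direct mechanism for placing a fixed number of discrete, well-separated objects -- the difficulty the talk explores.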

Dr. Rosanne Liu is a Research Scientist and a founding member of Uber AI Labs. She received her PhD degree in Computer Science from Northwestern University. Her research interests include neural network interpretability, object recognition and detection, generative models, and adversarial attacks and defense in neural networks.
