Balaji Lakshminarayanan

Do Deep Generative Models Know What They Don't Know?

A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. Generative models are widely viewed as a solution for detecting out-of-distribution (OOD) inputs and distributional skew, as they model the density of the input features p(x). We challenge this assumption by presenting several counter-examples. We find that deep generative models, such as flow-based models, VAEs and PixelCNN, which are trained on one dataset (e.g. CIFAR-10), can assign higher likelihood to OOD inputs from another dataset (e.g. SVHN). We further investigate some of these failure modes in detail, which helps us better understand this surprising phenomenon and potentially fix it.
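Below is a minimal sketch of the kind of comparison the abstract describes: fit a density model on one dataset, then compare the likelihood (reported in bits per dimension, as is common for image models) it assigns to held-out in-distribution data versus data from another dataset. The model here is a toy per-pixel Gaussian stand-in for a trained flow/VAE/PixelCNN, and the synthetic "CIFAR-10" and "SVHN" arrays are placeholders; none of this is the talk's actual code.

```python
import numpy as np

def fit_gaussian_density(train_x):
    """Fit an independent Gaussian per pixel (a toy stand-in for a trained p(x))."""
    mu = train_x.mean(axis=0)
    sigma = train_x.std(axis=0) + 1e-5
    return mu, sigma

def log_prob(params, x):
    """Log-density of each row of x under the fitted per-pixel Gaussian (in nats)."""
    mu, sigma = params
    return np.sum(
        -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi),
        axis=1,
    )

def bits_per_dim(log_px, num_dims):
    """Convert log-likelihood in nats to bits per dimension (lower = higher likelihood)."""
    return -log_px / (num_dims * np.log(2.0))

# Synthetic arrays standing in for flattened CIFAR-10 (train/test) and SVHN (OOD) images.
rng = np.random.default_rng(0)
d = 32 * 32 * 3
cifar_train = rng.normal(0.5, 0.25, size=(1000, d))
cifar_test = rng.normal(0.5, 0.25, size=(200, d))
svhn_ood = rng.normal(0.4, 0.15, size=(200, d))  # drawn from a different distribution

params = fit_gaussian_density(cifar_train)
print("in-dist bits/dim:", bits_per_dim(log_prob(params, cifar_test), d).mean())
print("OOD bits/dim:    ", bits_per_dim(log_prob(params, svhn_ood), d).mean())
# The surprising finding reported in the talk: with real deep generative models,
# the OOD set (e.g. SVHN) can receive *higher* likelihood (lower bits/dim) than
# the in-distribution test set.
```

With real models the same protocol applies: swap the toy density for the trained model's log-likelihood and the synthetic arrays for the actual image tensors.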

Balaji Lakshminarayanan is a senior research scientist at Google DeepMind. He is interested in scalable probabilistic machine learning and its applications. Most recently, his research has focused on probabilistic deep learning, specifically uncertainty estimation and deep generative models. He received his PhD from the Gatsby Unit, University College London, where he worked with Yee Whye Teh.
