Rosanne Liu

Intrinsic Dimension of Objective Landscapes in Deep Neural Networks

Many deep neural networks that solve amazing tasks employ large numbers of parameters. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate is such a measure? How many parameters are really needed? One way to answer this question is to train networks not in their native parameter space, but in a smaller, randomly oriented subspace. By slowly increasing the dimension of this subspace and noting the dimension at which solutions first appear, we define the intrinsic dimension of the objective landscape.
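The construction is easy to sketch: express the network's parameters as theta = theta_0 + P theta_d, freeze the random initialization theta_0 and the random projection P, and train only the d-dimensional vector theta_d, sweeping d upward until solutions appear. Below is a minimal, illustrative example, assuming PyTorch; the sizes n and d, the scaling of P, and the toy objective are placeholders rather than the authors' implementation.

```python
import torch

n = 10_000   # number of native parameters (illustrative)
d = 100      # dimension of the random subspace being tested (illustrative)

theta_0 = torch.randn(n)                      # frozen random initialization in native space
P = torch.randn(n, d) / d ** 0.5              # fixed random projection; columns span the subspace
theta_d = torch.zeros(d, requires_grad=True)  # the only trainable parameters

optimizer = torch.optim.SGD([theta_d], lr=0.1)

def native_params() -> torch.Tensor:
    # Map the d trainable coordinates back into the full n-dimensional parameter space.
    return theta_0 + P @ theta_d

# Toy objective standing in for a real training loss; in practice, native_params()
# would be reshaped into the model's weight tensors before the forward pass.
target = torch.randn(n)
for step in range(200):
    optimizer.zero_grad()
    loss = ((native_params() - target) ** 2).mean()
    loss.backward()
    optimizer.step()
```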

Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes. This latter finding has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold. Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning, where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and that playing Atari Pong from pixels is about as hard as classifying CIFAR-10. In addition to providing new cartography of the objective landscapes wandered by parameterized models, this method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution. A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times.
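To make the compression byproduct concrete: a subspace solution can be described by the seed of the random number generator that produced theta_0 and P, plus the d learned coordinates. When d is much smaller than the native parameter count n, storing d floats and one integer instead of n weights is what makes compression of this magnitude possible. The sketch below continues the earlier example (again assuming PyTorch; the function names compress and decompress are hypothetical) and shows one way to store and reconstruct such a solution.

```python
import torch

def compress(seed: int, theta_d: torch.Tensor) -> dict:
    # Everything needed to rebuild the full solution: the seed that generated
    # theta_0 and P, plus the d learned subspace coordinates.
    return {"seed": seed, "theta_d": theta_d.detach().clone()}

def decompress(blob: dict, n: int) -> torch.Tensor:
    # Regenerate theta_0 and P from the stored seed, then map theta_d back
    # into the native n-dimensional parameter space.
    d = blob["theta_d"].numel()
    gen = torch.Generator().manual_seed(blob["seed"])
    theta_0 = torch.randn(n, generator=gen)
    P = torch.randn(n, d, generator=gen) / d ** 0.5
    return theta_0 + P @ blob["theta_d"]
```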

Dr. Rosanne Liu is a Research Scientist and a founding member of Uber AI Labs. She received her PhD degree in Computer Science from Northwestern University. Her research interests include neural network interpretability, object recognition and detection, generative models, and adversarial attacks and defense in neural networks.
