Maithra Raghu

A deep representation analysis tool for learning dynamics, compression and interpretability

Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization. Here we consider a family of combinatorial games, arising from the work of Erdos, Selfridge, and Spencer, and we propose their use as environments for evaluating and comparing different approaches to reinforcement learning. These games have a number of appealing features: they are challenging for current learning approaches, but they form (i) a low-dimensional, simply parametrized environment where (ii) there is a linear, closed-form solution for optimal behavior from any state, and (iii) the difficulty of the game can be tuned by changing environment parameters in an interpretable way. We use these Erdos-Selfridge-Spencer games not only to compare different algorithms, but also to compare approaches based on supervised and reinforcement learning, to analyze the power of multi-agent approaches in improving performance, and to evaluate generalization to environments outside the training set.
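
To make feature (ii) concrete: in the attacker-defender form of these games, a piece that is l levels from the defender's losing line contributes 2^(-l) to a potential function, and the defender can force a win whenever the total potential is below 1 by always destroying the proposed set with the larger potential. The short Python sketch below illustrates that closed-form strategy; the function names, the level encoding, and the tie-breaking rule are illustrative assumptions rather than code from the talk.

    def potential(piece_levels):
        # Spencer potential: a piece l levels from the defender's losing line
        # contributes 2**(-l); the defender can force a win whenever the total is < 1.
        return sum(2.0 ** (-l) for l in piece_levels)

    def defender_move(set_a, set_b):
        # Closed-form optimal defender policy: destroy whichever proposed set
        # carries more potential (ties broken toward set A here, arbitrarily).
        return "A" if potential(set_a) >= potential(set_b) else "B"

    # Pieces at levels 3, 3 and 2 have total potential 1/8 + 1/8 + 1/4 = 0.5 < 1,
    # so the defender has a guaranteed win by playing defender_move every turn.
    print(potential([3, 3, 2]))        # 0.5
    print(defender_move([3, 2], [1]))  # 'B': the lone piece at level 1 is the bigger threat

Because the optimal action depends only on a weighted sum of piece counts at each level, the optimal policy is linear in a counts-per-level encoding of the state, which is what makes optimal play easy to characterize and to compare learned agents against.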

Maithra Raghu is a researcher at Google Brain and a PhD student at Cornell University. Her primary research interests are in better interpreting and understanding the representations learned by deep neural networks. Her previous work developed a technique (SVCCA) for comparing the latent feature maps of convolutional networks, which has also led to faster training methods. She has also worked on adapting a new testbed for deep reinforcement learning algorithms that enables studies of generalization, comparisons to supervised learning, and multi-agent performance.
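
For context, SVCCA compares two layers by first reducing each layer's activation matrix with an SVD and then applying canonical correlation analysis to the reduced subspaces, reporting the mean canonical correlation as a similarity score. The numpy sketch below illustrates that pipeline under simple assumptions (activations stored as datapoint-by-neuron matrices, a 0.99 variance threshold, toy random data); the function name and details are illustrative, not code from the talk or the SVCCA paper.

    import numpy as np

    def svcca_similarity(acts_x, acts_y, keep_var=0.99):
        # Rough SVCCA: SVD-reduce each (datapoints x neurons) activation matrix to the
        # top directions covering `keep_var` of the variance, then return the mean
        # canonical correlation between the two reduced subspaces.
        def svd_reduce(acts):
            acts = acts - acts.mean(axis=0, keepdims=True)
            u, s, _ = np.linalg.svd(acts, full_matrices=False)
            k = np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), keep_var) + 1
            return u[:, :k] * s[:k]                # scaled projection onto top directions

        x, y = svd_reduce(acts_x), svd_reduce(acts_y)
        qx, _ = np.linalg.qr(x)                    # orthonormal bases of the two subspaces
        qy, _ = np.linalg.qr(y)
        corrs = np.linalg.svd(qx.T @ qy, compute_uv=False)  # canonical correlations
        return float(corrs.mean())

    # Toy usage: a layer and a noisy copy of half its neurons should score near 1.
    rng = np.random.default_rng(0)
    layer1 = rng.normal(size=(500, 64))
    layer2 = layer1[:, :32] + 0.1 * rng.normal(size=(500, 32))
    print(svcca_similarity(layer1, layer2))

The score is close to 1 when two layers span nearly the same subspace and falls toward 0 as their representations diverge, which is what makes it useful for tracking learning dynamics and comparing networks layer by layer.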
