Sara Hooker

What Does a Pruned Deep Neural Network "Forget"?

Between infancy and adulthood, the number of synapses in our brain first multiplies and then falls. Despite losing 50% of all synapses between the ages of two and ten, the brain continues to function. The phrase “use it or lose it” is frequently used to describe the environmental influence of the learning process on synaptic pruning; however, there is little scientific consensus on what exactly is lost.

In this talk, we explore what is lost when we prune a deep neural network. State-of-the-art pruning methods remove the majority of the weights in deep neural networks with minimal degradation to top-1 accuracy. However, the ability to prune networks with seemingly so little degradation to generalization performance is puzzling. The cost to top-1 accuracy appears minimal if it is spread uniformly across all classes, but what if the cost is concentrated in only a few classes? Are certain types of examples or classes disproportionately impacted by pruning? Our findings help provide intuition into why so much capacity is needed in the first place and have implications for other objectives we may care about, such as fairness or AI safety.
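To make the question concrete, the sketch below (not from the talk itself) shows one way to check whether pruning's accuracy cost is concentrated in a few classes: compare per-class accuracy of a dense model against a magnitude-pruned copy. It assumes PyTorch with a trained `model`, a `test_loader`, and pruning via `torch.nn.utils.prune`; all names are illustrative.

```python
# Hypothetical sketch: measure per-class accuracy before and after pruning.
import copy
import torch
import torch.nn.utils.prune as prune

def per_class_accuracy(model, loader, num_classes, device="cpu"):
    """Return a tensor of per-class accuracies on the given loader."""
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    model.eval()
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            for c in range(num_classes):
                mask = labels == c
                total[c] += mask.sum()
                correct[c] += (preds[mask] == labels[mask]).sum()
    return correct / total.clamp(min=1)

# Prune 90% of weights (by magnitude) in every Conv2d/Linear layer of a copy.
pruned = copy.deepcopy(model)
for module in pruned.modules():
    if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.9)

dense_acc = per_class_accuracy(model, test_loader, num_classes=1000)
pruned_acc = per_class_accuracy(pruned, test_loader, num_classes=1000)

# Classes whose accuracy drops far more than the average are the
# disproportionately impacted ones the abstract asks about.
delta = dense_acc - pruned_acc
print("mean drop:", delta.mean().item())
print("worst-hit classes:", delta.topk(10).indices.tolist())
```

If the per-class drops cluster tightly around the mean, the cost of pruning is spread uniformly; a long tail of large drops would indicate that a small set of classes absorbs most of the damage even when top-1 accuracy looks unchanged.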

Sara Hooker is a research scholar at Google Brain doing deep learning research on reliable explanations of model predictions for black-box models. Her main research interests gravitate towards interpretability, predictive uncertainty, model compression and security. In 2014, she founded Delta Analytics, a non-profit dedicated to bringing technical capacity to help non-profits across the world use machine learning for good. She grew up in Africa, in Mozambique, Lesotho, Swaziland, South Africa, and Kenya. Her family now lives in Monrovia, Liberia.
