Low-Precision Learning: Heterogeneity and Adversarial Robustness
This talk will overview recent results from my lab on deep learning with low-precision activations, weights, and biases. I will highlight two different ways of learning networks with heterogeneous precision, that is, networks in which different layers use different amounts of precision, optimized for the task. Low-precision nets are already known for their resource efficiency, speed, and regularization effects. I will present results showing that binary neural networks are also robust to certain forms of adversarial attack.
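As a minimal sketch of the kind of low-precision network the abstract refers to, the snippet below binarizes a weight tensor to {-1, +1} with a per-tensor scaling factor, in the spirit of binary neural networks such as BinaryConnect and XNOR-Net. The function name, scaling choice, and details are illustrative assumptions, not the speaker's specific method.

```python
import numpy as np

def binarize(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Binarize a weight tensor to {-1, +1} with a scalar scale alpha.

    Illustrative only: alpha is chosen as the mean absolute value of the
    weights, so that alpha * binary approximates the full-precision tensor.
    """
    alpha = float(np.mean(np.abs(weights)))  # per-tensor scaling factor
    binary = np.where(weights >= 0, 1.0, -1.0)
    return binary, alpha

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
wb, alpha = binarize(w)
# A binarized layer then approximates w @ x by alpha * (wb @ x),
# replacing floating-point multiplies with sign flips and one scale.
```

Heterogeneous precision, as described in the abstract, would apply such a quantizer with a different bit width per layer rather than binarizing everything uniformly.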
Graham Taylor is a Canada Research Chair and Associate Professor at the University of Guelph, where he leads the Machine Learning Research Group. He is the academic director of NextAI, a non-profit initiative to strengthen Canada's AI venture creation, and a member of the Vector Institute for Artificial Intelligence. In 2016 he was named a CIFAR Azrieli Global Scholar in Learning in Machines and Brains. In 2018, he was named one of Canada's Top 40 Under 40. Born in London, Ontario, he received his PhD in Computer Science from the University of Toronto in 2009, where he was advised by Geoffrey Hinton and Sam Roweis. He spent two years as a postdoc at the Courant Institute of Mathematical Sciences, New York University, working with Chris Bregler, Rob Fergus, and Yann LeCun. Through his research, Graham aims to discover new algorithms and architectures for deep learning. His work also intersects high-performance computing, investigating better ways to leverage hardware accelerators to cope with the challenges of large-scale machine learning. He is currently Visiting Faculty at Google Brain, Montreal.