Deep Learning at Scale
Deep learning has had a major impact over the last three years. Interactions with machines that were once unreliable, such as speech recognition, natural language understanding, and image processing, have been made robust by deep learning, which also holds promise for finding usable structure in large datasets. The training process, however, is lengthy and has proven difficult to scale due to the constraints of existing compute architectures, and there is a need for standardized tools for building and scaling deep learning solutions. I will outline some of these challenges and show how fundamental changes to the organization of computation and communication can lead to large advances in capabilities.
Urs has over nine years of research experience in machine learning, spanning areas from computer vision and image processing to large-scale neural data analysis. His data science experience ranges from working with national laboratories on applying deep learning to understand climate change to helping customers solve challenging computer vision problems in medical imaging. Urs works on building the fastest implementations of convolutional and recurrent networks. During his postdoc at UC Berkeley, he used unsupervised machine learning algorithms such as Restricted Boltzmann Machines to understand the visual system.