Bryan Catanzaro

More than a GPU: Platforms for Deep Learning

Training and deploying state-of-the-art deep neural networks is very computationally intensive, with tens of exaflops of computation needed to train a single model on a large dataset. The high-density compute afforded by modern GPUs has been key to many of the advances in AI over the past few years. However, researchers need more than a fast processor: they also need optimized libraries and tools to program efficiently so that they can experiment with new ideas. They also need scalable systems that combine many of these processors to train a single model. In this talk, I'll discuss platforms for deep learning and how NVIDIA is working to build the deep learning platforms of the future.

Bryan Catanzaro is VP of Applied Deep Learning Research at NVIDIA, where he leads a team using deep learning to solve problems in fields ranging from video games to chip design. Prior to his current role at NVIDIA, he worked at Baidu to create next-generation systems for training and deploying end-to-end deep learning-based speech recognition. Before that, he was a researcher at NVIDIA, where he wrote the research prototype for and drove the creation of cuDNN, the low-level library now used by most AI researchers and deep learning frameworks to train neural networks. Bryan earned his PhD from Berkeley, where he built the Copperhead language and compiler, which allows Python programmers to use nested data-parallel abstractions efficiently. He earned his MS and BS from Brigham Young University, where he worked on computer arithmetic for FPGAs.
