Neil Tenenholtz


Distributed TensorFlow: Scaling Model Training to Multiple GPUs

While offering state-of-the-art performance across a variety of tasks, deep learning models can be time-consuming to train, hindering the exploration of model architectures and hyperparameter configurations. This bottleneck can be greatly reduced by leveraging the near-linear speedups afforded by multi-GPU training. In this talk, we will explore the different ways in which TensorFlow supports distributing training across a collection of GPUs.
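As a flavor of what the talk covers, one common approach to multi-GPU training in TensorFlow is synchronous data parallelism via `tf.distribute.MirroredStrategy`, which replicates the model on each visible GPU and averages gradients across replicas. A minimal sketch (not taken from the talk; it falls back to a single CPU replica when no GPUs are present):

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy performs synchronous data-parallel training:
# the model's variables are mirrored on every visible GPU, and
# gradients are all-reduced across replicas each step.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Variables created inside the scope are mirrored across replicas.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Keras's fit() automatically shards each global batch across replicas.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, batch_size=64, epochs=1, verbose=0)
```

With more than one GPU, each global batch of 64 is split evenly across the replicas; scaling out typically goes hand in hand with scaling up the global batch size (and learning rate) accordingly.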

Neil Tenenholtz is the Director of Machine Learning at the MGH & BWH Center for Clinical Data Science, where his responsibilities include training novel deep learning models for clinical diagnosis, developing robust infrastructure for their deployment into the clinical setting, and creating tooling to facilitate these processes. Prior to joining the Center, Neil was a Senior Research Scientist at Fitbit, where he leveraged machine learning and modeling techniques to develop new features and algorithms residing both on-device and in the cloud. Neil received his PhD from Harvard University, where he was a recipient of the NSF Graduate Research Fellowship and the Link Foundation Fellowship in Advanced Simulation and Training.
