5 Lessons for Improving Training Performance
Learn best practices for performance analysis and maintenance of a deep learning system. As GPU technology continues to advance, the demand for faster data delivery grows with it. In deep learning, input pipelines are responsible for a complex chain of actions that ultimately feed data into GPU memory, including reading from storage and pre-processing the data. These pipelines bring together multiple hardware systems (networking, CPUs, and storage) along with sophisticated software to drive data movement and transformation.
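The core idea behind such pipelines is overlapping I/O and pre-processing with GPU compute rather than running them serially. A minimal, purely illustrative sketch of that pattern (all names here are hypothetical; a real TensorFlow pipeline would use `tf.data` with `map` and `prefetch` instead):

```python
import queue
import threading
import time

def read_and_preprocess(i):
    """Stand-in for reading a record from storage and transforming it."""
    time.sleep(0.001)  # simulated storage + CPU pre-processing latency
    return i * 2       # the "decoded" sample

def prefetch(num_samples, buffer_size=8):
    """Yield samples produced by a background thread.

    The bounded queue acts as the prefetch buffer: the producer reads
    ahead while the consumer (standing in for the GPU) is busy, so
    storage/CPU work and compute overlap.
    """
    buf = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks end of the stream

    def producer():
        for i in range(num_samples):
            buf.put(read_and_preprocess(i))
        buf.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = buf.get()
        if item is sentinel:
            break
        yield item

if __name__ == "__main__":
    print(list(prefetch(4)))  # [0, 2, 4, 6]
```

The `buffer_size` knob mirrors the prefetch depth in real frameworks: too small and the consumer stalls waiting on storage, too large and memory is wasted holding samples the GPU is not ready for.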
We'll use results from TensorFlow benchmarks run on V100-based DGX-1 systems to highlight how individual components of the pipeline affect overall performance, and we'll share key ways to keep an end-to-end system performing well over time.
Emily Watkins is a Solution Architect at Pure Storage. She helps companies streamline their data pipelines so they can scale as their AI projects grow from infancy to delivering significant business outcomes. Emily's background is in research, real-time analytics tools, and artificial intelligence workflow optimization.