Deep Learning Workloads and Their Impact on Hardware Requirements Throughout the AI Product Lifecycle
The goal of this talk is to review the infrastructure challenges of supporting deep learning workloads and their impact on research and AI product development cycles. In particular, the talk will touch on three main areas:
• The deep learning workflow and the hardware requirements for each stage of the product development process
• Hardware requirements associated with deep learning model development
• Current trends in deep learning and their impact on future hardware requirements
Adam is an applied research scientist specializing in machine learning, with a background in deep learning and system architecture. He is currently a Deep Learning Solution Architect at NVIDIA, where his primary responsibility is supporting a wide range of customers in delivering their deep learning solutions. In his previous role at Capgemini, he was responsible for building up the U.K. government’s machine learning capabilities. He also worked at the Jaguar Land Rover Research Centre, where he was responsible for a variety of internal and external projects, contributing specifically to the ‘Self Learning Car’ portfolio.