Mohamed Fawzy

Training Models at Facebook Scale with PyTorch

Large-scale distributed training has become essential to scaling the productivity of ML engineers. Models are growing larger and more demanding in both compute and memory, and the volume of data we train on at Facebook is enormous. In this talk, we will learn about the Distributed Training Platform that supports large-scale data and model parallelism. We will touch on distributed training support in PyTorch and how we offer a flexible training platform that increases ML engineer productivity at Facebook scale.
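For readers unfamiliar with the data-parallel style of training the talk covers, the sketch below shows a minimal use of PyTorch's public DistributedDataParallel API (the toy model, dimensions, and the CPU-friendly "gloo" backend are illustrative choices, not details from the talk). Each process holds a full model replica, and gradients are averaged across processes during the backward pass:

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Assumes launch via torchrun, which sets RANK, LOCAL_RANK,
    # and WORLD_SIZE in the environment for each worker process.
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()

    # A toy model; every process holds a full replica (data parallelism).
    model = nn.Linear(10, 1)
    ddp_model = DDP(model)

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    # One synthetic training step; DDP averages gradients across
    # all processes automatically inside backward().
    inputs = torch.randn(32, 10)
    targets = torch.randn(32, 1)
    loss = loss_fn(ddp_model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if rank == 0:
        print(f"step complete, loss = {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Run with, for example, `torchrun --nproc_per_node=2 train.py` to launch two synchronized workers on one machine.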

Mohamed Fawzy is a senior manager at Facebook. In his six years at the company, he has worked on its distributed storage systems and was part of the team that developed cold storage, Facebook's exabyte-scale archival storage system that keeps your memories safe. Mohamed started the Distributed AI Group to build large-scale distributed training infrastructure for deep learning and to support all use cases within the company, including large-scale ranking and recommendation, computer vision, machine translation, and speech.
