Private Distributed Learning in a Byzantine World
The ever-growing number of edge devices (e.g., smartphones) and the exploding volume of sensitive data they produce call for distributed machine learning techniques that are privacy-preserving. Given the increasing computing capabilities of modern edge devices, these techniques can be realized by pushing the sensitive-data-dependent tasks of machine learning to the edge devices, thus avoiding the disclosure of sensitive data.
I will present two important challenges in this new computing paradigm, along with an overview of our proposed solutions to address them. First, for many applications, such as news recommenders, data needs to be processed fast, before it becomes obsolete. Second, given the large number of uncontrolled edge devices, some of them may undergo arbitrary (Byzantine) failures and deviate from the distributed learning protocol, with potentially negative consequences such as learning divergence or even biased predictions.
*Our data is extremely valuable and vulnerable => let's push it to the "Edge"
*Machine Learning at the Edge is possible yet challenging due to (a) temporality of the data and (b) unreliability of the machines
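To give a flavor of the Byzantine-failure challenge mentioned above, here is a minimal, hypothetical sketch (not the speaker's actual method) of why naive gradient averaging breaks under a single Byzantine worker, and how a robust aggregation rule such as a coordinate-wise median limits the damage. The worker values are made up for illustration.

```python
import numpy as np

# Hypothetical setting: three honest workers send similar gradients,
# while one Byzantine worker sends an arbitrary, adversarial vector.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = np.array([1e6, -1e6])
gradients = honest + [byzantine]

# Naive averaging can be dragged arbitrarily far by one Byzantine input,
# which is what can cause learning divergence.
mean_agg = np.mean(gradients, axis=0)

# A coordinate-wise median ignores the extreme value and stays close
# to the honest gradients.
median_agg = np.median(gradients, axis=0)

print(mean_agg)    # dominated by the Byzantine gradient
print(median_agg)  # close to the honest gradients, roughly [1.0, 2.0]
```

The coordinate-wise median is just one of several robust aggregation rules studied in the Byzantine machine learning literature; the point of the sketch is only that the choice of aggregation rule determines whether a single faulty device can derail training.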
Georgios is a Machine Learning Engineer at Facebook London, focusing on natural language processing. He received his Ph.D. from EPFL in September 2020, where he worked under the supervision of Rachid Guerraoui. Before joining EPFL, he received his MEng in Electrical and Computer Engineering from NTUA. His research focuses on distributed machine learning techniques that are privacy-preserving and robust against arbitrary failures (such as adversarial attacks). He is mainly a practitioner but also studies algorithmic tools from a theoretical perspective. His work has led to publications in multiple premier conferences such as ICML and AAAI, and he has also won several awards, including the EPFL Ph.D. fellowship and the Best Paper Award at Middleware 2020. More about Georgios: https://gdamaskinos.com/