Machine learning is increasingly being used to make decisions that can severely affect people's lives, e.g. in policing, education, hiring, lending, and criminal risk assessment. However, the data used to train such decision systems often contains bias that exists in our society. This bias can be absorbed or even amplified by the systems, leading to decisions that are unfair with respect to sensitive attributes (e.g. race and gender). In this talk, I will present the different ways in which the machine learning community is addressing the issue of fairness, and introduce a method for dealing with the complex scenario in which the sensitive attribute affects the decision through both fair and unfair pathways.
Silvia is a senior research scientist at DeepMind, where she works on deep models of high-dimensional time series and algorithmic fairness, and also contributes to DeepMind's diversity and inclusion initiative. Silvia received a Diploma di Laurea in Mathematics from the University of Bologna and a PhD in Statistical Machine Learning from the École Polytechnique Fédérale de Lausanne. Before joining DeepMind, she worked in several machine learning and statistics research groups, including the Empirical Inference Group at the Max Planck Institute for Biological Cybernetics, the Machine Learning and Perception Group at Microsoft Research Cambridge, and the Statistical Laboratory at the University of Cambridge. Silvia's research interests center on Bayesian and causal reasoning, approximate inference, time-series models, and deep learning.