Laura Douglas


Bias: Statistical and Significant

Cognitive bias exists in people; statistical bias exists in machine learning algorithms; both exist in healthcare. Since it is easier to remove biases from algorithms than from people, AI has the potential to create a future where important decisions, such as hiring, diagnoses and legal judgements, are made more fairly. However, if we don't actively try to model and remove these biases, we end up simply propagating them into the future. That is the path we are currently on: the current gold-standard word vectors are inherently sexist, and there are courts in the US using an algorithm that is inherently racist. Of course, bias isn't just about race and gender; those are simply some of the easiest places to notice injustice. In healthcare, biases exist for all sorts of reasons, both in the data and in the algorithms. I will talk about ways we can go about understanding and removing these biases through machine learning.
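To make the word-vector claim concrete, here is a minimal sketch of how gender bias surfaces in pretrained embeddings, in the spirit of Bolukbasi et al. (2016). It assumes the word2vec Google News vectors available through gensim's downloader; the specific words queried are illustrative choices, not taken from the talk itself.

```python
# Minimal sketch: probing gender bias in pretrained word embeddings.
# Assumes gensim is installed; the Google News vectors are a large
# download on first use.
import gensim.downloader as api

# Load 300-dimensional word2vec vectors trained on Google News.
vectors = api.load("word2vec-google-news-300")

# Classic analogy query: "man is to doctor as woman is to ...?"
# Biased embeddings tend to complete this with stereotyped terms.
print(vectors.most_similar(positive=["woman", "doctor"],
                           negative=["man"], topn=5))

# Compare an occupation's similarity to gendered pronouns directly;
# a large gap is one simple indicator of encoded bias.
for occupation in ["nurse", "engineer", "receptionist"]:
    gap = (vectors.similarity(occupation, "she")
           - vectors.similarity(occupation, "he"))
    print(f"{occupation}: she-vs-he similarity gap = {gap:+.3f}")
```

Debiasing methods such as the one proposed by Bolukbasi et al. work by identifying a gender direction in the embedding space and projecting it out of gender-neutral words, though how completely this removes bias is still debated.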

Laura is a Research Scientist at Babylon Health. Her current research uses probabilistic inference and Bayesian networks to model the diagnostic process of a GP. Previously, she researched ways to predict patient outcomes for a health-tech start-up in Singapore and built state-of-the-art Natural Language Processing tools for a London-based start-up. She holds an MSc in Machine Learning from UCL and a Master's (Part III) in Maths from the University of Cambridge.
