Nazneen Fatema Rajani

Tailoring Word Embeddings for Gender Bias Mitigation

Word embeddings derived from human-generated corpora inherit strong gender bias, which downstream models can further amplify. Commonly adopted debiasing approaches apply post-processing procedures that project pre-trained word embeddings into a subspace orthogonal to an inferred gender subspace. We discover that semantic-agnostic corpus regularities captured by the word embeddings, such as word frequency, negatively impact the performance of these algorithms. We propose a simple but effective technique that purifies the word embeddings of such corpus regularities before inferring and removing the gender subspace.
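The two steps the abstract alludes to can be sketched concretely. The snippet below is a minimal illustration written for this page, not the authors' released implementation: it pairs an "all-but-the-top"-style purification (dropping dominant principal components, which are known to correlate with semantics-agnostic statistics such as word frequency) with the hard-debias projection of Bolukbasi et al. (2016). The function names, the seed word pairs, and the choice of two dropped components are illustrative assumptions.

```python
import numpy as np

def purify(E, n_components=2):
    """Remove the top principal components of the embedding matrix,
    which tend to encode corpus regularities such as word frequency."""
    X = E - E.mean(axis=0)                 # center the embeddings
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    D = Vt[:n_components]                  # (k, dim) dominant directions
    return X - X @ D.T @ D                 # project them out

def gender_direction(E, vocab, pairs):
    """Infer a 1-D gender subspace as the top principal direction of
    centered differences between definitionally gendered word pairs."""
    diffs = []
    for w_f, w_m in pairs:
        a, b = E[vocab[w_f]], E[vocab[w_m]]
        center = (a + b) / 2
        diffs.extend([a - center, b - center])
    _, _, Vt = np.linalg.svd(np.array(diffs), full_matrices=False)
    return Vt[0]                           # unit-norm gender direction

def hard_debias(E, g):
    """Project every embedding onto the subspace orthogonal to g."""
    g = g / np.linalg.norm(g)
    return E - np.outer(E @ g, g)

# Toy usage: a random embedding matrix and a tiny seed-pair list.
rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(["she", "he", "woman", "man", "doctor"])}
E = rng.normal(size=(len(vocab), 50))

E_pure = purify(E)                                  # step 1: purification
g = gender_direction(E_pure, vocab, [("she", "he"), ("woman", "man")])
E_debiased = hard_debias(E_pure, g)                 # step 2: remove gender subspace
assert np.allclose(E_debiased @ g, 0, atol=1e-8)    # no residual gender component
```

The ordering is the point of the proposed technique: purifying first keeps frequency-driven directions from contaminating the inferred gender subspace, so the subsequent projection removes gender information more cleanly.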

Nazneen is a senior research scientist at Salesforce working on commonsense reasoning, interpretability, and robustness. She received her PhD in Computer Science from UT Austin in 2018. Her work (10+ papers), including the work on debiasing word embeddings, has been published in top-tier conferences such as ACL, EMNLP, NAACL, and IJCAI. Nazneen was a finalist for the VentureBeat Transform 2020 Women in AI Research award. She has given invited talks at universities and conferences including Yale, UVA, TMLS, and Dreamforce.
