Lucy Vasserman

AI Principles and Identifying Toxicity in Online Conversation

Jigsaw's Perspective API serves machine learning models that identify toxicity in text. In this talk, I'll share how the team behind Perspective uses Google's AI Principles to guide our work and our collaborations with developers using Perspective.
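For readers who want a sense of how developers typically interact with Perspective, here is a minimal sketch of requesting a toxicity score over its REST interface. It assumes the publicly documented commentanalyzer endpoint and request shape; the API key and the helper function name are placeholders for illustration, not part of the talk.

    import requests

    # Placeholder credentials: a real key would come from a Google Cloud project
    # with the Perspective API enabled.
    API_KEY = "YOUR_API_KEY"
    URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

    def toxicity_score(text: str) -> float:
        """Ask Perspective for a TOXICITY probability score for one comment."""
        body = {
            "comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}},
        }
        resp = requests.post(URL, params={"key": API_KEY}, json=body, timeout=10)
        resp.raise_for_status()
        scores = resp.json()["attributeScores"]
        return scores["TOXICITY"]["summaryScore"]["value"]

    print(toxicity_score("You are a wonderful person."))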

Takeaways:
- Building principled and ethical AI is a continuous, ongoing effort, not a one-time task.
- There are concrete strategies to mitigate bias in ML models.
- Machine learning, combined with thoughtful human moderation and participation, can make online conversations better.

Lucy Vasserman leads the Conversation AI team within Google's Jigsaw, which studies how computers can learn to understand the nuances and context of abusive language at scale. Lucy works on machine learning research to improve Conversation AI's core models, which power the Perspective API, with a focus on combating algorithmic bias. She also collaborates with internal and external users to ensure the Conversation AI models meet their needs. Prior to joining Jigsaw, Lucy worked on machine learning research and engineering for several other Google teams, including Speech Recognition and Google Shopping.

Lucy is passionate about computer science education. She spent the fall of 2015 teaching computer science full-time at Xavier University in New Orleans through the Google in Residence program, a partnership between Google and Historically Black Colleges and Universities. She also serves on the Board of Advocates for Citizen Schools, a national non-profit that runs career-focused after-school programs for low-income middle schoolers.

Lucy received her B.A. in Computer Science from Pomona College in 2010. In her free time, Lucy enjoys scuba diving, dance, and, as a native New Yorker, all things New York.
