The Man, The Machine, And The Black Box: ML Observability: A Critical Piece in Ensuring Responsible AI
In this talk, Aparna will begin by covering the challenges organizations face in checking for model fairness, such as the lack of access to protected class information needed to check for bias and the diffuse organizational responsibility for ensuring model fairness. She will then dive into the approaches organizations can take to address ML fairness head-on, with a technical overview of fairness definitions and how practical tools such as ML Observability can help build fairness checks into the ML workflow.
Aparna Dhinakaran is Chief Product Officer at Arize AI, a startup focused on ML Observability. She was previously an ML engineer at Uber, Apple, and TubeMogul (acquired by Adobe). During her time at Uber, she built a number of core ML infrastructure platforms, including Michelangelo. She holds a bachelor's degree in Electrical Engineering and Computer Science from UC Berkeley, where she published research with Berkeley's AI Research group. She is on a leave of absence from the Computer Vision PhD program at Cornell University.