Kathy Baxter

Practical Advice for Building an Ethical AI Practice

Developing ethical AI is not a nice-to-have but a responsibility that spans entire organizations, beginning with guaranteeing data accuracy. Without ethical AI, we break customer trust, perpetuate bias, and create data errors, all of which put the brand and business performance at risk and, most importantly, cause harm. Nor can we ignore that consumers, and our own employees, expect us to be responsible with the technology solutions we create and use, and to make a positive impact on the world. Whether you lead a company creating technologies that rely on AI, or a company that chooses to adopt them, you must understand the complexities, risks, and implications of ethical AI use while democratizing data for all. Kathy will share practical recommendations for building a responsible AI practice.

As a Principal Architect of Ethical AI Practice at Salesforce, Kathy develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. Prior to Salesforce, she worked at Google, eBay, and Oracle in User Experience Research. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, "Understanding Your Users," was published in May 2015. You can read about her current research at einstein.ai/ethics.
