• THIS SCHEDULE TAKES PLACE ON DAY 2

  • 08:00

    REGISTRATION OPENS

  • 09:00
    Fiona McEvoy

    WELCOME & OPENING REMARKS

    Fiona McEvoy - Tech Ethics Researcher and Founder - YouTheData.com

    Fiona J McEvoy is an AI ethics writer, researcher, speaker, and thought leader based in San Francisco, CA. She was recently honored in the inaugural Brilliant Women in AI Ethics™ Hall of Fame, established at the end of 2020 to recognize “brilliant women who have made exceptional contributions to the space of AI Ethics and diversity.”

    Fiona is the founder of YouTheData.com, a platform for the discussion of the societal impact of tech and AI, and has had numerous articles published in media outlets including Slate, VentureBeat, The Next Web, Offscreen, and All Turtles. Fiona is also regularly asked to present her ideas to AI conferences across the US and internationally, and in recent years has spoken at events in the UK, Canada, Spain, Portugal, and Mexico. Her work has been referenced in books, including Francine Banner’s Crowdsourcing the Law, David Vance’s Business Essentials, and Jeff Mapua’s Virtual Reality and You.

    Fiona holds a graduate degree in Philosophy, with a special focus on ethics and technology.

  • AI ETHICS LANDSCAPE

  • 09:10
    Yolanda Lannquist

    Responsible AI for Development

    Yolanda Lannquist - Head of Research & Advisory - The Future Society

    While most AI development and applications are currently concentrated in advanced economies, the AI and digital revolutions offer an opportunity for developing countries to drive innovation, inclusive growth, and sustainable development. From education to healthcare, AI applications offer gains for public services, key sectors, and the UN Sustainable Development Goals. However, they also pose risks in a ‘more to gain, more to lose’ paradigm: models trained on foreign data that is not representative of local populations, widening inequality and gaps in digital and economic inclusion, and a lack of data governance. The Future Society partners with local and international organizations to create the capacity, knowledge, and tools that equip policymakers, companies, academia, and beyond to lead responsible AI adoption while mitigating these risks.

    Yolanda is Head of Research & Advisory at The Future Society (TFS), a nonprofit specializing in AI policy originally incubated at Harvard Kennedy School. She leads its ‘Responsible Artificial Intelligence for Development’ (RAI4D) program, which creates the knowledge, capacity, and tools for policymakers and key stakeholders to lead responsible adoption of AI for inclusive and sustainable development. This includes co-developing national AI strategies in development contexts (e.g. Rwanda, Ghana, and Tunisia) and international AI policies (e.g. for the OECD, World Bank Digital Development, GIZ, and the Global Partnership on AI). Yolanda serves on the OECD AI Policy Observatory’s expert group on Implementing Trustworthy AI and has taught AI policy at IE University in Madrid. She holds a Master in Public Policy from the Harvard Kennedy School and a Bachelor’s in Economics from Columbia University with Phi Beta Kappa honors. Yolanda is Turkish-American and is based in San Francisco.

  • 09:30
    Genevieve Smith

    Seven Leadership Strategies to Mitigate Bias in AI

    Genevieve Smith - Associate Director - Berkeley Haas Center for Equity, Gender & Leadership

    Artificial intelligence, which represents the largest economic opportunity of our lifetime, is increasingly employed to make decisions affecting most aspects of our lives. This is exciting: using AI in predictions and decision-making can reduce human subjectivity. But it can also embed biases, producing discriminatory outcomes at scale and posing immense risks to business. Harnessing the transformative potential of AI requires addressing these biases. By mitigating bias in AI, business leaders can unlock value responsibly and equitably. Genevieve's presentation will outline why bias exists in AI systems and share seven leadership strategies to implement.

    Genevieve Smith is the Associate Director at the Berkeley Haas Center for Equity, Gender & Leadership (EGAL), which seeks to reimagine business for a more equitable and inclusive society. She is Lead Author of EGAL’s playbook on Mitigating Bias in Artificial Intelligence and leads research on advancing inclusive AI. For over a decade, Genevieve has conducted research and advised Fortune 500 companies on inclusive technology, gender equity, and women’s economic empowerment. Prior to working at Haas, Genevieve worked for the International Center for Research on Women, UN Women and the UN Foundation’s Clean Cooking Alliance.

  • 09:50
    Sagar Savla

    Responsible AI at Google

    Sagar Savla - Head of Product - Google

    AI is a fundamental, groundbreaking technology with an adoption rate of 64% year over year. Along with its innovative and transformational abilities come challenges regarding ethics and responsibility. If AI/ML systems are developed without responsible and ethical frameworks, they can harm individuals and society. Every organization developing AI models has a responsibility to adhere to a framework that is accountable, fair, transparent, and safe. In this talk, you'll learn how Google approaches Responsible AI, best practices for AI frameworks, and relevant case studies.

    Sagar is the product lead for Responsible AI in Google's AI Research group, building cutting-edge tech into products like Live Transcribe (a Webby award winner), the Pixel phone's camera, YouTube, and Nest. Prior to this, he worked at Facebook (now Meta) on virtual reality metaverses, at Lyft, and on fraud detection at PayPal. Originally from Bombay, India, he has gone on to give award-winning talks in over 30 countries on topics like ethical hacking, AI, and more.

    He holds a Master's from Georgia Tech, where his research covered machine learning, user experience, and privacy policies. He is co-author of UXonomy: UX for Modern Engineers.

  • 10:10
    Denise Kleinrichert

    Empathy, Ethics & AI

    Denise Kleinrichert - Professor, Management/Ethics - San Francisco State University

    This talk proposes a new perspective for moral considerations of AI. A systematic ethical lens must be aligned with the human duty to the well-being of others at work, and of those impacted by work practices and products, including artificial intelligence. AI must be designed and used in ways that preserve the well-being of others. The Association for Computing Machinery (ACM) promotes a robust Code of Ethics and Professional Conduct for the practice of business computing to guide development and deployment. This session focuses on the ethical concerns, implications, and practices of business AI ideation, development, and deployment through a cognitive lens of empathy (beyond a compliance mindset). Empathy is a necessary, and unique, theoretical lens in the AI domain.

    Denise Kleinrichert, Ph.D., is a Professor of Management and Business Ethics at the Lam Family College of Business, San Francisco State University. She was the Director of the Center for Ethical and Sustainable Business (CESB) for many years, served as Chair of the annual Business Ethics Week (2007–2018), founded the Ethics & Compliance Workshop series, and co-developed the Graduate Certificate in Ethical Artificial Intelligence and the Graduate Business Certificate in Ethics & Compliance. She teaches undergraduate and MBA seminar courses in Ethics and Compliance and in the Political, Social, & Legal Environments of Business.

  • 10:30

    COFFEE & NETWORKING BREAK

  • 11:00

    BUILDING ETHICAL AI

  • 11:00
    Kathy Baxter

    Practical Advice for Building an Ethical AI Practice

    Kathy Baxter - Principal Architect, Ethical AI Practice - Salesforce

    Our Role in Guiding Responsible AI Regulation

    Major technological advancements rarely begin as safe, inclusive, or focused on long-term societal impacts. As we have all seen, and some have painfully experienced, AI is no different in that regard. However, it is different in the sheer scale, speed, and complexity of its impact, so it is unsurprising that there is significant effort to create standards, frameworks, and regulations. There are still many questions to be answered about how to standardize or regulate AI, but there are things that every organization creating and implementing AI can do to prepare for upcoming regulations and create trustworthy technology, which Kathy will share.

    As a Principal Architect of Ethical AI Practice at Salesforce, Kathy develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. Prior to Salesforce, she worked at Google, eBay, and Oracle in user experience research. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, Understanding Your Users, was published in May 2015. You can read about her current research at einstein.ai/ethics.

  • 11:20
    Alexandra Ross

    Best Practices for Building a Data Ethics Program

    Alexandra Ross - Senior Director, Senior Data Protection, Use & Ethics Counsel - Autodesk

    Building AI Responsibly – From the Ground Up

    As the use of artificial intelligence, machine learning, and big data continues to develop across industries, companies face increasingly complex legal, ethical, and operational challenges. Companies that create or work with AI offerings to support or enhance products or business methods need guidance to succeed. In this presentation by the legal and data ethics leads at Autodesk, learn how best to build and maintain an ethics-by-design program, leverage your existing privacy and security program framework, and manage stakeholders.

    Key Takeaways:

    • Understand current best practices for ensuring compliance with key regulations focused on AI.

    • Learn how to engage stakeholders, leverage resources and build, staff and maintain an ethics program.

    • Get tips on building an ethical data culture, governance models, training, and awareness.

    Alexandra Ross is Senior Director, Senior Data Protection, Use & Ethics Counsel at Autodesk, Inc., where she provides legal, strategic, and governance support for Autodesk’s global privacy, security, data use, and ethics programs. She is also an Advisor to BreachRx and an Innovators Evangelist for The Rise of Privacy Tech (TROPT). Previously she was Senior Counsel at Paragon Legal and Associate General Counsel for Wal-Mart Stores. She is a certified information privacy professional (CIPP/US, CIPP/E, CIPM, CIPT, FIP and PLS), holds a law degree from UC Hastings College of the Law, and a B.S. in theater from Northwestern University. Alexandra is a recipient of the 2019 Bay Area Corporate Counsel Award – Privacy.

  • Alec Shuldiner

    Best Practices for Building a Data Ethics Program

    Alec Shuldiner - Data Ethics Program Lead - Autodesk

    Alec Shuldiner, Ph.D., leads Autodesk’s Data Ethics Program, a key component of the company’s trusted data practices. He has a background in big data, compliance, and technological systems, and is an occasional IoT researcher and commentator.

  • 11:40

    PANEL: Looking Beyond Theory – Considerations & Best Practices for Operationalizing Ethical AI in Industry

  • Shilpi Agarwal

    Moderator

    Shilpi Agarwal - Founder & Chief Data Ethics Officer - DataEthics4All

    Reducing AI’s BMI

    Do you know what AI's BMI is today? You guessed it: it's really high! So how do we fix this? Does Big Tech need to go on a DIET? What diet would that be: keto, paleo, Mediterranean, or vegetarian? Come join us for this fun talk and learn the right diet for AI companies.

    Shilpi Agarwal is a Data Philanthropist, Adjunct Faculty at Stanford, and an MIT $100K Launch Mentor.

    Armed with technical skills from her Bachelor of Engineering in Computer Science, design-thinking skills from her Master's in Design, and 20+ years of business and marketing know-how as a marketing consultant for brands both large and small, Shilpi founded DataEthics4All, troubled by the unethical use of data around her on social media, in business, and in political campaigns.

    DataEthics4All is a community behind the STEAM in AI™ movement for youth, celebrating the Ethics 1st™ champions of today and tomorrow. It has pledged to help 5 million economically disadvantaged students over the next 5 years by breaking down barriers to entry in tech and raising awareness of the ethical use of data in data science and enterprise AI, working toward a better data and AI world.

  • Navrina Singh

    Panelist

    Navrina Singh - Founder - Credo AI

    Navrina Singh is the Founder of Credo AI and a technology leader with over 18 years of experience in enterprise SaaS, AI, and mobile. She has held multiple product and business leadership roles at Microsoft and Qualcomm. Navrina is an executive board member of the Mozilla Foundation, guiding its trustworthy AI charter. She is also a Young Global Leader with the World Economic Forum and served on its future council for AI, guiding policies and regulations in responsible AI. Navrina holds an MS in Electrical & Computer Engineering from the University of Wisconsin-Madison, an MBA from the University of Southern California, and a BS in Electronics & Telecommunications from Pune College of Engineering, India.

  • Kinjal Basu

    Panelist

    Kinjal Basu - Senior Staff Software Engineer - LinkedIn

    Operationalizing Responsible AI in Large-Scale Organizations

    Most large-scale organizations face challenges scaling their infrastructure to support multiple teams across multiple product domains. More often than not, individual teams build systems and models to power their specific product areas, but because of innate differences in the products and their infrastructure support, the broad use of Responsible AI techniques poses a serious challenge for organizations.

    Each product can potentially have a different definition of “fairness” across different dimensions and hence require very different measurement and mitigation solutions. In this talk, we will focus on how we are building a scalable system on our machine learning platform that can not only measure but also mitigate unintended consequences of AI models across most products at LinkedIn.

    We will discuss how this system aims to integrate seamlessly into each and every AI pipeline and measure unfairness across different protected attributes. The system is flexible enough to incorporate different definitions of fairness as required by the product. Moreover, if and when algorithmic bias is detected, we also have a system to remove it through state-of-the-art algorithms spanning different notions of fairness. That said, we are just starting: there is much more work to be done, and we don't have all the answers yet.
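
    As a rough illustration of the measurement step described above, the sketch below computes a simple per-attribute fairness metric (the demographic parity gap). This is an illustrative example only, not LinkedIn's actual system; the column names, toy data, and the choice of demographic parity as the fairness definition are assumptions made for the sketch.

    # Illustrative sketch: measuring unfairness across protected attributes.
    # Not LinkedIn's implementation; columns and data are assumed for the example.
    import pandas as pd

    def demographic_parity_gap(df, prediction_col, attribute_col):
        """Largest difference in positive-prediction rate between groups."""
        rates = df.groupby(attribute_col)[prediction_col].mean()
        return float(rates.max() - rates.min())

    # Toy scored predictions with two protected attributes.
    scored = pd.DataFrame({
        "predicted_positive": [1, 0, 1, 1, 0, 1, 0, 0],
        "gender": ["a", "a", "a", "b", "b", "b", "b", "a"],
        "age_band": ["<40", "<40", "40+", "40+", "<40", "40+", "<40", "40+"],
    })

    # A pipeline could flag any attribute whose gap exceeds a product-specific
    # threshold; because each product may define fairness differently, the
    # metric function itself would be pluggable in a real system.
    for attr in ["gender", "age_band"]:
        print(attr, demographic_parity_gap(scored, "predicted_positive", attr))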

    Finally, all of the above reflects good intent toward ethical practices, but the real win comes from the actual member impact after launching such bias-mitigated models in production. We will also discuss how we A/B test our models and systems once they are launched in production and incorporate those learnings to improve the overall member experience, thus connecting the overall cycle of intent and impact.

    Kinjal is currently a Senior Staff Software Engineer and the tech lead for Responsible AI at LinkedIn, focusing on challenging problems in fairness, explainability, and privacy. He leads several initiatives across different product applications toward making LinkedIn a responsible and equitable platform. He received his Ph.D. in Statistics from Stanford University with a best thesis award, and has published papers in top journals and conferences. He serves as a reviewer and program committee member at venues such as NeurIPS, ICML, KDD, FAccT, and WWW.

  • Olivia Gambelin

    Panelist

    Olivia Gambelin - Founder & CEO - Ethical Intelligence

    Olivia is an AI ethicist who works to bring ethical analysis into tech development to create human-centric innovation. She believes there is strength in human values that, when applied to artificial intelligence, lead to robust technological solutions we can trust. Olivia holds an MSc in Philosophy from the University of Edinburgh, with a concentration in AI ethics and a special focus on probability and moral responsibility in autonomous cars, as well as a BA in Philosophy and Entrepreneurship from Baylor University. Currently, Olivia is the Chief Executive Officer of Ethical Intelligence, where she leads a remote team of over thirty experts in the tech ethics field. She is also co-founder of the Beneficial AI Society and sits on the Advisory Board of Tech Scotland Advocates as well as the Founding Editorial Board of Springer Nature's AI and Ethics journal.

  • 12:20

    LUNCH

  • ETHICAL AI IN PRACTICE

  • 13:20
    Alka Roy

    Navigating the Bumpy Road to Responsible AI

    Alka Roy - Founder - Responsible Innovation Project & RI Labs

    Alka Roy is the founder of RI Labs and The Responsible Innovation Project, where she works with global leaders, researchers, and founders to navigate emerging tech and innovation responsibly and with delight. She has authored online courses on Ethics in AI & Data Science and 5G & AI for the Linux Foundation, and designed and taught a course on Responsible Innovation for Entrepreneurs, Tech Makers & Business Leaders at the University of California, Berkeley. Alka is a product and technology leader who has launched 100+ products and been part of several industry firsts at AT&T and Cingular Wireless in wireless evolution, cloud services, and conversational AI. She was instrumental in setting up the Bay Area 5G co-create lab for AT&T and led the Responsible AI initiative for AT&T’s Innovation Center. Alka holds patents for policy and security frameworks, mentors extensively, and has served on advisory boards for startups, industry groups, and non-profits.

  • 13:40
    Vishwakarma Singh

    Powering a Safe Online Experience at Scale with Machine Learning

    Vishwakarma Singh - Machine Learning Researcher - Pinterest

    Large web platforms like Pinterest, which let users create or save content, are sometimes misused by bad actors to distribute harmful, hateful, harassing, or misleading content. Such unsafe content not only creates very unpleasant experiences for platform users but can, in rare cases, lead to very dangerous outcomes. Identifying the various forms of unsafe content is a hard research problem. Operationalizing an effective, scalable solution on a real-world platform is even harder because of the ecosystem's complexity, dynamic nature, system requirements, massive size, and impact. In the context of Pinterest, I will discuss the challenges as well as the standard processes, techniques, and systems used to identify and act against harmful content at scale.

    Vishwakarma Singh is the Machine Learning Technical Lead for Trust and Safety at Pinterest, where he leads strategy, innovation, and solutions for proactively fighting various forms of platform abuse at scale using machine learning. He previously worked at Apple as a Principal Machine Learning Scientist. He earned a PhD in Computer Science from the University of California, Santa Barbara, specializing in pattern querying in heterogeneous datasets. He has published many research papers in peer-reviewed conferences and journals.

  • 14:00
    Nikon Rasumov

    Privacy and Fairness and MLX

    Nikon Rasumov - Product Manager - Meta

    AI data and feature engineering carries important privacy, fairness, and experience requirements, including lineage tracking, purpose limitation, retention, data minimization, and preventing unauthorized use, as well as avoiding label imbalance, label bias, and model bias. I will talk about some of the techniques to address those requirements.
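
    As a toy illustration of the label-imbalance check mentioned above, here is a minimal sketch under assumed column names and an assumed threshold; it is not Meta's internal tooling.

    # Illustrative sketch: flagging label imbalance in training data before
    # model training. The column name and threshold are assumptions.
    import pandas as pd

    def label_imbalance_ratio(labels):
        """Ratio of most common to least common label; 1.0 means balanced."""
        counts = labels.value_counts()
        return float(counts.max() / counts.min())

    train = pd.DataFrame({"label": [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]})
    ratio = label_imbalance_ratio(train["label"])
    if ratio > 3.0:  # arbitrary example threshold
        print(f"Label imbalance (ratio {ratio:.1f}): consider reweighting or resampling.")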

    Nikon Rasumov has 10+ years of experience building B2C and B2B start-ups from the ground up. He holds a Ph.D. in computational neuroscience from Cambridge University, as well as affiliations with MIT and Singularity University. An expert in information-driven product design, his publications and patents deal with how to minimize vulnerabilities that result from sharing too much information. Nikon’s product portfolio includes Symantec Cyber Resilience Readiness™, SecurityScorecard Automatic Vendor Detection™, Symantec CyberWriter™, and Cloudflare Bot Management, along with various other insurance and security analytics platforms. Currently, Nikon is responsible for the privacy and developer experience of AI data and feature engineering at Meta.

  • 14:20

    PANEL: AI Governance & Regulation

  • Keith Sonderling

    Moderator

    Keith Sonderling - Commissioner - U.S. Equal Employment Opportunity Commission (EEOC)

    Keith E. Sonderling was confirmed by the U.S. Senate on September 22, 2020, to be a Commissioner on the U.S. Equal Employment Opportunity Commission. In this capacity, he is responsible for enforcing federal laws that prohibit discrimination against job applicants and employees because of race, color, religion, sex, national origin, age, disability or genetic information. Commissioner Sonderling previously served as the Acting and Deputy Administrator of the Wage and Hour Division at the U.S. Department of Labor. Prior to that, Commissioner Sonderling was a partner at the Gunster Law Firm in Florida, where he practiced labor and employment law.

  • Karen Silverman

    Panelist

    Karen Silverman - Founder & CEO - The Cantellus Group

    Karen is a leading global expert in practical governance strategies for AI and other frontier technologies. As the CEO and Founder of The Cantellus Group, she advises Fortune 50 companies, startups, consortia, and the public sector on how to govern cutting-edge technologies in a rapidly changing policy environment. Her expertise is informed by more than 20 years of practice and management leadership at Latham & Watkins, LLP, where she advised global businesses in complex antitrust matters, M&A, governance, ESG, and crisis management. Karen chairs the board of a public benefit corporation developing complex content moderation tools. She is a World Economic Forum Global Innovator and sits on its Global AI Council. She serves on the boards of AIEDU, Legal Momentum, and Not For Sale.

  • Eddan Katz

    Panelist

    Eddan Katz - Tech Policy Advisor - Credo AI

    Eddan Katz is a global expert in technology law and policy, a digital rights activist, and an access-to-knowledge advocate. Now working with the Center for AI and Digital Policy on the AI and Democratic Values report, he previously developed the pilot project methodology at the World Economic Forum’s Centre for the Fourth Industrial Revolution network and led multi-stakeholder initiatives on the AI/ML and data policy platforms. He was the first Executive Director of the Yale Information Society Project, the International Affairs Director at the Electronic Frontier Foundation, and co-founder of the Sudo Room hackerspace in downtown Oakland.

  • Madhulika Srikumar

    Panelist

    Madhulika Srikumar - Program Lead for AI Safety Initiative - Partnership on AI

    Madhulika Srikumar is a Program Lead for the AI safety initiative at Partnership on AI, a global multistakeholder nonprofit shaping the future of responsible AI. Madhu's current work examines developing norms for the responsible publication and deployment of AI research and technology. As a policy professional, she has led global partnerships and engagement with governments, tech companies, and law enforcement, and conducted research in digital rights and technology policy. Prior to PAI, she was a public interest technology fellow at New America in Washington, D.C., and an associate fellow and program coordinator with the Observer Research Foundation in New Delhi. Madhu is a lawyer by training and received her BA LL.B. (Hons.) from Gujarat National Law University in India. She completed her graduate studies (LL.M.) at Harvard Law School, where she was an Inlaks Foundation Scholar and a Cravath International Fellow.

  • 15:00

    END OF SUMMIT
