• THIS SCHEDULE TAKES PLACE ON DAY 2

  • 08:00

    REGISTRATION OPENS

  • 08:00

    WORKSHOPS

  • 09:00
    Fiona McEvoy

    WELCOME & OPENING REMARKS

    Fiona McEvoy - Tech Ethics Researcher and Founder - YouTheData.com

    Fiona J McEvoy is an AI ethics writer, researcher, speaker, and thought leader based in San Francisco, CA. She was recently honored in the inaugural Brilliant Women in AI Ethics™ Hall of Fame, established at the end of 2020 to recognize “brilliant women who have made exceptional contributions to the space of AI Ethics and diversity.”

    Fiona is the founder of YouTheData.com, a platform for the discussion of the societal impact of tech and AI, and has had numerous articles published in media outlets including Slate, VentureBeat, The Next Web, Offscreen, and All Turtles. Fiona is also regularly asked to present her ideas to AI conferences across the US and internationally, and in recent years has spoken at events in the UK, Canada, Spain, Portugal, and Mexico. Her work has been referenced in books, including Francine Banner’s Crowdsourcing the Law, David Vance’s Business Essentials, and Jeff Mapua’s Virtual Reality and You.

    Fiona holds a graduate degree in Philosophy, with a special focus on ethics and technology.

  • AI ETHICS LANDSCAPE

  • 09:10
    Dexter Fichuk

    Your Model is Biased

    Dexter Fichuk - Developer - Shopify

    The data products we build can be extremely powerful, but how do we build them so that we avoid unintentional bias and discrimination, and don't create extreme echo chambers? This talk will explore the unintended negative consequences of building products from data rather than heuristics, and the ethics of the systems we build.

    Dexter Fichuk is a data scientist turned engineer at Shopify. For the last three years he has built data products and productionized ML systems in code paths hit millions of times daily. He currently works on Shopify's Shop App.

  • 09:30
    Genevieve Smith

    Seven Leadership Strategies to Mitigate Bias in AI

    Genevieve Smith - Associate Director - Berkeley Haas Center for Equity, Gender & Leadership

    Artificial intelligence, which represents the largest economic opportunity of our lifetime, is increasingly employed to make decisions affecting most aspects of our lives. This is exciting: using AI in predictions and decision-making can reduce human subjectivity. But it can also embed biases, producing discriminatory outcomes at scale and posing immense risks to business. Harnessing the transformative potential of AI requires addressing these biases. By mitigating bias in AI, business leaders can unlock value responsibly and equitably. Genevieve's presentation will outline why bias exists in AI systems and share seven strategies for leaders to implement.

    Genevieve Smith is the Associate Director at the Berkeley Haas Center for Equity, Gender & Leadership (EGAL), which seeks to reimagine business for a more equitable and inclusive society. She is Lead Author of EGAL’s playbook on Mitigating Bias in Artificial Intelligence and leads research on advancing inclusive AI. For over a decade, Genevieve has conducted research and advised Fortune 500 companies on inclusive technology, gender equity, and women’s economic empowerment. Prior to working at Haas, Genevieve worked for the International Center for Research on Women, UN Women and the UN Foundation’s Clean Cooking Alliance.

  • 09:50
    Toju Duke

    Responsible AI at Google

    Toju Duke - Program Manager - Responsible AI - Google

    AI is fundamental, groundbreaking technology with an adoption rate of 64% year over year. Along with its innovative and transformational abilities come challenges around ethics and responsibility. If AI/ML systems are developed without responsible and ethical frameworks, they have the propensity to harm individuals and society. Every organisation developing AI models has a responsibility to adhere to a Responsible AI framework that is accountable, fair, transparent and safe. In this talk, you'll learn how Google approaches Responsible AI, best practices for AI frameworks, and relevant case studies.

    Toju is a Responsible AI Program Manager at Google, with over 15 years' experience spanning advertising, retail, not-for-profits and tech. She designs Responsible AI programs focused on the development and implementation of Responsible AI frameworks across Google's product areas, with a focus on Foundation Models, Natural Language Processing, and Generative Language Models. With a proven track record in business success and project management, she is a Manager for Women in AI Ireland, a mentor to tech start-ups, and a business advisor. Toju is a public speaker and advocates for transparent, bias-free AI aimed at reducing systemic injustices and furthering equality. She is also the founder of VIBE, a women's community focused on personal and professional development using the underlying principles of emotional intelligence.

  • 10:10
    Denise Kleinrichert

    Empathy, Ethics & AI

    Denise Kleinrichert - Professor, Management/Ethics - San Francisco State University

    This talk proposes a new perspective on moral considerations of AI. A systematic ethical lens must be aligned with the human duty to the well-being of others at work, and of those impacted by work practices and products, including artificial intelligence. AI must be designed and used in ways that preserve human well-being. The Association for Computing Machinery (ACM) promotes a robust Code of Ethics and Professional Conduct for the practice of business computing to guide development and deployment practices. This session focuses on the ethical concerns, implications, and practices of business AI ideation, development, and deployment through a cognitive lens of empathy (beyond a compliance mindset). Empathy is a necessary, but unique, theoretical lens in the AI domain.

    Denise Kleinrichert, Ph.D., is a Professor of Management and Business Ethics at the Lam Family College of Business, San Francisco State University. She was the Director of the Center for Ethical and Sustainable Business (CESB) for many years, served as Chair of the annual Business Ethics Week (2007–2018), is the founder of the Ethics & Compliance Workshop series, and co-developed the Graduate Certificate in Ethical Artificial Intelligence and the Graduate Business Certificate in Ethics & Compliance. She teaches undergraduate and MBA seminar courses in Ethics and Compliance and in the Political, Social, & Legal Environments of Business.

  • 10:30

    COFFEE & NETWORKING BREAK

  • 11:00

    BUILDING ETHICAL AI

  • 11:00
    Alexandra Ross

    Best Practices for Building a Data Ethics Program

    Alexandra Ross - Senior Director, Senior Data Protection, Use & Ethics Counsel - Autodesk

    As the use of artificial intelligence, machine learning and Big Data continues to develop across industries, companies are presented with increasingly complex legal, ethical and operational challenges. Companies that create or work with AI offerings to support or enhance products or business methods need guidance to succeed. Learn how best to build and maintain an ethics by design program, leverage your existing privacy and security program framework, and manage stakeholders at this presentation by legal and data ethics leads at Autodesk.

    Key Takeaways:

    • Understand current best practices for ensuring compliance with key regulations focused on AI.

    • Learn how to engage stakeholders, leverage resources and build, staff and maintain an ethics program.

    • Tips on building an ethical data culture, governance models, training and awareness.

    Alexandra Ross is Senior Director, Senior Data Protection, Use & Ethics Counsel at Autodesk, Inc. where she provides legal, strategic and governance support for Autodesk’s global privacy, security, data use and ethics programs. She is also an Advisor to BreachRx and an Innovators Evangelist for The Rise of Privacy Tech (TROPT). Previously she was Senior Counsel at Paragon Legal and Associate General Counsel for Wal-Mart Stores. She is a certified information privacy professional (CIPP/US, CIPP/E, CIPM, CIPT, FIP and PLS), holds a law degree from UC Hastings College of Law, and a B.S. in theater from Northwestern University. Alexandra is a recipient of the 2019 Bay Area Corporate Counsel Award – Privacy.

  • Alec Shuldiner

    Best Practices for Building a Data Ethics Program

    Alec Shuldiner - Data Ethics Program Lead - Autodesk

    Alec Shuldiner, PhD, leads Autodesk’s Data Ethics Program, a key component of the company’s trusted data practices. He has a background in big data, compliance, and technological systems, and is an occasional IoT researcher and commentator.

  • 11:20
    Kathy Baxter

    Practical Advice for Building an Ethical AI Practice

    Kathy Baxter - Principal Architect, Ethical AI Practice - Salesforce

    Developing ethical AI is not a nice-to-have; it is the responsibility of the entire organization to guarantee data accuracy. Without ethical AI, we break customer trust, perpetuate bias and create data errors, all of which generate risk to the brand and business performance and, most importantly, cause harm. And we can’t ignore that consumers, and our own employees, expect us to be responsible with the technology solutions we create and use to make a positive impact on the world. Whether you lead a company creating technologies that rely upon AI applications or a company that chooses to embrace those technologies, you must understand the complexities, risks and implications of ethical AI use while democratizing data for all. Kathy will share practical recommendations for building a responsible AI practice.

    As Principal Architect of Ethical AI Practice at Salesforce, Kathy develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. Prior to Salesforce, she worked at Google, eBay, and Oracle in User Experience Research. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, "Understanding Your Users," was published in May 2015. You can read about her current research at einstein.ai/ethics.

  • 11:40

    PANEL: Looking Beyond Theory; Considerations & Best Practices for Operationalizing Ethical AI in Industry

  • Shilpi Agarwal

    Moderator

    Shilpi Agarwal - Founder & Chief Data Ethics Officer - DataEthics4All

    Shilpi Agarwal is a Data Philanthropist, Adjunct Faculty at Stanford, and an MIT $100K Launch Mentor.

    Armed with technical skills from her Bachelor of Engineering in Computer Science, design-thinking skills from her Master's in Design, and 20+ years of business and marketing know-how gained as a marketing consultant for brands large and small, Shilpi started DataEthics4All, troubled by the unethical use of data around her on social media, in business, and in political campaigns.

    DataEthics4All is a community bringing the STEAM in AIᵀᴹ Movement to youth and celebrating Ethics 1stᵀᴹ Champions of today and tomorrow. It has pledged to help 5 million economically disadvantaged students in the next 5 years by breaking down barriers to entry in tech and raising awareness of the ethical use of data in data science and artificial intelligence in the enterprise, working towards a better data and AI world.

  • Navrina Singh

    Panelist

    Navrina Singh - Founder - Credo AI

    Navrina Singh is the Founder of Credo AI and a technology leader with over 18 years' experience in Enterprise SaaS, AI and mobile. Navrina has held multiple product and business leadership roles at Microsoft and Qualcomm. She is an executive board member of the Mozilla Foundation, guiding its trustworthy AI charter. She is also a Young Global Leader with the World Economic Forum and served on its Future Council for AI, guiding policies and regulations in responsible AI. Navrina holds an MS in Electrical & Computer Engineering from the University of Wisconsin-Madison, an MBA from the University of Southern California, and a BS in Electronics & Telecommunications from Pune College of Engineering, India.

  • Kinjal Basu

    Panelist

    Kinjal Basu - Senior Staff Software Engineer - LinkedIn

    Kinjal is currently a Senior Staff Software Engineer and the tech lead for Responsible AI at LinkedIn, focusing on challenging problems in fairness, explainability, and privacy. He leads several initiatives across different product applications towards making LinkedIn a responsible and equitable platform. He received his Ph.D. in Statistics from Stanford University with a best thesis award, and has published papers in many top journals and conferences. He serves as a reviewer and program committee member at multiple top venues such as NeurIPS, ICML, KDD, FAccT, and WWW.

  • Olivia Gambelin

    Panelist

    Olivia Gambelin - Founder & CEO - Ethical Intelligence

    Olivia is an AI Ethicist who works to bring ethical analysis into tech development to create human-centric innovation. She believes there is strength in human values that, when applied to artificial intelligence, lead to robust technological solutions we can trust. Olivia holds an MSc in Philosophy from the University of Edinburgh, with a concentration in AI Ethics and a special focus on probability and moral responsibility in autonomous cars, as well as a BA in Philosophy and Entrepreneurship from Baylor University. Currently, Olivia is the Chief Executive Officer of Ethical Intelligence, where she leads a remote team of over thirty experts in the tech ethics field. She is also the co-founder of the Beneficial AI Society, sits on the Advisory Board of Tech Scotland Advocates, and serves on the Founding Editorial Board of Springer Nature's AI and Ethics journal.

  • 12:20

    LUNCH

  • ETHICAL AI IN PRACTICE

  • 13:20
    Branka Panic

    “Do No Harm” in the Algo Age – Is Ethics Enough?

    Branka Panic - Founding Director - AI for Peace

    Data science has tremendous applications across various sectors: healthcare, energy, the automotive industry, and education. Peacebuilding and humanitarian actors are not only catching up with these applications and the extensive debate on AI principles, transparency, accountability, privacy, and the ethical and human-rights approach to algorithmic decision-making; they also have a lot to contribute. Their experience in sensitive, fragile, and crisis-affected areas and other high-risk settings, in particular, can inform how to design, develop, and implement data-driven and algorithm-related approaches. This talk looks into the specific benefits that can be achieved through the “conflict sensitivity and do no harm” approach and its applications in the algo age.

    All these applications can bring immense potential to support peacebuilding and humanitarian work and help populations in need. However, we must recognize that these methods come with extreme risk to both the privacy and the lives of vulnerable populations if the data is misused or used inappropriately. Although these risks exist across different contexts, the sensitive and often black-box nature of emerging technologies uniquely exacerbates these challenges. In order to “do no harm,” we must be able to understand and tackle the ethical issues of working with data. This talk asks the additional question: is ethics enough?

    Branka Panic is the Founder and Executive Director of AI for Peace, a nonprofit ensuring that artificial intelligence benefits peace, security, and sustainable development, and that diverse voices influence the creation of AI and related technologies. Branka is a passionate advocate for positive peace, with 13 years of experience in the humanitarian-peace-development nexus, working with governments and think-tanks across the globe. She is a co-founder and Board Member of the Center for Exponential Technologies, connecting policy and the tech world. She is a founding member of Sustainable Healthy Habitats and Healthy Humans for Peace, enabling the sustainability of humanitarian action within the framework of the Sustainable Development Goals (SDGs).

  • 13:40
    Vishwakarma Singh

    Powering a Safe Online Experience at Scale with Machine Learning

    Vishwakarma Singh - Machine Learning Researcher - Pinterest

    Large web platforms like Pinterest that let users create or save content are sometimes misused by bad actors to distribute harmful, hateful, harassing, or misleading content. Such unsafe content not only creates unpleasant experiences for platform users but can, in rare cases, lead to very dangerous outcomes. Identifying the various forms of unsafe content is a hard research problem. Operationalizing an effective, scalable solution on a real-world platform is even harder because of the ecosystem's complexity, dynamic nature, system requirements, massive size, and impact. In the context of Pinterest, I will discuss the challenges as well as the standard processes, techniques, and systems used to identify and act against harmful content at scale.

    Vishwakarma Singh is the Machine Learning Technical Lead for Trust and Safety at Pinterest, where he leads strategy, innovation, and solutions for proactively fighting various forms of platform abuse at scale using machine learning. He previously worked at Apple as a Principal Machine Learning Scientist. He earned a PhD in Computer Science, with a specialization in “Pattern Querying in Heterogeneous Datasets,” from the University of California, Santa Barbara. He has published many research papers in peer-reviewed conferences and journals.

  • 14:00
    Nikon Rasumov

    Privacy and Fairness and MLX

    Nikon Rasumov - Product Manager - Meta

    AI data and feature engineering carries important privacy, fairness, and experience requirements, including lineage tracking, purpose limitation, retention, data minimization, and prevention of unauthorized use, as well as avoiding label imbalance, label bias, and model bias. I will talk about some of the techniques to address those requirements.

    Nikon Rasumov has 10+ years of experience building B2C and B2B start-ups from the ground up. He holds a Ph.D. in computational neuroscience from Cambridge University, as well as affiliations with MIT and Singularity University. As an expert in information-driven product design, his publications and patents deal with how to minimize vulnerabilities resulting from sharing too much information. Nikon’s product portfolio includes Symantec Cyber Resilience Readiness™, SecurityScorecard Automatic Vendor Detection™, Symantec CyberWriter™, and Cloudflare Bot Management, along with various other insurance and security analytics platforms. Currently Nikon is responsible for Privacy and Developer Experience of AI Data and Feature Engineering at Facebook.

  • 14:20

    PANEL: AI Governance & Regulation

  • Keith Sonderling

    Moderator

    Keith Sonderling - Commissioner - U.S. Equal Employment Opportunity Commission (EEOC)

    Keith E. Sonderling was confirmed by the U.S. Senate on September 22, 2020, to be a Commissioner on the U.S. Equal Employment Opportunity Commission. In this capacity, he is responsible for enforcing federal laws that prohibit discrimination against job applicants and employees because of race, color, religion, sex, national origin, age, disability or genetic information. Commissioner Sonderling previously served as the Acting and Deputy Administrator of the Wage and Hour Division at the U.S. Department of Labor. Prior to that, Commissioner Sonderling was a partner at the Gunster Law Firm in Florida, where he practiced labor and employment law.

  • Kay Firth-Butterfield

    Panelist

    Kay Firth-Butterfield - Head of AI & ML - World Economic Forum

    Kay Firth-Butterfield is Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum, and is one of the world's foremost experts on the governance of AI. She is a barrister, former judge and professor, technologist, and entrepreneur with an abiding interest in how humanity can equitably benefit from new technologies, especially AI. Kay is an Associate Barrister (Doughty Street Chambers) and Master of the Inner Temple, London, and serves on the Lord Chief Justice’s Advisory Panel on AI and Law. She co-founded AI Global, became the world’s first Chief AI Ethics Officer in 2014, and created the #AIEthics Twitter hashtag. Kay is Vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and was part of the group that met at Asilomar to create the Asilomar AI Ethical Principles. She is on the Polaris Council for the Government Accountability Office (USA), the Advisory Board of the UNESCO International Research Centre on AI, and AI4All. Kay has advanced degrees in Law and International Relations and regularly speaks to international audiences on the beneficial and challenging technical, economic, and social changes arising from the use of AI. She has been consistently recognized as a leading woman in AI since 2018 and was featured in the New York Times as one of 10 Women Changing the Landscape of Leadership.

  • Eddan Katz

    Panelist

    Eddan Katz - AI Policy Clinic Research Team - Center for AI and Digital Policy

    Eddan Katz is a global expert in technology law and policy, a digital rights activist, and an access-to-knowledge advocate. Now working with the Center for AI and Digital Policy on the AI and Democratic Values report, he previously developed the pilot project methodology at the World Economic Forum’s Centre for the Fourth Industrial Revolution network and led multi-stakeholder initiatives on the AI/ML and Data Policy platforms. He was the first Executive Director of the Yale Information Society Project, the International Affairs Director at the Electronic Frontier Foundation, and co-founder of the Sudo Room hackerspace in downtown Oakland.

  • 15:00

    END OF SUMMIT
