• ML FAIRNESS SUMMIT

  • Times in PDT

  • 08:00

    WELCOME & OPENING REMARKS - 8am PDT | 11am EDT | 4pm BST

  • DEVELOPING FAIR ML

  • 08:05
    Michael Tetelman

    Is Fair AI Possible? - Yes, and We Have All We Need For It!

    Michael Tetelman - AI Research - Volkswagen Group

    As AI-based technology started the next industrial revolution, it exposed many social inequalities that had been mostly hidden and suddenly became obvious to everyone. Unintentional bias in data and AI solutions, along with the ability to scale it, makes the use of the new technology unequal and unfair to people and social groups. Is AI technology responsible for that? No. The technology itself is not fair or unfair. The way we use the technology makes it fair or unfair. We have to evolve AI technologies to comply with our social norms, and we have everything we need to achieve that.

    Key Takeaways:

    *The new AI technology is a great amplifier – it scales up both its achievements and its deficiencies. Biased data and solutions make us pose a question: is the new technology bad for us?

    *The answer is no. The technology itself does not know the social norms that we want. We have to use the technology in a way that complies with our social norms.

    *I will show how the problems of data bias and unfairness in AI can be solved using AI technology itself, so there is a scalable way to automatically correct data biases and find fair solutions that we can accept and use (a sketch of one such correction follows the takeaways).
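
    Below is a minimal, hypothetical sketch of one standard data-correction technique of the kind alluded to above: reweighing training examples so that a protected attribute becomes statistically independent of the label (in the spirit of Kamiran & Calders). The column names are illustrative assumptions, and this is not necessarily the specific method presented in the talk.

      import pandas as pd

      def reweigh(df, group_col="group", label_col="label"):
          # Weight per row: P(A=a) * P(Y=y) / P(A=a, Y=y), so that under the
          # weighted distribution the protected attribute A and label Y are
          # statistically independent.
          n = len(df)
          p_group = df[group_col].value_counts(normalize=True)
          p_label = df[label_col].value_counts(normalize=True)
          p_joint = df.groupby([group_col, label_col]).size() / n
          return df.apply(
              lambda r: p_group[r[group_col]] * p_label[r[label_col]]
              / p_joint[(r[group_col], r[label_col])],
              axis=1,
          )

      # Toy example: under-represented (group, label) pairs get weights > 1,
      # so a downstream learner effectively trains on a debiased distribution.
      df = pd.DataFrame({"group": ["a", "a", "a", "b"], "label": [1, 1, 0, 0]})
      print(reweigh(df))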

    Michael Tetelman holds a PhD in Theoretical and Mathematical Physics. He works on developing Bayesian methods for Neural Networks, Deep Learning and Artificial Intelligence. He is currently interested in researching Machine Learning methods for AI Fairness, specifically AI methods for removing biases in data, self-supervised learning for automatic data labeling, and correcting labeling errors. In the past he researched Machine Learning methods for Optical Character Recognition, Image Processing and Data Compression. His work in Physics includes theoretical and applied studies of Phase Transitions and Quantum Field Theory.

  • 08:30
    Tim Yandel

    Fighting AI Bias: Protecting the Soul of AI

    Tim Yandel - VP - Sama

    Tim Yandel from Sama will discuss the importance of addressing AI bias, its impact on machine learning algorithms, and what we can all do collectively to save the soul of AI moving forward. In addition to individuals and politicians, companies with AI as a core strategic initiative have the power to incite real change, and it needs to happen now.

    Key Takeaways:

    *Overview of the Bias Problem

    *How to counter and avoid bias entering your algorithms

    *Building Ethical AI for the future

    Tim Yandel is the VP of Global Sales at Sama, a certified B Corp that helps build the world's most advanced machine learning algorithms while making a real impact in the world. Tim is passionate about fostering a Sama culture that cares deeply about helping clients watch their AI initiatives come to life and ensuring that the impact of these initiatives changes lives for the better.

  • 08:55
    Georgios Damaskinos

    Private Distributed Learning in a Byzantine World

    Georgios Damaskinos - Machine Learning Engineer - Facebook

    The ever-growing number of edge devices (e.g., smartphones) and the exploding volume of sensitive data they produce call for distributed machine learning techniques that are privacy-preserving. Given the increasing computing capabilities of modern edge devices, these techniques can be realized by pushing the sensitive-data-dependent tasks of machine learning to the edge devices, thus avoiding disclosure of sensitive data.

    I will present two important challenges in this new computing paradigm along with an overview of our proposed solutions to address them. First, for many applications, such as news recommenders, data needs to be processed fast, before it becomes obsolete. Second, given the large number of uncontrolled edge devices, some of them may undergo arbitrary (Byzantine) failures and deviate from the distributed learning protocol, with potentially negative consequences such as learning divergence or even biased predictions (a sketch of one standard defence follows the takeaways).

    Key Takeaways:

    *Our data is extremely valuable and vulnerable => let's push it to the "Edge"

    *Machine Learning at the Edge is possible yet challenging due to (a) temporality of the data and (b) unreliability of the machines
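
    To make the Byzantine challenge concrete, here is a minimal sketch of one standard defence from the robust distributed learning literature: aggregating worker gradients with a coordinate-wise median instead of a mean, so that a minority of arbitrarily corrupted updates cannot drag the result. This illustrates the general idea only; it is an assumed example, not the specific protocol from the talk.

      import numpy as np

      def robust_aggregate(gradients):
          # Coordinate-wise median tolerates a minority of arbitrary (Byzantine)
          # updates, unlike a plain mean, which a single worker can corrupt.
          return np.median(np.stack(gradients), axis=0)

      honest = [np.array([1.0, -0.5]) + 0.01 * np.random.randn(2) for _ in range(5)]
      byzantine = [np.array([1e6, 1e6])]           # one arbitrarily bad update
      print(robust_aggregate(honest + byzantine))  # stays near the honest gradient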

    Georgios is a Machine Learning Engineer at Facebook London, focusing on natural language processing. He received his Ph.D. from EPFL in September 2020, where he worked under the supervision of Rachid Guerraoui. Before joining EPFL, he received his MEng in Electrical and Computer Engineering from NTUA. His research focuses on distributed machine learning techniques that are privacy-preserving and robust against arbitrary failures (such as adversarial attacks). He is mainly a practitioner but also studies algorithmic tools from a theoretical perspective. His work has led to publications in multiple premier conferences such as ICML and AAAI, and he has won several awards, including the EPFL Ph.D. fellowship and the best paper award at Middleware 2020. More about Georgios: https://gdamaskinos.com/

  • 09:20
    Léa Genuit

    ML Fairness 2.0 - Intersectional Group Fairness

    Léa Genuit - Data Scientist - Fiddler

    As more companies adopt AI, more people question the impact AI creates on society, especially regarding algorithmic fairness. However, most metrics that measure the fairness of AI algorithms today don’t capture the critical nuance of intersectionality. Instead, they hold a binary view of fairness, e.g., protected vs. unprotected groups. In this talk, we’ll discuss the latest research on intersectional group fairness using worst-case comparisons (a minimal sketch follows the takeaways).

    Key Takeaways:

    *The importance of fairness in AI

    *Why AI fairness is even more critical today

    *Why intersectional group fairness is critical to improving AI fairness
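
    Here is a minimal sketch of the worst-case comparison described above, under assumed column names: compute a per-subgroup outcome rate over every intersection of protected attributes, then report the min/max ratio, so the fairness score is driven by the worst-off intersectional subgroup rather than a single binary split.

      import pandas as pd

      def worst_case_disparity(df, attrs, outcome):
          # Positive-outcome rate for every intersectional subgroup
          # (e.g., gender x race), then the worst-case min/max ratio.
          rates = df.groupby(attrs)[outcome].mean()
          return rates.min() / rates.max()  # 1.0 = parity; smaller = less fair

      df = pd.DataFrame({
          "gender":   ["f", "f", "m", "m", "f", "m"],
          "race":     ["x", "y", "x", "y", "y", "x"],
          "approved": [1,   0,   1,   1,   1,   0],
      })
      print(worst_case_disparity(df, ["gender", "race"], "approved"))  # 0.5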

    Léa is a data scientist at Fiddler AI. With a belief that all AI products should be ethical, she focuses on researching transparency in AI algorithms, including AI explainability, fairness, and bias. When she’s away from her laptop, she can be found running through a cool ocean breeze in the Presidio of San Francisco.

  • 09:45

    COFFEE & NETWORKING BREAK

  • 09:55
    Aparna Dhinakaran

    The Man, The Machine, And The Black Box: ML Observability: A Critical Piece in Ensuring Responsible AI

    Aparna Dhinakaran - Co-Founder & CPO - Arize AI

    In this talk, Aparna will kick off by covering the challenges organizations face in checking for model fairness, such as the lack of access to protected-class information for bias checks and the diffuse organizational responsibility for ensuring model fairness. She will then dive into the approaches organizations can take to start addressing ML fairness head-on, with a technical overview of fairness definitions and how practical tools such as ML Observability can help build fairness checks into the ML workflow (a minimal illustration follows).
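
    As a hypothetical illustration of such a check (the function and threshold below are assumptions, not Arize's actual API), an observability workflow might compute demographic parity over a window of logged production predictions and alert when the gap between protected groups exceeds a threshold:

      from collections import defaultdict

      def demographic_parity_gap(predictions):
          # predictions: (protected_group, predicted_label) pairs from a
          # monitoring window; returns the largest gap in positive rates.
          pos, total = defaultdict(int), defaultdict(int)
          for group, label in predictions:
              total[group] += 1
              pos[group] += label
          rates = [pos[g] / total[g] for g in total]
          return max(rates) - min(rates)

      window = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
      gap = demographic_parity_gap(window)
      if gap > 0.2:  # illustrative alerting threshold
          print(f"fairness alert: demographic parity gap = {gap:.2f}")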

    Aparna Dhinakaran is Chief Product Officer at Arize AI, a startup focused on ML Observability. She was previously an ML engineer at Uber, Apple, and TubeMogul (acquired by Adobe). During her time at Uber, she built a number of core ML infrastructure platforms, including Michelangelo. She holds a bachelor's degree from UC Berkeley's Electrical Engineering and Computer Science program, where she published research with Berkeley's AI Research group. She is on a leave of absence from the Computer Vision PhD program at Cornell University.

  • FAIR ML IN PRACTICE

  • 10:20
    Ariadna Font Llitjós

    Responsible AI @ Twitter

    Ariadna Font Llitjós - Director Engineering ML Platform - Twitter

    In this talk, I will focus on Responsible AI in the social media space, and specifically at Twitter, sharing the kinds of problems that we tackle and explore. At Twitter, Responsible AI is a company-wide initiative and concern, called Responsible ML. Leading this work is our ML Ethics, Transparency and Accountability (META) team: a dedicated group of engineers, researchers, and data scientists collaborating across the company to assess downstream or current unintentional harms in the algorithms we use and to help Twitter prioritize which issues to tackle first. I will share where we are today and where we are headed.

    Key Takeaways:

    *What responsible AI means for Twitter and social media

    *What it took to build an A-team in charge of Responsible AI @Twitter

    *Twitter’s approach to Responsible AI and the work we are doing and planning

    Ari Font Llitjós is the Director of Engineering of Twitter’s Machine Learning Platform, seeking to improve Twitter by enabling advanced and ethical AI. She is also the Engineering Site Lead for the New York City office. Previously, Ari was director of Product Development and Design Principal at IBM Watson (now Data and AI), and Director of Emerging Technologies at IBM Research, where she spearheaded AI Challenges and IBM’s Q Program, leading design and development of IBM’s Quantum commercial offerings as well as the Quantum Experience and IBM’s open-source SDK for writing Quantum experiments, programs and applications.

  • 10:45
    Ramya Srinivasan

    Biases in Generative Art: A Causal Look from the Lens of Art History

    Ramya Srinivasan - AI Researcher - Fujitsu Laboratories of America

    With rapid progress in artificial intelligence (AI), the popularity of generative art has grown substantially. From creating paintings to generating novel art styles, AI-based generative art has showcased a variety of applications. However, there has been little focus on the ethical impacts of AI-based generative art. In this work, we investigate biases across the generative art AI pipeline, from those that can originate in improper problem formulation to those related to algorithm design. Viewing these through the lens of art history, we discuss their socio-cultural impacts. Leveraging causal models, we highlight how current methods fall short in modeling the process of art creation and thus contribute to various types of biases. We illustrate this through case studies, in particular those related to style transfer. Finally, we outline a few pointers that would be useful to consider while designing generative art AI pipelines.

    Ramya Srinivasan, Ph.D, is an AI researcher with Fujitsu Laboratories of America. In this role, Ramya is involved in the design and development of fair and explainable AI solutions, considering the requirements of various stakeholders involved in the pipeline. Ramya’s research interests are in the broad areas of computer vision, explainable AI, causality, and AI ethics.

  • 11:10

    BREAKOUT SESSIONS: ROUNDTABLE DISCUSSIONS WITH SPEAKERS

  • 11:35

    COFFEE & NETWORKING BREAK

  • 11:45

    PANEL: Are We Ready to Deploy Fair and Explainable AI/ML? - Challenges & Responses

  • Krishna Sankar

    Moderator

    Krishna Sankar - Distinguished Engineer – AI - U.S. Bank

    Krishna Sankar is a Distinguished Engineer – AI at the Enterprise Analytics & AI (EAA) group of U.S. Bank, focusing on embedding intelligence in financial systems, covering all aspects including the E3 of AI: Explainability, Experimentation & AI Ethics. Earlier stints include Senior Data Scientist/Volvo, Chief Data Scientist/blackarrow.tv, Data Scientist/Tata America Intl, Director of Data Science at a bioinformatics startup, and Distinguished Engineer/Cisco. His external work includes teaching, writing blogs, researching chess openings and Lego Robotics. He has spoken at various conferences including NeurIPS 2020 (moderated two panels), RE.WORK [bit.ly/3712jKG], Nvidia GTC 2019 and GTC 2020, ML tutorials at Strata SJC & London, Spark Summit and others. His occasional blogs can be found at https://ksankar.medium.com/, viz. Conversational AI [bit.ly/37PjPBh], Rebooting AI [bit.ly/2ststqi41], The Excessions of xAI [bit.ly/2LDXe2c], Robots Rules [goo.gl/5yyRv6], NeurIPS 2018 [goo.gl/VgeyDT], Garry Kasparov’s Deep Thinking [goo.gl/9qv671], and a lot of sci-fi and crime noirs! He has been a guest lecturer at the Naval Postgraduate School, Monterey, and you will occasionally find him at the FLL (Lego Robotics) World Competition as a Robot Design Judge.

  • Keegan Hines

    Panellist

    Keegan Hines - VP of Machine Learning - Arthur AI

    Keegan Hines is the VP of Machine Learning at Arthur, an ML model monitoring startup providing model observability and fairness auditing for the Fortune 100 and startups alike. He is also an Adjunct Assistant Professor in the Data Science Program at Georgetown University where he teaches courses in machine learning and deep learning. Keegan comes to Arthur from Capital One where he was the Director of Machine Learning Research and developed applications of ML to key financial services areas. He has also held roles at cyberdefense firms and is currently a Co-Founder and Chair of the Conference on Applied Learning for Information Security (CAMLIS). Keegan holds a PhD in Neuroscience from the University of Texas.

  • Sahab Aslam

    Panellist

    Sahab Aslam - Associate Director, Data Science Capabilities - Merck

    Sahab Aslam received her Master's in Information & Data Science from the University of California, Berkeley. Sahab has unique and diverse experience spanning data science, digital health, product development, software engineering, and human-centric design at start-ups and Fortune 100 companies. Sahab started her digital health journey 9 years ago, providing digital health solutions via SMS and voice-recording technologies in underserved populations. Today, she uses data science to develop solutions that improve patients' lives. Sahab also holds a Master of Science in Mathematics and a Bachelor's in Liberal Arts and Sciences. Outside of work, she spends her time advising start-ups and mentoring data science students.

  • Madeleine Elish

    Panellist

    Madeleine Elish - Senior Research Scientist - Google

    Madeleine Clare Elish is a cultural anthropologist examining the societal impacts of AI and automation. She is a senior research scientist at Google. Previously she led the AI on the Ground Initiative at Data & Society, where she and her team investigated the promises and risks of integrating AI technologies into society.

    As a researcher and anthropologist, Madeleine has worked to reframe debates about the ethical design, use, and governance of AI systems. She has conducted field work across varied industries and communities, ranging from the Air Force, the driverless car industry, and commercial aviation to precision agriculture and emergency healthcare. Her research has been published and cited in scholarly journals and mainstream media publications. She holds a PhD in Anthropology from Columbia University and an SM in Comparative Media Studies from MIT.

  • 12:15

    MAKE CONNECTIONS: MEET WITH ATTENDEES VIRTUALLY FOR 1:1 CONVERSATIONS & GROUP DISCUSSIONS

  • 12:30

    END OF SUMMIT
