• 08:00

    DOORS OPEN & LIGHT BREAKFAST

  • 09:00
    Susannah Shattuck

    WELCOME NOTE & OPENING REMARKS

    Susannah Shattuck - Head of Product - Credo AI

  • CURRENT LANDSCAPE

  • 09:15
    Natesh Arunachalam

    How to Build Robust Machine Learning Models

    Natesh Arunachalam - Lead Data Scientist - Finicity, a Mastercard Company

    The adoption of AI offers numerous potential benefits. However, it has also become increasingly common for AI models to pose their own unique set of risks. I will be presenting a robust pipeline for machine learning model development that spans from data to deployment. Adoption of this pipeline can help mitigate some of the common risks posed by ML models.

    Natesh is a Lead Data Scientist at Finicity, where he creates machine learning products leveraging open banking data. Prior to this, he was a core member of the Machine Learning CoE at JPMorgan Chase, where he specialized in lending, fraud, and marketing models.

  • 09:40
    Mack Wallace

    Regtech is for Regulators Too: AI Use Cases for Supervisors

    Mack Wallace - Specialist on Fintech/Regtech - The World Bank

    Over the last decade, consumer finance has been transformed by digital technology. Traditional players have undergone digital transformations, while a range of new, non-traditional players have entered the market, including fintechs, mobile network operators, and technology firms. While this transformation has led to beneficial innovations, it also presents financial sector supervisors with new challenges, and supervisors are undergoing a profound shift to data- and technology-driven oversight. Drawing on research from 14 financial authorities worldwide, this presentation will explore concrete use cases and examples of the application of supervisory technology.

    Mackenzie Wallace is co-author of the World Bank’s 2021 technical note, “The Next Wave of Suptech Innovation: Suptech Solutions for Market Conduct Supervision.” He is a former financial regulator and an early employee of the U.S. Consumer Financial Protection Bureau (CFPB), where he helped pioneer the bureau’s innovative consumer complaint system and public complaint database. He also served as Fintech Policy Advisor at USAID, where he helped create the RegTech for Regulators Accelerator (R2A), working with financial authorities globally to embed data and technology into supervision. He currently serves as Senior Director of Product at the fintech MPOWER Financing, designing inclusive financial products to make higher education more accessible.

  • 10:05
    Uday Kamath

    Driving Adoption of AI at Enterprise Scale

    Uday Kamath - Chief Analytics Officer - Smarsh

    The explosion in the volume and variety of communications has led compliance teams to accelerate investment in AI. However, this brings new concerns to address, including robustness, explainability, model governance, and validation. This session will explore how innovative firms are overcoming these challenges.

    Uday Kamath has spent more than two decades developing analytics products, combining this experience with expertise in statistics, optimization, machine learning, bioinformatics, and evolutionary computing. He has contributed to many journals, conferences, and books, and is the author of Transformers for Machine Learning: A Deep Dive, XAI: An Introduction to Interpretable XAI, Deep Learning for NLP and Speech Recognition, Mastering Java Machine Learning, and Machine Learning: End-to-End Guide for Java Developers. He has held several senior roles, including Chief Analytics Officer for Digital Reasoning, Advisor for Falkonry, and Chief Data Scientist for BAE Systems Applied Intelligence. Uday holds many patents and has built commercial AI products in domains such as compliance, cybersecurity, financial crime, and bioinformatics. He currently serves as Chief Analytics Officer for Smarsh, where he is responsible for data science and for research on analytical products employing deep learning, transformers, explainable AI, and modern speech and text techniques for the financial and healthcare domains.

  • 10:30

    COFFEE

  • GOVERNANCE & COMPLIANCE

  • 11:00
    Harry Mendell

    Compliance Program Enhancements using Artificial Intelligence: Natural Language Processing with LEX

    Harry Mendell - Data Architecture - Federal Reserve Bank of New York

    LEX is a natural language processing platform built by the Technology Group and Supervision. It is a language-extraction tool that identifies language of interest. Its primary use case has been identifying language within Supervision’s continuous-monitoring documents, but it can be applied to many other use cases. LEX is the first custom-built NLP software put into production at the FRBNY and has since been rolled out to 800 users within NY Supervision and the Federal Reserve System. This talk will cover:

    • Business Problem: BSA/AML

    • Application of AI/ML/NLP

    • Language models and how word embeddings changed everything

    • How a language model trained on news articles is good, but one trained on relevant documents is far more powerful, particularly when blended with the news model

    • The importance of scheduled retraining, so that our models keep an up-to-date vocabulary
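
    The model-blending bullet above can be pictured with a toy sketch. The vectors, vocabulary, and 0.7 weight below are invented for illustration; the abstract does not describe how LEX actually combines its news-trained and domain-trained models.

```python
# Hypothetical sketch: blend a general news-trained word embedding with a
# domain-trained one via a weighted average, falling back to whichever model
# knows the word. All vectors and the weight are made up for this example.

def blended_vector(word, news_vecs, domain_vecs, domain_weight=0.7):
    """Weighted average of a word's news and domain embeddings."""
    n = news_vecs.get(word)
    d = domain_vecs.get(word)
    if n is None:
        return d                      # only the domain model knows the word
    if d is None:
        return n                      # only the news model knows the word
    return [domain_weight * di + (1 - domain_weight) * ni
            for ni, di in zip(n, d)]

news = {"suspicious": [0.1, 0.9]}     # general-news sense of the word
domain = {"suspicious": [0.8, 0.2]}   # BSA/AML documents shift its meaning
print(blended_vector("suspicious", news, domain))  # prints [0.59, 0.41]
```

In practice the two embedding spaces would also need to be aligned before averaging; this sketch only shows the weighting idea.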

    Harry Mendell specializes in artificial intelligence, machine learning, natural language processing, and data architecture, applying these techniques to innovative solutions for the Federal Reserve. He is in the Data Architecture Group and is also co-chairman of the AI Innovation Roundtable, which helps promote the use of AI and machine learning throughout the Federal Reserve System. Harry has a BS and MS in Computer Science from the University of Pennsylvania; his thesis was in artificial intelligence and computer vision. He worked at Bell Labs with the original Unix team to design the first Unix-based workstation, then joined the financial sector, focusing first on the pricing and trading of equity and credit derivatives and later on risk management and compliance. The re-emergence of AI and NLP around 2012 led Harry back to his original interests, and he formed a startup to research and develop FinTech applications, including financial compliance tools. Realizing that this mission was too large and too important for a startup, he decided to take a more direct route to introducing these ideas and now works at the Federal Reserve.

  • 11:25
    Frida Polli

    Creating Unbiased AI: How AI Can be More Compliant with Employment Regulation

    Frida Polli - CEO & Co-Founder - Pymetrics

    When it comes to artificial intelligence (AI), many have highlighted the need for more thoughtful, beneficial design. Learn what principles pymetrics' CEO, Dr. Frida Polli, believes are key to using it exclusively for good, including when it comes to compliance with employment regulation. These principles include:

    • Supporting user data privacy and ownership so that users are empowered, not overpowered by technology

    • Training AI with unbiased data and auditing algorithms for disparate impact to yield unbiased results

    • Aiming for full transparency around data going into algorithms and subsequent outcomes

    • Using open-sourced methods to allow for quality assurance
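
    The auditing principle above is often operationalized with the "four-fifths rule" used in U.S. employment-selection guidance. The function names and numbers below are a hypothetical sketch of that check, not pymetrics' actual audit methodology.

```python
# Hypothetical disparate-impact screen using the four-fifths rule: a protected
# group's selection rate below 80% of the reference group's rate is
# conventionally flagged for review. All numbers are invented for illustration.

def selection_rate(selected, total):
    """Fraction of a group's applicants that the model selects."""
    return selected / total

def disparate_impact_ratio(rate_protected, rate_reference):
    """Ratio of selection rates; values below 0.8 are flagged (four-fifths rule)."""
    return rate_protected / rate_reference

ref_rate = selection_rate(40, 100)    # reference group: 40 of 100 selected
prot_rate = selection_rate(28, 100)   # protected group: 28 of 100 selected
ratio = disparate_impact_ratio(prot_rate, ref_rate)
print(f"{ratio:.2f}", "flag for review" if ratio < 0.8 else "passes")
```

A full audit would go further (statistical significance, intersectional groups, remediation), but this ratio is the usual first screen.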

    Dr. Frida Polli is an award-winning, Harvard- and MIT-trained neuroscientist turned CEO and a global thought leader on both the future of work and ethical AI, including how the latter will play a critical role in shaping this future.

    She is the founder and CEO of pymetrics, a talent matching platform that uses behavioral data and audited AI to help companies like Unilever, LinkedIn, and Accenture better understand their workforce, as well as make fairer and more predictive people decisions.

    Frida was a pre-doctoral fellow at Harvard Medical School, a postdoctoral fellow at MIT, as well as a Life Science Fellow at HBS. She has appeared on CNN, BBC, MSNBC, BloombergTV, and NPR, as well as presented as a WEF Tech Pioneer at the World Economic Forum in Davos, the President’s Circle at the National Academy of Sciences, and other major scientific and world conferences.

  • 11:50
    Albert Fox Cahn

    Automated Injustice: Civil Rights & AI

    Albert Fox Cahn - Executive Director - Surveillance Technology Oversight Project

    This session will analyze emerging concerns about the civil rights impact of AI systems as well as the legislative and regulatory efforts being taken to curb bias, protect privacy, and strengthen civil rights protections.

    Albert Fox Cahn is the Surveillance Technology Oversight Project’s (S.T.O.P.’s) founder and executive director. He is also a Practitioner-in-Residence at N.Y.U. Law School’s Information Law Institute and a fellow at Yale Law School’s Information Society Project, Ashoka, and New Profit’s Civic Lab. Albert started S.T.O.P. with the belief that local surveillance is an unprecedented threat to public safety, equity, and democracy.

    Albert is a frequent commentator, with more than 100 articles in the New York Times, Boston Globe, Guardian, WIRED, Slate, NBC Think, Newsweek, and other publications. He frequently lectures at leading universities and speaks at leading technology governance forums. Albert previously served as an associate at Weil, Gotshal & Manges LLP, where he advised Fortune 50 companies on technology policy, antitrust law, and consumer privacy.

    Albert also serves on the New York Immigration Coalition’s Immigrant Leaders Council, the New York Immigrant Freedom Fund’s Advisory Council, IEEE Standards Association P3119 AI Procurement Working Group, and is an editorial board member for the Anthem Ethics of Personal Data Collection. Albert received his J.D., cum laude, from Harvard Law School (where he was an editor of the Harvard Law & Policy Review), and his B.A. in Politics and Philosophy from Brandeis University.

  • 12:10
    Susannah Shattuck

    Responsible AI: A Strategic Approach to the Regulatory Challenges & Opportunities of AI

    Susannah Shattuck - Head of Product - Credo AI

    While enterprises are adopting artificial intelligence at ever-increasing rates, regulators are beginning to turn their focus to new requirements for AI systems. With sweeping new regulations like the EU AI Act on the horizon, and industry-specific regulations like the New York City algorithmic hiring law already on the books, it's clear that enterprises will need to adopt new strategies and processes for managing AI governance and compliance. In this talk, Susannah Shattuck, Head of Product at Credo AI, will discuss emerging trends in the AI regulatory landscape and share a Responsible AI strategy that can help your enterprise prepare for forthcoming regulation.

    Susannah Shattuck is Head of Product at Credo AI, where she builds AI governance tools that help organizations design, develop, and deploy ethical AI at scale. She has been working in AI governance for the last five years; her passion for AI governance can be traced back to her days on the IBM Watson implementations team, where she helped Fortune 50 companies adopt their first AI use cases and saw how critical trust was to the success of enterprise AI/ML deployments. Prior to joining Credo AI, Susannah built AI/ML products at Arthur AI, X, the Moonshot Factory (formerly Google X), and IBM Watson. She received her BA from Yale University and her MBA from Stanford Graduate School of Business.

  • 12:35

    LUNCH

  • 13:45
    Eryk Walczak

    Measuring Complexity of Banking Regulations Using Natural Language Processing & Network Analysis

    Eryk Walczak - Senior Research Data Scientist - Bank of England

    The banking reforms that followed the financial crisis of 2007–08 increased UK banking regulation from almost 400,000 to over 720,000 words, raising concerns about its complexity. We define complexity in terms of the difficulty of processing linguistic units, both in isolation and within a broader context, and use natural language processing and network analysis to calculate complexity measures on a novel dataset that covers the near universe of prudential regulation for banks in the United Kingdom before (2007) and after (2017) the reforms. Linguistic (i.e. textual) and network complexity in banking regulation is concentrated in a relatively small number of provisions, and the post-crisis reforms have accentuated this feature. In particular, comprehending provisions within a tightly connected ‘core’ requires following long chains of cross-references.

    Key Takeaways:

    • AI/ML techniques can be used to study the complexity of banking regulations

    • We describe the changes to UK banking regulations before and after the Great Financial Crisis (2007 vs. 2017)

    • We develop a new dataset that can be used for other purposes

    This research can be seen as an early step towards automating banking regulations (RegTech).
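
    The cross-reference chains described in the abstract can be sketched as a small graph exercise. The mini "rulebook" below is invented; the study's actual dataset covers UK prudential regulation, and its network measures are richer than this single statistic.

```python
# Toy version of the network view: provisions are nodes, cross-references are
# directed edges, and one simple complexity measure is the longest chain of
# references a reader must follow. Assumes the reference graph is acyclic.

def longest_reference_chain(xrefs, start):
    """Number of hops in the longest cross-reference chain from `start`."""
    targets = xrefs.get(start, [])
    if not targets:
        return 0
    return 1 + max(longest_reference_chain(xrefs, t) for t in targets)

# Provision A cites B and C; B cites C; C cites D -- a tightly connected "core".
rulebook = {"A": ["B", "C"], "B": ["C"], "C": ["D"]}
print(longest_reference_chain(rulebook, "A"))  # prints 3 (A -> B -> C -> D)
```

Real rulebooks can contain reference cycles, which a production version would need to detect rather than recurse into.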

    Eryk Walczak is a senior research data scientist in the Advanced Analytics Division at the Bank of England. Prior to joining the Bank, Eryk worked in analytic roles for a fintech and a social media company. His current research interests involve applying data science and experimental methods to study macroeconomics.

  • 14:05

    Fireside Chat: Should the Government Regulate AI?

  • Natesh Arunachalam

    MODERATOR

    Natesh Arunachalam - Lead Data Scientist - Finicity, a Mastercard Company

  • Frida Polli

    PANELIST

    Frida Polli - CEO & Co-Founder - Pymetrics

  • 15:00

    END OF SUMMIT
