15 - 16 June 2022

Trusted AI Summit Schedule

MLOps Summit San Francisco



  • 08:00

    Coffee & Registration

  • 09:00
    Dr. John Rares Almasan

    Trusted AI Stage: Chair Welcome

    Dr. John Rares Almasan - Associate Partner, Distinguished Engineer and Cloud & AI Technology Executive Leader - McKinsey & Company

    Dr. John Almasan is a distinguished engineer within McKinsey & Company's Cloud Data and Analytics team. An accomplished technology executive with 20+ years of experience, John has led global tech teams and built large-scale data, analytics, and cloud platforms at Bank of America, American Express, and Nationwide Insurance. An expert in multi-cloud big data engineering, machine learning, and data science, John focuses on accelerating AI adoption, as well as employee cross-training. He is a member of Arizona State University's Board of Advisors and serves as an adjunct professor at several universities, actively involved in preparing the next generation to meet future skill needs and demands. John holds two master's degrees (Engineering and Statistics) and a Doctorate in Business Administration. He is credited with 15+ patents and is the recipient of several awards.

  • 09:15

    Building AI Responsibly - From the Ground Up

  • Alexandra Ross

    SPEAKER

    Alexandra Ross - Senior Director, Senior Data Protection, Use & Ethics Counsel - Autodesk

    As the use of artificial intelligence, machine learning, and Big Data continues to develop across industries, companies are presented with increasingly complex legal, ethical, and operational challenges. Companies that create or work with AI offerings to support or enhance products or business methods need guidance to succeed. Learn how best to build and maintain an ethics-by-design program, leverage your existing privacy and security program framework, and manage stakeholders in this presentation from Autodesk's legal and data ethics leads.

    Key Takeaways:

    • Understand current best practices for ensuring compliance with key regulations focused on AI.

    • Learn how to engage stakeholders, leverage resources and build, staff and maintain an ethics program.

    • Get tips on building an ethical data culture, governance models, and training and awareness programs.

    Alexandra Ross is Senior Director, Senior Data Protection, Use & Ethics Counsel at Autodesk, Inc., where she provides legal, strategic, and governance support for Autodesk's global privacy, security, data use, and ethics programs. She is also an Advisor to BreachRx and an Innovators Evangelist for The Rise of Privacy Tech (TROPT). Previously she was Senior Counsel at Paragon Legal and Associate General Counsel for Wal-Mart Stores. She is a certified information privacy professional (CIPP/US, CIPP/E, CIPM, CIPT, FIP, and PLS), holds a law degree from UC Hastings College of the Law, and holds a BS in theater from Northwestern University. Alexandra is a recipient of the 2019 Bay Area Corporate Counsel Award for Privacy.

  • Alec Shuldiner

    SPEAKER

    Alec Shuldiner - Data Ethics Program Lead - Autodesk

    Alec Shuldiner, PhD, leads Autodesk's Data Ethics Program, a key component of the company's trusted data practices. He has a background in big data, compliance, and technological systems, and is an occasional IoT researcher and commentator.

  • 09:45
    Christelle Mombo-Zigah

    The Business Value of RAI and Why It Matters

    Christelle Mombo-Zigah - Responsible AI Committee Member - Cisco

    As a member of Cisco's Responsible AI committee, Christelle helps drive the AI/ML agenda across the entirety of the firm. She supports strategic priorities and promotes responsible and ethical AI development by focusing on customer and partner business value, impact, and opportunities.

    As a Senior Success Programs Manager for Global Enterprise West, she partners cross-functionally with customer-facing organizations (sales, engineering, and renewals) to define regional strategies, KPIs, and metrics and to meet the program's growth targets. She drives customer adoption across five Cisco architectures, managing 20 global enterprise customers to increase customer sentiment and renewal rates ($11M in AOV and 100% CSAT), and has launched two global innovations with a human-centric focus to improve the experience of Cisco's internal and external customers.

  • 10:15
    Nikon Rasumov

    Privacy and Fairness and MLX

    Nikon Rasumov - Product Manager - Meta

    AI data and feature engineering carries important privacy, fairness, and experience requirements, including lineage tracking, purpose limitation, retention, data minimization, and prevention of unauthorized use, as well as avoiding label imbalance, label bias, and model bias. I will talk about some of the techniques used to address these requirements.
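
    To make two of the requirements above concrete, here is a minimal, hypothetical sketch of a purpose-limitation check and a label-imbalance check. The Feature class, purposes, and data are illustrative assumptions, not Meta's actual tooling.

```python
# Hypothetical sketch: purpose limitation (a feature may only feed
# models whose purpose it was approved for) and label-imbalance
# detection. All names, purposes, and data here are illustrative.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    allowed_purposes: set = field(default_factory=set)  # e.g. {"ranking"}

def check_purpose_limitation(features, model_purpose):
    """Return names of features not approved for this model's purpose."""
    return [f.name for f in features if model_purpose not in f.allowed_purposes]

def label_imbalance_ratio(labels):
    """Majority/minority class count ratio; large values signal imbalance."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

features = [Feature("watch_time", {"ranking"}), Feature("age_bucket", {"ads"})]
print("purpose violations:", check_purpose_limitation(features, "ranking"))
# -> ['age_bucket']

labels = [0] * 950 + [1] * 50
print("imbalance ratio:", label_imbalance_ratio(labels))  # -> 19.0
```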

    Nikon Rasumov has 10+ years of experience building B2C and B2B start-ups from the ground up. He holds a Ph.D. in computational neuroscience from Cambridge University, as well as affiliations with MIT and Singularity University. As an expert in information-driven product design, his publications and patents deal with how to minimize vulnerabilities resulting from sharing too much information. Nikon's product portfolio includes Symantec Cyber Resilience Readiness™, SecurityScorecard Automatic Vendor Detection™, Symantec CyberWriter™, and Cloudflare Bot Management, along with various other insurance and security analytics platforms. Currently Nikon is responsible for privacy and developer experience of AI data and feature engineering at Facebook.

  • 10:40

    Morning Break

  • 11:00
    Kyra Yee

    Algorithmic Bias Bounties: A Community-Driven Approach to Surfacing Harms

    Kyra Yee - Machine Learning Research Engineer - Twitter

    Proactively detecting bias in machine learning models is difficult, and companies often fail to find out about harms until they’ve already reached the public. We want to change that. We were inspired by how bug bounties have been used in the security world to establish best practices for identifying and mitigating vulnerabilities in order to protect the public. We hope bias bounties can be used similarly to cultivate a community of people focused on ML ethics to help us identify a broader range of issues than we would be able to on our own. This is motivated by the belief that direct feedback from the communities who are affected by our algorithms helps us design products to better serve all people and communities. In this session, we will review some of the challenges of hosting a bias bounty and what we learned from people’s submissions.

    Kyra is a research engineer on the machine learning ethics, transparency, and accountability team at Twitter, where she works on methods for detecting and mitigating algorithmic harms. Prior to Twitter, she was a resident at Meta (formerly Facebook) AI research working on machine translation. She is passionate about working towards safe and equitable deployment of technology.

  • 11:30

    Understanding the Importance of Securing the ML Code with ML

  • Dr. John Rares Almasan

    SPEAKER

    Dr. John Rares Almasan - Associate Partner, Distinguished Engineer and Cloud & AI Technology Executive Leader - McKinsey & Company

    Cyberthreats are becoming smarter and more complex as threat actors gain access to newer, AI-driven, cloud-enhanced technologies. The ability to forecast cyberattacks before they happen depends on the same AI capabilities. How can we outsmart these threats? By augmenting traditional cybersecurity with machine learning and deep learning technologies. This session will define the threat actors, the types of security threats, and defensive techniques, and will culminate with a practical example of how Generative Adversarial Networks can help build proactive, resilient, and robust cybersecurity solutions.
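
    As a rough illustration of the GAN-based approach the abstract gestures at (not the speakers' actual example), here is a minimal PyTorch sketch on synthetic data: a GAN is trained on stand-in "benign traffic" feature vectors, and the trained discriminator's score is then used to flag unfamiliar traffic as a candidate anomaly. All architecture sizes and data are assumptions.

```python
# Hypothetical sketch: train a small GAN on synthetic "benign traffic"
# features, then reuse the discriminator as a crude anomaly scorer.
import torch
import torch.nn as nn

torch.manual_seed(0)
FEATURES, LATENT = 8, 4

G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, FEATURES))
D = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def benign_batch(n=64):
    # Stand-in for real, normalized traffic features.
    return torch.randn(n, FEATURES) * 0.5 + 1.0

for step in range(2000):
    real = benign_batch()
    fake = G(torch.randn(real.size(0), LATENT))

    # Discriminator: real -> 1, generated -> 0.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Low discriminator scores suggest traffic unlike anything seen as
# benign, i.e. a candidate anomaly for further inspection.
with torch.no_grad():
    normal_score = torch.sigmoid(D(benign_batch(1))).item()
    odd_score = torch.sigmoid(D(torch.randn(1, FEATURES) * 3 - 2)).item()
print(f"benign-like score={normal_score:.2f}, anomalous score={odd_score:.2f}")
```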

  • Jaspreet Singh

    SPEAKER

    Jaspreet Singh - Senior Principal Engineer - McKinsey & Company

    Jaspreet Singh is an experienced senior principal engineer in McKinsey & Company's Software Engineering and Enterprise Data & Analytics team. Since joining McKinsey in Jan 2002, Jaspreet has led various engagements related to enterprise data and analytics. He brings more than two decades of experience in building high-performing teams and delivering enterprise data and analytics solutions across a variety of technology platforms and functions. Jaspreet holds a post-graduate degree in Computer Science and several certifications, including AWS Solutions Architect Professional, Azure Data Engineer, Google Cloud Professional Data Engineer, Scrum Master, and Product Owner. In addition, he has three patents pending with the United States Patent and Trademark Office related to techniques for automated cloud data and technology solution delivery using machine learning and artificial intelligence modeling.

  • 12:00

    Panel Discussion: Create Trusted Models with Explainable AI

  • Dr. John Rares Almasan

    MODERATOR

    Dr. John Rares Almasan - Associate Partner, Distinguished Engineer and Cloud & AI Technology Executive Leader - McKinsey & Company

  • Frankie Cancino

    PANELIST

    Frankie Cancino - Data Scientist - Mercedes-Benz Research & Development

    Frankie Cancino is a Data Scientist at Mercedes-Benz Research & Development, working on applied machine learning initiatives. Prior to joining Mercedes-Benz R&D, Frankie was a Senior AI Scientist at Target AI, focused on methods to improve demand forecasting and anomaly detection. He is also the founder and organizer of Data Science Minneapolis, a community that brings together professionals, researchers, data scientists, and AI enthusiasts.

  • Sonu Durgia

    PANELIST

    Sonu Durgia - Product Lead, AI Fairness - Meta

    Sonu is the product lead for AI Fairness at Meta (previously Facebook). She has over 20 years of experience building and leading global teams across various functions and industries at Walmart, VMware, Oracle, and Morgan Stanley.

  • Usha Jagannathan, Ph.D.

    PANELIST

    Usha Jagannathan, Ph.D. - Principal Engineer and Digital Innovation Leader - McKinsey & Company

    Dr. Usha Jagannathan is a Principal Engineer & Digital Innovation Leader at McKinsey & Company within Technology & Digital. She brings twenty years of digital transformation experience building new technology ecosystems and empowering teams to build user-centric products that meet customer needs. Usha leads domain-level AI/ML applications for cloud transformations, drives innovation initiatives, supports the domain strategy to accelerate innovation, and coaches product development teams to deliver custom digital cloud-native solutions.

    Before McKinsey, Usha worked at a digital innovation lab for Marsh & McLennan and spearheaded the IT graduate degree program as a program chair at Arizona State University. Following her Ph.D. dissertation, she pioneered the concept of virtual IT lab immersion for online engineering degree programs to facilitate experiential learning.

    Her research interests include machine learning, ethical AI, decarbonization and AI, and measuring the carbon footprint of cloud instances. Usha is a passionate STEM advocate and works to increase industry diversity through apprenticeship and work-study programs.

  • 12:45

    Lunch

  • 13:45
    Shilpi Agarwal

    Reducing AI’s BMI

    Shilpi Agarwal - Founder & Chief Data Ethics Officer - DataEthics4All

    Do you know what AI's BMI is today? You guessed it: it's really high! So how do we fix this? Does Big Tech need to go on a DIET? And which diet would that be: keto, paleo, Mediterranean, or vegetarian? Come join us for this fun talk and learn the right diet for AI companies.

    Shilpi Agarwal is a Data Philanthropist, Adjunct Faculty at Stanford, and an MIT $100K Launch Mentor.

    Armed with technical skills from her Bachelor of Engineering in Computer Science, design-thinking skills from her Master's in Design, and 20+ years of business and marketing know-how as a marketing consultant for brands big and small, Shilpi founded DataEthics4All, troubled by the unethical use of data around her on social media, in business, and in political campaigns.

    DataEthics4All is a community driving the STEAM in AI™ movement for youth and celebrating the Ethics 1st™ champions of today and tomorrow. It has pledged to help five million economically disadvantaged students in the next five years by breaking down barriers to entry in tech and raising awareness of the ethical use of data in data science and artificial intelligence in the enterprise, working toward a better data and AI world.

  • 14:20
    Abhishek Gupta

    Key Lessons on Operationalizing Responsible AI

    Abhishek Gupta - Senior Responsible AI Leader & Expert, Boston Consulting Group (BCG); Founder & Principal Researcher, Montreal AI Ethics Institute

    As organizations realize the importance of Responsible AI, both to meet customer demands and to satisfy regulatory requirements, we're seeing a movement from principles to practice. Yet we continue to see mixed results in the actual implementation of these ideas. Primarily, organizations run into technical and organizational challenges that hinder the successful adoption of Responsible AI programs. This talk will dive into key lessons on how to effectively operationalize Responsible AI from both a technical and an organizational lens, focusing on comprehensiveness, change management, and the imperative for acting urgently.

    Abhishek Gupta is the Senior Responsible AI Leader & Expert with the Boston Consulting Group (BCG) where he works with BCG's Chief AI Ethics Officer to advise clients and build end-to-end Responsible AI programs. He is also the Founder & Principal Researcher at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy. Through his work as the Chair of the Standards Working Group at the Green Software Foundation, he is leading the development of a Software Carbon Intensity standard towards the comparable and interoperable measurement of the environmental impacts of AI systems.

    His work focuses on applied technical, policy, and organizational measures for building ethical, safe, and inclusive AI systems and organizations, specializing in the operationalization of Responsible AI and its deployment in organizations, and in assessing and mitigating the environmental impact of these systems. He has advised national governments, multilateral organizations, academic institutions, and corporations across the globe. His work on community building has been recognized by governments across North America, Europe, Asia, and Oceania. He is a highly sought-after speaker with talks at the United Nations, European Parliament, G7 AI Summit, TEDx, Harvard Business School, and Kellogg School of Management, amongst others. His writing on Responsible AI has been featured by the Wall Street Journal, MIT Technology Review, Protocol, Fortune, and VentureBeat, amongst others.

    He is an alumnus of the US State Department International Visitor Leadership Program, representing Canada, and received The Gradient Writing Prize 2021 for his work on The Imperative for Sustainable AI Systems. His research has been published in leading AI journals and presented at top-tier ML conferences like NeurIPS, ICML, and IJCAI. He is the author of the widely read State of AI Ethics Report and The AI Ethics Brief. He formerly worked at Microsoft as a Machine Learning Engineer in Commercial Software Engineering (CSE), where his team helped solve the toughest technical challenges faced by Microsoft's biggest customers. He also served on the CSE Responsible AI Board at Microsoft.

  • 14:20

    Round Table Discussions

  • Shilpi Agarwal

    Round Table Topic Leader: Data Ethics in Business - The Cornerstone of Customer Trust

    Shilpi Agarwal - Founder & Chief Data Ethics Officer - DataEthics4All

  • Ban Kawas

    Round Table Topic Leader: Explainable AI (XAI) and Its Role in Building Trusted AI

    Ban Kawas - Senior Research Scientist - Reinforcement Learning - Meta

    Ban is a Senior AI Research Scientist at Meta. She works on democratizing reinforcement learning and enabling its use in the real world, spanning application areas from compiler optimization to embodied AI. Ban and her team are developing ReAgent, an end-to-end platform for applied RL; check out the open-source version at https://reagent.ai/.

  • Naman Kohli

    Round Table Topic Leader: Causal Analysis

    Naman Kohli - Applied Scientist - Amazon

  • 15:15

    Afternoon Networking Break

  • 15:45
    Aalok Shanbhag

    Overcoming 'Black Box' Model Challenges

    Aalok Shanbhag - Senior Machine Learning Engineer - Snap Inc.

  • 16:15
    Arjun Prakash

    The Benefits of Programmatic Labeling for Trustworthy AI

    Arjun Prakash - Director of Solutions - Snorkel AI

    Arjun heads Snorkel AI's product and GTM strategy for use cases and solutions across industries. He joined Snorkel AI from Palantir, where he was an early employee of the commercial business and spent eight years building and leading teams across healthcare, financial services, and commercial strategy. Prior to Palantir, Arjun was a researcher at BlackRock and Siemens Corporate Research, and he holds a degree in electrical and computer engineering from Cornell University.

  • 16:40
    Apostol Vassilev

    Bridging the Ethics Gap Surrounding AI

    Apostol Vassilev - Research Team Lead; AI & Cybersecurity Expert - National Institute of Standards and Technology (NIST)

    This session will motivate the need for a comprehensive socio-technical approach to assessing the impact of AI on individuals and society. While there are many approaches for ensuring the technology we use every day is safe and secure, there are factors specific to AI that require new perspectives. AI systems are often placed in contexts where they can have the most consequential impact on people. Whether that impact is helpful or harmful is a fundamental question for the field of Trustworthy and Responsible AI. Trustworthy and Responsible AI is not just about whether a given AI system is biased, fair, or ethical, but whether it does what is claimed. Many practices exist for responsibly producing AI: transparency; test, evaluation, validation, and verification of AI systems and datasets; human factors such as participatory design techniques and multi-stakeholder approaches; and a human-in-the-loop. However, none of these practices, individually or in concert, is a panacea against bias, and each brings its own set of pitfalls. What is missing from current remedies is guidance from a broader socio-technical perspective that connects these practices to societal values. To successfully manage the risks of AI bias, we must operationalize these values and create new norms around how AI is built and deployed. This is the approach taken in the recent NIST SP 1270: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, https://doi.org/10.6028/NIST.SP.1270.

    Apostol Vassilev leads a Research Team at NIST. His team focuses on a wide range of AI problems: AI bias identification and mitigation, meta learning with large language models for various NLP tasks, robustness and resilience of AI systems, applications of AI for mitigating cybersecurity attacks. Apostol’s scientific background is in mathematics (Ph.D.) and computer science (MS), but he is also interested in social aspects of using AI technology and advocates for a comprehensive socio-technical approach to evaluating AI’s impact on individuals and society.

  • 17:10
    Dr. John Rares Almasan

    Trusted AI Closing Chair Remarks

    Dr. John Rares Almasan - Associate Partner, Distinguished Engineer and Cloud & AI Technology Executive Leader - McKinsey & Company

  • 17:15

    Networking Reception

  • 18:15

    End of Day One

  • DAY TWO: THIS EVENT STARTS AT 8:45 AM

  • 08:45

    Coffee & Registration

  • 09:45
    Dr. John Rares Almasan

    Trusted AI Stage: Chair Welcome

    Dr. John Rares Almasan - Associate Partner, Distinguished Engineer and Cloud & AI Technology Executive Leader - McKinsey & Company

  • 10:00
    Kathy Baxter

    Our Role in Guiding Responsible AI Regulation

    Kathy Baxter - Principal Architect, Ethical AI Practice - Salesforce

    Major technological advancements rarely begin as safe, inclusive, or focused on long-term societal impacts. As we have all seen, and some have painfully experienced, AI is no different in that regard. It is different, however, in the sheer scale, speed, and complexity of its impact, so it is unsurprising that there is significant effort to create standards, frameworks, and regulations. There are still many questions to be answered about how to standardize or regulate AI, but there are things every organization creating and implementing AI can do to prepare for upcoming regulations and create trustworthy technology, which Kathy will share.

    As Principal Architect of Ethical AI Practice at Salesforce, Kathy develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. Prior to Salesforce, she worked at Google, eBay, and Oracle in user experience research. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, "Understanding Your Users," was published in May 2015. You can read about her current research at einstein.ai/ethics.

  • 10:30

    Ethical AI in Healthcare

  • Anne de Hond

    SPEAKER

    Anne de Hond - Visiting Researcher - Stanford University School of Medicine

  • Marieke van Buchem

    SPEAKER

    Marieke van Buchem - Visiting Researcher - Stanford University

  • 11:00
    John Lunsford

    Bringing Infrastructure into Present and Future Considerations of AI Mistrust

    John Lunsford - User Experience Researcher - Uber

    The ride-for-hire industry has been around for a long time: more than 800 years, in fact. Some of its earliest iterations incorporated rudimentary algorithmic decision-making into the activity of for-hire transit. Without discussions of fairness, these systems went on to structure modern society's unequal transportation environment, allowing fairness to apply only to those already in power. As we develop AI solutions to address problems of inequality in access, we have to consider how the promise of fairness is mediated by the unfair systems AI depends on to function. That interaction then becomes the foundation for trust, or mistrust, in AI's deployment and its ability to address problems of fairness in social, political, economic, and material systems. John will share ways to approach tracking, documenting, and building AI fairness practices into landscapes that were not always designed to accommodate them.

    A User Experience Researcher in Safety, John earned his/their PhD in Communication from Cornell University in 2021, along with an MS in Communication, an MA in Cultural Anthropology, and a BS in Political Science. A classical ethnographer by training, John has expanded an anthropological approach to encompass media studies, social psychology, political science, and urban design. It's from that mixed vantage that John considers the effects on and of technology on social processes and structures, documenting for his PhD the legacy of for-hire transportation's impact on the evolution of unequal access, its reflection of dominant societal priorities, and its impact on emerging rideshare and autonomous transportation systems. John's current work in the realm of safety blends a passion for wicked problems with the demands of real-world complexities impacting the transportation landscape.

  • 11:30

    Break

  • 11:45
    Julia Anderson

    The Ethics of Conversational AI

    Julia Anderson - Conversation Designer - Freelance

    Julia Anderson is a writer and conversation designer based in Los Angeles. While working as a healthcare consultant, Julia saw how technology affected lives and decided to explore what AI had to offer. Fascinated by interaction design and the user experience, she now works with conversational AI’s broad capabilities. Whether it is helping design voice assistants like Bixby or demonstrating best practices for multimodal devices, she is eager to create technology that communicates like you do.

  • 12:15
    Kinjal Basu

    Operationalizing Responsible AI in Large-Scale Organizations

    Kinjal Basu - Senior Staff Software Engineer - LinkedIn

    Most large-scale organizations face challenges while scaling their infrastructure to support multiple teams across multiple product domains. More often than not, individual teams build systems and models to power their specific product areas, but because of the innate differences in the products and infrastructure support, the broad use of Responsible AI techniques poses a serious challenge for organizations.

    Each product can potentially have a different definition of “fairness” across different dimensions and hence require very different measurement and mitigation solutions. In this talk, we will focus on how we are building a scalable system on our machine learning platform that can not only measure but also mitigate unintended consequences of AI models across most products at LinkedIn.

    We will discuss how this system aims to seamlessly integrate into each and every AI pipeline and measure unfairness across different protected attributes. The system is flexible enough to incorporate different definitions of fairness as required by the product. Moreover, if and when algorithmic bias is detected, we also have a system to remove such bias through state-of-the-art AI algorithms across different notions of fairness. That said, we are just getting started; there is much more work to be done, and we don't have all the answers yet.

    Finally, all of the above points to good intent towards ethical practices, but the real win comes from the actual member impact after launching such bias-mitigated models in production. We will also discuss how we A/B test our models and systems once they are launched in production and incorporate those learnings to improve the overall member experience, connecting the overall intent-and-impact cycle.
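
    As a toy illustration of the "measure" half of such a system (not LinkedIn's implementation), the sketch below computes a demographic parity gap, the difference in positive-decision rates across protected groups, which a pipeline could compare against a threshold before triggering mitigation. All names and data are illustrative assumptions.

```python
# Hypothetical sketch: one per-group fairness metric (demographic
# parity gap) over a model's binary decisions. Illustrative only.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Max difference in positive-decision rate across protected groups."""
    pos, tot = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        tot[g] += 1
        pos[g] += d
    rates = {g: pos[g] / tot[g] for g in tot}
    return max(rates.values()) - min(rates.values()), rates

# Toy pipeline output: 1 = shown the opportunity, 0 = not shown.
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(rates)                      # {'A': 0.8, 'B': 0.2}
print(f"parity gap = {gap:.2f}")  # flag for mitigation if above a threshold
```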

    Kinjal is currently a Sr. Staff Software Engineer and the tech lead for Responsible AI at LinkedIn, focusing on challenging problems in fairness, explainability, and privacy. He leads several initiatives across different product applications toward making LinkedIn a responsible and equitable platform. He received his Ph.D. in Statistics from Stanford University with a best thesis award and has published papers in many top journals and conferences. He serves as a reviewer and program committee member at multiple top venues, such as NeurIPS, ICML, KDD, FAccT, and WWW.

  • 12:45
    Supreet Kaur

    Closing General Session: Striking the Right Balance: ML or No ML

    Supreet Kaur - Assistant Vice President - Morgan Stanley

    Supreet is an AVP at Morgan Stanley. Prior to Morgan Stanley, she was a management consultant at ZS Associates, where she automated workflows and built data-driven solutions for Fortune 500 clients. She is extremely passionate about technology and AI, and she founded her own community, DataBuzz, where she engages the audience by sharing the latest AI and tech trends and mentors people who want to pivot into this field.

  • 13:05
    Dr. John Rares Almasan

    Closing Chair Remarks

    Dr. John Rares Almasan - Associate Partner, Distinguished Engineer and Cloud & AI Technology Executive Leader - McKinsey & Company

  • 13:15

    Lunch

  • 14:15

    End of Summit
