• 08:00

    REGISTRATION OPENS

  • 09:00
    Jinjin Zhao

    WELCOME & OPENING REMARKS

    Jinjin Zhao - Manager of ML Science & Engineering - Amazon

    Jinjin is a Senior Applied/ML Scientist at Amazon with 6+ years of research and practical application experience across several domains (supply chain, retail, advertising and recommender systems, voice assistants, AI education). Jinjin has been devoted to AI education since early 2018. Over the past two to three years she has published 10 research papers at various conferences (ACM [email protected], AAAI, Educational Data Mining, etc.), with more under double-blind peer review. She is enthusiastic about research and about bringing researchers together for bigger wins, and she leads the dWdt initiative, mentoring and guiding junior scientists and engineers in their growth in research and practical AI applications.

  • BUILDING AI IN ENTERPRISE

  • 09:15
    Wenting Sun

    Things You Need to Consider in MLOps for Your ML Production Systems

    Wenting Sun - Senior Data Science Manager - Ericsson

    Things You Need to Consider in MLOps for Your ML Production Systems

    This talk will give a broad overview of the technology stack needed to support efficient MLOps for both ML development and deployment. We will discuss the considerations that have to be made for different deployment scenarios (e.g., AIaaS vs. embedded AI/ML). The talk will also cover practical design choices under the compute and communication-bandwidth constraints of a production ML system.

    Wenting Sun is a Senior Data Science Manager at Ericsson. She leads a team of data scientists and data engineers developing cutting-edge AI/ML applications in the telecommunications domain and drives some of Ericsson's AI-related open-source initiatives. She is a governing board member of the Linux Foundation AI (LF AI).

  • 09:35
    Jerry Xu

    Building Machine Learning Infrastructure At Scale

    Jerry Xu - Architect for Business Integrity's Machine Learning Infrastructure - Facebook

    Jerry Xu is the Architect for Business Integrity's Machine Learning Infrastructure at Facebook. Jerry has over 20 years of experience building high-performance, large-scale systems, with multiple engineering and leadership positions at Lyft, Box, Twitter, Zynga, and Microsoft. Before joining Facebook, he was the co-founder and CEO of Datatron Technologies, a pioneer in MLOps automation. Jerry holds several patents on machine learning and data storage.

  • 09:55
    Braden Hancock

    Automated Data Labeling: The Power of Going Programmatic

    Braden Hancock - Co-founder & Head of Technology - Snorkel AI

    Automated Data Labeling: The Power of Going Programmatic

    Labeling training data is exhausting—the de facto bottleneck most AI teams face today. Eager to alleviate this pain point of AI development, practitioners have long sought ways to automate this labor-intensive labeling process. Automate too little (e.g., with manual labeling optimizations such as active learning or model-assisted labeling) and the gains are marginal. Automate too much and your model becomes disconnected from the essential human-provided domain knowledge it needs to solve relevant problems. The key to truly transformative (e.g., 10x to 100x) efficiency improvements is to change the interface to labeling altogether, moving from manual labeling, where individual labels are collected one by one, to programmatic labeling with labeling functions that capture labeling rationales. The result is a labeling process that is significantly more scalable, adaptable, and governable. In this talk, we review these techniques for automating parts of the labeling process, show how the Snorkel Flow platform integrates them in a unified framework, and share real-world experiences from Fortune 500 companies that have made the transition from manual to programmatic labeling.
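
    To make the idea of "labeling functions that capture labeling rationales" concrete, here is a minimal, library-free sketch: each function encodes one heuristic and votes (or abstains) on a label, and the noisy votes are combined into a weak training label. The function names and rules below are invented for illustration; Snorkel Flow aggregates such signals with a learned label model rather than a simple majority vote.

```python
# A simplified stand-in for programmatic labeling: heuristic labeling functions
# plus a naive majority vote over non-abstaining votes. Illustrative only.
SPAM, NOT_SPAM, ABSTAIN = 1, 0, -1

def lf_contains_link(text: str) -> int:
    return SPAM if "http://" in text or "https://" in text else ABSTAIN

def lf_mentions_prize(text: str) -> int:
    return SPAM if "prize" in text.lower() or "winner" in text.lower() else ABSTAIN

def lf_short_reply(text: str) -> int:
    return NOT_SPAM if len(text.split()) < 4 else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_link, lf_mentions_prize, lf_short_reply]

def weak_label(text: str) -> int:
    votes = [lf(text) for lf in LABELING_FUNCTIONS if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)  # majority vote over non-abstentions

print(weak_label("Congratulations, you are our prize winner! https://spam.example"))
```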

    Braden is a co-founder and Head of Technology at Snorkel AI. Before Snorkel, Braden researched and developed new interfaces to machine learning systems in academia (Stanford, MIT, Johns Hopkins, BYU) and industry (Facebook, Google).

  • 10:15

    COFFEE & NETWORKING BREAK

  • SUPPORTING AI IN ENTERPRISE

  • 10:50
    Julie Amundson

    Why Your ML Infrastructure Team Needs ML Practitioners

    Julie Amundson - Director, Machine Learning Infrastructure - Netflix

    Why Your ML Infrastructure Team Needs ML Practitioners

    Have you ever built ML infrastructure that wasn't as popular as you'd hoped? In this talk we explore the practice of going beyond user empathy to feeling the pain of our users. This practice has helped guide the development of Metaflow since 2017, and Metaflow has grown to hundreds of projects within Netflix and dozens of companies since it was open sourced in 2019. David Berg, an engineer with Netflix's ML Platform, will give you some practical tips on how to explore the ML infrastructure problem space by walking in the shoes of our users.
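
    For readers unfamiliar with the tool named above, this is a minimal sketch of the kind of flow a Metaflow user writes (the flow name and its steps are invented for illustration, not taken from the talk): steps are plain Python methods, and anything assigned to `self` is tracked as an artifact between steps.

```python
# A minimal Metaflow flow. Run with: python train_flow.py run
from metaflow import FlowSpec, step

class TrainFlow(FlowSpec):

    @step
    def start(self):
        self.examples = list(range(10))   # stand-in for loading training data
        self.next(self.train)

    @step
    def train(self):
        self.model = sum(self.examples)   # stand-in for fitting a real model
        self.next(self.end)

    @step
    def end(self):
        print(f"trained model artifact: {self.model}")

if __name__ == "__main__":
    TrainFlow()
```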

    Julie leads Machine Learning Infrastructure at Netflix, with the goal of scaling data science while increasing innovation. She previously built the streaming infrastructure behind the "play" button while Netflix was transitioning from a domestic DVD-by-mail service to an international streaming service. Julie also co-founded Order of Magnitude Labs, with a mission to build AI capable of doing things that humans find easy and today's machines find hard: exploration, communication, creativity, and accomplishing long-range goals. Early in her career, Julie developed data processing software at Lawrence Livermore National Laboratory that enabled scientists to study the newly sequenced human genome.

  • 11:10
    Aashish Sheshadri

    Tesseract: ML in Container Infrastructure

    Aashish Sheshadri - Staff Machine Learning Engineer - PayPal

    Tesseract: ML in Container Infrastructure

    A hybrid elastic-cloud and on-prem infrastructure is the new normal, and ML has been called upon to unlock the efficiency gains this new dynamic makes possible. Tesseract is PayPal's ML enabler for infrastructure, finding the optimal balance of resource efficiency and cost while maintaining strong reliability and resiliency guarantees. In this talk, we will introduce one such ML enabler from Tesseract that drives pool right-sizing at all times to meet anticipated demand. We present a hybrid deep learning and statistical approach to modeling anticipated demand; an accurate measure of incoming demand enables us to stay right-sized while keeping guarantees on reliability, resiliency, and availability. Our experience dealing with demand volatility, and how we temper it to enable actionable decisions with clear tradeoffs, will be the centerpiece of this talk.
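
    As a small, hypothetical illustration of a "statistical baseline plus ML on the residuals" hybrid forecast in the spirit described above (this is not PayPal's Tesseract implementation, and the data is synthetic), the sketch below uses a seasonal-naive baseline and a gradient-boosted model on the residuals.

```python
# Hybrid demand forecast sketch: seasonal-naive baseline + ML on residuals.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
t = np.arange(24 * 60)                                  # 60 days of hourly demand
demand = 100 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)

period = 24
baseline = np.roll(demand, period)                      # seasonal naive: same hour yesterday
residual = demand - baseline

# Lag features on the residual; drop the first two periods where rolled values wrap.
X = np.column_stack([np.roll(residual, lag) for lag in (1, 2, period)])[2 * period:]
y = residual[2 * period:]

model = GradientBoostingRegressor().fit(X[:-period], y[:-period])  # hold out last day
hybrid_forecast = baseline[2 * period:][-period:] + model.predict(X[-period:])
print(hybrid_forecast[:5])
```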

    Aashish Sheshadri is a Staff Machine Learning Engineer at PayPal, where he enables ML for infrastructure, applying it to infrastructure efficiency, security hardening, and accelerating MLOps. Before PayPal, Aashish spent 4 years in robotics and NLP research at UT Austin and CMU.

  • 11:30
    Surabhi Bhargava

    You Deployed Your Machine Learning Model, What Could Possibly Go Wrong?

    Surabhi Bhargava - Machine Learning Engineer - Adobe

    You Deployed Your Machine Learning Model, What Could Possibly Go Wrong?

    Training a highly accurate ML model is hard. Deploying it in production is even harder. And even if you manage to get it all together, things do not stop there: there is plenty that needs to be done after a model is deployed. In this presentation, we will talk about why deployment is not the last step in the ML model lifecycle, what could possibly go wrong after model deployment, and finally, how that can be prevented.

    Surabhi is a Senior Machine Learning Engineer at Adobe building products for document intelligence. She is passionate about improving experiences using AI and ML techniques. Surabhi holds a master's degree in Machine Learning from Columbia University and completed her undergraduate studies at IIT Guwahati, India. Apart from enterprise use cases, she has conducted research in the fields of healthcare and social good.

  • 11:50
    Heather Nolis

    From POC to Multi-Million Dollar Investment: Rapid Scaling of AI Prototypes

    Heather Nolis - Principal Engineer, Software, Machine Learning - T-Mobile

    From POC to Multi-Million Dollar Investment: Rapid Scaling of AI Prototypes

    A quick AI prototype is the best way to get buy-in for your great new concept! But after you have that green light from your enterprise, how do you scale? T-Mobile's enterprise AI team began as a small, 12-week prototype and is now a fully enterprise-essential team with multiple projects spanning business units and dozens of engineers. In this talk, Heather will guide us through the lessons she's learned in her 4 years of experience designing prototypes, running POCs, and scaling successful experiments to create software that makes millions of real-time predictions a day.

    Key Takeaways: • In agile POCs, people tend not to write logs, but without logs you cannot iterate on your prototype at all. Logging must be part of the MVP for AI prototypes (see the sketch after this list).

    • Design must be agile and run alongside dev teams. Agile design must be informed by data.

    • Make your teams fully stacked – from analyst to DevOps engineer – to avoid wasting cycles coordinating with other teams for essential functions.
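
    As a hedged illustration of the logging takeaway above (not T-Mobile's implementation; the model, field names, and version string are invented), here is the kind of minimal prediction log an AI prototype can emit from day one.

```python
# Log every prediction with enough context to debug and iterate on later.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("predictions")

MODEL_VERSION = "intent-classifier-0.1.0"   # hypothetical prototype model

def predict(text: str) -> str:
    label = "billing" if "bill" in text.lower() else "other"   # stand-in for a real model
    logger.info(json.dumps({
        "request_id": str(uuid.uuid4()),
        "ts": time.time(),
        "model_version": MODEL_VERSION,
        "input_chars": len(text),           # log metadata, not raw PII
        "prediction": label,
    }))
    return label

predict("Why is my bill so high this month?")
```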

    Heather Nolis is a founding member of the AI @ T-Mobile team, focusing on the conversion of cutting-edge analyses into real-time, scalable, data-driven products. She spends her time IRL with her number-obsessed son Amber in rainy Seattle. You can find her @heatherklus on Twitter, where she talks about super sweet machine learning, data for good, and bad reality television.

  • 12:10
    Andres Asaravicius

    From Startup To Unicorn: What We’ve Learned From Scaling Models In Production

    Andres Asaravicius - Data Science Team Lead - Riskified

    From Startup To Unicorn: What We’ve Learned From Scaling Models In Production

    Scaling machine learning is not an easy task, and it is even more challenging when your machine learning solution is at the core of your company and you need to keep pace with the hyper-growth of your business and the industries you serve. For Riskified, an eCommerce risk management platform founded in 2013, the rapid growth of eCommerce required equally rapid, exponential scaling of its machine learning models from day one. This talk will provide meaningful insight into the lessons learned while building and scaling our machine learning platform, models, and processes.

    Andres has been with Riskified for over 6 years; he leads a multidisciplinary team of experienced data scientists and engineers. Over the years, he has led end-to-end research and has been a key player in building the data science infrastructure. Today, his team focuses on building state-of-the-art tools to detect sophisticated fraud attacks early, helping Riskified block millions of dollars of potential fraud each year. With a background in statistics and sociology, he aims to transform complex data into simple, clear, and practical knowledge. He is passionate about machine learning technologies, soccer, and basketball.

  • 12:30

    LUNCH

  • APPLYING AI IN ENTERPRISE

  • 13:30
    Hien Luu

    MLOps at DoorDash

    Hien Luu - Senior Engineering Manager - DoorDash

    MLOps at DoorDash

    MLOps is one of the hottest topics being discussed in the ML practitioner community. Streamlining ML development and productionizing ML are important ingredients in realizing the power of ML; however, they require a vast and complex infrastructure, and the ROI of ML projects starts only when they are in production. The journey to implementing MLOps will be unique to each company. At DoorDash, we've been applying MLOps for a couple of years to support a diverse set of ML use cases and to perform large-scale predictions at low latency. This session will share our approach to MLOps, as well as some of our learnings and challenges.

    Hien Luu is a Sr. Engineering Manager at DoorDash, leading the Machine Learning Platform team. Hien is particularly passionate about the intersection between big data and machine learning infrastructure. Hien has given presentations at various conferences such as Data + AI Summit, XAI Summit, MLOpsWorld, MLOps Salon, Apply(), YOW!, and QCon.

  • 13:50
    Emily Curtin

    Stop Making Data Scientists Do Systems

    Emily Curtin - Senior Machine Learning Engineer - Mailchimp

    Making Code and Humans GPU-Capable at Mailchimp

    What happens when you have a bunch of data scientists, a bunch of new and old projects, a big grab-bag of runtime environments, and you need to get all those humans and all that code access to GPUs? Come see how the ML Eng team at Mailchimp wrestled first with connecting abstract containerized processes to very-not-abstract hardware, then scaled that process across tons of humans and projects. We’ll talk through the technical how-to with Docker, Nvidia, and Kubernetes, but all good ML Engineers know that wrangling the tech is only half the battle and the human factors can be the trickiest part.

    3 Key Takeaways: • An overview of the call stack from the container, through the orchestration framework and OS, all the way down to real GPU hardware

    • How ML Eng at Mailchimp provides GPU-compatible dev environments for many different projects and data scientists

    • An experienced take on how to balance data scientists' human needs against heavy system optimization (spoiler alert: favor the humans)
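
    As a concrete (and hypothetical) companion to the first takeaway, here is the kind of sanity check a containerized job can run to confirm it can actually reach the GPU the orchestrator scheduled it onto. It assumes PyTorch is installed in the image and is not Mailchimp's tooling.

```python
# Check whether code inside the container can see the GPU it was scheduled onto.
import os
import torch

visible = os.environ.get("CUDA_VISIBLE_DEVICES", "<unset>")  # typically set by the runtime
print(f"CUDA_VISIBLE_DEVICES={visible}")

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No GPU visible; falling back to CPU")
```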

    Emily May Curtin is a Senior Machine Learning Platform Engineer at Mailchimp, which is definitely what she thought she’d be doing back when she went to film school. She combines her wealth of experience in DevOps, data engineering, distributed systems, and “cloud stuff” to enable Data Scientists at Mailchimp to do their best work. Truthfully, she’d rather be at her easel painting hurricanes and UFOs. Emily lives (and paints) in her hometown of Atlanta, GA, the best city in the world, with her husband Ryan who’s a pretty cool guy.

  • 14:10
    Matt Linder

    Mindful ML at Headspace Health: How to Practice Personalization Without Sacrificing Your Values

    Matt Linder - Senior Machine Learning Engineer - Headspace Health

    Mindful ML at Headspace Health: How to Practice Personalization Without Sacrificing Your Values

    Headspace Health is a values-first company: through 10 years of innovating the use of technology in meditation and mindfulness, the focus has never shifted from making sure that our approach to technological implementation is as mindful as our content. Over the last year, much of that innovation has centered around ML-powered personalization: from in-app content recommendation to push notifications to AI-driven dynamic welcome flows. But implementing ML/AI solutions brings unique challenges: personalization is inherently intimate and demands great sensitivity in the ways we use and communicate member data. Additionally, de-biasing datasets and models is a huge priority, since both domains are prone to many types of bias. In this talk, we’ll explore how Headspace Health balances the intimate and data-centric process of building ML solutions while maintaining our brand identity as a mindful actor.

    Matt Linder is a Senior Machine Learning Engineer at Headspace Health, where he develops and implements models, maintains internal tools and libraries, and builds out ML-powered prototypes. He loves collaborating with various squads, teams, and product owners to develop AI-powered solutions. In another life, Matt was and is a touring professional classical guitarist with his group Mobius Trio. They focus on developing new music for plucked strings - collaborating with composers, other performers, and institutions to do so.

  • 14:30
    Max Li

    Assign Experiment Variants at Scale in Online Controlled Experiments

    Max Li - Senior Data Scientist - Wish

    Assign Experiment Variants at Scale in Online Controlled Experiments

    Randomization is the key to establishing causality in controlled experiments, and online controlled experiments (A/B tests) have become the gold standard for learning the impact of new product features at technology companies. Randomization enables inference of causality from an A/B test: because the randomized assignment maps end users to experiment buckets and balances user characteristics (both observed and unobserved) between the groups, experiments can attribute any outcome differences between the experiment groups (control and treatment) to the product feature under experiment. Technology companies run A/B tests at scale – hundreds if not thousands of A/B tests concurrently, each with hundreds of millions of users. This scale poses unique challenges to randomization. First, the randomized assignment must be computationally fast, since the experiment service handles hundreds of thousands of queries per second (QPS), and QPS grows quickly in a hypergrowth company. Second, variant assignments must be independent between the hundreds of experiments a user is assigned to. Third, the assignment must be consistent when a user revisits the same experiment or when more users are included in the experiment. We present a novel assignment algorithm and provide statistical tests to validate the randomized assignments. The results show that the algorithm is not only computationally fast but also satisfies the statistical requirements: unbiased and independent between experiments.
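
    As a minimal sketch of the three requirements the abstract calls out (fast, consistent per user and experiment, and independent across experiments), here is the classic hash-based bucketing approach. This is an illustration, not Wish's actual algorithm.

```python
# Deterministic, hash-based variant assignment.
import hashlib

def assign_variant(user_id: str, experiment_id: str,
                   variants=("control", "treatment")) -> str:
    # Hashing the (experiment, user) pair makes the assignment stable for a given
    # user and experiment, while different experiments hash independently.
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

print(assign_variant("user-42", "checkout-redesign"))   # stable across calls
print(assign_variant("user-42", "new-ranking-model"))   # independent of the first experiment
```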

    Max is a senior data scientist at Wish where he focuses on experimentation (A/B testing) and machine learning. He has been improving the A/B testing platform at Wish on various fronts, including infrastructure, statistical testing, usability, etc. His passion is to empower data-driven decision-making through the rigorous use of data. Max earned his Ph.D. in Statistical Informatics from the University of Arizona.

  • 14:50

    COFFEE & NETWORKING BREAK

  • 15:20
    Adam Kraft

    Where are we with AutoML?

    Adam Kraft - Machine Learning Engineer - Google Brain

    Where are we with AutoML?

    AutoML aims to help everyone achieve state-of-the-art AI for their specific problems. Techniques such as Neural Architecture Search (NAS) push the boundary of finding efficient and high quality models. How is AutoML being used today and where is the field headed in the future? This talk gives an overview of AutoML techniques, exploring how they work and the challenges of applying them across different AI tasks and settings.
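
    To make the core AutoML loop concrete (define a search space, sample candidates, evaluate, keep the best), here is a deliberately simple random-search sketch. Real NAS systems use far more sophisticated search strategies and genuinely train each candidate; everything below, including the toy scoring function, is invented for illustration.

```python
# Random search over a tiny configuration space: the simplest AutoML strategy.
import random

SEARCH_SPACE = {"layers": [1, 2, 3], "units": [16, 32, 64], "activation": ["relu", "tanh"]}

def sample_config():
    return {name: random.choice(choices) for name, choices in SEARCH_SPACE.items()}

def evaluate(config):
    # Stand-in for "train the candidate model and measure validation accuracy".
    return 0.70 + 0.02 * config["layers"] + 0.0005 * config["units"] + random.uniform(0, 0.03)

candidates = [sample_config() for _ in range(20)]
best = max(candidates, key=evaluate)
print("best configuration found:", best)
```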

    Adam Kraft is a machine learning engineer on the Google Brain Team, working on AutoML for a wide variety of AI tasks. Before Google, Adam spent eight years in computer vision and machine learning, working with satellite imagery at Orbital Insight and helping customers shop with their camera phones at Amazon.

  • 15:40
    Gabe Zingaretti

    How to use AI to manage NPLs (Non-Performing Loans)

    Gabe Zingaretti - COO - Altada

    How to Use AI to Manage NPLs (Non-Performing Loans)

    Gabe Zingaretti will give practical examples of how Altada's AI solutions can be applied to asset management, contract intelligence, and due diligence in NPL acquisitions.

    He will also review the industry outlook and growing regulatory pressure, which are increasing the need for businesses to accelerate their digital transformation by adopting technology that improves operational efficiency and unlocks value from large sources of structured and unstructured data.

    Gabe Zingaretti has a doctorate in Biomedical Engineering from the University of Rome La Sapienza and was a postdoctoral research fellow at Columbia University, NY.

    With 25 years of experience building exceptional teams and companies from seed round to IPO, his entrepreneurial spirit has led him to take on leadership roles in several start-ups spanning artificial intelligence, machine learning, and surgical robotics.

    As COO he has led and mentored several business functions, including R&D, manufacturing, supply chain, accounting, IP, and customer support. While his background is technical by training, he also has strong business development, sales, and worldwide customer success experience.

  • 16:00

    PANEL: Strategies for Effectively Building, Deploying & Monitoring AI

  • Ankit Jain

    Moderator

    Ankit Jain - Senior Research Scientist - Meta

    Ankit Jain currently works as a machine learning tech lead at Meta, where he works on a variety of growth ranking and business integrity problems. Previously, he was an ML researcher at Uber AI, where he worked on applications of deep learning to problems ranging from food delivery and fraud detection to self-driving cars. He has co-authored a book on machine learning titled TensorFlow Machine Learning Projects. Additionally, he has been a featured speaker and published papers at many of the top AI conferences and universities. He was recently named one of India's top 40-under-40 data scientists. He earned his MS from UC Berkeley and his BS from IIT Bombay (India).

  • Jacqueline Nolis

    Panelist

    Jacqueline Nolis - Head of Data Science - Saturn Cloud

    Dr. Jacqueline Nolis is a data science leader with over 15 years of experience in managing data science teams and projects at companies ranging from DSW to Airbnb. She currently is the Head of Data Science at Saturn Cloud where she helps design products for data scientists. Jacqueline has a PhD in Industrial Engineering and co-authored the book Build a Career in Data Science.

  • Édouard d'Archimbaud

    Panelist

    Édouard d'Archimbaud - Co-Founder & CTO - Kili Technology

    Édouard d'Archimbaud is a Data Scientist and the CTO of Kili Technology. He co-founded the company in 2018 after holding various positions in research and operational projects at several banking institutions and investment funds. He led the Data Science and Artificial Intelligence Lab at BNP Paribas CIB. He graduated from the École Polytechnique with a specialization in Applied Mathematics and Computer Science, and obtained a Master's degree in Machine Learning from the École Normale Supérieure de Cachan.

  • Frankie Cancino

    Panelist

    Frankie Cancino - Data Scientist - Mercedes-Benz Research & Development

    Frankie Cancino is a Data Scientist at Mercedes-Benz Research & Development, working on applied machine learning initiatives. Prior to joining Mercedes-Benz R&D, Frankie was a Senior AI Scientist at Target AI, focused on methods to improve demand forecasting and anomaly detection. He is also the founder and organizer of Data Science Minneapolis, a community that brings together professionals, researchers, data scientists, and AI enthusiasts.

  • 17:00

    NETWORKING RECEPTION

  • 18:00

    END OF DAY ONE

  • 08:00

    REGISTRATION OPENS

  • 09:00
    Jigyasa Grover

    WELCOME NOTE

    Jigyasa Grover - Machine Learning Engineer - Twitter

    Jigyasa Grover is a Machine Learning Engineer at Twitter, co-author of the book 'Sculpting Data for ML', and an ML Google Developer Expert. She has a myriad of experiences from her brief stints at Facebook, the National Research Council of Canada, and the Institute of Research & Development France, involving data science, mathematical modeling, and software engineering. A Red Hat 'Women in Open Source' Award winner and Google Summer of Code alumna, Jigyasa is an ardent open-source contributor as well. Previously, she served as the Director of Women Who Code and Women Techmakers to help bridge the gender gap in technology.

  • ENTERPRISE SOFTWARE

  • 09:10
    Lakshmi Ravi

    Data Collection and Synthesis

    Lakshmi Ravi - Applied Scientist - Amazon

    Selecting ML Algorithms and Validating

    ML practitioners often face a dilemma in identifying the right ML model for their problem space. In this talk, I will go over common questions that help in narrowing down the right next step. The developed model will have to meet certain validation metrics, so the next common question is how the validation metrics proposed by scientists should be explained to business leaders to help them decide whether the model is eligible to be deployed. The next step is to find mechanisms to develop and study online validation metrics; the online metrics of a launched ML model will often require studying results in a treatment-control fashion. I will also describe common development practices that help in A/B testing of experiments.
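
    As a small illustration of the treatment-vs-control comparison mentioned above, here is a hedged sketch that compares a per-user online metric between the two groups with a two-sample t-test on synthetic data; it is not the speaker's code, and the metric and numbers are invented.

```python
# Compare an online metric between treatment and control with a Welch t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(loc=0.10, scale=0.05, size=10_000)     # per-user conversion metric
treatment = rng.normal(loc=0.11, scale=0.05, size=10_000)   # small simulated lift

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
lift = treatment.mean() - control.mean()
print(f"lift={lift:.4f}, t={t_stat:.2f}, p={p_value:.4f}")
```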

    Lakshmi is an Applied Scientist at Amazon. She has been working with Amazon Machine Learning teams for the last 4.5 years and has had the chance to be part of Alexa's NLP team, Behavior Analytics (a causal inference division at Amazon), and the Amazon Music team (improving the voice experience in Alexa).

  • 09:30
    Vera Serdiukova

    Transformed: How to Build a Successful NLP Product with Transformers

    Vera Serdiukova - Senior AI Product Manager - Salesforce

    Transformed: How to Build a Successful NLP Product with Transformers

    Transformers and natural language processing (NLP) are frequently viewed as a match made in heaven. Over the past few years, this type of machine learning model has become the default answer to any language-related task, be it summarization, machine translation, or text generation. Transformers are a no-brainer. But is that really so? In her talk, Vera will discuss pragmatic considerations for building Transformer-powered NLP products. What makes Transformers a wise business decision? What are the costs involved? How can one build a viable business and product with Transformers? Vera will touch on these and other practical aspects of bringing Transformers from news headlines to real-life production.

    Key Takeaways

    • When it comes to Product, a pure transformer language model is not a substitute for a value proposition;

    • Modern sophisticated transformer models come with a hefty price tag (even the open-source ones);

    • Can transformers be an element of a business model? Yes, they should be.

    Vera Serdiukova is a Senior AI Product Manager at Salesforce, where she works primarily in the field of Natural Language Processing (NLP). Before Salesforce, she was a part of LG’s Silicon Valley lab, where her focus was on Edge AI/on-device Machine Learning. Prior to that, Vera built Conversational AI interfaces for Bosch’s robotics, connected car, and smart home products.

  • 09:50
    Henriette Cramer

    Algorithmic Impact Assessment at Organizational Scale

    Henriette Cramer - Director of Algorithmic Impact and Research - Spotify

    Algorithmic Impact Assessment at Organizational Scale

    Unintended negative side effects of machine learning have gained attention, and rightly so: models and recommendations can amplify existing inequalities. However, pragmatic challenges stand in the way of practitioners who have committed to addressing these issues. Organizations do not automatically have full insight into the impact, both positive and negative, that their machine learning investments are having, and there are few clear guidelines or industry-standard processes that can be readily applied in domain-specific practice. Barriers include the research necessary to understand the issues at hand, developing approaches to assess, address, and monitor those issues, and confronting organizational or institutional challenges to implementing solutions at scale. Here we share lessons learned from both organizational and technical practice.

    Henriette Cramer is Director of Algorithmic Impact and Research at Spotify. Her team’s work focuses on assessing and addressing the impact of data and machine learning decisions in music and podcast streaming. This includes translating abstract calls to action into concrete organizational structure and tooling, as well as data-informed product direction. Henriette has a PhD from the University of Amsterdam, multiple patents, and peer-reviewed publications, which can be found at henriettecramer.com

  • AI IMPLEMENTATION

  • 10:10
    Carlos Morato

    Potential of AI in Healthcare

    Carlos Morato - VP of Science & AI - Optum

    Unlocking the Potential of AI in Healthcare

    The healthcare AI opportunity resides in the realm of predictive and prescriptive analytics. Applying AI and machine learning models to large data sets enables healthcare organizations to detect patterns and make predictions and recommendations on outcomes using data and experience rather than explicit programming instructions. But with every opportunity to apply AI in healthcare, there is a risk that organizations won't have what it takes to successfully develop, implement, or operationalize AI. Having the technology and infrastructure for AI is not enough; having the right people, with both technical capability and healthcare expertise, is enormously important. The speaker discusses strategies and plans to attract and retain the talent needed to unlock the potential of AI.

    Carlos Morato is the Vice President of Science & Artificial Intelligence at Optum, a leading health services innovation company. He is a skilled lead scientist with more than 20 years of experience and 30 patents. Always working at the forefront of innovation, Morato is experienced in management, strategy, and technology consulting for autonomous systems, robotics and artificial intelligence systems. He holds a Ph.D. in mechanical engineering and robotics from the University of Maryland.

  • 10:30

    COFFEE & NETWORKING BREAK

  • 11:00
    Alex Patry

    Deep Personalization in Jobs Marketplace: A LinkedIn Perspective

    Alex Patry - Senior Staff Software Engineer - LinkedIn

    Deep Personalization in Jobs Marketplace: A LinkedIn Perspective.

    At LinkedIn, our jobs marketplace attempts to optimize the matching of hiring managers with job seekers, with the goal of getting more job seekers hired. In this talk, we will look at how we use deep models to capture job seekers' career trajectories and hiring managers' requirements, and how we use multi-task learning to optimize for both sides of the marketplace while driving equitable outcomes across the board.
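
    To make the multi-task idea concrete, here is a minimal, hypothetical sketch of a shared encoder with one head per side of the marketplace; the model, feature dimensions, and loss weighting are invented for illustration and this is not LinkedIn's production model.

```python
# Shared encoder, two task heads, weighted multi-task loss.
import torch
import torch.nn as nn

class TwoSidedRanker(nn.Module):
    def __init__(self, in_dim: int = 64, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.seeker_head = nn.Linear(hidden, 1)   # e.g. P(seeker applies)
        self.hirer_head = nn.Linear(hidden, 1)    # e.g. P(hirer responds)

    def forward(self, x):
        h = self.encoder(x)
        return torch.sigmoid(self.seeker_head(h)), torch.sigmoid(self.hirer_head(h))

model = TwoSidedRanker()
apply_p, respond_p = model(torch.randn(8, 64))
loss = nn.functional.binary_cross_entropy(apply_p, torch.rand(8, 1)) \
     + 0.5 * nn.functional.binary_cross_entropy(respond_p, torch.rand(8, 1))
print(loss.item())
```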

    Alex has been a machine learning engineer at LinkedIn for almost seven years. He has had tours of duty in LinkedIn Groups, Content Search and Discovery, and Feed, and has been the tech lead in LinkedIn Talent Solutions and Careers for the last three years.

    Prior to working at LinkedIn, Alex lived in Montreal, where he completed a PhD in Statistical Machine Translation and then worked for five years on information extraction.

  • 11:20
    Sergey Zelvenskiy

    Project RADAR: Intelligent Early Fraud Detection System with Humans in the Loop

    Sergey Zelvenskiy - Lead Machine Learning Engineer - Uber

    Project RADAR: Intelligent Early Fraud Detection System with Humans in the Loop

    Payment fraud is a severe problem for marketplace platforms like Uber; it directly affects the financial stability of the platform as a whole. One of the key challenges in solving this problem is the continuous emergence of new patterns of fraud attacks. In this talk, we will show how project RADAR brings together algorithms, technology, and experts to block fraud early and efficiently. RADAR uses time-series anomaly detection, feature selection, and pattern mining algorithms, and is built on top of Uber's technical infrastructure and data streams. This talk is for technical and business audiences interested in the technological innovation behind fraud detection systems.
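
    As a tiny illustration of one ingredient named above, time-series anomaly detection, here is a rolling z-score sketch that flags sudden spikes in an event count. It is illustrative only (the data is synthetic) and is not Uber's RADAR code.

```python
# Flag spikes in a count series by comparing each point to its trailing window.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
counts = pd.Series(rng.poisson(lam=50, size=500).astype(float))  # e.g. chargebacks per minute
counts.iloc[400:410] += 300                                       # injected fraud-like spike

rolling = counts.rolling(window=60)
z = (counts - rolling.mean().shift(1)) / rolling.std().shift(1)   # trailing-window z-score
anomalies = counts[z > 4]
print(anomalies.index.tolist())
```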

    Sergey Zelvenskiy is a software engineer, algorithm designer, and entrepreneur. He solves complex real-world problems using a combination of software engineering, data pipelines, and machine learning. His current focus is financial fraud detection and mitigation at Uber. Previously he worked in the e-commerce, search ranking, fintech, and security domains. Sergey's algorithmic interests include deep learning, anomaly detection, pattern mining, search ranking, and NLP. Sergey was a co-founder and founding CTO of ForUsAll, a provider of 401(k) and financial wellness services, where he built a no-code platform for building financial advice products.

  • 11:40
    Mash Syed

    Understanding Customers Through Cohort-Based Lookalike Modeling

    Mash Syed - Lead Data Scientist - Chipotle Mexican Grill

    Understanding Customers Through Cohort-Based Lookalike Modeling

    How do you measure the incremental value of your customers? Is it possible to find pairs of customers who share similar behavioral attributes? How can we apply machine learning to help us find customers who are like one another?

    Chipotle has millions of customers and a robust digital platform. Some customers are new to the brand, while others are existing customers who are new to an offer or program. In this talk we will walk through a cohort-based lookalike framework that can help us get closer to understanding our customers and how to measure their value to the enterprise.
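
    As a hedged sketch of the lookalike idea (find, for each customer in a seed cohort, the most behaviorally similar customer outside it), here is a nearest-neighbor example on synthetic features; the feature set and cohort definition are invented for illustration, not Chipotle's framework.

```python
# Match each seed-cohort customer to its nearest behavioral lookalike outside the cohort.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
features = rng.normal(size=(1_000, 8))          # e.g. order frequency, spend, channel mix
in_cohort = rng.random(1_000) < 0.1             # hypothetical seed cohort

candidates = features[~in_cohort]
nn = NearestNeighbors(n_neighbors=1).fit(candidates)
dist, idx = nn.kneighbors(features[in_cohort])  # lookalike match for each cohort member
print(dist.mean())
```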

    Mash Syed is the Lead Data Scientist at Chipotle Mexican Grill where he partners closely with marketing, loyalty, food safety, and finance to uncover actionable insights around customer behavior, revenue forecasting, and channel growth, using internal and external data sources.

  • 12:00
    Aakash Sabharwal

    A Unified ML Data Pipeline for Real Time Features: From Training to Serving

    Aakash Sabharwal - Senior Engineering Manager - Etsy

    A Unified ML Data Pipeline for Real-Time Features: From Training to Serving

    On a global marketplace like Etsy, where buyers come to buy unique, varied items from sellers around the globe, the inventory of items is constantly changing. Users' preferences also change in real time as they discover the latest selection being offered. In such a dynamic environment, machine learning models for different applications (including search, recommendations, and computational advertising) need to collect different real-time data signals, process them, and finally leverage them to make the most relevant predictions.

    In this talk we will detail how we use real-time feature logging and streaming systems to capture in-session and trending activity in order to compute features for our different ML models and use them for downstream applications such as bandit or reinforcement learning systems.
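
    As a minimal, hypothetical sketch of turning logged in-session events into real-time features that can be computed the same way at training and serving time, here is a rolling-window aggregation; the window size, event schema, and feature names are invented, and this is not Etsy's pipeline.

```python
# Rolling in-session feature computation over a stream of logged events.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 30 * 60                      # e.g. a 30-minute in-session window
events = defaultdict(deque)                   # user_id -> deque of (timestamp, item_category)

def log_event(user_id, category, ts=None):
    events[user_id].append((ts if ts is not None else time.time(), category))

def session_features(user_id, now=None):
    now = now if now is not None else time.time()
    window = events[user_id]
    while window and now - window[0][0] > WINDOW_SECONDS:   # evict stale events
        window.popleft()
    return {
        "views_last_30m": len(window),
        "distinct_categories_last_30m": len({cat for _, cat in window}),
    }

log_event("u1", "jewelry")
log_event("u1", "ceramics")
print(session_features("u1"))
```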

    Aakash is a Senior Engineering Manager in Etsy's Machine Learning Infrastructure group. His team's focus is on building scalable and efficient real-time ML systems that allow Etsy to leverage its vast quantities of marketplace data for different ML applications such as search, advertising, and recommendations. Aakash has been involved with different startup companies since the start of his career, including Ooyala, Platfora, Quantifind, and finally Blackbird, which was acquired by Etsy. At all these companies his work has been at the intersection of data science, machine learning, and distributed systems. Aakash holds a degree in Computer Science from Carnegie Mellon.

  • Anni He

    A Unified ML Data Pipeline for Real Time Features: From Training to Serving

    Anni He - Senior Software Engineer - Etsy

    A Unified ML Data Pipeline for Real-Time Features: From Training to Serving

    On a global marketplace like Etsy, where buyers come to buy unique, varied items from sellers around the globe, the inventory of items is constantly changing. Users' preferences also change in real time as they discover the latest selection being offered. In such a dynamic environment, machine learning models for different applications (including search, recommendations, and computational advertising) need to collect different real-time data signals, process them, and finally leverage them to make the most relevant predictions.

    In this talk we will detail how we use real-time feature logging and streaming systems to capture in-session and trending activity in order to compute features for our different ML models and use them for downstream applications such as bandit or reinforcement learning systems.

    Anni is a Senior Software Engineer working on Etsy's ML systems. Her work ensures that Etsy's buyer activity is readily processed in real time and easily integrated into the ML applications that power in-session personalization for search and recommendations across the organization. Anni holds a degree in Systems Design Engineering from the University of Waterloo and a master's degree researching ML applications in healthcare. Anni lives in Toronto and in her spare time enjoys venturing into the Canadian wilderness with her canoe.

  • 12:20

    LUNCH

  • 13:20
    Hamed Nazari

    Implementing Neural Network Using Quantum Technologies

    Hamed Nazari - Principal Scientist, AI & ML, Quantum Computing - Comcast Silicon Valley Innovation Center

    Implementing Neural Network Using Quantum Technologies

    In this presentation, I am going to share with the audience what differentiates quantum computing from classical computation. I will then walk the audience through a very popular machine learning framework, the neural network, using IBM's quantum SDK, Qiskit. I will build a quantum neural network to demonstrate how it learns from data and then use the model to predict on new data. For this experiment, we use a standard dataset that can be applied to many use cases.
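
    As a small, hedged illustration of the kind of circuit such a demonstration builds (not the speaker's notebook; the gate layout and parameter names are invented), here is a two-qubit parameterized circuit in Qiskit where one parameter encodes the input and the others act as trainable weights; training those angles is what turns such an ansatz into a quantum neural network.

```python
# Minimal parameterized ("ansatz") circuit sketch with Qiskit.
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

x = Parameter("x")                            # input feature encoded as a rotation angle
w0, w1 = Parameter("w0"), Parameter("w1")     # trainable "weights"

qc = QuantumCircuit(2)
qc.ry(x, 0)        # data encoding
qc.ry(w0, 0)       # trainable layer
qc.ry(w1, 1)
qc.cx(0, 1)        # entangle the two qubits
qc.measure_all()

bound = qc.assign_parameters({x: 0.3, w0: 0.1, w1: -0.2})
print(bound.draw())
```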

    Hamed Nazari is a Principal Scientist at Comcast Innovation Labs who is championing quantum computation and quantum physics across the Comcast corporation. He is also well-versed in microcontroller design spanning both analog and digital circuits. He conducts research and collaborates with other teams on challenging problems at the intersection of AI and hardware. Hamed has contributed to developing many technologies that influence Comcast's future products. Through his tenure at Innovation Labs, he has been fortunate to work with top-of-the-line AI researchers to implement end-to-end demonstrations of his work, which have been publicized within Comcast.

  • 13:40
    Rishabh Misra

    Scaling Tweet Reply Ranking to Handle Tens of Millions of QPS

    Rishabh Misra - Machine Learning Engineer - Twitter

    Scaling Tweet Reply Ranking to Handle Tens of Millions of QPS

    The Tweet Detail page on Twitter shows ranked replies to a particular tweet. With various product and ranking improvements, this surface has been seeing organic growth in usage. Furthermore, when external websites embed viral tweets, the reply serving service experiences a sharp increase in traffic. With a fixed time budget for serving a response, we developed a Light Ranking module that incorporates various Machine Learning signals and system performance signals to adaptively cut down the low-quality candidates under higher system load, allowing the service to reliably handle tens of millions of QPS.
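
    As a hedged sketch of the load-adaptive idea described above (score candidates with a light model, then keep fewer of them as system load rises), here is a toy example; the function, thresholds, and scores are invented and this is not Twitter's Light Ranking module.

```python
# Load-adaptive truncation of a light-ranked candidate list.
def light_rank(replies, load: float, max_keep: int = 200, min_keep: int = 20):
    """replies: list of (reply_id, light_score); load: 0.0 (idle) .. 1.0 (saturated)."""
    load = min(max(load, 0.0), 1.0)
    keep = int(max_keep - (max_keep - min_keep) * load)
    ranked = sorted(replies, key=lambda r: r[1], reverse=True)
    return ranked[:keep]   # the low-quality tail is dropped more aggressively under load

replies = [(i, 1.0 / (i + 1)) for i in range(500)]
print(len(light_rank(replies, load=0.2)), len(light_rank(replies, load=0.95)))
```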

    Rishabh Misra is an ML Engineer at Twitter, Inc, and co-author of the book "Sculpting Data for ML". He combines his past engineering experiences in designing large-scale systems, working at Amazon and Arcesium (a D.E. Shaw company), and research experiences in Applied Machine Learning, with publications at top venues, to develop distributed Machine Learning relevance systems as part of the Content Quality team. In his downtime, he enjoys watching sci-fi shows, gaming, and spending time with his family.

  • 14:00
     Girija Narlikar

    Creating a Highly Personalized Store Through Instacart Ads

    Girija Narlikar - Director of Engineering in Ads - Instacart

    Creating a Highly Personalized Store Through Instacart Ads

    Imagine your favorite neighborhood grocery store, except that it's arranged especially for you on every visit. Instacart's Ads platform is powering a marketplace filled with digital stores that can help you locate your favorite products with ease, discover new items to your taste, and inspire you with rich food-related content, all without leaving the comfort of your home. Instacart is now available to more than 85% of U.S. households and 90% of Canadian households. Girija will describe machine-learning-driven products that Instacart Ads is building, with the aim of delighting our customers with personalized product discovery experiences.

    Girija Narlikar is a Director of Engineering in Ads at Instacart, the leading online grocery platform in North America. Girija joined Instacart in March 2021, after a 3.5 year stint in Google Ads, where she led teams using Machine Learning to identify sensitive content in text, image and videos, including political or COVID-19 related misinformation. Previously, she worked at Facebook and co-founded an AI-driven start-up in India. Girija holds computer science degrees from IIT Bombay and CMU. Outside of work she enjoys hiking, sports, as well as improvising "healthy" new recipes and ordering esoteric ingredients for them on Instacart!

  • 14:20

    PANEL: Overcoming Barriers in AI Implementation: A Cross Industry Analysis

  • Rekha Venkatakrishnan

    Moderator

    Rekha Venkatakrishnan - Head of Product Management - Taco Bell

    Rekha is a product evangelist and technologist with a solid background in product management, strategy, and engineering in leadership roles. She currently heads the eCommerce product teams at Taco Bell and is a firm believer in building simple yet delightful products with seamless experiences that help win both customers and business. Rekha is a strong advocate for women in tech, data, and product, and runs mentor circles to advance women in different functions.

  • Chandra Shekhar Dhir

    Panelist

    Chandra Shekhar Dhir - Applied AI/ML Director - JPMorgan Chase & Co.

    Chandra Dhir is the AI/ML Director on the AI Services and Innovation team at JPMorgan Chase & Co. (JPMC), working on innovative services and solutions for real-world financial problems that leverage state-of-the-art speech recognition, speech synthesis, natural language understanding, and predictive technologies.

    Chandra is deeply passionate about building machine learning (ML) powered products at scale and iteratively improving the end-user experience by continuously optimizing ML product cycle modules ranging from data to large-scale distributed training to deployment. Prior to JPMC, he led a team of engineers and researchers working on voice-based technologies for all Apple products and Siri-supported languages. Before joining Apple, he was part of a startup in South Korea, where he developed low-footprint audio and video fingerprinting technology for multimedia search that is embedded in the smart TVs of some major consumer electronics companies. He has been training neural networks for about 20 years, finding commercial applications in computer vision and speech recognition tasks throughout his career. He received his bachelor's in Electrical Engineering from IIT Madras, and M.S. and Ph.D. degrees in Bio and Brain Engineering from KAIST.

  • Chin Ling

    Panelist

    Chin Ling - Director of Architecture - Gap

    Chin Ling is the lead architect for data and analytics functions at Gap Inc. He is accountable for setting the technology vision, strategy, architecture and overseeing a wide range of data and AI/ML initiatives across the enterprise. Before joining Gap, Chin held numerous roles leading and building core engineering, data, and analytics organizations in retail, recruiting SaaS, and finance. He graduated with a dual-degree in Electrical Engineering and Computer Science at The University of Melbourne, completed his final year at Carnegie Mellon University, and held a CFA charter from 2010 to 2016.

  • Lakshminarayanan (LN) Renganarayana

    Panelist

    Lakshminarayanan (LN) Renganarayana - Sr. Director and Head of AI / ML Engineering - Adobe

    LN is a technology leader who has helped bring to life several data and ML products over the past 15 years. As the Head of AI/ML Engineering for Document Cloud at Adobe, he is helping Adobe build the next generation of AI-powered document experiences. In the past, LN helped Workday build cutting-edge enterprise AI products, helped Symantec build a streaming analytics service, and was a researcher at the IBM T.J. Watson Research Center. LN holds a Ph.D. in computer science, and his work has had strong innovation and business impact, with awards from ACM, Workday, IBM, and HP.

  • 15:00

    END OF SUMMIT
