• 08:00

    REGISTRATION OPENS

  • DEEP LEARNING LANDSCAPE

  • 09:00
    Anirudh Koul

    WELCOME & OPENING REMARKS

    Anirudh Koul - AI Scientist - Pinterest

    Anirudh Koul is a noted AI expert, UN/TEDx speaker, author of O'Reilly's Practical Deep Learning book, and a former scientist at Microsoft Research, where he founded Seeing AI, considered the most used technology in the blind community after the iPhone. He works at Pinterest helping incubate emerging technologies. With features shipped to a billion users, he brings over a decade of production-oriented applied research experience on petabyte-scale datasets. He also serves as an ML Lead for Frontier Development Labs & SpaceML - NASA's AI Accelerator - and coaches a podium-winning team in the Roborace autonomous driving championship at 175 mph. His work in the AI for Good field, which IEEE has called 'life-changing', has received awards from CES, FCC, MIT, Cannes Lions, and the American Council of the Blind; it has been showcased at events by the UN, World Economic Forum, White House, House of Lords, Netflix, and National Geographic, and lauded by world leaders including Justin Trudeau and Theresa May. For his work, he received the IET Career Achievement Award in 2019.

  • 09:15

    From Open-Endedness to AI

  • 09:35
    Sebastian Raschka

    Customer Ratings, Letter Grades, and Other Rankings: Using Deep Learning When Class Labels Have A Natural Order

    Sebastian Raschka - Assistant Professor of Statistics / Lead AI Educator - University of Wisconsin-Madison / Grid.ai

    Deep learning offers state-of-the-art results for classifying images and text. Common deep learning architectures and training procedures focus on predicting unordered categories, such as recognizing positive or negative sentiment in written text or indicating whether images contain cats, dogs, or airplanes. However, many real-world problems are prediction problems where the target variable has an intrinsic ordering. For example, think of customer ratings (e.g., 1 to 5 stars) or medical diagnoses (e.g., disease severity labels such as none, mild, moderate, and severe). This talk will describe the core concepts behind working with ordered class labels, so-called ordinal data. We will cover hands-on PyTorch examples showing how to take existing deep learning architectures for classification and outfit them with loss functions better suited for ordinal data while making only minimal changes to the core architecture.
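
    As a rough illustration of the approach described above (a minimal sketch rather than the speaker's code; the feature size, class count, and helper names are hypothetical), one common way to adapt a classifier for ordinal labels is to predict K-1 binary "is the label greater than rank k?" thresholds and train them with binary cross-entropy:

```python
import torch
import torch.nn as nn

def ordinal_targets(labels, num_classes):
    # label 3 with 5 classes -> [1., 1., 1., 0.]  (y > 0, y > 1, y > 2, y > 3)
    ranks = torch.arange(num_classes - 1, device=labels.device)
    return (labels.unsqueeze(1) > ranks).float()

class OrdinalHead(nn.Module):
    """Replaces the usual Linear(in_features, num_classes) classification head."""
    def __init__(self, in_features, num_classes):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes - 1)

    def forward(self, features):
        return self.fc(features)  # one logit per rank threshold

# Hypothetical batch: backbone features for 8 samples, 1-5 star ratings as labels 0..4.
features = torch.randn(8, 128)
labels = torch.randint(0, 5, (8,))
head = OrdinalHead(128, num_classes=5)
logits = head(features)
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, ordinal_targets(labels, num_classes=5))
loss.backward()
predicted_rank = (torch.sigmoid(logits) > 0.5).sum(dim=1)  # map thresholds back to 0..4
```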

    Sebastian Raschka is an Assistant Professor of Statistics at the University of Wisconsin-Madison focusing on machine learning and deep learning research. His recent research projects have focused on general challenges such as few-shot learning for working with limited data and developing deep neural networks for ordinal targets. As Lead AI Educator at Grid.ai, Sebastian plans to continue following his passion for helping people get into machine learning and artificial intelligence.

  • 09:55
    Dawn Song

    Trustworthy Deep Learning

    Dawn Song - Professor of Computer Science - UC Berkeley

    Dawn Song is a Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. Her research interests lie in deep learning, security, and blockchain. She has studied diverse security and privacy issues in computer systems and networks, including areas ranging from software security, networking security, distributed systems security, applied cryptography, blockchain and smart contracts, to the intersection of machine learning and security. She is the recipient of various awards including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, Faculty Research Awards from IBM, Google, and other major tech companies, and Best Paper Awards from top conferences in computer security and deep learning. She is an IEEE Fellow and is ranked the most cited scholar in computer security (AMiner Award). She obtained her Ph.D. from UC Berkeley. Prior to joining UC Berkeley as faculty, she was a faculty member at Carnegie Mellon University from 2002 to 2007. She is also a serial entrepreneur.

  • 10:15

    COFFEE & NETWORKING BREAK

  • TOOLS FOR DEEP LEARNING

  • 10:50
    Nick Ryder

    Zero Shot Capabilities at Scale

    Nick Ryder - Member of Technical Staff - OpenAI

    Nick Ryder is a Member of Technical Staff at OpenAI, which he joined after receiving his PhD in Mathematics from the University of California in 2019.

  • 11:10
    Hanie Sedghi

    Exploring the Limits of Large Scale Pre-training

    Hanie Sedghi - Research Scientist - Google Brain

    Recent developments in large-scale machine learning suggest that by scaling up data, model size, and training time properly, improvements in pre-training transfer favorably to most downstream tasks. In this work, we systematically study this phenomenon and establish that, as we increase upstream accuracy, the performance of downstream tasks saturates. In particular, we investigate more than 4800 experiments on Vision Transformers, MLP-Mixers, and ResNets with parameter counts ranging from ten million to ten billion, trained on the largest available image datasets (JFT, ImageNet21K) and evaluated on more than 20 downstream image recognition tasks. We propose a model for downstream performance that reflects the saturation phenomenon and captures the nonlinear relationship between upstream and downstream performance. Delving deeper to understand the reasons behind this phenomenon, we show that the saturation behavior we observe is closely related to the way representations evolve through the layers of the models. We also showcase an even more extreme scenario where upstream and downstream performance are at odds with each other: to obtain better downstream performance, we need to hurt upstream accuracy.
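
    Purely as an illustration of fitting a saturating upstream-vs-downstream relationship (the functional form, parameter names, and accuracy values below are assumptions for demonstration, not the paper's model or data):

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating(up_acc, ceiling, scale, rate):
    # downstream accuracy approaches `ceiling` as upstream accuracy increases
    return ceiling - scale * np.exp(-rate * up_acc)

# Synthetic, made-up accuracy pairs, used only to demonstrate the fit.
upstream = np.array([0.55, 0.62, 0.70, 0.76, 0.81, 0.85])
downstream = np.array([0.40, 0.52, 0.61, 0.66, 0.68, 0.69])

params, _ = curve_fit(saturating, upstream, downstream, p0=[0.7, 1.0, 3.0])
ceiling, scale, rate = params
print(f"estimated downstream saturation level: {ceiling:.3f}")
```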

    Hanie Sedghi is a Research Scientist at Google Brain; before that, she was a Research Scientist at the Allen Institute for Artificial Intelligence (AI2). Her research interests include large-scale machine learning, high-dimensional statistics, and probabilistic models. More recently, she has been working on inference and learning in latent variable models. She received her Ph.D. from the University of Southern California with a minor in Mathematics in 2015, during which she was also a visiting researcher at the University of California, Irvine, working with Professor Anandkumar. She received her B.Sc. and M.Sc. degrees from Sharif University of Technology, Tehran, Iran.

  • 11:30

    Continual Lifelong Learning & Experimentation

  • 11:50
    Richard Socher

    FIRESIDE CHAT - Making ML Work in the Real World

    Richard Socher - Founder - you.com

    Richard Socher is the founder of you.com and the former Chief Scientist at Salesforce. He was also previously the CEO and founder of MetaMind, a startup that sought to improve artificial intelligence and make it widely accessible. He obtained his PhD from Stanford, working on deep learning with Chris Manning and Andrew Ng, and won the best Stanford CS PhD thesis award. He is interested in developing new AI models that perform well across multiple different tasks in natural language processing and computer vision.

    He was awarded the Distinguished Application Paper Award at the International Conference on Machine Learning (ICML) 2011, the 2011 Yahoo! Key Scientific Challenges Award, a Microsoft Research PhD Fellowship in 2012, a 2013 "Magic Grant" from the Brown Institute for Media Innovation, and the 2014 GigaOM Structure Award.

  • 12:30

    LUNCH

  • COMPUTER VISION

  • 13:30
    Adrien Gaidon

    Beyond Supervised Driving

    Adrien Gaidon - Machine Learning Lead - Toyota Research Institute

    Self-Supervised Analysis-by-Synthesis

    Modern Machine Learning is hitting the limits of purely supervised learning. Hence, self-supervised learning is emerging as a promising alternative, or at least a complementary approach. In this talk, we will discuss how computer vision is pushing beyond the limits of supervised learning using self-supervised analysis-by-synthesis, i.e., model-based reconstruction. In particular, we will highlight recent progress at the confluence of neural approaches and simulation to ensure the scalability and robustness of deep neural networks for 3D vision.

    Adrien Gaidon is the Head of Machine Learning Research at the Toyota Research Institute (TRI) in Los Altos, CA, USA. Adrien's research focuses on scaling up ML for robot autonomy, spanning Scene and Behavior Understanding, Simulation for Deep Learning, 3D Computer Vision, and Self-Supervised Learning. He received his PhD from Microsoft Research - Inria Paris in 2012, has over 50 publications and patents in ML & Computer Vision (cf. Google Scholar), and his research is used in a variety of domains, including automated driving.

  • 13:50
    Denis Gudovskiy

    Normalizing Flows for Real-Time Unsupervised Anomaly Detection

    Denis Gudovskiy - Senior AI Researcher - Panasonic AI Lab

    Anomaly detection is a growing area of research in computer vision with many applications in industrial inspection, road traffic monitoring, medical diagnostics, etc. However, common supervised anomaly detection is not viable in practical systems. A more appealing approach is to collect only unlabeled, anomaly-free images for the training dataset, i.e., to rely on unsupervised anomaly detection. In this talk, I introduce our recent CFLOW-AD model, which is based on a promising class of generative models called normalizing flows, adapted for anomaly detection. The real-time CFLOW-AD is faster and smaller than prior models by a factor of 10x. Our experiments with the industrial MVTec dataset show that CFLOW-AD outperforms previous approaches in both detection and localization tasks.
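
    A toy sketch of the underlying idea (this is not CFLOW-AD itself; the single coupling layer, feature dimension, and placeholder features are assumptions): fit a normalizing flow to anomaly-free feature vectors, then score inputs by how unlikely they are under the flow.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """A single RealNVP-style coupling layer; a real model stacks several."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))  # predicts scale and shift

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                       # keep scales in a stable range
        z = torch.cat([x1, x2 * torch.exp(s) + t], dim=1)
        return z, s.sum(dim=1)                  # transformed sample, log|det J|

def log_likelihood(flow, x):
    z, log_det = flow(x)
    base = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=1)
    return base + log_det

dim = 64                                        # stand-in for pooled CNN features
flow = AffineCoupling(dim)
optimizer = torch.optim.Adam(flow.parameters(), lr=1e-3)

normal_feats = torch.randn(256, dim)            # placeholder anomaly-free features
for _ in range(200):                            # maximize likelihood of normal data
    optimizer.zero_grad()
    loss = -log_likelihood(flow, normal_feats).mean()
    loss.backward()
    optimizer.step()

with torch.no_grad():                           # higher score = more anomalous
    anomaly_score = -log_likelihood(flow, torch.randn(8, dim))
```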

    Denis Gudovskiy is a senior researcher at Panasonic AI lab in Mountain View. He specializes in deep learning-based algorithms for AI applications. His portfolio of research projects includes optimization of deep neural networks for edge AI devices, explainable AI tools, and automatic dataset management for computer vision applications. Denis received his M.Sc. in Computer Engineering from the University of Texas, Austin in 2008. Denis sees corporate research as an important layer between moonshot academia projects and clearly-defined product development roadmaps in business units. His goal is to find and promote viable academia-grade opportunities at Panasonic within the exponentially growing landscape of AI applications.

  • 14:10
    Duncan Curtis

    Enable User-Centric AI Development with Sama

    Duncan Curtis - VP of Product - Sama

    Building a technical data labeling product for Computer Vision and 3D LiDAR can be a very challenging mandate. It is harder still when annotators have lower digital literacy while the consumers of the annotated output come from highly technical AI engineering backgrounds. By attending this session, you will be equipped with tips on how to channel the voice of your user at the right phase of your AI product development, best practices to bring your development team closer to the problem, and general guidelines for creating a human-centric, phased approach to building AI products.

    As Vice President of Product at Sama, Duncan brings over 15 years of extensive experience in Product Management with a proven track record of bringing innovative enterprise products and solutions to market for Fortune 500 companies. As the product leader at Sama, he leverages his expertise in computer vision and autonomous vehicles to supercharge the Sama AI/ML Training Data platform with deeper incorporation of machine learning. Prior to joining Sama, Duncan led product management for the autonomous vehicle startup Zoox, and at Google he impacted the gaming experiences of over 1 billion daily active users with his work on Google Play Games.

  • 14:30
    Lex Fridman

    An Introduction to Reinforcement Learning

    Lex Fridman - AI Researcher - MIT

    Lex Fridman is a researcher at MIT, working on deep learning approaches in the context of semi-autonomous vehicles, human sensing, personal robotics, and more generally human-centered artificial intelligence systems. He is particularly interested in understanding human behavior in the context of human-robot collaboration, and engineering learning-based methods that enrich that collaboration. Before joining MIT, Lex was at Google working on machine learning for large-scale behavior-based authentication.

  • 15:00

    COFFEE & NETWORKING BREAK

  • NATURAL LANGUAGE UNDERSTANDING

  • 15:30
    Leonardo Neves

    NLP in Practice: Challenges of Language Understanding on Social Platforms

    Leonardo Neves - Principal Research Scientist - Snap Inc

    In the last several years, the field of Natural Language Processing has seen tremendous advances. Models like BERT and GPT-3 have completely changed the way practitioners operate and models are used in real-life applications. Still, there is a big gap between state-of-the-art performance on benchmarks and the performance seen on social platforms like Snapchat and Twitter, where the text is short, informal, dynamic, and lacking in context. In this talk, we will introduce some of the challenges and approaches used for text understanding under these circumstances.

    Leonardo Neves is a Principal Research Scientist at Snap Inc where he leads the Computational Social Science group. His research focuses on Natural Language Processing, specifically in leveraging additional modalities and context to improve language and behavior understanding. Leo has more than 20 publications in top-tier conferences like ACL, EMNLP, WWW, and AAAI, among others.

    Before joining Snap Inc., Leonardo worked for Pivotal Software Inc., Intel, and Yelp. He earned a Master's in Intelligent Information Systems from Carnegie Mellon University and a BE in Computer and Information Engineering from the Federal University of Rio de Janeiro.

  • 15:50
    Omar Florez

    Modeling Multimodal Toxicity Content at Twitter

    Omar Florez - NLP Researcher - Twitter Cortex

    Multimodal Tweet Understanding

    A Tweet contains information that goes beyond 280 characters. For instance, the text in a Tweet is enriched with emojis, #hashtags, and @mentions. However, a Tweet also contains other modalities, such as images, likes, and trends, each representing a piece of the puzzle. Using Machine Learning to learn from these signals gives us a more general understanding of Tweets. In this talk, I present how to learn multimodal representations from Tweets using Transformers and neural architectures; these methods are studied for understanding memes and modeling toxicity.

    Omar is an NLP Researcher at Twitter Cortex, working on state-of-the-art research on enabling natural conversations between humans and devices. He implements meta-learning and memory-augmented recurrent neural networks to learn with small data, as we humans do, and to deal with catastrophic forgetting.

  • 16:10
    Uliana Popov

    Wikipedia on Demand

    Uliana Popov - Deep Learning Engineer, NLP - AI vs COVID-19 Initiative

    The goal of the AIvsCovid19 initiative is to build a Biomedical Literature Research Tool to help healthcare providers and biomedical researchers, both for this global pandemic and for the health challenges of the future. Over a million biomedical articles are published each year; it is humanly impossible to read through all the papers and identify the relevant information. Employing Language Models (LMs), we perform search, information retrieval, and hyper-summarization, creating a concise read. The output text contains a summary of all the relevant snippets and links to the source papers.
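
    A hedged sketch of the retrieve-then-summarize pattern described above (not the initiative's actual pipeline; the model choice and placeholder snippets are assumptions):

```python
from transformers import pipeline

# Off-the-shelf abstractive summarizer; the initiative's own models may differ.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Placeholder snippets standing in for passages returned by a retrieval step.
snippets = [
    "Placeholder sentence one extracted from a relevant biomedical paper.",
    "Placeholder sentence two extracted from another relevant paper.",
]

result = summarizer(" ".join(snippets), max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])  # concise read built from the retrieved snippets
```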

    Uliana loves ML and data. She received her B.A. in Computer Science from the Technion. After graduation she worked as a software engineer at IBM. In grad school her main focus was big-data visualization; in collaboration with LANL she analyzed the formation of large-scale structures in the Universe. At the onset of the pandemic, Uliana joined the AIvsCovid19 initiative as an ML Engineer. She is fascinated by the power of Transformers and their impact on the NLP field.

  • 16:30

    PANEL: Expectations for Deep Learning Trends in the Near Future

  • Ipsita Mohanty

    Panelist

    Ipsita Mohanty - Software Engineer, Machine Learning - Technical Lead - Walmart Global Tech

    Ipsita Mohanty is a Software Engineer, Machine Learning - Technical Lead, working on several key product and research initiatives at Walmart Global Tech. She has an MS degree in Computer Science from Carnegie Mellon University, Pittsburgh. Prior to her Master's program, Ipsita worked as an Associate for six years, developing trading and machine learning algorithms at Goldman Sachs in their Global Markets Division at the Bengaluru and London locations. She has published work on Natural Language Understanding, and her research spans the disciplines of computer science, deep learning, and human psychology.

  • Janvi Palan

    Panelist

    Janvi Palan - Machine Learning Research Engineer - Samsung Research America

  • Vishakha Sharma

    Panelist

    Vishakha Sharma - Principal Data Scientist - Roche

  • Shiraz Zaman

    Panelist

    Shiraz Zaman - Head of Machine Learning Platform - Lyft

    Shiraz is an industry leader with deep experience in building and scaling machine learning solutions. He is currently Head of Machine Learning Platform at Lyft, solving unique optimization problems through advanced ML. His areas of interest include large-scale distributed systems, databases, the application of machine learning to real-world problems, and model monitoring and governance.

  • 17:00

    NETWORKING RECEPTION

  • 18:00

    END OF DAY ONE

  • 08:00

    REGISTRATION OPENS

  • 08:00

    WORKSHOPS

  • 09:00
    Dharini Chandrasekaran

    WELCOME NOTE

    Dharini Chandrasekaran - Staff Software Engineer - Twitter

    Dharini is a Staff Software Engineer at Twitter. She has a decade of experience spanning the gamut of the technology stack, from device drivers to Android platform applications to large-scale backend systems. Her current focus is solving large-scale distributed systems challenges. She works on Twitter's ad platform, where she wrangles complex services to run at scale, and has previously worked on NVIDIA's GPUs as well as Amazon's groundbreaking Echo devices. Dharini is passionate about supporting and empowering the next generation of women technology leaders and takes opportunities to mentor women both at her workplace and through US Department of State initiatives like TechWomen. She is a Technical Advisor to NASA and SpaceML, has worked with citizen scientists to use big data to combat climate change, and has spoken about her work at conferences like Berlin Buzzwords. When she gets a few free moments, she loves spending time outdoors hiking with her dog Bailey, snowboarding, or curling up indoors with books and dabbling in painting.

  • GENERATIVE MODELS

  • 09:10
    Stefanos Nikolaidis

    Generating Diverse Content via Latent Space Illumination

    Stefanos Nikolaidis - Assistant Professor - University of Southern California

    Generative adversarial networks (GANs) are now a ubiquitous approach to procedurally generate content. While GANs can output content that is stylistically similar to human-authored examples, human designers often want to explore the generative design space of GANs to extract content that is diverse with respect to measures of interest. In this talk, I show how searching the latent space of GANs with quality diversity algorithms results in the automatic generation of complex, diverse and realistic content, including faces, video game levels and environments for human-agent coordination.
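
    A toy MAP-Elites-style sketch of latent space illumination (the generator, quality score, and diversity measures below are stand-ins, not the speaker's setup): keep the best latent vector found so far for each cell of a grid over user-chosen measures of interest.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 16
archive = {}                                   # MAP-Elites grid: cell -> (quality, latent)

def generate(z):                               # stand-in for a pretrained GAN generator
    return z

def quality(x):                                # stand-in for a realism / validity score
    return -float(np.sum(x ** 2))

def measures(x):                               # stand-in diversity measures -> 10x10 grid cell
    return tuple(np.clip(((x[:2] + 3.0) / 6.0 * 10).astype(int), 0, 9))

for _ in range(5000):
    if archive and rng.random() < 0.8:         # mutate a random elite...
        _, parent = archive[list(archive)[rng.integers(len(archive))]]
        z = parent + 0.1 * rng.standard_normal(latent_dim)
    else:                                      # ...or sample a fresh latent
        z = rng.standard_normal(latent_dim)
    content = generate(z)
    cell, q = measures(content), quality(content)
    if cell not in archive or q > archive[cell][0]:
        archive[cell] = (q, z)                 # keep the best latent per cell

print(f"{len(archive)} of 100 cells illuminated")
```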

    Stefanos Nikolaidis is an Assistant Professor of Computer Science at the University of Southern California and leads the Interactive and Collaborative Autonomous Robotic Systems (ICAROS) lab. His research focuses on stochastic optimization approaches for learning and evaluation of human-robot interactions. His work leads to end-to-end solutions that enable deployed robotic systems to act optimally when interacting with people in practical, real-world applications. Stefanos completed his PhD at Carnegie Mellon's Robotics Institute and received an MS from MIT, an MEng from the University of Tokyo, and a BS from the National Technical University of Athens. His research has been recognized with an oral presentation at NeurIPS and best paper awards and nominations from the IEEE/ACM International Conference on Human-Robot Interaction, the International Conference on Intelligent Robots and Systems, and the International Symposium on Robotics.

  • 09:30
    Tatiana Likhomanenko

    CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings

    Tatiana Likhomanenko - Research Scientist - Apple

    Without positional information, attention-based Transformer neural networks are permutation-invariant. Absolute or relative positional embeddings are the most popular ways to feed Transformer models with positional information. Absolute positional embeddings are simple to implement, but suffer from generalization issues when evaluated on sequences longer than those seen at training time. Relative positions are more robust to input length changes, but are more complex to implement and yield inferior model throughput due to extra computational and memory costs. In this talk, we will discuss an augmentation-based approach (CAPE) for absolute positional embeddings, which keeps the advantages of both absolute positional embeddings (simplicity and speed) and relative positional embeddings (better generalization).
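
    A hedged sketch of the augmentation idea described above (parameter names and ranges are illustrative assumptions, not the exact CAPE recipe): keep sinusoidal absolute embeddings, but randomly shift, jitter, and rescale the positions during training, and use plain positions at evaluation time.

```python
import math
import torch

def sinusoidal_embedding(positions, dim):
    # positions: (batch, seq_len) of continuous positions
    inv_freq = torch.exp(-math.log(10000.0) * torch.arange(0, dim, 2).float() / dim)
    angles = positions.unsqueeze(-1) * inv_freq              # (batch, seq_len, dim/2)
    return torch.cat([angles.sin(), angles.cos()], dim=-1)   # (batch, seq_len, dim)

def augmented_positions(batch, seq_len, max_global_shift=5.0,
                        max_local_jitter=0.5, max_log_scale=0.2):
    pos = torch.arange(seq_len, dtype=torch.float32).expand(batch, seq_len).clone()
    pos = pos + (torch.rand(batch, 1) * 2 - 1) * max_global_shift        # global shift
    pos = pos + (torch.rand(batch, seq_len) * 2 - 1) * max_local_jitter  # local jitter
    pos = pos * torch.exp((torch.rand(batch, 1) * 2 - 1) * max_log_scale)  # global scaling
    return pos

# Training time: augment positions; evaluation time: use plain arange positions.
emb = sinusoidal_embedding(augmented_positions(4, 100), dim=256)   # (4, 100, 256)
```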

    Tatiana is a Research Scientist in the Machine Learning Research group at Apple. Prior to Apple, Tatiana was an AI Resident and later a Postdoctoral Research Scientist on the speech recognition team at Facebook AI Research. Tatiana received her PhD in mixed-type partial differential equations from Moscow State University in 2017. For four years she worked on applications of Machine Learning to High Energy Physics as a Research Scientist in the joint lab of Yandex and CERN, and later at the startup NTechLab, a leader in face recognition. The main focus of her recent research is Transformer generalization and speech recognition (semi-, weakly- and unsupervised learning, domain transfer, and robustness).

  • NEURAL NETWORK REPRESENTATION & GENERALIZATION

  • 09:50
    Forough Arabshahi

    Neuro-Symbolic Learning Algorithms for Automated Reasoning

    Forough Arabshahi - Senior Research Scientist - Meta

    Humans possess impressive problem solving and reasoning capabilities, be it mathematical, logical or commonsense reasoning. Computer scientists have long had the dream of building machines with similar reasoning and problem solving abilities to humans. Currently, there are three main challenges in realizing this dream. First, the designed system should be able to extrapolate in a zero-shot way and reason in scenarios that are much harder than what it has seen before. Second, the system’s decisions/actions should be interpretable, so that humans can easily verify if the decisions are due to reasoning skills or artifacts/sparsity in data. Finally, even if the decisions are easily interpretable, the system should include some way for the user to efficiently teach the correct reasoning when it makes an incorrect decision. We discuss how we can address these challenges using instructable neuro-symbolic reasoning systems. Neuro-symbolic systems bridge the gap between two major directions in artificial intelligence research: symbolic systems and neural networks. We will see how these hybrid models exploit the interpretability of symbolic systems to obtain explainability. Moreover, combined with our developed neural networks, they extrapolate to harder reasoning problems. Finally, these systems can be directly instructed by humans in natural language, resulting in sample-efficient learning in data-sparse scenarios.

    Forough Arabshahi is a Senior Research Scientist at Meta Platforms (Facebook) Inc. Her research focuses on developing sample-efficient, robust, explainable and instructable machine learning algorithms for automated reasoning. Prior to joining Meta, she was a postdoctoral researcher working with Tom Mitchell at Carnegie Mellon University. During her postdoc, she developed an explainable neuro-symbolic commonsense reasoning engine for the learning by instruction agent (LIA). During her PhD with Animashree Anandkumar and Sameer Singh she developed sample-efficient and provably consistent latent variable graphical models and deep learning models that extrapolate to harder examples by extracting hierarchical structures from examples. The grand goal of her research is to build a reasoning system that learns problem solving strategies by incorporating real world examples, symbolic knowledge from the problem domain as well as human natural language instructions and demonstrations.

  • APPLIED DEEP LEARNING

  • 10:10
    Maithra Raghu

    Explainability Considerations for AI Design

    Maithra Raghu - Sr Research Scientist - Google Brain

    Many AI explainability techniques focus on considerations around AI deployment. But another crucial challenge for AI is the complex design process, spanning data, model choices, and algorithms for learning. In this discussion, we overview some of the important considerations for explainability to help with AI design. What might explainability in the design process be defined as? What are some of the approaches being developed, and what are their practical takeaways? What are the key open questions looking forward?

    Maithra Raghu is a Senior Research Scientist at Google Brain and finished her PhD in Computer Science at Cornell University. Her research broadly focuses on enabling effective collaboration between humans and AI, from design to deployment. Specifically, her work develops algorithms to gain insights into deep neural network representations and uses these insights to inform the design of AI systems and their interaction with human experts at deployment. Her work has been featured in many press outlets including The Washington Post, WIRED and Quanta Magazine. She has been named one of the Forbes 30 Under 30 in Science, a 2020 STAT Wunderkind, and a Rising Star in EECS.

  • 10:30

    COFFEE & NETWORKING BREAK

  • 11:00
    Tonya Custis

    AI for Design & Make at Autodesk

    Tonya Custis - Director of AI Research - Autodesk

    At Autodesk, we empower our customers with design & make software in the architecture, engineering, construction, manufacturing, and media & entertainment industries. Autodesk Research's AI Lab is an active member of the AI research community, publishing at top conferences and collaborating with academic labs. We do research in geometric deep learning, reinforcement learning, multimodal deep learning, and other AI approaches that can be applied to our customers' workflows to make them more informed, efficient, and creative. This talk will highlight some of our research targeted at augmenting the design & make process across industries, as well as future directions in human-AI collaboration for design & make.

    Dr. Tonya Custis has over 15 years of experience performing applied Artificial Intelligence research and leading AI research teams & projects at Autodesk, Thomson Reuters, eBay, and Honeywell. Tonya earned a Ph.D. in Linguistics, M.S. in Computer Science, M.A. in Linguistics, and a B.A. in Music. In her current position as the Director of AI Research at Autodesk, she leads a team of research scientists carrying out foundational and applied research in AI technologies in the Manufacturing, AEC, and Media & Entertainment industries. Her research interests include Natural Language Processing, Discourse Understanding, Information Retrieval, Machine Learning, Geometric Deep Learning, and Multimodal Deep Learning.

  • 11:20
    Ryan Alimo

    Everyday Spin-offs from Technology Developed for NASA Missions

    Ryan Alimo - Lead Machine Learning Scientist - NASA Jet Propulsion Laboratory

    Dr. Alimo has been working on the development and analysis of machine learning algorithms for increasing the efficiency of autonomous systems, with a wide range of applications from swarm spacecraft to mobile apps on smartphones running on embedded processor units and/or computer clouds. He has been developing augmented intelligence (AI) software that increases the efficiency of human operators rather than taking their jobs.

    3 Takeaways:

    • The future of Mars exploration with autonomous intelligent systems

    • How a JPL scientist saw applications of computer vision and AI that are powerful for terrestrial use

    • Applications of CV and AI found in PropTech and home remodelling

    Dr. Ryan Alimo is an ML/AI scientist at NASA's JPL and founder of OPAL AI Inc, a technology startup in Silicon Beach. Dr. Alimo's research interests span the theory and practice of data-driven optimization, machine vision, and swarm autonomy. He received NASA JPL's Voyager Award in 2019 and Discovery Award in 2020, and his research was featured in JPL's technology highlights in 2019 and 2020. He obtained his PhD from UC San Diego in data-driven optimization, followed by a postdoc at Caltech's CAST in autonomous systems, before joining NASA's JPL and founding OPAL AI. He is passionate about human-AI Mars exploration and building habitats on the Moon and beyond.

  • 11:40
    Sudeep Das

    WHAT/IF: Leveraging Causal Machine Learning at Netflix

    Sudeep Das - Machine Learning Lead - Netflix

    Most Machine Learning algorithms used in Personalization and Search, including Deep Learning, are purely associative and learn from the correlations between features and outcomes. In many scenarios, going beyond this purely associative nature and understanding the causal mechanism between taking a certain action and the resulting outcome becomes key to decision making. Causal Inference gives us a principled way of learning such relationships, and when married with machine learning, it becomes a powerful tool that can be leveraged at scale. In this talk, we will give a high-level overview of how Netflix is using Causal Machine Learning in various applications. We will go over the different flavors of Causal ML techniques we are exploring at Netflix, the learnings and the challenges, and discuss future directions.
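
    For flavor, a minimal "T-learner" sketch of marrying causal inference with off-the-shelf ML (the data, treatment, and effect size below are synthetic assumptions, not Netflix's setup): fit separate outcome models for treated and untreated populations and compare their "what if" predictions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
features = rng.normal(size=(n, 5))                 # synthetic user/context features
treated = rng.integers(0, 2, size=n)               # hypothetical action (1 = treated)
outcome = features[:, 0] + 0.3 * treated + rng.normal(scale=0.5, size=n)

# Fit separate outcome models for the treated and control populations.
model_t = GradientBoostingRegressor().fit(features[treated == 1], outcome[treated == 1])
model_c = GradientBoostingRegressor().fit(features[treated == 0], outcome[treated == 0])

# "What if": predicted outcome with vs. without the action, per individual.
ite = model_t.predict(features) - model_c.predict(features)
print("estimated average treatment effect:", round(float(ite.mean()), 3))  # ~0.3 by construction
```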

    Sudeep is a Machine Learning Area Lead at Netflix, where his main focus is on developing the next generation of machine learning algorithms to drive the personalization, discovery, and search experience in the product. Apart from algorithmic work, he also takes a keen interest in data visualization. Sudeep has more than fifteen years of experience in machine learning, applied both to large-scale scientific problems and in industry. He holds a PhD in Astrophysics from Princeton University.

  • Aish Fenton

    WHAT/IF: Leveraging Causal Machine Learning at Netflix

    Aish Fenton - Director of Machine Learning - Netflix

    Aish is a Director of Machine Learning at Netflix. His org is responsible for the core recommendation and search algorithms used at Netflix. Aish has over 23 years of experience at the intersection of mathematics and software engineering. Prior to Netflix, Aish led the data science teams at OpenTable, Foodspotting, and iVistra, and founded the company vWork, solving large-scale optimization problems.

  • 12:00

    LUNCH

  • 13:00
    Dawn Lu

    The Evolution of DoorDash’s Recommendations Algorithm for Grocery Substitutions

    Dawn Lu - Senior Data Scientist, Machine Learning - DoorDash

    Building a recommendations model from scratch is a challenge that many start-ups face. When DoorDash first entered the grocery & convenience space in 2020, it was critical to recommend good substitutes when certain items that customers ordered were out of stock at the store. In this talk, I'll provide an overview of how we tackled the cold-start problem and how we evolved the recommendations model over time as more data was collected. I'll dive into three distinct phases of the model evolution, the final of which uses a DLRM model. Along the way, we'll uncover some interesting consumer patterns, such as whether Pepsi and Coca-Cola are substitutable.

    Dawn Lu is a senior data scientist on the Machine Learning team at DoorDash, where she has built several foundational ML systems over the past 4 years. As the first data scientist to work on DoorDash’s new verticals team, Dawn led the development of fulfillment initiatives for the grocery & convenience verticals. Prior to that, she focused on building predictive models to power DoorDash’s logistics engine, such as architecting a new driver pay model and improving ETA accuracy during hyper-growth. She holds a bachelor’s degree in Economics from Yale University.

  • 13:20
    Vipul Raheja

    Understanding Iterative Revision Patterns in Writing

    Vipul Raheja - Research Scientist - Grammarly

    Writing is, by nature, a strategic, adaptive, and, more importantly, iterative process. A crucial part of writing is editing and revising the text. Previous work on text revision has focused on defining edit intention taxonomies within a single domain or developing computational models with a single level of edit granularity, such as sentence-level edits, which differs from human revision cycles. This talk will describe IteraTeR: the first large-scale, multi-domain, edit-intention-annotated corpus of iteratively revised text. In particular, IteraTeR is collected based on a new framework to comprehensively model iterative text revisions that generalizes to a variety of domains, edit intentions, revision depths, and granularities. When we incorporate our annotated edit intentions, both generative and action-based text revision models improve significantly on automatic evaluations. Through this work, we are able to better understand the text revision process, making vital connections between edit intentions and writing quality, and enabling the creation of diverse corpora to support computational modeling of iterative text revisions.

    Vipul Raheja is a Research Scientist at Grammarly. He works on developing robust and scalable Natural Language Processing and Deep Learning approaches for building the next generation of intelligent writing assistance systems, focused on improving the quality of written communication. His research interests lie at the intersection of text editing and controllable text generation. He holds a Masters in Computer Science from Columbia University, where he was affiliated with the Center for Computational and Learning Systems. He received a dual-degree in Computer Science and Engineering from IIIT Hyderabad.

  • 13:40
    Arne Stoschek

    AI Applications to Enable Autonomous Flight

    Arne Stoschek - Head of Autonomy & Machine Learning - A³ by Airbus

    Annual air travel volume is expected to double by 2036, putting significant strain on the industry to meet rising demand for new aircraft and their operation. Adaptable autonomous systems will be required to successfully sustain the future of aviation travel. During this session, Arne Stoschek, who leads Acubed's autonomy technologies effort, will address how various AI applications – such as machine learning and computer vision – can advance progress toward autonomous flight by developing decision-making software that can safely navigate the world around an aircraft.

    At Acubed, Airbus' innovation center in Silicon Valley, Arne leads the effort focused on building autonomous flight and machine learning solutions to enable autonomous aircraft operations. Passionate about robotics and autonomous electric vehicles, Arne has held engineering leadership positions at global companies such as Volkswagen/Audi and Infineon, and at aspiring Silicon Valley startups, namely Lucid Motors/Atieva, Knightscope, and Better Place. Arne holds a Doctor of Philosophy in Electrical and Computer Engineering from the Technical University of Munich and previously held a computer vision and data analysis research position at Stanford University.

  • 14:00

    Do Deep Generative Models Know What They Don't Know?

  • 14:20

    PANEL: How Can We Best Harness The Potential of Machine Learning & Overcome Challenges for Innovative Applications

  • Saurabh Khanwalkar

    Panelist

    Saurabh Khanwalkar - VP of Engineering, Machine Learning Products - Course Hero

    Saurabh Khanwalkar is the VP of Machine Learning at Course Hero, an EdTech unicorn. At Course Hero, Saurabh leads the Machine Learning, Natural Language Processing, Search, and Recommendations teams and is responsible for shipping intelligent and personalized learning experiences to millions of students and educators. Saurabh has over 18 years of R&D and leadership experience in Machine Learning products across diverse industries such as DARPA research, social media analytics, consumer electronics, Healthtech, and Edtech. Saurabh has technical publications and patents in Speech Processing, Natural Language Processing, and Information Retrieval, and is on the review committee for NAACL, COLING, and other academic conferences offering industry-track Machine Learning papers. Saurabh is passionate about leveraging Machine Learning to solve the hardest, KPI-impacting business problems and is a strong believer that ML can be a 10x force multiplier for customer success and monetization.

  • Ignacio G López-Francos

    Panelist

    Ignacio G López-Francos - Senior Research Engineer - NASA Ames Research Center

    Ignacio G. López-Francos is a Senior Research Engineer with the Intelligent Systems Division at the NASA Ames Research Center. His current research focuses on assured autonomy, rad-tolerant neuromorphic computing, and vision-based navigation. He also supports VIPER, NASA's first lunar robotic rover mission, by developing ML-based tools to enhance lunar orbital imagery. Prior to that, he worked as an AI Research Scientist at NASA's Space Portal Office, where he led the research and development of AI applications supporting NASA's science and exploration objectives. He has a hybrid background in engineering and computer science and over eight years of industry experience at companies such as Meta and United Airlines.

  • 15:00

    END OF EVENT
