30 - 31 January 2020

Applied AI Summit schedule

RE•WORK San Francisco Summit

  • 08:00

    REGISTRATION & LIGHT BREAKFAST

  • 09:00
    Sarah Catanzaro

    WELCOME

    Sarah Catanzaro - Partner - Amplify Partners

    Sarah is a Partner at Amplify Partners where she focuses on early-stage investments in machine intelligence, data science, and data management. Sarah has several years of experience in developing data acquisition strategies and leading machine and deep learning-enabled product development. As head of data at Mattermark, she led a team to collect and organize information on over one million private companies; as a consultant at Palantir and as an analyst at Cyveillance, she implemented analytics solutions for municipal and federal agencies; and as a program manager at the Center for Advanced Defense Studies, she directed projects on adversary behavioral modeling and Somali pirate network analysis. Sarah earned a B.A. in International Security Studies from Stanford University.

  • CURRENT LANDSCAPE OF APPLIED AI

  • 09:15
    Eddan Katz

    Unlocking Public Sector Adoption of AI through Government Procurement

    Eddan Katz - Project Lead: AI & ML - World Economic Forum

    Eddan Katz is the Project Lead on Digital Protocol Networks at the World Economic Forum, where he facilitates the norms-setting process and dissemination of the protocols advanced by the projects at the Center for the Fourth Industrial Revolution. Eddan has previously served as the International Affairs Director at the Electronic Frontier Foundation, where he worked on advocacy initiatives at international multi-stakeholder decision-making bodies in the areas of cybercrime, data privacy, intellectual property, and freedom of expression. He was the first Executive Director of the Information Society Project at Yale Law School where he taught Cyberlaw and founded the Access to Knowledge initiative. He has a J.D. from UC Berkeley Law School and a B.A. in Philosophy from Yale.

  • APPLYING AI METHODS TO SOLVE CHALLENGES IN INDUSTRY

  • USE CASES: DEEP LEARNING

  • 09:35
    Frankie Cancino

    Understanding the Behavior of Time Series Data Using the Matrix Profile and Deep Learning

    Frankie Cancino - Senior Engineer & Data Scientist - Target

    Target is a large retail company with over 1,800 stores in the U.S. Because of this scale, it can be difficult to find anomalous behavior in data or pinpoint what metrics could potentially be correlated. In order to understand the behavior of this data at scale, Target open-sourced the Python library matrixprofile-ts. Using this library, we can layer models on top of the Matrix Profile to find when anomalous behavior occurs or when different metrics in different areas of the company affect each other. This talk will briefly go over the matrixprofile-ts library and examples of where deep learning models can be applied to complement it.
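
    As a rough illustration of the idea (not Target's implementation, and using a brute-force NumPy computation rather than the faster algorithms in matrixprofile-ts), the matrix profile records, for every subsequence of a time series, the distance to its nearest non-trivial neighbor; windows with unusually high values have no close match anywhere else and are natural anomaly candidates:

```python
# Minimal sketch, not Target's implementation: a brute-force matrix profile in
# NumPy used to flag anomalous subsequences. The open-source matrixprofile-ts
# library computes the same quantity with much faster algorithms.
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / (s if s > 0 else 1.0)

def matrix_profile(ts, m):
    """For each length-m window, distance to its nearest non-trivial neighbor."""
    n = len(ts) - m + 1
    subs = np.array([znorm(ts[i:i + m]) for i in range(n)])
    mp = np.full(n, np.inf)
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        lo, hi = max(0, i - m // 2), min(n, i + m // 2 + 1)
        d[lo:hi] = np.inf                     # exclude the trivial match zone
        mp[i] = d.min()
    return mp

rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.1 * rng.normal(size=2000)
ts[1200:1230] += 3.0                          # inject an anomalous bump
mp = matrix_profile(ts, m=50)
print("most anomalous window starts at index", int(np.argmax(mp)))
```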

    Frankie Cancino is a Senior Engineer and Data Scientist at Target, a Fortune 50 company, in Minneapolis. While working at Target, he is also a graduate student at the University of Minnesota, earning a Master of Science degree in Business Analytics. Frankie is the founder and organizer of the Data Science Minneapolis group, a community that brings together professionals, researchers, data scientists, and AI enthusiasts, and that is dedicated to learning, teaching, and building technologies related to data science.

  • 09:55

    Enhance Recommendations in Uber Eats with Graph Convolutional Networks

  • Piero Molino

    CO-PRESENTING

    Piero Molino - Senior Research Scientist & Co-Founder - Uber AI

    Uber Eats has become synonymous with online food ordering. With an increasing selection of restaurants and dishes in the app, personalization is crucial to driving growth. One aspect of personalization is better recommendation of restaurants and dishes to users so they can get the right food at the right time.

    In this talk, we present how to augment the ranking models with better representations of users, dishes, and restaurants. Specifically, we show how to leverage the graph structure of Uber Eats data to learn node embeddings of various entities using state-of-the-art Graph Convolutional Networks implemented in TensorFlow. We also show that these methods outperform standard Matrix Factorization approaches for our use case.

    Key Takeaway: The audience will learn how to build deep learning models on graph data using Graph Convolutional Networks to obtain better entity representations for recommendation. They will also learn about strategies to scale the model to very large datasets.
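
    As a toy sketch of the core operation (this is not the Uber Eats model or its TensorFlow implementation; the graph, features, and weights below are made up), a single graph-convolution layer produces node embeddings by mixing each node's features with those of its neighbors through a normalized adjacency matrix:

```python
# Hypothetical example: one graph-convolutional layer in NumPy.
import numpy as np

def gcn_layer(adj, feats, weight):
    """H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt           # symmetric normalization
    return np.maximum(a_norm @ feats @ weight, 0.0)    # aggregate, project, ReLU

# Toy graph: 3 users and 2 dishes, edges = past orders (symmetric adjacency).
adj = np.array([[0, 0, 0, 1, 0],
                [0, 0, 0, 1, 1],
                [0, 0, 0, 0, 1],
                [1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))                        # initial node features
w = rng.normal(size=(8, 4))                            # layer weights (fixed here)
embeddings = gcn_layer(adj, feats, w)                  # one message-passing step
print(embeddings.shape)                                # (5, 4) node embeddings
```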

    Biography: Piero Molino is a Senior Research Scientist at Uber AI with a focus on machine learning for language and dialogue. Piero completed a PhD on Question Answering at the University of Bari, Italy. He founded QuestionCube, a startup that built a framework for semantic search and QA, worked for Yahoo Labs in Barcelona on learning to rank and for IBM Watson in New York on natural language processing with deep learning, and then joined Geometric Intelligence, where he worked on grounded language understanding. After Uber acquired Geometric Intelligence, he became one of the founding members of Uber AI Labs. He currently leads the development of Ludwig, a code-free deep learning framework.

  • Ankit Jain

    CO-PRESENTING

    Ankit Jain - Sr Research Scientist - Uber

    Ankit currently works as a Senior Research Scientist at Uber AI Labs, the machine learning research arm of Uber. His work primarily involves the application of deep learning methods to a variety of Uber's problems, ranging from forecasting and food delivery to self-driving cars. Previously, he worked in a variety of data science roles at Bank of America, Facebook, and other startups. He has co-authored a book on machine learning titled "TensorFlow Machine Learning Projects". Additionally, he has been a featured speaker at many top AI conferences and universities across the US, including UC Berkeley and the O'Reilly AI Conference. He completed his MS from UC Berkeley and BS from IIT Bombay (India).

  • 10:15

    COFFEE

  • MACHINE LEARNING

  • 10:55
    Vladimir Alves

    In-Storage Distributed Machine Learning for the Edge

    Vladimir Alves - CTO & Co-Founder - NGD

    Cloud-only architectures will soon be unable to keep up with the volume and velocity of data across the network, gradually reducing the value that can be created from these investments. Edge computing can help solve the limitations of current infrastructure and enable mission-critical, data-dense IoT and other advanced digital use cases by reducing or eliminating data movement and by addressing latency and energy-efficiency bottlenecks. To address these problems in the context of ML applications, it is necessary to perform training and inference at the edge, transmitting only processed data (metadata), or full data only when necessary. Doing this, however, faces the limitation that most edge devices lack strong computing capabilities and, even if they had them, would consume too much energy.

    Big data analytics solutions, such as Hadoop, have addressed the performance challenge by using a distributed architecture based on a new paradigm that relies on moving computation closer to data. Similarly, by pushing the “move computation to data” paradigm to its ultimate limit we enable highly efficient and flexible in-storage processing capability in solid state drives, i.e., computational storage. By moving data processing tasks closer to where the data resides, we dramatically reduce the storage bandwidth bottleneck, data movement cost, and improve the overall energy efficiency creating an ideal platform for Machine Learning at the Edge.

    NGD’s computational storage device (CSD) provides a seamless programming model based on a Linux OS and high-level programming languages thanks to a complete standard network software and protocol stack. It is the first commercially available SSD that can be configured to run a server-like operating system (e.g., Linux), allowing general application developers to fully leverage existing tools and libraries to minimize the effort to create and maintain applications running in-storage.

    This talk proposes a framework for distributed, in-storage training of neural networks on heterogeneous clusters of computational storage devices. Such devices contain multi-core application processors as well as SIMD engines and virtually eliminate data movement between the host and storage, resulting in both improved performance and power savings. More importantly, this in-storage style of training ensures that private data never leaves the storage while fully controlling the sharing of public data. Experimental results have shown up to a 2.7x speedup, a 69% reduction in energy consumption, and no significant loss in accuracy.

    Vladimir Alves obtained his PhD in Microelectronics when the 500nm CMOS process was all the hype. Since then, Vladimir has worked in academia, startups, and multinational companies, architecting and developing systems-on-chip. For the last 15 years he has focused on solid-state storage technology and is now the co-founder and Chief Technology Officer at NGD Systems, helping create innovative technology that pushes the boundaries of storage and computation.

  • 11:15
    Seth Weidman

    ML to Stop Synthetic Fraud

    Seth Weidman - Data Scientist - Sentilink

    Sentilink uses machine learning to stop synthetic fraud. Specifically, the Sentilink API allows lenders to determine in real time whether someone applying for a loan is a real person, or merely a "synthetic identity" created by a fraudster. In this talk, Sentilink data scientist Seth Weidman will discuss the subtle signals Sentilink uses to detect such fraud, and share lessons the team has learned about managing real-time machine learning models in production.

    Seth Weidman is a data scientist at Sentilink, where he works on the company's algorithms to stop synthetic fraud. He has held many roles in the data analytics and machine learning space, from working as a business analyst in management consulting to doing machine learning engineering at Facebook, and recently published an introductory book on deep learning with O'Reilly. He has degrees in mathematics and economics from the University of Chicago.

  • USE CASES: FORECASTING & RECOMMENDATIONS

  • 11:35
    Erin Gustafson

    Duolingo’s Growth Model: A Framework for Understanding, Exploration, and Forecasting

    Erin Gustafson - Senior Data Scientist - Duolingo

    Duolingo's evolution has required us to have a more fine-grained understanding of the levers that drive user growth. Our Growth Model is a framework that allows us to better characterize our topline DAU and MAU metrics and model how they change over time. Using this framework, we have identified new strategies to unlock user growth, built a robust forecasting engine, and explored new opportunities for product development.

    Erin Gustafson is a Senior Data Scientist at Duolingo, where she works on product-focused exploratory modeling, forecasting, and experimentation. She has experience working in Growth and Monetization, where she applies her statistics and ML skills to improving user/product understanding and making data-informed recommendations on future product development. Before joining Duolingo, Erin completed her PhD in linguistics and conducted research on bilingualism.

  • 11:55
    Kamiya Motwani

    Personalizing the Online Grocery Substitution Experience

    Kamiya Motwani - Staff Data Scientist - Walmart Labs

    In this talk, I will present a broad overview of personalizing recommendations in online grocery. Unlike typical recommender systems used in e-commerce that have roots in collaborative filtering, recommending groceries to customers poses unique challenges. How do we handle periodicity in purchases? How do we blend periodicity in purchases with a broad prior based on seasonality? Should recommendations be tailored at an item level, or more broadly at basket level? I will provide a high-level, end-to-end overview of online grocery shopping and, as a case study, deep insights into personalized substitutions. I will describe our state-of-the-art neural-network-based model that allows order pickers to substitute out-of-stock items during order fulfillment. In a system where we strive to optimize substitutions for our customers, how do we model the inherent bias, akin to a noisy channel, introduced by order pickers? Apart from providing details on personalized substitutions, I will also expand on recent work that treats basket-level substitutions as a multi-objective problem that takes into account objectives such as cost control and recipe completion.

    Kamiya Motwani is a Staff Data Scientist and manager at Walmart Labs India, where she is currently a data science lead on the Personalization team. She has also worked extensively on click prediction for advertisements and has rich practical experience building machines that learn from data. Prior to Walmart Labs, she worked at organizations such as Oracle Corporation and Yahoo Inc. She holds a Master's degree in Computer Science from the University of Wisconsin-Madison, where she focused extensively on machine learning and probabilistic modelling. Kamiya has also filed several patents in the area of recommender systems and published papers at premier conferences including NIPS and IEEE ICASSP.

  • USE CASES: NATURAL LANGUAGE PROCESSING

  • 12:15
    Muhammed Ahmed

    Multimodal Learning For Campaign Classification

    Muhammed Ahmed - Data Scientist - Mailchimp

    Mailchimp is the world's largest marketing automation platform. Over a billion emails are sent through it every day, which raises the question: what exactly are its users sending? We'll do a deep dive into the way Mailchimp combines data across natural language and image modalities to generate numerical representations of email campaigns and make sense of users' content, which allows it to empower small businesses by surfacing personalized recommendations.

    Muhammed Ahmed is a Data Scientist at Mailchimp who specializes in natural language processing and deep learning. At Mailchimp, he has implemented several state-of-the-art deep learning models (ELMo, BERT, XLNet, RoBERTa, T5) for natural language understanding. His deployed models have been used for email campaign text classification tasks like spam detection and predicting user intent. Most recently, his focus has been on developing multimodal models which generate high quality numerical representations using both text and images.

  • 12:35
     Viswanath Sivakumar

    Understanding Text on Images at Scale

    Viswanath Sivakumar - Researcher - Facebook AI Research (FAIR)

    Understanding text that appears on images in social media platforms is important not just for improving experiences, such as incorporating that text into screen readers for the visually impaired, but also for keeping the community safe by proactively identifying inappropriate or harmful content in a way that pure object detection or NLP systems alone cannot.

    This talk describes the challenges behind building an industry-scale scene-text extraction system at Facebook that processes over 2 billion images each day. I'll cover the deep learning methods behind building models that detect text in arbitrary orientations with high accuracy, and how simple convolutional models work extremely well for recognizing text in over 50 languages. A critical aspect of the work is scaling up these models for efficient server-side inference. I'll dive into quantization methods for running neural networks with 8-bit integer weights and activations instead of 32-bit floating point, and the challenges involved in bridging the accuracy gap.
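
    As a hedged sketch of the quantization idea (not Facebook's production pipeline; shapes and data are arbitrary), symmetric post-training quantization maps float weights and activations to 8-bit integers with a per-tensor scale, runs the matrix multiply with integer accumulation, and dequantizes the result:

```python
# Hypothetical example: symmetric per-tensor int8 quantization in NumPy.
import numpy as np

def quantize_int8(x):
    scale = float(np.abs(x).max()) / 127.0
    scale = scale if scale > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64)).astype(np.float32)      # fp32 weights
a = rng.normal(size=(1, 128)).astype(np.float32)       # fp32 activations
qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)
# int8 x int8 matmul with int32 accumulation, then dequantize with both scales
y_quant = (qa.astype(np.int32) @ qw.astype(np.int32)) * (sa * sw)
y_fp32 = a @ w
print("max abs error vs fp32:", float(np.abs(y_quant - y_fp32).max()))
```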

    I'm a Researcher at Facebook AI Research working on machine learning for systems, where I'm currently exploring reinforcement learning to improve the performance of computer networks. Prior to that, I was part of Facebook's Applied Computer Vision Research group, where I founded and led the Rosetta project, a large-scale machine learning system for understanding text in images and videos. I have also made extensive improvements to the low-level performance and efficiency of computer vision models in production.

  • 12:55

    LUNCH

  • 13:55
    Ananth Sankar

    Deep Neural Networks for Search and Recommendation Systems at LinkedIn

    Ananth Sankar - Principal Staff Engineer - LinkedIn

    Deep neural networks like convolutional neural networks (CNNs), recurrent neural networks (RNNs), and attention-based encoder-decoder networks have made a big impact in several natural language processing (NLP) applications, such as sentence classification, part-of-speech tagging, and machine translation. In recent years, models like BERT and its variants have improved the state of the art in NLP through contextual word and sentence embeddings. Another attraction of these models is that they can be fine-tuned for target applications.

    In this talk, I will describe how we have successfully used deep neural networks for natural language processing and understanding at LinkedIn. In particular, I will discuss our work in query and document understanding, as well as document ranking for search and recommendation systems.

    Ananth Sankar is a Principal Staff Engineer in the Artificial Intelligence group at LinkedIn, where he works on multimedia content understanding and natural language processing. During his career, he has also made many R&D contributions in the area of speech recognition. He has taught courses at Stanford and UCLA, given several invited talks, co-authored more than 50 refereed publications, and has 10 accepted patents.

  • USE CASES: REINFORCEMENT LEARNING

  • 14:15
    Rakesh Rana

    Reinforcement Learning for Exchange Rate Forecasting

    Rakesh Rana - Lead Data Scientist - Nordea

    The ability to make better forecasts of future exchange rates is highly valuable for investors exposed to assets or liabilities in foreign currencies. Most large organizations, in financial and non-financial domains alike, have to manage their foreign currency exposures to achieve desired financial returns and/or minimize risk. In this project we compared three different approaches to medium- and long-term exchange rate forecasting: we evaluated the performance of econometric models, time series models, and reinforcement learning agents based on the Q-learning algorithm. We tested the models on different currency pairs including EUR/USD, GBP/USD, SEK/EUR, and NOK/EUR, using market data from 2000-2018 covering different economic conditions.
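
    For readers unfamiliar with Q-learning, the following is a minimal, hypothetical sketch (not the Nordea models; the state definition, action set, and synthetic return series are illustrative assumptions only) of a tabular agent that learns a position from discretized recent returns:

```python
# Hypothetical example: tabular Q-learning on a synthetic exchange-rate series.
# State = sign pattern of the last two returns, actions = short / flat / long,
# reward = position taken times the next period's return.
import numpy as np

rng = np.random.default_rng(0)
returns = 0.001 * rng.standard_normal(5000)        # synthetic daily FX returns

n_states, n_actions = 4, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def state(t):
    return int(returns[t - 1] > 0) * 2 + int(returns[t - 2] > 0)

for t in range(2, len(returns) - 1):
    s = state(t)
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    position = a - 1                               # map {0, 1, 2} -> {-1, 0, +1}
    reward = position * returns[t + 1]
    s_next = state(t + 1)
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

print("learned Q-table:\n", Q)
```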

    Rakesh Rana works as Lead Data Scientist at Nordea Life & Pensions, Sweden. His work focuses on applying AI to solve business problems and create customer value. Rakesh received his M.S. degree in Finance and PhD in Computer Science from Chalmers/University of Gothenburg, Sweden. His work and research interests revolve around using data science and machine learning algorithms mainly within the financial domain.

  • 14:35
    Rein Houthooft

    Generating the Best Game Experience through AI

    Rein Houthooft - Head of AI - Happy Elements

    Happy Elements is the producer of one of the largest active mobile games worldwide. Within our AI lab, we aim to optimize the gameplay experience of each player individually at truly massive scale. Towards this goal, we research and develop machine learning algorithms and systems for dynamic game adaptation. This talk elaborates on how we achieve real-time game content personalization by leveraging high volumes of data through deep contextual bandits, as well as our current research projects in applied deep reinforcement learning.

    Key Takeaways: Optimizing game design through AI can improve user LTV/retention and enhance player experience.
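
    As one hedged way such real-time personalization can be framed (this is not Happy Elements' system; the arm count, context features, and simulated reward are assumptions), a contextual bandit such as LinUCB picks a content variant for each player context and updates a per-variant model from the observed reward:

```python
# Hypothetical example: a disjoint LinUCB contextual bandit.
import numpy as np

class LinUCB:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm X^T X + I
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm X^T y

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

rng = np.random.default_rng(0)
true_w = rng.normal(size=(3, 5))                         # hidden per-variant preferences
bandit = LinUCB(n_arms=3, dim=5)
for _ in range(2000):
    x = rng.normal(size=5)                               # player context features
    arm = bandit.select(x)                               # choose a content variant
    reward = true_w[arm] @ x + 0.1 * rng.normal()        # simulated engagement signal
    bandit.update(arm, x, reward)
```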

    Rein Houthooft leads the Happy Elements AI team. Originally from Belgium, Rein received his PhD in EECS from Ghent University. Part of his research was conducted as a researcher at OpenAI and at the Berkeley AI Research lab of UC Berkeley, with a focus on deep reinforcement learning and generative models. Previously, Rein was involved in organizing the annual NeurIPS Deep RL Workshop.

  • 14:55
    Gary Ren

    How Machine Learning Powers On-Demand Logistics At DoorDash

    Gary Ren - Machine Learning Engineer - DoorDash

    DoorDash has a complex, three-sided, and real-time marketplace that presents many challenging problems where machine learning has a lot of impact. This talk will give an overview of where machine learning is used in DoorDash, and then dive deeper into how we use machine learning to power our logistics engine, which is the system that powers the fulfillment of deliveries. Topics covered will include the vehicle routing problem with trillions of combinations, delivery time predictions for all your favorite restaurants, and our exploration with reinforcement learning for logistics.

  • 15:15

    COFFEE

  • USE CASES: COMPUTER VISION

  • 16:00
    Andrew Zhai

    Recommendations and Search at Pinterest + Embeddings

    Andrew Zhai - Staff Software Engineer - Pinterest

    Over 300 million users come to Pinterest monthly to discover ideas for their creative outlets through our recommendation and search products. Embeddings are a fundamental technology in these systems, powering the match engine to surface relevant and engaging content. We represent all aspects of the Pinterest ecosystem (pins, users, text, and images) under this common representation, enabling us to learn relationships across entity types to jointly optimize for product goals. Join us as we discuss how recent advancements in computer vision, natural language processing, and graph convolutional neural networks power embeddings and together enabled both new product experiences such as Pinterest Lens and improved performance of our core recommendation systems. We also discuss key infrastructure challenges and our solutions to scaling embedding search to web-scale systems.
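
    As a minimal sketch of the retrieval step only (not Pinterest's serving stack, which relies on approximate nearest-neighbor indexes at web scale; the corpus and query here are random placeholders), candidates can be ranked by cosine similarity between unit-normalized embeddings:

```python
# Hypothetical example: brute-force cosine-similarity retrieval over embeddings.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
corpus = normalize(rng.normal(size=(100_000, 128)))   # e.g. pin embeddings
query = normalize(rng.normal(size=128))               # e.g. user or query embedding

scores = corpus @ query                               # cosine similarity (unit vectors)
top_k = np.argsort(-scores)[:10]                      # indices of the 10 closest items
print(top_k, scores[top_k])
```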

    Andrew is a Staff Software Engineer working in the Visual Search and Applied Science groups at Pinterest. During his career, he was the founding engineer and tech lead of visual search at Pinterest, leading multiple generations of serving, indexing, and modeling to build products including Pinterest Lens. More recently, he leads the embedding efforts at Pinterest to push the limits of recommendation systems. Andrew received his B.S. at UC Berkeley and M.S. at Stanford.

  • 16:20
    Shahmeer Mirza

    7-Eleven’s Digital Transformation: Using Applied AI to Disrupt Convenience

    Shahmeer Mirza - Machine Learning Engineer & Team Lead - 7-Eleven

    7-Eleven was founded in 1927 as the world’s first convenience store, and for decades has operated as the marketplace leader in the convenience retail business. Through the years, 7-Eleven has continued its obsession with “giving the customers what they want, when and where they want it,” leading the way with a number of innovations in the industry. The first self-serve soda fountains, Slurpees, and to-go coffee were key milestones that kept the business ahead of competition. The last two decades have seen a rapidly changing technology landscape, and thus in 2016, 7-Eleven began its Digital Transformation to ensure its future as an innovation leader in the retail space. Today we’ll talk about the latest breakthrough in that transformation journey…

    Shahmeer Mirza is a Tech Lead and Machine Learning Engineer at 7Next, the R&D Division of 7-Eleven. Over the last several months he has led the team developing 7-Eleven’s Checkout-Free technology. In November of 2019, the team opened their first store at 7-Eleven’s headquarters, a culmination of their work in computer vision, machine learning, algorithms, distributed computing, and hardware engineering. He was previously at PepsiCo, where he developed next generation automation, computer vision, and machine learning solutions for Industry 4.0 applications. Shahmeer is also passionate about democratizing AI capabilities; while at PepsiCo, he created the first in a series of Data Analytics courses to upskill associates across the Snacks R&D organization. He holds a B.S. in Chemical and Biomolecular Engineering from Georgia Tech, and is currently pursuing his M.S. in Computer Science at Georgia Tech.

  • 16:40
    Shreyansh Daftry

    Role of AI in Future Mars Exploration

    Shreyansh Daftry - Research Scientist - NASA JPL

    Shreyansh Daftry is a Research Scientist at NASA Jet Propulsion Laboratory (JPL) in Pasadena, California, working at the intersection of artificial intelligence and space technology to help develop the next generation of robots for Earth, Mars, and beyond. Shreyansh received his M.S. degree in Robotics from the Robotics Institute, Carnegie Mellon University, USA in 2016, and his B.S. degree in Electronics and Communications Engineering in 2013. His research interests span computer vision, machine learning, and autonomous robotics, with a focus on real-time computation, safety, and adaptability.

  • 17:00

    CONVERSATION & DRINKS

  • 08:00

    DOORS OPEN

  • 09:00
    Shreya Ghelani

    WELCOME

    Shreya Ghelani - Data Scientist - Amazon

    From Word Embeddings to Pre-trained Models: A New Age in NLP

    In computer vision, the trend for a few years now has been to pre-train vision models on the huge ImageNet corpus to achieve state-of-the-art results. With the latest innovations in natural language processing, like ULMFiT, BERT, and GPT-2, pre-trained models based on language modeling have been dubbed NLP's 'ImageNet moment'. The standard approach in NLP projects has been to initialize the first layer of a neural network with vanilla (context-independent) word embeddings like Word2Vec and GloVe and then to train the rest of the network from scratch on task-specific data. This is now changing, however, and many of the current state-of-the-art models for supervised NLP tasks are trained on language modeling and then fine-tuned on task-specific data. In this talk, we will explore some of these techniques that have taken the NLP world by storm.
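
    As a hedged sketch of this recipe (assuming the Hugging Face transformers and PyTorch libraries; this is not code from the talk, and the model name, texts, and labels are placeholders), a pre-trained encoder is loaded with a task-specific classification head and fine-tuned on labeled data rather than trained from scratch on top of static embeddings:

```python
# Hypothetical example: one fine-tuning step of a pre-trained language model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

texts = ["this product is great", "terrible customer service"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)   # contextual embeddings + task head
outputs.loss.backward()                   # one fine-tuning step on task data
optimizer.step()
```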

    Shreya is a Data Scientist and ML practitioner at Amazon. At Amazon, she spends her time making Alexa smarter and more productive for her customers by working on some very challenging ML problems ranging from personalization to relevance to text classification and natural language understanding. Before joining Amazon, Shreya was at the University of Cincinnati where she got her master's degree in Analytics and thoroughly enjoyed deep diving into the data mining and applied machine learning space.

  • AI APPLIED IN SOCIETY

  • 09:10
    Topher White

    AI for Conservation

    Topher White - Founder & CEO - Rainforest Connection

    Rainforest Connection (RFCx) is an innovative nonprofit startup at the forefront of conservation technology committed to applying the most effective and timely technology to protect our planet’s precious, ancient forests and wildlife from illegal logging and poaching. RFCx listens to the rainforest remotely, using commonplace mobile tech and existing telecommunications infrastructure, and transforms these audio streams into a profound and automatic understanding of the forest soundscape, rooting out any threats using AI/ML models. RFCx partners with local NGOs and indigenous tribes to deter incursions through real-time threat detection and provides forensic evidence to enable governments to take action to prevent further incursions. They make this data available to academic researchers and government agencies to assist the fields of field ecology and conservation. RFCx is able to cheaply and effectively protect rainforests around the world, with current projects in 11 countries on 5 continents monitoring over 3,000 sq km of rainforest. Conserving rainforests is one of the cheapest and fastest ways to slow climate change today. One acoustic monitoring device, protecting 3 sq km of forest, equates to 15k tons of CO2 sequestered or taking 3,000 cars off the road. RFCx will have expanded into 5 new regions by Spring of 2020. As they work toward a future of plug and play acoustic monitoring devices, they will be able to protect vast expanses of threatened ecosystems even more cheaply. This is needed as soon as possible, as the past solutions of on-ground patrols, camera traps, and satellite imagery are relatively ineffective to protect these large, dense reserves.

    We will cover how we build and utilize AI to make it possible to send real-time alerts to protect these ecosystems, focusing on one or two projects to illustrate how it works (and include amazing visuals and audio clips from the rainforest). We will also cover how we are building a Bioacoustics Platform with Google to bring scientists with acoustic data together with data scientists to build out ML models for species being studied. This will ultimately allow for conservation tactics to improve significantly.

    Topher White is Founder and CEO of Rainforest Connection. Topher has experience building systems for large and small startups as well as international science projects, including four years working on nuclear fusion at ITER, in France. He has received multiple accolades for his work, including being named a National Geographic Emerging Explorer, a Draper-Richards-Kaplan (DRK) Fellow and an “Engineering Hero” by the Institute for Electrical and Electronics Engineers (IEEE).

    Topher’s background is primarily in Physics, software development and Communication, having received a degree in Physics at Kenyon College and going on to work for years at SLAC Natl Accelerator Lab (High Energy Physics) and the ITER Organization (Nuclear Fusion) in France. Along the way, he also served as CTO for two startups in San Francisco, where he obtained industry-level experience in software development — the foundation of the Rainforest Connection platform.

  • 09:30
    Bryton Shang

    Helping Fish Farmers Feed The World With Deep Learning

    Bryton Shang - Founder and CEO - Aquabyte

    This talk focuses on Aquabyte's application of computer vision to various fish farming use cases, including detecting sea lice and weighing fish. We cover the problems associated with mass fish farming, the challenges of developing machine learning solutions that can measure the size and weight of fish, the use of computer vision algorithms to assess issues like sea lice (which can account for up to 25% of the cost of running a farm), and new features in the works such as facial recognition for fish and optimal fish feeding.

    Bryton Shang is the founder and CEO of Aquabyte, a Silicon Valley and Norway-based venture-backed company applying machine learning and computer vision to aquaculture fish farming for biomass estimation, sea lice counting, and feed optimization and formulation. Bryton was named to the 2019 Forbes 30 Under 30 in Manufacturing & Industry.

    Graduating at the top of his engineering class at Princeton University, Bryton has led several venture-backed startups. Bryton built deep learning algorithms to diagnose cancer as CTO of HistoWiz, a biotechnology firm. He also co-founded iQ License, a brand licensing platform, and Nikao Investments, an algorithmic trading firm.

  • 09:50
    Saurabh Johri

    The Future of AI in Healthcare

    Saurabh Johri - Chief Scientist - Babylon Health

    In this talk, I will discuss our work at Babylon, building digital health products to provide affordable and accessible healthcare to everyone on earth. AI is central to our mission, driving the creation of products which empower users and clinicians with up-to-date information on their health, and that of their patients. The potential impact of AI in healthcare is immense, but there are also sizeable challenges and considerations that must be addressed. I will discuss some of the imperatives for AI in healthcare and some of the key design decisions that must be considered when moving solutions from R&D into the real world. Finally, I will discuss some of our latest research and its implications for delivering personalised medicine.

    Saurabh leads the AI research team at Babylon, where he has been since 2016. In this time he has guided the team in developing Babylon's triage, diagnostic, and predictive models for healthcare, applying the team's research in Bayesian machine learning and causal inference. Prior to Babylon, Saurabh was a post-doctoral researcher at the MRC Centre for Outbreak Analysis & Modelling at Imperial College London. This work was funded by the Gates Foundation in collaboration with the CDC, and focused on developing novel statistical machine learning methods to estimate poliovirus transmission from genetic sequence data. Before his post-doctoral work, Saurabh completed his PhD in population genetics at Imperial College London, investigating the population genetics of tuberculosis and predicting new drug targets from whole genome sequence data.

  • 10:10
    Jonathan Zaleski

    Why the Human Element Remains Essential in Applied AI

    Jonathan Zaleski - Sr Director of Engineering - Applause

    Can you remove the human element from human interactions? More importantly, should you?

    A lot has been made of the relationship between humans and machines. News segments seem to pit them against each other, but the truth is it’s not humans vs. machines, but rather, it’s a symbiotic relationship. Bringing the two together in a true partnership is the only way to realize the incredible potential of AI and the efficiencies and cost savings it can deliver.

    In this session, you will hear:
    - A case study on our journey to automate the crowd
    - Thoughts on how to balance AI and human skills
    - Ways to incorporate AI in a roadmap in a meaningful way

    Jonathan Zaleski is a highly skilled, versatile and reliable technical leader with a demonstrated history of working in the internet industry. He has more than 15 years of engineering and technology experience across multiple verticals and platforms. Jonathan is a polyglot skilled in software development, scalability and Agile methodologies who uses his breadth of knowledge and skill to get the best out of his team. He is a dedicated leader who continuously strives for excellence.

    As the senior director of engineering at Applause, Jonathan and his team build best-in-class software with an eye toward innovation and next-generation concepts. They work to improve the capabilities of the Applause Platform, using cutting-edge technologies like artificial intelligence and machine learning to make the company more efficient.

    Most recently, Jonathan held several engineering leadership roles at Wayfair. Prior to that, he was a principal software engineer at Sermo. He earned his Bachelor of Arts (B.A.) focused in Computer Science, Environmental Science & Math from Westfield State University.

  • CROSS-SECTOR AUTONOMOUS ANALYTICS

  • 10:30
    Ramesh Panuganty

    3 Steps to Implement AI Architecture for Autonomous Analytics

    Ramesh Panuganty - Founder and CEO - MachEye

    How many of your business users could answer complex questions like “Which 2 products decline in sales every year in the third week of December in California?” 

    2 out of 10? 80% would just request an analyst to create a report. This results in a tsunami of reports which are never looked at again. Users don't speak SQL and data doesn't speak English. How do you bridge that gap?

    It’s time to solve the “last mile” problem of user adoption while reducing mundane data processing tasks for data scientists.

    Join this session to learn about:

    - Teaching machines how to tell data stories to humans
    - Humanizing UX through interactive audio-visuals instead of more reports
    - Leveraging machine learning models to automatically surface and deliver business insights

    Learn from industry experts how the largest energy drink manufacturer, largest beverage manufacturer and the largest student loan company are solving these challenges.

    Ramesh Panuganty is the Founder and CEO of MachEye. He is a creative technology pioneer (10 patents, several publications) and entrepreneur (launched and exited three start-ups). His projects include:
    - SelectQ: an ed-tech platform that generates SAT questions on the fly using AI & NLG, with ratcheting complexity until full preparation.
    - Drastin (acquired by Splunk in 2017): recognized among the top five AI platforms by Gartner, and where Ramesh created "Conversational Analytics" as a new BI market category.
    - Cloud360 Hyperplatform (acquired by Cognizant in 2012), where Ramesh created "Cloud Management Platforms" as a new market category and built a $29M ARR business.

  • 10:45

    COFFEE

  • CROSS INDUSTRY LEARNINGS: HEALTHCARE

  • 11:15

    Applied Deep Learning in Healthcare

  • Julie Zhu

    CO-PRESENTING

    Julie Zhu - Distinguished Engineer/Chief Data Scientist - Optum, United Health Group

    Julie is a hands-on data science leader in healthcare analytics with 19+ years of experience in advanced data analytics, machine learning, deep learning, and natural language processing. She has in-depth knowledge of healthcare data, business operations, and healthcare products across a variety of healthcare areas, and of how they can best be applied to develop effective and innovative solutions that address healthcare issues. She has recruited and built data science teams, established machine learning and deep learning capabilities, and acts as Chief Data Scientist to advance artificial intelligence and data science technology across teams at UnitedHealth Group.

  • Galina Grunin

    CO-PRESENTING

    Galina Grunin - Distinguished Engineer - Optum, United Health

    Galina Grunin is a Distinguished Engineer in the Advanced Technology Collaborative team at Optum, UnitedHealth Group with a focus on Deep Learning. She is a hands-on IT practitioner with expert knowledge and experience in deep learning, orchestration and pattern deployment, cloud computing architecture, software-defined storage, software-defined networking, adapting legacy systems for cloud, web technologies, and application and middleware integration. Galina's achievements in the above fields are recognized by over 40 U.S. patents in which Galina is a named inventor.

    Prior to joining Optum, Galina was the lead architect/technical lead for various projects at IBM Cloud and IoT divisions.

  • 11:35
    Venu Vasudevan

    The AI Impact on Daily Touch Products

    Venu Vasudevan - Director, Data Science & AI Research - Procter & Gamble

    P&G creates a wide range of 'daily touch' products that impact the lives of billions of users on a daily basis. These products range from smart products with digital capabilities, such as the Oral-B toothbrush, to a number of decidedly 'analog' products that make laundry rooms, living rooms, bedrooms, kitchens, nurseries, and bathrooms a little more enjoyable. This talk will broadly cover the AI value proposition in changing consumer insight generation and product discovery, product in-use experience, and product-market iteration. More narrowly, it will cover one or two use cases of deep learning being used to reframe product discovery and product experiences in superior ways.

    Venu directs the R&D Data Science & AI organization at Procter & Gamble Research. He is a technology leader with a track record of successful consumer & enterprise innovation at the intersection of AI, Machine Learning, Big Data, and IoT. Previously he was VP of Data Science at Lightpad, an IoT startup acquired by a large Internet player, led the creation of a video analytics and Machine Learning platform acquired by Comcast, and was a founding member of the Motorola team that created the Zigbee IoT standard. Venu holds a PhD (Databases & AI) from The Ohio State University, and was a member of Motorola's Science Advisory Board (top 2% of Motorola technologists). He is an Adjunct Professor at Rice University's Electrical and Computer Engineering department, and was a mentor at Chicago's 1871 startup incubator.

  • CROSS INDUSTRY LEARNINGS: RETAIL & CUSTOMER SERVICE

  • 11:55
    Prakhar Mehrotra

    ML Applications & Challenges in Brick-n-Mortar Retail

    Prakhar Mehrotra - Senior Director of Machine Learning - Walmart Labs

    This talk will focus on applications of ML in brick-n-mortar retail operations and how they differ from the online world, specifically how we are using recent advancements in machine learning to power core retail operations like pricing, assortment, and replenishment. It will also discuss how we can leverage human expertise and use it as feedback to improve the algorithms.

    Prakhar Mehrotra is currently Senior Director of Machine Learning for Retail Data Science at Walmart Labs, based out of Sunnyvale, CA. He oversees research and development of pricing, assortment, replenishment, and planning algorithms to help merchants make smarter decisions. Prior to joining Walmart, he was Head of Data Science, Finance at Uber Technologies, San Francisco. At Uber, he built the data science arm for Finance and led a global team of data scientists and data analysts spread across Amsterdam, Hyderabad, and San Francisco. He led the research and development of machine learning algorithms for financial forecasting (supply & demand), budget planning, and economic simulations for autonomous vehicles. In his role, he also worked on research and development related to payment analytics and treasury financial simulations. Prior to Uber, Mr. Mehrotra worked as a Sr. Data Scientist at Twitter, Inc. in San Francisco as part of the Sales & Monetization team. He has an Advanced Engineer's degree in Aeronautics from the California Institute of Technology (Caltech), Pasadena, and dual Master's degrees in Aeronautics and Applied Mechanics from Ecole Polytechnique, Paris, and Caltech. He did his undergraduate degree in Mechanical Engineering at the National Institute of Technology, Trichy, India. He has given numerous invited talks, including as keynote speaker at the EARL conference, the Toronto Machine Learning Summit, the NYU Center for Data Science, and the Wharton Technology Conference at the Wharton School of Business. He also chaired the session on Forecasting at the International Symposium on Forecasting, Australia, in 2017 and was an invited judge (Risk & Intelligence) at the European Fintech Awards, Brussels.

  • 12:15
    German I Parisi

    Toward Lifelong Conversational AI

    German I Parisi - Director of Applied AI - McD Tech Labs, McDonald's

    Conversational agents have become increasingly popular in a wide range of business areas. Prominent examples of applications that have been transforming speech-to-speech interactions are Amazon’s Alexa, Apple’s Siri, and McDonald’s voice-activated drive-thru. Companies from various industries are now exploring new ways of building products and services that rely on robust natural language interactions. A major technical challenge is how these solutions can efficiently incorporate new knowledge and increase performance over time while confining computational cost and addressing the current limitations of artificial learning systems designed to perform best in benchmark datasets. In this talk, I will introduce and discuss state-of-the-art machine learning technology in conversational AI with the ability to acquire, fine-tune, and transfer knowledge from large and continuous streams of data. The systems can learn in correspondence to novel interactions or the necessity to enrich domain-specific knowledge and logic. I will focus on scalable deep learning models for end-to-end natural language understanding and hybrid approaches to lifelong conversational agents in multiple application domains.

    German I. Parisi is the Director of Applied AI at McD Tech Labs in Mountain View, California, a Silicon Valley-based research center established by McDonald’s Corporation to advance the state of the art in AI-powered technology systems for customer interaction and support. He is also an independent research fellow of the University of Hamburg, Germany, and the co-founder and board member of ContinualAI, the largest research organization and open community on continual learning for AI with a network of over 600 scientists. He received his Bachelor's and Master's degree in Computer Science from the University of Milano-Bicocca, Italy. In 2017 he received his PhD in Computer Science from the University of Hamburg on the topic of multimodal neural representations with deep recurrent networks. In 2015 he was a visiting researcher at the Cognitive Neuro-Robotics Lab of the Korea Advanced Institute of Science and Technology (KAIST), South Korea, winners of the 2015 DARPA Robotics Challenge. His main research interests include human-robot interaction, continual robot learning, and neuroscience-inspired AI.

  • 12:35

    LUNCH

  • 13:40
    Gabor Melli

    On-Demand Low-Latency Near Real-Time Predictions at Scale

    Gabor Melli - Senior Director of Engineering (ML & AI) - Sony PlayStation

    Predictive machine learning is optimizing customer experiences across many web, mobile and console interactions. This session presents the development process at Sony PlayStation that delivers scalable real-time low-latency predictive ML-based solutions on the cloud.

    Gabor Melli is Senior Director of Engineering (ML & AI) at Sony PlayStation. He has twenty-plus years of experience in the delivery of large-scale data-driven initiatives at both enterprises, including Sony PlayStation, AT&T, Microsoft, T-Mobile, and Wal*Mart, and start-ups such as Meal.com, VigLink, and OpenGov. He continues to publish, present, and organize world-class conferences.

  • CROSS INDUSTRY LEARNINGS: ENERGY

  • 14:00
    Sanam Mirzazad

    Application of AI in the Utility Industry and its Challenges

    Sanam Mirzazad - Technical Leader, AI - Electric Power Research Institute (EPRI)

    AI promises to surpass the tools the electric power industry has relied on for the past century, making it vital to the industry's future; AI is poised to be critical in developing and operating the Integrated Grid and its combination of centralized power with distributed energy resources such as solar and electric vehicles. However, using AI in the power industry comes with its own challenges, such as incorporating the physics of the industry into the AI models and the lack of extensive, high-quality data sets. This talk will focus on some applications of AI in the power industry and elaborate on these challenges as well as how the industry can address them.

    Sanam Mirzazad, Ph.D., is a Technical leader at Electric Power Research Institute (EPRI). In her current position, she leads the integrated grid activities associated with the EPRI’s artificial intelligence (AI) initiative, where she leverages her expertise in closing the gap between the power industry and the AI community. Sanam holds a Master’s degree in Power systems and a Ph.D. in Control systems from The Pennsylvania State University. Before joining EPRI, she was a research scientist working on multiple projects in smart energy, human-computer interaction, and natural language understanding.

  • CROSS INDUSTRY LEARNINGS: TRANSPORT

  • 14:20
    Yingying Kang

    AI to Shape the Future of Travel

    Yingying Kang - Principal AI/ML Research Scientist & Director of Data Science - Travelport

    Data is like a new energy source, and AI is like the electricity generated from that data, powering many of our interactions and conveniences, including travel, which is a major expense in our daily lives. According to the World Travel & Tourism Council, travel and tourism is one of the largest industries, with a global economic contribution of over $8.8 trillion in 2018. The joint power of AI and data science will bring disruption to the travel industry. This presentation will give an overview of how AI will change our travel experiences. An Intelligent Travel Navigation Platform will be introduced; this platform will change the way people travel in the future, from shopping and onboarding to socializing with local citizens and resources in another city. The supporting technologies that enable this platform will be introduced as well.

    Dr. Y. Kang is the Principal AI/ML Research Scientist and directs the AI & Data Science Lab at Travelport, a leading travel tech corporation. She has 20 years of success in large-scale service-oriented architecture, optimization modeling, artificial intelligence, machine learning, and deep learning, focusing mainly on the travel, transportation, and IT industries. She is highly accomplished in designing and developing technical infrastructures and solutions for AI/ML, big data analytics platforms, hybrid cloud computing, and pricing/cost/risk optimization in travel tech, transportation, software/IT, social media, and ERP/PLM/SCM/CRM. She holds a Ph.D. in Operations Research from the State University of New York at Buffalo, where she studied under Prof. R. Batta, an authority on optimization theory and urban planning from M.I.T.

  • 14:40
    Arne Stoschek

    Convolutional Neural Networks are Now. Buckle Up - We’re Taking Off Fast

    Arne Stoschek - Project Executive - A3 by Airbus

    In the midst of building the most advanced data collection systems to ensure innovation is met with safety, recent advancements in deep learning, specifically convolutional neural networks (CNNs), have inspired confidence among software engineers, who now rely on this innovative approach to auto-label millions upon millions of data sets to test and teach this complex, brain-like system to label images appropriately. This presentation will explore the continued maturation of this technology as it relates to the future of autonomous flight.

    Arne is the Project Executive heading up Wayfinder, a project focused on the development of autonomous flight and machine learning solutions to enable self-piloted operations of a range of aircraft, from urban air mobility vehicles to large commercial airplanes. Passionate about robotics and autonomous electric vehicles, he has held engineering leadership positions at global companies such as Volkswagen/Audi and Infineon, and at aspiring Silicon Valley startups, namely Lucid Motors/Atieva, Knightscope and Better Place. Arne earned a Doctor of Philosophy in Electrical and Computer Engineering from the Technical University of Munich and held a computer vision and data analysis research position at Stanford University.

  • 15:00

    END OF SUMMIT

  • 15:00

    FAREWELL NETWORKING MIXER

  • Day 1 10:25

    Introduction to Reinforcement Learning

    Lex Fridman - Researcher - MIT

    Lex Fridman is a researcher at MIT, working on deep learning approaches in the context of semi-autonomous vehicles, human sensing, personal robotics, and more generally human-centered artificial intelligence systems. He is particularly interested in understanding human behavior in the context of human-robot collaboration, and engineering learning-based methods that enrich that collaboration. Before joining MIT, Lex was at Google working on machine learning for large-scale behavior-based authentication.

  • Day 1 11:10

    Muppets and Transformers: The New Stars of NLP

    Joel Grus - Principal Engineer - Capital Group

    The last few years have seen huge progress in NLP. Transformers have become a fundamental building block for impressive new NLP models. ELMo, BERT, and their descendants have achieved new state-of-the-art results on a wide variety of tasks. In this talk I'll give some history of these "new stars" of NLP, explain how they work, compare them to their predecessors, and discuss how you can apply them to your own problems.

    Joel Grus is Principal Engineer at Capital Group, where he oversees the development and deployment of machine learning systems. Previously he was a research engineer at the Allen Institute for Artificial Intelligence, where he helped develop AllenNLP, a deep learning library for NLP researchers. Before that he worked as a software engineer at Google and a data scientist at a variety of startups. He is the author of the beloved book Data Science from Scratch: First Principles with Python, the beloved blog post "Fizz Buzz in Tensorflow", and the polarizing JupyterCon talk "I Don't Like Notebooks". You can find him on Twitter @joelgrus

  • Day 1 12:45

    Lunch & Learn

    Join the Speakers for Lunch - Roundtable Discussions during Lunch

  • Day 1 14:25

    How Can AI Aid Digital Transformation – Mesh Twin Learning?

    Maciej Mazur - Chief Data Scientist - PGS Software


    How can AI Aid Digital Transformation – Mesh Twin Learning?

    Industry 4.0 is becoming increasingly widespread – from semi-automated nests through to automated production lines and Smart Factories. By scaling Digital Twin technology with Artificial Intelligence and Machine Learning, PGS Software combined Data Science and Cloud computing to create Mesh Twin Learning (MTL) – a solution in which two or more AI-controlled Edge environments learn from one another’s experiences and adopt best practices to optimize workflows across the entire production plant network. This is one of the key enablers for digital transformation in German manufacturing, which we are now bringing to the US.
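    As a generic illustration of the experience-sharing idea (not PGS Software's proprietary MTL implementation), the sketch below has two toy edge models train on their own local data and periodically adopt the averaged parameters of the group, in the spirit of federated averaging:

        import numpy as np

        class EdgeModel:
            """Toy linear model trained on data local to one edge environment."""
            def __init__(self, n_features):
                self.weights = np.zeros(n_features)

            def train_step(self, X, y, lr=0.01):
                # One gradient-descent step on local data (mean squared error).
                grad = 2.0 * X.T @ (X @ self.weights - y) / len(y)
                self.weights -= lr * grad

        def share_experience(models):
            """All edge environments adopt the averaged parameters of the group."""
            mean_weights = np.mean([m.weights for m in models], axis=0)
            for m in models:
                m.weights = mean_weights.copy()

        rng = np.random.default_rng(0)
        true_weights = np.array([1.0, -2.0, 0.5])
        plant_a, plant_b = EdgeModel(3), EdgeModel(3)

        for _ in range(20):
            for plant in (plant_a, plant_b):
                X = rng.normal(size=(64, 3))
                y = X @ true_weights + rng.normal(scale=0.1, size=64)
                plant.train_step(X, y)
            share_experience([plant_a, plant_b])  # periodic exchange of learning

        print(plant_a.weights)  # both plants move toward the shared optimum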

    As Chief Data Scientist at PGS Software, Maciej is the technical lead of the data team and implements ML-based solutions for clients around the globe. In his 10 years of IT experience, he has worked for major players like Nokia and HPE, developing complex optimisation algorithms even before the term Data Science was coined.

  • Day 1 16:00

    Hands-on Workshop: BERT based Conversational Q&A Platform for Querying a complex RDBMS with No Code

    Peter Relan - Chairman and CEO - Got-it.ai


    Hands-on Workshop: BERT based Conversational Q&A Platform for Querying a complex RDBMS with No Code

    Most business and operations people in organizations want to ask questions of databases regularly, but they are limited by minimal schema understanding and SQL skills. In the field of AI, conversational agents such as Rasa, Dialogflow, Lex, Watson, and LUIS are emerging as NLU-based dialog agents that hook into actions or custom fulfillment logic. Got It is unveiling the first AI product that creates a conversational interface to any custom database schema on MySQL or Google BigQuery, using Rasa or Dialogflow. Got It’s No Code approach automates the discovery and addition of new intents/slots and actions, based on incoming user questions and knowledge of the database schema. Thus, the end-to-end system adapts itself to an evolving schema and user questions until it can answer virtually any question. Got It supports full-sentence NLP for chat-based UIs and search-keyword NLP for analytics UIs to dynamically query a database, without custom fulfillment logic, by utilizing a proprietary DNN.

    This workshop provides a hands-on session demonstrating how quickly the product can be set up to start retrieving data from a sophisticated retail-industry database schema, for both business analytics and customer service use cases.
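    To give a feel for the general pattern behind conversational database querying, here is a hedged sketch of mapping a recognized intent and its slots onto a parameterized SQL query; the intent names, slot names, and retail schema are hypothetical and not Got It's product internals.

        # Hypothetical sketch: NLU result (intent + slots) -> parameterized SQL.
        import sqlite3

        # Query templates keyed by intent; slot values are bound as SQL parameters.
        QUERY_TEMPLATES = {
            "total_sales_by_region": (
                "SELECT region, SUM(amount) FROM orders "
                "WHERE order_date BETWEEN :start AND :end GROUP BY region"
            ),
            "top_products": (
                "SELECT product, SUM(quantity) AS units FROM order_items "
                "GROUP BY product ORDER BY units DESC LIMIT :limit"
            ),
        }

        def answer(conn, intent, slots):
            """Run the template for a recognized intent with its extracted slots."""
            sql = QUERY_TEMPLATES[intent]
            return conn.execute(sql, slots).fetchall()

        # Example: "What were total sales by region last quarter?"
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE orders (region TEXT, amount REAL, order_date TEXT);
            INSERT INTO orders VALUES ('West', 120.0, '2019-10-05'),
                                      ('East', 80.0, '2019-11-12');
        """)
        print(answer(conn, "total_sales_by_region",
                     {"start": "2019-10-01", "end": "2019-12-31"}))

    The No Code angle described in the abstract would replace the hand-written template table with intents, slots, and queries discovered automatically from user questions and the schema.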

    Peter Relan is the founding investor and chairman of breakthrough companies, including Discord (300M users), Epic! (95% of US elementary schools) and Got-it.ai (AI + Human Intelligence for SaaS and PaaS products). Formerly a Hewlett Packard Resident Fellow at Stanford University and a senior Oracle executive, Peter is working with the Got It team on driving user and business productivity 10X higher by applying Google BERT and transfer learning to real business databases with minimal training data sets, allowing users to program queries and analytics tools with no technical skills.

  • Day 2 10:30

    Panel & Networking

    Investing in Startups: Hear from the Investors - Panel & Connect


    Session takeaways:
    1) What are the short, medium and long-term challenges in investing in AI to solve challenges in business & society?
    2) What are the main success factors for AI startups?
    3) What are the challenges from a VC perspective?

  • Day 2 11:20

    Ludwig, a Code-Free Deep Learning Toolbox

    Piero Molino - Senior Research Scientist & Co-Founder - Uber AI


    Ludwig, a Code-Free Deep Learning Toolbox

    The talk will introduce Ludwig, a deep learning toolbox that allows users to train models and use them for prediction without writing code. It is unique in its ability to make deep learning easier to understand for non-experts and to enable faster model-improvement iteration cycles for experienced machine learning developers and researchers alike. By using Ludwig, experts and researchers can simplify the prototyping process and streamline data processing so that they can focus on developing deep learning architectures.
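    As a rough illustration of the declarative workflow, the sketch below uses Ludwig's Python API on a hypothetical text-classification CSV; the column names, config, and file names are illustrative assumptions, and exact API details vary by Ludwig release.

        from ludwig.api import LudwigModel

        # Declarative model definition: what the inputs and outputs are, not how to model them.
        config = {
            "input_features": [{"name": "review_text", "type": "text"}],
            "output_features": [{"name": "sentiment", "type": "category"}],
        }

        model = LudwigModel(config)

        # Train from a CSV with `review_text` and `sentiment` columns; no model code required.
        train_stats = model.train(dataset="reviews.csv")

        # Predict on new data through the same declarative interface.
        predictions, _ = model.predict(dataset="new_reviews.csv")
        print(predictions.head())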

    Piero Molino is a Senior Research Scientist at Uber AI, focusing on machine learning for language and dialogue. Piero completed a PhD on Question Answering at the University of Bari, Italy, and founded QuestionCube, a startup that built a framework for semantic search and QA. He worked for Yahoo Labs in Barcelona on learning to rank and for IBM Watson in New York on natural language processing with deep learning, and then joined Geometric Intelligence, where he worked on grounded language understanding. After Uber acquired Geometric Intelligence, he became one of the founding members of Uber AI Labs. He currently leads the development of Ludwig, a code-free deep learning framework.

  • Day 2 11:50

    Building a Conversational Experience in Minutes with Samsung’s Bixby

    Adam Cheyer - Co-Founder and VP Engineering/VP of R&D - Viv Labs/Samsung


    Building a Conversational Experience in Minutes with Samsung’s Bixby

    For decades, the relationship between developer and computer was simple: the human told the machine what to do. Next came machine learning systems, where the machine was in charge of computing the functional logic behind developer-supplied examples, typically in a form that humans couldn't even understand. Now we are entering a new age of software development, where humans and machines work collaboratively, each doing what they do best. The developer describes the "what" -- objects, actions, goals -- and the machine produces the "how", writing the code that satisfies each user's request by interweaving developer-provided components. The result is a system that is easier to create and maintain, while providing an end-user experience that is more intelligent and adaptable to users' individual needs. In this talk, we will show concrete examples of this software trend using a next-generation conversational assistant named Bixby. We will supply you with a freely downloadable development environment so that you can give this a try yourself, and teach you how to build a conversational experience in minutes, to start monetizing your content and services through a new channel that will be backed by more than a billion devices in just a few years.

    Adam Cheyer is co-Founder and VP Engineering of Viv Labs, and after acquisition in 2016, a VP of R&D at Samsung. Previously, Mr. Cheyer was co-Founder and VP Engineering at Siri, Inc. In 2010, Siri was acquired by Apple, where he became a Director of Engineering in the iPhone/iOS group. Adam is also a Founding Member and Advisor to Change.org, the premier social network for positive social change, and a co-Founder of Sentient Technologies. Mr. Cheyer is an author of more than 60 publications and 27 issued patents.

  • Day 2 14:00

    Panel & Q&A

    Ethics in AI: Panel, Q&A & Drop-In - Hear from Experts in Ethics and Ask your Questions
