
REGISTRATION & LIGHT BREAKFAST

WELCOME
Mariya Yao - TOPBOTS
Mariya is the CTO and Head of Research & Design at TOPBOTS, an AI strategy & development company building machine learning solutions for Fortune 500 customers and leading brands like L'Oreal, LinkedIn, and PayPal. She also co-authored the book Applied Artificial Intelligence: A Handbook For Business Leaders and writes for Forbes about enterprise AI.


THEORY & APPLICATIONS


Yaniv Taigman - Research Scientist - Facebook AI Research (FAIR)
Personalized Generative Models
Generative models are getting better these days, thanks to advances in adversarial training and autoregressive models. In this talk I will describe two new generative models, one in computer vision and graphics and one for voice synthesis, with the commonality of being identity-preserving and applicable 'in the wild' for the task of synthesizing a look-alike, sound-alike avatar.
I graduated from Tel-Aviv University with a Master’s in Computer Science. While pursuing my PhD research, I co-founded Face.com where I held the position of CTO. When Face.com was acquired by Facebook in 2012, I joined the office in Menlo Park to lead research and engineering projects. During this time I worked on efficient methods for face recognition (DeepFace project), and helped start the AI group. In 2016, I established a satellite FAIR team in Tel-Aviv.


Keith Adams - Slack
Learning Embeddings at Slack
The technique of embedding discrete data in a continuous, moderate-dimensional space has proven useful for learning representations in many different domains. Embeddings learned from text, graphs, and human-created tags can support information retrieval, recommendations, classification, and subjective human insight. In this talk I play with StarSpace, a new, open-source supervised embedding framework, and use it to learn representations of text, channels, and users.
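For context, the core idea behind StarSpace-style supervised embeddings can be sketched in a few lines: entities (here, bags of message tokens) and labels (here, channels) share one vector space and are trained with a margin ranking loss against sampled negatives. This is a minimal illustrative sketch in PyTorch, not Slack's system or StarSpace's actual API; the module names, dimensions, and negative-sampling scheme are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairEmbedder(nn.Module):
    """Embed two kinds of discrete items (e.g. message words and channels) in one space."""
    def __init__(self, n_items, n_labels, dim=64):
        super().__init__()
        self.item_emb = nn.EmbeddingBag(n_items, dim, mode="mean")  # bag of item ids -> one vector
        self.label_emb = nn.Embedding(n_labels, dim)

    def score(self, item_ids, label_ids):
        lhs = F.normalize(self.item_emb(item_ids), dim=-1)
        rhs = F.normalize(self.label_emb(label_ids), dim=-1)
        return (lhs * rhs).sum(-1)  # cosine similarity

def margin_ranking_step(model, opt, item_ids, pos_labels, n_labels, margin=0.2, k_neg=5):
    """One training step: pull true (item, label) pairs together, push sampled negatives apart."""
    pos = model.score(item_ids, pos_labels)                              # (batch,)
    neg_labels = torch.randint(0, n_labels, (item_ids.size(0), k_neg))
    tiled = item_ids.unsqueeze(1).expand(-1, k_neg, -1).reshape(-1, item_ids.size(1))
    neg = model.score(tiled, neg_labels.reshape(-1)).view(-1, k_neg)     # (batch, k_neg)
    loss = F.relu(margin - pos.unsqueeze(1) + neg).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

model = PairEmbedder(n_items=10_000, n_labels=500)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
items = torch.randint(0, 10_000, (32, 12))    # 32 "documents" of 12 token ids each
labels = torch.randint(0, 500, (32,))         # e.g. the channel each document was posted in
print(margin_ranking_step(model, opt, items, labels, n_labels=500))
```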
Keith Adams is Chief Architect at Slack. Prior to Slack, he worked at Facebook, where he contributed to the search, the HipHop Virtual Machine for PHP, and Facebook AI Research. Keith was also an early engineer at VMware. He is a computing generalist, with a recurring interest in the hardware/software interface.




Ian Goodfellow - Staff Research Scientist - Google Brain
Generative Adversarial Networks
Ian Goodfellow is a Staff Research Scientist at Google Brain. He is the lead author of the MIT Press textbook Deep Learning. In addition to generative models, he also studies security and privacy for machine learning. He has contributed to open source libraries including TensorFlow, Theano, and Pylearn2. He obtained a PhD from the University of Montreal in Yoshua Bengio's lab, and an MSc from Stanford University, where he studied deep learning and computer vision with Andrew Ng. He is generally interested in all things deep learning.



COFFEE


Peter Carr - Research Engineer - Disney Research
Modeling Team Strategies using Deep Imitation Learning
Current state-of-the-art sports statistics compare players and teams to league average performance, such as “Expected Point Value” (EPV) in basketball. These measures have enhanced our ability to analyze, compare and value performance in sport. But they are inherently limited because they are tied to a discrete outcome of a specific event. For example, EPV for basketball focuses on estimating the probability of a player making a shot based on the current situation. In this work, we explore how teams control time and space by examining sequential decision making.
We have developed an automatic "ghosting" system which illustrates where defensive players should have been (instead of where they actually were) based on the locations of the opposition players and ball. We employ a machine learning technique called deep imitation learning, and modify standard recurrent neural network training to consider both instantaneous and future losses, which enables ghosted players to anticipate movements of their teammates and the opposition. Our approach avoids the man-years of manual annotation needed to train existing ghosting systems, and can be fine-tuned to mimic the behavior of specific teams or playing styles.
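The central trick mentioned above, training a recurrent model on both the instantaneous (one-step) error and the error accumulated when it rolls forward on its own predictions, can be sketched roughly as follows. This is a hedged illustration, not the authors' actual architecture; the GRU, feature sizes, and rollout horizon are assumptions.

```python
import torch
import torch.nn as nn

class GhostingNet(nn.Module):
    """Predict next-step defender (x, y) positions from game context (ball + attackers)."""
    def __init__(self, ctx_dim=20, n_defenders=5, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(ctx_dim + 2 * n_defenders, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2 * n_defenders)

    def forward(self, ctx, defenders, h=None):
        out, h = self.rnn(torch.cat([ctx, defenders], dim=-1), h)
        return self.head(out), h

def imitation_loss(model, ctx, defenders, horizon=5, future_weight=0.5):
    """Instantaneous term: one-step prediction from ground-truth inputs (teacher forcing).
    Future term: roll the model forward on its *own* predictions for `horizon` steps,
    so the ghosted players learn to anticipate rather than just react."""
    mse = nn.MSELoss()
    pred, _ = model(ctx[:, :-1], defenders[:, :-1])       # instantaneous (one-step) loss
    inst = mse(pred, defenders[:, 1:])

    cur, h, future = defenders[:, 0], None, 0.0           # rollout from the first frame
    for t in range(horizon):
        step, h = model(ctx[:, t:t + 1], cur.unsqueeze(1), h)
        cur = step[:, 0]
        future = future + mse(cur, defenders[:, t + 1])
    return inst + future_weight * future / horizon

model = GhostingNet()
ctx = torch.randn(8, 20, 20)          # 8 sequences, 20 frames, 20 context features per frame
defenders = torch.randn(8, 20, 10)    # 5 defenders x (x, y) per frame
print(imitation_loss(model, ctx, defenders).item())
```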
Peter Carr is a Research Scientist at Disney Research, Pittsburgh. His research interests lie at the intersection of computer vision, machine learning and robotics. In particular, he has focused on computer vision algorithms for camera calibration and object tracking, as well as machine learning techniques for understanding spatio-temporal trajectory data. Peter joined Disney Research in 2010 after receiving his PhD from the Australian National University.




Christian Szegedy - Staff Research Scientist - Google
Deep Learning for Formal Reasoning
Deep learning has transformed machine perception in the past five years. However, recognizing patterns is a crucial feature of intelligence in general. Here we give a short overview of how deep learning can be utilized for formal reasoning, especially for reasoning in large mathematical theories. The fact that pattern recognition capabilities are essential for these tasks has wider implications for other tasks like software synthesis and long-term planning in complicated environments. I will give a short overview of some methods that leverage deep learning for such tasks.
Christian Szegedy is a research scientist at Google, working on deep learning for computer vision, including image recognition, object detection and video analysis. He is the designer of the Inception architecture, which set a new state of the art on the ImageNet benchmark in the Large Scale Visual Recognition Competition. Before joining Google in 2010, he was a scientist at Cadence Research Laboratories in Berkeley devising algorithms for chip design. His background is in discrete mathematics and mathematical optimization. Christian got his PhD in applied mathematics from the University of Bonn in 2005.

COMPUTER VISION
Carl Vondrick - Google
Predictive Vision
Our research studies Predictive Vision with the goal of anticipating the events that may happen in the immediate future. To tackle this challenge, we present predictive vision algorithms that learn directly from large amounts of raw, unlabeled data. Capitalizing on millions of natural videos, our work develops methods for machines to learn to anticipate the visual future, forecast human actions, and recognize ambient sounds.
Carl Vondrick is a research scientist at Google and he will be an assistant professor at Columbia University in fall 2018. He received his PhD from the Massachusetts Institute of Technology in 2017. His research was awarded the Google PhD Fellowship, the NSF Graduate Fellowship, and is featured in popular press, such as NPR, CNN, the Associated Press, and the Late Show with Stephen Colbert.




Andrew Tulloch - Research Engineer - Facebook
Compilers for Deep Learning at Facebook
With the growth in the complexity of our modeling tools (new operations, heavily dynamic graphs, etc.), the changes in our numerical demands (new numerical formats, mixed precision models, etc.), and our exploding hardware ecosystem (custom ASIC/FPGA accelerators, new instructions such as VNNI and WMMA, etc.), it's getting harder for our traditional ML graph interpreters to deliver high performance in a reliable and maintainable fashion. We'll talk about some of our work at Facebook on ML compilers, our production applications, and the exciting research questions and new domains these tools open up.
I'm a research engineer at Facebook, working on the Facebook AI Research and Applied Machine Learning teams to drive the broad range of AI applications at Facebook. At Facebook, I've worked on the large-scale event prediction models powering ads and News Feed ranking, the computer vision models powering image understanding, and many other machine learning projects. I'm a contributor to several deep learning frameworks, including Torch and Caffe. Before Facebook, I obtained a master's in mathematics from the University of Cambridge, and a bachelor's in mathematics from the University of Sydney.


LUNCH
NATURAL LANGUAGE PROCESSING


Matthew Peters - Researcher - Allen Institute for Artificial Intelligence
From Word2vec to ELMo: Using Context to Improve Word Vectors for NLU
Word vectors such as word2vec are ubiquitous in natural language processing (NLP) systems as they allow models to leverage large amounts of unlabeled text. However, they have several shortcomings, notably that they produce a single vector for each word. This is especially problematic for words with many senses, since understanding the syntactic and semantic roles of these words requires examining the broader context in which they are used. In this talk, I will show how to overcome these limitations and learn contextual representations of word meaning from unlabeled text. When added to existing NLP systems, these ELMo representations provide a significant increase in overall performance across a wide range of tasks, including question answering and sentiment classification. I will also provide some intuition for what the ELMo representations encode and why they are empirically successful.
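The step that makes ELMo easy to add to existing systems is a small one: a task-specific, softmax-normalized weighted sum over the layers of a pretrained bidirectional language model. A minimal sketch of that mixing step is below; the toy two-layer biLSTM stands in for the real pretrained biLM, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Task-specific weighted sum of biLM layers: ELMo_k = gamma * sum_j s_j * h_{k,j}."""
    def __init__(self, n_layers):
        super().__init__()
        self.scalars = nn.Parameter(torch.zeros(n_layers))   # softmax-normalized mixing weights
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_activations):                    # list of (batch, seq, dim) tensors
        weights = torch.softmax(self.scalars, dim=0)
        return self.gamma * sum(w * h for w, h in zip(weights, layer_activations))

class ToyBiLM(nn.Module):
    """Stand-in for a pretrained bidirectional language model that exposes all layer outputs."""
    def __init__(self, vocab=5000, dim=128, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.layers = nn.ModuleList(
            [nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True) for _ in range(n_layers)])

    def forward(self, token_ids):
        h = self.embed(token_ids)
        outputs = [h]                      # layer 0: context-independent embedding
        for lstm in self.layers:
            h, _ = lstm(h)
            outputs.append(h)              # layers 1..n: increasingly contextual representations
        return outputs

bilm, mix = ToyBiLM(), ScalarMix(n_layers=3)
tokens = torch.randint(0, 5000, (4, 16))                     # batch of 4 sentences, 16 tokens each
contextual = mix(bilm(tokens))                               # (4, 16, 128) contextual word vectors
print(contextual.shape)
```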
Matthew Peters is a Research Scientist at AI2 exploring applications of deep neural networks to fundamental questions in natural language processing. Prior to joining AI2, he was the Director of Data Science at a Seattle startup, a research analyst in the finance industry, and a post-doc investigating cloud-climate feedback. He has a PhD in Applied Math from the University of Washington.


REINFORCEMENT LEARNING


Junhyuk Oh - Research Scientist - DeepMind
AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
Deep reinforcement learning approaches have been shown to perform well on domains where tasks and rewards are well-defined. However, in adversarial multi-agent environments, where the agent is required to improve its policy through self-play, the agent should not only solve the given task (i.e., learning to beat itself via self-play) but also develop diverse policies and strategies over time in order to become strong and robust when playing against unseen competitors. In this talk, I will present AlphaStar, which is the first AI to defeat a top professional player in the game of StarCraft II, one of the most challenging Real-Time Strategy (RTS) games. Specifically, I will show how such complex and robust strategies can emerge through a distributed multi-agent RL algorithm, where a population of agents compete with each other with slightly different internal goals.
Key Takeaways:
- The current state-of-the-art RL algorithms can achieve super-human performance on a complex real-time strategy game (StarCraft II)
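As a toy illustration of the population-based self-play idea described above (agents sampling opponents they struggle against, each with a slightly different shaped reward), here is a minimal NumPy sketch on rock-paper-scissors. It is emphatically not AlphaStar's algorithm; the game, matchmaking rule, and reward shaping are assumptions chosen only to make the mechanism concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_ACTIONS, STEPS = 4, 3, 2000

# Rock-paper-scissors payoff for the row player (a stand-in for a real zero-sum game).
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

logits = rng.normal(0.0, 0.1, size=(N_AGENTS, N_ACTIONS))   # one softmax policy per agent
pref_action = rng.integers(0, N_ACTIONS, size=N_AGENTS)     # per-agent "internal goal"
PREF_BONUS, LR = 0.05, 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

losses = np.ones((N_AGENTS, N_AGENTS))   # smoothed head-to-head loss counts, used for matchmaking

for _ in range(STEPS):
    i = rng.integers(N_AGENTS)
    # Prioritized matchmaking: play more often against opponents this agent currently loses to.
    probs = losses[i].copy()
    probs[i] = 0.0
    probs /= probs.sum()
    j = rng.choice(N_AGENTS, p=probs)

    pi, pj = softmax(logits[i]), softmax(logits[j])
    ai = rng.choice(N_ACTIONS, p=pi)
    aj = rng.choice(N_ACTIONS, p=pj)
    reward = PAYOFF[ai, aj] + PREF_BONUS * (ai == pref_action[i])   # shaped ("internal goal") reward

    # REINFORCE update for agent i only; the sampled opponent is treated as part of the environment.
    grad = -pi
    grad[ai] += 1.0
    logits[i] += LR * reward * grad

    if PAYOFF[ai, aj] < 0:
        losses[i, j] += 1.0              # remember who beat us so we face them more often

print("agent 0 policy:", np.round(softmax(logits[0]), 2))
```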
Junhyuk Oh is a research scientist at DeepMind. He received his Ph.D. in Computer Science and Engineering from the University of Michigan in 2018, co-advised by Prof. Honglak Lee and Prof. Satinder Singh. His research focuses on deep reinforcement learning problems such as dealing with partial observability, generalization, planning, and multi-agent reinforcement learning. His work has been featured in MIT Technology Review and the Daily Mail.


Ilya Sutskever - OpenAI
The Power of Large-Scale RL and Generative Models
Ilya Sutskever received his PhD in 2012 from the University of Toronto working with Geoffrey Hinton. After completing his PhD, he cofounded DNNResearch with Geoffrey Hinton and Alex Krizhevsky which was acquired by Google. He is interested in all aspects of neural networks and their applications.


SEARCH & RECOMMENDATIONS


Yves Raimond - Director of Machine Learning - Netflix
Deep Learning for Recommender Systems
In this talk, we will survey Deep Learning methods applied to personalization and recommendations. After providing an overview of the Netflix recommender system, we will go over recently published research at the intersection of Deep Learning and recommender systems and how it relates to traditional collaborative filtering techniques. We will then highlight promising new directions in that space.
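For readers new to the area, the simplest bridge from classical collaborative filtering to deep learning is to replace the user-item dot product with a small neural network over learned embeddings. The sketch below is a generic neural collaborative filtering baseline, not Netflix's production model; the sizes and the implicit-feedback loss are assumptions.

```python
import torch
import torch.nn as nn

class NeuralCF(nn.Module):
    """Generalizes matrix factorization: user/item embeddings scored by an MLP instead of a dot product."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, user_ids, item_ids):
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        return self.mlp(x).squeeze(-1)      # logit that the user engages with the item

model = NeuralCF(n_users=1000, n_items=5000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

users = torch.randint(0, 1000, (256,))
items = torch.randint(0, 5000, (256,))
labels = torch.randint(0, 2, (256,)).float()   # 1 = watched/clicked, 0 = sampled negative

loss = loss_fn(model(users, items), labels)    # one implicit-feedback training step
opt.zero_grad()
loss.backward()
opt.step()
print(loss.item())
```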
Yves Raimond is a Research/Engineering Director at Netflix, where he leads the Promotion & Growth Algorithm Engineering team: a mixed team of researchers and engineers building the next generation of Machine Learning algorithms used to drive the Netflix experience. Before that, he was a Lead Research Engineer in BBC R&D, working on information extraction from Multimedia content. He holds a PhD from Queen Mary, University of London.



Justin Basilico - Director of Machine Learning - Netflix
Deep Learning for Recommender Systems
In this talk, we will survey Deep Learning methods applied to personalization and recommendations. After providing an overview of the Netflix recommender system, we will go over recently published research at the intersection of Deep Learning and recommender systems and how it relates to traditional collaborative filtering techniques. We will then highlight promising new directions in that space.
Justin Basilico is a Research/Engineering Director for Page Algorithms Engineering at Netflix. He leads an applied research team focused on developing the next generation of algorithms used to generate the Netflix homepage through machine learning, recommendation, and large-scale software engineering. Prior to Netflix, he worked on machine learning in the Cognitive Systems group at Sandia National Laboratories.




Dumitru Erhan - Staff Research Scientist - Google Brain
Enabling World Models via Unsupervised Representation Learning of Environments
Recent advances in deep neural networks have enabled impressive and often superhuman performance in tasks such as object recognition, object detection, segmentation, image description, visual question-answering and even medical image diagnosis. In many such scenarios, achieving state-of-the-art performance requires collecting large amounts of human-labeled data, which is expensive to acquire. In order to build intelligent agents that quickly adapt to new scenes, conditions, and tasks, we need to develop techniques, algorithms and models that can operate on little data or that can generalize from training data that is not similar to the test data. World Models have long been hypothesized to be a key piece in the solution to this problem. In this talk I will describe our recent advances for modeling sequential observations. These approaches can help with building agents that interact with the environment and mitigate the sample complexity problems in reinforcement learning. They can also enable agents that generalize quicker to new scenarios, tasks, objects and situations and are thus more robust to environment changes.
Dumitru Erhan is a Staff Research Scientist in the Google Brain team in San Francisco. He received a PhD from the University of Montreal (MILA) in 2011 with Yoshua Bengio, where he worked on understanding deep networks. Since then, he has done research at the intersection of computer vision and deep learning, notably object detection (SSD), object recognition (GoogLeNet), image captioning (Show & Tell), visual question-answering, unsupervised domain adaptation (PixelDA), active perception and others. Recent work has focused on video prediction and generation, as well as its applicability to model-based reinforcement learning. He aims to build and understand agents that can learn as much as possible from self-supervised interaction with the environment, with applications to the fields of robotics and self-driving cars.



COFFEE
DEEP LEARNING FRAMEWORKS
Clement Farabet - NVIDIA
Industry-Grade Deep Learning
Today’s AI is arming humans with superpowers — from aiding doctors to make better diagnoses to helping the public move around safely. Entire industries are being redefined, and new ones are emerging as well. AI, today mostly powered by Deep Learning, is a powerful tool, but one that is not trivial to master and integrate into existing industry workflows. NVIDIA has enabled the current AI boom by providing the critical compute power necessary for scientists to solve a wide range of AI problems, specifically challenging perception problems. Today NVIDIA is investing in higher-level abstractions to enable even more complex innovations, like self-driving cars, and to help other industries leverage Deep Learning. In this talk, you’ll learn how Deep Learning has evolved over the past 10 years and how we have enabled this field and continue to do so; what we are doing to get to fully autonomous cars; and how we are building platforms to enable anyone to create value with Deep Learning. I will also talk about research at NVIDIA and how we operate a fast-moving R&D team to rapidly transfer research into products.
Clement Farabet is VP of AI Infrastructure at NVIDIA. His team is responsible for building NVIDIA’s next-generation AI platform, to enable a broad range of applications, from autonomous cars to healthcare. Clement received a PhD from Université Paris-Est in 2013, while at NYU, co-advised by Laurent Najman and Yann LeCun. His thesis focused on real-time image understanding, introducing multi-scale convolutional neural networks and a custom hardware architecture for deep learning. He cofounded Madbits, a startup focused on web-scale image understanding, sold to Twitter in 2014. He cofounded Twitter Cortex, a team focused on building Twitter’s deep learning platform for recommendations/search/spam/nsfw/ads.




Dave Lacey - Head of Software Engineering - Graphcore
Graph Computing for Machine Intelligence
Machine intelligence gives rise to a new form of computing which will have a profound effect on both hardware and software systems. Given the emergence of new algorithms and hardware, we need to think about how we program the software for these systems. This needs to address a new paradigm of compute but also allow us to integrate with existing software systems and practices. I will present our work on graph programming for this purpose.
Dave Lacey is software technical lead at Graphcore, working on the programming environment and software stack for the IPU - a brand new type of processor for the next generation of machine intelligence computer systems. He has a PhD from the University of Oxford in Computer Science and has over 16 years of experience in research and development of programming tools and applications in many areas including machine learning, HPC and embedded systems. Prior to Graphcore, he worked at the University of Oxford, the University of Warwick, Clearspeed Technology and XMOS.




Vicki Cheung - Head of Infrastructure - Previously OpenAI
Building Infrastructure for Deep Learning
OpenAI is a non-profit research company that does cutting-edge AI research. Kubernetes and Docker have allowed us the flexibility to experiment with various computing frameworks and topologies without paying the infrastructure cost, and have enabled us to keep up with the pace of deep learning research. However, our use cases are distinctly different from the well-supported microservice use case, and we've iterated on our infrastructure and tooling to optimize for our work. In this talk, we will go over some of the motivations and internals of our customizations, as well as an example of how they all come to work together to accelerate research.
Vicki was part of the founding team and led infrastructure at OpenAI, where they run deep learning experiments with large numerical compute requirements at scale. Before OpenAI, she led engineering at TrueVault and was a founding engineer at Duolingo.




Raj Talluri - SVP of Product Management - Qualcomm
Intelligent Vision: Combining Deep Learning With Traditional Computer Vision for Performance in Vision-Based IoT Applications
In the last couple of decades we have seen tremendous advances in visual computing technologies. The smartphone revolution has led to new breakthroughs in camera processing, machine learning and computer vision, which are now finding their way into many Internet of Things applications - including self-driving cars, virtual reality headsets, connected cameras, autonomous robots and more. This talk will address how a hybrid approach combining deep learning with traditional computer vision delivers significant performance and power-efficiency improvements for IoT applications requiring vision processing. Raj Talluri, senior vice president of product management for IoT at Qualcomm Technologies, Inc., will review results from concrete implementations using this hybrid approach, and discuss the role of heterogeneous computing architectures in its implementation.
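One common pattern behind such hybrid designs is to let cheap classical computer vision (e.g., frame differencing) run on every frame and invoke the power-hungry neural network only when something interesting happens. The sketch below illustrates that gating idea with OpenCV and a small PyTorch CNN; the threshold, the stand-in classifier, and the pipeline itself are assumptions, not Qualcomm's implementation.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

# Small CNN stand-in for the heavyweight classifier that we only want to run when needed.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))
classifier.eval()

def motion_detected(prev_gray, gray, pixel_thresh=25, area_thresh=0.01):
    """Classical CV gate: frame differencing, far cheaper than running the network."""
    diff = cv2.absdiff(prev_gray, gray)
    moving = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)[1]
    return cv2.countNonZero(moving) > area_thresh * moving.size

def process_stream(frames):
    prev_gray = None
    for frame in frames:                                  # frame: HxWx3 uint8 BGR image
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None and motion_detected(prev_gray, gray):
            x = torch.from_numpy(frame).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                logits = classifier(x)                    # deep model runs only on "interesting" frames
            yield int(logits.argmax())
        prev_gray = gray

fake_frames = [np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8) for _ in range(5)]
print(list(process_stream(fake_frames)))
```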
Raj Talluri serves as senior vice president of product management for Qualcomm Technologies, Inc. (QTI), where he is currently responsible for managing QTI’s Internet of Things (IoT) business. Prior to this role, he was responsible for product management of mobile computing, Sense ID 3D finger print technology and Qualcomm Snapdragon Application Processor technologies. Talluri has more than 20 years of experience spanning across business management, strategic marketing, and engineering management. He began his career at Texas Instruments (TI), working on media processing in their corporate research labs. During that time, Talluri started multiple new businesses in digital consumer electronics and wireless technologies. Talluri holds a Ph.D in electrical engineering from the University of Texas at Austin. He also holds a Master of Engineering degree from Anna University in Chennai, India and a Bachelor of Engineering from Andhra University in Waltair, India. He has published more than 35 journal articles, papers, and book chapters in many leading electrical engineering publications. He has been granted 13 U.S. patents for image processing, video compression, and media processor architectures. Talluri was chosen as No. 5 on Fast Company’s list of 100 Most Creative People in Business in 2014.



Conversation & Drinks

DOORS OPEN

WELCOME
Rumman Chowdhury - Accenture
Designing Ethical AI Solutions
The imperative for ethical design is clear – but how do we move from theory to practice? In this workshop, Accenture expert and Responsible AI lead, Rumman Chowdhury will lead a design thinking and ideation session to illustrate how AI solutions can imbue ethics and responsibility. This interactive session will ask participants to help design an AI solution and provide guidance for the right kinds of ethical considerations. Each participant will leave with an understanding of applied ethical design.
Rumman is a Senior Principal at Accenture, and Global Lead for Responsible AI. She comes from a quantitative social science background and is a practicing data scientist. She leads client solutions on ethical AI design and implementation. Her professional work extends to partnerships with the IEEE and World Economic Forum. She has been named a fellow of the Royal Society for the Arts and is one of BBC’s 100 most influential women of 2017.


STARTUP SESSION


Nathan Wheeler - Chief Product Officer - Entropix
Using Deep Learning To Reverse the Resolution-Degrading Effects of Conventional Video Capture
Entropix's patented technology reconstructs video to 9x its captured pixel density while dramatically reducing bandwidth and storage requirements to only the parts that matter. Our technology is easily integrated with an array of intelligent video analytics and security and public safety video management systems.
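A 9x increase in pixel density corresponds to a 3x upscale in each spatial dimension. As background on how learned super-resolution of this kind typically works (and not as a description of Entropix's patented method), here is a minimal ESPCN-style sub-pixel convolution sketch; the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class SubPixelSR(nn.Module):
    """3x upscaling in each dimension => 9x the pixel count, via sub-pixel convolution (ESPCN-style)."""
    def __init__(self, scale=3, channels=3, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 5, padding=2), nn.ReLU(),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))   # rearranges channels into a (scale x scale) larger image

    def forward(self, low_res):
        return self.net(low_res)

model = SubPixelSR()
low_res = torch.rand(1, 3, 120, 160)          # one low-resolution video frame
high_res = model(low_res)
print(low_res.shape, "->", high_res.shape)    # (1, 3, 120, 160) -> (1, 3, 360, 480)
```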
Ten years of hardware and software design and innovation in the enterprise HD surveillance market helped Nathan shape Entropix as a software product. He is also the Founder and Chairman of Network Optix, an enterprise video management software company listed as the 7th fastest-growing software company in the US (2016 Inc. 5000).


Yousuke Okada - ABEJA
Business Production AI on Cloud and Edges, along with Case Studies
When AI technology is implemented as a business solution, preparation of data, pipelines and algorithms is required, along with peripheral systems to manage GPUs. We have created our own proven AI approach featuring all the functions needed for implementation: Data Collection, Model Deployment & Training, Model Management, GPU Edge Device Management and many more. This method is already in use by 100 companies in sectors such as manufacturing (malfunction prevention, visual inspection, auto picking), retail (VMD optimization, measuring the effect of marketing), and logistics (auto picking, delivery optimization), with over 450 case studies.
Born in 1988, Yousuke started programming at the age of 10. He majored in computer graphics in high school and was awarded a prize by Japan's Minister of Education, Culture, Sports, Science and Technology. In college, he researched 3-dimensional computer graphics on GPUs using CUDA, presented papers at various international conferences, and focused on distributed physical simulation. He then joined a small venture company to manage a multi-million-dollar business. In 2011 he spent time in Silicon Valley, where he was impressed by artificial intelligence technology still at the research stage. After returning to Japan, he started ABEJA as CEO / CTO.



Scott Stephenson - Co-Founder & CEO - Deepgram
Your Voice is Pure Gold: Understand Speech Data with Deep Learning
With deep learning, businesses are quickly cashing in on the promise of big data. It’s now true that you can record the data, build the model, and get results. Alexa and Siri have given you a glimpse of the new gold -- it’s your speech data. Recorded phone calls and meetings are full of rich information that can now be extracted to help your business. We’ll discuss this new wave of tech and how to deploy AI Speech Products into your business.
Dr. Scott Stephenson is a dark matter physicist turned Deep Learning entrepreneur. He earned a PhD in particle physics from the University of Michigan, where his research involved building a lab two miles underground in China to detect dark matter. Scott left his physics post-doc research position to co-found Deepgram, a San Francisco-based Speech AI company. Deepgram participated in Y Combinator in 2016 and leads private companies in research and deployment for deep learning-based Speech AI.




Kevin Peterson - Head of Software - Marble
How Deep Learning Enables Autonomous Vehicles
Self-driving vehicles will transform how we work and play. Today, thirty percent of the cars on the road at any given time are trying to park. We spend 500 million hours each day driving to and from the grocery store. The impact of automating these tasks and more is huge. Marble is building self-driving delivery vehicles to give you back this time. I'll talk about why delivery is a good application of robotics, and how deep learning enables us to automate driving.
Co-founder and software lead Kevin Peterson is responsible for the software that enables Marble’s robots to seamlessly navigate the city and efficiently route to their destination. Before Marble, Kevin developed perception systems for the first self-driving car and led development of a lunar lander for the Google Lunar XPRIZE. He has developed robots to chase submarines, rove the Moon, and clean up unexploded ordnance.




Eli David - CTO - Deep Instinct
End-to-End Deep Learning for Detection, Prevention, and Classification of Cyber Attacks
With more than a million new malicious files created every single day, it is becoming exceedingly difficult for existing malware detection methods to detect most of these new sophisticated attacks. In this talk, we describe how Deep Instinct uses an end-to-end deep learning based approach to effectively train its brain on hundreds of millions of files, thus providing by far the highest detection and prevention rates in the cybersecurity industry today. We will additionally explain how deep learning is employed for malware classification and attribution of attacks to specific entities.
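Published end-to-end malware detectors such as MalConv operate directly on raw bytes: embed each byte, apply a wide strided gated convolution, and pool over the whole file. The sketch below follows that published pattern as a rough illustration; it is not Deep Instinct's proprietary architecture, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class ByteCNN(nn.Module):
    """Classify a file directly from its raw bytes: embed each byte, gated 1D convolution, global pooling."""
    def __init__(self, n_classes=2, emb=8, channels=128, kernel=512, stride=512):
        super().__init__()
        self.embed = nn.Embedding(257, emb, padding_idx=256)        # 256 byte values + a padding token
        self.conv = nn.Conv1d(emb, channels, kernel, stride=stride)
        self.gate = nn.Conv1d(emb, channels, kernel, stride=stride)
        self.fc = nn.Linear(channels, n_classes)

    def forward(self, byte_ids):                                    # (batch, length) int64 in [0, 256]
        x = self.embed(byte_ids).transpose(1, 2)                    # (batch, emb, length)
        h = self.conv(x) * torch.sigmoid(self.gate(x))              # gated convolution over the byte stream
        h = torch.max(h, dim=-1).values                             # global max pool over the whole file
        return self.fc(h)

model = ByteCNN()
# Pretend batch: 4 "files" of 64KB each; real files would be padded/truncated to a fixed length.
files = torch.randint(0, 256, (4, 65_536))
print(model(files).shape)                                           # (4, 2) malicious-vs-benign logits
```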
Dr. Eli David is a leading expert in the field of computational intelligence, specializing in deep learning (neural networks) and evolutionary computation. He has published more than thirty papers in leading artificial intelligence journals and conferences, mostly focusing on applications of deep learning and genetic algorithms in various real-world domains. For the past ten years, he has been teaching courses on deep learning and evolutionary computation, in addition to supervising the research of graduate students in these fields. He has also served in numerous capacities successfully designing, implementing, and leading deep learning based projects in real-world environments. Dr. David is the developer of Falcon, a grandmaster-level chess playing program based on genetic algorithms and deep learning. The program reached second place in the World Computer Speed Chess Championship. He received the Best Paper Award at the 2008 Genetic and Evolutionary Computation Conference, the Gold Award in the prestigious "Humies" Awards for Human-Competitive Results in 2014, and the Best Paper Award at the 2016 International Conference on Artificial Neural Networks. Currently Dr. David is the co-founder and CTO of Deep Instinct, the first company to apply deep learning to cybersecurity. Recently Deep Instinct was recognized by Nvidia as the "most disruptive AI startup".



COFFEE
APPLICATIONS
Daphne Koller - insitro
Machine Learning: A New Approach to Drug Discovery
Modern medicine has given us effective tools to treat some of the most significant and burdensome diseases. At the same time, it is becoming consistently more challenging to develop new therapeutics: clinical trial success rates hover around the mid-single-digit range; the pre-tax R&D cost to develop a new drug (once failures are incorporated) is estimated to be greater than $2.5B; and the rate of return on drug development investment has been decreasing linearly year by year, and some analyses estimate that it will hit 0% before 2020. A key contributor to this trend is that the drug development process involves multiple steps, each of which involves a complex and protracted experiment that often fails. We believe that, for many of these phases, it is possible to develop machine learning models to help predict the outcome of these experiments, and that those models, while inevitably imperfect, can outperform predictions based on traditional heuristics. The key will be to train powerful ML techniques on sufficient amounts of high-quality, relevant data. To achieve this goal, we are bringing together cutting edge methods in functional genomics and lab automation to build a bio-data factory that can produce relevant biological data at scale, allowing us to create large, high-quality datasets that enable the development of novel ML models. Our first goal is to engineer in vitro models of human disease that, via the use of appropriate ML models, are able to provide good predictions regarding the effect of interventions on human clinical phenotypes. Our ultimate goal is to develop a new approach to drug development that uses high-quality data and ML models to design novel, safe, and effective therapies that help more people, faster, and at a lower cost.
Daphne Koller is the CEO and Founder of insitro, a startup company that aims to rethink drug development using machine learning. She is also the Co-Chair of the Board and Co-Founder of Coursera, the largest platform for massive open online courses (MOOCs). Daphne was the Rajeev Motwani Professor of Computer Science at Stanford University, where she served on the faculty for 18 years. She has also been the Chief Computing Officer of Calico, an Alphabet company in the healthcare space. She is the author of over 200 refereed publications appearing in venues such as Science, Cell, and Nature Genetics. Daphne was recognized as one of TIME Magazine’s 100 most influential people in 2012 and Newsweek’s 10 most important people in 2010. She has been honored with multiple awards and fellowships during her career including the Sloan Foundation Faculty Fellowship in 1996, the ONR Young Investigator Award in 1998, the Presidential Early Career Award for Scientists and Engineers (PECASE) in 1999, the IJCAI Computers and Thought Award in 2001, the MacArthur Foundation Fellowship in 2004, and the ACM Prize in Computing in 2008. Daphne was inducted into the National Academy of Engineering in 2011 and elected a fellow of the American Academy of Arts and Sciences in 2014 and of the International Society of Computational Biology in 2017. Her teaching was recognized via the Stanford Medal for Excellence in Fostering Undergraduate Research, and as a Bass University Fellow in Undergraduate Education.






Nina Berry - J6 S&T Advisor - Joint Improvised-Threat Defeat Organization (JIDO)
Democratizing Data Science & Machine Learning to the End-User
Machine learning techniques are powerful tools, and our work is organized around hypotheses that we use to focus and test the innovative concepts behind these solutions: (1) they deliver measurable benefits (efficiency, accuracy, and discovery) to end-users; (2) including machine learning models improves speed and accuracy; and (3) comparing traditional data science against new machine learning techniques will identify changes in data science productivity. We funded investments against real-world problems using algorithmic solutions including deep learning, predictive analytics, translation, classification & clustering, and facial/object recognition. The results of this effort answer these hypotheses and identify the strengths and weaknesses of exposing algorithms directly to non-expert end-users.
Nina is a Computer Science Software R&D Advisor from the DOE Sandia National Laboratories, detailed as a contractor to JIDO for over ten years. Her multi-disciplinary research background covers applied agent-based systems and artificial intelligence in diverse domains such as distributed computing, enterprise systems, advanced analytics, frameworks for cognitive encapsulation, computational terrorist recruitment models, video processing, wireless smart sensors, and pervasive computing devices. She provides JIDO with technical guidance for selecting the software analytics and architectures used to detect and understand Counter Threat Network domains by modeling disparate big data to extract, integrate, visualize, mine, and fuse it.


Karthik Ramasamy - Senior Data Scientist - Uber
Deep Defense: Using Deep Learning to Fight off Uber Fraudsters
Fraud models are generally based on narrow data streams processed by traditional machine learning models such as gradient boosted machines. Our talk will cover how Uber improved on this by applying deep learning to extract complex feature relationships from high-dimensional datasets such as tapstream and location data. We will cover the lessons we learned while applying deep learning to three fraud use cases: finding anomalous trip locations based on all Uber trip history; using tap streams to model normal vs. fraudulent app usage; and computer vision for validating credit cards and IDs.
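To make the tap-stream use case concrete, a session can be modeled as a sequence of (tap type, time since previous tap) pairs scored by a recurrent network. The sketch below is a generic illustration with assumed features and dimensions, not Uber's production model.

```python
import torch
import torch.nn as nn

class TapStreamScorer(nn.Module):
    """Score an app session from its tap stream: sequence of (tap type, time since previous tap)."""
    def __init__(self, n_tap_types=200, emb=16, hidden=64):
        super().__init__()
        self.tap_emb = nn.Embedding(n_tap_types, emb)
        self.lstm = nn.LSTM(emb + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, tap_ids, time_gaps):
        x = torch.cat([self.tap_emb(tap_ids), time_gaps.unsqueeze(-1)], dim=-1)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1]).squeeze(-1)        # fraud logit per session

model = TapStreamScorer()
tap_ids = torch.randint(0, 200, (32, 50))          # 32 sessions, 50 taps each
time_gaps = torch.rand(32, 50)                     # normalized seconds between consecutive taps
labels = torch.randint(0, 2, (32,)).float()        # 1 = confirmed fraud, 0 = legitimate

loss = nn.BCEWithLogitsLoss()(model(tap_ids, time_gaps), labels)
print(loss.item())
```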
Karthik is a senior data scientist at Uber focusing on solving fraud problems using machine learning. He builds advanced machine learning models like semi-supervised and deep learning models to detect account takeovers and stolen credit cards. Before Uber, Karthik was a co-founder of LogBase, where he worked on real-time analytics infrastructure and building models to rate drivers based on their driving behavior. Prior to founding LogBase, he was a founding member of the LinkedIn security team, where he developed various security products, with a particular focus on anti-automation efforts.


LUNCH


Kourosh Modarresi - Senior AI/ML Scientist - Adobe
Application of Deep Neural Networks (DNN) for Modern Data with Two Examples: Recommender Systems and User Recognition
For almost two decades, we have been dealing with “Modern Data” as the prevalent type of data in many areas of science and technology. “Modern Data” has unique characteristics such as extreme sparsity, very high correlation, massive size and high dimensionality. A major difficulty is that many of the old machine learning models cannot be applied to Modern Data. In this talk, we show how we deploy DNN models to deal with modern data effectively since, among other advantages, DNN models do not rely on many of the assumptions and abstractions that traditional models depend on.
Kourosh completed his graduate studies at Stanford (MS, PhD). His PhD dissertation, “A local regularization method using multiple regularization levels“, focused on developing new modeling approaches for a major issue in AI and Machine Learning: regularization. His PhD advisor was Gene Golub. He has been working as an entrepreneur and as a Senior AI/Machine Learning Scientist at Adobe, where he has been the founder and manager of the Adobe AI-Machine Learning group. At Adobe, Kourosh has been leading many projects such as "AI-based Recommender Systems", “Automated, Insightful and Interpretable Clustering” and "AI-based User Comprehensive View – Connection of users across devices, venues and channels".


Miao Lu - Yahoo Labs
Campaign Representation Learning in Advertisement
I'll present our work on training deep learning models for conversion rate (CVR) prediction in online advertising. Specifically, I would like to describe an attention joint embedding model to simultaneously learn from heterogeneous sources of information for ad campaigns. The campaign representations were able to capture similarities in ad content, conversion rules and targeted user groups. Extensive experiments on a real-world dataset were conducted to show the effectiveness of the proposed attention campaign embedding model, in comparison with different baselines in CVR prediction.
Key Takeaways:
- Representation learning with heterogeneous information
- Conversion rate prediction
- Cold-start Campaign
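A rough sketch of the attention joint embedding idea: each heterogeneous source (ad content, conversion rule, targeted user group) gets its own embedding, an attention layer weights the sources, and the attended campaign vector feeds the CVR predictor. Field names, vocabulary sizes, and the single-id-per-field simplification are assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class CampaignEmbedding(nn.Module):
    """Attention-weighted joint embedding of heterogeneous campaign fields for CVR prediction."""
    def __init__(self, vocab_sizes, dim=32):
        super().__init__()
        # One embedding table per information source, e.g. ad content, conversion rule, user segment.
        self.fields = nn.ModuleList([nn.Embedding(v, dim) for v in vocab_sizes])
        self.attn = nn.Linear(dim, 1)
        self.cvr_head = nn.Linear(dim, 1)

    def forward(self, field_ids):                          # (batch, n_fields), one id per field
        h = torch.stack([emb(field_ids[:, i]) for i, emb in enumerate(self.fields)], dim=1)
        weights = torch.softmax(self.attn(h), dim=1)       # (batch, n_fields, 1) attention over sources
        campaign = (weights * h).sum(dim=1)                # attended campaign representation
        return self.cvr_head(campaign).squeeze(-1), campaign

model = CampaignEmbedding(vocab_sizes=[10_000, 50, 1_000])   # ad content id, conversion rule, user group
ids = torch.stack([torch.randint(0, v, (64,)) for v in [10_000, 50, 1_000]], dim=1)
cvr_logit, campaign_vec = model(ids)
print(cvr_logit.shape, campaign_vec.shape)                   # (64,) CVR logits, (64, 32) campaign vectors
```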
Miao is a research scientist at Yahoo Research, working on Native/Display/Search Ads Recommendation and Forecasting and leading the corporate traffic and revenue forecasting projects. He has a strong interdisciplinary background in Statistics, Machine Learning and Data Mining, with wide applications in biomedical science and internet technology. Before joining Yahoo, he obtained a PhD / MS in statistics from the University of Virginia, and a BS in statistics from Zhejiang University.


Frank Xia - Uber
Personalization using LSTMs
Personalization is a common theme in social networks and e-commerce businesses. However, personalization at Uber involves understanding how each driver/rider is expected to behave on the platform. One way to quantify future behavior is to understand the number of trips a driver/rider will take. In this talk, I will present our work on training LSTMs for short-term trip predictions (4-6 weeks) for each driver on the platform. Specifically, I would like to describe how we combine past engagement data of a particular driver with incentive budgets and use a custom loss function (a zero-inflated Poisson) to come up with accurate trip predictions using LSTMs. Predicting rider/driver level behaviors can help us find cohorts of high-performance drivers, run personalized offers to retain users, and deep dive into understanding deviations from trip forecasts.
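The most transferable detail here is the zero-inflated Poisson loss: the model predicts both the probability that a driver does zero trips and a Poisson rate for the count if active, which handles the many zero-trip drivers far better than a plain squared error. Below is a hedged sketch with assumed feature shapes, not Uber's production model.

```python
import torch
import torch.nn as nn

class TripForecaster(nn.Module):
    """LSTM over weekly engagement features; outputs zero-inflation prob and Poisson rate for future trips."""
    def __init__(self, n_features=12, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.zero_logit = nn.Linear(hidden, 1)     # probability the driver does zero trips
        self.log_rate = nn.Linear(hidden, 1)       # log of the Poisson rate if the driver is active

    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return self.zero_logit(h[-1]).squeeze(-1), self.log_rate(h[-1]).squeeze(-1)

def zip_nll(zero_logit, log_rate, y):
    """Negative log-likelihood of a zero-inflated Poisson (handles the many drivers with zero trips)."""
    pi = torch.sigmoid(zero_logit)                 # P(structural zero)
    rate = torch.exp(log_rate)
    log_pois = y * log_rate - rate - torch.lgamma(y + 1)           # log Poisson pmf
    ll_zero = torch.log(pi + (1 - pi) * torch.exp(-rate) + 1e-8)   # y == 0: structural zero or Poisson zero
    ll_pos = torch.log(1 - pi + 1e-8) + log_pois                   # y > 0: must come from the Poisson part
    return -torch.where(y == 0, ll_zero, ll_pos).mean()

model = TripForecaster()
x = torch.randn(128, 8, 12)                        # 128 drivers, 8 weeks of history, 12 features per week
y = torch.poisson(torch.full((128,), 3.0))         # observed trip counts for the target period
zero_logit, log_rate = model(x)
print(zip_nll(zero_logit, log_rate, y).item())
```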
I am a data scientist at Uber focusing on solving forecasting problems using deep learning methods. Previously I worked in the finance industry using machine learning for researching quantitative trading strategies. I hold a Masters from UC Berkeley and a Masters from NYU.


DEEP LEARNING INTEGRATION
Amy Gershkoff - Ancestry
Is Your Organization Ready for ML?
As data science has become increasingly popular, many organizations rush to hire ML experts without laying the proper foundation to ensure their success, including creating proper database architecture, building out essential data science technology, establishing data governance, and instilling data-driven decision-making throughout the organization. Absent these elements, many ML experts join companies excited to deploy their data science expertise only to end up mired in data cleaning or lobbying for tech resources. In this presentation, I discuss how companies can prepare their organization for success, as well as how candidates can diagnose whether an organization is truly ready for ML.
Dr. Gershkoff is Chief Data Officer for Ancestry, which specializes in genealogy and consumer genomics. Previously, she was Chief Data Officer at Zynga. During her career, she has led the Customer Analytics & Insights team at eBay, served as Chief Data Scientist at WPP, and was Head of Media Planning at Obama for America, where she designed the campaign’s advertising and analytics strategy. Gershkoff was named one of America’s “40 Under 40” leading entrepreneurs, one of the Top 50 Women to Watch in Tech, and one of San Francisco's Most Influential Women in Business. She holds a Ph.D. from Princeton University.




Manuel Proissl - Head of Predictive Analytics in Banking Products - UBS
Towards Algorithmic Assurance of Governing Machine Learning Systems at Scale
Over the past years a vast amount of research and guidelines have been published with the aim of paving the way towards 'governance frameworks' for machine learning systems affecting consumers, particularly around adversarial robustness, model transparency, privacy preservation, algorithmic fairness and ethical principles. This presentation focuses on a set of techniques that have shown potential and practical relevance in financial services. The talk also sheds light on the opportunities and challenges of embedding third-party APIs that have been developed/trained by global communities.
Manuel is currently Head of Predictive Analytics in Banking Products at UBS. Previously, he was a senior advisor and machine learning cloud platform lead at Ernst & Young, developed numerous AI-driven business solutions for global organizations, held management roles in cross-border audit & advisory engagements, and led international research collaborations with contributions to AI research, Cognitive Control Systems and Particle Physics.


END OF SUMMIT

New to Deep Learning? Time to Ask Qs! - Networking
Networking & Open Floor with Experts

Maithili Mavinkurve - Sightline Innovation
The Commercialization of Deep Learning

Mahmood Tabaddor - Consortium for Safer AI
What are the Main Practical Safety Issues with AI Products?

Mahesh Ram - Solvvy
How Machine Learning Applications Can Deliver a Superior Customer Support Experience

How can we Utilize AI to Protect the Environment & Increase Sustainability? - WORKSHOP
Panel Discussion with Leading Experts & Practitioners

Startup Mentoring Session - BREAKOUT SESSION
Startup Mentoring Session with VCs and Industry Experts

Ofer Ronen - Chatbase
Analysing & Optimizing Bots More Easily for Better User Experiences

What is AI’s Biggest Impact on Data Centers? - WORKSHOP
Breakout Session & Open Floor With Experts

David Nola - NVIDIA
Image Creation using Generative Adversarial Networks (GANs)