
REGISTRATION & COFFEE

WELCOME
Rumman Chowdhury - Accenture
Designing Ethical AI Solutions
The imperative for ethical design is clear – but how do we move from theory to practice? In this workshop, Accenture's Responsible AI lead Rumman Chowdhury will run a design thinking and ideation session illustrating how ethics and responsibility can be built into AI solutions. This interactive session will ask participants to help design an AI solution and to provide guidance on the right kinds of ethical considerations. Each participant will leave with an understanding of applied ethical design.
Rumman is a Senior Principal at Accenture, and Global Lead for Responsible AI. She comes from a quantitative social science background and is a practicing data scientist. She leads client solutions on ethical AI design and implementation. Her professional work extends to partnerships with the IEEE and World Economic Forum. She has been named a fellow of the Royal Society for the Arts and is one of BBC’s 100 most influential women of 2017.


THEORY & APPLICATIONS
Ilya Sutskever - OpenAI
The Power of Large-Scale RL and Generative Models
Ilya Sutskever received his PhD in 2012 from the University of Toronto working with Geoffrey Hinton. After completing his PhD, he cofounded DNNResearch with Geoffrey Hinton and Alex Krizhevsky which was acquired by Google. He is interested in all aspects of neural networks and their applications.




Andrew Tulloch - Research Engineer - Facebook
Deep Learning in Production at Facebook
Compilers for Deep Learning @ Facebook
With the growth in the complexity of our modeling tools (new operations, heavily dynamic graphs, etc), the changes in our numerical demands (new numerical formats, mixed precision models, etc), and our exploding hardware ecosystem (custom ASIC/FPGA accelerators, new instructions such as VNNI and WMMA, etc), it's getting harder for our traditional ML graph interpreters to deliver high performance in a reliable and maintainable fashion. We'll talk about some of our work at Facebook on ML compilers, our production applications, the exciting research questions and new domains these tools open up.
I'm a research engineer at Facebook, working across the Facebook AI Research and Applied Machine Learning teams to drive the wide range of AI applications at Facebook. At Facebook, I've worked on the large-scale event prediction models powering ads and News Feed ranking, the computer vision models powering image understanding, and many other machine learning projects. I'm a contributor to several deep learning frameworks, including Torch and Caffe. Before Facebook, I obtained a master's degree in mathematics from the University of Cambridge and a bachelor's degree in mathematics from the University of Sydney.



Brendan Frey - Co-Founder & CEO, & Professor - Deep Genomics & University of Toronto
Keynote: Reprogramming the Human Genome: Why AI is Needed
How Deep Learning is Transforming Drug Discovery
Brendan Frey, CEO and Founder of Deep Genomics, will explain how AI did most of the heavy lifting in obtaining the company's first therapeutic candidate. This included discovering novel biology, designing novel compounds, prioritizing compounds by predicted potency and toxicity, creating animal models, designing animal studies and designing the clinical trial. Their AI technology is enabling Deep Genomics to explore an expanding universe of genetic therapies, and to advance novel drug candidates more rapidly and with a higher rate of success than was previously possible.



COFFEE
REINFORCEMENT LEARNING & UNSUPERVISED LEARNING


Andrej Karpathy - Director of Artificial Intelligence - Tesla
Reinforcement Learning on the Web
I'll present our work on training reinforcement learning agents to interact with and complete tasks in web browsers. In the short term our agents are learning to interact with common UI web elements like buttons, sliders and text fields. In the longer term we hope to address more complex tasks, such as achieving competence in interactive online exercises intended for schoolchildren to learn mathematics. I'll use these examples to also give a short but complete Reinforcement Learning tutorial.
Andrej is a 5th year PhD student at Stanford University, studying Deep Learning and its applications to Computer Vision and Natural Language Processing. He is also Head of Artificial Intelligence and Autopilot Vision at Tesla. In particular, his recent work has focused on Image Captioning, Recurrent Neural Network Language Models and Reinforcement Learning. On the side, he enjoys implementing state-of-the-art Deep Learning models in JavaScript, competing against Convolutional Networks on the ImageNet challenge, and blogging. Before joining Stanford he completed an undergraduate degree in Computer Science and Physics at the University of Toronto and a Computer Science Master's degree at the University of British Columbia.




Ofir Nachum - Research Scientist - Google Brain
UREX: Under-appreciated Reward Exploration
Learning Abstractions with Hierarchical Reinforcement Learning
Hierarchical RL has long held the promise of enabling deep RL to solve more complex and temporally extended tasks by abstracting away lower-level details from a higher-level agent. In this talk, we describe how to turn this promise into a reality. We present a hierarchical design in which a higher-level agent solves a task by iteratively directing a lower-level policy to reach certain goals. We describe how both levels may be trained concurrently in a highly efficient, off-policy manner. Furthermore, we present a provably optimal technique for learning abstract notions of "goals" without explicit supervision. Our resulting method achieves excellent performance on a suite of difficult navigation tasks.
Key Takeaways:
- Hierarchy can multiply the capabilities of an RL agent
- The key to good hierarchical RL is using goal-conditioned policies
- Recent research provides the tools to train these models efficiently
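The goal-directed scheme above can be sketched in miniature. The following is a toy numpy illustration of a higher-level agent directing a goal-conditioned lower-level policy (a 1-D corridor with hand-coded policies and a hypothetical API; not the HIRO algorithm presented in the talk):

```python
import numpy as np

def low_level_policy(state, goal):
    # Goal-conditioned low-level policy: step one unit toward the current subgoal.
    return np.sign(goal - state)

def high_level_policy(state, task_target, horizon=3):
    # Higher-level agent: emit a subgoal a few steps ahead, toward the task target.
    return state + np.clip(task_target - state, -horizon, horizon)

def rollout(start, task_target, subgoal_every=3, max_steps=30):
    state, trajectory = start, [start]
    goal = high_level_policy(state, task_target)
    for t in range(1, max_steps + 1):
        state += low_level_policy(state, goal)
        trajectory.append(state)
        if t % subgoal_every == 0:      # the higher level re-plans periodically
            goal = high_level_policy(state, task_target)
        if state == task_target:
            break
    return trajectory

path = rollout(start=0, task_target=10)
print(path[-1])  # 10
```

In the actual method both levels are learned, and the lower level is trained off-policy against relabeled goals; here the hierarchy itself is the only point being illustrated.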
Ofir Nachum currently works at Google Brain as a Research Scientist. His research focuses on reinforcement learning, with notable work including PCL (path consistency learning) and HIRO (hierarchical reinforcement learning with off-policy correction). He received his Bachelor's and Master's from MIT. Before joining Google, he was an engineer at Quora, leading machine learning efforts on the feed, ranking, and quality teams.




Chelsea Finn - Research Scientist & Post-doctoral Scholar - Google Brain & Berkeley AI Research
End-To-End Deep Robotic Learning
Chelsea Finn is a research scientist at Google Brain and post-doctoral scholar at Berkeley AI Research. Starting in 2019, she will join the faculty in CS at Stanford University. She is interested in how learning algorithms can enable machines to acquire general notions of intelligence, allowing them to autonomously learn a variety of complex sensorimotor skills in real-world settings. She received her PhD in CS at UC Berkeley in 2018 and her Bachelors in EECS at MIT in 2014.



LUNCH
Toru Nishikawa - Preferred Networks
Preferred Networks Inc. (PFN) has been actively applying deep learning to real-world problems. In collaboration with leading companies and research institutes, PFN has been focusing on deep learning in three domains: industrial machinery, including manufacturing robots; smart transportation, including autonomous driving; and life science, including cancer diagnosis and treatment. In December 2016, together with FANUC, a world leader in industrial machinery and industrial robots, we launched the world's first commercial IoT platform for manufacturing with Deep Learning technology at its core. In IoT, among other industries, Deep Learning is no longer just a research topic but a key technology driving business.
The dramatic evolution in the functional capabilities of IoT devices, and the fact that data generated by devices is incomparably larger than that generated by humans, are two particularly important factors contributing to the fast-paced innovation in various industries. Similarly, advances in Deep Learning research are expanding its applications beyond pure data analysis to device actuation and control in the physical world. However, for algorithms to efficiently learn real-time control of real-world devices, a combination of advances in both Deep Learning and computing is essential. That is the concept of Edge-Heavy Computing: by bringing intelligence close to the network edge devices, the overall system makes it possible for those devices to learn efficiently in a distributed and collaborative manner, while resolving the data-communication bottleneck often faced in IoT applications. In this talk, I will introduce some of the work we have been doing at PFN, highlight some results, and give examples of how new computing boosts the value brought by Deep Learning.
Toru Nishikawa is the president and CEO of Preferred Networks, Inc., a Tokyo-based startup specializing in applying the latest artificial intelligence technologies to emerging problems in the area of Internet of Things (IoT). He was one of the world finalists of the ACM ICPC (International Collegiate Programming Contest) while he was a graduate student at the University of Tokyo. In 2006, together with his college classmates and fellow ICPC contenders, he founded Preferred Infrastructure, Inc., a precursor company. In 2014, Nishikawa founded Preferred Networks with the aim of expanding their businesses into the realization of Deep Intelligence – a future IoT in which all devices, as well as the network itself, are equipped with machine intelligence. He is an ambitious entrepreneur, programmer, and father of a child.


COMPUTER VISION


Ian Goodfellow - Staff Research Scientist - Google Brain
Generative Adversarial Networks
Ian Goodfellow is a Staff Research Scientist at Google Brain. He is the lead author of the MIT Press textbook Deep Learning. In addition to generative models, he also studies security and privacy for machine learning. He has contributed to open source libraries including TensorFlow, Theano, and Pylearn2. He obtained a PhD from University of Montreal in Yoshua Bengio's lab, and an MSc from Stanford University, where he studied deep learning and computer vision with Andrew Ng. He is generally interested in all things deep learning.


Alexei Efros - UC Berkeley
Alexei Efros (associate professor, UC Berkeley) works in the area of computer vision and computer graphics, especially at the intersection of the two. He is particularly interested in using data-driven techniques to tackle problems where large quantities of unlabeled visual data are readily available. He is a recipient of NSF CAREER award (2006), Sloan Fellowship (2008), Guggenheim Fellowship (2008), SIGGRAPH Significant New Researcher Award (2010), and the Helmholtz Test-of-Time Prize (2013).


Durk Kingma - OpenAI
Variational Autoencoders
Current deep learning mostly relies on supervised learning: given a vast number of examples of humans performing tasks such as labeling images or translation, we can teach computers to mimic humans at these tasks. Since supervised methods only model the task directly, however, they are not particularly efficient: they need many more examples than humans do to learn new tasks. Enter unsupervised learning, where computers model not only tasks but also their context, vastly improving data efficiency. We discuss the powerful framework of Variational Autoencoders (VAEs), a synthesis of deep learning and Bayesian methods, as a principled yet practical approach to unsupervised deep learning. In addition to the underlying mathematics, we discuss current scientific and practical applications of VAEs, such as semi-supervised learning, drug discovery, and image resynthesis.
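Two ingredients of the VAE framework can be made concrete in a few lines: the reparameterization trick, which keeps sampling differentiable, and the closed-form Gaussian KL term of the ELBO. This is a minimal numpy sketch under simplified assumptions (no encoder/decoder networks, a unit-variance Gaussian decoder), not code from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0)

def neg_elbo(x, x_recon, mu, logvar):
    # Negative ELBO = reconstruction error + KL regularizer (Gaussian decoder).
    recon = 0.5 * np.sum((x - x_recon) ** 2)
    return recon + kl_to_standard_normal(mu, logvar)

mu, logvar = np.array([0.5, -0.2]), np.array([0.1, -0.3])
z = reparameterize(mu, logvar)
x = rng.standard_normal(2)
loss = neg_elbo(x, x_recon=z, mu=mu, logvar=logvar)
print(kl_to_standard_normal(np.zeros(2), np.zeros(2)))  # 0.0 — posterior equals prior
```

In a real VAE, mu and logvar come from an encoder network and x_recon from a decoder, and the loss is minimized by gradient descent; the formulas themselves are unchanged.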
Diederik (or Durk) Kingma is a Research Scientist at OpenAI, focusing on unsupervised deep learning. His research career started in 2009, while he was a student at Utrecht University, working with Prof. Yann LeCun at NYU. Since 2013 he has been pursuing a PhD with Prof. Max Welling in Amsterdam, focusing on the intersection of deep learning and Bayesian inference. Early in his PhD, he proposed the Variational Autoencoder (VAE), a principled framework for Bayesian unsupervised deep learning. His other well-known work includes Adam, now a standard method for stochastic gradient descent.


Brad Folkens - CloudSight
Your AI is Blind
The final missing piece in AI is Visual Cognition and Understanding. Realizing this dream takes more than winning scores at classifying ImageNet. We discuss our four years of experience in scaling, quality control, data management, and other important lessons learned in commercializing computer vision in the marketplace, including the procurement of the largest training dataset ever created for Visual Cognition and Understanding through Deep Learning.
Brad Folkens is the co-founder and CTO of CloudSight, where he leads the effort to build the world’s first visual cognition platform to power the future of AI. He earned his Theoretical Computer Science and Mathematics degrees with honors and immediately went to work building profitable companies, first with the award winning PublicSalary.com HR Tool, and later with his co-founder, Dominik, grossing over $1m/year in revenue.



COFFEE
DEEP LEARNING & ECONOMIC IMPACT

PANEL: How Will Deep Learning Change Manufacturing and Industry?
Shivon Zilis - Bloomberg
Shivon is a partner and founding member of Bloomberg Beta, a $75 million venture fund backed by Bloomberg L.P. that invests in startups transforming the future of work. Shivon is obsessed with the most important force changing work, machine intelligence. She is known for an annual report that researches thousands of machine intelligence companies and selects the most promising real-world problems they are solving. Bloomberg Beta has invested in more than 25 machine intelligence companies to date. She graduated from Yale, where she was the goalie on the ice hockey team. She is an advisor to OpenAI, a fellow at the Creative Destruction Lab, on the advisory board of University of Alberta's Machine Learning group, and a charter member of C100. She co-hosts an annual conference at the University of Toronto that brings together the foremost authors, academics, founders, and investors in machine intelligence. She was one of Forbes 30 Under 30 in Venture Capital.


Modar Alaoui - Eyeris
Vision AI for Augmented Human Machine Interaction
This session will unveil the latest vision AI technologies that ensure safe and efficient human machine interactions in the industrial automation context. Today’s human-facing industrial AI applications lack a key element for Human Behavior Understanding (HBU) that is critical for augmented safety and enhancing productivity. The second part of this session will detail how real-world applications can benefit from a comprehensive suite of visual behavior analytics that are readily available today.
Modar is a serial entrepreneur and expert in AI-based vision software development. He is currently founder and CEO at Eyeris, developer of EmoVu, a Deep Learning-based emotion recognition software that reads facial micro-expressions. Eyeris uses Convolutional Neural Networks (CNNs) as its Deep Learning architecture to train and deploy its algorithms into a number of today's commercial applications. Modar combines a decade of experience across Human Machine Interaction (HMI) and Audience Behavioral Measurement. He is a frequent keynote speaker on "Ambient Intelligence", a winner of several technology and innovation awards, and has been featured in many major publications for his work.


Inmar Givoni - Uber ATG
Inmar Givoni is a Senior Autonomy Engineering Manager at Uber Advanced Technology Group, Toronto, where she leads a team whose mission is to bring cutting-edge deep learning models for self-driving vehicles from research into production. She received her PhD (Computer Science) in 2011 from the University of Toronto, specializing in machine learning, and was a visiting scholar at the University of Cambridge. She has worked at Microsoft Research, Altera (now Intel), Kobo, and Kindred in roles ranging from research scientist to VP, applying machine learning techniques to various problem domains and taking concepts from research to production systems. She is an inventor on several patents and has authored numerous top-tier academic publications in the areas of machine learning, computer vision, and computational biology. She is a regular speaker at AI events and is particularly interested in outreach activities for young women, encouraging them to choose technical career paths. For her volunteering efforts she received the 2017 Arbor Award from UofT. In 2018 she was recognized as one of Canada's 50 inspiring women in STEM.


DEEP LEARNING SYSTEMS


Andres Rodriguez - Senior Technical Lead for Deep Learning - Intel
Catalyzing Deep Learning’s Impact in the Enterprise
Deep learning is unlocking tremendous economic value across various market sectors. Individual data scientists can draw on several open source frameworks and basic hardware resources during the initial investigative phases, but quickly require significant hardware and software resources to build and deploy production models. Intel offers a variety of software and hardware to support a diversity of workloads and user needs. Intel Nervana delivers a competitive deep learning platform that makes it easy for data scientists to start from the iterative, investigatory phase and take models all the way to deployment. This platform is designed for speed and scale, and serves as a catalyst for all types of organizations to benefit from the full potential of deep learning. Supported applications include, but are not limited to, automotive speech interfaces, image search, language translation, agricultural robotics and genomics, financial document summarization, and finding anomalies in IoT data.
Andres Rodriguez is a deep learning solutions architect with Intel Nervana where he designs deep learning solutions for Intel’s customers and provides technical leadership across Intel for deep learning. Andres received his PhD from Carnegie Mellon University for his research in machine learning, and prior to joining Intel, he was a deep learning research scientist with the Air Force Research Laboratory. He holds over 20 peer reviewed publications in journals and conferences, and a book chapter on machine learning.




Avidan Akerib - VP of the Associative Computing - GSI Technology
In-Place Computing: High-Performance Search
This presentation details an in-place associative computing technology that changes the concept of computing from serial data processing, where data is moved back and forth between the processor and memory, to massively parallel data processing, compute, and search in place directly in the main processing array. This in-place associative computing technology removes the IO bottleneck between the processor and memory, resulting in a significant performance-over-power improvement compared to conventional methods that use a CPU and GPGPU (General Purpose GPU) along with DRAM. Target applications include convolutional neural networks, recommender systems for e-commerce, and data mining tasks such as prediction, classification, and clustering.
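The serial-versus-in-place contrast can be illustrated with a nearest-neighbor search. The vectorized version below stands in for the massively parallel compare-in-memory operation; it is only an analogy on conventional hardware written in numpy, not GSI's actual hardware or API:

```python
import numpy as np

rng = np.random.default_rng(1)
database = rng.standard_normal((10_000, 64)).astype(np.float32)  # stored rows
# A query that is a slightly perturbed copy of row 1234.
query = database[1234] + 0.01 * rng.standard_normal(64).astype(np.float32)

def serial_nearest(db, q):
    # Conventional model: stream each row past the processor, one at a time.
    best, best_d = -1, np.inf
    for i, row in enumerate(db):
        d = np.sum((row - q) ** 2)
        if d < best_d:
            best, best_d = i, d
    return best

def parallel_nearest(db, q):
    # Associative-style model: compare the query against every row at once.
    return int(np.argmin(np.sum((db - q) ** 2, axis=1)))

assert serial_nearest(database, query) == parallel_nearest(database, query) == 1234
```

Both functions return the same answer; the point is that the second expresses the search as one array-wide operation rather than a data-movement loop, which is the access pattern in-place associative hardware accelerates.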
Avidan Akerib is VP of the Associative Computing business unit at GSI Technology. He holds a PhD from the Weizmann Institute of Science where he developed the theory of associative computing and applications for image processing and graphics. Avidan has over 30 years of experience in parallel computing, image processing and pattern recognition, and associative processing. He holds over 20 patents related to parallel computing and associative processing.



Conversation & Drinks - sponsored by Qualcomm

REGISTRATION & COFFEE

WELCOME
Nathan Benaich - Playfair Capital
I joined Playfair Capital in 2013 to focus on deal sourcing, due diligence, and helping our companies grow. I'm particularly interested in artificial intelligence and machine learning, infrastructure-as-a-service, mobile, and bioinformatics. I've led, originated or participated in Seed through Growth investments including Mapillary, Appear Here, Dojo, and Festicket.
Prior to Playfair, I earned an M.Phil and Ph.D in oncology as a Gates and Dr. Herchel Smith Scholar at the University of Cambridge, and a BA in biology from Williams College. I've published research focused on technologies to halt the fatal spread of cancer around the body.


STARTUP SESSION
Will Jack - Remedy
Bringing Deep Learning to The Front Lines of Healthcare
Today's healthcare system is not built for the seamless integration of innovations in machine learning. Health data is heavily siloed, inaccessible, and non-standardized, making it challenging to use in machine learning systems. Additionally, deep learning techniques aren't well suited to many healthcare problems because their decisions are difficult to interpret. At Remedy, we're building a healthcare system that captures granular data at the point of care and enables the deployment of machine learning models at the point of care. We're also developing interpretable models to tackle tasks such as diagnosis, physician education, treatment planning, and triage.
Will is CEO of Remedy Health, a startup focused on building a healthcare system around a strong software backbone to enable the seamless integration of new innovations into care. Will is also a venture partner at Alsop Louie Partners. Prior to this Will worked in R&D on SpaceX’s internet satellite project, and studied physics and computer science at MIT. An Ohio native, Will spent his childhood developing a particle collider in his basement, using the nuclear reactions it carried out to investigate novel methods of medical imaging.




Alex Dalyac - Co-Founder & CEO - Tractable
Addressing the Labelling Bottleneck in Computer Vision for Learning Expert Tasks
Thanks to deep learning, AI algorithms can now surpass human performance in image classification. However, behind these results lie tens of thousands of man-hours spent annotating images. This significantly limits commercial applications where cost and time to market are key. At Tractable, our solution centers on creating a feedback loop from learning algorithm to human, turning the latter into a "teacher" rather than a blind labeler. Dimensionality reduction, information retrieval and transfer learning are some of our core proprietary techniques. We will demonstrate a 15x labeling cost reduction on the expert task of estimating from images the cost to repair a damaged vehicle – an important application for the insurance industry.
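One standard way to realize such an algorithm-to-human feedback loop is uncertainty sampling from active learning: route the examples the model is least sure about to the human teacher instead of labeling the pool blindly. The sketch below is a generic numpy illustration with a hypothetical logistic model, not Tractable's proprietary pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

def predict_proba(w, X):
    # Logistic model's probability of class 1 (stand-in for any classifier).
    return 1.0 / (1.0 + np.exp(-X @ w))

def select_for_labeling(w, X_unlabeled, budget):
    # Uncertainty sampling: pick the examples whose predictions are closest
    # to 0.5, i.e. where a human label is most informative.
    p = predict_proba(w, X_unlabeled)
    uncertainty = -np.abs(p - 0.5)          # highest when p is near 0.5
    return np.argsort(uncertainty)[-budget:]

w = np.array([1.0, -1.0])                   # current (hypothetical) model weights
X_pool = rng.standard_normal((1000, 2))     # unlabeled pool
picked = select_for_labeling(w, X_pool, budget=10)
p_picked = predict_proba(w, X_pool[picked])
print(np.max(np.abs(p_picked - 0.5)))       # small: chosen points hug the boundary
```

After the human labels the selected examples, the model is retrained and the loop repeats, which is how a labeling cost reduction of the kind quoted above is obtained in practice.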
Alex is Co-founder & CEO of Tractable, a young London-based startup bringing recent breakthroughs in AI to industry. Tractable's current focus is on automating visual recognition tasks. Its long-term vision is to expand into natural language and robot control, and to spread disruptive AI throughout industry. Tractable was founded in 2014 and is backed by $2M of venture capital from Silicon Valley investors, led by Zetta Venture Partners. Alex has a degree in econometrics & mathematical economics from the LSE, and a postgraduate degree in computer science from Imperial College London. Alex's experience with Deep Learning investing is on the receiving side, particularly in how to attract US venture capital into Europe as early as the seed stage.



Roland Memisevic - Chief Scientist - Twenty Billion Neurons
The "Something Something" Video Dataset
Is solving video the next key breakthrough in computer vision? We'll discuss the key challenges in applying deep learning techniques to video understanding, including approaches to building high-quality datasets; annotating data for video is quite different from annotating images. What are the key use cases for video today and tomorrow? How do we address concerns around privacy and fears about "big brother"? Last but not least, how does video move the field of AI toward general intelligence and a common-sense understanding of the physical world in machine learning models?
Roland Memisevic received his PhD in Computer Science from the University of Toronto in 2008. He subsequently held positions as research scientist at PNYLab, Princeton, as post-doctoral fellow at the University of Toronto and ETH Zurich, and as junior professor at the University of Frankfurt. In 2012 he joined the MILA deep learning group at the University of Montreal as assistant professor. He has been on leave from his academic position since 2016 to lead the research efforts at Twenty Billion Neurons, a German-Canadian AI startup he co-founded. Roland is Fellow of the Canadian Institute for Advanced Research (CIFAR).




Augustin Marty - Co-Founder & CEO - Deepomatic
Creating Image Recognition Solutions Through DL & Human-Machine Cooperation
Why Do Corporations Need to Own Their Data?
Augustin Marty is the CEO of Deepomatic, a company that believes artificial intelligence should be made accessible to all. To achieve this goal, Deepomatic has created a platform that enables businesses to develop their own image recognition systems. By 28 he had founded his first company in China, worked in India optimising combustion cycles for power plants, and worked at Vinci Construction Group designing and selling engineering projects. Augustin met his cofounders just after high school; sharing the same passion for entrepreneurship, they decided to partner in early 2014 and created Deepomatic, the image intelligence company.



COFFEE
DEEP LEARNING APPLICATIONS IN INDUSTRY & BUSINESS


Shubho Sengupta - AI Research - Facebook AI Research (FAIR)
Systems Challenges for Deep Learning
Training neural network models and deploying them in production poses a unique set of computing challenges. The ability to train large models fast allows researchers to explore the model landscape quickly and push the boundaries of what is possible with Deep Learning. However, a single training run often consumes several exaflops of compute and can take a month or more to finish. Similarly, some problem areas, like speech synthesis and recognition, have real-time requirements, which limit how long a model evaluation can take in production. In this presentation, I will talk about three systems challenges that must be addressed so that we can continue to train and deploy rich neural network models.
Shubho now works on AI research at FAIR. He was previously a senior research scientist at the Silicon Valley AI Lab (SVAIL) at Baidu Research.
I am an architect of the High Performance Computing-inspired training platform that was used at SVAIL to train some of the largest recurrent neural network models in the world. I also spend a large part of my time exploring models for both speech recognition and speech synthesis, and what it takes to train these models at scale and deploy them to hundreds of millions of users. I am the primary author of the WarpCTC project, which is commonly used for speech recognition. Before coming to industry, I got my PhD in Computer Science from UC Davis, focusing on parallel algorithms for GPU computing, and subsequently went to Stanford for a Master's in Financial Math.

Sergey Levine - UC Berkeley
Generalization and the Role of Data in Reinforcement Learning
Over the past decade, we have witnessed a revolution in supervised machine learning, as large, high-capacity models trained on huge datasets attain amazing results across a range of domains, from computer vision to natural language processing and speech recognition. But can these gains in supervised learning performance translate into more effective and optimal decision making? The branch of machine learning that studies decision making is called reinforcement learning. While more effective and performant reinforcement learning methods have been developed over the past decade, it has generally proven challenging for reinforcement learning to benefit from large datasets, because it is conventionally framed as an active, online learning problem, which makes reusing large previously collected datasets difficult. In this talk, I will discuss how reinforcement learning algorithms can enable broad generalization through the use of large and diverse prior datasets. This concept lies at the core of offline reinforcement learning, which develops reinforcement learning methods that do not require active interaction with the environment but instead, much like current supervised learning methods, learn from previously collected datasets. Crucially, unlike supervised learning, such methods directly optimize for downstream decision making, maximizing long-horizon reward signals. I will describe the computational and statistical challenges associated with offline reinforcement learning, recent algorithmic developments, and a few promising applications.
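The offline setting itself can be shown with tabular Q-learning on a toy chain MDP: the learner sees only a fixed dataset of transitions collected by a random policy and never interacts with the environment. This is a minimal illustration of learning from static data, far simpler than the offline RL algorithms discussed in the talk:

```python
import numpy as np

# Toy 4-state chain: action 0 = left, action 1 = right; reward 1 on reaching state 3.
N_STATES, GAMMA = 4, 0.9

def step(s, a):
    s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == 3)

# A fixed, previously collected dataset of random-policy transitions.
rng = np.random.default_rng(3)
dataset = []
for _ in range(2000):
    s = int(rng.integers(0, N_STATES))
    a = int(rng.integers(0, 2))
    s2, r = step(s, a)
    dataset.append((s, a, r, s2))

# Offline Q-learning: repeated Bellman backups over the static dataset only.
Q = np.zeros((N_STATES, 2))
for _ in range(100):
    for s, a, r, s2 in dataset:
        target = r + GAMMA * (0.0 if s2 == 3 else Q[s2].max())
        Q[s, a] += 0.1 * (target - Q[s, a])

print(Q.argmax(axis=1))  # [1 1 1 1]: the learned policy moves right, toward the reward
```

In this tiny tabular case the random dataset covers the whole state-action space, so naive backups succeed; the talk's central point is that with large function approximators and partial coverage, offline methods need explicit machinery to avoid overestimating actions absent from the data.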
Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.




Stefano Ermon - Assistant Professor - Stanford University
Machine Learning for Sustainability
Policies for sustainable development entail complex decisions about balancing environmental, economic, and societal needs. Making such decisions in an informed way presents significant computational challenges. Modern AI techniques combined with new data streams have the potential to yield accurate, inexpensive, and highly scalable models to inform research and policy. In this talk, I will present an overview of my group's research on applying computer science techniques in sustainability domains, including poverty and food security.
Stefano Ermon is an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory and the Woods Institute for the Environment. Stefano's research is centered on techniques for scalable and accurate inference in graphical models, statistical modeling of data, large-scale combinatorial optimization, and robust decision making under uncertainty, and is motivated by a range of applications, in particular ones in the emerging field of computational sustainability.



Bryan Catanzaro - VP of Applied Deep Learning Research - NVIDIA
More than a GPU: Platforms for Deep Learning
Training and deploying state-of-the-art deep neural networks is very computationally intensive, with tens of exaflops needed to train a single model on a large dataset. The high-density compute afforded by modern GPUs has been key to many of the advances in AI over the past few years. However, researchers need more than a fast processor – they also need optimized libraries and tools to program efficiently so that they can experiment with new ideas. They also need scalable systems that combine many of these processors to train a single model. In this talk, I’ll discuss platforms for deep learning and how NVIDIA is working to build the deep learning platforms of the future.
Bryan Catanzaro is VP of Applied Deep Learning Research at NVIDIA, where he leads a team solving problems in fields ranging from video games to chip design using deep learning. Prior to his current role at NVIDIA, he worked at Baidu to create next-generation systems for training and deploying end-to-end deep learning based speech recognition. Before that, he was a researcher at NVIDIA, where he wrote the research prototype and drove the creation of cuDNN, the low-level library now used by most AI researchers and deep learning frameworks to train neural networks. Bryan earned his PhD from Berkeley, where he built the Copperhead language and compiler, which allows Python programmers to use nested data parallel abstractions efficiently. He earned his MS and BS from Brigham Young University, where he worked on computer arithmetic for FPGAs.



LUNCH


Judy Hoffman - Postdoc Researcher - Stanford Computer Vision Group
A General Framework for Domain Adversarial Learning
Judy Hoffman - Stanford Computer Vision Group
A General Framework for Domain Adversarial Learning
Deep convolutional networks have driven significant advances in recent years, but progress has primarily been limited to fully supervised settings that require large amounts of human-annotated training data. Recent results in adversarial adaptive representation learning show that such methods can also excel when learning in sparsely or weakly labeled settings across modalities and domains. In this talk, I will present a general framework for domain adversarial learning, in which a game is played between a discriminator, which tries to determine whether an image comes from the large labeled data source or from the new sparsely labeled dataset, and the representation, which tries to make the two data sources indistinguishable. Together, this game produces a model adapted to the new domain with minimal or no new human annotations.
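The adversarial game described in the abstract can be sketched in a few lines of NumPy. This is a hypothetical toy illustration, not the speaker's implementation: a linear feature extractor `W`, a logistic domain discriminator `(w, b)`, and the two synthetic "domains" are all assumptions made for the example.

```python
# Toy domain-adversarial setup (illustrative only): a linear feature
# extractor W is trained to fool a logistic domain discriminator (w, b),
# while the discriminator is trained to tell the two data sources apart.
# The feature update uses the *negated* domain-loss gradient -- the
# "gradient reversal" at the heart of the game.
import numpy as np

rng = np.random.default_rng(0)

# Two domains with the same structure but shifted means (covariate shift).
Xs = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # large labeled source
Xt = rng.normal(loc=2.0, scale=1.0, size=(200, 2))   # sparsely labeled target
X = np.vstack([Xs, Xt])
d = np.concatenate([np.zeros(200), np.ones(200)])    # domain labels

W = 0.1 * rng.normal(size=(2, 2))    # feature extractor
w = np.zeros(2)                      # domain discriminator weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for step in range(500):
    Z = X @ W                        # shared representation
    p = sigmoid(Z @ w + b)           # discriminator's P(target domain)
    err = p - d                      # cross-entropy gradient w.r.t. logits
    # Discriminator DEscends the domain-classification loss...
    w -= lr * (Z.T @ err) / len(X)
    b -= lr * err.mean()
    # ...while the feature extractor Ascends it (gradient reversal),
    # pushing the two domains toward indistinguishability.
    W += lr * (X.T @ (err[:, None] * w[None, :])) / len(X)

# A fooled discriminator stays well below the accuracy a classifier
# can reach on the raw, unadapted inputs.
domain_acc = ((sigmoid(X @ W @ w + b) > 0.5) == d).mean()
```

In a full system the feature extractor would be a deep network shared with a label classifier trained on the source annotations; the sketch keeps only the adversarial half of the objective.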
Judy Hoffman is a Postdoctoral Researcher in the Stanford Computer Vision group, working with Fei-Fei Li. Her research focuses on developing learning representations and recognition models with limited human annotations. She received her PhD in Electrical Engineering and Computer Science from University of California, Berkeley in Summer 2016, where she was advised by Trevor Darrell and Kate Saenko. She is interested in lifelong learning, adaptive methods, and adversarial models.




Chris Moody - Data Scientist - Stitch Fix
Practical, Active, Interpretable & Deep Learning
Chris Moody - Stitch Fix
Practical, Active, Interpretable & Deep Learning
I'll review applied deep learning techniques we use at Stitch Fix to understand our clients' personal style. Interpretable deep learning models are not only useful to scientists but also lead to better client experiences -- no one wants to interact with a black-box virtual assistant. We do this in several ways. We've extended factorization machines with variational techniques, which lets us learn quickly by finding the most polarizing examples. And by enforcing sparsity in our models, we have RNNs and CNNs that reveal how they function. The result is a dynamic machine that learns quickly and challenges our clients' style.
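For readers unfamiliar with factorization machines, the standard score (Rendle's formulation, sketched here with made-up toy values, not Stitch Fix's code) models every pairwise feature interaction through latent vectors, and an algebraic identity collapses the quadratic pairwise sum into a linear-time computation:

```python
# Factorization-machine score: y = w0 + <w, x> + sum_{i<j} <v_i, v_j> x_i x_j.
# All names and values here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3                       # number of features, latent dimension
x = rng.normal(size=n)            # one feature vector (e.g. client + item)
w0 = 0.5                          # global bias
w = rng.normal(size=n)            # linear weights
V = rng.normal(size=(n, k))       # latent interaction factors

# Naive O(n^2 k) pairwise interactions: sum over i < j of <v_i, v_j> x_i x_j.
naive = sum(V[i] @ V[j] * x[i] * x[j]
            for i in range(n) for j in range(i + 1, n))

# O(n k) identity: 0.5 * sum_f ((sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2).
fast = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))

y = w0 + w @ x + fast             # the FM prediction
```

The variational extension mentioned in the abstract replaces the point-estimate factors with distributions, and the resulting uncertainty is what makes it possible to select the most polarizing examples to learn from.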
Chris Moody comes from a physics background at Caltech and UCSC, and is now a scientist at Stitch Fix's Data Labs. He has an avid interest in NLP and has dabbled in deep learning, variational methods, and Gaussian processes. He's contributed to the Chainer deep learning library (http://chainer.org/), contributed the super-fast Barnes-Hut version of t-SNE to scikit-learn (http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html), and written (one of the few!) sparse tensor factorization libraries in Python (https://github.com/stitchfix/ntflib). Lately he's been working on lda2vec (https://lda2vec.readthedocs.org/en/latest/).




Stacey Svetlichnaya - Software Development Engineer - Flickr
Deep Learning for Emoji Prediction
Stacey Svetlichnaya - Flickr
Quantifying Visual Aesthetics on Flickr
What makes an image beautiful? More pragmatically, can an algorithm distinguish high-quality photographs from casual snapshots to improve search results, recommendations, and user engagement at scale? We leverage social interaction with Flickr photos to generate a massive dataset for computational aesthetics and train machine learning models to predict whether an image is of high, medium, or low quality. I will present our approach and findings, address some of the challenges of quantifying subjective preferences, and discuss applications of the aesthetics model to finding, sharing, and creating visually compelling content in an online community.
Stacey Svetlichnaya is a software engineer on the Yahoo Vision & Machine Learning team. Her recent deep learning research includes object recognition, image aesthetic quality and style classification, photo caption generation, and modeling emoji usage. She has worked extensively on Flickr image search and data pipelines, as well as automating content discovery and recommendation. Prior to Flickr, she helped develop a visual similarity search engine with LookFlow, which Yahoo acquired in 2013. Stacey holds a BS and MS in Symbolic Systems from Stanford University.




Tony Jebara - Director of Machine Learning Research - Netflix
Personalized Content/Image Selection
Tony Jebara - Netflix
Personalized Content/Image Selection
A decade ago, Netflix launched a challenge to predict how each user would rate each movie in our catalog. This accelerated the science of machine learning and matrix factorization. Since then, our learning algorithms and models have evolved to include multiple layers, multiple stages, and nonlinearities. Today, we use machine learning and its deep variants to rank a large catalog by determining the relevance of each of our titles to each of our users, i.e., personalized content selection. We also use machine learning to determine how best to present the top-ranked items to each user. This includes selecting the best images to display for each title just for you, i.e., personalized image selection.
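The rating-prediction setup the challenge popularized can be sketched as generic matrix factorization fit by SGD. This is a minimal toy version under assumed dimensions and synthetic data, not Netflix's production system:

```python
# Toy matrix-factorization recommender (illustrative only): each user u and
# title i gets a latent vector, the predicted rating is their dot product,
# and both sides are fit by SGD on the observed ratings.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 20, 15, 4

# Synthetic low-rank "true" preferences generate the rating matrix.
R = rng.normal(size=(n_users, k)) @ rng.normal(size=(n_items, k)).T
mask = rng.random((n_users, n_items)) < 0.6   # 60% of ratings observed
obs = np.argwhere(mask)

U = 0.1 * rng.normal(size=(n_users, k))   # user factors
V = 0.1 * rng.normal(size=(n_items, k))   # title factors
lr, reg = 0.02, 0.01

for epoch in range(300):
    for u, i in obs:
        err = R[u, i] - U[u] @ V[i]             # residual on one rating
        U[u] += lr * (err * V[i] - reg * U[u])  # gradient step, user side
        V[i] += lr * (err * U[u] - reg * V[i])  # gradient step, title side

# Training error on the observed ratings should be small after fitting.
rmse = np.sqrt(np.mean([(R[u, i] - U[u] @ V[i]) ** 2 for u, i in obs]))
```

The "multiple layers, multiple stages and nonlinearities" of the abstract generalize exactly this: the dot product `U[u] @ V[i]` is replaced by deeper, nonlinear scoring functions.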
Tony directs machine learning research at Netflix and is a professor, currently on sabbatical, at Columbia University. He serves as general chair of the 2017 International Conference on Machine Learning. He has published over 100 scientific articles in the field of machine learning and has received several best paper awards.




Danny Lange - SVP of AI & Machine Learning - Unity Technologies
Bringing Machine Learning to Every Corner of Your Business
Danny Lange - Unity Technologies
Learning from Multi-Agent, Emergent Behaviors in a Simulated Environment
A revolution in reinforcement learning is happening, one in which companies harness increasingly diverse and complex virtual simulations to accelerate the pace of innovation. Join this session to learn about environments already created that have yielded surprising advances in AI agents, and to better understand how emergent behaviors and open-endedness in multi-agent systems can lead to optimal designs and real-world practices.
Dr. Danny Lange is Senior Vice President of Artificial Intelligence and Machine Learning at Unity, where he leads the company’s innovation around AI and machine learning, focusing on bringing AI to simulation and gaming.
Prior to joining Unity, Lange was the head of machine learning at Uber, where he led efforts to build the world’s most versatile Machine Learning platform to support the company’s hyper-growth. Lange also served as General Manager of Amazon Machine Learning -- an AWS product that offers Machine Learning as a Cloud Service. Before that, he was Principal Development Manager at Microsoft where he led a product team focused on large-scale Machine Learning for Big Data.
Lange spent eight years working on speech recognition systems, first as CTO of General Magic, Inc., then through his work on General Motors’ OnStar Virtual Advisor, one of the largest deployments of an intelligent personal assistant before Siri. Danny started his career as a computer scientist at IBM Research.
He holds MS and Ph.D. degrees in Computer Science from the Technical University of Denmark. He is a member of the Association for Computing Machinery (ACM) and the IEEE Computer Society, and has several patents to his credit.



END OF SUMMIT