REGISTRATION & COFFEE
Rumman Chowdhury - Accenture
Designing Ethical AI Solutions
The imperative for ethical design is clear – but how do we move from theory to practice? In this workshop, Accenture expert and Responsible AI lead Rumman Chowdhury will lead a design thinking and ideation session to illustrate how AI solutions can be imbued with ethics and responsibility. This interactive session will ask participants to help design an AI solution and will provide guidance on the right kinds of ethical considerations. Each participant will leave with an understanding of applied ethical design.
Rumman is a Senior Principal at Accenture and its Global Lead for Responsible AI. She comes from a quantitative social science background and is a practicing data scientist. She leads client solutions on ethical AI design and implementation. Her professional work extends to partnerships with the IEEE and the World Economic Forum. She has been named a Fellow of the Royal Society of Arts and was one of the BBC's 100 Women of 2017.
THEORY & APPLICATIONS
Ilya Sutskever - OpenAI
Meta Learning and Self Play
Meta-learning is the idea that learning systems can learn to learn quickly and well. Self-play systems are intriguing because they can lead to immensely complex behavior even in very simple environments. I will present several results in meta-learning applied to different domains and self-play results applied to simulated physics environments, and discuss the connection between the two.
Ilya Sutskever received his PhD in 2012 from the University of Toronto, working with Geoffrey Hinton. After completing his PhD, he cofounded DNNResearch with Geoffrey Hinton and Alex Krizhevsky, which was acquired by Google. He is interested in all aspects of neural networks and their applications.
Andrew Tulloch - Facebook
Deep Learning in Production at Facebook
Facebook is powered by machine learning and AI. From advertising relevance, news feed and search ranking to computer vision, face recognition, and speech recognition, we run ML models at massive scale, computing trillions of predictions every day. I'll talk about some of the tools and tricks we use for scaling both the training and deployment of some of our deep learning models at Facebook. I'll also cover some useful libraries that we've open-sourced for production-oriented deep learning applications.
I'm a research engineer at Facebook, working on the Facebook AI Research and Applied Machine Learning teams to drive the large number of AI applications at Facebook. At Facebook, I've worked on the large-scale event prediction models powering ads and News Feed ranking, the computer vision models powering image understanding, and many other machine learning projects. I'm a contributor to several deep learning frameworks, including Torch and Caffe. Before Facebook, I obtained a master's in mathematics from the University of Cambridge, and a bachelor's in mathematics from the University of Sydney.
Brendan Frey - Deep Genomics & University of Toronto
How Deep Learning is Transforming Drug Discovery
By driving cars, beating humans at their own games, transcribing speech and translating text, deep learning is changing the world. However, over the past five years, exponential growth in biomedical datasets has created the perfect opportunity for deep learning to disrupt drug discovery. Already, deep learning systems can identify disease mutations more reliably than human experts, and can predict which compounds will modulate the activity of therapeutic targets. I'll describe different approaches to drug discovery and explain how Deep Genomics is using its machine learning platform to identify therapies for metabolic, neuromuscular and neurodegenerative disorders, and to advance them into clinical trials as fast as possible. I'll also describe our approach to overcoming the so-called 'black box' challenge of establishing trust with stakeholders, regulators, clinicians and patients.
Brendan Frey is internationally recognized as a leader in machine learning and genome biology. His group has published over a dozen papers in Science, Nature and Cell, and their most recent work on using deep learning to identify protein-DNA interactions was highlighted on the front cover of Nature Biotechnology. Frey is a Fellow of the Royal Society of Canada, a Fellow of the Institute of Electrical and Electronics Engineers, and a Fellow of the American Association for the Advancement of Science. He has consulted for several industrial research and development laboratories in Canada, the United States and England, and has served on the Technical Advisory Board of Microsoft Research. Most recently, Dr. Frey spun out a company called Deep Genomics.
REINFORCEMENT LEARNING & UNSUPERVISED LEARNING
Andrej Karpathy - Tesla
I'll present our work on training reinforcement learning agents to interact with and complete tasks in web browsers. In the short term our agents are learning to interact with common UI web elements like buttons, sliders and text fields. In the longer term we hope to address more complex tasks, such as achieving competence in interactive online exercises intended for schoolchildren to learn mathematics. I'll use these examples to also give a short but complete Reinforcement Learning tutorial.
Ofir Nachum - Google Brain
Learning Abstractions with Hierarchical Reinforcement Learning
Hierarchical RL has long held the promise of enabling deep RL to solve more complex and temporally extended tasks by abstracting away lower-level details from a higher-level agent. In this talk, we describe how to turn this promise into a reality. We present a hierarchical design in which a higher-level agent solves a task by iteratively directing a lower-level policy to reach certain goals. We describe how both levels may be trained concurrently in a highly efficient, off-policy manner. Furthermore, we present a provably optimal technique for learning abstract notions of 'goals' without explicit supervision. Our resulting method achieves excellent performance on a suite of difficult navigation tasks.
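The two-level loop described in the abstract – a higher-level agent emitting subgoals, a lower-level policy chasing them – can be illustrated with a toy sketch on a one-dimensional state space. This is an assumption-laden illustration (the hand-coded subgoal rule and greedy low-level policy stand in for the learned, off-policy-trained policies of the actual method):

```python
def low_level_step(state, goal):
    # goal-conditioned low-level policy: greedily move one unit toward the subgoal
    return state + (1 if goal > state else -1)

def run_hierarchy(start, target, horizon=5, max_steps=100):
    """Toy two-level control loop: the higher level emits a subgoal every
    `horizon` steps; the lower level only ever sees the current subgoal."""
    state, trace = start, [start]
    steps = 0
    while state != target and steps < max_steps:
        # hand-coded stand-in for a higher-level policy: a subgoal at most
        # `horizon` units away, in the direction of the task target
        goal = state + max(-horizon, min(horizon, target - state))
        for _ in range(horizon):
            if state == goal:
                break
            state = low_level_step(state, goal)
            trace.append(state)
            steps += 1
    return trace
```

The point is only the interface: the higher level abstracts away single-step control, while the lower level never needs to know the final task target.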
Ofir Nachum currently works at Google Brain as a Research Scientist. His research focuses on reinforcement learning, with notable work including PCL (path consistency learning) and HIRO (hierarchical reinforcement learning with off-policy correction). He received his Bachelor's and Master's from MIT. Before joining Google, he was an engineer at Quora, leading machine learning efforts on the feed, ranking, and quality teams.
Chelsea Finn - Google Brain & Berkeley AI Research
Chelsea Finn is a research scientist at Google Brain and post-doctoral scholar at Berkeley AI Research. Starting in 2019, she will join the faculty in CS at Stanford University. She is interested in how learning algorithms can enable machines to acquire general notions of intelligence, allowing them to autonomously learn a variety of complex sensorimotor skills in real-world settings. She received her PhD in CS at UC Berkeley in 2018 and her Bachelor's in EECS at MIT in 2014.
Toru Nishikawa - Preferred Networks
Preferred Networks, Inc. (PFN) has been actively working on applications of deep learning to real-world problems. In collaboration with leading companies and research institutes, PFN has been focusing on deep learning in three domains: industrial machinery, including manufacturing robots; smart transportation, including autonomous driving; and life science, including cancer diagnosis and treatment. In December 2016, together with FANUC, a world leader in industrial machinery and industrial robots, we launched the world's first commercial IoT platform for manufacturing with deep learning technology at its core. In IoT, as in other industries, deep learning is no longer just a research topic but a key technology driving business.
The dramatic evolution in the functional capabilities of IoT devices, and the fact that the data generated by devices is incomparably larger than that generated by humans, are two particularly important factors contributing to fast-paced innovation across industries. Similarly, advances in deep learning research are expanding its applications beyond pure data analysis to device actuation and control in the physical world. However, for algorithms to efficiently learn real-time control of real-world devices, advances in both deep learning and computing are essential. That is the concept of Edge-Heavy Computing: by bringing intelligence close to the edge devices, the overall system makes it possible for those devices to learn efficiently in a distributed and collaborative manner, while resolving the data communication bottleneck often faced in IoT applications. In this talk, I will introduce some of the work we have been doing at PFN, highlight some results, and give examples of how new computing capabilities boost the value brought by deep learning.
Toru Nishikawa is the president and CEO of Preferred Networks, Inc., a Tokyo-based startup specializing in applying the latest artificial intelligence technologies to emerging problems in the Internet of Things (IoT). He was a world finalist in the ACM ICPC (International Collegiate Programming Contest) while a graduate student at the University of Tokyo. In 2006, together with his college classmates and fellow ICPC contenders, he founded Preferred Infrastructure, Inc., a precursor company. In 2014, Nishikawa founded Preferred Networks with the aim of realizing Deep Intelligence: a future IoT in which all devices, as well as the network itself, are equipped with machine intelligence. He is an ambitious entrepreneur, programmer, and father of a child.
Ian Goodfellow - Google Brain
Generative Adversarial Networks
Ian Goodfellow is a Staff Research Scientist at Google Brain. He is the lead author of the MIT Press textbook Deep Learning. In addition to generative models, he also studies security and privacy for machine learning. He has contributed to open source libraries including TensorFlow, Theano, and Pylearn2. He obtained a PhD from University of Montreal in Yoshua Bengio's lab, and an MSc from Stanford University, where he studied deep learning and computer vision with Andrew Ng. He is generally interested in all things deep learning.
Alexei Efros - UC Berkeley
Unsupervised Learning in Computer Vision
Computer vision has made great progress through the use of deep learning, trained with large-scale labeled data. However, good labeled data requires expertise and curation and can be expensive to collect. Can one discover useful visual representations without the use of explicitly curated labels? In this talk, I will present several case studies exploring the paradigm of self-supervised learning -- using raw data as its own supervision -- for tasks in computer vision and computer graphics.
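A minimal sketch of the self-supervised idea – labels manufactured from the raw data itself – is the rotation-prediction pretext task, one common example from this literature (an illustrative assumption, not necessarily a task covered in the talk):

```python
import numpy as np

def make_rotation_dataset(images):
    """Pretext task: rotate each image by 0/90/180/270 degrees and ask a
    model to predict which rotation was applied. The 'labels' come for
    free from the raw data, with no human annotation."""
    xs, ys = [], []
    for img in images:
        for k in range(4):
            xs.append(np.rot90(img, k))
            ys.append(k)  # the rotation index is the supervision signal
    return np.stack(xs), np.array(ys)
```

A network trained on this four-way classification must learn something about object structure to succeed, and its features can then transfer to downstream vision tasks.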
Alexei Efros (associate professor, UC Berkeley) works in the area of computer vision and computer graphics, especially at the intersection of the two. He is particularly interested in using data-driven techniques to tackle problems where large quantities of unlabeled visual data are readily available. He is a recipient of NSF CAREER award (2006), Sloan Fellowship (2008), Guggenheim Fellowship (2008), SIGGRAPH Significant New Researcher Award (2010), and the Helmholtz Test-of-Time Prize (2013).
Durk Kingma - OpenAI
Current deep learning mostly relies on supervised learning: given a vast number of examples of humans performing tasks such as labeling images or translating text, we can teach computers to mimic humans at these tasks. Since supervised methods focus only on modeling the task directly, however, they are not particularly efficient: they need many more examples than humans require to learn new tasks. Enter unsupervised learning, where computers model not only tasks but also their context, vastly improving data efficiency. We discuss the powerful framework of Variational Autoencoders (VAEs), a synthesis of deep learning and Bayesian methods, as a principled yet practical approach to unsupervised deep learning. In addition to the underlying mathematics, we discuss current scientific and practical applications of VAEs, such as semi-supervised learning, drug discovery, and image resynthesis.
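Two mechanical ingredients of the VAE framework mentioned above can be written down compactly: the reparameterization trick, which makes sampling differentiable, and the closed-form KL term of the evidence lower bound against a standard-normal prior. A minimal numpy sketch (illustrative only; a real VAE wraps these in trained encoder and decoder networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I): the randomness is moved
    # into eps, so gradients can flow through mu and log_var
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent
    # dimensions; this is the regularization term of the VAE objective
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)
```

The full training objective adds a reconstruction term from the decoder; maximizing the resulting lower bound trains encoder and decoder jointly.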
Diederik (or Durk) Kingma is a Research Scientist at OpenAI, with a focus on unsupervised deep learning. His research career started in 2009, while he was a student at Utrecht University, working with Prof. Yann LeCun at NYU. Since 2013, he has been pursuing a PhD with Prof. Max Welling in Amsterdam, focusing on the intersection of deep learning and Bayesian inference. Early in his PhD, he proposed the Variational Autoencoder (VAE), a principled framework for Bayesian unsupervised deep learning. His other well-known work includes Adam, a now-standard method for stochastic gradient-based optimization.
Brad Folkens - CloudSight
Your AI is Blind
The final missing piece in AI is Visual Cognition and Understanding. Realizing this dream takes more than winning scores at classifying ImageNet. We discuss our four years of experience in scaling, quality control, data management, and other important lessons learned in commercializing computer vision in the marketplace, including the procurement of the largest training dataset ever created for Visual Cognition and Understanding through Deep Learning.
Brad Folkens is the co-founder and CTO of CloudSight, where he leads the effort to build the world’s first visual cognition platform to power the future of AI. He earned his Theoretical Computer Science and Mathematics degrees with honors and immediately went to work building profitable companies, first with the award-winning PublicSalary.com HR tool, and later with his co-founder, Dominik, grossing over $1m/year in revenue.
DEEP LEARNING & ECONOMIC IMPACT
PANEL: How Will Deep Learning Change Manufacturing and Industry?
Shivon Zilis - Bloomberg
Shivon is a partner and founding member of Bloomberg Beta, a $75 million venture fund backed by Bloomberg L.P. that invests in startups transforming the future of work. Shivon is obsessed with the most important force changing work, machine intelligence. She is known for an annual report that researches thousands of machine intelligence companies and selects the most promising real-world problems they are solving. Bloomberg Beta has invested in more than 25 machine intelligence companies to date. She graduated from Yale, where she was the goalie on the ice hockey team. She is an advisor to OpenAI, a fellow at the Creative Destruction Lab, on the advisory board of University of Alberta's Machine Learning group, and a charter member of C100. She co-hosts an annual conference at the University of Toronto that brings together the foremost authors, academics, founders, and investors in machine intelligence. She was one of Forbes 30 Under 30 in Venture Capital.
Modar Alaoui - Eyeris
Vision AI for Augmented Human Machine Interaction
This session will unveil the latest vision AI technologies that ensure safe and efficient human-machine interactions in the industrial automation context. Today’s human-facing industrial AI applications lack a key element, Human Behavior Understanding (HBU), that is critical for augmenting safety and enhancing productivity. The second part of this session will detail how real-world applications can benefit from a comprehensive suite of visual behavior analytics that are readily available today.
Modar is a serial entrepreneur and an expert in AI-based vision software development. He is currently founder and CEO at Eyeris, developer of EmoVu, a deep learning-based emotion recognition software that reads facial micro-expressions. Eyeris uses Convolutional Neural Networks (CNNs) as its deep learning architecture to train and deploy its algorithms into a number of today’s commercial applications. Modar combines a decade of experience across Human Machine Interaction (HMI) and Audience Behavioral Measurement. He is a frequent keynote speaker on “Ambient Intelligence”, a winner of several technology and innovation awards, and has been featured in many major publications for his work.
Inmar Givoni - Uber ATG
Deep Learning for Self Driving Vehicles
In this talk I will cover some of the exciting and innovative deep learning technologies recently developed by the R&D team at the Uber Advanced Technologies Group in Toronto, and highlight some aspects of the engineering work required to integrate such technology into the vehicle platforms.
Inmar Givoni is an Autonomy Engineering Manager at Uber Advanced Technologies Group, Toronto, where she leads a team whose mission is to bring cutting-edge deep-learning models for self-driving vehicles from research into production. She received her PhD (Computer Science) in 2011 from the University of Toronto, specializing in machine learning, and was a visiting scholar at the University of Cambridge. She has worked at Microsoft Research, Altera (now Intel), Kobo, and Kindred in roles ranging from research scientist to VP, applying machine learning techniques to various problem domains and taking concepts from research to production systems. She is an inventor on several patents and has authored numerous top-tier academic publications in the areas of machine learning, computer vision, and computational biology. She is a regular speaker at AI events and is particularly interested in outreach activities for young women, encouraging them to choose technical career paths. For her volunteering efforts she received the 2017 Arbor Award from the University of Toronto. In 2018 she was recognized as one of Canada’s 50 inspiring women in STEM.
DEEP LEARNING SYSTEMS
Andres Rodriguez - Intel
Catalyzing Deep Learning’s Impact in the Enterprise
Deep learning is unlocking tremendous economic value across various market sectors. Individual data scientists can draw on several open source frameworks and basic hardware resources during the initial investigative phases, but quickly require significant hardware and software resources to build and deploy production models. Intel offers a range of software and hardware to support a diversity of workloads and user needs. Intel Nervana delivers a competitive deep learning platform that makes it easy for data scientists to start from the iterative, investigatory phase and take models all the way to deployment. The platform is designed for speed and scale, and serves as a catalyst for all types of organizations to benefit from the full potential of deep learning. Examples of supported applications include, but are not limited to, automotive speech interfaces, image search, language translation, agricultural robotics and genomics, financial document summarization, and finding anomalies in IoT data.
Andres Rodriguez is a deep learning solutions architect with Intel Nervana, where he designs deep learning solutions for Intel’s customers and provides technical leadership across Intel for deep learning. Andres received his PhD from Carnegie Mellon University for his research in machine learning, and prior to joining Intel, he was a deep learning research scientist with the Air Force Research Laboratory. He has over 20 peer-reviewed publications in journals and conferences, and a book chapter on machine learning.
Avidan Akerib - GSI Technology
In-Place Computing: High-Performance Search
This presentation details an in-place associative computing technology that changes the concept of computing from serial data processing—where data is moved back and forth between the processor and memory—to massive parallel data processing, compute, and search in-place directly in the main processing array. This in-place associative computing technology removes the bottleneck at the IO between the processor and memory, resulting in significant performance-over-power ratio improvement compared to conventional methods that use CPU and GPGPU (General Purpose GPU) along with DRAM. Target applications include convolutional neural networks, recommender systems for e-commerce, and data mining tasks such as prediction, classification, and clustering.
Avidan Akerib is VP of the Associative Computing business unit at GSI Technology. He holds a PhD from the Weizmann Institute of Science where he developed the theory of associative computing and applications for image processing and graphics. Avidan has over 30 years of experience in parallel computing, image processing and pattern recognition, and associative processing. He holds over 20 patents related to parallel computing and associative processing.
Conversation & Drinks - sponsored by Qualcomm
REGISTRATION & COFFEE
Nathan Benaich - Playfair Capital
I joined Playfair Capital in 2013 to focus on deal sourcing, due diligence, and helping our companies grow. I'm particularly interested in artificial intelligence and machine learning, infrastructure-as-a-service, mobile, and bioinformatics. I've led, originated or participated in Seed through Growth investments including Mapillary, Appear Here, Dojo, and Festicket.
Prior to Playfair, I earned an MPhil and PhD in oncology as a Gates and Dr. Herchel Smith Scholar at the University of Cambridge, and a BA in biology from Williams College. I've published research focused on technologies to halt the fatal spread of cancer around the body.
Will Jack - Remedy
Bringing Deep Learning to The Front Lines of Healthcare
Today’s healthcare system is not built for the seamless integration of innovations in machine learning. Health data is heavily siloed, inaccessible, and non-standardized, making it challenging to work with in machine learning systems. Additionally, deep learning techniques aren’t well suited to many healthcare problems due to the difficulty of interpreting their decisions. At Remedy, we’re building a healthcare system that captures granular data at the point of care and enables the deployment of machine learning models at the point of care. We’re also developing interpretable models to tackle tasks such as diagnosis, physician education, treatment planning, and triage.
Will is CEO of Remedy Health, a startup focused on building a healthcare system around a strong software backbone to enable the seamless integration of new innovations into care. Will is also a venture partner at Alsop Louie Partners. Prior to this Will worked in R&D on SpaceX’s internet satellite project, and studied physics and computer science at MIT. An Ohio native, Will spent his childhood developing a particle collider in his basement, using the nuclear reactions it carried out to investigate novel methods of medical imaging.
Alex Dalyac - Tractable
Thanks to deep learning, AI algorithms can now surpass human performance in image classification. However, behind these results lie tens of thousands of man-hours spent annotating images. This significantly limits commercial applications where cost and time to market are key. At Tractable, our solution centers on creating a feedback loop from learning algorithm to human, turning the latter into a “teacher” rather than a blind labeler. Dimensionality reduction, information retrieval and transfer learning are some of our core proprietary techniques. We will demonstrate a 15x labeling cost reduction on the expert task of estimating from images the cost to repair a damaged vehicle – an important application for the insurance industry.
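One common way to realize such a human-in-the-loop labeling feedback is uncertainty sampling: the model surfaces the examples it is least sure about, so the human "teacher" spends effort only where it matters. This is a generic illustration of the idea, not Tractable's proprietary method:

```python
def select_for_labeling(probs, k):
    """Pick the k unlabeled examples whose predicted probability is closest
    to 0.5, i.e. where a binary classifier is most uncertain."""
    # pair each example's distance-from-0.5 with its index, then sort
    margins = [(abs(p - 0.5), i) for i, p in enumerate(probs)]
    margins.sort()
    return [i for _, i in margins[:k]]
```

In each round, the model is retrained on the newly labeled examples and the selection repeats, so annotation effort concentrates on the decision boundary instead of being spread blindly over the whole dataset.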
Alex is Co-founder & CEO of Tractable, a young London-based startup bringing recent breakthroughs in AI to industry. Tractable's current focus is on automating visual recognition tasks. Its long term vision is to expand into natural language, robot control, and spread disruptive AI throughout industry. Tractable was founded in 2014 and is backed by $2M of venture capital from Silicon Valley investors, led by Zetta Venture Partners. Alex has a degree in econometrics & mathematical economics from the LSE, and a postgraduate degree in computer science from Imperial College London. Alex's experience within Deep Learning investing is on the receiving side, particularly on how to attract US venture capital into Europe as early as the seed stage.
Roland Memisevic - Twenty Billion Neurons
Is solving video the key next breakthrough in computer vision? We’ll discuss the key challenges in applying deep learning techniques to video understanding, including approaches to building high-quality datasets, and how annotating data for video differs from image understanding. What are the key use cases for video today and tomorrow? How do we address concerns around privacy and fears about “big brother”? Last but not least, how does video move AI closer to general intelligence and a common-sense understanding of the physical world in machine learning models?
Roland Memisevic received his PhD in Computer Science from the University of Toronto in 2008. He subsequently held positions as research scientist at PNYLab, Princeton, as post-doctoral fellow at the University of Toronto and ETH Zurich, and as junior professor at the University of Frankfurt. In 2012 he joined the MILA deep learning group at the University of Montreal as assistant professor. He has been on leave from his academic position since 2016 to lead the research efforts at Twenty Billion Neurons, a German-Canadian AI startup he co-founded. Roland is Fellow of the Canadian Institute for Advanced Research (CIFAR).
Augustin Marty - Deepomatic
Why Do Corporations Need to Own Their Data?
Augustin Marty is the CEO of Deepomatic, a company that believes artificial intelligence should be made accessible to all. To achieve this goal, Deepomatic has created a platform which enables businesses to develop their own image recognition systems. By 28, he had founded his first company in China, worked in India optimizing combustion cycles for power plants, and worked at Vinci Construction Group designing and selling engineering projects. Augustin met his co-founders just after high school; sharing the same passion for entrepreneurship, they decided to partner in early 2014 and created Deepomatic, the image intelligence company.
DEEP LEARNING APPLICATIONS IN INDUSTRY & BUSINESS
Shubho Sengupta - Facebook AI Research (FAIR)
Systems Challenges for Deep Learning
Training neural network models and deploying them in production poses a unique set of computing challenges. The ability to train large models fast allows researchers to explore the model landscape quickly and push the boundaries of what is possible with deep learning. However, a single training run often consumes several exaflops of compute and can take a month or more to finish. Similarly, some problem areas, like speech synthesis and recognition, have real-time requirements, which place a limit on how much time it can take to evaluate a model in production. In this presentation, I will talk about three systems challenges that need to be addressed so that we can continue to train and deploy rich neural network models.
Shubho is now working on AI research at FAIR. He was previously a senior research scientist at the Silicon Valley AI Lab (SVAIL) at Baidu Research.
I am an architect of the High Performance Computing-inspired training platform used to train some of the largest recurrent neural network models in the world at SVAIL. I also spend a large part of my time exploring models for both speech recognition and speech synthesis, and what it would take to train these models at scale and deploy them to hundreds of millions of our users. I am the primary author of the WarpCTC project, which is commonly used for speech recognition. Before coming to industry, I got my PhD in Computer Science from UC Davis, focusing on parallel algorithms for GPU computing, and subsequently went to Stanford for a Masters in Financial Math.
Sergey Levine - UC Berkeley
Deep Robotic Learning
Deep learning has been demonstrated to achieve excellent results in a range of passive perception tasks, from recognizing objects in images to recognizing human speech. However, extending the success of deep learning into domains that involve active decision making has proven challenging, because the physical world presents an entirely new dimension of complexity to the machine learning problem. Machines that act intelligently in open-world environments must reason about temporal relationships, cause and effect, and the consequences of their actions, and must adapt quickly, follow human instruction, and remain safe and robust. Although the basic mathematical building blocks for such systems -- reinforcement learning and optimal control -- have been studied for decades, such techniques have been difficult to extend to real-world control settings. For example, although reinforcement learning methods have been demonstrated extensively in settings such as games, their applicability to real-world environments requires new and fundamental innovations: not only does the sample complexity of such methods need to be reduced by orders of magnitude, but we must also study generalization, stability, and robustness. In this talk, I will discuss how deep learning and reinforcement learning methods can be extended to enable real-world robotic control, with an emphasis on techniques that generalize to new situations, objects, and tasks. I will discuss how model-based reinforcement learning can enable sample-efficient control, how model-free reinforcement learning can be made efficient, robust, and reliable, and how meta-learning can enable robotic systems to adapt quickly to new tasks and new situations.
Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.
Stefano Ermon - Stanford University
Machine Learning for Sustainability
Policies for sustainable development entail complex decisions about balancing environmental, economic, and societal needs. Making such decisions in an informed way presents significant computational challenges. Modern AI techniques combined with new data streams have the potential to yield accurate, inexpensive, and highly scalable models to inform research and policy. In this talk, I will present an overview of my group's research on applying computer science techniques in sustainability domains, including poverty and food security.
Stefano Ermon is an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory and the Woods Institute for the Environment. Stefano's research is centered on techniques for scalable and accurate inference in graphical models, statistical modeling of data, large-scale combinatorial optimization, and robust decision making under uncertainty, and is motivated by a range of applications, in particular ones in the emerging field of computational sustainability.
Bryan Catanzaro - NVIDIA
More than a GPU: Platforms for Deep Learning
Training and deploying state-of-the-art deep neural networks is very computationally intensive, with tens of exaflops needed to train a single model on a large dataset. The high-density compute afforded by modern GPUs has been key to many of the advances in AI over the past few years. However, researchers need more than a fast processor – they also need optimized libraries and tools to program efficiently, so that they can experiment with new ideas. They also need scalable systems that use many of these processors together to train a single model. In this talk, I’ll discuss platforms for deep learning, and how NVIDIA is working to build the deep learning platforms of the future.
Bryan Catanzaro is VP of Applied Deep Learning Research at NVIDIA, where he leads a team solving problems in fields ranging from video games to chip design using deep learning. Prior to his current role at NVIDIA, he worked at Baidu to create next-generation systems for training and deploying end-to-end deep learning based speech recognition. Before that, he was a researcher at NVIDIA, where he wrote the research prototype and drove the creation of cuDNN, the low-level library now used by most AI researchers and deep learning frameworks to train neural networks. Bryan earned his PhD from Berkeley, where he built the Copperhead language and compiler, which allows Python programmers to use nested data-parallel abstractions efficiently. He earned his MS and BS from Brigham Young University, where he worked on computer arithmetic for FPGAs.
Judy Hoffman - Stanford Computer Vision Group
A General Framework for Domain Adversarial Learning
Deep convolutional networks have provided significant advances in recent years, but progress has primarily been limited to fully supervised settings, requiring large amounts of human-annotated data for training. Recent results in adversarial adaptive representation learning demonstrate that such methods can also excel when learning in sparse/weakly labeled settings across modalities and domains. In this talk, I will present a general framework for domain adversarial learning, in which a game is played between a discriminator, which seeks to determine whether an image comes from the large labeled data source or from the new sparsely labeled dataset, and the representation, which seeks to limit the distinguishability of the two data sources. Together this game produces a model adapted for the new domain with minimal or no new human annotations.
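A standard way to implement this minimax game (as in Ganin & Lempitsky's domain-adversarial networks, not necessarily the exact method of this talk) is a gradient reversal layer: an identity in the forward pass that negates gradients in the backward pass, so one optimizer simultaneously trains the discriminator and confuses it through the representation. A minimal numpy sketch of just that mechanism, with an illustrative `LAMBDA` weight:

```python
import numpy as np

LAMBDA = 1.0  # strength of the adversarial signal (illustrative value)

def grl_forward(features):
    # Forward pass: the gradient reversal layer is the identity.
    return features

def grl_backward(upstream_grad, lam=LAMBDA):
    # Backward pass: gradients flowing back to the feature extractor are
    # negated, so while the discriminator's weights descend its domain
    # loss, the representation ascends it, limiting distinguishability.
    return -lam * upstream_grad

# Toy check on a batch of 4 two-dimensional features.
h = np.array([[0.5, -1.0], [2.0, 0.3], [-0.7, 0.1], [1.5, -0.2]])
g = np.ones_like(h)     # stand-in for the upstream domain-loss gradient
out = grl_forward(h)    # identical to h
rev = grl_backward(g)   # -LAMBDA * g: discriminator and representation
                        # receive opposite training signals
```

In a full pipeline this layer sits between the shared feature extractor and the domain discriminator; the task classifier branches off before it, so label supervision is unaffected.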
Judy Hoffman is a Postdoctoral Researcher in the Stanford Computer Vision group, working with Fei-Fei Li. Her research focuses on developing learning representations and recognition models with limited human annotations. She received her PhD in Electrical Engineering and Computer Science from University of California, Berkeley in Summer 2016, where she was advised by Trevor Darrell and Kate Saenko. She is interested in lifelong learning, adaptive methods, and adversarial models.
Chris Moody - Stitch Fix
Practical, Active, Interpretable & Deep Learning
I'll review applied deep learning techniques we use at Stitch Fix to understand our clients' personal style. Interpretable deep learning models are not only useful to scientists but also lead to better client experiences -- no one wants to interact with a black-box virtual assistant. We do this in several ways. We've extended factorization machines with variational techniques, which lets us learn quickly by finding the most polarizing examples. And by enforcing sparsity in our models, we have RNNs and CNNs that reveal how they function. The result is a dynamic machine that learns quickly and challenges our clients' style.
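As background on the factorization machines mentioned above: a second-order factorization machine scores a feature vector with a bias, a linear term, and all pairwise interactions through latent factors, and the pairwise sum can be computed in O(nk) time via a well-known algebraic identity. A minimal numpy sketch (variable names and data are illustrative, not Stitch Fix's implementation, which also adds variational extensions):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction.

    x: (n,) feature vector; w0: global bias; w: (n,) linear weights;
    V: (n, k) latent factors. The pairwise term sum_{i<j} <v_i, v_j> x_i x_j
    is computed in O(n*k) using 0.5 * sum_f [(sum_i v_if x_i)^2
    - sum_i v_if^2 x_i^2].
    """
    linear = w0 + w @ x
    xv = x @ V  # (k,): per-factor sums of v_if * x_i
    pairwise = 0.5 * np.sum(xv ** 2 - (x ** 2) @ (V ** 2))
    return linear + pairwise

def fm_predict_naive(x, w0, w, V):
    # Naive O(n^2 * k) reference: loop over all feature pairs.
    n = len(x)
    total = w0 + w @ x
    for i in range(n):
        for j in range(i + 1, n):
            total += (V[i] @ V[j]) * x[i] * x[j]
    return total

# The fast and naive forms agree on random inputs.
rng = np.random.default_rng(0)
n, k = 6, 3
x = rng.standard_normal(n)
w0, w, V = 0.5, rng.standard_normal(n), rng.standard_normal((n, k))
fast = fm_predict(x, w0, w, V)
slow = fm_predict_naive(x, w0, w, V)
```

The O(nk) form is what makes factorization machines practical on sparse, high-dimensional recommendation features.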
Chris Moody comes from a physics background (Caltech and UCSC) and is now a scientist at Stitch Fix's Data Labs. He has an avid interest in NLP and has dabbled in deep learning, variational methods, and Gaussian processes. He has contributed to the Chainer deep learning library (http://chainer.org/), contributed the super-fast Barnes-Hut version of t-SNE to scikit-learn (http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html), and written one of the few sparse tensor factorization libraries in Python (https://github.com/stitchfix/ntflib). Lately he's been working on lda2vec (https://lda2vec.readthedocs.org/en/latest/).
Stacey Svetlichnaya - Flickr
Quantifying Visual Aesthetics on Flickr
What makes an image beautiful? More pragmatically, can an algorithm distinguish high-quality photographs from casual snapshots to improve search results, recommendations, and user engagement at scale? We leverage social interaction with Flickr photos to generate a massive dataset for computational aesthetics and train machine learning models to predict the likelihood of images being of high, medium, or low quality. I will present our approach and findings, address some of the challenges of quantifying subjective preferences, and discuss applications of the aesthetics model to finding, sharing, and creating visually compelling content in an online community.
Stacey Svetlichnaya is a software engineer on the Yahoo Vision & Machine Learning team. Her recent deep learning research includes object recognition, image aesthetic quality and style classification, photo caption generation, and modeling emoji usage. She has worked extensively on Flickr image search and data pipelines, as well as automating content discovery and recommendation. Prior to Flickr, she helped develop a visual similarity search engine with LookFlow, which Yahoo acquired in 2013. Stacey holds a BS and MS in Symbolic Systems from Stanford University.
Tony Jebara - Netflix
Personalized Content/Image Selection
A decade ago, Netflix launched a challenge to predict how each user would rate each movie in our catalog. This accelerated the science of machine learning and matrix factorization. Since then, our learning algorithms and models have evolved to include multiple layers, multiple stages, and nonlinearities. Today, we use machine learning and its deep variants to rank a large catalog by determining the relevance of each of our titles to each of our users, i.e. personalized content selection. We also use machine learning to determine how best to present the top-ranked items to each user. This includes selecting the best images to display for each title just for you, i.e. personalized image selection.
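For readers unfamiliar with the matrix factorization approach the Prize era popularized, the classic recipe learns low-rank user and item factor vectors by stochastic gradient descent on squared error over the observed ratings. A minimal numpy sketch under that assumption (the data, rank, and hyperparameters are illustrative, not Netflix's):

```python
import numpy as np

def factorize(R, mask, k=2, lr=0.02, reg=0.01, epochs=200, seed=0):
    """Rank-k factorization of a partially observed rating matrix R.

    mask[u, i] is True where a rating is observed. Trained by SGD on
    squared error with L2 regularization, the classic rating-prediction
    recipe from the Netflix Prize era.
    """
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = 0.1 * rng.standard_normal((n_users, k))  # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))  # item factors
    for _ in range(epochs):
        for u in range(n_users):
            for i in range(n_items):
                if mask[u, i]:
                    err = R[u, i] - P[u] @ Q[i]
                    P[u] += lr * (err * Q[i] - reg * P[u])
                    Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Tiny example: 4 users x 3 items; zeros mark unobserved ratings.
R = np.array([[5, 3, 0], [4, 0, 1], [1, 1, 5], [0, 1, 4]], dtype=float)
mask = R > 0
P, Q = factorize(R, mask)
# RMSE on the observed entries after training.
rmse = np.sqrt(np.mean((R[mask] - (P @ Q.T)[mask]) ** 2))
```

The unobserved entries of `P @ Q.T` then serve as rating predictions; production rankers have since layered far richer models on top of this idea.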
Tony directs machine learning research at Netflix and is a professor at Columbia University, currently on sabbatical. He serves as general chair of the 2017 International Conference on Machine Learning. He has published over 100 scientific articles in the field of machine learning and has received several best paper awards.
Danny Lange - Unity Technologies
Bringing Machine Learning to Every Corner of Your Business
Have you noticed how applications seem to get smarter? Apps make recommendations based on past purchases; you get an alert from your bank when they suspect a fraudulent transaction; and you receive emails from your favorite store when items related to things you typically buy are on sale. These examples of application intelligence use a technology called Machine Learning. Machine Learning uses algorithms to detect patterns in old data and build models that can be used to make predictions from new data. Understanding the algorithms behind Machine Learning is difficult, and running the infrastructure needed to build accurate models and use them at scale is very challenging. At Uber, we built a Machine Learning service that allows teams to easily embed intelligence into their applications, performing important functions such as ETA prediction, fraud detection, churn prediction, demand forecasting, and much more.
Dr. Danny B. Lange is VP of AI and Machine Learning at Unity Technologies. Previously, he was Head of Machine Learning at Uber, where he led an effort to build a versatile Machine Learning platform to support Uber's rapid growth; with the help of this branch of Artificial Intelligence, including Deep Learning, Uber can provide an even better service to its customers. Before Uber, Danny was the General Manager of Amazon Machine Learning, an AWS product that offers Machine Learning as a Service. Prior to Amazon, Danny was Principal Development Manager at Microsoft, where he led a product team focused on large-scale Machine Learning for Big Data. Danny has a Ph.D. in Computer Science from the Technical University of Denmark.
END OF SUMMIT
Shohei Hido - Preferred Networks
Shohei Hido received an M.S. in Informatics from Kyoto University, Japan, in 2006. He then worked at IBM Research in Tokyo for six years as a staff researcher in machine learning and its applications to industry. After joining Preferred Infrastructure, Inc. in 2012, he led the Jubatus project, an open source software framework for real-time, streaming machine learning. Now Chief Research Officer of Preferred Networks, the spin-off company, he is responsible for Deep Intelligence in-Motion, a software platform for using deep learning in IoT applications.