
REGISTRATION & LIGHT BREAKFAST

WELCOME
THEORY & APPLICATIONS


David Cox - Director MIT-IBM Watson AI Lab - IBM Research AI
What’s Next in Deep Learning & AI
David Cox - IBM Research AI
What’s Next in Deep Learning & AI
Deep learning has unquestionably ignited a revolution in machine learning and artificial intelligence, enabling a wide range of applications across many industries. At the same time, today’s deep learning has important limitations that temper its applicability to many real-world problems. This talk will cover some of the research being done in the MIT-IBM Watson AI Lab, a new joint lab between MIT and IBM, founded with a $240M, 10-year commitment from IBM. The lab is focused on fundamental advances in AI to break down key barriers on the journey toward broadly applicable AI.
David Cox is the IBM Director of the MIT-IBM Watson AI Lab, a first-of-its-kind industry-academic collaboration between IBM and MIT, focused on fundamental research in artificial intelligence. The Lab was founded with a $240M, 10-year commitment from IBM and brings together researchers at IBM with faculty at MIT to tackle hard problems at the vanguard of AI.
Prior to joining IBM, David was the John L. Loeb Associate Professor of the Natural Sciences and of Engineering and Applied Sciences at Harvard University, where he held appointments in Computer Science, the Department of Molecular and Cellular Biology, and the Center for Brain Science. David's ongoing research is primarily focused on bringing insights from neuroscience into machine learning and computer vision research. His work has spanned a variety of disciplines, from imaging and electrophysiology experiments in living brains, to the development of machine learning and computer vision methods, to applied machine learning and high-performance computing methods.
David is a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard Law School and is an Agenda Contributor at the World Economic Forum. He has received a variety of honors, including the Richard and Susan Smith Foundation Award for Excellence in Biomedical Research, the Google Faculty Research Award in Computer Science, and the Roslyn Abramson Award for Excellence in Undergraduate Teaching. He led the development of "The Fundamentals of Neuroscience" (http://fundamentalsofneuroscience.org), one of Harvard's first massive open online courses, which has drawn over 750,000 students from around the world. His academic lab has spawned several startups in industries ranging from AI for healthcare to autonomous vehicles.




Sara Hooker - Research Scholar - Google Brain
Frontiers of Computer Vision: Beyond Accuracy
Sara Hooker - Google Brain
Sara Hooker is a research scholar at Google Brain doing deep learning research on reliable explanations of model predictions for black-box models. Her main research interests gravitate towards interpretability, predictive uncertainty, model compression, and security. In 2014, she founded Delta Analytics, a non-profit dedicated to building technical capacity so that non-profits around the world can use machine learning for good. She grew up in Africa, in Mozambique, Lesotho, Swaziland, South Africa, and Kenya. Her family now lives in Monrovia, Liberia.




Brendan Frey - Co-Founder & CEO and Professor - Deep Genomics & University of Toronto
How Deep Learning is Transforming Drug Discovery
Brendan Frey - Deep Genomics & University of Toronto
How Deep Learning is Transforming Drug Discovery
Brendan Frey, CEO and Founder of Deep Genomics, will explain how AI did most of the heavy lifting in obtaining the company's first therapeutic candidate. This included discovering novel biology, designing novel compounds, prioritizing compounds by predicted potency and toxicity, creating animal models, designing animal studies and designing the clinical trial. Their AI technology is enabling Deep Genomics to explore an expanding universe of genetic therapies, and to advance novel drug candidates more rapidly and with a higher rate of success than was previously possible.



COFFEE
DEEP LEARNING OPTIMIZATION


Adam Oberman - Professor - McGill University
Making Deep Neural Networks More Robust
Adam Oberman - McGill University
Making Deep Neural Networks More Robust
Deep Neural Networks are very accurate at classifying images, but they lack the robustness guarantees of traditional, but less effective, machine learning techniques. In particular, they are vulnerable to adversarial examples, and their predictions do not come with confidence estimates. The lack of robustness is an obstacle to using these models in domains where errors can be costly. In this talk, we will show how mathematical tools that have been very popular in image processing can be adapted to give state-of-the-art robustness, as well as robustness guarantees, for neural network models.
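As a flavor of what such a guarantee can look like, here is one standard form of certificate (illustrative background, not necessarily the specific result presented in this talk): if every logit of a classifier $f$ is Lipschitz with constant $L$, then the prediction at an input $x$ with predicted class $y$ provably cannot be changed by any perturbation $\delta$ satisfying

\[
\|\delta\|_2 \;<\; \frac{f_y(x) - \max_{j \neq y} f_j(x)}{2L},
\]

i.e. the classification margin divided by twice the Lipschitz constant yields a certified robustness radius.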
Adam Oberman is a professor in the Department of Mathematics and Statistics at McGill University. He received his bachelor's degree from the University of Toronto and his PhD from the University of Chicago, and was previously faculty at Simon Fraser University. His research prior to 2017 was on partial differential equations, scientific computing, and optimal transportation. During a Simons Fellowship at UCLA, he started a project applying PDEs to deep learning and is now working on adversarial robustness for DNNs.




Graham Taylor - Associate Professor - University of Guelph
Low-Precision Learning: Heterogeneity and Adversarial Robustness
Graham Taylor - University of Guelph
Low-Precision Learning: Heterogeneity and Adversarial Robustness
This talk will overview some recent results from my lab concerning deep learning with low-precision activations, weights, and biases. I will highlight two different ways of learning networks with heterogeneous precision, that is, nets where different layers use different amounts of precision, optimized for the task. Low-precision nets are already known for their resource efficiency, speed, and regularization effects. I will also present results showing that binary neural networks are robust to certain forms of adversarial attacks.
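For readers new to the area, the low-precision regime rests on a standard trick: quantize (e.g. binarize) in the forward pass but let gradients flow through as if the quantizer were the identity. A minimal PyTorch sketch of this straight-through estimator follows; it is generic background, not the heterogeneous-precision methods of the talk.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # forward pass uses binarized values

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Straight-through: pass gradients where |x| <= 1, block elsewhere.
        return grad_output * (x.abs() <= 1).float()

w = torch.randn(4, 4, requires_grad=True)  # real-valued "shadow" weights
w_bin = BinarizeSTE.apply(w)               # binarized weights used in the net
loss = w_bin.sum()                         # stand-in for a real training loss
loss.backward()                            # gradients reach the real-valued w
print(w.grad)
```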
Graham Taylor is a Canada Research Chair and Associate Professor at the University of Guelph, where he leads the Machine Learning Research Group. He is the academic director of NextAI, a non-profit initiative to strengthen Canada's AI venture creation, and a member of the Vector Institute for Artificial Intelligence. In 2016 he was named a CIFAR Azrieli Global Scholar in Learning in Machines and Brains. In 2018, he was named one of Canada's Top 40 Under 40. Born in London, Ontario, he received his PhD in Computer Science from the University of Toronto in 2009, where he was advised by Geoffrey Hinton and Sam Roweis. He spent two years as a postdoc at the Courant Institute of Mathematical Sciences, New York University, working with Chris Bregler, Rob Fergus, and Yann LeCun. Through his research, Graham aims to discover new algorithms and architectures for deep learning. His work also intersects high-performance computing, investigating better ways to leverage hardware accelerators to cope with the challenges of large-scale machine learning. He is currently Visiting Faculty at Google Brain, Montreal.



Roger Grosse - Assistant Professor - University of Toronto
Scalable Natural Gradient Training of Deep Neural Networks
Roger Grosse - University of Toronto
Scalable Natural Gradient Training of Deep Neural Networks
Neural networks have recently driven significant progress in machine learning applications as diverse as vision, speech, and text understanding. Despite much engineering effort to boost the computational efficiency of neural net training, most networks are still trained using variants of stochastic gradient descent. Natural gradient descent, a second-order optimization method, has the potential to speed up training by correcting for the curvature of the loss function. Unfortunately, the exact natural gradient is impractical to compute for large networks because it requires solving a linear system involving the Fisher matrix, whose dimension may be in the millions for modern neural network architectures. The key challenge is to develop approximations to the Fisher matrix which are efficiently invertible, yet accurately reflect its structure.
The Fisher matrix is the covariance of log-likelihood derivatives with respect to the weights of the network. I will present techniques to approximate the Fisher matrix using structured probabilistic models of the computation of these derivatives. Using probabilistic modeling assumptions motivated by the structure of the computation graph and empirical analysis of the distribution over derivatives, I derive approximations to the Fisher matrix which allow for efficient approximation of the natural gradient. The resulting optimization algorithm is invariant to some common reparameterizations of neural networks, suggesting that it automatically enjoys the computational benefits of these reparameterizations. I show that this method gives significant speedups in the training of neural nets for image classification and reinforcement learning.
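In symbols, using standard definitions consistent with the abstract, the Fisher matrix and the natural gradient update are

\[
F(\theta) = \mathbb{E}\!\left[\nabla_\theta \log p(y \mid x, \theta)\, \nabla_\theta \log p(y \mid x, \theta)^{\top}\right],
\qquad
\theta \leftarrow \theta - \alpha\, F(\theta)^{-1} \nabla_\theta \mathcal{L}(\theta),
\]

and the approximations discussed here replace $F$ with a structured, efficiently invertible surrogate, for example a Kronecker-factored block per layer, $F_\ell \approx A_\ell \otimes G_\ell$ (as in K-FAC, one such approximation).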
Roger is an Assistant Professor of Computer Science at the University of Toronto, focusing on machine learning. Previously, he was a postdoc at Toronto, after having received a Ph.D. at MIT, studying under Bill Freeman and Josh Tenenbaum. Before that, he completed an undergraduate degree in Symbolic Systems and an MS in Computer Science at Stanford University. He is also a co-creator of Metacademy, a website that helps users formulate personalized learning plans for machine learning and related topics, based on a dependency graph of the core concepts. He recently taught an undergraduate neural networks course at the University of Toronto.



LUNCH
REINFORCEMENT LEARNING


Matt Taylor - Research Director - Borealis AI
Learning Sequential Tasks from Human Feedback
Matt Taylor - Borealis AI
Learning Sequential Tasks from Human Feedback
Virtual agents and physical robots need to be able to learn so that they can perform novel or unanticipated tasks. Reinforcement learning is a powerful framework that allows agents to learn to maximize an environmental reward, but there are many cases where no environmental reward signal is present. However, we know that people are able to teach via evaluative feedback: consider all of the impressive tasks that dogs can learn to accomplish! This talk will discuss learning algorithms that rely on non-technical user feedback to train agents to perform sequential decision tasks in a variety of settings.
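To give a flavor of this setting, here is a minimal sketch in the spirit of TAMER-style methods (a hypothetical simplification, not necessarily an algorithm covered in the talk): the agent learns a model H of the human's evaluative feedback and acts greedily on it, with no environmental reward at all.

```python
import random
from collections import defaultdict

H = defaultdict(float)  # estimated human feedback for each (state, action)
alpha = 0.1             # learning rate

def act(state, actions, epsilon=0.1):
    """Mostly pick the action the human is predicted to approve of."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: H[(state, a)])

def update(state, action, feedback):
    """feedback: e.g. +1 ("good robot") or -1 ("bad robot") from the user."""
    H[(state, action)] += alpha * (feedback - H[(state, action)])

# Toy interaction loop: a non-technical user gives +1/-1 after each step.
state = "start"
for _ in range(5):
    a = act(state, ["left", "right"])
    update(state, a, feedback=+1 if a == "right" else -1)
print(dict(H))
```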
Matt is currently Research Director of the Edmonton Borealis AI research center and holds adjunct appointments at Washington State University and the University of Alberta. He received his doctorate from the University of Texas at Austin in 2008, supervised by Peter Stone. Matt then completed a two-year postdoctoral research position at the University of Southern California with Milind Tambe. Since then, he has held positions at Lafayette College and Washington State University, where he held the Allred Distinguished Professorship in Artificial Intelligence. His current research interests include intelligent agents, human-agent interaction, multi-agent systems, reinforcement learning, and robotics.




Mohammad Norouzi - Senior Research Scientist - Google Brain
Reinforcement Learning Meets Sequence Prediction
Mohammad Norouzi - Google Brain
Reinforcement Learning Meets Sequence Prediction
Neural sequence-to-sequence models have seen remarkable success across a range of tasks including machine translation and speech recognition. I will give an overview of the dominant approach to supervised sequence learning using neural networks. Then, I will present optimal completion distillation (OCD) -- a new approach for training sequence models based on their own mistakes. Given a partial sequence generated by a model, OCD identifies the set of optimal suffixes and, accordingly, teaches the model to optimally extend each prefix. OCD achieves state-of-the-art performance on standard end-to-end speech recognition benchmarks. In the second half of the talk, I will focus on sequence modeling tasks that involve discovering latent programs as part of the optimization. I will present our approach, called memory augmented policy optimization (MAPO), which improves upon REINFORCE by expressing the expected-return objective as a weighted sum of two terms: an expectation over a memory of high-reward trajectories, and a separate expectation over the trajectories outside of the memory. MAPO achieves state-of-the-art results on standard semantic parsing datasets.
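The MAPO decomposition mentioned above is a law-of-total-expectation identity (notation ours, not necessarily the paper's):

\[
\mathbb{E}_{a \sim \pi_\theta}[R(a)]
\;=\; w\, \mathbb{E}_{a \sim \pi_\theta(\cdot \mid a \in \mathcal{B})}[R(a)]
\;+\; (1 - w)\, \mathbb{E}_{a \sim \pi_\theta(\cdot \mid a \notin \mathcal{B})}[R(a)],
\qquad
w = \sum_{a \in \mathcal{B}} \pi_\theta(a),
\]

where $\mathcal{B}$ is the memory of high-reward trajectories. The first term can be computed almost exactly by enumerating the memory, which reduces the variance of the sampled REINFORCE estimate of the second term.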
Mohammad Norouzi is a senior research scientist at Google Brain in Toronto. His research lies at the intersection of deep learning, natural language processing, and computer vision. His current research focuses on learning statistical models of sequential data and advancing reinforcement learning algorithms and applications. He earned his PhD in computer science at the University of Toronto under the supervision of Prof. David Fleet, working on scalable similarity search algorithms. He was a recipient of the prestigious Google US/Canada PhD Fellowship in machine learning.



Shane Gu - Research Scientist - Google Brain
Deep Reinforcement Learning Toward Robotics
Shane Gu - Google Brain
Predictability Maximization: Empowerment As An Intelligence Measure
Intelligence is often associated with the ability to optimize the environment for maximizing one's objectives (e.g. survival). In particular, the ability to predictably change the environment -- empowerment -- is an essential skill that allows agents to efficiently achieve many goals. In this talk, I will discuss empowerment from multiple perspectives, including model-based and classic goal-based RL, and relate it to classic and recently-proposed definitions and measures of intelligence.
Key Takeaways:
Empowerment = mutual information between actions and future states (see the formalization after this list)
Maximizing empowerment = maximizing diversity of futures achievable given all actions + maximizing predictability of the future given each possible action
Empowerment could be a more direct measure of general intelligence
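In symbols, a standard formalization of these takeaways (not necessarily the speaker's exact notation): the empowerment of a state $s$ is the channel capacity from actions to futures,

\[
\mathcal{E}(s) \;=\; \max_{p(a)} I(A; S' \mid s)
\;=\; \max_{p(a)} \Big[ H(S' \mid s) \;-\; H(S' \mid A, s) \Big],
\]

where the first entropy term rewards diversity of achievable futures and the subtracted term rewards predictability of the future given each chosen action.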
Shane Gu is a Research Scientist at Google Brain, where he mainly works on problems in deep learning, reinforcement learning, robotics, and probabilistic machine learning. His recent research focuses on sample-efficient RL methods that could scale to solve difficult continuous control problems in the real world, work that has been covered by the Google Research blog and MIT Technology Review. He completed his PhD in Machine Learning at the University of Cambridge and the Max Planck Institute for Intelligent Systems in Tübingen, where he was co-supervised by Richard E. Turner, Zoubin Ghahramani, and Bernhard Schölkopf. During his PhD, he also collaborated closely with Sergey Levine at UC Berkeley/Google Brain and Timothy Lillicrap at DeepMind. He holds a BASc in Engineering Science from the University of Toronto, where he did his thesis with Geoffrey Hinton on distributed training of neural networks using evolutionary algorithms.


VIDEO UNDERSTANDING

PANEL: Understanding the World Through Video (sponsored by M12)
Roland Memisevic - Twenty Billion Neurons
Is solving video the key next breakthrough in computer vision? We’ll discuss the key challenges in applying deep learning techniques to video understanding, including approaches to building high-quality datasets, since annotating data for video is quite different from annotating it for image understanding. What are the key use cases for video today and tomorrow? How do we address concerns around privacy and fears about “big brother”? Last but not least, how does video advance the field of AI towards general intelligence and a common-sense understanding of the physical world in machine learning models?
Roland Memisevic received his PhD in Computer Science from the University of Toronto in 2008. He subsequently held positions as a research scientist at PNYLab, Princeton, as a post-doctoral fellow at the University of Toronto and ETH Zurich, and as a junior professor at the University of Frankfurt. In 2012 he joined the MILA deep learning group at the University of Montreal as an assistant professor. He has been on leave from his academic position since 2016 to lead the research efforts at Twenty Billion Neurons, a German-Canadian AI startup he co-founded. Roland is a Fellow of the Canadian Institute for Advanced Research (CIFAR).


Graham Taylor - University of Guelph
Graham Taylor is a Canada Research Chair and Associate Professor at the University of Guelph where he leads the Machine Learning Research Group. He is the academic director of NextAI, non-profit initiative to strengthen Canada's AI venture creation and a member of the Vector Institute for Artificial Intelligence. In 2016 he was named a CIFAR Azrieli Global Scholar in Learning in Machines and Brains. In 2018, he was named one of Canada's Top 40 under 40. Originally born in London, Ontario, he received his PhD in Computer Science from the University of Toronto in 2009, where he was advised by Geoffrey Hinton and Sam Roweis. He spent two years as a postdoc at the Courant Institute of Mathematical Sciences, New York University working with Chris Bregler, Rob Fergus, and Yann LeCun. Through his research, Graham aims to discover new algorithms and architectures for deep learning. His work also intersects high performance computing, investigating better ways to leverage hardware accelerators to cope with the challenges of large-scale machine learning. He is currently Visiting Faculty at Google Brain, Montreal.

Tegan Maharaj - MILA
A senior PhD student at the Montreal Institute for Learning Algorithms (MILA), Tegan's academic research has focused on understanding multimodal data with deep models, particularly for time-dependent data. At the practical end, Tegan has developed datasets and models for video and natural language understanding, and worked on using deep models for predicting extreme weather events. On the more theoretical side, her work examines how data influence learning dynamics in deep and recurrent models. Tegan is concerned and passionate about AI ethics, safety, and the application of ML to environmental management, health, and social welfare.


Evan Nisselson - LDV Capital
Evan Nisselson invests in early-stage companies via LDV Capital. He is a serial entrepreneur, professional photographer, and digital media expert, active since the early 1990s. He started making pictures with his Nikon FM at 13 years old. He thrives on helping build teams and businesses that leverage technology to entertain, increase efficiency, and solve problems. Evan's international expertise ranges from assisting technology startups in raising capital to business development, marketing, content development, recruiting, and product development.



COFFEE
Your Future in Deep Learning - Talent & Talk Session
PLENARY SESSION


Geoffrey Hinton - Professor - University of Toronto
Deep Learning with Geoffrey Hinton
Geoffrey Hinton - University of Toronto
Geoffrey Hinton designs machine learning algorithms. His aim is to discover a learning procedure that is efficient at finding complex structure in large, high-dimensional datasets and to show that this is how the brain learns to see. He was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts, and deep belief nets. His research group in Toronto made major breakthroughs in deep learning that have revolutionized speech recognition and object classification.


Conversation & Drinks until 6:30pm - Sponsored by M12 & SVB


Networking Mixer - Sponsored by Honda Xcelerator - The Loose Moose
Networking & Drinks - from 7pm
Networking Mixer - Sponsored by Honda Xcelerator - The Loose Moose
Continue discussions and networking from the event by joining attendees at a Networking Mixer hosted by Honda Xcelerator at The Loose Moose, located at 146 Front St W, Toronto, ON M5J 1G2, less than a 5-minute walk from the event venue.

DOORS OPEN

WELCOME
STARTUP SESSION
David Julian - Netradyne
Autonomously Generated HD Maps
High-Definition (HD) maps are a key component for autonomous vehicles. However, cost estimates run to $2 billion to map just the US once using special LIDAR mapping vehicles, and given the dynamic nature of the roadway system, frequent, cost-prohibitive updates are needed. Netradyne Drive-I provides an innovative, inexpensive, scalable, low-latency HD mapping solution, using computer vision at the edge and crowdsourcing across commercial vehicles to autonomously generate HD maps. This autonomous crowdsourcing allows for multiple updates of road conditions and layouts per day, including road changes due to construction, accidents, and other dynamic changes.
David is the CTO of Netradyne, an edge computing AI company. Before co-founding Netradyne, David worked at NASA’s Jet Propulsion Laboratory (JPL) on Galileo and Cassini deep space missions. He was later a Principal Engineer in Qualcomm Research, where he was awarded over 100 US patents covering a wide range of technologies, and started and led several R&D efforts, including the Qualcomm Zeroth Deep Learning Team. David has a BSEE from New Mexico State University, and an MS and PhD in Electrical Engineering from Stanford.




Tzvi Aviv - Founder & CEO - AgriLogicAI
Geospatial Intelligence for Profitability & Sustainability in the Agri-food Sector
Tzvi Aviv - AgriLogicAI
Loss Prediction in Crop Insurance
AgriLogicAI helps farmers and crop insurance companies mitigate weather and climate-change risks by applying deep learning and machine learning to satellite images and historical farm data. We predict grain yields within season across areas ranging from single fields to entire states. Currently, we are utilizing data from thousands of corn farms across Indiana to predict crop yields and loss risks using satellite imagery, weather, and soil data. Our technology can improve actuarial risk models, automate onboarding, automate claims processes, and improve financial planning in crop insurance companies.
Tzvi Aviv is an entrepreneurial scientist and innovation consultant based in Toronto. He founded AgriLogicAI to develop and commercialize artificial intelligence and machine learning software for profitability and sustainability in the agri-food sector. Tzvi has won many awards for his work, including seed funding from Next Canada and awards from agricultural and pharmaceutical companies. Prior to AgriLogicAI, he managed a drug development project at the Hospital for Sick Children and received a PhD in Medical Genetics from the University of Toronto. Additionally, he received an MBA from Ryerson University with a focus on the management of innovation and technology.




Oren Kraus - Co-Founder - Phenomic AI
Automated Analysis of Microscopy Image Based Drug Screens with Deep Multiple Instance Learning
Oren Kraus - Phenomic AI
Automated Analysis of Microscopy Image Based Drug Screens with Deep Multiple Instance Learning
High-content screening (HCS) technologies have enabled large-scale microscopy imaging experiments for studying cell biology and for drug screening. These systems produce hundreds of thousands of microscopy images per day, and their utility depends on automated image analysis. Our team developed deep learning approaches that learn feature representations directly from pixel intensity values, rather than relying on existing analysis techniques built on segmentation and feature-extraction pipelines. Because most deep learning pipelines typically assumed a single centered object per image, these methods were not directly applicable to microscopy datasets. We developed a segmentation-free approach that combines deep convolutional neural networks (CNNs) with multiple instance learning (MIL) in order to classify and segment microscopy images using only whole-image (i.e., treatment- or well-level) annotations. Since developing this method, we’ve built a scalable platform around it and successfully applied the method to numerous assays, including proteome-wide genetic screens in yeast, and drug screening and profiling assays in both fibroblast and cancer cell lines. We’ve also built a secondary weakly-supervised workflow that allows researchers to cluster and visualize treatments in HCS screens by how phenotypically similar they are. In the presentation, we’ll introduce these more recent results and describe how the two workflows, enabled by our platform, can be used to analyze almost any HCS screen automatically.
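To make the MIL idea concrete, here is a minimal PyTorch sketch (illustrative only; Phenomic AI's actual pooling and architecture may differ) in which a small CNN scores each cell crop and a permutation-invariant max-pool turns the instance scores into one bag-level prediction trained with whole-image labels:

```python
import torch
import torch.nn as nn

class MILClassifier(nn.Module):
    """Bag-level classifier trained with only whole-image annotations."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, bag):                  # bag: (n_instances, 3, H, W)
        instance_logits = self.encoder(bag)  # one score vector per instance
        # Max over instances: the bag is positive for a class if any
        # single instance is (one simple permutation-invariant pooling).
        return instance_logits.max(dim=0).values

model = MILClassifier()
bag = torch.randn(8, 3, 64, 64)   # e.g. 8 cell crops from one well
bag_logits = model(bag)           # trained against the well-level label only
```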




Nargiz Mammadova - Founder & CEO - Destin AI
How Artificial Intelligence is Impacting the Future of Immigration
Nargiz Mammadova - Destin AI
How Artificial Intelligence is Impacting the Future of Immigration
Imagine a world where you can travel to any destination without worrying about how to get your visa. Imagine interacting with a government website whose AI-powered virtual assistant responds immediately to any visa-related question, tells you within five minutes which documents you need, and even estimates the likelihood of your visa being approved before you apply. We will explore all this and much more together.
Nargiz is Founder & CEO of Destin AI. Together with her diverse team, she is disrupting the immigration field through artificial intelligence. Her curiosity about technology and entrepreneurship comes from her entrepreneurial family. She is an exceptional leader with strong intuition, creativity, an ability to connect the dots, and people skills. Prior to launching Destin AI, she spent seven years helping businesses grow in the technology, media, and branding industries. Nargiz received her executive education in three countries - Switzerland, Japan, and Canada - and holds a Master of International Business degree from Queen’s University.



COFFEE
DEEP LEARNING SYSTEMS


Kaheer Suleman - Principal Research Manager - Microsoft Research
Teaching Machines to Read
Kaheer Suleman - Microsoft Research
Teaching Machines to Read
For human beings, reading comprehension is a basic task, performed daily. Recently, we have seen growing interest in Machine Reading Comprehension (MRC) due to potential enterprise applications as well as technological advances, including the availability of various MRC datasets like SQuAD, NewsQA, MS MARCO, and others. These datasets have inspired novel, attention-based architectures that can learn sophisticated matching techniques but lack the ability for true comprehension. This talk will provide an overview of the current state of the art for MRC, the limitations preventing widespread usage in real-world applications, and the challenges to overcome to progress towards full reading comprehension.
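For context, the attention-based "matching" these architectures share can be illustrated in a few lines; a toy PyTorch sketch follows (shapes and names are purely illustrative, not any specific MRC model):

```python
import torch
import torch.nn.functional as F

# Toy question-to-passage attention, the core "matching" step that most
# MRC architectures build on (shapes and names are illustrative only).
passage = torch.randn(50, 128)    # 50 passage tokens, 128-d encodings
question = torch.randn(12, 128)   # 12 question tokens, 128-d encodings

scores = passage @ question.T             # (50, 12) similarity matrix
attn = F.softmax(scores, dim=1)           # attend over question tokens
question_aware = attn @ question          # (50, 128) per-passage-token view
# Downstream layers consume question_aware to predict an answer span.
```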
Kaheer is a Principal Research Program Manager at the Microsoft Research Montreal lab. Kaheer co-founded the deep learning for language startup Maluuba and served as its CTO prior to its acquisition by Microsoft in early 2017. He currently works on machine learning approaches for natural language processing, focusing on question answering, conversation systems, and common-sense reasoning. Prior to Maluuba, he attended the University of Waterloo, where he received a Master's degree in Computer Science focusing on information extraction.




Phil Brown - WW Field Engineering Manager - Graphcore
Exploring the relationship between computational architecture and deep learning algorithms
Phil Brown - Graphcore
Exploring the relationship between computational architecture and deep learning algorithms
This talk will look at how deep learning algorithms have evolved alongside contemporary computational architectures. It will touch on how architectural characteristics can influence the feasibility of different algorithmic approaches and how changing architectural choices can enable new innovation.
Phil leads Graphcore’s Field Engineering team, which acts as the focal point for technical engagements with customers. Prior to joining Graphcore, Phil worked for Cray Inc. in a number of roles, including leading their engagement with weather forecasting and climate research customers worldwide and serving as a technical architect. Phil holds a PhD in Computational Chemistry from the University of Bristol.


DEEP LEARNING APPLICATIONS
Tomi Poutanen - TD Bank
Tomi Poutanen is co-founder and co-CEO of Layer 6 AI (layer6.ai), which offers the world’s most accurate prediction engine for enterprise data and was recently acquired by TD Bank. Layer 6 AI was the first company to offer clients a prediction engine powered by a real-time deep learning framework. Layer 6 AI helps banks, media, cable/telco, and ecommerce companies leverage all of their data to predict customer needs and personalize each customer experience. Tomi is a serial tech entrepreneur with exits to Microsoft and Yahoo!; the acquired software powers Azure’s cloud storage and the search ranking algorithm of Bing and Yahoo Web Search. While at Yahoo, Tomi built a novel $200M search advertising program, Paid Inclusion, and launched Yahoo! Answers. Tomi is also a founder of the Vector Institute for Artificial Intelligence and a founding Fellow of the Creative Destruction Lab, the world’s leading AI venture accelerator. Tomi holds an MSc in Computer Engineering and an MBA from the University of Toronto.



LUNCH


Vincent Vanhoucke - Principal Scientist - Google
Robot Perception: Breaking the Data Barrier
Vincent Vanhoucke - Google
Robot Perception: Breaking the Data Barrier
Machine perception has made enormous progress, largely thanks to the availability of large amounts of data made available on the web. The efficacy of this data for the purpose of robotic vision is, however, severely limited due to its lack of grounding, a comparative dearth of 3D and multimodal data, and the sensitivity of most successful vision algorithms to domain shift. In this talk I'll tackle the problem of learning robotic perception with a focus on addressing the data problem. I'll explore leveraging multi-modality, self-supervision, simulation, active learning, domain transfer, and meta-learning, and demonstrate practical ways to improve the sample efficiency of perception algorithms in embodied settings.
Vincent Vanhoucke is a principal scientist in the Google Brain team, and director for Google's robotics research effort. His research has spanned many areas of artificial intelligence and machine learning, from speech recognition to deep learning, computer vision, and robotics. He holds a doctorate from Stanford University and a diplôme d'ingénieur from the École Centrale Paris.




William Brendel - Senior Research Scientist - Snap Research
Deep Learning: Challenges from Hundreds of Millions of Users
William Brendel - Snap Research
Deep Learning: Challenges from Hundreds of Millions of Users
The amount of data created daily on social media is unprecedented, yet making it accessible poses challenges never faced before. Starting with Snapchat as a use case, this talk goes through those challenges and gives some technical as well as fundamental directions on how the AI of tomorrow will solve them.
William Brendel is a Senior Research Scientist at Snap Research, where he is tackling machine learning problems from computer vision to natural language processing, creating new mathematical optimization frameworks, and tackling the challenging task of learning how to learn. With seven years of academic research and more than eight years of hands-on research, engineering, and program/product management experience at top companies in the USA and Europe (Snapchat, Amazon/A9, Google), he is proficient in machine learning, computer vision, and mathematics. Currently 90% scientist and 10% program manager, 200% Snaper. Passionate about science, cutting-edge real-time server-side and mobile technologies (including deep learning systems), and product design and management.




Karry Lu - Data Scientist - WeWork
Powered by Machine Learning: Recommendation Systems at WeWork
Karry Lu - WeWork
Powered by Machine Learning: Recommendation Systems at WeWork
Recommendation algorithms aim to improve the user experience and drive engagement by delivering personalized content, such as music, retail products, and social content. Team Rex at WeWork was formed to deploy machine learning applications to help our members create their life's work, and bridge the gap between their digital and physical experiences. One of our first products is a personalized newsfeed designed to surface the most relevant user-generated content for each member, such as posts, events and promotions. This application is powered by a suite of recommendation models embedded within a novel multi-armed bandit-based experimentation platform. This framework allows for continuous feedback loops and more granular optimizations compared to a standard A/B testing framework, as well as the ability to leverage a universe of adaptive NLP-based and collaborative filtering models.
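As an illustration of the bandit layer, here is a generic Thompson-sampling sketch (WeWork's actual platform is not public; the arm names and feedback loop below are hypothetical):

```python
import random

# Each "arm" is one candidate recommendation model; a Beta(successes,
# failures) posterior tracks its click-through performance.
arms = {"collab_filter": [1, 1], "nlp_model": [1, 1], "popularity": [1, 1]}

def choose_arm():
    # Thompson sampling: draw a plausible CTR from each posterior,
    # then serve content from the arm with the highest draw.
    samples = {a: random.betavariate(s, f) for a, (s, f) in arms.items()}
    return max(samples, key=samples.get)

def record(arm, clicked):
    s, f = arms[arm]
    arms[arm] = [s + int(clicked), f + int(not clicked)]

for _ in range(1000):                        # simulated feedback loop
    arm = choose_arm()
    record(arm, clicked=random.random() < 0.1)

print(arms)
```

Unlike a fixed A/B split, the posteriors update continuously and shift traffic toward better-performing models.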
Karry Lu is a senior data scientist at WeWork with interests in recommendation systems, NLP, and Bayesian inference. In previous lives, he has led machine learning at a (successfully exited!) foodtech startup, and fought crime for the feds with the power of econometrics. Even more previous lives include relapsed statistician, community organizer and failed novelist.



PANEL: What is the Biggest Challenge to Capturing ROI with AI?
Samuel Couture Brochu - XpertSea
As Chief Technology Officer of XpertSea, Samuel Couture Brochu oversees their technology platform and roadmap, IP and patents, technical recruitment, and day-to-day engineering operations and management. His passion is building great teams that can reach new heights with scalable solutions using exciting technologies. Through XpertSea's journey into deep learning and AI, he experienced how exciting, revolutionary, but also overwhelming machine learning can be for any business. Before joining XpertSea, Samuel worked at ABB on remote sensing, computer vision, space, and defense projects. He has won several awards in engineering, robotics, AI, and computer science competitions - not surprising, given that he dismantled his first computer when he was 9. His interests include guitar, golf, robotics, blockchain, and scotch. Samuel holds a Bachelor of Engineering degree in Computer Software from Laval University.


Maithili Mavinkurve - Sightline Innovation
The Commercialization of Deep Learning
The world has not seen a more disruptive and powerful technology since the inception of the internet itself. Deep learning is going to transform every single industry that it touches. At Sightline Innovation, our goal is to make this technology accessible to industry and immediately applicable without the need for data scientists or PhDs. Sightline Innovation has designed a machine-learning-as-a-service platform to address problems facing industry today. We are at the forefront of commercializing machine learning applications on our unique technology platform, mlCortex (TM).
As Founder and COO at Sightline Innovation, Maithili is in charge of ensuring smooth delivery of their solutions into a customer’s organization. Maithili is a long-time entrepreneur and leverages decades of engineering management experience to ensure customers can harness the power of deep learning and achieve immediate gains.


Sam Talasila - Shopify
Sam Talasila is a Data Science Lead who uses data to democratize the commerce landscape for the 600,000+ entrepreneurs at Shopify. Sam speaks at various data meetups around the city of Toronto and has co-organized the Toronto Apache Spark Meetup in the past. A computational neuroscientist by training, Sam is also a serial entrepreneur, having started commerce enterprises with his wife and family.


Hossein Rahnama - Ryerson & MIT Media Lab
Hossein Rahnama is a computer scientist living in the paradox of academia and entrepreneurship. He is the founder and CEO of Flybits Inc., a digital experience platform serving a global customer base. Rahnama is a visiting professor at the MIT Media Lab, a professor at Ryerson University, and co-founder of the #1 university-based incubator, The Digital Media Zone. His research explores AI, mobile human-computer interaction, and the effective design of data-driven services. Rahnama has written 30 publications and received 10 patents in ubiquitous computing. He served as a council member at the Natural Sciences and Engineering Research Council of Canada and is currently serving on the board of Canadian Science Publishing.



END OF SUMMIT

New to Deep Learning? Time to Ask Qs! - Networking
Networking & Open Floor with Experts

Building Scalable Machine Learning Architectures - Hosted by CBC - WORKSHOP
Presentation and Real-Life Use-cases

Gosia Loj - Big Innovation Centre
Data & Trust – Open Data and Personal Data Ownership Platforms: Social & Legal Implications (Part 1)

Challenges & Opportunities of Investing in AI - PRACTICAL INSIGHTS
VC Panel, Q&A & Networking Session

Dr. Helia Mohammadi - National Healthcare, Microsoft Canada
From Raw Data to Actionable Clinical Insights: High-Throughput Analysis with Microsoft & Databricks Unified Analytics Platform for Genomics (uap4genomics)

Gosia Loj - Big Innovation Centre
Data & Trust – Open Data and Personal Data Ownership Platforms: Social & Legal Implications (Part 2)