
REGISTRATION & LIGHT BREAKFAST

WELCOME
Hariharan Ananthanarayanan - Osaro
The Current State of Industrial Robotics
Results in the fields of deep and reinforcement learning have ignited immense interest in possible applications to robotics. In particular, research into learning-based approaches to control, as well as scene-understanding techniques, hints at the possibility of new applications and new ways of programming robots. However, most research has been geared toward toy problems, and there remains a gap between the most advanced papers and the reality of deployed industrial robots. I will discuss the most recent advances in deep and reinforcement learning for robotics, the current state of industrial robotics, and how Osaro is working to bridge the gap.
Hariharan Ananthanarayanan is a robotics engineer and enthusiast who works as a Motion Planning Engineer at Osaro, a San Francisco-based machine learning company building products powered by deep reinforcement learning. He has close to ten years of experience in the automation and material handling industry, beginning with an automated guided vehicle manufacturer in 2006. His expertise lies in the kinematics of robotic arms, particularly in the motion planning and control of robotic manipulators in industrial environments. He also possesses strong product development skills, which he leveraged in building the first functioning prototype of Obi, an independent feeding device for people with disabilities; his critical contributions to the product and active participation in the company enabled the founders of Desin LLC to launch Obi successfully in 2016. Hariharan currently focuses on integrating machine learning techniques into the control of robotic arms for more intuitive and reliable performance, and strongly believes that advances in machine learning can be leveraged to achieve human-like capabilities in manipulation. He holds a Bachelor’s degree in Mechanical Engineering from India (2003), a Master’s in Mechanical Engineering with a focus on robotics from the University of Tennessee (2005), and a Ph.D. in motion planning of industrial manipulators from the University of Dayton (2015).


THE CURRENT AI LANDSCAPE


Shahmeer Mirza - Machine Learning Engineer & Team Lead - 7-Eleven
Machine Learning in the Wild: Scaling Automation Applications from Prototype to Plant Floor
Shahmeer Mirza - 7-Eleven
7-Eleven’s Digital Transformation: Using Applied AI to Disrupt Convenience
7-Eleven was founded in 1927 as the world’s first convenience store, and for decades has operated as the market leader in convenience retail. Through the years, 7-Eleven has continued its obsession with “giving the customers what they want, when and where they want it,” leading the way with a number of innovations in the industry. The first self-serve soda fountains, Slurpees, and to-go coffee were key milestones that kept the business ahead of the competition. The last two decades have seen a rapidly changing technology landscape, and so in 2016, 7-Eleven began its Digital Transformation to ensure its future as an innovation leader in the retail space. Today we’ll talk about the latest breakthrough in that transformation journey…
Shahmeer Mirza is a Tech Lead and Machine Learning Engineer at 7Next, the R&D Division of 7-Eleven. Over the last several months he has led the team developing 7-Eleven’s Checkout-Free technology. In November of 2019, the team opened their first store at 7-Eleven’s headquarters, a culmination of their work in computer vision, machine learning, algorithms, distributed computing, and hardware engineering. He was previously at PepsiCo, where he developed next generation automation, computer vision, and machine learning solutions for Industry 4.0 applications. Shahmeer is also passionate about democratizing AI capabilities; while at PepsiCo, he created the first in a series of Data Analytics courses to upskill associates across the Snacks R&D organization. He holds a B.S. in Chemical and Biomolecular Engineering from Georgia Tech, and is currently pursuing his M.S. in Computer Science at Georgia Tech.




Lionel Cordesses - Artificial Intelligence Senior Team Manager - Renault Innovation Silicon Valley
Machine Learning Through Analogies Formalized with Tools from Category Theory
Lionel Cordesses - Renault Innovation Silicon Valley
Machine Learning Through Analogies Formalized with Tools from Category Theory
Industrial applications relying on Machine Learning (ML) to control robotic systems need to learn from few trials: the number of experiments in this context is limited by the available time or budget. This goal can be attained through knowledge transfer via analogies, both of which can be mathematically formalized with tools from Category Theory. The result is a new ML approach that transposes accumulated knowledge to new configurations while still relying on existing ML tools. Illustrations of this solution are presented on a cyber-physical system, a slot-car game. The method also proves versatile enough for simulated systems, as demonstrated on Atari 2600 games.
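For readers curious how an analogy can be given mathematical teeth, the standard device is a functor, a structure-preserving map between categories (this is the generic textbook definition, not the speaker's specific construction):

    F : \mathcal{C} \to \mathcal{D}, \qquad F(g \circ f) = F(g) \circ F(f), \qquad F(\mathrm{id}_X) = \mathrm{id}_{F(X)}

Reading \mathcal{C} as a configuration already mastered (say, one slot-car track) and \mathcal{D} as a new one, F transports learned relationships from the first to the second while preserving how they compose.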
Lionel Cordesses holds a Ph.D. in Computer Vision and Control. After developing autonomous vehicles in the late 90s, he joined the Renault group in France, where he created and led the Engine Control team. Then, after six years as a Project Manager in the field of Electric Vehicles, he crossed the Atlantic to build and lead Renault’s Artificial Intelligence team in Sunnyvale, CA. Lionel is also an Independent Expert in Signal Processing. He loves to hear “we know it cannot be done,” as this presents an exciting challenge to find a unique solution and ultimately create new patents (34 and counting).




Renaud Detry - Research Scientist - NASA Jet Propulsion Laboratory
Combining Semantic and Geometric Scene Understanding: From Robot Manipulation to Planetary Science
Renaud Detry - NASA Jet Propulsion Laboratory
Combining Semantic and Geometric Scene Understanding: From Robot Manipulation to Planetary Science
Renaud Detry is a research scientist at NASA JPL, and a visiting researcher at ULiege/Belgium and KTH/Stockholm. Detry earned a Master's degree in computer engineering and a Ph.D. in robot learning from ULiege in 2006 and 2010. Shortly thereafter he earned two Starting Grants from the Swedish and Belgian national research institutes. He served as a postdoc at KTH and ULiege between 2011 and 2015, before joining the Robotics and Mobility Section at JPL in 2016. His research interests are in perception for manipulation, robot grasping, computer vision and machine learning. At JPL, Detry is involved in the Mars Sample Return technology development program, and he conducts research in robot autonomy for mobility and opportunistic science on Mars, Europa and Enceladus.




Gretchen Greene - AI Policy Researcher, Lawyer and Computer Vision Scientist - MIT Media Lab/Harvard Berkman Klein Center Assembly AI & Governance
How Machine Vision Fails: Adversarial Attacks and Other Problems
Gretchen Greene - MIT Media Lab/Harvard Berkman Klein Center Assembly AI & Governance
How Machine Vision Fails: Adversarial Attacks and Other Problems
In recent years, we’ve seen remarkable computer vision successes using neural networks. OCR allows automated mail routing. Facial recognition identifies suspects on security video and our friends on social media. Scene segmentation and object detection and classification are used in autonomous vehicle navigation, medical diagnosis and robotic manufacturing.
But we’ve also seen notable failures. A black person is misclassified as a gorilla in Google Photos, recalling centuries of racial slurs. In one fatal crash, a Tesla can’t see cross traffic. In another, an Uber misclassifies a pedestrian as an unknown object, then as a vehicle and then as a bicycle, with varying expectations of future travel path. Snow on the road might be mistaken for lane markings. Worn markings and asphalt smoothly transitioning to dirt might not be seen at all. A stop sign can be changed to a speed limit sign with a few pieces of tape or a bit of graffiti. A person can make her face disappear or turn a toy turtle into a rifle. A single-pixel change can make an image unrecognizable to a classifier.
How fragile are the machine vision systems you rely on, in what ways will they fail, and what can you do about it?
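For a concrete picture of the attack family mentioned above, here is a minimal sketch (not material from the talk) of the fast gradient sign method (Goodfellow et al., 2015), which perturbs an input in the direction that most increases a classifier's loss:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.01):
        # Fast Gradient Sign Method: nudge each pixel by +/- epsilon in
        # the direction that increases the loss. `model` is any
        # differentiable classifier; inputs are assumed to be batched
        # tensors scaled to [0, 1].
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

A perturbation this small is typically invisible to a person yet can flip the predicted class, which is what makes the failures above so unsettling.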
An AI policy researcher, lawyer and computer vision scientist, Gretchen Greene advises government leaders on AI strategy, use and policy and works with Cambridge startups on everything from autonomous vehicles to QR codes in wearables. Greene has worked as a mathematician for the U.S. Departments of Defense, Energy and Homeland Security, has published in machine learning, science and policy journals, and has been interviewed by The Economist, Forbes China and the BBC. An affiliate researcher at MIT’s Media Lab, Greene has a CPhil and an MS in math from UCLA and a JD from Yale.



COFFEE
SMART ROBOTS


Thavidu Ranatunga - CTO - Fellow Robots
One Robot Doesn't Fit All; Lessons From the Field
Thavidu Ranatunga - Fellow Robots
One Robot Doesn't Fit All; Lessons From the Field
As sophisticated and cool as today's army of autonomous robots and self-driving cars may seem, the reality is that managing them is a lot more akin to managing an army of 10-year-olds spread out across a city. They are almost always backed by a team of human robot operators remotely watching over them while trying to teach them right from wrong. How do you manage all of that? How do you choose when to intervene and take action, or to stay back and let the robots learn on their own? In many ways, the robotics industry today is similar to where PCs were in the 80s: the machines are special-purpose and the technology is only now maturing, yet robots and AI are popping up all around you in daily life. Being so seamless is not an accident; there's actually a lot that goes into designing them to set the right expectations, so that they are accepted by people naturally while still being useful for business purposes. We will cover some of the learnings, stories, and (sometimes funny) adaptations we've had to make in order to make robots fit in frictionlessly with you.
Thavidu Ranatunga is CTO of Fellow Robots. Fellow has had robots deployed live in front of customers for almost 4 years in more than 7 different retailers across 2 countries. The robots help customers find products in stores and help store associates with inventory management in the store. Formerly with Microsoft, Groupon and IBM, Thavidu now leads the technology side of Fellow Robots, revolutionizing the future of the Retail experience. He has been programming since he was 8 years old and has a wealth of experience in a wide variety of software development areas, encompassing everything from web technologies to embedded systems. However, his strongest passion is for all things Machine Learning, Computer Vision and Robotics. Thavidu graduated from the Australian National University with a Bachelor of Software Engineering majoring in Mechatronics. He also ranked top 10 in the Australian Informatics Olympiad.




Jeremy Marvel - Research Scientist & Project Leader - National Institute of Standards and Technology
Performance Metrics for AI in Manufacturing HRI
Jeremy Marvel - National Institute of Standards and Technology
Performance Metrics for AI in Manufacturing HRI
Recent years have witnessed the birth of a new era in industrial robotics, in which collaborative systems, designed to work safely beside the human workforce, are integrated into historically manual processes. Such technologies represent a relatively low-risk gateway solution for transitioning facilities and operations to a state of partial automation, but they retain many of the characteristics of their non-collaborative predecessors. Specifically, collaborative robots largely remain difficult to program, integrate, and maintain. Although the skills of the labor force are expected to increase in the coming years, the collaborative capabilities of next-generation industrial robots must evolve to bridge the technology gap, leading to more effective human-robot teaming in manufacturing applications. It is expected that advances in artificial intelligence will play a significant role in aiding this transition, but prior to adoption such technologies must be validated and hardened for the industrial environment. In this talk, Dr. Jeremy Marvel from the U.S. National Institute of Standards and Technology (NIST) will discuss work in assessing and assuring artificial intelligence technology for human-robot interaction through measurement science. Dr. Marvel will present an overview of the current technology landscape and provide some initial metrology results from NIST's ongoing Performance of Collaborative Robot Systems project. Focal topics include artificial intelligence for collaborative robot safety, robotic system integration, and situation awareness for human-robot teaming.
Jeremy A. Marvel is a research scientist and project leader at the U.S. National Institute of Standards and Technology (NIST), a non-regulatory branch of the U.S. Department of Commerce. Prior to NIST, Dr. Marvel was a research scientist at the Institute for Research in Engineering and Applied Physics at the University of Maryland, College Park, MD. He joined the Intelligent Systems Division at NIST in 2012, and has over thirteen years of robotics research experience in both industry and government. His research expertise includes intelligent and adaptive solutions for robot applications, with particular attention paid to human-robot and robot-robot collaboration, multirobot coordination, industrial robot safety, machine learning, perception, and automated parameter optimization. Dr. Marvel currently leads a team of scientists and engineers in metrology efforts at NIST toward collaborative robot performance, and develops tools to enable small and medium-sized enterprises to effectively deploy robot solutions. Dr. Marvel received the Bachelor's degree in computer science from Boston University in 2003, the Master's degree in computer science from Brandeis University in 2005, and the Ph.D. degree in computer engineering from Case Western Reserve University in 2010.




Benjamin Hodel - Analytics Tech Specialist - Caterpillar Inc.
Better Earth-moving Machine Control through Artificial Intelligence
Benjamin Hodel - Caterpillar Inc.
Better Earth-moving Machine Control through Artificial Intelligence
Virtual product development iterates new designs in simulation before any prototypes are built, thus reducing cost until the design is optimal. When these simulations are run by a computer, the machines must be operated in a human-like way so that the results can be trusted. This is the function of the virtual “operator model”. Historically, these models have been difficult to create since the simulation environment is very dynamic – simple logical rules fail to achieve the right behavior and advanced operator models require advanced control theory expertise.
Using reinforcement learning, a program can learn to operate an earth-moving machine by itself. Over time, the agent improves its behavior and converges on actions that are optimal. In this way, operations that are difficult to simulate can be solved with little to no human input. Machine simulation examples demonstrated with this approach include excavator bucket-leveling, wheel loader bucket-lifting, and truck path-following. Special attention has been given to finding policies that produce smooth actions rather than oscillatory control. These same techniques can help the development of on-machine operator-assist features and autonomous control.
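As a rough sketch of how oscillatory control can be discouraged in such a setup (the state fields and penalty weights below are illustrative assumptions, not Caterpillar's implementation), the reward can be shaped to penalize abrupt action changes:

    import numpy as np

    def shaped_reward(tracking_error, action, prev_action,
                      w_error=1.0, w_smooth=0.1):
        # Reward = task performance minus a smoothness penalty.
        # tracking_error: deviation from the target (e.g., bucket angle
        # during excavator bucket-leveling). The w_smooth term penalizes
        # large step-to-step action changes, steering the learned policy
        # toward smooth rather than oscillatory control.
        smoothness_penalty = np.sum(
            (np.asarray(action) - np.asarray(prev_action)) ** 2)
        return -w_error * float(tracking_error) ** 2 - w_smooth * smoothness_penalty

Any standard policy-optimization algorithm can then be trained against this reward inside the machine simulation.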
Benj enjoys creating software solutions at the place where physical systems intersect applied science and analytics. Having received a BS in Mechanical Engineering from the University of Illinois Urbana-Champaign, Benj has worked at Caterpillar for over 16 years. He spent his early career as a developer of engineering data analysis software specializing in signal processing and is now an analytics technical specialist. He is currently leading the use of machine learning and deep learning technologies to make better Caterpillar products and solutions. He lives in Dunlap, Illinois, with his wife and three girls.


LUNCH



Lionel Cordesses - Artificial Intelligence Senior Team Manager - Renault Innovation Silicon Valley
PANEL: Global Policy Surrounding AI and Autonomous Systems
Lionel Cordesses - Renault Innovation Silicon Valley
Machine Learning Through Analogies Formalized with Tools from Category Theory
Industrial applications relying on Machine Learning (ML) to control robotic systems need to learn from few trials: the number of experiments in this context is limited by the available time or budget. This goal can be attained through knowledge transfer via analogies, both of which can be mathematically formalized with tools from Category Theory. The result is a new ML approach that transposes accumulated knowledge to new configurations while still relying on existing ML tools. Illustrations of this solution are presented on a cyber-physical system, a slot-car game. The method also proves versatile enough for simulated systems, as demonstrated on Atari 2600 games.
Lionel Cordesses holds a Ph.D. in Computer Vision and Control. After developing autonomous vehicles in the late 90s, he joined the Renault group in France, where he created and led the Engine Control team. Then, after six years as a Project Manager in the field of Electric Vehicles, he crossed the Atlantic to build and lead Renault’s Artificial Intelligence team in Sunnyvale, CA. Lionel is also an Independent Expert in Signal Processing. He loves to hear “we know it cannot be done,” as this presents an exciting challenge to find a unique solution and ultimately create new patents (34 and counting).


Samiron Ray - Comet Labs
Samiron Ray is a Principal at Comet Labs, an early-stage VC fund for AI and robotics startups that are transforming traditional industries. Comet Labs has made over 30 investments to date in companies like Abundant Robotics, Airmap, Cobalt Robotics, Doc.ai, and 3Scan. Samiron is a graduate of Harvard Law School and Duke University and has previously worked in frontier technology seed investing and technology law.



Gretchen Greene - AI Policy Researcher, Lawyer and Computer Vision Scientist - MIT Media Lab/Harvard Berkman Klein Center Assembly AI & Governance
PANELIST
Gretchen Greene - MIT Media Lab/Harvard Berkman Klein Center Assembly AI & Governance
How Machine Vision Fails: Adversarial Attacks and Other Problems
In recent years, we’ve seen remarkable computer vision successes using neural networks. OCR allows automated mail routing. Facial recognition identifies suspects on security video and our friends on social media. Scene segmentation and object detection and classification are used in autonomous vehicle navigation, medical diagnosis and robotic manufacturing.
But we’ve also seen notable failures. A black person is misclassified as a gorilla in Google Photos, recalling centuries of racial slurs. In one fatal crash, a Tesla can’t see cross traffic. In another, an Uber misclassifies a pedestrian as an unknown object, then as a vehicle and then as a bicycle, with varying expectations of future travel path. Snow on the road might be mistaken for lane markings. Worn markings and asphalt smoothly transitioning to dirt might not be seen at all. A stop sign can be changed to a speed limit sign with a few pieces of tape or a bit of graffiti. A person can make her face disappear or turn a toy turtle into a rifle. A single-pixel change can make an image unrecognizable to a classifier.
How fragile are the machine vision systems you rely on, in what ways will they fail, and what can you do about it?
An AI policy researcher, lawyer and computer vision scientist, Gretchen Greene advises government leaders on AI strategy, use and policy and works with Cambridge startups on everything from autonomous vehicles to QR codes in wearables. Greene has worked as a mathematician for the U.S. Departments of Defense, Energy and Homeland Security, has published in machine learning, science and policy journals, and has been interviewed by The Economist, Forbes China and the BBC. An affiliate researcher at MIT’s Media Lab, Greene has a CPhil and an MS in math from UCLA and a JD from Yale.



Michael Hayes - Senior Manager of Government Affairs - Consumer Technology Association
PANELIST
Michael Hayes - Consumer Technology Association
Michael Hayes is the Sr. Manager of Government Affairs at the Consumer Technology Association. Hayes is focused on advancing pro-innovation public policy. He leads CTA’s federal and state efforts on emerging technology issues including artificial intelligence and the sharing economy. He is also responsible for leading federal policy initiatives related to patent litigation reform and high-skilled immigration. Prior to CTA he worked in the U.S. House of Representatives.


Abhishek Gupta - McGill University
Why Public Competence in AI Ethics is Essential to the Future of AI?
Given all the work coming out of the field of responsible and ethical AI, there has been a push to find universal solutions, often mediated and prepared by a group of experts. But relying on a small group of experts and hunting for universal solutions is an exercise in futility. What we really need is cultural and contextual sensitivity. This can only be achieved by engaging with the public at the grassroots and tapping into their implicit knowledge of the local culture and context. That capacity needs to be nurtured and built over time through a public engagement process, which is what this session will dive into.
Abhishek Gupta is the founder of the Montreal AI Ethics Institute. His research focuses on applied technical and policy methods to address ethical, safety and inclusivity concerns in the use of AI across different domains. Abhishek comes from a strong technical background, working as a Machine Learning Software Engineer at Microsoft in Montreal.
He is also the founder of the AI Ethics community in Montreal, which has more than 1,350 members from diverse backgrounds who do deep dives into AI ethics and offer public consultations to initiatives like the Montreal Declaration for Responsible AI. His work has been featured by the United Nations, Oxford, the Stanford Social Innovation Review and the World Economic Forum, and he travels frequently across North America and Europe to help governments, industry and academia understand AI and how they can incorporate ethical, safe and inclusive development processes into their work. More information can be found at https://atg-abhishek.github.io


COMPUTER VISION


Jana Kosecka - Professor at the Department of Computer Science & Visiting Research Scientist - George Mason University & Google
Semantic Understanding for Robot Perception
Jana Kosecka - George Mason University & Google
Semantic Understanding for Robot Perception
Advancements in robotic navigation and fetch-and-delivery tasks rest to a large extent on robust, efficient and scalable semantic understanding of the surrounding environment. Deep learning has fueled rapid progress in computer vision in object category recognition, localization and semantic segmentation, exploiting large amounts of labelled data and using mostly static images. I will talk about challenges and opportunities in tackling these problems in indoor and outdoor environments relevant to robotics applications. These include methods for semantic segmentation and 3D structure recovery using deep convolutional neural networks (CNNs), localization and mapping of large-scale environments, training object instance detectors using synthetically generated training data, and 3D object pose recovery. The applicability of these techniques to autonomous driving, service robotics, manipulation and navigation will be discussed.
Jana Kosecka is a Professor at the Department of Computer Science, George Mason University, and currently a Visiting Research Scientist at Google. She is the recipient of the David Marr Prize in Computer Vision and received the National Science Foundation CAREER Award. Jana is an Associate Editor of IEEE Robotics and Automation Letters, a Member of the Editorial Board of the International Journal of Computer Vision, and an Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence. She has numerous publications in refereed journals and conferences and is a co-author of the monograph An Invitation to 3-D Vision: From Images to Geometric Models. Her general research interests are in Computer Vision and Robotics. In particular she is interested in 'seeing' systems engaged in autonomous tasks, acquisition of static and dynamic models of environments by means of visual sensing, object recognition, and human-computer interaction.



Andrei Polzounov - Senior Research Scientist - Blue River Technology
Autoencoder Based Image Segmentation for Precision Agriculture
Andrei Polzounov - Blue River Technology
Deep Learning in Precision Agriculture for Reducing Herbicide
The use of herbicide in agriculture has skyrocketed in the past few decades, a trend largely driven by new genetically modified, herbicide-resistant crops. Combating the ecological side effects of chemical overspray, as well as easing the economic burden of costly herbicides, is where John Deere's Blue River Technology comes in. Blue River's flagship product is See & Spray, an intelligent machine that uses deep learning to automatically detect and classify crops and weeds on the fly and precision sprayers to selectively spray weeds, saving vast quantities of chemicals in the process. This presentation will cover pixelwise semantic segmentation of imagery collected in real time in the field by See & Spray machines and how that information is used for targeted spraying of unwanted weeds. On-the-fly detection of spray allows for a closed-loop feedback control system in which a GPU-accelerated semantic autoencoder model works in tandem with the mechanically actuated sprayer system to achieve precision farming.
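As a schematic sketch of how a per-pixel segmentation output can gate spray nozzles (the class index, zone split and thresholds are assumptions for illustration, not Blue River's code):

    import torch

    def spray_decisions(model, image, weed_class=2, threshold=0.5):
        # Run a semantic segmentation network on one camera frame and
        # decide, per nozzle zone, whether to spray. `model` is assumed
        # to return per-pixel class logits of shape (1, C, H, W).
        with torch.no_grad():
            logits = model(image.unsqueeze(0))       # (1, C, H, W)
            probs = torch.softmax(logits, dim=1)[0]  # (C, H, W)
        weed_mask = probs[weed_class] > threshold    # (H, W) boolean
        # Split the frame width into 8 nozzle zones; spray a zone when
        # enough of its pixels look like weeds.
        zones = weed_mask.float().chunk(8, dim=1)
        return [zone.mean().item() > 0.05 for zone in zones]

In the closed-loop system described above, the same cameras also observe the spray itself, so decisions like these can be corrected continuously.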
Andrei is a senior research scientist at Blue River Technology. He is focused on deep learning and computer vision for perception for smarter agricultural machines. Previously, Andrei worked on processing Airbus’ satellite imagery, drones for Lockheed Martin and text localization and semantic understanding of text for Singapore’s Agency for Science Technology and Research. In his spare time Andrei enjoys skiing and hiking.




Carlo Dal Mutto - CTO - Aquifi
Object Identification and Defect Recognition for Manufacturing and Logistics
Carlo Dal Mutto - Aquifi
Object Identification and Defect Recognition for Manufacturing and Logistics
Real-time object identification and defect recognition are fundamental building blocks of “Industry 4.0” and “Logistics 4.0,” the latest phases of automation in manufacturing and logistics. An intuitive solution for both problems consists of first constructing three-dimensional scale models of all the items in the considered inventory and then analyzing those models by means of 3D convolutional neural networks. This session provides an overview of the system architecture used to tackle this problem, with a specific focus on the 3D convolutional neural network that has been trained and deployed. Experimental results are also provided.
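A minimal sketch of the kind of 3D convolutional network described (the layer sizes and voxel-grid input are illustrative assumptions, not Aquifi's architecture):

    import torch.nn as nn

    class VoxelClassifier(nn.Module):
        # Toy 3D CNN over a 32x32x32 occupancy grid, e.g. for deciding
        # whether a scanned item matches its reference model or shows a
        # defect. All sizes are illustrative.
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),   # 32 -> 16
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),   # 16 -> 8
            )
            self.classifier = nn.Linear(32 * 8 * 8 * 8, num_classes)

        def forward(self, voxels):  # voxels: (batch, 1, 32, 32, 32)
            x = self.features(voxels)
            return self.classifier(x.flatten(start_dim=1))

The 3D convolutions play the same role over voxels that 2D convolutions play over image pixels, letting the network learn shape features directly from the scanned geometry.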
Carlo Dal Mutto is a computer vision and machine learning engineer interested in the application of deep learning techniques to 3D data. He received a Ph.D. (Dottorato di Ricerca) in Information Engineering from the University of Padova, Italy, in 2012. Since 2011 he has been working with Aquifi Inc. as an advisor, computer vision architect, R&D lead, and currently as CTO. He is the inventor of several patents, has been an invited speaker at major technical conferences, and has co-authored research papers, two book chapters and two books on 3D data acquisition and processing: Time-of-Flight Cameras and Microsoft Kinect (Springer, 2011) and Time-of-Flight and Structured Light Depth Cameras: Technology and Applications (Springer, 2016). He has served as a reviewer and TPC member for CVPR, ECCV, ICCV, 3DPVT, 3DIMPVT, ICME, IJCV, IEEE SPL, and Springer MVAP.



COFFEE

Talent & Talk
LOOKING TO THE FUTURE

Abhishek Gupta - AI Ethics Researcher - McGill University
PANEL: Industry 4.0 and Cybersecurity - How do we Safeguard our Data and Manage Risk?
Abhishek Gupta - McGill University
Why Public Competence in AI Ethics is Essential to the Future of AI?
Given all the work coming out of the field of responsible and ethical AI, there has been a push to find universal solutions, often mediated and prepared by a group of experts. But relying on a small group of experts and hunting for universal solutions is an exercise in futility. What we really need is cultural and contextual sensitivity. This can only be achieved by engaging with the public at the grassroots and tapping into their implicit knowledge of the local culture and context. That capacity needs to be nurtured and built over time through a public engagement process, which is what this session will dive into.
Abhishek Gupta is the founder of the Montreal AI Ethics Institute. His research focuses on applied technical and policy methods to address ethical, safety and inclusivity concerns in the use of AI across different domains. Abhishek comes from a strong technical background, working as a Machine Learning Software Engineer at Microsoft in Montreal.
He is also the founder of the AI Ethics community in Montreal, which has more than 1,350 members from diverse backgrounds who do deep dives into AI ethics and offer public consultations to initiatives like the Montreal Declaration for Responsible AI. His work has been featured by the United Nations, Oxford, the Stanford Social Innovation Review and the World Economic Forum, and he travels frequently across North America and Europe to help governments, industry and academia understand AI and how they can incorporate ethical, safe and inclusive development processes into their work. More information can be found at https://atg-abhishek.github.io





Andrew Grotto - Research Fellow - Hoover Institution & CISAC, Stanford University
PANELIST
Andrew Grotto - Hoover Institution & CISAC, Stanford University
Andrew J. Grotto is a Research Fellow at the Hoover Institution and a William J. Perry International Security Fellow at the Center for International Security and Cooperation, both at Stanford University. His research and teaching center on the national security and international economic dimensions of America’s global leadership in information technology innovation.
Before coming to Stanford, Grotto was the Senior Director for Cybersecurity Policy at the White House in both the Obama and Trump Administrations, where he was responsible for coordinating the development and execution of cyber policies relating to critical infrastructure, Federal networks and consumers, as well as broader technology policies with a nexus to cybersecurity, such as artificial intelligence and encryption. Grotto previously served as Senior Advisor for Technology Policy to Commerce Secretary Penny Pritzker; a member of the professional staff of the Senate Select Committee on Intelligence; and as a Senior National Security Analyst at the Center for American Progress.


Roel Schouwenberg - Celsus Advisory Group
Roel Schouwenberg is the Intelligence & Research Director at Celsus Advisory Group. He has over twenty years of experience in the information security space. His extensive experience in threat intelligence and security research first exposed him to defensive and adversarial ML and AI over a decade ago. Roel has a special interest in destructive and targeted cyber attacks against physical infrastructure.




Dan Yu - Senior Research Scientist & Innovation Manager - Siemens Corporate Technology
AI in Industrial Applications: Challenges, Solutions and Future Directions
Dan Yu - Siemens Corporate Technology
AI in Industrial Applications: Challenges, Solutions and Future Directions
AI in industrial applications differs in many ways from AI in consumer applications, in terms of the availability, accessibility, meaning and explainability of data. The presentation discusses solutions to these thorny challenges that combine machine learning with existing domain know-how in the form of a knowledge graph. The practice involves using machine learning to convert existing domain know-how into a knowledge graph and to grasp the meaning of data, building standard information models to enable interoperability across domains, and giving explainable answers by combining the power of machine learning and knowledge graphs. Finally, the presentation looks ahead to how combining symbolic and statistical AI will maximize the benefits to industrial applications in general.
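As a toy illustration of the pattern described above (the triples, names and query are invented for the sketch, not Siemens' information models):

    # Domain know-how encoded as (subject, relation, object) triples.
    TRIPLES = [
        ("pump_17", "part_of", "cooling_loop"),
        ("cooling_loop", "feeds", "furnace_a"),
        ("bearing_wear", "causes", "vibration_spike"),
    ]

    def explain(symptom):
        # Walk the graph backwards from a symptom (e.g., one flagged by
        # an ML anomaly detector) to candidate root causes, giving the
        # learned detection a human-readable explanation.
        return [s for (s, r, o) in TRIPLES if r == "causes" and o == symptom]

    print(explain("vibration_spike"))  # ['bearing_wear']

The statistical model detects; the symbolic graph explains. That division of labor is the essence of the combination the talk advocates.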
Dan Yu has been an Innovator at Siemens Corporate Technology since June 2015. Prior to joining Siemens US, he founded the Siemens Innovation Center in Wuxi, China, in 2012, where Siemens innovates with local partners. He joined Siemens Research (Corporate Technology) China after studying at Tsinghua University and the Munich University of Technology. Since then he has been driving technology innovation across a wide spectrum of industrial application areas, including industrial manufacturing, intelligent traffic, smart buildings and logistics, using various Internet of Things technologies. For his significant innovation contributions, he was awarded “Siemens Inventor of the Year 2010.” Among the 70 patents he has authored or coauthored, many have already become products or parts of Siemens products.


CONVERSATION & DRINKS

REGISTRATION & LIGHT BREAKFAST

WELCOME
Sam Kherat - Bradley University
On Self-Driving Machines
Artificial Intelligence (AI) and Machine Learning (ML) have played a tremendous role in the development of autonomous vehicle technologies. Their impact has been even greater in off-road, mining and construction applications because of the private nature of the application sites.
In this presentation, Dr. Sam Kherat will give an overview of the role of AI and ML in the adoption of robotics, automation, and operator-assist programs in construction and mining applications, where the industry has faced a shortage of skilled operators, increased scrutiny of hazardous or dangerous operations, and environmental constraints such as operator sound and vibration limits. Dr. Kherat will then show how some of these techniques could be applied to autonomous vehicles.
Dr. Sam Kherat is an Adjunct Professor at Bradley University, Peoria, Illinois, responsible for robotics and other mechanical engineering courses. He joined Caterpillar, Inc. in 1996, helped found Caterpillar's automation center in Pittsburgh, PA, and was appointed its Manager in November 2007. Prior to that, Dr. Kherat was technical lead for Automation and Robotics programs including automated mining trucks, cycle planning, underground mining automation, and automated excavation. He was Caterpillar's Project Manager for the DARPA Grand Challenges and the 2007 Urban Challenge won by the Caterpillar-Carnegie Mellon University team. Dr. Kherat received his Master's degree in Electrical Engineering from Bradley University, Peoria, Illinois (1987) and his Ph.D. (1994) in Aeronautics and Astronautics from Purdue University, West Lafayette, Indiana.


STARTUP SESSION


Konstantin Kiselev - Co-Founder & CEO - Conundrum
AI Transforming Maintenance and Process Optimization across Industries
Konstantin Kiselev - Conundrum
AI Transforming Maintenance and Process Optimization across Industries
Konstantin is a co-founder and CEO of Conundrum. As an expert in AI, he has advised multinational companies and governmental organizations on AI strategy and implementation. At Conundrum, Konstantin leads the team that develops and deploys AI-driven products that help industrial companies become AI-driven enterprises. Conundrum works with clients from a wide spectrum of industrial areas including oil & gas, chemical production and paper production. Konstantin is a lecturer at the NVIDIA Deep Learning Institute and an NVIDIA University Ambassador in Russia. He holds a master's degree in theoretical physics from Lomonosov Moscow State University.



Drew Conway - Founder & CEO - Alluvium
Industrial Machine Intelligence: The Golden Braid of Data Streams, AI, and Human Expertise
Drew Conway - Alluvium
Industrial Machine Intelligence: The Golden Braid of Data Streams, AI, and Human Expertise
We are now more than a decade into the commercialization of “big data” and “data science," but these technologies have yet to meet the needs of the businesses whose work exists outside the data center. The commodity stack of big data technologies is fundamentally flawed for use with the rising tide of data streaming from connected machines in industrial settings. There are many reasons for this, as challenges abound when embedding machine intelligence into a production industrial lifecycle. Perhaps the most challenging, however, is understanding how intelligent software systems will support normal business operations and what real benefits they will provide. In this talk I will present the concept of the “golden braid” of industrial machine intelligence: blending massive data, advanced machine intelligence, and human expertise. This approach enables both the human experts and the algorithms to leverage their comparative strengths. To support this, I will provide a case study demonstrating how I have done this in practice.
Drew Conway is a world-renowned data scientist, entrepreneur, author, and speaker. He's built companies, and has advised and consulted across many industries, ranging from fledgling start-ups to Fortune 100 companies, as well as academic institutions and government agencies at all levels. In 2015, Drew founded Alluvium to bridge the gap between industrial machine data and the business and consumer users who utilize this data to make better decisions. He serves as Alluvium's CEO and is the driving force behind the company's vision and growth in its early years.


Prateek Joshi - Pluto AI
Deep Learning for Water-Energy Nexus
Water-Energy Nexus refers to the production relationship between water and energy. We will be specifically focusing on the energy consumed by treatment plants to produce clean water. A water treatment plant takes in unclean water and gives out clean water. Energy is one of the biggest expenses of operating a water treatment plant. The goal of the operators is to minimize the energy spent per gallon of water produced without compromising compliance.
Deep Learning can directly address these problems because water treatment plants collect large amounts of time-series sensor data every day. In order to extract wisdom from this data, we need to use time-series modeling, water process analysis, and Deep Learning. Within Deep Learning, we will focus on Sequence Learning, which enables computers to learn on their own from sequential measurements obtained from internet-connected sensors.
We need to leverage both existing data and outside data sources to predict the future behavior of the treatment plant, reduce energy consumption, and minimize operating costs. Digital simulations built using Artificial Intelligence have a direct impact on decision-making, which in turn reduces costs in many different ways. This approach can be used to solve some of the most challenging problems impacting billions of people around the world.
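A minimal sketch of the sequence-learning setup described (the architecture, sensor count and prediction target are illustrative assumptions):

    import torch.nn as nn

    class PlantEnergyModel(nn.Module):
        # Toy LSTM mapping a window of plant sensor readings (flows,
        # pump speeds, turbidity, ...) to predicted energy per gallon
        # for the next interval. Sizes are illustrative.
        def __init__(self, n_sensors=12, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, window):        # window: (batch, time, n_sensors)
            out, _ = self.lstm(window)
            return self.head(out[:, -1])  # predict from the last timestep

Trained on historical plant logs, a model like this lets operators simulate "what if" scenarios before changing setpoints on the real plant.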
Prateek Joshi is a published author of 8 books (including a #1 best seller on Amazon), an Artificial Intelligence researcher, and a TEDx speaker. He has been featured on Forbes 30 Under 30, CNBC, TechCrunch, Silicon Valley Business Journal, and many more publications. He is the founder of Pluto AI, a venture-funded Silicon Valley startup building an operational analytics platform for water facilities. He has been an invited speaker at technology and entrepreneurship conferences including TEDx, Global Big Data Conference, Machine Learning Developers Conference, and Sensors Expo. His tech blog (www.prateekjoshi.com) has received 1.7M+ page views from 200+ countries and has 7,400+ followers. You can learn more about him on his personal website at www.prateekj.com.




Alicia Kavelaars - Co-Founder and CTO - OffWorld
An Industrial AI Revolution in Space Starts Deep Underground on Earth
Alicia Kavelaars - OffWorld
DRL for Robots in Extreme Environments
As practical applications of DRL in the field of robotics emerge, implementations become feasible not only for controlled lab scenarios but also for field applications, where the unstructured nature of the environment poses additional challenges. OffWorld is developing a robotic platform that uses DRL algorithms for operations in extreme environments on Earth, as a precursor to applications in space such as habitat development and resource mining. We will review the challenges we are facing and our DRL implementation approach for robots in extreme environments.
Alicia is Co-Founder and Chief Technology Officer at OffWorld Inc. She brings over 15 years of experience in the aerospace industry developing and successfully launching systems for NASA, NOAA and the telecommunications industry. In 2015, Alicia made the jump to New Space to work on cutting-edge innovation programs. In her tenure at OffWorld, Alicia has led the development of AI-based rugged robots that will be deployed in one of the most extreme environments on Earth as a precursor to swarm robotic space operations: deep underground mines. Alicia holds an MSc and a PhD from Stanford University and a BSc in Theoretical Physics from UAM, Spain.



COFFEE
APPLICATIONS FOR INDUSTRIAL AUTOMATION


Greg Kinsey - Vice President, Industrial Solutions & Innovation - Hitachi
Creating Practical Value from AI in Manufacturing
Greg Kinsey - Hitachi
Creating Practical Value from AI in Manufacturing
Digital technologies promise to transform manufacturing, creating a fourth industrial revolution. There is a huge buzz around IoT, advanced analytics, AI, augmented reality, additive manufacturing, and other technologies. However, many companies struggle to create value, and some fail by focusing purely on technology. This presentation reveals “lessons learned” from Hitachi's own factories and co-creation projects with clients.
Greg's presentation will cover:
A major shift in manufacturing IT architecture
A roadmap for digital transformation of manufacturing
Use case: Using AI to predict and avoid defects
Use case: Using AI to predict and avoid bottlenecks
Applying design thinking and agile development to manufacturing systems.
Greg Kinsey is a global leader in the digital transformation of manufacturing. At Hitachi, he heads the incubation and commercialization of Smart Manufacturing solutions. Greg has 30 years of experience in industry, technology, and consulting. He started his career as a factory automation engineer at Goodyear. He subsequently led the manufacturing platform business at Digital Equipment Corporation, creating the forerunner of today's IoT platforms. During the 1990s he was a pioneer in the Six Sigma and Operational Excellence movement. He then led business transformation in senior executive roles at Siemens, IBM, Hewlett-Packard, and Celerant Consulting. Greg earned his MBA at Georgetown University and his BSc in Mechanical Engineering at Carnegie Mellon University.






Dragos Margineantu - AI Chief Technologist and Technical Fellow - Boeing Research & Technology
A Vision for Advancing Applied AI Research and Engineering
Dragos Margineantu - Boeing Research & Technology
A Vision for Advancing Applied AI Research and Engineering
AI approaches are successful nowadays on tasks with fairly narrow contexts, and they need to be engineered correctly. Aside from data availability and core AI algorithmic ideas, the success of most deployed machine learning and AI solutions depends on appropriate problem formulations and on engineering approaches that correctly tap into the right strengths of the AI techniques. In my talk, I will outline the five main directions that AI research and engineering will need to embrace to widen the contexts of applicability and to ease the deployment of solutions.
Dragos Margineantu is a Boeing Technical Fellow and AI Chief Technologist with Boeing Research & Technology. His research interests include machine learning, in particular methods for robust machine learning, reasoning and planning for decision systems, anomaly detection, reinforcement learning, human-in-the-loop learning, inverse reinforcement learning, and cost-sensitive, active, and ensemble learning. He was one of the pioneers in research on ensemble learning, cost-sensitive learning, and statistical testing of learned models. At Boeing, he has developed machine learning and AI based solutions for airplane maintenance, autonomous systems, airplane performance, surveillance, design, and security. Dragos serves as the Boeing AI lead for DARPA's “Assured Autonomy” program, and served as the Boeing principal investigator (PI) of the DARPA “Bootstrapped Learning” project, for which he designed and developed learning-by-example, learning-by-explanation, active learning, and inference components. He was also the PI of DARPA's “Learning Applied to Ground Robots” (LAGR) program, and has served as the PI of several Boeing IRAD research projects in machine learning, data science, and intelligent systems. Dragos designed and developed the learning and computer vision components of Boeing's “Opportune Landing Site” effort (AFRL). He serves as the Editor of the Springer book series on “Applied Machine Learning” and as Action Editor for Special Issues of the Machine Learning Journal (MLj). He serves on the editorial boards of both major machine learning journals (MLj and JMLR), and has served as a senior program committee member of ICML (the premier machine learning conference), KDD (the premier data mining conference) and AAAI (the premier AI conference). He was the chair of the KDD 2015 Industry and Government Track, and has organized and chaired a number of scientific workshops on anomaly detection, testing of decision systems, and cost-sensitive and budgeted learning. He edited a special issue of the Machine Learning Journal on Event Detection (Machine Learning 79:3, June 2010), has served as a senior program committee member or organizer of all major machine learning, AI, and data mining conferences, and has reviewed for all major ML, AI, and data mining scientific journals. Dragos holds a Ph.D. in Computer Science from Oregon State University (2001).



LUNCH

Robot Corner
HUMAN MACHINE COLLABORATION


Binu Nair - Senior Research Scientist - United Technologies Research Center
Person Tracking and Activity Recognition in Automation and Robotics
Binu Nair - United Technologies Research Center
United Technologies Research Center (UTRC) is the innovation engine and research vehicle for United Technologies Corporation (UTC), and serves to solve challenging problems in perception, robotics and controls technologies for its business units, such as Otis, Pratt & Whitney, Climate, Controls & Security, and Aerospace Systems. UTRC also works with the government on various DOD-, DARPA- and ARM-funded research programs.
In this talk, I will present work done in the area of human action and activity localization from streaming videos. Here, non-linear manifolds and grammar/codewords are learned using autoencoders and conditional restricted Boltzmann machines for each category of action. For inference, these learned manifolds are traversed by the features of the test video segment to obtain the action class and its percentage of completion at each frame. This work provides a way to realize real-time human action localization, with the possibility of predicting the next action or sub-action from a short streaming segment of frames, invariant to the speed of motion and the camera frame rate. Based on this work, I will discuss some of the research efforts and next steps that UTRC focuses on toward realizing human-robot collaboration and human-aware navigation for improving manufacturing outcomes in assembly operations in unconstrained environments.
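A highly simplified sketch of the reconstruction-based idea (one autoencoder per action class, scored by reconstruction error; names and sizes are illustrative, not UTRC's system):

    import torch.nn as nn

    class FrameAutoencoder(nn.Module):
        # One autoencoder is trained per action class; a test segment is
        # assigned to the class whose autoencoder reconstructs it best,
        # i.e., whose learned manifold its features lie closest to.
        def __init__(self, feat_dim=256, code_dim=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(feat_dim, code_dim), nn.ReLU())
            self.dec = nn.Linear(code_dim, feat_dim)

        def forward(self, x):
            return self.dec(self.enc(x))

    def classify_segment(features, autoencoders):
        # features: (T, feat_dim) per-frame features of a video segment.
        # Returns the action label with the lowest reconstruction error.
        errors = {label: ((ae(features) - features) ** 2).mean().item()
                  for label, ae in autoencoders.items()}
        return min(errors, key=errors.get)

Scoring short streaming windows this way is what makes frame-by-frame action localization, and hence early prediction of the next sub-action, possible.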
Binu Nair is a Senior Research Scientist with United Technologies Research Center in Berkeley, where he focuses on computer vision and deep learning algorithms for next-gen robotic perception and automation systems. His research interests include object tracking, person identification, and activity recognition, with emphasis on human-aware robot navigation and human-robot interaction. Prior to this work, he was a Research Engineer with the University of Dayton Research Institute, where he built novel deep learning algorithms for machine-part feature detection and recognition to automate human-level inspection tasks in manufacturing. Binu graduated with a PhD in Electrical Engineering from the University of Dayton in 2015, with a dissertation on human action recognition and localization from streaming videos. He has published 15+ articles in top publications and is a reviewer for IEEE Transactions on Image Processing (TIP) and the Journal of Electronic Imaging (JEI). Binu is also passionate about promoting diversity in STEM and has had the opportunity to drive this mission by presenting at tech conferences and universities such as UC Berkeley.




Modar Alaoui - CEO & Founder - Eyeris
Vision AI for Augmented Human Machine Interaction
Modar Alaoui - Eyeris
Vision AI for Augmented Human Machine Interaction
This session will unveil the latest vision AI technologies that ensure safe and efficient human machine interactions in the industrial automation context. Today’s human-facing industrial AI applications lack a key element for Human Behavior Understanding (HBU) that is critical for augmented safety and enhancing productivity. The second part of this session will detail how real-world applications can benefit from a comprehensive suite of visual behavior analytics that are readily available today.
Modar is a serial entrepreneur and expert in AI-based vision software development. He is founder and CEO at Eyeris, developer of EmoVu, a deep learning-based emotion recognition software that reads facial micro-expressions. Eyeris uses convolutional neural networks (CNNs) as a deep learning architecture to train and deploy its algorithms into a number of today's commercial applications. Modar combines a decade of experience between Human Machine Interaction (HMI) and audience behavioral measurement. He is a frequent keynoter on “Ambient Intelligence,” a winner of several technology and innovation awards, and has been featured in many major publications for his work.



PANEL: Safety in AI applied within Industrial Environments

Jeremy Marvel - Research Scientist & Project Leader - National Institute of Standards and Technology
PANELIST
Jeremy Marvel - National Institute of Standards and Technology
Performance Metrics for AI in Manufacturing HRI
Recent years have witnessed the birth of a new era in industrial robotics, in which collaborative systems, designed to work safely beside the human workforce, are integrated into historically manual processes. Such technologies represent a relatively low-risk gateway solution for transitioning facilities and operations to a state of partial automation, but they retain many of the characteristics of their non-collaborative predecessors. Specifically, collaborative robots largely remain difficult to program, integrate, and maintain. Although the skills of the labor force are expected to increase in the coming years, the collaborative capabilities of next-generation industrial robots must evolve to bridge the technology gap, leading to more effective human-robot teaming in manufacturing applications. It is expected that advances in artificial intelligence will play a significant role in aiding this transition, but prior to adoption such technologies must be validated and hardened for the industrial environment. In this talk, Dr. Jeremy Marvel from the U.S. National Institute of Standards and Technology (NIST) will discuss work in assessing and assuring artificial intelligence technology for human-robot interaction through measurement science. Dr. Marvel will present an overview of the current technology landscape and provide some initial metrology results from NIST's ongoing Performance of Collaborative Robot Systems project. Focal topics include artificial intelligence for collaborative robot safety, robotic system integration, and situation awareness for human-robot teaming.
Jeremy A. Marvel is a research scientist and project leader at the U.S. National Institute of Standards and Technology (NIST), a non-regulatory branch of the U.S. Department of Commerce. Prior to NIST, Dr. Marvel was a research scientist at the Institute for Research in Engineering and Applied Physics at the University of Maryland, College Park, MD. He joined the Intelligent Systems Division at NIST in 2012, and has over thirteen years of robotics research experience in both industry and government. His research expertise includes intelligent and adaptive solutions for robot applications, with particular attention paid to human-robot and robot-robot collaboration, multirobot coordination, industrial robot safety, machine learning, perception, and automated parameter optimization. Dr. Marvel currently leads a team of scientists and engineers in metrology efforts at NIST toward collaborative robot performance, and develops tools to enable small and medium-sized enterprises to effectively deploy robot solutions. Dr. Marvel received the Bachelor's degree in computer science from Boston University in 2003, the Master's degree in computer science from Brandeis University in 2005, and the Ph.D. degree in computer engineering from Case Western Reserve University in 2010.


Aaron Bestick - Mapper
Aaron Bestick researches robotic perception and planning and their use when people and robots must work together in close physical contact. His PhD work at UC Berkeley asked how collaborative robots can seamlessly learn and adapt to their human partners' individual ergonomic preferences, much as humans do when working in teams. Outside of academia, he's worked with robotics in a variety of problem domains, including mobile localization, robotic surgery, and prosthetic limbs at Google, Intuitive Surgical, and the University of Washington. Since 2017, he's been at Mapper in San Francisco, where he helps autonomous vehicles navigate complex urban environments safely and efficiently.


Alexander Harmsen - Iris Automation
Alexander Harmsen is CEO and Co-Founder of Iris Automation, a high tech start-up building computer vision collision avoidance systems for industrial drones. With backing from Bessemer, Y Combinator, over $10M in private equity investment from other Silicon Valley investors, and operations in multiple countries, Iris Automation is attempting to radically disrupt the industrial drone sector. He also sits on the Board of Directors for Unmanned Systems Canada, a national industry representation organization that has been at the forefront of commercial unmanned systems for more than a decade.
Previously, Alexander was the first Software Developer at Matternet, a medical drone package delivery start-up, and worked on computer vision systems at NASA's Jet Propulsion Lab in Los Angeles. He is deeply interested in the intersections between drones, autonomous vehicles and real-world applications that will affect billions of people, and is always excited to meet other people making big changes in the world.



END OF SUMMIT

Humanising AI and the Ethical Implications of Technology - PANEL DISCUSSION
Panel Discussion

Investor Panel & Networking Session - NETWORKING SESSION
Panel & Networking Session