REGISTRATION & LIGHT BREAKFAST
THE CURRENT DEEP LEARNING LANDSCAPE
Deep Robotic Learning
The Limits & Potentials of Deep Learning for Robotics
Deep Imitation Learning for Complex Manipulation Tasks from VR Teleoperation
Deep Learning for Robotics and Robotics for Deep Learning
Unsupervised Meta-Learning for Reinforcement Learning
Devin Schwab - Carnegie Mellon University
Deep Reinforcement Learning for Real-Robot Soccer: A Start
We have pursued research in robot soccer for many years, leading to successful teams of agile mobile robots that can manipulate a ball and strategize in the presence of an adversary. Robot soccer is a complex task. In this talk, I will present our ongoing work toward the goal of using deep reinforcement learning to learn effective robot skills, including several examples of formalisms for robot skill learning and for multi-robot transfer learning. The results are promising.
Devin is a fourth-year PhD student at the Robotics Institute, working with Manuela Veloso. His research applies deep reinforcement learning techniques to robots and multi-agent systems, such as RoboCup Small Size League soccer.
Meta Learning and Self Play
NATURAL LANGUAGE PROCESSING
Natural Language for Human Robot Interaction
Stefanie Tellex - Brown University
Learning Models of Language, Action and Perception for Human-Robot Collaboration
Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. To achieve complex tasks, it is essential for robots to move beyond merely interacting with people and toward collaboration, so that one person can easily and flexibly work with many autonomous robots. The aim of my research program is to create autonomous robots that collaborate with people to meet their needs by learning decision-theoretic models for communication, action, and perception. Communication for collaboration requires models of language that map between sentences and aspects of the external world. My work enables a robot to learn compositional models of word meaning that allow it to explicitly reason and communicate about its own uncertainty, increasing the speed and accuracy of human-robot communication.
Stefanie Tellex is an Assistant Professor of Computer Science and Assistant Professor of Engineering at Brown University. Her group, the Humans To Robots Lab, creates robots that seamlessly collaborate with people to meet their needs using language, gesture, and probabilistic inference, aiming to empower every person with a collaborative robot. Her awards include being named one of IEEE Spectrum's AI's 10 to Watch in 2013, the Richard B. Salomon Faculty Research Award at Brown University, a DARPA Young Faculty Award in 2015, a NASA Early Career Award in 2016, and a 2016 Sloan Research Fellowship.
David Held - Robotics Institute, CMU
Robot Learning through Motion and Interaction
Robots today are typically confined to operate in relatively simple, controlled environments. One reason for these limitations is that current methods for robotic perception and control tend to break down when faced with occlusions, viewpoint changes, poor lighting, unmodeled dynamics, and other challenging but common situations that occur when robots are placed in the real world. I argue that, in order to handle these variations, robots need to learn to understand how the world changes over time: how the environment can change as a result of the robot’s own actions or from the actions of other agents in the environment. I will show how we can apply this idea of understanding changes to a number of robotics problems, such as object tracking and safe robot learning. By learning how the environment can change over time, we can enable robots to operate in the complex, cluttered environments of our daily lives.
David Held is an assistant professor at Carnegie Mellon University in the Robotics Institute. His research focuses on robotic perception for autonomous driving and object manipulation. Prior to coming to CMU, David was a post-doctoral researcher at U.C. Berkeley, and he completed his Ph.D. in Computer Science at Stanford University, where he developed perception methods for autonomous vehicles. David also holds a B.S. and M.S. in Mechanical Engineering from MIT. He received a Google Faculty Research Award in 2017.
Unbiasing Semantic Segmentation For Robot Perception using Synthetic Data Feature Transfer
Josh Tobin - OpenAI
Synthetic data for robotic perception and control
Real-world robotic data can be expensive to collect and hard to label, but modern machine learning techniques are often data-intensive. As a result, it would be advantageous to learn robotic behaviors from cheap, easy-to-label data generated by a physics simulator. However, models trained in simulation often perform poorly on physical robots due to the 'reality gap' that separates synthetic data from real-world robotics. In this talk we will discuss a simple and surprisingly powerful technique for bridging the reality gap called domain randomization: massively randomizing the non-essential aspects of the simulator so that the model is forced to learn to ignore them. We will discuss applications of this idea in robotic perception and grasping.
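The core loop of domain randomization can be sketched in a few lines: resample the non-essential visual parameters of the simulated scene before each training episode, so the learned model cannot latch onto them. This is a minimal illustrative sketch; the parameter names and the `randomize_scene` hook are hypothetical, not the speaker's or OpenAI's actual simulator API.

```python
import random

def randomize_scene(sim):
    """Resample non-essential visual properties of a simulated scene.

    The keys below (table_color, light_intensity, ...) are illustrative
    placeholders; a real setup would randomize textures, lighting, camera
    pose, and distractor objects in the physics simulator.
    """
    sim["table_color"] = [random.random() for _ in range(3)]      # random RGB
    sim["light_intensity"] = random.uniform(0.2, 2.0)             # lighting
    sim["camera_jitter"] = [random.gauss(0, 0.05) for _ in range(3)]
    sim["texture_id"] = random.randrange(1000)                    # surface texture
    return sim

def generate_training_data(n_episodes):
    """Collect one randomized scene configuration per training episode."""
    data = []
    for _ in range(n_episodes):
        sim = randomize_scene({})   # fresh random appearance every episode
        # A real pipeline would render(sim) and record images plus labels;
        # here we only record the randomized parameters to show the loop.
        data.append(sim)
    return data
```

Because the task-relevant geometry stays fixed while everything else varies, a model trained on thousands of such episodes tends to transfer to the real world, which looks to it like just one more randomized scene.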
Josh Tobin is a Research Scientist at OpenAI and a PhD student in Computer Science at UC Berkeley working with Professor Pieter Abbeel. Josh's research focuses on applying deep learning to problems in robotic perception and control, with a particular concentration on deep reinforcement learning, domain adaptation, and generative models. Prior to Berkeley and OpenAI, Josh was a consultant at McKinsey & Co. in New York. Josh has a BA in Mathematics from Columbia University.
Combining Semantic and Geometric Scene Understanding: From Robot Manipulation to Planetary Science
CONVERSATION & DRINKS
REGISTRATION & LIGHT BREAKFAST
Lessons Learned from Building a Social Robot
The Current State of Industrial Robotics
Robotics for Care
Alicia Kavelaars - OffWorld
DRL for Robots in Extreme Environments
As practical applications of DRL in robotics emerge, implementations become feasible not only for controlled lab scenarios but also for field applications, where the unstructured nature of the environment poses additional challenges. OffWorld is developing a robotic platform that uses DRL algorithms for operations in extreme environments on Earth, as a precursor to applications in space such as habitat development and resource mining. We will review the challenges we face and our DRL implementation approach for robots in extreme environments.
Alicia is Co-Founder and Chief Technology Officer at OffWorld Inc. She brings over 15 years of experience in the aerospace industry, developing and successfully launching systems for NASA, NOAA, and the telecommunications industry. In 2015, Alicia made the jump to New Space to work on cutting-edge innovation programs. In her tenure at OffWorld, Alicia has led the development of AI-based rugged robots that will be deployed in one of the most extreme environments on Earth, deep underground mines, as a precursor to swarm robotic space operations. Alicia holds an MSc and PhD from Stanford University and a BSc in Theoretical Physics from UAM, Spain.
LEARNING TO LEARN
Pieter Abbeel - UC Berkeley
Robots that Learn to Learn
Reinforcement learning and imitation learning have seen success in many domains, including autonomous helicopter flight, Atari, simulated locomotion, Go, and robotic manipulation. However, the sample complexity of these methods remains very high. In contrast, humans can pick up new skills far more quickly. To do so, humans might rely on a better learning algorithm or on a better prior (potentially learned from past experience), and likely on both. In this talk I will describe recent work on learning-to-learn for action, in which agents learn the imitation/reinforcement learning algorithm and learn the prior. This has enabled agents to acquire new skills from a single demonstration or just a few trials. While designed for imitation and RL, our work is more broadly applicable and has also advanced the state of the art on standard few-shot classification benchmarks such as Omniglot and Mini-ImageNet.
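The idea of learning a prior that makes adaptation fast can be illustrated with gradient-based meta-learning in the spirit of MAML: meta-learn an initialization that reaches low loss on a new task after one inner gradient step. This toy first-order sketch on 1-D linear regression tasks is an assumption-laden illustration, not the speaker's actual method.

```python
import random

# Toy tasks: y = a * x with a different slope a per task.
# We meta-learn a scalar initialization theta that adapts to any
# sampled task in a single inner gradient step (first-order MAML style).

def loss_grad(theta, a, xs):
    """Gradient of mean squared error between theta*x and a*x w.r.t. theta."""
    return sum(2 * (theta - a) * x * x for x in xs) / len(xs)

def meta_train(meta_steps=200, inner_lr=0.1, outer_lr=0.01):
    theta = 0.0
    xs = [0.5, 1.0, 1.5]                       # fixed inputs for every task
    for _ in range(meta_steps):
        a = random.uniform(-1.0, 1.0)          # sample a new task (slope)
        # Inner loop: one gradient step of task-specific adaptation.
        adapted = theta - inner_lr * loss_grad(theta, a, xs)
        # Outer loop (first-order approximation): update the initialization
        # using the gradient evaluated at the adapted parameters.
        theta -= outer_lr * loss_grad(adapted, a, xs)
    return theta
```

With full networks in place of the scalar `theta`, the same two-loop structure underlies one-shot imitation and few-shot classification: the outer loop shapes an initialization from which each new task is reachable in a handful of gradient steps or demonstrations.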
Pieter Abbeel (Professor, UC Berkeley EECS) works in machine learning and robotics, in particular on making robots learn by watching people (apprenticeship learning) and learn through their own trial and error (reinforcement learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, and organizing laundry. His awards include best paper awards at ICML and ICRA, Young Investigator Awards from AFOSR, ONR, DARPA, and NSF, the Sloan Fellowship, the MIT TR35, the IEEE Robotics and Automation Society Early Career Award, and the Dick Volz Best U.S. Ph.D. Thesis in Robotics and Automation Award. Pieter also founded covariant.ai and Gradescope.
Learning Robot Manipulation Skills through Experience and Generalization
Supersizing and Empowering Robot Learning
Learning Hand-eye Coordination for Robotic Grasping with Deep Learning
LUNCH & ROBOT CORNER
ROBOTICS IN SOCIETY
Deep Learning Systems for Estimating Visual Attention in Robot-Assisted Therapy of Children with Autism
Robotics for Environmental Monitoring
PANEL: Human-Centric AI - What is the Right Approach and Why?
END OF SUMMIT