Robot Perception: Breaking the Data Barrier
Machine perception has made enormous progress, largely thanks to the abundance of data available on the web. For robotic vision, however, the usefulness of this data is severely limited by its lack of grounding, a comparative dearth of 3D and multimodal data, and the sensitivity of most successful vision algorithms to domain shift. In this talk I'll tackle the problem of learning robotic perception with a focus on addressing the data problem. I'll explore leveraging multimodality, self-supervision, simulation, active learning, domain transfer, and meta-learning, and demonstrate practical ways to improve the sample efficiency of perception algorithms in embodied settings.
Vincent Vanhoucke is a principal scientist on the Google Brain team and the director of Google's robotics research effort. His research has spanned many areas of artificial intelligence and machine learning, from speech recognition to deep learning, computer vision, and robotics. He holds a doctorate from Stanford University and a diplôme d'ingénieur from the École Centrale Paris.