Learning Semantic Environment Perception for Cognitive Robots
Robots need to perceive their environment to act in a goal-directed way. While mapping the environment geometry is a necessary prerequisite for many mobile robot applications, understanding the semantics of the environment will enable novel applications that require more advanced cognitive abilities. In this talk, I will report on methods that we developed for tasks such as the categorization of surfaces; the detection, recognition, and pose estimation of objects; and the transfer of manipulation skills to novel objects. By combining dense geometric modelling – based on the registration of measurements and graph optimization – with semantic categorization – based on random forests, deep learning, and transfer learning – 3D semantic maps of the environment are built. Our team has demonstrated the utility of semantic environment perception with cognitive robots in multiple challenging application domains, including domestic service, space exploration, search and rescue, and bin picking.
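To make the idea of combining registered geometric measurements with per-view semantic categorization concrete, here is a minimal sketch of one ingredient of such a pipeline: fusing per-voxel class probabilities from multiple registered observations into a 3D semantic map by accumulating log-probabilities and taking the most likely class. The function name, the voxel-grid representation, and the two-class example are illustrative assumptions for this sketch, not the speaker's actual implementation.

```python
import numpy as np

def fuse_semantic_voxels(observations, num_classes):
    """Fuse per-voxel class probabilities from registered views.

    observations: iterable of (voxel_index, class_probs) pairs, where
        voxel_index is a hashable grid coordinate (e.g. an (i, j, k) tuple
        obtained after registering the measurement into the map frame) and
        class_probs is a length-num_classes probability vector from a
        semantic classifier for that view.
    Returns a dict mapping each voxel index to its fused class label.
    """
    log_probs = {}  # accumulated log-probabilities per voxel
    for voxel, probs in observations:
        lp = np.log(np.asarray(probs, dtype=float) + 1e-9)  # avoid log(0)
        # Independent-observation assumption: probabilities multiply,
        # so log-probabilities add across registered views.
        log_probs[voxel] = log_probs.get(voxel, np.zeros(num_classes)) + lp
    # The fused label is the class with the highest accumulated evidence.
    return {v: int(np.argmax(lp)) for v, lp in log_probs.items()}

# Hypothetical usage: two views agree that voxel (0, 0, 0) is class 0,
# one view sees voxel (1, 0, 0) as class 1.
obs = [((0, 0, 0), [0.6, 0.4]),
       ((0, 0, 0), [0.7, 0.3]),
       ((1, 0, 0), [0.2, 0.8])]
labels = fuse_semantic_voxels(obs, num_classes=2)
```

In a full system the voxel indices would come from registering each RGB-D frame into the map frame (e.g. via pose-graph optimization), and the class probabilities from the learned classifier; this sketch only shows the evidence-fusion step.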
Prof. Dr. Sven Behnke is a full professor of Computer Science at the University of Bonn, Germany, where he heads the Autonomous Intelligent Systems group. He has been investigating deep learning since 1997. In 1998, he proposed the Neural Abstraction Pyramid, a hierarchical recurrent convolutional neural network architecture for image interpretation. He developed unsupervised methods for layer-by-layer learning of increasingly abstract image representations. The architecture was also trained in a supervised way to iteratively solve computer vision tasks, such as super-resolution, image denoising, and face localization. In recent years, his deep learning research has focused on learning object-class segmentation of images and semantic RGB-D perception.