3D Simultaneous Localization and Mapping and Navigation Planning for Mobile Robots in Complex Environments
Mobile robots operating in complex environments, such as rough terrain or the interiors of buildings, need to perceive their environment in 3D in order to navigate. We equipped autonomous ground vehicles and micro aerial vehicles with 3D laser scanners and other sensors. The distance measurements are registered and aggregated efficiently to create egocentric 3D representations of the robot's surroundings. Registering and aggregating these egocentric maps yields allocentric 3D environment representations. Based on these percepts, traversability is assessed and navigation plans are computed. Our team demonstrated 3D navigation in challenging application domains: for ground robots in search & rescue and space exploration scenarios, and for flying robots in indoor and outdoor inspection tasks.
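The core idea of fusing egocentric scans into an allocentric map can be illustrated with a minimal NumPy sketch. This is not the group's actual pipeline (which uses efficient registration and aggregation structures); it only shows the general mechanism, assuming known sensor poses: each scan is transformed from the sensor frame into a common world frame and accumulated into a sparse voxel set. All function names here are illustrative.

```python
import numpy as np

def transform_points(points, pose):
    """Apply a 4x4 homogeneous pose to an (N, 3) array of points,
    mapping them from the sensor frame into the world frame."""
    R, t = pose[:3, :3], pose[:3, 3]
    return points @ R.T + t

def aggregate_voxels(scans, poses, voxel_size=0.1):
    """Register each egocentric scan into a common allocentric frame
    and aggregate the points into a sparse set of occupied voxels."""
    occupied = set()
    for points, pose in zip(scans, poses):
        world = transform_points(points, pose)
        for idx in np.floor(world / voxel_size).astype(int):
            occupied.add(tuple(idx))
    return occupied
```

In a real system the poses would come from scan registration (e.g., ICP-style alignment) rather than being given, and the map representation would support incremental updates and traversability labels per cell.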
Prof. Dr. Sven Behnke is a full professor of Computer Science at the University of Bonn, Germany, where he heads the Autonomous Intelligent Systems group. He has been investigating deep learning since 1997. In 1998, he proposed the Neural Abstraction Pyramid, a hierarchical recurrent convolutional neural network architecture for image interpretation. He developed unsupervised methods for layer-by-layer learning of increasingly abstract image representations. The architecture was also trained in a supervised way to iteratively solve computer vision tasks, such as super-resolution, image denoising, and face localization. In recent years, his deep learning research has focused on learning object-class segmentation of images and semantic RGB-D perception.