The Difference is Night and Day: Appearance Modelling for Robust Robot Visual Navigation
Long-term localization and mapping are essential capabilities for autonomous mobile robots that must operate independently in dynamic environments. In this talk, I will describe two deep neural network systems developed by my research group to improve vision-based navigation. The first system, Sun-BCNN, uses the sun as an absolute orientation reference to reduce drift in visual motion estimates. Crucially, Sun-BCNN learns how solar illumination affects image appearance, so it does not require the sun to be visible in the image stream in order to operate. The second system relies on image-to-image translation and learned colour constancy models to transform incoming images in appearance space. This remapping enables a robot to localize against a visual map created under substantially different lighting conditions (for example, between dawn and dusk).
Dr. Jonathan Kelly directs the Space & Terrestrial Autonomous Robotic Systems (STARS) Laboratory at the University of Toronto Institute for Aerospace Studies, where his group carries out research at the nexus of sensing, planning, and control, with an emphasis on fundamental problems in perception, representation, and understanding of the world. Dr. Kelly holds a Dean's Catalyst Professorship (an early-career award for research excellence and potential) and a Canada Research Chair in Collaborative Robotics. Prior to joining the University of Toronto, he was a postdoctoral researcher in the Robust Robotics Group at the Massachusetts Institute of Technology. Dr. Kelly received his PhD degree in 2011 from the University of Southern California. Before starting graduate school, he was a software engineer in the Space Technologies division of the Canadian Space Agency.