The Quest for Understanding the Real World with Synthetic Data
Understanding the real world involves recognising objects, their physical locations, and the different relationships among them. This is a much higher level of scene understanding than 3D reconstruction and camera pose estimation, and it has relied mainly on supervised training data, which is laborious to obtain and consequently limited. In this work, we show how photo-realistic simulations and computer graphics can provide the necessary data to ameliorate this problem and help us gain a better understanding of the real world.
Ankur obtained his PhD in Prof. Andrew Davison's robot vision lab at Imperial College London, working on real-time SLAM and camera tracking. He completed his post-doctoral research at the University of Cambridge with Prof. Roberto Cipolla on scene understanding, through his work on SceneNet. He then returned to Andrew's lab as a Dyson Research Fellow, where he continued using simulations to generate data for machine-learning-based scene understanding with SceneNet RGB-D. He is now a Research Scientist at OpenAI, working on 3D scene understanding for robotics, RL, and transfer learning.