Ankur Handa

The Quest for Understanding Real World With Synthetic Data

Understanding the real world involves recognising objects, their physical locations, and the relationships among them. This is a much higher-level scene understanding task than 3D reconstruction or camera pose estimation, and it has relied mainly on supervised training data, which is laborious to obtain and consequently limited. In this work, we show how photo-realistic simulations and computer graphics can provide the necessary data to ameliorate this problem and help us gain a better understanding of the real world.

Ankur obtained his PhD in Prof. Andrew Davison's Robot Vision lab at Imperial College London, working on real-time SLAM and camera tracking. He completed his post-doctoral research at the University of Cambridge with Prof. Roberto Cipolla on scene understanding, including his work on SceneNet. He then returned to Andrew's lab as a Dyson Research Fellow, continuing to use simulation to generate training data for machine-learning-based scene understanding with SceneNet RGB-D. He is now a Research Scientist at OpenAI, working on 3D scene understanding for robotics, reinforcement learning, and transfer learning.

Partners & Attendees

Intel.001
Nvidia.001
Graphcoreai.001
Ibm watson health 3.001
Facebook.001
Acc1.001
Rbc research.001
Twentybn.001
Forbes.001
Maluuba 2017.001
Mit tech review.001
Kd nuggets.001