Learning to Simulate Reality
Given rich simulators and large compute budgets, AI systems are becoming increasingly capable at solving problems in video games, perception, robotics, and generative design. But where do simulators for such domains come from? Building and maintaining large-scale simulators takes many person-years of effort. One popular alternative is to learn implicit models of reality via deep generative modeling. However, such models have proven difficult to scale beyond graphics due to the need for pixel-accurate modeling. Instead, I will present an integrative approach in which deep learning systems are trained to reason explicitly in the space of computer programs. The combination of deep learning, program synthesis, and reinforcement learning may pave a path toward systems that learn to output rich simulations conditioned on raw data. Such dynamic simulators could greatly accelerate several areas, including perception and control for robotics and autonomous cars, and could enable Mixed Reality applications in the wild.
I am a Research Scientist at Google DeepMind. Previously, I was a PhD student at MIT, advised by Joshua Tenenbaum. I am primarily interested in understanding how the mind works; my current research goal is to build learning algorithms that acquire grounded common-sense knowledge.