Unsupervised Doodling and Painting with Generative Agents
The computational techniques that drive the field of machine learning are increasingly being used for creative endeavours. In particular, recent advances in generative modelling provide a tantalising glimpse of how machine learning may one day find a place in the modern artist's tool set. In this talk I will investigate using reinforcement learning agents as generative models of images. A generative agent controls a simulated painting environment, and is trained with rewards provided by a discriminator network that is simultaneously trained to assess the realism of the agent's samples. We find that when sufficiently constrained, generative agents can learn to produce images with a degree of visual abstraction, despite having only ever seen real photographs, never drawing trajectories. And given enough time with the painting environment, they can produce images with considerable realism. These results show that, under the right circumstances, some aspects of human drawing can emerge from simulated embodiment, without the need for external supervision, imitation or social cues.
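The training loop the abstract describes can be caricatured in a few lines. The sketch below is purely illustrative and not the speaker's actual system: the "discriminator" is a fixed stand-in scoring function rather than a jointly trained network, the painting environment is a one-stroke canvas, and the policy is a one-dimensional Gaussian updated with REINFORCE. All names (`discriminator_reward`, `paint`, `TARGET`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

TARGET = 0.6  # stand-in for "the statistics of real photographs"

def discriminator_reward(canvas):
    """Stand-in realism score: the closer the canvas's mean intensity
    is to TARGET, the higher (less negative) the reward."""
    return -abs(canvas.mean() - TARGET)

def paint(intensity, size=8):
    """Toy painting environment: a single stroke fills the whole canvas,
    clipped to the valid pixel range."""
    return np.clip(np.full((size, size), intensity), 0.0, 1.0)

# Agent: Gaussian policy over stroke intensity, trained with REINFORCE.
mu, sigma, lr = 0.1, 0.2, 0.02
baseline, rewards = 0.0, []
for step in range(500):
    a = rng.normal(mu, sigma)             # sample a stroke (an action)
    r = discriminator_reward(paint(a))    # discriminator provides the reward
    baseline = r if step == 0 else 0.9 * baseline + 0.1 * r
    # REINFORCE: grad of log pi(a) w.r.t. mu is (a - mu) / sigma^2
    mu += lr * (r - baseline) * (a - mu) / sigma**2
    mu = float(np.clip(mu, 0.0, 1.0))     # keep the policy in pixel range
    rewards.append(r)
```

Even this toy version exhibits the structure of the talk's setup: the painter never sees example trajectories, only a scalar realism signal, and the average reward rises as the policy's strokes drift toward what the discriminator considers "real".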
- Ali Eslami is a staff research scientist at DeepMind. His research focuses on getting computers to learn generative models of images that produce not only good samples but also good explanations for their observations. Prior to this, he was a post-doctoral researcher at Microsoft Research in Cambridge. He did his PhD in the School of Informatics at the University of Edinburgh, during which he was also a visiting researcher in the Visual Geometry Group at the University of Oxford.