Marta Garnelo

Representations for Deep Learning

Current deep learning algorithms have achieved impressive results on a variety of tasks, ranging from super-human image recognition to beating the world champion at the game of Go. Despite these successes, deep learning algorithms still suffer from a variety of drawbacks: they require very large amounts of training data, they lack the ability to reason on an abstract level, and their operation is largely opaque to humans. One way to overcome these problems is by creating models that form useful representations exhibiting beneficial properties such as disentanglement and the ability to generalise. This talk focuses on recent work that has addressed this issue for deep learning models.

Marta is a research scientist at DeepMind and is currently halfway through her PhD at Imperial College London under the supervision of Prof Murray Shanahan. Her research interests include deep generative models and reinforcement learning, in particular finding meaningful representations with the former to improve the latter.
