Jakob Uszkoreit

Learning Representations with Self-Attention

Self-attention has been shown to be an efficient way of learning representations of variable-sized data such as language, but also images and music, competitive in quality with recurrent and convolutional neural networks. This talk will cover the basic mechanism and various extensions, interpret results from different applications, and offer an outlook towards future research in this direction.
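
To make the "basic mechanism" concrete, below is a minimal sketch of scaled dot-product self-attention in plain NumPy. The function name, matrix names and dimensions are illustrative assumptions, not code from the talk: each position's output is a softmax-weighted mixture of value vectors from all positions.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence (illustrative sketch).

    x:             (seq_len, d_model) input token representations
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices
    """
    q = x @ w_q                                        # queries
    k = x @ w_k                                        # keys
    v = x @ w_v                                        # values
    scores = q @ k.T / np.sqrt(k.shape[-1])            # pairwise compatibility
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over positions
    return weights @ v                                 # mix values by attention weight

# Toy usage: a sequence of 4 tokens with 8-dimensional representations
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)          # (4, 8)
```

Because every position attends to every other position in a single step, the mechanism handles variable-length inputs without the sequential dependencies of recurrent networks.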

Jakob Uszkoreit leads the new Google Brain research lab in Berlin, where he works on neural network architectures for generating text, images and other modalities, for tasks such as machine translation and image generation. Earlier, Jakob led a team in Google Research developing neural network models of language that learn from weak supervision at very large scale, deployed in Search, Ads and the Google Assistant. Before that, after working on various aspects of Google Translate in its early years, he started the group that designed and implemented the semantic parser behind the Google Assistant.
