Learning Representations for Reinforcement Learning
The learning performance of a reinforcement learning (RL) agent is highly dependent on its data representation—the features. In this talk, I will discuss several reasons why the representation is so critical in RL, related to the fact that the agent typically learns online, needs to explore, constantly sees data in new parts of the environment, and often uses algorithms that bootstrap off their own value estimates. I will describe some strategies for learning representations suitable for this setting, particularly highlighting the utility of sparse or orthogonal representations.
Key takeaways:
1. It is important to consider the role of the representation for your RL agent.
2. The choice of representation is not just about accuracy; it interacts with the stability of the update, the ability to explore, and interference in online updating.
3. There is much more to be done to understand the types of representations currently learned, and what properties we want.
Martha White is an Associate Professor of Computing Science at the University of Alberta. Before joining the University of Alberta in 2017, she was an Assistant Professor of Computer Science at Indiana University. Martha is a PI of AMII (the Alberta Machine Intelligence Institute), one of the top machine learning centres in the world, and a director of RLAI, the Reinforcement Learning and Artificial Intelligence Lab at the University of Alberta. She holds a Canada CIFAR AI Chair and has authored more than 40 papers in top journals and conferences. Her research focus is on developing algorithms for agents continually learning on streams of data, with an emphasis on representation learning and reinforcement learning.