Augmenting Search Quality Ratings with Logs Data
When working on a search engine, we need to measure how our improvements contribute to user satisfaction. While human ratings are often used as a proxy for satisfaction, they are costly and do not always correlate well with user-reported satisfaction. In this talk I present a machine learning model that allows us to leverage log data to build a more accurate rater-based quality metric. Combining log data—which is easy to collect at scale—with a limited set of rater labels allows us to estimate user satisfaction for previously unseen document layouts at no additional cost.
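The combination described above can be sketched as a small supervised-learning setup: train a model on cheap log-derived features using the limited rater labels as targets, then apply it to sessions with no labels. The feature names, label values, and linear model below are illustrative assumptions, not the actual system.

```python
import numpy as np

# Hypothetical log-derived features per session (assumed for illustration):
# [click-through rate, time to first click (s), reformulation rate]
log_features = np.array([
    [0.9,  2.1, 0.05],
    [0.4,  8.3, 0.40],
    [0.7,  3.5, 0.15],
    [0.2, 12.0, 0.60],
    [0.8,  2.8, 0.10],
])

# Small, costly set of rater-provided satisfaction labels for those sessions.
rater_labels = np.array([0.95, 0.35, 0.70, 0.15, 0.85])

# Fit a linear model: satisfaction ~ w . features + b, via least squares.
X = np.hstack([log_features, np.ones((len(log_features), 1))])
w, *_ = np.linalg.lstsq(X, rater_labels, rcond=None)

def predict_satisfaction(features):
    """Estimate satisfaction for a new session (e.g. an unseen layout)
    from its cheap-to-collect log features alone."""
    return float(np.append(features, 1.0) @ w)

# A session from an unseen layout: no rater label needed.
estimate = predict_satisfaction([0.6, 4.0, 0.20])
```

In practice the talk's model would use a richer learner and far more log features; the point of the sketch is only the data flow: logs provide features at scale, raters provide a small labeled sample, and the fitted model extrapolates satisfaction to unlabeled traffic.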
Aleksandr is based at Google Research Europe in Zurich, Switzerland, where he is currently working on problems related to search-powered digital assistants. Aleksandr holds a PhD from the University of Amsterdam, where his research focused on modeling and understanding user behavior on complex search engine result pages. He has a number of publications in leading information retrieval conferences and journals, and has co-authored a book and multiple tutorials on click models for web search. Prior to Google, Aleksandr worked on search-related problems in the Applied Research group at Yandex.