Speaker: Michael Bendersky, Google
Title: Neural Models for Learning To Rank
Date: Friday, March 25, 2022 - 1:30 - 2:30 PM EDT (Eastern Daylight Time) via Zoom
Zoom Access: Zoom Link; reach out to Alex Taubman for the passcode.
Abstract: Learning to rank (LTR) techniques are at the core of most search and recommendation tasks. In this talk, I will provide a broad overview of the work our research team has done in the area of neural LTR, with a focus on optimizing various neural architectures with listwise losses. First, I will discuss some of our new neural LTR approaches and how they compare to the hitherto state-of-the-art gradient boosted decision tree methods, such as LambdaMART. Second, I will demonstrate how listwise losses can be used to effectively adapt large language models (e.g., BERT) to ranking tasks. Finally, I will give a brief overview of the open-source TensorFlow and JAX libraries we have released to democratize neural LTR research.
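For readers unfamiliar with the listwise objectives the abstract refers to, the sketch below (not the speaker's implementation, and not the API of the released libraries) shows a minimal listwise softmax cross-entropy loss in JAX: the graded relevance labels for one query's candidate list are normalized into a target distribution and compared against the softmax of the model's scores. The function name, scores, and labels are illustrative assumptions.

```python
# Minimal sketch of a listwise softmax cross-entropy loss (illustrative only,
# not the speaker's or any library's implementation).
import jax
import jax.numpy as jnp

def listwise_softmax_loss(scores, labels):
    """Softmax cross-entropy between relevance labels and predicted scores.

    scores: [list_size] model scores for the documents in one query's list.
    labels: [list_size] graded relevance labels for the same documents.
    """
    # Turn graded labels into a target distribution over the whole list.
    label_dist = labels / jnp.sum(labels)
    # Log-probabilities induced by the model's scores over the same list.
    log_probs = jax.nn.log_softmax(scores)
    # Cross-entropy between the two distributions.
    return -jnp.sum(label_dist * log_probs)

# Example: one query with three candidate documents (hypothetical values).
scores = jnp.array([2.0, 0.5, -1.0])
labels = jnp.array([1.0, 0.0, 1.0])
print(listwise_softmax_loss(scores, labels))
```

Losses of this form operate on the whole ranked list for a query rather than on isolated documents or pairs, which is what distinguishes the listwise approach discussed in the talk.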
Bio: Michael Bendersky is a Senior Staff Software Engineer at Google Research. He currently manages a research group focusing on applying machine learning to content search and discovery. Michael holds a Ph.D. from the University of Massachusetts Amherst, and a B.Sc. and M.Sc. from the Technion, Israel Institute of Technology. He has co-authored over 80 publications and has served on program and organizing committees for multiple academic conferences, including SIGIR, CIKM, WSDM, WWW, KDD, and ICTIR. He is the co-author of two books in the "Foundations and Trends in Information Retrieval" series. Michael co-organized tutorials at SIGIR 2015, SIGIR 2019, ICTIR 2019, and WSDM 2022 on the topics of verbose query understanding, neural learning to rank, and search and discovery in personal email collections.