Searching for Answers Through Iterative Feedback

Principal Investigator:
W. Bruce Croft

Center for Intelligent Information Retrieval (CIIR)
College of Information and Computer Sciences
140 Governors Drive
University of Massachusetts Amherst
Amherst, MA 01003-9264

Project Abstract

In current web search engines, the response to a query is typically a series of pages containing ranked results (search engine result pages, or SERPs). The increasing use of mobile search places a premium on making effective use of the limited display space available. Similarly, voice-based search, where questions are captured by speech recognition and answers are delivered by speech synthesis, is becoming more common and likewise limits the interaction bandwidth between the system and the user. In these situations, the ability to deliver precise answers to a broad range of questions, rather than a ranked display of results, becomes critical. Given that a search system can return a ranked list of possible answers instead of documents, and that the search environment may limit user-system bandwidth, this leads to the central research question of this proposal: what is the most effective way to present and interact with a ranked list of answers, where the goal is to identify one or more satisfactory answers as quickly as possible? Understanding this problem and discovering solutions to it will have a large impact on the future development of search engines.

In this project, we will work on four research tasks: (a) develop and evaluate iterative relevance feedback models for answers; (b) develop and evaluate interactive summarization techniques for answers; (c) develop and evaluate finer-grained feedback approaches for answers; and (d) develop and evaluate a conversation-based model for answer retrieval. This project will be the first to study methods and models for interacting with ranked lists of answers. Many researchers are developing neural models for the factoid question-answering task, but we are one of the few groups studying the problem of finding non-factoid answers in passages of documents. The experience gained from developing neural models for this complex task provides the background for the tasks and approaches described in this proposal, which address the key, but previously ignored, issue of how to make effective use of ranked lists of answers to interact with users and improve the results from neural answer retrieval models. The latter part of the project will address the use of conversational models in search, which is also becoming increasingly important but has received little study to date.

This work is supported in part by the Center for Intelligent Information Retrieval (CIIR) and in part by the National Science Foundation (NSF IIS-1715095).
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.