Searching for Answers Through Iterative Feedback

Principal Investigator:
W. Bruce Croft

Center for Intelligent Information Retrieval (CIIR)
College of Information and Computer Sciences
140 Governors Drive
University of Massachusetts Amherst
Amherst, MA 01003-9264

Project Abstract

In current web search engines, the response to a query is typically a series of pages of ranked results (search engine result pages, or SERPs). The increasing use of mobile search places a premium on making the most of the limited display space available. Similarly, voice-based search, in which queries are interpreted by speech recognition and answers are delivered by speech generation, is becoming more common and further constrains the interaction bandwidth between the system and the user. In these situations, the ability to deliver precise answers to a broad range of questions, rather than a ranked display of results, becomes critical. Given that a search system can return a ranked list of possible answers instead of documents, and that the search environment may limit user-system bandwidth, the central research question of this proposal is: what is the most effective way to present and interact with a ranked list of answers, when the goal is to identify one or more satisfactory answers as quickly as possible? Understanding this problem and discovering solutions to it will have a large impact on the future development of search engines.

In this project, we are working on four research tasks: (a) develop and evaluate iterative relevance feedback models for answers; (b) develop and evaluate interactive summarization techniques for answers; (c) develop and evaluate finer-grained feedback approaches for answers; (d) develop and evaluate a conversation-based model for answer retrieval. This project is the first to study methods and models for interacting with ranked lists of answers. Many researchers are developing neural models for the factoid question-answering task, but we are one of the few groups studying the problem of finding non-factoid answers in passages of documents. The experience gained from developing neural models for this complex task provides the background for the unique tasks and approaches described in this project's proposal, which address the key, but previously ignored, issue of how to use ranked lists of answers to interact with users and to improve the results of neural answer retrieval models. The latter part of the project addresses the use of conversational models in search, which is also becoming increasingly important but has received little study to date.
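To make task (a) concrete, the loop below is a minimal sketch of iterative relevance feedback in the classic Rocchio style over bag-of-words term vectors. The project's actual models work on answer passages with neural semantic matching, so this toy version (all function names, weights, and example passages are illustrative assumptions, not the project's implementation) only shows the basic feedback iteration: judged answers reshape the query, which then re-scores the remaining candidates.

```python
# Illustrative Rocchio-style relevance feedback over sparse term vectors.
# This is a textbook sketch, not the project's neural answer-feedback model.

from collections import Counter

def rocchio_update(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Return an updated query vector after one round of feedback.

    query, relevant[i], nonrelevant[i] are {term: weight} mappings.
    alpha/beta/gamma are the standard Rocchio mixing weights.
    """
    new_q = Counter({t: alpha * w for t, w in query.items()})
    for passage in relevant:
        for t, w in passage.items():
            new_q[t] += beta * w / len(relevant)
    for passage in nonrelevant:
        for t, w in passage.items():
            new_q[t] -= gamma * w / len(nonrelevant)
    # Negative term weights are conventionally clipped to zero.
    return Counter({t: w for t, w in new_q.items() if w > 0})

def score(query, passage):
    """Dot-product similarity between query and passage term vectors."""
    return sum(w * passage.get(t, 0) for t, w in query.items())

# One feedback iteration on toy answer passages.
q0 = Counter({"battery": 1.0})
judged_relevant = [Counter({"battery": 1, "life": 1, "tips": 1})]
judged_nonrelevant = [Counter({"battery": 1, "charger": 1})]
q1 = rocchio_update(q0, judged_relevant, judged_nonrelevant)
```

After the update, terms from the judged-relevant passage ("life", "tips") enter the query with positive weight, while terms unique to the non-relevant passage ("charger") are suppressed, so the next ranking round favors answers resembling those the user accepted.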


Qu, C., Yang, L., Croft, W. B., Trippas, J., Zhang, Y. and Qiu, M., "Analyzing and Characterizing User Intent in Information-seeking Conversations," in the Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '18), Ann Arbor, Michigan, USA, July 8-12, 2018, pp. 989-992.

Vikraman, L., Croft, W. B. and O'Connor, B., "Exploring Diversification in Non-factoid Question Answering," in the Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval (ICTIR '18), Tianjin, China, Sept 14-17, 2018, pp. 223-226.

Zhang, Y., Chen, X., Yang, L., Ai, Q. and Croft, W. B., "Towards Conversational Search and Recommendation: System Ask, User Respond," in the Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM '18), Lingotto, Turin, Italy, October 22-26, 2018, pp. 177-186.

Qu, C., Yang, L., Croft, W. B., Scholer, F. and Zhang, Y., "Answer Interaction in Non-factoid Question Answering Systems," in the Proceedings of the ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR '19), Glasgow, Scotland, March 10-14, 2019, pp. 249-253.

Bi, K., Ai, Q. and Croft, W. B., "Iterative Relevance Feedback for Answer Passage Retrieval with Passage-level Semantic Match," to appear in the Proceedings of the European Conference on Information Retrieval (ECIR '19), Cologne, Germany, April 14-18, 2019.

This work is supported in part by the Center for Intelligent Information Retrieval (CIIR) and in part by the National Science Foundation (NSF IIS-1715095).
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.