Modelling and Explaining IR System Performance Towards Predictive Evaluation
FAGGIOLI, GUGLIELMO
2023
Abstract
Information Retrieval (IR) systems play a fundamental role in many modern services, including search engines, digital libraries, recommender systems, and social networks. The IR task is particularly challenging because of the volatility of IR system performance: users' information needs change daily, and so do the documents to be retrieved and the notion of what is relevant to a given information need. Moreover, the empirical evaluation of an IR system is a costly and slow post-hoc procedure that takes place only after the system has been deployed. Given the challenges linked to empirical IR evaluation, predicting a system's performance before its deployment would add significant value to the development of an IR system. In this manuscript, we lay the cornerstone for the prediction of IR performance by considering two closely related areas: the modeling of IR system performance and Query Performance Prediction (QPP). The former allows us to identify the features that most affect performance and that can therefore serve as predictors, while the latter provides a starting point to instantiate the predictive task in IR.

Concerning the modeling of IR performance, we first investigate one of the most popular statistical tools, ANOVA, by comparing traditional ANOVA with a recent alternative, bootstrap ANOVA. Secondly, using ANOVA, we study the concept of topic difficulty and observe that it is not an intrinsic property of the information need but stems from the formulation used to represent the topic. Finally, we show how to use Generalized Linear Models (GLMs) as an alternative to the traditional linear modeling of IR performance, and we show that GLMs provide more powerful inference with comparable stability.

Our analyses in the QPP domain start with the development of a predictor that selects, among a set of reformulations of the same information need, the best-performing one for the systematic review task. Secondly, we investigate how to classify queries as either semantic or lexical in order to predict whether neural models will outperform lexical ones. Finally, given the challenges observed in evaluating the previous approaches, we devise a new evaluation procedure, dubbed sMARE. sMARE moves from a single-point estimate of performance to a distributional one, enabling sounder comparisons between QPP models and more precise analyses.
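As an illustration of the rank-based evaluation that sMARE advocates, the sketch below computes a scaled mean absolute rank error for a QPP method. This is a minimal sketch, assuming sMARE averages, over a set of queries, the scaled absolute difference between the rank a predictor assigns to a query and the rank induced by the system's actual effectiveness; the function name `smare` and the toy data are illustrative, not taken from the thesis.

```python
import numpy as np
from scipy import stats

def smare(predicted_scores, actual_effectiveness):
    """Sketch of a scaled Mean Absolute Rank Error (sMARE).

    Ranks the queries once by the predictor's scores and once by the
    system's measured effectiveness (e.g., per-query AP), then averages
    the scaled absolute rank differences: lower is better, 0 means the
    predictor ranks all queries exactly as the effectiveness measure does.
    """
    predicted_scores = np.asarray(predicted_scores, dtype=float)
    actual_effectiveness = np.asarray(actual_effectiveness, dtype=float)
    n = len(predicted_scores)
    # rankdata assigns rank 1 to the smallest value; ties get averaged ranks.
    rank_pred = stats.rankdata(predicted_scores)
    rank_true = stats.rankdata(actual_effectiveness)
    # Per-query scaled absolute rank error: |r_p(q) - r_e(q)| / |Q|.
    sare = np.abs(rank_pred - rank_true) / n
    # sMARE is the mean of the per-query errors over the query set.
    return sare.mean()

# Hypothetical usage: five queries, predictor scores vs. measured AP.
pred = [0.42, 0.10, 0.77, 0.30, 0.55]
ap = [0.35, 0.05, 0.60, 0.40, 0.50]
print(smare(pred, ap))
```

Because the per-query errors (`sare`) are retained before averaging, one can also inspect their full distribution rather than only the single summary value, which is the distributional perspective the abstract refers to.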
File | Size | Format
---|---|---
tesi_definitiva_Guglielmo_Faggioli.pdf (open access) | 8.8 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/98381
URN:NBN:IT:UNIPD-98381