One can't judge a click model only by how well it ranks documents; we also need to make sure it has identified and removed the biases hidden in the logged data.

That's what we showed in our recent #SIGIR23 paper with Philipp Hager, Jean-Michel Renders and Maarten de Rijke.

https://arxiv.org/abs/2304.09560

#PaperThread #ULTR #ClickModels #IR

An Offline Metric for the Debiasedness of Click Models

A well-known problem when learning from user clicks is the set of inherent biases prevalent in the data, such as position or trust bias. Click models are a common method for extracting information from user clicks, such as document relevance in web search, and for estimating click biases for downstream applications such as counterfactual learning-to-rank, ad placement, or fair ranking. Recent work shows that the current evaluation practices in the community fail to guarantee that a well-performing click model generalizes well to downstream tasks in which the ranking distribution differs from the training distribution, i.e., under covariate shift. In this work, we propose an evaluation metric based on conditional independence testing to detect a lack of robustness to covariate shift in click models. We introduce the concept of debiasedness in click modeling and derive a metric for measuring it. In extensive semi-synthetic experiments, we show that our proposed metric helps to predict the downstream performance of click models under covariate shift and is useful in an off-policy model selection setting.
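The intuition behind the metric: a click model is "debiased" with respect to the logging policy if its relevance estimates are conditionally independent of the logging policy's scores given the true relevance. Below is a minimal sketch of such a conditional-independence check, using a simple histogram-based estimator of conditional mutual information; the variable names and the binning estimator are my own illustration of the idea, not the paper's exact procedure:

```python
import numpy as np

def conditional_mutual_information(x, y, z, bins=5):
    """Estimate I(X; Y | Z) by discretizing each variable into equal-width bins.

    In the click-model setting (illustrative mapping, not the paper's notation):
      x = relevance scores of the new click model
      y = scores of the logging policy
      z = ground-truth relevance
    A value near zero suggests x is conditionally independent of y given z,
    i.e., the model did not simply inherit the logging policy's biases.
    """
    def disc(v):
        edges = np.linspace(v.min(), v.max(), bins + 1)
        return np.clip(np.digitize(v, edges[1:-1]), 0, bins - 1)

    xd, yd, zd = disc(np.asarray(x)), disc(np.asarray(y)), disc(np.asarray(z))

    # Empirical joint distribution p(x, y, z)
    joint = np.zeros((bins, bins, bins))
    for i, j, k in zip(xd, yd, zd):
        joint[i, j, k] += 1
    joint /= joint.sum()

    pz = joint.sum(axis=(0, 1))   # p(z)
    pxz = joint.sum(axis=1)       # p(x, z)
    pyz = joint.sum(axis=0)       # p(y, z)

    # I(X; Y | Z) = sum p(x,y,z) * log[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ]
    cmi = 0.0
    for i in range(bins):
        for j in range(bins):
            for k in range(bins):
                p = joint[i, j, k]
                if p > 0:
                    cmi += p * np.log(p * pz[k] / (pxz[i, k] * pyz[j, k]))
    return cmi
```

With finite samples the binned estimator is biased upward, so in practice one would compare the score against a calibrated baseline rather than test for exact zero; the sketch only conveys the conditional-independence idea behind the metric.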


The inaccuracy of this is amusing for #sigir2023 #sigir23

Luckily my team checked properly.

📢Call For Tutorials📢

The call for tutorials is now open 😆

Proposal deadline: March 28, 2023.

For more information and to submit your proposal, visit https://sigir.org/sigir2023/

#ACMSIGIR #SIGIR2023 #SIGIR2023CFP #SIGIR23

SIGIR | Taipei | Taiwan | 2023