Are we evaluating rigorously? Benchmarking recommendation for reproducible evaluation and fair comparison

Z Sun, D Yu, H Fang, J Yang, X Qu, J Zhang… - Proceedings of the 14th ACM Conference on Recommender Systems, 2020 - dl.acm.org
With a tremendous number of recommendation algorithms proposed every year, one critical issue has attracted considerable attention: there are no effective benchmarks for evaluation, which leads to two major concerns, i.e., unreproducible evaluation and unfair comparison. This paper aims to conduct rigorous (i.e., reproducible and fair) evaluation for implicit-feedback based top-N recommendation algorithms. We first systematically review 85 recommendation papers published at eight top-tier conferences (e.g., RecSys, SIGIR) to summarize important evaluation factors, such as data splitting and parameter tuning strategies. Through a holistic empirical study, the impacts of different factors on recommendation performance are then analyzed in depth. Following that, we create benchmarks with standardized procedures and provide the performance of seven well-tuned state-of-the-art algorithms across six metrics on six widely-used datasets as a reference for later studies. Additionally, we release a user-friendly Python toolkit, which differs from existing ones in addressing the broad scope of rigorous evaluation for recommendation. Overall, our work sheds light on the issues in recommendation evaluation and lays the foundation for further investigation. Our code and datasets are available on GitHub (https://github.com/AmazingDD/daisyRec).
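The abstract's core point is that evaluation details such as the data splitting strategy and metric computation are rarely standardized across papers. As a rough illustration only (this is not the daisyRec API; the function names `leave_one_out_split`, `evaluate_topn`, and the popularity baseline are hypothetical), the sketch below shows one reproducible protocol for implicit-feedback top-N evaluation: a fixed-seed leave-one-out split per user, followed by HR@N and NDCG@N.

```python
# Minimal, self-contained sketch of a reproducible top-N evaluation protocol.
# Illustrative only: names here are NOT part of the daisyRec toolkit.
import numpy as np
import pandas as pd


def leave_one_out_split(interactions: pd.DataFrame, seed: int = 2020):
    """Hold out one interaction per user as the test item; the rest is training."""
    rng = np.random.default_rng(seed)  # fixed seed -> reproducible split
    test_idx = (
        interactions.groupby("user")["item"]
        .apply(lambda s: rng.choice(s.index.to_numpy()))  # one held-out row per user
        .to_numpy()
    )
    test = interactions.loc[test_idx]
    train = interactions.drop(index=test_idx)
    return train, test


def hr_ndcg_at_n(ranked_items, test_item, n=10):
    """HR@N is 1 if the held-out item is in the top-N list; NDCG@N discounts by rank."""
    top_n = list(ranked_items[:n])
    if test_item in top_n:
        rank = top_n.index(test_item)  # 0-based position in the ranking
        return 1.0, 1.0 / np.log2(rank + 2)
    return 0.0, 0.0


def evaluate_topn(recommend, train, test, n=10):
    """`recommend(user, train)` is any model callable returning a ranked item list."""
    hrs, ndcgs = [], []
    for _, row in test.iterrows():
        hr, ndcg = hr_ndcg_at_n(recommend(row["user"], train), row["item"], n)
        hrs.append(hr)
        ndcgs.append(ndcg)
    return float(np.mean(hrs)), float(np.mean(ndcgs))


if __name__ == "__main__":
    # Toy interaction log and a popularity baseline, just to exercise the protocol.
    data = pd.DataFrame({"user": [0, 0, 0, 1, 1, 2, 2, 2],
                         "item": [1, 2, 3, 1, 4, 2, 3, 4]})
    train, test = leave_one_out_split(data)
    most_popular = train["item"].value_counts().index.tolist()
    hr, ndcg = evaluate_topn(lambda u, tr: most_popular, train, test, n=3)
    print(f"HR@3={hr:.3f}  NDCG@3={ndcg:.3f}")
```

Fixing the random seed, the splitting rule, and the metric cutoffs in one place is what makes such a protocol reproducible; the paper's benchmark standardizes exactly these kinds of choices across models and datasets.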