DOI: 10.1145/3320435.3320448

Are All Rejected Recommendations Equally Bad?: Towards Analysing Rejected Recommendations

Published: 07 June 2019

Abstract

When evaluating algorithms that recommend a list of relevant items to a user, it is common to use metrics such as precision to measure the system accuracy. When computing precision, one computes the number of items that were selected by the user among the recommended items. As such, recommended items that were not selected by the user, which we call "rejected recommendations", are all considered to be bad recommendations, resulting in no increase to the system accuracy metric. Our ultimate goal is to develop a new recommendation accuracy evaluation metric, which may assign some value to the rejected recommendations. In this paper, as a first step, we claim that some rejected recommendations are better than others. Specifically, we consider items that are similar to the item that was finally selected, as better recommendations than items that bear little similarity. We conduct a user study, showing that rejected recommendations that have high content or collaborative similarity to the selected item are perceived by users as better recommendations than items with low similarity. In addition, we study the correlations between the recommended items shown to a user and the un-recommended items that the user has selected in a real-life job posting dataset. We show that when considering item similarity rather than simple precision, the correlations are much higher. This may be attributed to the influence of the recommended items on the decisions of the user.
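The contrast the abstract draws can be made concrete with a small sketch. This is not the paper's proposed metric: the similarity function and the partial-credit weighting below are illustrative assumptions, showing only how standard precision@k scores every rejected recommendation as zero, while a similarity-aware variant could credit a rejected item in proportion to its similarity to the item the user selected.

```python
# Illustrative sketch (not the paper's metric): precision@k versus a
# hypothetical similarity-aware variant that gives rejected
# recommendations partial credit based on similarity to selected items.

def precision_at_k(recommended, selected, k):
    """Fraction of the top-k recommended items the user actually selected.

    Every rejected recommendation contributes exactly zero.
    """
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in selected) / k

def similarity_aware_score(recommended, selected, sim, k):
    """Like precision@k, but a rejected item earns partial credit equal to
    its maximum similarity to any selected item (an assumed weighting).

    `sim(a, b)` is any similarity in [0, 1], e.g. content-based or
    collaborative similarity between items a and b.
    """
    top_k = recommended[:k]
    total = 0.0
    for item in top_k:
        if item in selected:
            total += 1.0  # a hit counts fully, exactly as in precision
        else:
            # best-match similarity to what the user actually chose
            total += max((sim(item, s) for s in selected), default=0.0)
    return total / k
```

Under this sketch, a rejected recommendation that closely resembles the selected item raises the score, while a dissimilar one still contributes nothing, which mirrors the paper's claim that not all rejected recommendations are equally bad.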



Published In

UMAP '19: Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization
June 2019
377 pages
ISBN:9781450360210
DOI:10.1145/3320435

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. evaluation
  2. recommender systems
  3. rejected items

Qualifiers

  • Research-article

Funding Sources

  • ISF

Conference

UMAP '19

Acceptance Rates

UMAP '19 Paper Acceptance Rate 30 of 122 submissions, 25%;
Overall Acceptance Rate 162 of 633 submissions, 26%



Cited By

  • (2024) "Exploring the Landscape of Recommender Systems Evaluation: Practices and Perspectives." ACM Transactions on Recommender Systems 2(1), 1-31. DOI: 10.1145/3629170. Online publication date: 7 Mar 2024.
  • (2024) "Non-binary evaluation of next-basket food recommendation." User Modeling and User-Adapted Interaction 34(1), 183-227. DOI: 10.1007/s11257-023-09369-8. Online publication date: 1 Mar 2024.
  • (2021) "Natural Language Processing for Recommender Systems." In Recommender Systems Handbook, 447-483. DOI: 10.1007/978-1-0716-2197-4_12. Online publication date: 22 Nov 2021.
  • (2020) "Second Workshop on the Impact of Recommender Systems at ACM RecSys '20." In Proceedings of the 14th ACM Conference on Recommender Systems, 630-631. DOI: 10.1145/3383313.3411471. Online publication date: 22 Sep 2020.
  • (2019) "First Workshop on the Impact of Recommender Systems at ACM RecSys 2019." In Proceedings of the 13th ACM Conference on Recommender Systems, 556-557. DOI: 10.1145/3298689.3347060. Online publication date: 10 Sep 2019.
  • (2019) "Attribute-based Evaluation for Recommender Systems." In Proceedings of the 13th ACM Conference on Recommender Systems, 378-382. DOI: 10.1145/3298689.3347049. Online publication date: 10 Sep 2019.
  • (2012) "Evaluating Recommender Systems." In Recommender Systems Handbook, 547-601. DOI: 10.1007/978-1-0716-2197-4_15. Online publication date: 24 Feb 2012.
