
Research article · DOI: 10.1145/2043932.2043958

Rethinking the recommender research ecosystem: reproducibility, openness, and LensKit

Published: 23 October 2011

Abstract

Recommender systems research is being slowed by the difficulty of replicating and comparing research results. Published research uses various experimental methodologies and metrics that are difficult to compare. It also often fails to sufficiently document the details of proposed algorithms or the evaluations employed. Researchers waste time reimplementing well-known algorithms, and the new implementations may miss key details from the original algorithm or its subsequent refinements. When proposing new algorithms, researchers should compare them against finely-tuned implementations of the leading prior algorithms using state-of-the-art evaluation methodologies. With few exceptions, published algorithmic improvements in our field should be accompanied by working code in a standard framework, including test harnesses to reproduce the described results. To that end, we present the design and freely distributable source code of LensKit, a flexible platform for reproducible recommender systems research. LensKit provides carefully tuned implementations of the leading collaborative filtering algorithms, APIs for common recommender system use cases, and an evaluation framework for performing reproducible offline evaluations of algorithms. We demonstrate the utility of LensKit by replicating and extending a set of prior comparative studies of recommender algorithms --- showing limitations in some of the original results --- and by investigating a question recently raised by a leader in the recommender systems community on problems with error-based prediction evaluation.
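The algorithms LensKit implements are classical collaborative filtering techniques, and its evaluation framework centers on offline metrics such as prediction error. As a rough illustration of the kind of computation involved (this is not LensKit's actual API; the function names and toy data below are hypothetical), here is a minimal item-item collaborative filtering predictor paired with the RMSE metric that error-based evaluations use:

```python
# Illustrative sketch of item-item collaborative filtering with an
# RMSE-based offline evaluation. Toy data and names are hypothetical.
from math import sqrt

# ratings[user][item] = rating on a 1-5 scale
ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 2, "c": 5},
    "carol": {"a": 2, "b": 5, "c": 1},
}

def item_vector(item):
    """All ratings for one item, keyed by user."""
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine(i, j):
    """Cosine similarity between two items over their co-rating users."""
    vi, vj = item_vector(i), item_vector(j)
    common = vi.keys() & vj.keys()
    if not common:
        return 0.0
    dot = sum(vi[u] * vj[u] for u in common)
    ni = sqrt(sum(v * v for v in vi.values()))
    nj = sqrt(sum(v * v for v in vj.values()))
    return dot / (ni * nj) if ni and nj else 0.0

def predict(user, item):
    """Predict a rating as the similarity-weighted average of the
    user's ratings for other items."""
    num = den = 0.0
    for j, r in ratings[user].items():
        if j == item:
            continue
        s = cosine(item, j)
        num += s * r
        den += abs(s)
    return num / den if den else 0.0

def rmse(pairs):
    """Root-mean-squared error over (predicted, actual) pairs."""
    return sqrt(sum((p - a) ** 2 for p, a in pairs) / len(pairs))
```

Many of the details this sketch glosses over (rating normalization, similarity damping, neighborhood truncation) are exactly the under-documented choices the paper argues cause reimplementations to diverge from published results.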




Published In
      RecSys '11: Proceedings of the fifth ACM conference on Recommender systems
      October 2011
      414 pages
      ISBN:9781450306836
      DOI:10.1145/2043932
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. evaluation
      2. implementation
      3. recommender systems

      Qualifiers

      • Research-article

      Conference

RecSys '11: Fifth ACM Conference on Recommender Systems
October 23-27, 2011
Chicago, Illinois, USA

      Acceptance Rates

      Overall Acceptance Rate 254 of 1,295 submissions, 20%



      Cited By

• (2024) Towards a Technical Debt for AI-based Recommender System. Proceedings of the 7th ACM/IEEE International Conference on Technical Debt, 36-39. DOI: 10.1145/3644384.3648574
• (2024) Recommender Systems: A Review. Journal of the American Statistical Association, 119(545), 773-785. DOI: 10.1080/01621459.2023.2279695
• (2023) FairRecKit: A Web-based Analysis Software for Recommender Evaluations. Proceedings of the 2023 Conference on Human Information Interaction and Retrieval, 438-443. DOI: 10.1145/3576840.3578274
• (2023) Scalable and Explainable Linear Shallow Autoencoders for Collaborative Filtering from Industrial Perspective. Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization, 290-295. DOI: 10.1145/3565472.3595630
• (2023) When Newer is Not Better: Does Deep Learning Really Benefit Recommendation From Implicit Feedback? Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 942-952. DOI: 10.1145/3539618.3591785
• (2023) ClayRS. Information Systems, 119(C). DOI: 10.1016/j.is.2023.102273
• (2023) "Harnessing Customer Feedback for Product Recommendations: An Aspect-Level Sentiment Analysis Framework". Human-Centric Intelligent Systems, 3(2), 57-67. DOI: 10.1007/s44230-023-00018-2
• (2022) Evaluating Recommender Systems: Survey and Framework. ACM Computing Surveys, 55(8), 1-38. DOI: 10.1145/3556536
• (2022) RepSys: Framework for Interactive Evaluation of Recommender Systems. Proceedings of the 16th ACM Conference on Recommender Systems, 636-639. DOI: 10.1145/3523227.3551469
• (2022) Semantics-aware Content Representations for Reproducible Recommender Systems (SCoRe). Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, 354-356. DOI: 10.1145/3503252.3533723
