
Distributionally-Informed Recommender System Evaluation

Published: 07 March 2024

Abstract

Current practice for evaluating recommender systems typically focuses on point estimates of user-oriented effectiveness metrics or business metrics, sometimes combined with additional metrics for considerations such as diversity and novelty. In this article, we argue that researchers and practitioners need to attend more closely to the various distributions that arise from a recommender system (or other information access system) and to the sources of uncertainty that produce them. One immediate implication of our argument is that both researchers and practitioners must report and examine the distribution of utility between and within different stakeholder groups more thoroughly. However, distributions of various forms arise in many other aspects of the recommender systems experimental process as well, and distributional thinking has substantial ramifications for how we design, carry out, and present recommender systems evaluations and research results. Leveraging and emphasizing distributions in the evaluation of recommender systems is a necessary step toward ensuring that these systems provide appropriate and equitably distributed benefit to the people they affect.
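To make the argument concrete, the sketch below illustrates one form of distribution-aware reporting the abstract calls for: summarizing the per-user distribution of an effectiveness metric, overall and by stakeholder group, rather than reporting a single point estimate. This is a minimal illustration, not the paper's method; the data, column names, and user groups are hypothetical placeholders, and in a real evaluation the per-user scores would come from an offline evaluator such as LensKit.

```python
# A minimal, hypothetical sketch of distribution-aware metric reporting:
# summarize the per-user distribution of an effectiveness metric instead of
# reporting only its mean. All data and names here are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Synthetic per-user nDCG scores for two hypothetical user groups.
scores = pd.DataFrame({
    "user_group": np.repeat(["A", "B"], 500),
    "ndcg": np.concatenate([
        rng.beta(5, 3, 500),  # group A: higher, tighter utility
        rng.beta(3, 4, 500),  # group B: lower, more dispersed utility
    ]),
})

# The usual point estimate...
print("mean nDCG:", round(scores["ndcg"].mean(), 3))

# ...versus the distribution: quantiles overall and within each group.
qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print(scores["ndcg"].quantile(qs).round(3))
print(scores.groupby("user_group")["ndcg"].quantile(qs).round(3))

# Bootstrap the between-group gap in median utility to surface the
# uncertainty hiding behind any single-number comparison.
def median_gap(df):
    med = df.groupby("user_group")["ndcg"].median()
    return med["A"] - med["B"]

boot = np.array([
    median_gap(scores.sample(frac=1.0, replace=True, random_state=i))
    for i in range(1000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"median-utility gap (A - B): {median_gap(scores):.3f} "
      f"[95% bootstrap CI: {lo:.3f}, {hi:.3f}]")
```

The same pattern extends to other metrics and other groupings; the point is that the quantiles and the bootstrap interval carry information about between- and within-group utility that a lone mean discards.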




Published In

ACM Transactions on Recommender Systems, Volume 2, Issue 1
March 2024
346 pages
EISSN: 2770-6699
DOI: 10.1145/3613520

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 07 March 2024
Online AM: 05 August 2023
Accepted: 06 July 2023
Revised: 23 June 2023
Received: 14 December 2022
Published in TORS Volume 2, Issue 1


Author Tags

  1. Evaluation
  2. distributions
  3. exposure
  4. statistics

Qualifiers

  • Research-article

Funding Sources

  • National Science Foundation


Article Metrics

  • Downloads (Last 12 months): 378
  • Downloads (Last 6 weeks): 38
Reflects downloads up to 23 Sep 2024

Cited By
  • (2024) TriMLP: A Foundational MLP-like Architecture for Sequential Recommendation. ACM Transactions on Information Systems. DOI: 10.1145/3670995. Online publication date: 10-Jun-2024.
  • (2024) Introduction to the Special Issue on Perspectives on Recommender Systems Evaluation. ACM Transactions on Recommender Systems 2(1), 1–5. DOI: 10.1145/3648398. Online publication date: 7-Mar-2024.
  • (2024) Systematic Literature Review on Recommender System: Approach, Problem, Evaluation Techniques, Datasets. IEEE Access 12, 19827–19847. DOI: 10.1109/ACCESS.2024.3359274. Online publication date: 2024.
  • (2024) Towards an Adaptive Gamification Recommendation Approach for Interactive Learning Environments. Advanced Information Networking and Applications, 341–352. DOI: 10.1007/978-3-031-57840-3_31. Online publication date: 11-Apr-2024.
  • (2024) Not Just Algorithms: Strategically Addressing Consumer Impacts in Information Retrieval. Advances in Information Retrieval, 314–335. DOI: 10.1007/978-3-031-56066-8_25. Online publication date: 24-Mar-2024.
  • (2024) Measuring Item Fairness in Next Basket Recommendation: A Reproducibility Study. Advances in Information Retrieval, 210–225. DOI: 10.1007/978-3-031-56066-8_18. Online publication date: 24-Mar-2024.
