DOI: 10.1145/3173574.3173951

'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions

Published: 21 April 2018

Abstract

Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.

Supplementary Material

• ZIP File (pn3289.zip)
• MP4 File (pn3289-file5.mp4): supplemental video
• MP4 File (pn3289.mp4)




      Published In

      CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
April 2018, 8489 pages
ISBN: 9781450356206
DOI: 10.1145/3173574


      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Author Tags

      1. algorithmic decision-making
      2. explanation
      3. fairness
      4. justice
      5. machine learning
      6. transparency

      Qualifiers

      • Research-article

      Funding Sources

      • UK Engineering and Physical Sciences Research Council

      Conference

      CHI '18

      Acceptance Rates

CHI '18 Paper Acceptance Rate: 666 of 2,590 submissions, 26%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%



