Research Article · Open Access
DOI: 10.1145/3173574.3173677

Explanations as Mechanisms for Supporting Algorithmic Transparency

Published: 19 April 2018

Abstract

Transparency can empower users to make informed choices about how they use an algorithmic decision-making system and to judge its potential consequences. However, transparency is often conceptualized in terms of the outcomes it is intended to bring about, not the specific mechanisms for achieving those outcomes. We conducted an online experiment examining how different ways of explaining Facebook's News Feed algorithm affect participants' beliefs and judgments about the News Feed. We found that all of the explanations made participants more aware of how the system works, and helped them determine whether the system is biased and whether they can control what they see. The explanations were less effective at helping participants evaluate the correctness of the system's output and form opinions about how sensible and consistent its behavior is. Based on these results, we present implications for the design of transparency mechanisms in algorithmic decision-making systems.

Supplementary Material

ZIP File (pn1694.nb.html.zip)
The supplementary file presents the experiment manipulation and survey instrument, descriptive statistics for each question, a table of correlations among all of the quantitative variables, and the full output of the regression models. The file is an HTML file and should be viewable in any web browser.
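To give a sense of the kind of regression output such a supplement reports, here is a minimal sketch of fitting a linear model that predicts a belief measure from the explanation condition a participant saw. This is an illustration only, not the paper's actual analysis: the file name and column names (`condition`, `awareness`) are hypothetical placeholders.

```python
# Minimal sketch (hypothetical data layout): regress a belief measure on
# the explanation condition, as one might when comparing explanation types.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and columns; the paper's supplement defines its own variables.
df = pd.read_csv("experiment_responses.csv")  # one row per participant

# Treat the explanation condition as a categorical predictor of an
# awareness score; .fit() returns an OLS results object.
model = smf.ols("awareness ~ C(condition)", data=df).fit()

# Prints a coefficient table analogous to the regression output in the supplement.
print(model.summary())
```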




    Published In

    CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
April 2018, 8489 pages
ISBN: 9781450356206
DOI: 10.1145/3173574
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. algorithmic decision-making
    2. explanations
    3. transparency


    Acceptance Rates

CHI '18 Paper Acceptance Rate: 666 of 2,590 submissions, 26%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%

