DOI: 10.1145/3678884.3682043
Extended abstract · Free access

Empowering Individuals in Automated Decision-Making: Explainability, Contestability and Beyond

Published: 13 November 2024

Abstract

When decisions crucial to their lives and well-being are made by opaque automated decision-making (ADM) systems powered by AI technologies, including machine learning and deep learning, individuals often find themselves disempowered. They may be unaware of the existence of such systems or of their rights related to ADM, such as those granted by the EU's General Data Protection Regulation (GDPR) and emerging AI regulations. Even when they are aware of ADM's adverse impacts, numerous barriers, ranging from the algorithmic to the organizational to the regulatory, hinder their ability to exercise those rights and to maintain agency and control over ADM decisions. My dissertation addresses this disempowerment by examining how explainability, contestability, and related mechanisms within the ADM ecosystem can empower individuals. Through a combination of qualitative workshops, large-scale user experiments, and interdisciplinary studies at the intersection of human-computer interaction (HCI) and AI governance, this research investigates what empowerment entails at the technical, organizational, and societal levels. This paper provides an overview of the various streams of my research, contributing to understanding and addressing the complexity of empowering those affected by ADM.



Published In

CSCW Companion '24: Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing
November 2024
755 pages
ISBN:9798400711145
DOI:10.1145/3678884
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 13 November 2024


Author Tags

  1. AI governance
  2. automated decision making
  3. contestable AI
  4. empowerment
  5. XAI

Qualifiers

  • Extended-abstract

Conference

CSCW '24

Acceptance Rates

Overall Acceptance Rate 2,235 of 8,521 submissions, 26%


