Abstract
A number of artificial intelligence (AI) systems have been proposed to help users identify issues of algorithmic fairness and transparency. These systems draw on diverse bias-detection methods, including exploratory cues, interpretable tools, and algorithm-revealing techniques. This study informs the design of such systems by probing how users make sense of fairness and transparency, concepts that are hypothetical in nature and lack established means of evaluation. Focusing on individual perceptions of fairness and transparency, it examines the roles of these normative values in over-the-top (OTT) platforms by empirically testing their effects on sensemaking processes. A mixed-method design combining qualitative and quantitative approaches was used to discover user heuristics and to test the effects of these normative values on user acceptance. Collectively, a composite concept of transparent fairness emerged from users' sensemaking processes, playing a formative role in shaping perceived quality and credibility. From a sensemaking perspective, the study discusses the implications of transparent fairness for algorithmic media platforms, clarifying what should be done, and how, to make them more trustworthy and reliable. Based on the findings, a theoretical model is developed that defines transparent fairness as an essential algorithmic attribute in the context of OTT platforms.
Funding
This project was funded by the Office of Research and the Institute for Social and Economic Research at Zayed University (Policy Research Incentive Program 2022), with additional support from the Provost's Research Fellowship Award of Zayed University (R21050/2022).
Cite this article
Shin, D., Lim, J.S., Ahmad, N. et al. Understanding user sensemaking in fairness and transparency in algorithms: algorithmic sensemaking in over-the-top platform. AI & Soc 39, 477–490 (2024). https://doi.org/10.1007/s00146-022-01525-9