DOI: 10.1145/3605390.3605413 · Short paper · Open access

Transparency is Crucial for User-Centered AI, or is it? How this Notion Manifests in the UK Press Coverage of GPT

Published: 20 September 2023

Abstract

Transparency is a core principle of user-centered AI and features in all recent regulatory initiatives. Is it equally present in the public discourse? In this study, we focus on a type of AI that has reached the media, i.e., GPT. We collected a corpus of national newspaper articles published in the United Kingdom (UK) while GPT-3 was the latest version (June 2020 to November 2022) and investigated whether transparency was mentioned and, if so, in which terms. We used a mixed quantitative and qualitative approach, through which articles were both parsed for word frequency and manually coded. The results show that transparency was rarely mentioned explicitly, but issues underpinning transparency were addressed in most texts. As a follow-up to the initial study, the scant presence of the term transparency is confirmed in an additional corpus of UK national newspaper articles published since the launch of ChatGPT (November 2022 to May 2023). The implications of the absence of transparency as a reference point for AI ethical concerns in the public discourse are discussed.
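As a rough illustration of the quantitative step mentioned in the abstract (a minimal sketch, not the authors' actual pipeline), the following Python snippet counts how often transparency-related terms occur in a corpus of plain-text articles. The directory layout (uk_press_corpus/*.txt) and the term list are assumptions made for this example.

    # Minimal sketch of a word-frequency pass over a corpus of newspaper articles.
    # Assumptions (not from the paper): articles are stored as .txt files in one
    # directory, and the term list below is illustrative, not the study's codebook.
    import re
    from collections import Counter
    from pathlib import Path

    TERMS = {"transparency", "transparent", "explainability", "accountability"}

    def term_frequencies(corpus_dir: str) -> Counter:
        """Count occurrences of the target terms across all .txt files in corpus_dir."""
        counts = Counter()
        for path in Path(corpus_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8", errors="ignore").lower()
            tokens = re.findall(r"[a-z]+", text)
            counts.update(t for t in tokens if t in TERMS)
        return counts

    if __name__ == "__main__":
        for term, n in term_frequencies("uk_press_corpus").most_common():
            print(f"{term}: {n}")

In a study like this, such counts would only serve as a first quantitative pass; the manual coding step described in the abstract would still be needed to capture issues underpinning transparency that are discussed without using the term itself.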


Published In

CHItaly '23: Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter
September 2023
416 pages

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. AI
  2. GPT
  3. Transparency
  4. press
  5. stakeholders

Qualifiers

  • Short-paper
  • Research
  • Refereed limited

Conference

CHItaly 2023

Acceptance Rates

Overall Acceptance Rate 109 of 242 submissions, 45%

