
DOI: 10.1145/3593013.3594067
Research Article · Open Access

Regulating ChatGPT and other Large Generative AI Models

Published: 12 June 2023

Abstract

Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper situates these new generative models in the current debate on trustworthy AI regulation and asks how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, and recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. The paper argues for three layers of obligations concerning LGAIMs (minimum standards for all LGAIMs; high-risk obligations for high-risk use cases; collaborations along the AI value chain). In general, regulation should focus on concrete high-risk applications, not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA's content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms and trusted flaggers.
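
To make the proposed framework concrete, the sketch below encodes the abstract's value-chain terminology and three-layer obligation structure as a small data model. It is purely illustrative and not part of the paper: the names (Actor, OBLIGATIONS, duties_for) are hypothetical, and the duty assignments are a loose paraphrase of the abstract's proposals.

```python
from enum import Enum, auto


class Actor(Enum):
    """Value-chain actors per the paper's proposed terminology."""
    DEVELOPER = auto()              # builds and releases the pre-trained LGAIM
    DEPLOYER = auto()               # adapts or integrates the model for specific uses
    PROFESSIONAL_USER = auto()      # uses the model in a professional context
    NON_PROFESSIONAL_USER = auto()  # private end user prompting the model
    RECIPIENT = auto()              # person exposed to LGAIM output


# Hypothetical mapping of the abstract's three obligation layers onto actors.
# Layer 1: minimum standards for all LGAIMs (transparency, risk management;
#          non-discrimination provisions may reach developers directly).
# Layer 2: high-risk obligations, triggered by concrete high-risk use cases
#          rather than by the pre-trained model itself.
# Layer 3: collaboration duties along the AI value chain.
OBLIGATIONS: dict[Actor, list[str]] = {
    Actor.DEVELOPER: [
        "transparency", "non-discrimination", "value-chain collaboration",
    ],
    Actor.DEPLOYER: [
        "transparency", "risk management (high-risk use cases)",
        "value-chain collaboration",
    ],
    Actor.PROFESSIONAL_USER: [
        "risk management (high-risk use cases)", "value-chain collaboration",
    ],
    Actor.NON_PROFESSIONAL_USER: [],  # largely not a duty-bearer under the proposal
    Actor.RECIPIENT: [],              # protected party, not a duty-bearer
}


def duties_for(actor: Actor) -> list[str]:
    """Return the illustrative duty list for a value-chain actor."""
    return OBLIGATIONS[actor]


if __name__ == "__main__":
    for actor in Actor:
        print(f"{actor.name}: {duties_for(actor) or 'no direct duties'}")
```

The split between the role enum and the duty mapping mirrors the abstract's central point: obligations attach to positions in the value chain and to concrete high-risk uses, not to the pre-trained model as such.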

Supplemental Material

PDF File
Technical Report (Annex to Regulating ChatGPT and other Large Generative AI Models)




        Published In

        FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
        June 2023
        1929 pages
        ISBN:9798400701924
        DOI:10.1145/3593013
        This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 12 June 2023


        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Conference

        FAccT '23


        Article Metrics

        • Downloads (last 12 months): 13,689
        • Downloads (last 6 weeks): 1,043
        Reflects downloads up to 10 Nov 2024

        Cited By
        • (2025) Politics of Information. Encyclopedia of Libraries, Librarianship, and Information Science, 454-463. DOI: 10.1016/B978-0-323-95689-5.00094-8
        • (2024) The transformative potential of Generative Artificial Intelligence (GenAI) in business. ESIC Market, 55(2), e333. DOI: 10.7200/esicm.55.333. Online publication date: 31-May-2024
        • (2024) Cheating Better with ChatGPT: A Framework for Teaching Students When to Use ChatGPT and other Generative AI Bots. Information Systems Education Journal, 22(3), 47-60. DOI: 10.62273/BZSU7160
        • (2024) Balancing Innovation and Regulation in the Age of Generative Artificial Intelligence. Journal of Information Policy, 14. DOI: 10.5325/jinfopoli.14.2024.0012. Online publication date: 2-Jul-2024
        • (2024) Inteligencia artificial generativa: determinismo tecnológico o artefacto construido socialmente [Generative artificial intelligence: technological determinism or socially constructed artifact]. Palabra Clave, 27(1), 1-23. DOI: 10.5294/pacla.2024.27.1.9. Online publication date: 20-Mar-2024
        • (2024) A Multifaceted Approach at Discerning Redditors Feelings Towards ChatGPT. EAI Endorsed Transactions on Internet of Things, 10. DOI: 10.4108/eetiot.6447. Online publication date: 28-Jun-2024
        • (2024) An Inquiry Into the Use of Generative AI and Its Implications in Education. International Journal of Adult Education and Technology, 15(1), 1-14. DOI: 10.4018/IJAET.349233. Online publication date: 24-Jul-2024
        • (2024) From Code to Conscience. Responsible Implementations of Generative AI for Multidisciplinary Use, 165-188. DOI: 10.4018/979-8-3693-9173-0.ch006. Online publication date: 20-Sep-2024
        • (2024) An Introduction to Generative AI. Generative AI and Implications for Ethics, Security, and Data Management, 1-16. DOI: 10.4018/979-8-3693-8557-9.ch001. Online publication date: 21-Aug-2024
        • (2024) Artificial Intelligence and ChatGPT Models in Healthcare. Pioneering Paradigms in Organizational Research and Consulting Interventions, 35-60. DOI: 10.4018/979-8-3693-7327-9.ch003. Online publication date: 29-Aug-2024
        • Show More Cited By
