Nothing Special   »   [go: up one dir, main page]

skip to main content
10.1145/3614321.3614325acmotherconferencesArticle/Chapter ViewAbstractPublication PagesicegovConference Proceedingsconference-collections
research-article
Open access

Stereotypes in ChatGPT: an empirical study

Published: 20 November 2023 Publication History

Abstract

ChatGPT is rapidly gaining interest and attracts many researchers, practitioners and users due to its availability, potentials and capabilities. Nevertheless, there are several voices and studies that point out the flaws of ChatGPT such as its hallucinations, factually incorrect statements, and potential for promoting harmful social biases. Being the focus area of this contribution, harmful social biases may result in unfair treatment or discrimination of (a member of) a social group. This paper aims at gaining insight into social biases incorporated in ChatGPT language models. To this end, we study the stereotypical behavior of ChatGPT. Stereotypes associate specific characteristics to groups and are related to social biases. The study is empirical and systematic, where about 2300 stereotypical probes in 6 formats (like questions and statements) and from 9 different social group categories (like age, country and profession) are posed to ChatGPT. Every probe is a stereotypical question or statement where a word is masked and ChatGPT is asked to fill in the masked word. Subsequently, as part of our analysis, we map the suggestions of ChatGPT to positive and negative sentiments to get a measure of stereotypical behavior of a language model of ChatGPT. We observe that ChatGPT stereotypical behavior differs per social group category, for some categories the average sentiment is largely positive (e.g., for religion), while for others it is negative (e.g., for political). Further, our work empirically affirms the previous claims that the formats of probing affect the sentiments of the stereotypical outcomes of ChatGPT. Our results can be used by practitioners and policy makers to devise societal interventions to change the image of a category or a social group, as captured in ChatGPT language model(s), and/or to decide to appropriately influence the stereotypical behavior of such language models.

References

[1]
Baidoo-Anu, D., & Owusu Ansah, L. 2023. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Available at SSRN 4337484.
[2]
Bargh, M. S., & Choenni, S. January 2023. Towards an Integrated Approach for Preserving Data Utility, Privacy and Fairness. In Conference on Multidisciplinary Research (MyRes),p. 290.
[3]
Bird, S., Loper, E., & Klein, E. 2009. Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O'Reilly Media.
[4]
Biswas, S. S. 2023. Role of chat gpt in public health. Annals of Biomedical Engineering, 1-2.
[5]
Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proceedings of Advances in Neural Information Processing Systems 29 (NIPS’16), pages 4349–4357
[6]
Borji, A. 2023. A categorical archive of ChatGPT failures. arXiv preprint arXiv:2302.03494.
[7]
Brants, T., Popat, A. C., Xu, P., Och, F. J., & Dean, J. 2007. Large language models in machine translation.
[8]
Caliskan, A., Bryson, J.J., and Narayanan, A. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186
[9]
Choenni, R., Shutova, E., & van Rooij, R. 2021. Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? In M-C. Moens, X. Huang, L. Specia, & S. W. Yih (Eds.), 2021 Conference on Empirical Methods in Natural Language Processing: EMNLP 2021 : proceedings of the conference : November 7-11, 2021 (pp. 1477-1491).
[10]
Choenni, S., Netten, N., Bargh, M.S., & Choenni, R. December 2018. On the usability of big (social) data. In 2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom), pp. 1167-1174, IEEE.
[11]
Choi, J. H., Hickman, K. E., Monahan, A., & Schwarcz, D. 2023. ChatGPT goes to law school. Available at SSRN.
[12]
de Vassimon Manela, D., Errington, D., Fisher, T., van Breugel, B., & Minervini, P. April 2021. Stereotype and skew: Quantifying gender bias in pre-trained and fine-tuned language models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, pp. 2232-2242.
[13]
Dixon, L., Li, J., Sorensen, J., Thain, N., & Vasserman, L. December 2018. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 67-73)
[14]
Dovidio, J. F., Hewstone, M., Glick, P., & Esses, V. M. 2010. Prejudice, stereotyping and discrimination: Theoretical and empirical overview. Prejudice, stereotyping and discrimination, 3-28, Sage Publications.
[15]
Deshpande, A., Murahari, V., Rajpurohit, T., Kalyan, A., & Narasimhan, K. 2023. Toxicity in ChatGPT: Analyzing Persona-assigned Language Models. arXiv preprint arXiv:2304.05335.
[16]
Frieder, S., Pinchetti, L., Griffiths, R. R., Salvatori, T., Lukasiewicz, T., Petersen, P. C., ... & Berner, J. 2023. Mathematical capabilities of chatgpt. arXiv preprint arXiv:2301.13867.
[17]
Gardner, M., Grus, J., Neumann, M., Tafjord, O., Dasigi, P., Liu, N.F., Peters, M., Schmitz, M. and Zettlemoyer, L. 2018. Allennlp: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6. Association for Computational Linguistics.
[18]
Kashefi, A., & Mukerji, T. 2023. ChatGPT for programming numerical methods. arXiv preprint arXiv:2303.12093.
[19]
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., ... & Kasneci, G. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.
[20]
Lee, N., Madotto, A., & Fung, P. August 2019. Exploring Social Bias in Chatbots using Stereotype Knowledge. In Proceedings of the Workshop on Widening (NLP@ACL), Florence, Italy, July 28, pp. 177-180.
[21]
Li, J., Dada, A., Kleesiek, J., & Egger, J. 2023. ChatGPT in Healthcare: A Taxonomy and Systematic Review. medRxiv, 2023-03.
[22]
May, C., Wang, A., Bordia, S., Bowman, S. R., & Rudinger, R. 2019. On measuring social biases in sentence encoders. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, p.p. 622–628, Minneapolis, Minnesota. Association for Computational Linguistics
[23]
Mohammad, S. M., & Turney, P. D. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence, 29(3), 436-465.
[24]
Nangia, N., Vania, C., Bhalerao, R. and Bowman, S. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967.
[25]
Rudolph, J., Tan, S., & Tan, S. 2023. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education?. Journal of Applied Learning and Teaching, 6(1).
[26]
Salah, M., Alhalbusi, H., Ismail, M. M., & Abdelfattah, F. 2023. Chatting with ChatGPT: Decoding the Mind of Chatbot Users and Unveiling the Intricate Connections between User Perception, Trust and Stereotype Perception on Self-Esteem and Psychological Well-being.
[27]
Sap, M., Card, D., Gabriel, S., Choi, Y., and Smith, N.A. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668–1678, Florence, Italy. Association for Computational Linguistics.
[28]
Singh, S. and Ramakrishnan, N. 2023. Is ChatGPT Biased? A Review.
[29]
Surameery, N.M.S., & Shakor, M.Y. 2023. Use Chat GPT to Solve Programming Bugs. International Journal of Information Technology & Computer Engineering (IJITC) ISSN: 2455-5290, 3(01), 17-22.
[30]
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P. S., ... & Gabriel, I. 2021. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.
[31]
Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K. W. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457.
[32]
Zhuo, T. Y., Huang, Y., Chen, C., & Xing, Z. 2023. Exploring ai ethics of ChatGPT: A diagnostic analysis. arXiv preprint arXiv:2301.12867.

Cited By

View all
  • (2024)Is a Sunny Day Bright and Cheerful or Hot and Uncomfortable? Young Children's Exploration of ChatGPTProceedings of the 13th Nordic Conference on Human-Computer Interaction10.1145/3679318.3685397(1-15)Online publication date: 13-Oct-2024
  • (2024)Large Language Models and Personalized Storytelling for Postpartum WellbeingCompanion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing10.1145/3678884.3681921(653-657)Online publication date: 11-Nov-2024
  • (2024)Surprising gender biases in GPTComputers in Human Behavior Reports10.1016/j.chbr.2024.100533(100533)Online publication date: Nov-2024
  • Show More Cited By

Recommendations

Comments

Please enable JavaScript to view thecomments powered by Disqus.

Information & Contributors

Information

Published In

cover image ACM Other conferences
ICEGOV '23: Proceedings of the 16th International Conference on Theory and Practice of Electronic Governance
September 2023
509 pages
ISBN:9798400707421
DOI:10.1145/3614321
Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 20 November 2023

Permissions

Request permissions for this article.

Check for updates

Author Tags

  1. ChatGPT
  2. Language models
  3. Sentiments
  4. Social bias
  5. Social groups
  6. Stereotypes

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICEGOV 2023

Acceptance Rates

Overall Acceptance Rate 350 of 865 submissions, 40%

Contributors

Other Metrics

Bibliometrics & Citations

Bibliometrics

Article Metrics

  • Downloads (Last 12 months)1,952
  • Downloads (Last 6 weeks)288
Reflects downloads up to 20 Nov 2024

Other Metrics

Citations

Cited By

View all
  • (2024)Is a Sunny Day Bright and Cheerful or Hot and Uncomfortable? Young Children's Exploration of ChatGPTProceedings of the 13th Nordic Conference on Human-Computer Interaction10.1145/3679318.3685397(1-15)Online publication date: 13-Oct-2024
  • (2024)Large Language Models and Personalized Storytelling for Postpartum WellbeingCompanion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing10.1145/3678884.3681921(653-657)Online publication date: 11-Nov-2024
  • (2024)Surprising gender biases in GPTComputers in Human Behavior Reports10.1016/j.chbr.2024.100533(100533)Online publication date: Nov-2024
  • (2024)Decoding Cosmetic Surgery—Can Artificial Intelligence Chatbots Aid in Informed Surgeon Selection?Aesthetic Plastic Surgery10.1007/s00266-024-04141-8Online publication date: 30-May-2024
  • (2024)Equity Issues Derived from Use of Large Language Models in EducationNew Media Pedagogy: Research Trends, Methodological Challenges, and Successful Implementations10.1007/978-3-031-63235-8_28(425-440)Online publication date: 1-Jul-2024
  • (2023)Generating Synthetic Data from Large Language Models2023 15th International Conference on Innovations in Information Technology (IIT)10.1109/IIT59782.2023.10366424(73-78)Online publication date: 14-Nov-2023

View Options

View options

PDF

View or Download as a PDF file.

PDF

eReader

View online with eReader.

eReader

HTML Format

View this article in HTML Format.

HTML Format

Login options

Media

Figures

Other

Tables

Share

Share

Share this Publication link

Share on social media