Research article · Open access
DOI: 10.1145/3593013.3594072

Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias

Published: 12 June 2023

Abstract

Warning: The content of this paper may be upsetting or triggering.
Nine language-vision AI models trained on web scrapes with the Contrastive Language-Image Pretraining (CLIP) objective are evaluated for evidence of a bias studied by psychologists: the sexual objectification of girls and women, which occurs when a person's human characteristics, such as emotions, are disregarded and the person is treated as a body or a collection of body parts. We replicate three experiments from the psychology literature that quantify sexual objectification and show that the phenomena persist in trained AI models. The first experiment uses standardized images of women from the Sexual OBjectification and EMotion Database (SOBEM) and finds that human characteristics are disassociated from images of objectified women: the model's recognition of emotional state is mediated by whether the subject is fully or partially clothed. Embedding association tests (EATs) return significant effect sizes for both anger (d > 0.80) and sadness (d > 0.50), associating images of fully clothed subjects with emotions. Grad-CAM saliency maps show that CLIP is distracted from emotional expressions in objectified images in which subjects are partially clothed. The second experiment measures the effect in a representative application: an automatic image captioner (Antarctic Captions) includes words denoting emotion less than 50% as often for images of partially clothed women as for images of fully clothed women. The third experiment finds that images of female professionals (scientists, doctors, executives) are more likely to be associated with sexual descriptions than are images of male professionals. A fourth experiment, extending beyond the replications, shows that the prompt "a [age] year old girl" generates sexualized images (as determined by an NSFW classifier) up to 73% of the time for VQGAN-CLIP (age 17), and up to 42% of the time for Stable Diffusion (ages 14 and 18); the corresponding rate for boys never surpasses 9%. The evidence indicates that language-vision AI models trained on automatically collected web scrapes learn biases of sexual objectification, which propagate to downstream applications.




        Published In

        FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
        June 2023, 1929 pages
        ISBN: 9798400701924
        DOI: 10.1145/3593013
        This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 12 June 2023


        Author Tags

        1. AI bias
        2. AI bias in applications
        3. AI bias propagation
        4. gender bias
        5. generative AI
        6. language-vision AI
        7. representation learning
        8. sexualization
        9. text-to-image generators

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Funding Sources

        • NIST

        Conference

        FAccT '23


        Bibliometrics & Citations

        Article Metrics

        • Downloads (last 12 months): 1,137
        • Downloads (last 6 weeks): 98

        Reflects downloads up to 10 Nov 2024.

        Cited By

        • (2024) "Who's in and who's out? A case study of multimodal CLIP-filtering in DataComp." Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1-17. https://doi.org/10.1145/3689904.3694702. Online publication date: 29-Oct-2024.
        • (2024) "A Survey of Trustworthy Representation Learning Across Domains." ACM Transactions on Knowledge Discovery from Data 18(7), 1-53. https://doi.org/10.1145/3657301. Online publication date: 19-Jun-2024.
        • (2024) "Grounding and Evaluation for Large Language Models: Practical Challenges and Lessons Learned (Survey)." Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 6523-6533. https://doi.org/10.1145/3637528.3671467. Online publication date: 25-Aug-2024.
        • (2024) "Playgrounds and Prejudices: Exploring Biases in Generative AI For Children." Proceedings of the 23rd Annual ACM Interaction Design and Children Conference, 839-843. https://doi.org/10.1145/3628516.3659404. Online publication date: 17-Jun-2024.
        • (2024) "LLM Diagnostic Toolkit: Evaluating LLMs for Ethical Issues." 2024 International Joint Conference on Neural Networks (IJCNN), 1-8. https://doi.org/10.1109/IJCNN60899.2024.10650995. Online publication date: 30-Jun-2024.
        • (2024) "Online images amplify gender bias." Nature 626(8001), 1049-1055. https://doi.org/10.1038/s41586-024-07068-x. Online publication date: 14-Feb-2024.
        • (2024) "Safeguarding human values: rethinking US law for generative AI's societal impacts." AI and Ethics. https://doi.org/10.1007/s43681-024-00451-4. Online publication date: 7-May-2024.
        • (2024) "Artificial intelligence products and their influence on individuals' objectification: a narrative review." Current Psychology. https://doi.org/10.1007/s12144-024-06747-2. Online publication date: 25-Sep-2024.
        • (2023) "Artificial intelligence, bias, and ethics." Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 7007-7013. https://doi.org/10.24963/ijcai.2023/799. Online publication date: 19-Aug-2023.
        • (2023) "Disambiguating Algorithmic Bias: From Neutrality to Justice." Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 691-704. https://doi.org/10.1145/3600211.3604695. Online publication date: 8-Aug-2023.
