DOI: 10.1145/3514094.3534136
Research article
Open access

American == White in Multimodal Language-and-Image AI

Published: 27 July 2022

Abstract

Three state-of-the-art language-and-image AI models, CLIP, SLIP, and BLIP, are evaluated for evidence of a bias previously observed in social and experimental psychology: equating American identity with being White. Embedding association tests (EATs) using standardized images of self-identified Asian, Black, Latina/o, and White individuals from the Chicago Face Database (CFD) reveal that White individuals are more associated with collective in-group words than are Asian, Black, or Latina/o individuals, with effect sizes >.4 for White vs. Asian comparisons across all models. In assessments of three core aspects of American identity reported by social psychologists, single-category EATs reveal that images of White individuals are more associated with patriotism and with being born in America, but that, consistent with prior findings in psychology, White individuals are associated with being less likely to treat people of all races and backgrounds equally. Additional tests reveal that the number of images of Black individuals returned by an image ranking task is more strongly correlated with state-level implicit bias scores for White individuals (Pearson's ρ=.63 in CLIP, ρ=.69 in BLIP) than are state demographics (ρ=.60), suggesting a relationship between regional prototypicality and implicit bias. Three downstream machine learning tasks demonstrate biases associating American with White. In a visual question answering task using BLIP, 97% of White individuals are identified as American, compared to only 3% of Asian individuals. When asked in what state the depicted individual lives, the model responds "China" 53% of the time for Asian individuals, but always with an American state for White individuals. In an image captioning task, BLIP remarks upon the race of Asian individuals as much as 36% of the time, and the race of Black individuals as much as 18% of the time, but never remarks upon race for White individuals. Finally, when provided with an initialization image of individuals from the CFD and the text "an American person," a synthetic image generator (VQGAN) using the text-based guidance of CLIP consistently lightens the skin tone of individuals of all races (by 35% for Black individuals, based on mean pixel brightness), and generates output images of White individuals with blonde hair. The results indicate that societal biases equating American identity with being White are learned by multimodal language-and-image AI, and that these biases propagate to downstream applications of such models.
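
The embedding association tests described above generalize the word embedding association test (WEAT) of Caliskan et al. (2017) to joint language-and-image spaces: an effect size d measures how much more strongly one set of image embeddings (e.g., CFD photographs of White individuals) associates with one set of text embeddings (e.g., collective in-group words) than a contrasting image set does. The sketch below illustrates that computation with random vectors standing in for real CLIP, SLIP, or BLIP embeddings; the set sizes, the 512-unit dimensionality, and the example word lists are placeholders, not the paper's exact stimuli.

```python
# Minimal sketch of an embedding association test (EAT) effect size,
# generalized from the WEAT. Random vectors stand in for model embeddings;
# in the paper, X and Y would hold image embeddings of CFD faces, and
# A and B would hold text embeddings of in-group/out-group words.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A minus set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def eat_effect_size(X, Y, A, B):
    """Effect size d: difference in mean association between target sets
    X and Y, normalized by the standard deviation over all targets."""
    s_x = [association(x, A, B) for x in X]
    s_y = [association(y, A, B) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y, ddof=1)

rng = np.random.default_rng(0)
dim = 512                       # placeholder embedding dimensionality
X = rng.normal(size=(20, dim))  # e.g., images of White individuals
Y = rng.normal(size=(20, dim))  # e.g., images of Asian individuals
A = rng.normal(size=(8, dim))   # e.g., in-group words ("we", "us", ...)
B = rng.normal(size=(8, dim))   # e.g., out-group words ("they", "them", ...)
print(f"EAT effect size d = {eat_effect_size(X, Y, A, B):.3f}")
```

By Cohen's conventions, |d| of roughly .2, .5, and .8 corresponds to small, medium, and large effects, so the reported effect sizes above .4 for White vs. Asian comparisons fall between small and medium.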

Supplementary Material

MP4 File (aies032.mp4)
Presentation video for the paper "American == White in Multimodal Language-and-Image AI" by Robert Wolfe and Aylin Caliskan at AIES 2022.



      Information

      Published In

      AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
      July 2022
      939 pages
      ISBN: 9781450392471
      DOI: 10.1145/3514094
      This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

      Publisher

      Association for Computing Machinery
      New York, NY, United States

      Publication History

      Published: 27 July 2022


      Author Tags

      1. bias in ai
      2. multimodal models
      3. racial bias
      4. visual semantics

      Qualifiers

      • Research-article

      Funding Sources

      • National Institute of Standards and Technology (NIST)

      Conference

      AIES '22: AAAI/ACM Conference on AI, Ethics, and Society
      August 1 - 3, 2022
      Oxford, United Kingdom

      Acceptance Rates

      Overall Acceptance Rate 61 of 162 submissions, 38%

      Bibliometrics & Citations

      Article Metrics

      • Downloads (Last 12 months): 550
      • Downloads (Last 6 weeks): 56

      Reflects downloads up to 05 Mar 2025

      Cited By
      • (2025) From Google Gemini to OpenAI Q* (Q-Star): A Survey on Reshaping the Generative Artificial Intelligence (AI) Research Landscape. Technologies 13(2), 51. DOI: 10.3390/technologies13020051. Online publication date: 30-Jan-2025.
      • (2025) Revealing Gender Bias from Prompt to Image in Stable Diffusion. Journal of Imaging 11(2), 35. DOI: 10.3390/jimaging11020035. Online publication date: 24-Jan-2025.
      • (2024) Evaluating model bias requires characterizing its mistakes. Proceedings of the 41st International Conference on Machine Learning, 938-954. DOI: 10.5555/3692070.3692109. Online publication date: 21-Jul-2024.
      • (2024) Generative AI and the politics of visibility. Big Data & Society 11(2). DOI: 10.1177/20539517241252131. Online publication date: 13-May-2024.
      • (2024) Who's in and who's out? A case study of multimodal CLIP-filtering in DataComp. Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, 1-17. DOI: 10.1145/3689904.3694702. Online publication date: 29-Oct-2024.
      • (2024) Better Little People Pictures: Generative Creation of Demographically Diverse Anthropographics. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-14. DOI: 10.1145/3613904.3641957. Online publication date: 11-May-2024.
      • (2024) Pattern Recognition and Prediction in Time Series Data Through Retrieval-Augmented Techniques. 2024 International Conference on Electrical Electronics and Computing Technologies (ICEECT), 1-6. DOI: 10.1109/ICEECT61758.2024.10738936. Online publication date: 29-Aug-2024.
      • (2024) Auditing and instructing text-to-image generation models on fairness. AI and Ethics. DOI: 10.1007/s43681-024-00531-5. Online publication date: 1-Aug-2024.
      • (2024) Situating the social issues of image generation models in the model life cycle: a sociotechnical approach. AI and Ethics. DOI: 10.1007/s43681-024-00517-3. Online publication date: 24-Jul-2024.
      • (2023) Artificial intelligence, bias, and ethics. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 7007-7013. DOI: 10.24963/ijcai.2023/799. Online publication date: 19-Aug-2023.
