
DOI: 10.1145/3313831.3376315

Adhering, Steering, and Queering: Treatment of Gender in Natural Language Generation

Published: 23 April 2020

Abstract

Natural Language Generation (NLG) supports the creation of personalized, contextualized, and targeted content. However, the algorithms underpinning NLG have come under scrutiny for reinforcing gender, racial, and other problematic biases. Recent research in NLG seeks to remove these biases through principles of fairness and privacy. Drawing on gender and queer theories from sociology and Science and Technology Studies, we consider how NLG can contribute towards the advancement of gender equity in society. We propose a conceptual framework and technical parameters for aligning NLG with feminist HCI qualities. We present three approaches: (1) adhering to current approaches of removing sensitive gender attributes, (2) steering gender differences away from the norm, and (3) queering gender by troubling stereotypes. We discuss the advantages and limitations of these approaches across three hypothetical scenarios: newspaper headlines, job advertisements, and chatbots. We conclude by discussing considerations for implementing this framework and related ethical and equity agendas.
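To make the three approaches concrete, the following is a minimal, rule-based sketch in Python. It is not the authors' implementation (the paper is concerned with learned NLG models); the word lists, function names, and example headline below are illustrative assumptions only. "Adhering" neutralises explicit gender attributes, "steering" counterfactually swaps them away from the stereotyped norm, and "queering" deliberately mixes forms to trouble fixed gender assignments.

```python
import random

# Hypothetical gendered lexicons (illustrative only); a real NLG system would
# not rely on a surface-level word list.
NEUTRAL = {"he": "they", "she": "they", "his": "their", "her": "their",
           "chairman": "chairperson", "chairwoman": "chairperson"}
SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
        "chairman": "chairwoman", "chairwoman": "chairman"}

def adhere(tokens):
    """'Adhering': remove sensitive gender attributes by neutralising gendered words."""
    return [NEUTRAL.get(t.lower(), t) for t in tokens]

def steer(tokens):
    """'Steering': move output away from the stereotyped norm via counterfactual swaps."""
    return [SWAP.get(t.lower(), t) for t in tokens]

def queer(tokens, rng=random.Random(0)):
    """'Queering': trouble fixed gender assignments by mixing neutral and swapped forms."""
    return [rng.choice([NEUTRAL.get(t.lower(), t), SWAP.get(t.lower(), t)])
            for t in tokens]

# Toy "newspaper headline" scenario.
headline = "The chairman praised his assistant".split()
for strategy in (adhere, steer, queer):
    print(f"{strategy.__name__:>7}: {' '.join(strategy(headline))}")
```

A deployed system would apply these strategies inside the generation or rewriting model itself (for example, during decoding or via controlled text rewriting) rather than through a word list, which cannot handle context, proper names, or ambiguous pronouns such as object versus possessive "her".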





    Published In

    CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
    April 2020
    10688 pages
ISBN: 9781450367080
DOI: 10.1145/3313831
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. feminist hci
    2. natural language generation

    Qualifiers

    • Research-article

    Conference

    CHI '20

    Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions (24%)

Article Metrics

    • Downloads (Last 12 months): 117
    • Downloads (Last 6 weeks): 6

    Reflects downloads up to 01 Oct 2024

Cited By

    • (2024) Coimagining the Future of Voice Assistants with Cultural Sensitivity. Human Behavior and Emerging Technologies 2024, 1-21. DOI: 10.1155/2024/3238737. Online publication date: 25-Mar-2024.
    • (2024) Making Trouble: Techniques for Queering Data and AI Systems. Companion Publication of the 2024 ACM Designing Interactive Systems Conference, 381-384. DOI: 10.1145/3656156.3658393. Online publication date: 1-Jul-2024.
    • (2024) Partiality and Misconception: Investigating Cultural Representativeness in Text-to-Image Models. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-25. DOI: 10.1145/3613904.3642877. Online publication date: 11-May-2024.
    • (2024) The Usage of Voice in Sexualized Interactions with Technologies and Sexual Health Communication: An Overview. Current Sexual Health Reports 16(2), 47-57. DOI: 10.1007/s11930-024-00383-4. Online publication date: 27-Mar-2024.
    • (2023) Gender Nuances in Human-Computer Interaction Research. Proceedings of the XXII Brazilian Symposium on Human Factors in Computing Systems, 1-12. DOI: 10.1145/3638067.3638077. Online publication date: 16-Oct-2023.
    • (2023) When Biased Humans Meet Debiased AI: A Case Study in College Major Recommendation. ACM Transactions on Interactive Intelligent Systems 13(3), 1-28. DOI: 10.1145/3611313. Online publication date: 11-Sep-2023.
    • (2023) "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1246-1266. DOI: 10.1145/3593013.3594078. Online publication date: 12-Jun-2023.
    • (2023) TikTok as a Stage: Performing Rural #farmqueer Utopias on TikTok. Proceedings of the 2023 ACM Designing Interactive Systems Conference, 946-956. DOI: 10.1145/3563657.3596038. Online publication date: 10-Jul-2023.
    • (2023) "I'm" Lost in Translation: Pronoun Missteps in Crowdsourced Data Sets. Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1-6. DOI: 10.1145/3544549.3585667. Online publication date: 19-Apr-2023.
    • (2023) Transcending the "Male Code": Implicit Masculine Biases in NLP Contexts. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-19. DOI: 10.1145/3544548.3581017. Online publication date: 19-Apr-2023.
