DOI: 10.1145/3687272.3690877

Designing Bias-Suppressing Robots for 'Fair' Robot-Moderated Human-Human Interactions

Published: 24 November 2024

Abstract

Research has shown that data-driven robots deployed in social settings are likely to inadvertently perpetuate systemic social biases. At the same time, robots can also be deployed to promote fair behaviour in humans. These phenomena have led to two broad sub-disciplines in HRI concerning 'fairness': a data-centric approach to ensuring that robots operate fairly, and a human-centric approach that uses robots as interventions to promote fairness in society. To date, these two fields have developed independently, so it remains unknown how data-driven robots can be used to suppress biases in human-human interactions. In this paper, we present a conceptual framework and a hypothetical example of how robots might deploy data-driven fairness interventions to actively suppress social biases in human-human interactions.
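To make the idea of a data-driven fairness intervention concrete, here is a minimal illustrative sketch (not from the paper; all names and thresholds are hypothetical assumptions): a robot moderator tracks each participant's share of speaking time in a group discussion and, when the gap between the most and least active participants exceeds a parity threshold, invites the quietest participant to speak.

```python
# Hypothetical sketch of a robot-moderated fairness intervention.
# All function names and the parity threshold are illustrative, not
# taken from the paper.

def participation_shares(speaking_time):
    """Map each participant to their fraction of total speaking time."""
    total = sum(speaking_time.values())
    return {p: t / total for p, t in speaking_time.items()}

def choose_intervention(speaking_time, parity_gap=0.2):
    """Return the participant the robot should invite to speak next,
    or None if participation is already roughly balanced."""
    shares = participation_shares(speaking_time)
    quietest = min(shares, key=shares.get)
    loudest = max(shares, key=shares.get)
    if shares[loudest] - shares[quietest] > parity_gap:
        return quietest
    return None

# Example: participant C has spoken far less than A and B,
# so the robot would prompt C.
times = {"A": 120.0, "B": 90.0, "C": 30.0}
print(choose_intervention(times))  # -> C
```

A real system would of course rest on richer signals (turn-taking, interruptions, demographic-aware fairness metrics such as those surveyed in the fairness-in-ML literature), but the loop structure, measure group behaviour, test against a fairness criterion, intervene, is the same.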



Published In

HAI '24: Proceedings of the 12th International Conference on Human-Agent Interaction
November 2024
502 pages
ISBN:9798400711787
DOI:10.1145/3687272
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Fairness in AI
  2. Human-Robot Interaction
  3. Robot-Moderated Human-Human Interaction
  4. Social Bias

Qualifiers

  • Poster
  • Research
  • Refereed limited

Conference

HAI '24: International Conference on Human-Agent Interaction
November 24-27, 2024
Swansea, United Kingdom

Acceptance Rates

Overall Acceptance Rate 121 of 404 submissions, 30%

