Abstract
This chapter summarizes the contributions of Ricardo Baeza-Yates, Francesco Bonchi, Kate Crawford, Laurence Devillers and Eric Salobir to the session on AI & Human Values, chaired by Françoise Fogelman-Soulié, at the Global Forum on AI for Humanity. It provides an overview of key concepts and definitions relevant to the study of inequalities and Artificial Intelligence. It then presents and discusses concrete examples of inequalities produced by AI systems, highlighting their variety and potential harmfulness. Finally, it concludes by discussing how placing human values at the core of AI raises many questions that remain open for further research.
Notes
- 5. See also chapter 2 of this book.
- 8. See also chapter 3 of this book.
- 17. See also chapters 9, 10 and 11 of this book.
- 18. See also chapters 1 and 15 of this book.
© 2021 Springer Nature Switzerland AG
Cite this chapter
Devillers, L., Fogelman-Soulié, F., Baeza-Yates, R. (2021). AI & Human Values. In: Braunschweig, B., Ghallab, M. (eds.) Reflections on Artificial Intelligence for Humanity. Lecture Notes in Computer Science, vol. 12600. Springer, Cham. https://doi.org/10.1007/978-3-030-69128-8_6
Print ISBN: 978-3-030-69127-1
Online ISBN: 978-3-030-69128-8