
“It would be pretty immoral to choose a random algorithm”: Opening up algorithmic interpretability and transparency

Helena Webb (Department of Computer Science, University of Oxford, Oxford, UK)
Menisha Patel (Department of Computer Science, University of Oxford, Oxford, UK)
Michael Rovatsos (School of Informatics, University of Edinburgh, Edinburgh, UK)
Alan Davoust (Department of Computer Science and Engineering, Université du Québec en Outaouais, Gatineau, Québec, Canada)
Sofia Ceppi (Prowler.io, Cambridge, UK)
Ansgar Koene (Horizon Institute of Digital Economy Research, University of Nottingham, Nottingham, UK)
Liz Dowthwaite (Horizon Institute of Digital Economy Research, University of Nottingham, Nottingham, UK)
Virginia Portillo (Horizon Institute of Digital Economy Research, University of Nottingham, Nottingham, UK)
Marina Jirotka (Department of Computer Science, University of Oxford, Oxford, UK)
Monica Cano (Horizon Institute of Digital Economy Research, University of Nottingham, Nottingham, UK)

Journal of Information, Communication and Ethics in Society

ISSN: 1477-996X

Article publication date: 9 April 2019

Issue publication date: 4 September 2019

Abstract

Purpose

The purpose of this paper is to report on empirical work conducted to open up algorithmic interpretability and transparency. In recent years, significant concerns have arisen regarding the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives. Particularly problematic is the lack of transparency surrounding the development of these algorithmic systems and their use. It is often suggested that to make algorithms fairer, they should be made more transparent, but exactly how this can be achieved remains unclear.

Design/methodology/approach

An empirical study was conducted to begin unpacking issues around algorithmic interpretability and transparency. The study involved discussion-based experiments centred on a limited-resource allocation scenario that required participants to select their most and least preferred algorithms in a particular context. In addition to collecting quantitative data about preferences, qualitative data captured participants’ expressed reasoning behind their selections.

Findings

Even when provided with the same information about the scenario, participants selected different algorithms as most and least preferred and rationalised their choices differently. The results revealed diversity in participants’ responses but consistency in the emphasis they placed on normative concerns and on the importance of context when accounting for their selections. The issues participants raised as important to their choices resonate closely with values that have come to the fore in current debates over the prevalence of algorithms.

Originality/value

This work developed a novel empirical approach that demonstrates the value of pursuing algorithmic interpretability and transparency, while also highlighting the complexities involved in achieving them.

Acknowledgements

The authors would like to acknowledge the contribution of all research participants who took part in this study. The research reported here formed part of the EPSRC-funded project “UnBias: Emancipating users against algorithmic biases for a trusted digital economy” (EPSRC reference EP/N02785X/1).

Citation

Webb, H., Patel, M., Rovatsos, M., Davoust, A., Ceppi, S., Koene, A., Dowthwaite, L., Portillo, V., Jirotka, M. and Cano, M. (2019), "“It would be pretty immoral to choose a random algorithm”: Opening up algorithmic interpretability and transparency", Journal of Information, Communication and Ethics in Society, Vol. 17 No. 2, pp. 210-228. https://doi.org/10.1108/JICES-11-2018-0092

Publisher

Emerald Publishing Limited

Copyright © 2019, Emerald Publishing Limited
