AI, big data, and the future of consent

Open Forum | AI & SOCIETY

Abstract

In this paper, we discuss several problems with current Big Data practices which, we claim, seriously erode the role of informed consent as it pertains to the use of personal information. To illustrate these problems, we consider how the notion of informed consent has been understood and operationalised in the ethical regulation of biomedical research (and medical practices, more broadly) and compare this with current Big Data practices. We do so by first discussing three types of problems that can impede informed consent with respect to Big Data use. First, we discuss the transparency (or explanation) problem. Second, we discuss the repurposed data problem. Third, we discuss the meaningful alternatives problem. In the final section of the paper, we suggest some solutions to these problems. In particular, we propose that the use of personal data for commercial and administrative objectives could be subject to a ‘soft governance’ ethical regulation, akin to the way that all projects involving human participants (e.g., social science projects, human medical data and tissue use) are regulated in Australia through the Human Research Ethics Committees (HRECs). We also consider alternatives to standard consent forms and privacy policies that could make use of some of the latest research focussed on the usability of pictorial legal contracts.


Availability of data and material

Not applicable.

Code availability

Not applicable.

Notes

  1. One problem with the term ‘Big Data’, as Luciano Floridi (2012) points out, is that it is slightly ambiguous, since the predicate ‘big’ is vague. In other words, there is no precise point at which a dataset changes from small to big. In this paper we will use the term ‘Big Data’ in the sense that is most commonly adopted at present—namely, to describe data sets (of ever-increasing sizes) that are too big for humans to analyse for the purpose of identifying new patterns, correlations, and insights. AI algorithms become useful in these domains due to the speed and scale at which they can operate. An ethical issue arises here because AI algorithms have the potential to reveal novel forms of personal information from such data sets. Individuals may have a strong desire for such personal information not to be made public, shared with third parties, or used to modify their behaviour. In short, there is a risk that serious harm can be caused to individuals by the improper use of AI and Big Data. For a more precise definition of Big Data, see Levin et al. (2015). They characterise Big Data in terms of four key attributes—namely Volume, which refers to the terabytes of new data being added each day; Velocity, which refers to the real-time speed at which analyses can now be performed on these data; Variety, which refers to the different types of data, and variety of sources, that are now being collected; and Veracity, which pertains to the trustworthiness of the data sources (Levin et al. 2015, pp. 1661–1662).

  2. In a literature review on the ethics of Big Data, Mittelstadt and Floridi (2016) found that informed consent was one of researchers’ biggest concerns.

  3. See also the ‘National Statement on Ethical Conduct in Human Research’ (National Health and Medical Research Council 2007 [updated 2018]). This statement provides ethical guidance for Australian researchers whose work involves human subjects.

  4. It is also worth remembering that the concept of informed consent has not always been considered integral to medical ethics. If we look at the Hippocratic physicians of ancient Greece, we not only find a lack of concern for informed consent but also an absence of concern for the truth. The Corpus Hippocraticum (the corpus of early medical texts associated with Hippocrates), for example, for all its innovation and focus on the responsibilities of physicians, features instructions to conceal information from the patient where doing so would be useful (Faden and Beauchamp 1986, p. 61).

    Interestingly, this formulation echoes the findings of the High Court of Australia in the landmark medical negligence case of Rogers v Whitaker (1992) 175 CLR 479. The issue was whether the failure to warn a patient, who was about to undergo eye surgery, of a very unlikely risk constituted negligence on the part of the surgeon. With this decision the court moved past the traditional ‘doctor knows best’ approach (whereby the decision on whether or not to warn of a certain risk fell within the discretion of the health professional) and embraced a doctrine that upholds the autonomy of the individual patient and their ability to attach significance to particular risks (Sappideen 2010).

  5. See Australian Competition and Consumer Commission (2019) for the details of this report.

  6. This is most evident in Article 15 of the GDPR—‘Right of access by the data subject’—where it is stated that data subjects have the right to (i) obtain information about what their data will be used for; (ii) know which parties have access to their data; and (iii) know how long their data will be stored (GDPR 2018). For a recent critique of the GDPR’s capacity to ensure that a right to an explanation is secured, see Wachter et al. (2017). They argue that the GDPR does not, in its current form, give data subjects a right to an explanation, because the document’s language is ambiguous in parts. In their article, they make recommendations about how this issue can be resolved.

  7. In Australia, at present, this is primarily a moral problem, as the protections currently afforded by the Privacy Act 1988 (Cth) are minimal, while in Europe it is also a legal problem (see GDPR 2018). As mentioned above, the recent ‘Digital Platforms Inquiry’ conducted by the ACCC (Australian Competition and Consumer Commission 2019) found current Australian legislative and regulatory protections wanting on a number of levels, and made recommendations to depart from exclusive reliance on principle-based regulation and add more rule-based protective requirements, some of which are inspired by the GDPR (2018).

  8. A risk assessment of this kind need not be an elaborate one, of course. It is the kind we perform in our everyday lives. For example, when one gets into a car, one (should) know that there is a small chance of crashing and being seriously injured. Most of us continue to travel by car, however, because it is convenient and the probability of crashing is typically low. When new information is presented to us, however, we may need to revise such probabilities. For example, if one learns that the driver of a car one is about to get into is inebriated, or does not possess a driver’s licence, one would typically not consent to being driven home by them. For most people it would simply be too risky: the now much higher probability of injury means that the expected harm significantly outweighs the gain (in this case, convenience); a minimal formal sketch of this comparison is given after these notes. The same reasoning applies to repurposed data. If repurposing data introduces new risks for data subjects, it is not fair to subject them to those risks unless they first know about the risks and have agreed to proceed anyway.

  9. Furthermore, ethical approval of data-use policies by an independent HREC-like body could become the new frontier of the fast-growing field of ‘corporate ethics’, potentially leading to legislative reform. While it is worth signalling this aspect of the issue, it is beyond the scope of this paper to provide an in-depth analysis of corporate ethics in this space.

  10. The idea of a paradigm shift is from Thomas Kuhn (1962), who applies it to scientific revolutions.
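To make the informal risk calculus in note 8 explicit, here is a minimal sketch (our illustration, not a formalism from the paper). On this rough model, consenting to an activity is reasonable when its expected benefit exceeds its expected harm:

\[
  b > p \cdot h,
\]

where \(b\) is the benefit of the activity (e.g., the convenience of the car trip), \(p\) is the probability of the harmful outcome, and \(h\) is the magnitude of the harm. Learning that the driver is inebriated raises \(p\) and can flip the inequality; analogously, repurposing data may raise \(p\) or \(h\) for data subjects, which is why note 8 insists that they be informed of the new risks before consenting.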


Funding

Not applicable.

Author information


Corresponding author

Correspondence to Adam J. Andreotta.

Ethics declarations

Conflict of interest

Not applicable.

Ethics approval

Not applicable.

Consent to participate

Not applicable.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Andreotta, A.J., Kirkham, N. & Rizzi, M. AI, big data, and the future of consent. AI & Soc 37, 1715–1728 (2022). https://doi.org/10.1007/s00146-021-01262-5

