From computerised thing to digital being: mission (Im)possible?

  • Open Forum
AI & SOCIETY

Abstract

Artificial intelligence (AI) is one of the main drivers of what has been described as the “Fourth Industrial Revolution”, as well as the most innovative technology developed to date. It is a pervasive, transformative innovation that calls for a new approach. In 2017, the European Parliament introduced the notion of the “electronic person”, which sparked intense debate in philosophical, legal, technological, and other academic settings. The issues related to AI should be examined from an interdisciplinary perspective. In this paper, we examine this legal innovation proposed by the European Parliament from not only a legal but also a technological point of view. In the first section, we define AI and analyse its main characteristics. We argue that, from a technical perspective, it appears premature and probably inappropriate to introduce AI personhood now. In the second section, the justifications for the European Parliament’s proposal are explored and contrasted with the opposing arguments that have been presented. Because the existing mechanisms of liability could prove insufficient in scenarios where AI systems cause harm, especially when AI algorithms learn and evolve on their own, there is a need to depart from traditional liability theories.


Notes

  1. In this paper we do not discuss the issue of robot rights, but we agree with T. Jaynes’s view that there is a question to be asked about “how non-biological intelligence can gain citizenship in nations without a monarchist system of government or being based within a human subject”. “While non-biological intelligence systems may be granted citizenship, this citizenship does not provide many useful protections for the non-biological intelligence system—if any are granted at all, which is a subject that has not even been adequately addressed by the nation of Saudi Arabia” (Jaynes 2019, 8).

  2. As mentioned above, we do not discuss the question of robot rights, but we agree with Dowell’s view that non-biological intelligence does not clearly fit into any extant category of persons; although the path forward is uncertain, legislators are beginning to explore the potential issues (Dowell 2018). The foundation, however, is being laid now; see, e.g., Gunkel (2018).

  3. (European Commission 2018a, b; European Commission’s High-Level Expert Group on Artificial Intelligence 2018, 2019).

  4. For example, the European Commission’s High-Level Expert Group has issued a nine-page document devoted solely to the definition of AI (European Commission’s High-Level Expert Group on Artificial Intelligence 2018).

  5. It must be noted that the term “software” is not defined in EU law; Directive 2009/24/EC provides a definition of “computer program” in its recitals. The two terms are often used interchangeably, but there is a technical difference: a “program” is a set of instructions that tells a computer what to do, while “software” can be made up of more than one program.

  6. When our article was ready for publication, the EC Joint Research Centre (JRC) released, in February 2020, the technical report “AI Watch. Defining Artificial Intelligence”, in which the JRC also attempts to define the object under consideration. Having reviewed 55 documents that address the AI domain from different perspectives, the authors of the report, not surprisingly, take the definition of AI proposed by the HLEG on AI as the starting point for further development, but also assert that, “considering that the HLEG definition is comprehensive, hence highly technical and detailed, […] the definitions provided by the EC JRC Flagship report on AI (2018) […] are suitable alternatives” (Samoili et al. 2020: 9). However, AI Watch also asserts that “despite the increased interest in AI by the academia, industry and public institutions, there is no standard definition of what AI actually involves … human intelligence is also difficult to define and measure … as a consequence, most definitions found in research, policy or market reports are vague and propose an ideal target rather than a measurable research concept” (Samoili et al. 2020: 7).

  7. Max Tegmark proposed the hypothesis that our physical reality is a mathematical structure (Tegmark 2014). Steven Pinker argued that intelligence does not come from a special kind of spirit, matter or energy, but from a different commodity, namely information. Information is a correlation between two things that is produced by a lawful process, and correlation is a mathematical and logical concept (Pinker 1998); a small numerical illustration follows this note. However, Cathy O’Neil demonstrates the opposite, namely that many algorithms are not inherently fair just because they have a mathematical basis (O’Neil 2016).
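     A minimal numerical sketch of this point, assuming nothing beyond the Python standard library (the data and function names are our own illustration, not drawn from Pinker 1998): mutual information, a purely mathematical quantity, measures exactly the kind of lawful correlation that Pinker identifies with information.

     import math
     from collections import Counter

     def mutual_information(pairs):
         """Empirical mutual information (in bits) of a list of (x, y) samples."""
         n = len(pairs)
         joint = Counter(pairs)
         px = Counter(x for x, _ in pairs)
         py = Counter(y for _, y in pairs)
         return sum(
             (nxy / n) * math.log2((nxy / n) / ((px[x] / n) * (py[y] / n)))
             for (x, y), nxy in joint.items()
         )

     # A "lawful process": y deterministically mirrors x, so y is fully informative about x.
     correlated = [(b, b) for b in (0, 1, 0, 1, 1, 0, 0, 1)]
     independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 2

     print(mutual_information(correlated))   # 1.0 bit: perfect correlation
     print(mutual_information(independent))  # 0.0 bits: no correlation, no information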

  8. In 1950, A. Turing introduced his famous test in the paper “Computing Machinery and Intelligence”. Turing himself did not call his idea the “Turing test” but rather the “Imitation Game”. He opens with the words: “I propose to consider the question, ‘Can machines think?’”, but because “thinking” is difficult to define, he chooses to replace the question by another, which is closely related to it and is expressed in relatively unambiguous words (Turing 1950). A toy sketch of the protocol follows this note.
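     The following toy sketch of the Imitation Game protocol is our own hypothetical illustration; the respondent stand-ins and the naive judge are invented for this note, and only the specimen arithmetic exchange (with its deliberately wrong answer) is taken from Turing (1950).

     import random

     def human(question):
         # Hypothetical stand-in: a human respondent who dodges arithmetic.
         return "Count me out on sums; ask me about poetry instead."

     def machine(question):
         # Echoes the specimen Q&A from Turing (1950), where the answer
         # 105621 is deliberately wrong (the true sum is 105721).
         return "(Pause about 30 seconds) 105621"

     def imitation_game(judge, question):
         players = [human, machine]
         random.shuffle(players)                # the judge cannot see who is who
         transcript = {label: player(question) for label, player in zip("AB", players)}
         guess = judge(transcript)              # the judge decides from text alone
         unmasked = players[ord(guess) - ord("A")] is machine
         return guess, unmasked

     # A naive judge that accuses whichever respondent answers with digits.
     judge = lambda t: next(label for label, answer in t.items()
                            if any(ch.isdigit() for ch in answer))
     guess, unmasked = imitation_game(judge, "Add 34957 to 70764.")
     print(f"Judge accuses {guess}; machine unmasked: {unmasked}")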

  9. E.g., for W. Barfield the term “electronic person” is sufficient to describe a virtual avatar or virtual agent acting only in cyberspace (Barfield 2006), but it does not capture the full extent of applications of AI-based systems.

  10. An artificial agent may be instantiated by an optical, chemical, quantum or, indeed, biological—rather than an electronic—computing process.

  11. Autonomy is one of the main features of AI-based systems from a technological point of view, but it presents serious ontological issues (Chinen 2019). It must be noted that autonomy is presented as one of the main characteristics of AI, IoT and robotics technologies in the EC Report on the safety and liability of AI, IoT and robotics (European Commission 2020).

  12. In this paper we do not address questions of morality, but the opposite opinion is expressed by E. Schwitzgebel and M. Garza. Employing the notion of a psycho-social view of moral status, they suggest that it shouldn’t matter to one’s moral status what kind of body one has, except insofar as one’s body influences one’s psychological and social properties. Similarly, it shouldn’t matter to one’s moral status what kind of underlying architecture one has, except insofar as that architecture influences one’s psychological and social properties. Only psychological and social properties are directly relevant to moral status (Schwitzgebel and Garza 2015).

  13. In accordance with the formula suggested in the resolution, liability for damage caused by cognitive computing should be proportional to the actual level of instructions given to the robot and to its degree of autonomy, so that the greater a robot’s learning capability or autonomy, and the longer a robot’s training, the greater its trainer’s responsibility should be. A toy numerical sketch of this principle follows this note.
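     A toy numerical sketch of this proportionality principle, under our own assumptions: the normalisation of each factor to [0, 1] and the equal weighting of the three drivers are illustrative choices of ours, not the resolution’s actual formula.

     def trainer_liability_share(autonomy, learning_capability, training_years,
                                 max_training_years=10.0):
         """Trainer's share of liability; each factor is normalised to [0, 1]."""
         training = min(training_years / max_training_years, 1.0)
         # Equal-weight average of the three drivers named in the resolution.
         return (autonomy + learning_capability + training) / 3.0

     # A highly autonomous, long-trained robot shifts liability towards the trainer;
     # the remainder would fall on producer/operator under the applicable regime.
     print(trainer_liability_share(autonomy=0.9, learning_capability=0.8,
                                   training_years=6.0))   # -> 0.7666...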

  14. European AI strategy supports an ethical, secure, and cutting-edge AI made in Europe, and based on three pillars: (1) increasing public and private investments in AI; (2) preparing for socio-economic changes; and (3) ensuring an appropriate ethical and legal framework (European Commission 2018c: 1).

  15. This includes the capacity to bear fundamental legal rights and duties, such as citizenship, free speech and privacy, and to sue and be sued, enter into contracts, own property, inherit, and so on.

  16. For example, pacemakers are active implants and as such come within the scope of the Active Medical Devices Directive. To place such devices on the European market, manufacturers must go through a conformity assessment procedure undertaken by a certification body. After the products have been placed on the market, notified bodies continue their supervisory role and must carry out regular inspections (Leeuwen and Verbruggen 2015: 912–913).

  17. Some legal liabilities cannot be met by insurance (Solum 2017: 1245).

  18. For example, Union product safety legislation does not address human oversight in the context of self-learning AI, nor the safety risks derived from faulty data (p. 8) or the increasing risks derived from the opacity of self-learning systems. Given the fragmentation of liability regimes and the significant differences between the tort laws of the Member States, the outcome of cases will often differ depending on which jurisdiction applies. Damage caused by self-learning algorithms on financial markets often remains uncompensated, because some legal systems do not provide tort law protection of such interests at all (European Commission 2020: 19). Emerging digital technologies make it difficult to apply fault-based liability rules, due to the lack of well-established models of the proper functioning of these technologies and the possibility of their developing as a result of learning without direct human control (European Commission 2020: 23).

  19. Complexity is reflected in the plurality of economic operators involved in the supply chain, the multiplicity of components, software and services, and the interconnectivity with other devices (European Commission 2020: 2).

  20. For an ex-post mechanism of enforcement, it is decisive that humans are able to understand how the algorithmic decisions of the artificially intelligent agent were reached. Self-learning artificially intelligent agents will be able to take decisions that deviate from what was initially intended by their producers. The producer’s control may be limited if the product’s operation requires software or data provided by third parties or collected from the environment, and depends on self-learning processes and personalising settings chosen by the user. This dilutes the traditional role of a producer, as a multitude of actors contribute to the design, functioning and use of the AI product/system (European Commission 2020: 28). A minimal sketch of such decision traceability follows this note.
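     A minimal sketch of such ex-post traceability, as a hypothetical design of our own (the class, fields and placeholder policy are invented for illustration and do not come from the EC report): the agent appends, for every decision, the inputs, model version and user settings that produced it, so a human can later reconstruct how the decision was reached.

     import json
     import time

     class AuditedAgent:
         def __init__(self, model_version, user_settings, log_path="decisions.log"):
             self.model_version = model_version    # changes after updates/self-learning
             self.user_settings = user_settings    # personalising settings chosen by the user
             self.log_path = log_path

         def decide(self, inputs):
             decision = self._policy(inputs)
             record = {                            # everything needed for ex-post review
                 "timestamp": time.time(),
                 "model_version": self.model_version,
                 "user_settings": self.user_settings,
                 "inputs": inputs,
                 "decision": decision,
             }
             with open(self.log_path, "a") as log: # append-only audit trail
                 log.write(json.dumps(record) + "\n")
             return decision

         def _policy(self, inputs):                # placeholder for a learned model
             return "brake" if inputs.get("obstacle_distance_m", 1e9) < 5 else "cruise"

     agent = AuditedAgent(model_version="2.3.1-selflearned", user_settings={"mode": "eco"})
     agent.decide({"obstacle_distance_m": 3.2})    # logged, then returns "brake"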

  21. AI products remain open to updates, upgrades and self-learning data after their placement on the market.

  22. The more complex the interplay of the various factors that jointly or separately contributed to the damage, and the more the crucial links in the chain of events lie within the defendant’s control, the more difficult it will be for the victim to succeed in establishing causation (European Commission 2020: 20).

  23. The more complex the circumstances leading to the victim’s harm are, the harder it is to identify relevant evidence. It can be difficult and costly to identify a bug in a long and complicated software code. Examining the process leading to a specific result (how the input data led to the output data) may be difficult, very time-consuming and expensive (European Commission 2019: 24).

  24. Only the strict liability of producers for defective products is harmonized at EU level by the Product Liability Directive, while all other regimes are regulated by the Member States themselves (European Commission 2019: 3).


Funding

This research is funded by the European Social Fund under the activity “Improvement of Researchers’ Qualification by Implementing World-Class R&D Projects” of Measure No. 09.3.3-LMT-K-712. Special acknowledgment is due to Prof. J. Gordon (the chief researcher of this project) for his valuable ideas and contribution to this research.

Author information

Corresponding author

Correspondence to Darius Amilevičius.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Kiršienė, J., Gruodytė, E. & Amilevičius, D. From computerised thing to digital being: mission (Im)possible?. AI & Soc 36, 547–560 (2021). https://doi.org/10.1007/s00146-020-01051-6

  • DOI: https://doi.org/10.1007/s00146-020-01051-6
