Abstract
Artificial intelligence (AI) is one of the main drivers of what has been described as the “Fourth Industrial Revolution”, and arguably one of the most transformative technologies developed to date. It is a pervasive, transformative innovation that calls for a new regulatory approach. In 2017, the European Parliament introduced the notion of the “electronic person”, which sparked intense debate in philosophical, legal, technological, and other academic circles. The issues related to AI should be examined from an interdisciplinary perspective. In this paper, we examine this legal innovation proposed by the European Parliament from not only a legal but also a technological point of view. In the first section, we define AI and analyse its main characteristics. We argue that, from a technical perspective, it appears premature and probably inappropriate to introduce AI personhood now. In the second section, justifications for the European Parliament’s proposal are explored and contrasted with the opposing arguments that have been presented. As the existing mechanisms of liability could prove insufficient in scenarios where AI systems cause harm, especially when AI algorithms learn and evolve on their own, there is a need to depart from traditional liability theories.
Notes
In this paper we do not discuss the issue of robot rights, but we agree with T. Jaynes that there is a question to ask: “how non-biological intelligence can gain citizenship in nations without a monarchist system of government or being based within a human subject”. “While non-biological intelligence systems may be granted citizenship, this citizenship does not provide many useful protections for the non-biological intelligence system—if any are granted at all, which is a subject that has not even been adequately addressed by the nation of Saudi Arabia” (Jaynes 2019, 8).
As mentioned above, we do not discuss the question of robot rights, but we agree with Dowell that non-biological intelligence does not clearly fit into any extant category of persons; the path forward is uncertain, and legislators are only beginning to explore the potential issues (Dowell 2018). However, the foundation is being laid now; see, e.g., Gunkel (2018).
For example, the European Commission’s High-Level Expert Group has released a nine-page document devoted solely to the definition of AI (European Commission’s High-Level Expert Group on Artificial Intelligence 2018).
It must be noted that the term “software” is not defined in EU law; Directive 2009/24/EC provides a definition of “computer program” in its recitals. The two terms are often used interchangeably, but there is a technical difference: a “program” is a set of instructions that tells a computer what to do, while “software” can be made up of more than one program.
When our article was ready for publishing, the EC Joint Research Centre released, in February 2020, the technical report “AI Watch. Defining Artificial Intelligence”, in which the JRC also attempts to define the object under consideration. Having considered 55 documents that address the AI domain from different perspectives, the authors of this report, not surprisingly, take the definition of AI proposed by the HLEG on AI as the starting point for further development, but also assert that, “considering that the HLEG definition is comprehensive, hence highly technical and detailed, […] the definitions provided by the EC JRC Flagship report on AI (2018) […] are suitable alternatives” (Samoili et al. 2020: 9). But AI Watch also asserts that, “despite the increased interest in AI by the academia, industry and public institutions, there is no standard definition of what AI actually involves … human intelligence is also difficult to define and measure … as a consequence, most definitions found in research, policy or market reports are vague and propose an ideal target rather than a measurable research concept” (Samoili et al. 2020: 7).
Max Tegmark proposed the hypothesis that our physical reality is a mathematical structure (Tegmark 2014). Steven Pinker argued that intelligence does not come from a special kind of spirit or matter or energy, but from a different commodity, namely information. Information is a correlation between two things that is produced by a lawful process. Correlation is a mathematical and logical concept (Pinker 1998). However, Cathy O’Neil demonstrates the opposite, namely that many algorithms are not inherently fair merely because they have a mathematical basis (O’Neil 2016).
In 1950, in his paper “Computing Machinery and Intelligence”, A. Turing introduced his famous test, now known as the Turing test, although the author himself does not call his idea the “Turing test”, but rather the “Imitation Game”. Turing opens with the words: “I propose to consider the question, ‘Can machines think?’”; because “thinking” is difficult to define, he chooses to replace the question with another, which is closely related to it and is expressed in relatively unambiguous words (Turing 1950).
E.g., for W. Barfield the term “electronic person” is sufficient to describe a virtual avatar or virtual agent acting only in cyberspace (Barfield 2006), but it is not sufficient to cover the full range of applications of AI-based systems.
An artificial agent may be instantiated by an optical, chemical, quantum or, indeed, biological—rather than an electronic—computing process.
Autonomy is one of the main features of AI-based systems from a technological point of view, but it presents serious ontological issues (Chinen 2019). It must be noted that autonomy is presented as one of the main characteristics of AI, IoT and robotics technologies in the EC Report on the safety and liability of AI, IoT and robotics (European Commission 2020).
In this paper we do not address morality issues, but the opposite opinion is expressed by E. Schwitzgebel and M. Garza. Using the notion of a psycho-social view of moral status, they suggest that it shouldn’t matter to one’s moral status what kind of body one has, except insofar as one’s body influences one’s psychological and social properties. Similarly, it shouldn’t matter to one’s moral status what kind of underlying architecture one has, except insofar as that architecture influences one’s psychological and social properties. Only psychological and social properties are directly relevant to moral status (Schwitzgebel and Garza 2015).
In accordance with the formula suggested in the resolution, liability for damage caused by cognitive computing should be proportional to the actual level of instructions given to the robot and its degree of autonomy, so that the greater a robot’s learning capability or autonomy, and the longer a robot’s training, the greater its trainer’s responsibility should be.
The European AI strategy supports an ethical, secure, and cutting-edge AI made in Europe, based on three pillars: (1) increasing public and private investment in AI; (2) preparing for socio-economic changes; and (3) ensuring an appropriate ethical and legal framework (European Commission 2018c: 1).
This includes the possibility to bear fundamental legal rights and duties, like citizenship, free speech, privacy, to sue and be sued, enter contracts, own property, inherit, and so on.
For example, pacemakers are active implants and as such come within the scope of the Active Medical Devices Directive. To be able to place such devices on the European market, manufacturers must go through a conformity assessment procedure. This procedure is undertaken by a certification body. After the products have been placed on the market, notified bodies continue their supervising role and have to carry out regular inspections (Leeuwen and Verbruggen 2015: 912–913).
Some legal liabilities cannot be met by insurance (Solum 2017: 1245).
For example, Union product safety legislation does not address human oversight in the context of self-learning AI, nor the risks to safety derived from faulty data (p. 8) or the increasing risks derived from the opacity of self-learning systems. Given the fragmentation of liability regimes and the significant differences between the tort laws of the Member States, the outcome of cases will often differ depending on which jurisdiction applies. Damage caused by self-learning algorithms on financial markets often remains uncompensated, because some legal systems do not provide tort law protection of such interests at all (European Commission 2020: 19). Emerging digital technologies make it difficult to apply fault-based liability rules, due to the lack of well-established models of the proper functioning of these technologies and the possibility of their developing as a result of learning without direct human control (European Commission 2020: 23).
Complexity is reflected in the plurality of economic operators involved in the supply chain, the multiplicity of components, software and services, as well as interconnectivity with other devices (European Commission 2020: 2).
For an ex-post mechanism of enforcement, it is decisive that humans be able to understand how the algorithmic decisions of an artificially intelligent agent have been reached. Self-learning artificially intelligent agents will be able to take decisions that may deviate from what was initially intended by their producers. The producer’s control may be limited if the product’s operation requires software or data provided by third parties or collected from the environment, and depends on self-learning processes and personalizing settings chosen by the user. This dilutes the traditional role of a producer, as a multitude of actors contribute to the design, functioning and use of the AI product/system (European Commission 2020: 28).
AI products remain open to updates, upgrades, and self-learning from new data after being placed on the market.
The more complex the interplay of various factors that either jointly or separately contributed to the damage, and the more crucial links in the chain of events are within the defendant’s control, the more difficult it will be for the victim to succeed in establishing causation (European Commission 2020: 20).
The more complex the circumstances leading to the victim’s harm are, the harder it is to identify relevant evidence. It can be difficult and costly to identify a bug in a long and complicated software code. Examining the process leading to a specific result (how the input data led to the output data) may be difficult, very time-consuming and expensive (European Commission 2019: 24).
Only the strict liability of producers for defective products is harmonized at EU level by the Product Liability Directive, while all other regimes are regulated by the Member States themselves (European Commission 2019: 3).
References
Alexandre FM (2017) The legal status of artificially intelligent robots: personhood, taxation, and control. Dissertation, Tilburg University
Asaro PM (2012) A body to kick, but still no soul to damn: legal perspectives on robotics. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics, intelligent robotics, and autonomous agents. MIT Press, Cambridge, pp 169–186
Atabekov A, Yastrebov O (2018) Legal status of artificial intelligence across countries: legislation on the move. Eur Res Stud J 21(4):773–782
Atkinson R (2016) “It’s going to kill us!” and other myths about the future of artificial intelligence. NCSSS J 21(1):8–11
Barfield W (2006) Intellectual property rights in virtual environments: considering the rights of owners, programmers and virtual avatars. Akron Law Rev 39(3):649–700
Beck S (2016) Intelligent agents and criminal law—negligence, diffusion of liability, and electronic personhood. Robot Auton Syst 86:138–143
BEUC (2017) The consumer voice in Europe. Review of product liability rules. BEUC Position Paper. https://www.beuc.eu/publications/beuc-x-2017-039_csc_review_of_product_liability_rules.pdf. Accessed 7 Jan 2020
Boden MA (2006) Mind as machine: a history of cognitive science. Oxford University Press, Oxford
Bryson JJ, Diamantis ME, Grant TD (2017) Of, for, and by people: the legal lacuna of synthetic persons. Artif Intell Law 25:273–291
Čerka P, Grigienė J, Sirbikytė G (2017) Is it possible to grant legal personality to artificial intelligence software system? Comput Law Secur Rev Int J Technol Law Pract. https://doi.org/10.1016/j.clsr.2017.03.022
Chinen M (2019) Law and autonomous machines. Edward Elgar, Cheltenham
Clarke A (1962) Hazards of prophecy: the failure of imagination. In: Profiles of the future: an enquiry into the limits of the possible. Gollancz, London
Clark A (2017) Embodied, situated, and distributed cognition. In: A companion to cognitive science. Wiley-Blackwell, Oxford
Council Directive (1985) On the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, 85/374/EEC (OJ L 210, 7.8.1985)
Domingos P (2018) The master algorithm: how the quest for the ultimate learning machine will remake our world. Basic Books, New York
Dowell R (2018) Fundamental protections for non-biological intelligences (or: how we learn to stop worrying and love our robot Brethren). Minn J Law Sci Technol 19:305–335
European Commission (2018a) Staff Working Document on liability for emerging digital technologies accompanying the document Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions "Artificial Intelligence for Europe" SWD (2018) 137 final
European Commission (2018b) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe COM (2018) 237 final
European Commission (2018c) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic, and Social Committee and the Committee of the Regions—Coordinated Plan on Artificial Intelligence. COM (2018) 795 final
European Commission (2019) Liability for artificial intelligence and other emerging digital technologies. Report from the Expert Group on Liability and New Technologies—New Technologies Formation, European Union. https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608. Accessed 27 Dec 2019
European Commission (2020) Report on the safety and liability implications of artificial intelligence, the Internet of Things and robotics. https://ec.europa.eu/info/sites/info/files/report-safety-liability-artificial-intelligence-feb2020_en_1.pdf. Accessed 30 Mar 2020
European Commission’s High-Level Expert Group on Artificial Intelligence (2018) A definition of AI: main capabilities and scientific disciplines. Brussel. https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines. Accessed 17 June 2019
European Commission’s High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. Brussel. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. Accessed 20 June 2019
European Parliament (2017) Civil rules on robotics. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.pdf. Accessed 15 June 2019
European Parliament and Council Directive (1999) Amending Council Directive 85/374/EEC on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, 1999/34/EC, (OJ L 141, 4.6.1999)
European Union (2019) Independent high-level expert group on artificial intelligence set up. Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. Accessed 22 June 2019
Florian R (2003) Autonomous artificial intelligent agents. Center for cognitive and neural studies. https://coneural.org/reports/Coneural-03-01.pdf. Accessed 02 Feb 2020
Franklin S, Graesser A (1996) Is it an agent, or just a program? A taxonomy for autonomous agents. In: Proceedings of the third international workshop on agent theories, architectures, and languages. Springer
Goertzel B, Pennachin C (eds) (2007) Artificial general intelligence. Springer, Berlin
Gordon JS (2018) What do we owe to intelligent robots? AI Soc. https://doi.org/10.1007/s00146-018-0844-6
Gunkel DJ (2018) Robot rights. The MIT Press, Cambridge
Hofstadter D (1999) Gödel, Escher, Bach: an eternal golden braid. Basic Books, New York
Jaynes TL (2019) Legal personhood for artificial intelligence: citizenship as the exception to the rule. AI Soc. https://doi.org/10.1007/s00146-019-00897-9
Kritikos M (2019) European Parliament. Artificial intelligence ante portas: Legal and ethical reflections. Briefing. European Parliamentary Research Service. https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-artificial-intelligence-ante-portas.pdf. Accessed 22 June 2019
Leeuwen VB, Verbruggen P (2015) Resuscitating EU product Liability law? Eur Rev Private Law 23(5):899–915
Legg S, Hutter M (2007) A collection of definitions of intelligence. In: Proceedings of the 2007 conference on advances in artificial general intelligence. IOS Press, Amsterdam
Maes P (1995) Artificial life meets entertainment: life like autonomous agents. Commun ACM 38:108–114
McCarthy J (2007) What is artificial intelligence? https://www-formal.stanford.edu/jmc/whatisai.pdf. Accessed 12 June 2019
Menary R (2007) Cognitive integration. Palgrave Macmillan, UK
O’Neil C (2016) Weapons of math destruction. Crown Publishers, New York
Pagallo U (2018a) Vital, Sophia, and Co.—The quest for the legal personhood of robots. Information 9:1–11
Pagallo U (2018b) Apples, oranges, robots: four misunderstandings in today’s debate on the legal status of AI systems. Philos Trans R Soc A. https://doi.org/10.1098/rsta.2018.0168
Pinker S (1998) How the mind works. Penguin Press, London
Radutniy OE (2017) Criminal liability of the artificial intelligence. Probl Legal 138:132–141
Renda A (2019) Artificial intelligence: ethics, governance, and policy challenges. Report of a CEPS Task Force. Centre for European Policy Studies (CEPS), Brussels
Riek LD, Howard D (2014) A code of ethics for the human–robot interaction profession, we robot. https://robots.law.miami.edu/2014/wp-content/uploads/2014/03/a-code-of-ethics-for-the-human-robot-interaction-profession-riek-howard.pdf. Accessed 14 June 2019
Robson RA (2010) Crime and punishment: rehabilitating retribution as a justification for organizational criminal liability. Am Bus Law J 47(1):109–144
Rosenschein SJ (1999) Intelligent agent architecture. In: Wilson RA, Keil F (eds) The MIT encyclopedia of cognitive sciences. MIT Press, Cambridge
Russell S, Dewey D, Tegmark M (2015) Research priorities for robust and beneficial artificial intelligence. AI Mag 36(4):105–114
Samoili S, Lopez Cobo M, Gomez Gutierrez E, De Prato G, Martinez-Plumed F, Delipetrev B (2020) AI WATCH. Defining Artificial Intelligence, Publications Office of the European Union, Luxembourg, 2020, (online). https://doi.org/10.2760/382730 (online), JRC118163. https://ec.europa.eu/jrc/en/publication/ai-watch-defining-artificial-intelligence. Accessed 02 Apr 2020
Schwitzgebel E, Garza M (2015) A defense of the rights of artificial intelligences. Midwest Stud Philos 39:98–119
Selwood M (2017) The road to autonomy. San Diego Law Rev 54:829–873
Singh S (2017) Attribution of legal personhood to artificially intelligent beings. Bharati Law Rev 54:194–201
Smithers T (1995) Are autonomous agents information processing systems? In: Steels L, Brooks R (eds) The artificial life route to artificial intelligence: building embodied, situated agents. Lawrence Erlbaum Associates, Hillsdale
Solum B (2017) Legal personhood for artificial intelligences. N C Law Rev 70(4):1231–1287
Stone P, Brooks R, Brynjolfsson E, Calo R, Etzioni O, Hager G, Hirschberg J, Kalyanakrishnan S, Kamar E, Kraus S, Leyton-Brown K, Parkes D, Press W, Saxenian AL, Shah J, Tambe M, Teller A (2016) Artificial intelligence and life in 2030. One hundred year study on artificial intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016. https://ai100.stanford.edu/2016-report. Accessed 17 July 2019
Russell S, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Pearson Prentice Hall, Upper Saddle River
Suetonius De vita Caesarum (1913) Caligula, 55. https://penelope.uchicago.edu/Thayer/E/Roman/Texts/Suetonius/12Caesars/Caligula*.html#55. Accessed 03 Mar 2020
Tegmark M (2014) Our mathematical universe. Random House LLC, New York
Turing A (1950) Computing machinery and intelligence. Mind 59(236):433–460. https://doi.org/10.1093/mind/LIX.236.433
Vladeck DC (2014) Machines without principals: liability rules and artificial intelligence. Wash Law Rev 89(1):117–150
Winiger B, Karner E, Oliphant K (2018) Essential cases on misconduct. Digest of European Tort Law. De Gruyter, Berlin
World Commission on the Ethics of Scientific Knowledge and Technology (2017) Report of COMEST on robotics ethics. https://unesdoc.unesco.org/ark:/48223/pf0000253952. Accessed 20 July 2019.
Zevenbergen B, Finlayson M, Kortz M, Pagallo U, Borg JS, Zapušek T (2018) Appropriateness and feasibility of legal personhood for AI systems. In: International conference on robot ethics and standards (ICRES 2018). https://users.cs.fiu.edu/~markaf/doc/w16.zevenbergen.2018.procicres.3.x_camera.pdf. Accessed 14 June 2019
Funding
This research is funded by the European Social Fund under the activity “Improvement of Researchers’ Qualification by Implementing World-Class R&D Projects” of Measure No. 09.3.3-LMT-K-712. Special acknowledgment is due to Prof. J. Gordon (the chief researcher of this project) for his valuable ideas and contribution to this research.
Cite this article
Kiršienė, J., Gruodytė, E. & Amilevičius, D. From computerised thing to digital being: mission (Im)possible?. AI & Soc 36, 547–560 (2021). https://doi.org/10.1007/s00146-020-01051-6