Ethical, safe, lawful

A toolkit for artificial intelligence projects


linklaters.com/ai

FOREWORD

Artificial intelligence is starting to come of age. Businesses looking to exploit this technology are
having to confront new practical, legal and ethical challenges.

This toolkit provides a brief overview of the legal issues that arise when rolling out artificial intelligence within your business. It is based partly on our experience of using artificial intelligence within Linklaters. The toolkit focuses on the position in the United Kingdom, but much of its content is equally relevant in other jurisdictions. However, it does not consider autonomous vehicles or robotics, which raise their own regulatory and commercial issues.

The toolkit starts with a short technical primer. This describes some of the recent advances in artificial intelligence and the likely limitations of this technology.

We then address five key issues:

1. Collaboration & “ownership” – We consider why many projects will require collaboration and what rights you might want in any technology you develop.

2. Developing AI & data – Recent developments in artificial intelligence are largely driven by data. This section looks at how you can obtain that data and the constraints on its use. It also looks at the use of regulatory and development sandboxes.

3. Liability & regulation – This section summarises your responsibility for an artificial intelligence tool and the regulatory controls on its use, particularly for automated decisions. We also consider competition law risks, such as collusion between artificially intelligent agents.

4. Safe use – We consider the safeguards you should apply when using an artificially intelligent tool in a live environment, such as testing, supervision and circuit breakers.

5. Financial services – We outline some of the specific concerns for financial services firms, including lessons to be learnt from the rules on algorithmic trading.

Finally, we address the broader ethical challenges raised by this technology and the way in which the government and regulators are addressing these challenges.

We hope this toolkit helps you to engage with this exciting new technology ethically, safely and lawfully.

Richard Cumbley
Partner, Global Head of TMT/IP
Tel: (+44) 20 7456 4681
richard.cumbley@linklaters.com

Edited by Peter Church


CHECKLIST

A summary of the key items you should consider in relation to the legal aspect of AI programmes.

Collaboration & “ownership”

If your artificial intelligence project involves a collaboration with another party, you should assess who will “own” the outcome of that project.

>> Identify the value you, and your partner, bring to the project.
>> Identify which elements of the project you are most interested in; the algorithm, the data, the output of the algorithm, etc.
>> Clarify your commercial aims for each element – i.e. identify positive rights (rights to use) and negative rights (rights to stop use by others) you want to assert.
>> Determine what intellectual property rights will arise for each element, as this will determine what rights you can assert against third parties.
>> Agree on who will own the intellectual property rights to reflect your commercial aims and put in place appropriate documentation to achieve that.
>> Include other contractual protections to achieve your commercial aims, e.g. rights to use, exclusivity arrangements and confidentiality obligations.
>> Assess other forms of collaboration, e.g. taking an equity stake or entering into a joint venture.

Developing AI & data

Data will be the key to many artificial intelligence projects. You should consider whether your use of data to train and test your artificial intelligence project is compatible with data protection and confidentiality laws.

>> Ensure artificial intelligence systems are trained on sufficient, high-quality data.
>> Ensure your use of data is compatible with your confidentiality obligations.
>> Ensure your use of personal data complies with the GDPR.
>> Consider the impact of third parties accessing the data and ensure you have either a data sharing or data processing agreement.
>> Where possible, anonymise the data to avoid these concerns.
>> Consider the use of regulatory or development sandboxes.
>> Conduct a data protection impact assessment where necessary.

Liability & regulation

Artificially intelligent systems may make decisions that you will be responsible for.

>> If the system is provided under a contract, address the standards the system must meet and include appropriate limitation and exclusion provisions.
>> If the system is not provided under a contract, consider your liability in tort and the use of an appropriate disclaimer with end users.
>> If the system is embedded into a product, consider whether the product liability regime applies.
>> Where the system takes decisions about individuals, ensure that processing is fair and lawful and avoids discriminatory outcomes.
>> Where the system takes significant decisions about individuals, additional controls apply. You may have to inform the individual and let them ask for a human re-evaluation.
>> Ensure the use of the system complies with the GDPR and conduct a data protection impact assessment where necessary.
>> Make sure your system is secure and protected against cyber-attacks.
>> If the system is involved in pricing decisions, consider the risk of the system acting in breach of competition law.
Safe use

Artificially intelligent systems may be opaque and could behave unpredictably.

>> Factor the use of artificial intelligence into your broader risk management framework.
>> Ensure artificial intelligence systems are properly tested before use.
>> Use counterfactuals to identify edge cases and use other tools to try and verify the system’s decision making.
>> Provide ongoing supervision during the operation of the tool, including the use of circuit breakers where the behaviour exceeds certain boundaries.
>> Ensure your staff can properly interpret and understand the decisions made by the system.

Financial services

Financial services firms must ensure their use of artificial intelligence complies with the broader regulatory obligations placed on them.

>> Ensure appropriate systems and controls are in place.
>> Consider how the use of artificial intelligence fits into the senior manager regime.
>> Comply with the rules on algorithmic trading and high-frequency trading.

Ethical use

You should take a broad, forward-looking approach to predict and anticipate the future impact of artificial intelligence on your business.

>> Consider how artificial intelligence fits into your wider business values and your approach to corporate social responsibility.
>> Consider whether you need an ethics board or other board committee to address this issue.
>> Evaluate the potential impact of artificial intelligence on your workforce.
>> Be open and transparent about your use of artificial intelligence where possible.
>> Keep track of government and regulatory responses to artificial intelligence.
TECHNICAL PRIMER

The history of the development of artificial intelligence has been cyclical; periods of great interest and
investment followed by disappointment and criticism (the so-called “AI winters”).

We are now in an AI summer. Developments in areas such as language translation, facial recognition and driverless cars have led to interest in the sector from investors, and also from regulators and governments.

Artificial intelligence is unlikely to fulfil all the extravagant predictions about its potential, but the technology has made, and is likely to continue to make, significant advances.

What is artificial intelligence?

There are a number of different ways to define artificial intelligence. Historically, there have been four approaches to this topic: 1

>> Thinking humanly – Technology that uses thought processes and reasoning in the same way as a human.
>> Thinking rationally – Technology that uses logical processes to create rational thought.
>> Acting humanly – The use of technology that mimics human behaviour, regardless of its internal thought processes.
>> Acting rationally – The use of technology that acts to achieve the best outcome.

For the purpose of this toolkit, we adopt a less abstract definition and use “artificial intelligence” to refer to the use of technology to carry out a task that would normally need human intelligence. 2

The term is also widely misused. Simple decision trees or expert systems are often labelled “artificial intelligence”. While these tools can be extremely useful, they cannot sensibly be described as intelligent.

Peter Church
Counsel, Technology Practice

“Artificial intelligence is a key part of the fourth industrial revolution. Strong general artificial intelligence is still some way off, but narrow domain-specific tools will be important in a range of sectors from healthcare to finance.”

What is artificial intelligence good at?

Many of the misconceptions about artificial intelligence arise out of anthropomorphism: the attribution of human traits to this technology. In other words, the assumption that an artificially intelligent machine will think and feel as we do.

This is not likely to happen in the short to medium term. There is no current prospect of a general human-like intelligence capable of dealing with multiple different and unconnected problems. Artificially intelligent systems do many wonderful things, but still cannot deal with the wide variety of unconnected intellectual puzzles we all have to grapple with on a daily basis.

What artificial intelligence systems can do is master specific tasks or domains. The capabilities of these systems have improved over time. For example, chess playing computers first appeared in the late 1970s, but it was not until 1997 that IBM’s Deep Blue stunned the world by beating Garry Kasparov. More recently, AlphaZero, the game-playing system created by DeepMind, taught itself to play chess to a superhuman standard in hours.

The table below provides a broad overview of developments in AI over the years:

[Chart: “So what can artificial intelligence do?” – artificial intelligence performance (sub-human, human, super-human, optimal) across tasks including draughts, chess, speech recognition, facial recognition, driving, translation, voice recognition and legal advice, with milestone years including 1997, 2005, 2007 and 2018.]

What types of tasks can artificial intelligence take on?

There are a range of factors that determine whether artificial intelligence is suited to a particular task. They include:

>> Closed or open context – A crucial factor is whether the task exists in a closed or open environment. Games such as chess are easier to tackle because the position of each player is clearly described (by the position of each piece) and rules are easy to define. In contrast, open context tasks, such as imitating a human in natural language conversation 3, are much harder to solve.

>> Known or unknown state – It is also more difficult for artificial intelligence to tackle a task if there are missing facts. At one end of the spectrum is chess, in which the whole state is known, i.e. both players know where all the pieces are and whose turn is next. It is also possible to deal with some “known unknowns”, for example by making probabilistic assumptions, but this is more difficult. For example, a machine could play a mean hand of poker by making an assessment of the possible cards held by the other players. However, it is unlikely to be able to deal well with “unknown unknowns”, where it is difficult to even know what information is missing.

>> Evaluation of success – Some artificial intelligence tools learn by taking decisions based on an assessment of which is most successful, or trying multiple courses of action and reinforcing approaches that achieve the most success. For example, a machine playing chess will typically make moves to maximise the chance of winning. This technology is harder to use where there is no easy way to evaluate success.

>> Data, data, data – Underlying many of these recent developments is the use of data to allow the artificially intelligent system to learn. Getting hold of the right data is key to solving many artificial intelligence challenges.

From a commercial perspective, artificial intelligence tools are normally best suited to well-defined, high-volume, repetitive tasks. The efficiencies of automating those tasks are most likely to justify the likely significant investment in developing, testing and supervising an artificial intelligence tool.

Robot lawyers

Artificial intelligence has recently made inroads into the legal profession through use of technology such as predictive coding (for disclosure review) and contract analysis tools (for due diligence and contract management). These have helped to automate a number of routine and relatively straightforward tasks.

However, the factors above suggest that advice from a full-spectrum “robot” lawyer is some way off. The provision of legal advice does not take place against a closed context. A lawyer must understand the beauty of a summer cricket game 4 and why a snail in a bottle of ginger beer is a bad thing. 5 It is unlikely any machine would obtain this level of contextual understanding in the short term.

There are also no clear success criteria for legal advice. Unlike chess, it is very hard to create a general-purpose heuristic to determine whether advice was “good” or whether it will help the client “win”. Different legal areas, different jurisdictions, different types of legal question and even different clients will all affect what “good” looks like.

Without clear success criteria, it is hard for the machine to learn and improve.

Open source and AIaaS

The use of artificial intelligence is accelerating as a result of the increasing availability of off-the-shelf technology.

This includes the availability of open source software. For example, TensorFlow is an open source tool used for machine learning released under the Apache open source licence. Similarly, Hadoop is a collection of open source software utilities – based on Google’s MapReduce software – that allow networked computers to solve problems involving massive amounts of data and computation.

Added to this is the availability of cloud-based computing resources to provide data storage and processing. For example, Hadoop is available in a wide range of cloud environments, including Amazon Elastic MapReduce. This allows rapid and cost-efficient deployment.

This is being supplemented by full scale AIaaS (Artificial Intelligence as a Service). There are already AIaaS products on the market such as Azure Machine Learning and Google’s Cloud AutoML: suites of machine learning products that enable programmers with limited machine learning expertise to train high-quality models specific to their business needs.
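The practical effect of this off-the-shelf tooling is that very little code is needed to get started. As a purely illustrative sketch (assuming the open source TensorFlow/Keras library is installed; the data and model below are invented, not drawn from this toolkit), defining and training a small machine learning model takes only a few lines:

```python
# Minimal, hypothetical sketch: train a tiny model with off-the-shelf TensorFlow/Keras.
import numpy as np
import tensorflow as tf

features = np.random.rand(200, 4)                          # 200 made-up examples
labels = (features.sum(axis=1) > 2.0).astype("float32")    # an invented target

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=5, verbose=0)           # training is a single call

print(model.evaluate(features, labels, verbose=0))         # [loss, accuracy]
```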

Machines that learn from data

Underpinning many advances in artificial intelligence is machine learning. There are three forms of machine learning:

>> Reinforcement learning – The system will take action in a particular environment and assess whether those actions help to achieve its goals. Those actions that lead to the best outcomes will be prioritised and thus the machine learns how best to achieve its goals.

>> Supervised learning – The system is given a block of training data containing both inputs and desired outputs. It uses that information to “learn” how to complete the relevant task. The system will then be tested against a separate block of testing data to confirm that it is generating the correct outputs.

>> Unsupervised learning – With unsupervised learning, the system is given a block of data that has not been labelled (e.g. classified or categorised). Since the data is not labelled it is not possible to ensure specific outcomes but it may still be possible to analyse the data to spot clusters or groupings.
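To make the supervised learning description concrete, here is a minimal sketch using the open source scikit-learn library. The features and labels are invented purely for illustration and the example is not taken from any system described in this toolkit.

```python
# Supervised learning in miniature: learn from labelled examples, then check the
# outputs against a separate block of testing data. All values are hypothetical.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A block of data containing both inputs (features) and desired outputs (labels).
X = [[620, 2.1], [540, 3.4], [700, 1.2], [580, 2.9],
     [660, 1.8], [510, 3.8], [690, 1.5], [530, 3.1]]
y = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = desired outcome, 0 = not

# Hold back a separate block of testing data to confirm the system generalises.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = LogisticRegression()
model.fit(X_train, y_train)                # "learning" from the labelled examples
predictions = model.predict(X_test)        # outputs for data it has never seen
print("accuracy on test data:", accuracy_score(y_test, predictions))
```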

Key to many of these approaches is access to sufficient, high-quality data. A classic example is language translation. Computer scientists struggled for years to programme computers to translate languages by building increasingly complicated models to map the words and grammar for one language onto another. These attempts failed. The complex, context-specific nature of human language, together with the lack of easily described rules of grammar, defeated all comers. For example, to translate “I have lost my wife”, one needs to ask whether the speaker is in front of an undertaker or the maze at Hampton Court. 6

To solve the problem, the computer scientists just used data. The EU has generated millions of professionally translated documents over the years. 7 These were fed into a machine learning tool which broke up the original text into short phrases and learnt the likely corresponding short phrase in the second language. The tool also learnt which phrase combinations were likely to appear in the second language. This is an example of supervised learning.

Once trained, in order to translate a new passage the tool again breaks up the original text into short phrases and assesses both: (i) the range of possible translated short phrases in the target language; and (ii) which combinations of those potential translated phrases are most likely in the target language, i.e. it assesses the probability of which overall translation makes most sense.

Importantly, the programmers did not tell the machine how to translate and may not have even spoken the relevant languages. Instead, they gave the machine the data and it learnt for itself.

DIY chess data

A pre-existing block of data is not always necessary. In some cases, you can generate the data yourself.

One example is AlphaZero, the game-playing system created by DeepMind, which was tasked with becoming a champion chess player. It started with details of the rules of chess but no information about chess strategy, such as what constituted a good position or move.

To learn, it played itself around a billion times, using the data from those games for reinforcement learning – i.e. to identify what constitutes a good game state and strategy. For example, AlphaZero started with no concept of whether the queen is a valuable piece. Only by playing games against itself did it learn the connection between the queen’s survival and success.

Within hours, AlphaZero achieved a superhuman level of play and defeated other world-champion chess programs. 8

Black boxes – Opaque algorithms

One of the implications of the machine learning for itself, rather than being instructed by a programmer, is that the algorithm will likely be a “black box”. Understanding how it takes a decision will, at best, need a detailed forensic analysis of the various weights and interactions in the algorithm. In many cases, it will be impossible.

We consider these issues further in the Developing AI & data section.
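The opacity is easy to see even on a toy scale. The sketch below is a hypothetical illustration using the open source scikit-learn library with invented data: it trains a small neural network and then prints what the system has actually “learnt”, which is nothing more than matrices of numerical weights, with no human-readable rationale behind any individual decision.

```python
# Why a trained model is a "black box": its behaviour lives in arrays of weights,
# not in human-readable rules. Data and network size are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 5))                      # 200 hypothetical examples, 5 features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)     # a made-up outcome to be learnt

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# The learnt "knowledge" is just these weight matrices and their interactions;
# explaining a single decision means forensically analysing these numbers.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer}: weight matrix of shape {weights.shape}")
print("prediction for one new case:", model.predict(X[:1])[0])
```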

This has a number of significant regulatory and practical implications:

>> it may be difficult from a regulatory perspective to show that the decision-making is fair and is based on rational and objectively justifiable criteria. Worse, the algorithm might, beneath the surface, be taking decisions on a discriminatory basis; and

>> from a practical perspective, there is a risk the algorithm initially behaves correctly in the training and testing environment but then becomes unpredictable or unreliable in the real world. This might either be because the algorithm is inherently unstable and chaotic, or because the training and testing datasets are not representative of real world data.

A weather detector

A classic example of the need for good training data is an algorithm developed by the military to identify whether there is a tank in a photograph. The algorithm worked well in the training environment but completely unpredictably in the live environment.

An investigation revealed that most of the photographs in the training set containing tanks were taken on a sunny day, and most of those without tanks, on an overcast day. The algorithm was thus more suitable for weather detection than tank detection. 9

These issues are especially important as there is no “common sense” or “ethical” override. Unlike a human, the algorithm has no higher-level assessment of whether what it is doing is obviously “wrong”.

This all means that additional care must be taken to ensure safe operation of the system. This can be approached in three ways. 10

>> Specification – The specification sets out how the system should operate. The risk is that the ideal specification for the system, i.e. the true wishes of the human designer, does not match either the technical specification or actual behaviour of the system. The CoastRunners example below illustrates the problems with specification loopholes.

>> Robustness – The robustness of the algorithm is a measure of how well it operates when faced with new data or events, or adversarial attacks. There is a risk that either the system will behave in an unpredictable way (see Doomsday warnings) or that it can be gamed by others to trick the system into producing unwanted results (see Racist chatbots).

>> Assurance – Assurance is a means to monitor and control the artificial intelligence system in operation. This involves both seeking to understand the operation of the algorithm (see Verification and counterfactuals) and to ensure the system is interruptible (see Contractual agents and circuit breakers).

CoastRunners 11

This is a computer game in which a boat must race round a track. The aim is to finish the lap as quickly as possible. However, this general goal is hard to translate into a technical reward function so instead the artificial intelligence system is rewarded for hitting waypoints laid out around the route.

In practice, the system exhibited perverse behaviour. Instead of racing to complete the game, it drove in circles round the waypoints waiting for the waypoint to repopulate, 12 whilst repeatedly crashing and catching fire. In other words, the technical specification for the system led to behaviour that was far removed from the initial ideal specification (i.e. what the system was really supposed to do).
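The gap between the ideal specification and the technical reward function can be illustrated with a toy sketch. The numbers and scoring rules below are invented for illustration only (this is not the real CoastRunners environment); they simply show how a “looping” strategy can outscore an honest racer under a waypoint-based proxy reward while scoring nothing against what the designer really wanted.

```python
# Hypothetical illustration of a specification loophole: the proxy reward
# (waypoints hit) diverges from the ideal specification (laps finished).
WAYPOINT_REWARD = 10      # proxy: points per waypoint hit
LAP_BONUS = 100           # what the designer actually cares about

def proxy_score(waypoints_hit, laps_finished):
    """The technical specification the agent optimises."""
    return waypoints_hit * WAYPOINT_REWARD

def intended_score(waypoints_hit, laps_finished):
    """The ideal specification: finish laps."""
    return laps_finished * LAP_BONUS

racer = {"waypoints_hit": 30, "laps_finished": 3}     # finishes the race sensibly
looper = {"waypoints_hit": 200, "laps_finished": 0}   # circles respawning waypoints

for name, boat in [("racer", racer), ("looper", looper)]:
    print(name, "proxy:", proxy_score(**boat), "intended:", intended_score(**boat))

# Under the proxy the looper "wins" (2000 v 300) despite never finishing a lap -
# behaviour far removed from what the system was really supposed to do.
```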

Sub-human performance and centaurs

The discussion above assumes that it is vital the artificially intelligent tool comes to exactly the right conclusion.

This is not always necessary. In some situations, sub-standard performance is acceptable because of the other benefits provided by artificial intelligence. For example, while automated language translation has improved in leaps and bounds, few would claim its output is the same as that from a professional translator. However, it is “good enough” to order a meal in a foreign restaurant and is available instantaneously and free of charge, which more than makes up for any technical shortcomings in that translation.

In other situations, the shortcomings of artificial intelligence can be overcome by augmenting human intelligence, also known as a “centaur” solution – i.e. half human, half machine. This can combine the higher-level understanding of a human with the speed and predictability of a computer.

A moment’s violence

One example of the use of a centaur is subtitles for television programmes. People on television speak in a wide range of dialects, using idioms and non-lexical utterances (“huh”, “uh”, “um” and so on), and with varying degrees of clarity. This presents real challenges for any speech recognition technology. One way to deal with this is for a human to listen to the audio stream and “re-speak” the words. The speech recognition software can then recognise that clear and uniformly pronounced re-speaking much more easily than the original speaker. 13

However, this solution is still not perfect. The BBC’s subtitling of the Queen Mother’s funeral informed viewers: “We’ll now have a moment’s violence for the Queen Mother”.

The future

This toolkit is focused on what is currently possible, i.e. narrow artificial intelligence with the ability to take on domain-specific tasks. There is no immediate prospect of an artificial general intelligence capable of flexible human-like intelligence.

Entities such as HAL 9000, Ava or Ultron 14 remain safely in the realms of science fiction. However, we should not be complacent about the long-term challenges of artificial general intelligence. These include:

>> A different form of intelligence. After Garry Kasparov was beaten by IBM’s Deep Blue, he said: “The computer was beyond my understanding and I was scared”. We should not assume that an artificial general intelligence will think like we do or share our worldview.

>> Ethical intelligence. Neither should we assume that an artificial general intelligence will think ethically or can easily be programmed to think ethically. Any system that is complex enough to be intelligent will likely be too complex to control. Asimov famously produced three laws of robotics but hard coding this into a sophisticated artificial intelligence will be challenging. A system capable of independent thought may just choose to ignore them.

>> A different scale of intelligence. Finally, the artificial general intelligence may be on a different scale from human intelligence. Artificially intelligent systems are often predicted to be “smarter than Einstein” but the comparison might well be not between Einstein and another human, but between Einstein and a dog. In other words, we risk creating intelligence of such a scale that we have the same chance of understanding, reasoning with or controlling it as a dog does of its master.

Fortunately, these are not immediate concerns. Whether they must be addressed in future editions of this toolkit remains to be seen.
01 COLLABORATION & “OWNERSHIP”

Artificial intelligence projects will often involve a collaboration between those with expertise in artificial
intelligence and an industry partner. In this section, we consider the key issues to address as part of
that collaboration. In particular:

>> Contribution – Why do these collaborations take place and what does each person bring to that collaboration?

>> “Ownership” – Discussions about the relationship often focus on who “owns” the technology and the data. This is not always a helpful way to frame the commercial relationship especially given intellectual property rights do not always allow for “ownership”.

>> Contract and beyond contracts – We consider other contractual mechanisms to support each party’s commercial aims and other means to structure the collaboration.

What do you bring to the party?

The development of artificial intelligence may require collaboration, typically between those with expertise in artificial intelligence and an industry partner. Why do these collaborations take place and what does each side contribute?

>> The tech company – The technology company will bring expertise in artificial intelligence. This may mean its own technology but also expertise in the use of existing tools in the market. The technology company might also bring faster and more agile ways of working.

>> The industry partner – The industry partner will have sector-specific knowledge of the issue and may hold the data needed for the machine to learn to solve the problem. Perhaps most importantly, they are a customer, both in hypothetical terms by specifying what “good looks like”, but also as an actual paying end user of the technology. Credentials from a marquee customer in a particular field will be invaluable for a young and untested technology company.

Nakhoda

At Linklaters, we have developed our own AI and technology platform, Nakhoda, which began as a collaboration with the London-based artificial intelligence company Eigen Technologies Limited and eventually experimented with technologies from companies such as RAVN and Kira.

These collaborations allowed us to fuse our legal expertise with the technical expertise of these companies to create highly customised and flexible artificial intelligence solutions which can considerably enhance the efficiency of many legal processes. Examples include:

>> working with a leading financial institution to experiment with the application of artificial intelligence to the review of non-disclosure agreements

>> applying artificial intelligence to the large scale review of English land registry documents in loan portfolio transactions, delivering substantial improvements in efficiency and accuracy for our clients

Who pays?

The contribution of each party might also be financial. The question is: who pays and how much?

The starting assumption might be that the tech company has unique skills and expertise in artificial intelligence and so should be more richly remunerated for its contribution. This assumption should be challenged, particularly where the tech company is just deploying open source artificial intelligence tools.

In contrast, the industry partner might well make an equally, if not more, valuable contribution by providing access and use of its own data and sector-specific knowhow, something it has a monopoly over.

This debate has been particularly acute in some sectors, such as healthcare. For example, how should the NHS ensure that it is adequately compensated for the value of any data it provides as part of a collaboration? This is one issue that the data trusts set up in response to the Hall-Pesenti Report are tasked with addressing (see Box: How can I get the data?).

Each party’s financial contribution might, of course, need to reflect simpler commercial realities. Smaller tech companies might not have the means to support themselves without financial support from their industry partner. In this situation, it may be worth considering other ways to collaborate (see box: Get involved or get left behind).

What does “ownership” mean?

This issue is often one of the major stumbling blocks in any collaboration, sometimes marked by a dogged attachment to “ownership” without properly understanding what it means.

When non-lawyers assess ownership, they usually bring to that discussion their own experience in everyday life of owning things; houses, dogs, bananas and so on. Ownership of a physical thing gives two commercially important rights:

>> an “exploitation right” – To use and exploit the asset as they see fit; and

>> a “monopoly/property right” – To stop others from using and exploiting the asset.

That experience of ownership does not read across to the world of data and other intangible rights in a simple manner or at all. This often results in a difficult and unhelpful debate. There are three levels of analysis.

Stuart Bedford
Partner, Technology M&A

“Businesses across the globe are looking at whether AI can help them innovate and adapt their business models and bring them a critical competitive advantage. We have already seen this lead to a significant number of acquisitions of AI start-ups by the major tech companies, as established players strive to bring in new technologies that complement and evolve their existing offerings. This M&A activity is not though the preserve of the tech giants and we continue to see companies in the financial services, energy, consumer and other sectors undertaking acquisitions and collaborations with start-ups to enable them to access the benefits that AI can bring.”

The Emotional

The emotional discussions typically revolve around who will “own it” without a proper analysis of what “it” is or what “ownership” means. Vague claims to ownership can result in heated and unproductive arguments. They can also lead to positions that are neither clear nor helpful.

Typically, a party seeking ownership wants the exploitation right plus the monopoly/property right. The claim “I want ownership” is just shorthand for this position but fails to recognise that ownership is a fluid spectrum of options, not a narrow binary outcome.

The Legal

In contrast, the legal analysis is likely to narrowly focus on the individual rights generated by the project. Because the components of an artificial intelligence collaboration are not physical things, like bananas, legal ownership will mainly be through intellectual property rights such as copyright or database rights.

Intellectual property rights are inherently negative rights, i.e. the right to stop other people from doing certain things in relation to protected works. So, typically, they do provide a monopoly/property right in respect of protected works, although this may be narrower in scope than monopoly/property rights for physical things and subject to exceptions and limitations. On the other hand, they do not comprise exploitation rights; ownership of certain intellectual property rights in technology or data carries with it no guarantee of your legal right to exploit it as you see fit.

Any legal rights in technology and data may also apply in a fragmented and piecemeal way and may not provide a comprehensive response to challenges of ownership. The box Does anyone own data? provides an example of these limitations.

The Commercial

The most productive discussion is a commercial analysis of the rights each party wants. By this we mean:

>> Exploitation rights: What rights do I want in respect of the output of the project? Do I just want the right to use the output? Or do I also want the right to adapt and modify it? Or to license it to third parties?

>> Monopoly/property rights: Do I want to control or prevent others, including my collaboration partner, from using the output from the project or from adapting and modifying it?

Once there is agreement on the commercial position, it should be possible to support it with appropriate ownership and licensing of intellectual property rights, combined with other contractual rights. This then allows a substantive assessment of whether it reflects the emotional need for ownership.

What do you own?

Any conversation about who “owns it” also requires an assessment of what “it” is. For a typical collaboration there are a number of potential “buckets” of ownership:

>> The AI algorithm 15 – The core of the project is likely to be the artificial intelligence algorithm. This will normally be provided by the tech company, who will want to retain ownership of it. However, the value of this contribution may sometimes be overstated. The tech company may well be repurposing an existing open source algorithm.

>> The data – The industrial partner may provide data to train and test the artificial intelligence algorithm. The industrial partner will normally want to retain ownership of its own data (but see box: Does anyone own data?) and will need to be alert to any regulatory, contractual or other constraints over the use of that data (see Developing AI & data).

>> The enhancements to the AI algorithm as part of the project – More difficult questions start to arise over who “owns” any enhancements to the artificial intelligence algorithm. An algorithm trained to carry out a specific task will be significantly more valuable than the base artificial intelligence algorithm, but equally will be inseparable from it.

>> The enhancements to the AI algorithm containing the data – Further complexities arise where the enhancements to the algorithm contain the actual input data; for example, where the algorithm creates and retains a library of exemplars against which future comparisons can be made.

>> The output of the AI algorithm – Finally, who should own the output of the artificial intelligence algorithm? This is likely to require a case-by-case assessment. It is also closely entwined with the question of whether such works can be “owned” (see box: Monkey selfies and other AI works).

Working out what rights each party has in each bucket may require some thought, and some negotiation, but is essential if you want a principled and clear position on ownership.

Nemone Franks
Partner, Intellectual Property

“Intellectual property issues are important in any collaboration, but the interaction with artificial intelligence technology is not straightforward. This needs a clear, structured approach that focuses on each party’s commercial objectives.”

How do you own it?

The commercial position needs to be reflected in the legal rights accorded to each party. This is likely to start with an analysis of intellectual property rights. In this context it is likely to mean:

>> Copyright – Copyright may subsist in any original literary (i.e. written) works, among other things, and arises automatically without the need for registration or other formalities. The source code for the artificial intelligence algorithm is very likely to be protected by copyright (though it may have been developed by a third party and used under a licence). However, the ownership of the final tool may also be complex. Copyright protects “computer programs” being “programs in any form” that are the “intellectual creation” of an author. 16 This protects any software code written by a human 17 but it is less clear whether it would, for example, protect automatically generated neural weights 18 created through the training of a machine learning algorithm. 19 Similarly, it is not clear if copyright will vest in the data or output of the artificial intelligence algorithm (see box: Monkey selfies and other AI works).

>> Database rights 20 – EU database rights arise in a database (i.e. a collection of data that is arranged in a systematic or methodical way and individually accessible by electronic means) where there has been a substantial investment in obtaining, verifying or presenting the contents. “Investment” refers to resources used to find and collate existing, independently created materials. Investment in creating the data does not count. Database rights are most likely to protect collections of data, but there are significant limitations on the scope of their protection (see box: Does anyone own data?).

>> Patents – A patent grants a national monopoly right in relation to an invention that is new and inventive. While this is a powerful form of protection, obtaining a patent can be expensive and time consuming. In the EU, patents are also not available for a computer program as such 21 or a way of doing business (though the position is different in other jurisdictions). For some collaborations, it may be worth discussing whether any patent applications will be made and, if so, who will make them and in which jurisdiction(s).

In relation to all of these intellectual property rights, ownership is not the end of the story. The party in whom the intellectual property rights vest can grant wide licences to the other party to give rights akin to ownership. Joint ownership is also possible, but it does not provide an easy solution to the question of ownership as it restricts each co-owner’s rights to exploit the intellectual property. Moreover, these restrictions vary from country-to-country, meaning that one co-owner can (in the absence of an agreement to the contrary) be in a different position as regards its rights to exploit the jointly owned intellectual property in its home jurisdiction to that of its foreign co-owner in its home jurisdiction.

The approach to intellectual property rights can also be supplemented with cruder commercial tools. For example:

>> Confidentiality – The agreement might specifically require the other party to keep certain materials confidential or to only use them for specific purposes.

>> Exclusivity – The parties might also include obligations to provide data on an exclusive basis, to deal with each other on an exclusive basis or not to deal with competitors. These would need to be carefully assessed to ensure they are enforceable.

In some cases, the complexity and potential fragility of a purely contractual relationship may mean that a broader and deeper relationship is more appropriate. See Get involved or get left behind for other models to kick-start innovation within your organisation.

Get involved or get left behind: models for kick-starting innovation in your organisation

Incubation

What? Ideas generated inside the business are developed by an internal entity that enjoys complete autonomy from the rest of the business.

Why? The technology under development is not mature enough to be integrated into your business as a whole; an incubator allows the ideas to be developed on a stand-alone basis, even whilst the entity itself remains part of the corporate group.

Pros:
>> Autonomy from internal processes
>> Full control of innovation assets
>> Safe way to introduce new kinds of thinking inside a business whilst maintaining stability of its existing operations

Cons:
>> Similar to R&D if no spin-in (bringing the entity into the main group) or spin-off (selling to a third party)
>> Difficult to implement
>> Long-term timeline to return on investment
>> No access to external talent and knowhow

Acceleration

What? A group of start-ups are selected to participate in a limited-time programme run by the company and then returned to the outside economy (or are acquired by the company).

Why? You are not ready to invest and wish to explore different options before a potential future investment.

Pros:
>> Access to creative thinkers and new talent
>> Limited financial investment (seed financing)
>> Limited duration (4-18 months)

Cons:
>> Limited control over innovation assets and the direction of travel of the start-ups
>> Start-up can eventually sell out to competitors
>> Broad focus on different ideas

Commercial co-operation

What? An established company and a start-up co-operate on the basis of a commercial contract.

Why? You wish to access technology without making an investment or exposing yourself to the risk of the start-up failing.

Pros:
>> Limited financial investment or risk
>> Getting to know the products/services
>> Getting to know the team

Cons:
>> Very limited control
>> Assumes a certain level of organisational maturity from the start-up
>> Others may benefit from the same products/services
>> Contractual complexity

Joint venture/consortium

What? A group of investors/corporates works together through an incorporated or unincorporated entity.

Why? You can team up with other investors/corporates to develop the technology. Collaborations between (potential) competitors should be carefully considered under the applicable antitrust laws.

Pros:
>> Share innovation/combine technologies
>> Expansion from established business lines
>> Risk sharing and cost savings

Cons:
>> Complexity of establishment and decision-making (and of unwind in case of failure)
>> Slow and delicate implementation
>> High rate of failure

Acquisition

What? A corporation acquires a majority or minority interest in a start-up or scale-up.

Why? The technology already exists and is mature enough to be incorporated into your business. You get access to a technology, service or product that is not developed internally. This would save time and resources, and enable you to engage in more risky ventures in an external entity.

Pros:
>> More control over innovation
>> More due diligence, so more certainty on what you are getting and understanding of the risks
>> Technology may already be developed and ready to integrate into your business

Cons:
>> More complex governance of relationship with the founders on an ongoing basis
>> How to fix the right valuation and pricing mechanics
Does anyone own data?

There are limits on the extent to which intellectual property rights can protect various aspects of an artificial intelligence project. One example is data.

It is easiest to use a hypothetical example. An employer wants to improve its graduate recruitment process. To do this it collects the following information:

>> Application Information: The organisation collects biographical details for each applicant consisting of university attended, degree class and A-Level results. It also collects the scores it awards to each applicant for aptitude and psychometric tests taken as part of the recruitment process.

>> Performance Information: The employer then collects information on the performance of the graduates it recruits consisting of each employee’s length of service and performance grades.

This data will then be used to train a machine learning algorithm to identify which applicants are most likely to succeed within the organisation. In layman’s terms, the employer “owns” this data. However, intellectual property rights only provide limited protection:

>> Copyright: Each individual data point in the database (university attended, performance grade, etc.) is unlikely to be substantial or creative enough on its own to attract copyright.

>> Database copyright: Copyright subsists in a database as a whole (as opposed to each individual data point within the database) when the selection or arrangement of its contents constitutes the author’s own intellectual creation. This is unlikely to be the case here, where all relevant data points are likely to be selected and arranged in an unoriginal manner.

>> Database rights: Database rights will also only provide limited protection here. It is likely that data on the scores in aptitude tests or performance grading will not be protected, as that data is created by the employer – i.e. the investment is not the right sort of investment (obtaining, verification or presentation of pre-existing data).

This means the employer’s desire for “ownership” is not necessarily supported by significant protection under intellectual property law. The employer will need to rely on duties of confidence and other contractual protections to prevent misuse.
Monkey selfies and other AI works

The protection for the output of an artificial intelligence algorithm 22 is also potentially limited. The key intellectual property right is likely to be copyright. There is a specific provision in UK law addressing this issue:

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.” 23

However, this provision poses as many questions as it answers. In particular:

>> Who is making the arrangements? Is it the person who supplied the artificial intelligence algorithm, the person who trained the algorithm or the person who runs the algorithm? 24

>> Is the “arrangement” substantive enough? For a copyright work to acquire copyright it seems likely that the human involvement in the arrangements must have some substantive content – i.e. drawing a portrait with creative assistance from an electronic paint program would be substantive, just pressing a button would not. Under US, not UK, law, this point was made by the US Copyright Office: “the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author”.

>> Is the output a literary or other protectable work at all? This, for example, is a particular issue in the application of artificial intelligence to adtech. Is a user’s propensity to download this report, as opposed to another report from Linklaters, 25 capable of being represented as a literary or other protected work and when it is, is it sufficiently substantial or creative to be a literary work? This is important because in the UK at least, computer-generated data is unlikely to be protected by copyright.

“Monkey selfies”

A useful analogy comes from the nature photographer David Slater, who travelled to Indonesia to photograph macaque monkeys. The macaques would not let him get near enough for a close-up, so he set up his camera to allow them to take selfies of themselves.

The US Copyright Office ruled the photographs could not be copyrighted as protection does not extend to “photographs and artwork created by animals or by machines without human intervention”. This need for human input into the work is also applicable to works created by artificial intelligence.

A separate, and more bizarre, aspect of the dispute is still outstanding. In 2015, the campaign group, People for the Ethical Treatment of Animals, filed a lawsuit claiming that the monkey, who they called Naruto, owns the copyright. The dispute focused on the standing of animals to seek legal action. However, it seems unlikely the English courts will allow an artificially intelligent entity to own property any time soon.
02 DEVELOPING AI & DATA

The key advances in artificial intelligence over the past few years have been driven by machine
learning which, in turn, is fuelled by data. In many cases, businesses are free to use the data they
hold for whatever purpose they want, including developing artificial intelligence algorithms. However,
the following issues should be considered carefully:

>> Data quality – It is vital you use sufficient high-quality, well formatted data to train and test your artificial intelligence tool.

>> Confidentiality – If the data relates to a third party, it might be confidential or provided under a limited licence.

>> Data protection – Where personal data is used to develop, train or test artificial intelligence algorithms, that processing will need to be fair and lawful and otherwise comply with data protection law.

>> Third-party involvement – If third parties will have access to the data, that may complicate the data protection and confidentiality issues.

These constraints are discussed below. One solution is the use of development “sandboxes” to provide a safe means to conduct development work (see box Secure development sandboxes). It may also be worth considering use of the regulatory sandboxes provided by the Information Commissioner and the Financial Conduct Authority.

Data quality

It is essential that you use sufficient high-quality and consistently formatted data to train and test your artificial intelligence tool. Poor-quality or inappropriate data raises the following concerns:

>> Quality – It is essential that the data is accurate, complete and properly formatted. An algorithm trained on poor-quality data will only ever deliver poor-quality decisions.

>> Thin and unrepresentative data – The data should be comprehensive enough to reflect the variety of situations the artificial intelligence system will face in the live environment. If the system has not been trained to recognise and deal with particular scenarios it might act unpredictably.

>> Bias – The data may itself contain decisions that reflect human biases. These biases may then be picked up by the artificial intelligence system. One example is facial recognition systems which have been shown to perform badly on darker-skinned women. This is thought to be because they have been trained on datasets that predominantly contain pictures of white men. 26

>> Discriminatory data – The data used to train the artificial intelligence should also be appropriate and not likely to lead to discriminatory outcomes. For example, there would be no basis for using data on an applicant’s race or sexual orientation to train an artificial intelligence tool to decide on mortgage applications.

>> Inappropriate data – Even where use of the data might be objectively justified, it may not be socially acceptable or appropriate on public policy grounds. Imagine there was clear evidence that the most successful trainee solicitors were themselves the children of lawyers. A tool used to shortlist employment applicants for interview might make “better” decisions by factoring this in, but it would clearly be wrong to do so. It is important that the training set is sifted appropriately to remove the risk of discriminatory or inappropriate decision-making.
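Before historic decision data is used for training, it is worth checking the data itself for the kinds of bias described above. The sketch below is a minimal, hypothetical example (invented column names and numbers, using the open source pandas library) of comparing historic selection rates across a protected characteristic before that data is used to train anything.

```python
# Hypothetical bias check on a training set: compare historic shortlisting rates
# across groups before the data is used to train a model.
import pandas as pd

history = pd.DataFrame({
    "sex":         ["M", "M", "M", "M", "F", "F", "F", "F"],
    "shortlisted": [0,   0,   0,   1,   1,   1,   0,   1],
})

rates = history.groupby("sex")["shortlisted"].mean()
print(rates)

# A large gap between groups suggests the historic human decisions were biased
# and that training on this data (or on the protected attribute at all) risks
# reproducing discriminatory outcomes.
if rates.min() / rates.max() < 0.8:      # crude "four-fifths" style comparison
    print("Warning: selection rates differ materially between groups")
```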

Confidentiality and licence scope


Where data relates to a third party, you will need to consider whether it is subject to a duty of confidence and the terms of any licence you have to use it.

Where data is provided to you under a data licence, the licence will typically contain express restrictions on your rights to use and disclose the data. The licence should therefore be reviewed carefully; internal uses for testing or development may or may not be permitted, depending on its precise terms.

You may also need to consider contractual and equitable duties of confidence. The scope of those duties will vary. In many cases, they should not prevent the internal use of confidential information for testing and development work. However, this depends on the context. For example, if a contract limits your use of that information to a particular purpose, that might prevent use for development purposes. In any event, sharing confidential information with third parties as part of a collaboration may be problematic.

Duties of confidence are particularly relevant when using medical information. This is likely to be subject not only to the so-called ‘common law duty of confidence’ 28 but also the various guidance and codes. 29 In practice, the use of confidential medical information for the development of artificial intelligence may need a section 251 application to the relevant Confidentiality Advisory Group. 30

Garbage In; Garbage Out – Sexist recruitment

Consider the hypothetical example of an employer who uses an artificial intelligence solution to review CVs from graduates and then create shortlists for interview.

The artificial intelligence system learns using historic data generated from past human reviews of CVs – i.e. data showing which CVs a human shortlisted in the past and which were rejected. Some of this data is used for training and the rest for testing.

The testing of the system is a great success with 99% accuracy – i.e. it matches the human decisions ninety-nine times out of a hundred. However, when the system is rolled out two problems arise.

Sex discrimination

The system only shortlists 5% of male candidates, compared to 15% of female candidates. 27

On further investigation, this reflects the proportion of male and female candidates shortlisted by the human review process. This may well be sex discrimination. The male applicant may have been treated less favourably simply on the grounds of his sex, which could give rise to a claim under the Equality Act 2010. The artificial intelligence has adopted biases in the underlying data it was trained and tested on.

More fundamentally, why was the sex of the candidate fed into the algorithm in the first place? Given the potentially discriminatory outcomes, it would be better not to have used this data in the first place.

New types of data

The employer expands the pool of universities from which it will accept applications. This is to attract a wider range of graduates. However, none of the candidates from that wider pool are shortlisted.

In our hypothetical example, this might be because the system does not recognise, or does not allocate any value to, those universities. The artificial intelligence may not react predictably where there are changes in the types of data it is having to process.

Data protection

Where personal data is used to develop, train or test the system, you must ensure that use is fair and lawful under data protection laws, including the GDPR – see Key obligations under the GDPR.

These rules apply even if you are only using that personal data within your own development environment. In particular, you must satisfy a statutory processing condition, which in many cases will be the so-called legitimate interest test. A crucial factor in determining whether that test is satisfied is the safeguards applied to that personal data; the use of a development sandbox may help (see box: Playing Safely – Secure development sandboxes).
You may also need to document this evaluation. This will be through either a data protection impact assessment or legitimate interests assessment – see Impact assessments. If the testing and development is part of an ongoing programme, you may want to conduct a framework assessment, rather than an individual assessment for each project.

Information Commissioner: Regulatory Sandbox

The UK Information Commissioner is looking to create a regulatory sandbox to help encourage innovation, such as the development of artificial intelligence.

The Information Commissioner is still consulting on the structure of the sandbox. It might give participants enhanced access to the Information Commissioner’s expertise through advice or “informal steers” on difficult issues.

In some cases, the Information Commissioner might also provide “letters of comfort” or negative confirmation regarding the compliance of the project as it transitions out of the sandbox into a live environment.

Third-party involvement

These issues are likely to be more difficult if a third party is involved: for example, either to provide the technology or in a more substantive role, such as a commercial data-sharing collaboration. In particular:

>> Reasonable expectations – Individuals may not reasonably expect their personal data to be disclosed to third parties in this way.

>> Increased infringement – Allowing a third party access to the personal data increases the potential infringement of the individual’s privacy rights.

Much will depend on the terms under which the third party accesses that information. From a data protection perspective, the key question is whether the third party acts as:

>> Data controller – This means that the third party will make independent use of the personal data. It will be harder to comply with the GDPR and it is likely you will need a data-sharing agreement with the third party, including strong confidentiality provisions; or

>> Data processor – This means that the third party just acts on your instructions. This will typically be easier to justify under data protection law, but will mean the third party’s use of the personal data will be very constrained. You must also have a contract with the processor containing mandatory data processor obligations, such as audit rights. 31

Similarly, if a third party is involved in the development work, any confidentiality undertakings given in relation to the underlying information would need to be checked carefully. The English courts may take a strict approach to non-disclosure obligations. 32
Ed Chan
Partner, Head of AI Working Group

“We have spent a lot of time looking at the use of artificial intelligence at Linklaters to streamline repetitive processes. Artificial intelligence will change the very nature of our work, and we need to mould this technology so that it truly supports what our lawyers do.”

Anonymisation – Pitfall or silver bullet?

These restrictions make the use of anonymised personal data an attractive option. That bypasses the need to comply with data protection 33 and some confidentiality obligations, and makes working with third parties much easier.

However, proper anonymisation is hard. You must consider whether the relevant individual can still be identified from the dataset using:

>> Direct identifiers – Simply deleting names and addresses from the data may not be sufficient to anonymise the data if it contains other direct identifiers (e.g. account numbers or telephone numbers).

>> A combination of identifiers – It may be possible to identify the individual through a combination of identifiers. For example, the combination of postcode and date of birth will normally identify an individual. 34

>> Combination with other data sources – Finally, the original dataset could be combined with other data sources to identify someone. Determining whether this is the case is a very difficult exercise and there is often not a bright line test to determine when data is identifiable. The UK Information Commissioner recommends a “motivated intruder” test – i.e. considering whether a person who starts without any prior knowledge but wants to identify the persons in the original dataset could identify anyone. As a result, true anonymisation can be very hard. Statements by engineers or business people that the data is “anonymised” should be treated with caution and challenged; “What do you mean by that?”. The Netflix example below demonstrates how difficult full anonymisation can be.

That said, even partial anonymisation will normally be a useful exercise and will be a powerful factor justifying this use of the information. There are also statutory protections under the Data Protection Act 2018 which make re-identification of anonymised personal data a criminal offence in some circumstances. 35 However, anonymisation is not always a silver bullet.

Has Netflix told the world what you are watching?

From 2006 to 2009, Netflix released the rankings of 500,000 customers for 100 million films as part of an annual prize competition to help create a better film recommendation algorithm. The information was pseudonymised (the customers’ names were replaced with a key) and “noise” was added to the ratings by slightly increasing or decreasing those ratings.

At first glance, this appears more than enough to protect its customers’ privacy. However, researchers36 found that the combination of ratings formed a distinctive “fingerprint” that could be matched to movie ratings in the public IMDB database (i.e. the films a person likes, and hates, can be a unique and identifying factor). This allowed some customers to be identified.
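To make the “motivated intruder” point concrete, a simple first check is to count how many records share each combination of quasi-identifiers. The Python sketch below is illustrative only – the field names and records are invented – but any combination that appears just once can single a person out even though their name has been removed:

    from collections import Counter

    # Illustrative only: count how many records share each combination of
    # quasi-identifiers (here postcode and date of birth). Combinations that
    # appear only once single out an individual even with the name removed.
    records = [
        {"postcode": "EC2Y 8HQ", "dob": "1980-03-14"},
        {"postcode": "EC2Y 8HQ", "dob": "1980-03-14"},
        {"postcode": "SW1A 1AA", "dob": "1975-11-02"},
    ]

    combos = Counter((r["postcode"], r["dob"]) for r in records)
    unique = [combo for combo, count in combos.items() if count == 1]
    print(f"{len(unique)} of {len(records)} records are unique on postcode + date of birth")

A check of this kind is no substitute for a proper anonymisation assessment, but it quickly flags where supposedly “anonymised” data still points to one person.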
Playing safely – Secure development sandboxes

One way to help manage confidentiality and data protection issues is to conduct your testing and development in a secure development sandbox. The sandbox would typically involve:

Clean data sources: The data placed in the sandbox should be reviewed carefully to ensure that use within the sandbox complies with data protection and confidentiality laws, and the use of data is consistent with any licence attaching to that data. The data in the sandbox might be:

>> Fully anonymised data. If the anonymisation process has been carried out correctly, use of this data should not be subject to data protection or confidentiality laws (see Anonymisation – Pitfall or silver bullet?).

>> Pseudonymised data. This is data that has been manipulated so that it is only possible to identify the relevant individual using other information. For example, replacing the individual’s name with a key code. Pseudonymised data is still personal data but it is generally easier to justify its use.

>> Raw data. This is data that has not been anonymised or redacted.

Technical protections: Those controls will depend on the aim of the sandbox but might include a one-way data gate – i.e. data can flow into the sandbox but cannot generally be taken out of the sandbox. This would help to minimise any privacy intrusion.

This should be backed up by suitable access controls and audit logs to prevent misuse of data in the sandbox and/or allow it to be investigated.

Behavioural protections: The technical controls should be backed up with appropriate behavioural controls over those using the sandbox. This might involve additional confidentiality obligations and/or an express prohibition on attempting to identify individuals from pseudonymised data.

Controls on third-party access: If third parties have access to the sandbox, they should normally be subject to appropriate contractual and confidentiality obligations.

If the third party acts as a data processor, it must be subject to data processor clauses.

Genomics England: A good example of the use of a sandbox-type environment is the approach of Genomics England which is in the process of sequencing 100,000 genomes from around 70,000 people. Third parties wanting to access Genomics England’s data services must first pass a rigorous ethical review and have their research proposal approved by an Ethics Advisory Committee. In addition, no raw genome data can be taken away. The genome data is always kept within Genomics England’s data centres and can only be accessed remotely. In other words, third parties are provided with a reading library and not a lending library.
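As a purely illustrative sketch of the pseudonymisation described above, the snippet below replaces the direct identifier with a random key before the data enters the sandbox and keeps the key table outside it; the field names and records are invented for the example:

    import uuid

    # Illustrative sketch of pseudonymisation: replace the direct identifier
    # (the name) with a random key before data enters the sandbox, and keep
    # the key table separately under stricter controls.
    raw_records = [
        {"name": "Jane Smith", "diagnosis": "asthma"},
        {"name": "John Jones", "diagnosis": "diabetes"},
    ]

    key_table = {}        # held outside the sandbox
    sandbox_records = []  # what the development team sees

    for record in raw_records:
        key = uuid.uuid4().hex
        key_table[key] = record["name"]
        sandbox_records.append({"subject_key": key, "diagnosis": record["diagnosis"]})

    # sandbox_records contains no names; re-identification requires the key table.

Remember that pseudonymised data of this kind remains personal data; the technique reduces risk but does not take the data outside the GDPR.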
How can I get the data?

Access to data is essential to the development of artificial intelligence tools. Without data it is very difficult to compete, particularly for small and medium-sized companies which may not be able to pay for access to data or be able to create their own data at scale.

The problem. There are three principal challenges:

>> Privacy and confidentiality – The sharing of information about identified individuals or companies must respect those persons’ privacy and confidentiality rights. This can be a significant barrier, as evidenced by the controversy over the arrangements between The Royal Free Hospital and DeepMind in relation to health records (see box: Digital Health: Apps, GDPR and Confidentiality).

>> Competitive incentives – There may be no commercial incentive on those holding data to provide others with access. Companies with very large datasets may well want to keep that data to themselves in order to improve their own products and services. There may be little benefit in providing potential competitors with that data.

>> Market fragmentation – In some markets there is significant fragmentation with data being held by multiple different entities, each of which may take a different approach to providing access to third parties and store the data in different formats (such as the NHS).

The solution. This problem is recognised and is being addressed in various ways:

>> Data Trusts – The Hall-Pesenti Report suggests the creation of “data trusts”. These would not create a legal trust as such, and are instead a framework for data sharing. This includes: (i) template terms and conditions; (ii) helping the parties define the purposes for which the data will be used; (iii) agreeing the technical mechanisms for transfer and storage; and (iv) helping determine the financial arrangements, including the value of the data being provided.

>> Public data – The UK Government is also taking steps to make public data available for reuse, including use to develop artificial intelligence. For example, various health datasets are available from Public Health England. The Office for National Statistics and bodies such as the UK Data Service also make a number of datasets available for reuse. It might also be possible to obtain information from the Government using the Freedom of Information Act 2000.

>> Data mining rights – The EU’s proposed Directive on copyright in the digital single market contains a proposed right to allow research organisations to carry out text and data mining of copyright works to which they have lawful access. This is similar to the existing rights in the UK for non-commercial research. 37

>> Competition law remedies – In some circumstances, competition law could be used to obtain data. Where the holder of the data has a dominant position, it might be possible to compel the holder to provide a licence of that data if it is an “essential facility” – i.e. the refusal: (i) is preventing the emergence of a new product for which there is a potential consumer demand; (ii) is “unjustified”; and (iii) excludes competition in the secondary market. 38 Similarly, discriminatory access terms or exclusive data supply arrangements could also raise competition issues. However, using competition law to get access to data will likely be expensive and uncertain.

>> Data portability – In certain circumstances, individuals have the right to data portability under the GDPR, i.e. to be provided with a copy of their information in a machine-readable form. However, this is unlikely to generate sufficient volumes of data to support the development of artificial intelligence.
DATA PROTECTION – A QUICK OVERVIEW

Data protection laws in the EU are mainly set out in the General Data Protection Regulation
(“GDPR”). 39 This is supplemented in the UK by the Data Protection Act 2018.

The GDPR applies to the processing of personal data. This is information that relates to identified or identifiable living individuals. It does not protect information about companies or other non-personal data, e.g. share prices or weather reports.

Those processing personal data do so as either a controller (who determines the purpose and means of the processing) or as a processor (who simply acts on the controller’s instructions). The majority of the obligations in the GDPR apply to controllers. If you are rolling out an artificial intelligence system for your own purposes, you are likely to do so as a controller. 40

Key obligations under the GDPR

The GDPR imposes a wide range of obligations. Those specifically relevant to artificial intelligence systems are set out below. 41

>> Look after your data – Your use of personal data should be fair and lawful, and you should only use personal data for purposes for which it was collected or for other compatible purposes. You should ensure the personal data you use is accurate, not excessive and not kept for longer than necessary.

>> Tell people what you are doing – You should normally tell individuals if you are processing their personal data. There are additional obligations if you are using a system to carry out automated decision making. These obligations are discussed in the section on Liability & regulation.

>> Respect individual rights – Individuals have rights not to be subject to automated decision making (see Liability & regulation). Individuals also have rights to object to processing or to ask that their data is quarantined or erased. These other rights are complex and may need to be factored into your project. 42

>> Keep personal data secure – This is a particular concern for some artificial intelligence algorithms and big data projects which use large amounts of personal data. Some security breaches must be notified to regulators and individuals (see Cyber threats).

>> Sensitive personal data – Additional restrictions apply if you are using information about criminal offences or certain sensitive characteristics (known as special personal data 43). This type of personal data can only be used where specific statutory conditions are satisfied. 44 There is also an increased risk of discrimination when using this type of information.

The GDPR also includes an “accountability” principle, meaning that you must not only comply with the law but also be able to demonstrate how you comply.

Data processing conditions

When you use personal data, you must also satisfy at least one statutory processing condition. 45 There are six different processing conditions:

>> Consent – This applies where the individual has given consent. Under the GDPR, a consent will only be valid if there is a clear and specific request to the individual, and the individual actively agrees to that use. It is not possible to imply consent and you cannot rely on legalese buried deep within your terms and conditions. This high threshold means consent will rarely be appropriate to justify processing in an artificial intelligence project.

>> Necessary for performance of a contract – This applies where the processing is necessary for the performance of a contract with the individual or in order to take steps at the request of the individual prior to entering into a contract. It is not relevant where the contract is with a third party.
>> Legal obligation – This applies where the processing is necessary for compliance with a legal obligation under EU or Member State law.

>> Vital interests – This applies where the processing is necessary in order to protect the vital interests of the individual or of another natural person. This is typically limited to processing needed for medical emergencies.

>> Public functions – This applies where the processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller under EU or Member State law.

>> Legitimate interests – This applies where the processing is necessary for the purposes of “the legitimate interests” (see below) except where such interests are overridden by the interests of the individual.

Legitimate interests and further processing

In many cases, particularly when developing artificial intelligence, you will need to rely on the final processing condition, the legitimate interests test. This applies where your use, as controller:

“is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.”

This is a subjective and context-specific test. The UK Information Commissioner recommends a three-step test:

>> What is the legitimate interest? The law recognises that businesses have a legitimate interest in a wide range of activities, such as marketing or increasing the internal efficiency of the business. However, where the purpose serves an obvious public interest (e.g. detecting fraud or cyber-attacks) that interest will carry greater weight.

>> Is the processing necessary for that purpose? This may be a bigger challenge. For development work, the Information Commissioner may well want to know why it could not be conducted with pseudonymised or anonymised data (especially if that personal data is private in nature).

>> Do the individual’s interests override those legitimate interests? This will depend on a range of factors including the sensitivity of the personal data, the reasonable expectations of the individual and the public interest in the underlying purpose. Safeguards will be an important part of this balancing exercise.

Georgina Kon
Partner, Technology Practice

“The GDPR was designed with technology such as artificial intelligence in mind. The law has a number of features, such as control over automated decision making and mandatory impact assessments, to help ensure the safe and ethical use of this technology.”
Fair and lawful processing generally means that personal data should
only be used for the purpose for which it was originally collected.
However, the GDPR allows use for new purposes (such as development
of new technology) if the new purpose is compatible. This requires an
assessment of a range of factors including: (i) any link between the
original and new purpose and the context of the new purpose; (ii) the type
of personal data being processed; (iii) the consequences for individuals;
and (iv) the safeguards used.46

Impact assessments
The use of personal data for the development of artificial intelligence
is likely to engage a range of relatively complex issues that require a
number of value judgements to be made. In most cases, you will need to
document this evaluation. This will be through either a:

>> Data protection impact assessment – These are mandatory if the processing is “high risk” and must involve your data protection officer (if appointed). If the assessment shows that there are unmitigated high risks, you must consult the Information Commissioner before rolling out that system; or

>> Legitimate interests assessment – If the legitimate interests processing condition is relied on (see above) the Information Commissioner will expect to see that assessment documented. This is a much quicker and more lightweight process than a full data protection impact assessment and can be recorded in a relatively short and informal document.

In many cases, the deployment of artificial intelligence systems will trigger the need for a full data protection impact assessment. Guidance from the
European Data Protection Board indicates that the use of new technology,
automated decision making and similar activities will trigger the need
for a data protection impact assessment. 47 In the UK, the Information
Commissioner has issued a list of activities that prima facie will require
a data protection impact assessment. It specifically refers to “Artificial
intelligence, machine learning and deep learning” as a factor that may
trigger the need for such an assessment. 48
03 LIABILITY & REGULATION

If the system is provided under a contract, address the standards the system must meet and include appropriate limitation and exclusion provisions.

If the system is not provided under a contract, consider your liability in tort and the use of an appropriate disclaimer with end users.

If the system is embedded into a product, consider whether the product liability regime applies.

Where the system takes decisions about individuals, ensure that processing is fair and lawful and avoids discriminatory outcomes.

Where the system takes significant decisions about individuals, additional controls apply. You may have to inform the individual and let them ask for a human re-evaluation.

Ensure the use of the system complies with the GDPR and conduct a data protection impact assessment where necessary.

Make sure your system is secure and protected against cyber-attacks.

If the system is involved in pricing decisions, consider the risk of the system acting in breach of competition law.
The aim of an artificially intelligent system is to be intelligent – to analyse, decide, and potentially act,
with a degree of independence from its maker.

This is a potential concern. The algorithm at the heart of the artificially intelligent system may be opaque and, unlike a human, there is no common-sense safety valve. Delegating decisions to a machine which you do not control or even understand raises interesting issues. You should consider:

>> Liability – Liability will primarily be determined by contract. In the absence of a contract, liability could arise in tort (though this may be subject to the restrictive rules around pure economic loss) or under product liability regimes.

>> Fair use of personal data – Data protection issues arise not only when developing artificial intelligence (as previously discussed) but also when deploying that technology, including restrictions on automated decision making.

>> Competition law assessment – There is a risk that an artificial intelligence solution, particularly a pricing bot, could lead to anti-competitive behaviour.

We consider these issues below. In many cases, you will need to include suitable measures to supervise the operation of the artificially intelligent system (see Safe use) to mitigate these liabilities and meet your regulatory responsibilities.

There is currently very limited specific regulation of artificial intelligence. However, this is an evolving area and could change over time (see Government and regulatory responses).

Contractual liability

Where you provide an artificially intelligent system or service to a third party under contract, there is a risk of contractual liability if the system fails to perform. However, that liability can be regulated in two important ways.

First, and most importantly, the contract can define the basis on which you provide the system. For example, this might: (a) impose an “output duty” to ensure the output of the system meets specified standards, such as percentage accuracy; (b) impose a lesser “supervision duty” to take reasonable care developing or deploying the system; or (c) make it clear that the system is provided “as is” and that use is entirely at the third party’s risk.

Where you are dealing with a consumer, you would need to ensure that your terms are consistent with the statutory implied terms that digital products are of satisfactory quality and fit for purpose under the Consumer Rights Act 2015.

Secondly, the contract can exclude or limit your liability. These protections would, however, be subject to the normal statutory controls. In business contracts, that means the Unfair Contract Terms Act 1977 and, in a consumer contract, the Consumer Rights Act 2015.
Tortious liability

If you deploy an artificially intelligent system, you should also consider the potential liability in tort.

The starting point is whether there is the potential for physical harm or damage to tangible property. If this is the case, a duty of care is much more likely to arise. For example, the maker of a fully autonomous car will likely have various duties of care to passengers and other road users. 49

If there is no tangible damage, the claim is likely to be for pure economic loss. The position here is less clear and will depend on the following issues:

>> Does a duty of care arise? Where the relationship is not one in which there is an established duty of care (e.g. doctor and patient 50) the courts are likely to start with an assessment of whether anyone has assumed responsibility for the actions of the artificial intelligence system. They will also use the threefold test, namely: (a) is loss a foreseeable consequence? (b) is the relationship sufficiently proximate? and (c) is it fair, just and reasonable to impose a duty of care? In doing so, they will likely take a cautious approach to any incremental extension of circumstances in which such duties are recognised. 51

>> A duty of care to do what? Another open question is the scope of the duty. There is a wide spectrum of potential duties. The most demanding would be an “output duty” to ensure that each decision meets a certain standard. Lesser “supervision duties” might require ongoing supervision of the system or perhaps a more limited requirement to take reasonable care in the development of the system.

>> Who has a duty of care? Added to this is the question of who has the duty. It is most likely to be the last person in the supply chain – i.e. the person making the artificial intelligence system available for use by others, but duties might conceivably extend to others such as the provider of the technology or data.

>> What is the impact of any disclaimer? It may be possible to remove some of this uncertainty, or reduce or avoid any potential liability, through the use of suitable disclaimers.

This, of course, assumes that liability is assessed using traditional duty of care concepts. Given the potential for independent and autonomous action, there have been suggestions that the “owner” of an artificial intelligence should be subject to more stringent and strict liability for its actions, either through the development of the law of tort (such as an extension to the laws of vicarious liability or the concepts in Rylands v Fletcher) or statutory intervention.

Marly Didizian
Partner, Healthcare Sector Leader

“Artificial intelligence is showing real promise in the healthcare space, particularly in relation to repetitive tasks such as reviewing and diagnosing scans for conditions such as eye disease and cancer. However, this technology needs to be subject to rigorous clinical trials before being used in practice.”
Is there a duty to treat job applicants fairly?

Imagine a large employer uses an artificial intelligence system to automatically shortlist candidates for interview. The solution uses information extracted from the candidate’s CV and performance in aptitude tests.

Question: Does an applicant have any remedy in tort if the system “wrongly” rejects their application?

Answer: It is unlikely that the employer has a duty of care to properly consider the applicant for shortlisting. This is not a relationship in which there is an established duty of care. While there is clearly proximity between the employer and the applicant, there would be strong arguments that it is not reasonable, as a matter of public policy, to impose this duty on employers given they might receive thousands of applications for positions and they cannot reasonably be expected to review all of them in detail.

Finally, this would be a significant extension to the law of negligence and so would fail the incremental test. Such a duty would have significant and wide-ranging effects. Put differently, is it reasonable to require an employer to carefully review every CV it is presented with?

While the applicant is unlikely to have a remedy in tort, their interests are likely to be protected under data protection law, particularly because of the controls placed on automated decision making. The applicant might also have a remedy if the decision is discriminatory.

Product liability

Strict liability could arise under product liability laws. In the UK, businesses supplying products to consumers have strict liability obligations to ensure their safety. 52

The term “product” includes “all movables … even though incorporated into another movable or into an immovable”. This term would not cover a business’s internal use of artificial intelligence tools (as they are not provided to a consumer) or web-based access to an artificial intelligence tool (as there is no product). However, product liability will be relevant if the artificial intelligence were embedded in a product sold to a consumer.

One such example is an autonomous vehicle. The application of product liability law to autonomous vehicles raises a number of interesting issues including what standard of safety a person might reasonably expect of the product 53 and the application of various defences, such as where the defect could not have been discovered at the time the product was put on the market. However, these issues are outside the scope of this toolkit and are largely superseded by the specific insurance and liability provisions in the UK Automated and Electric Vehicles Act 2018.

Unfairness, bias and discrimination

One of the key principles under data protection law is that personal data must be processed fairly and lawfully. This is a broad common-sense concept.

Importantly, data protection laws do not regulate the minds of humans. Human decisions cannot generally be challenged under data protection law unless based on inaccurate or unlawfully processed data. 54 In contrast, a decision made by the mind of a machine may well be open to challenge on general grounds of unfairness.

Moreover, under the accountability principle in the GDPR you are obliged not just to ensure your processing is fair but also to be able to demonstrate that this is the case. The challenge is to square this with the use of an opaque algorithm.
Similarly, if the algorithm is opaque there is a risk that it will make decisions that are either discriminatory or reflect bias in the underlying dataset. This is not just a potential breach of data protection law but might also breach the Equality Act 2010.

The solution will depend greatly on the context and the impact on the individual; the inner workings of an AI-generated horoscope 55 require much less scrutiny than an algorithm to decide whether to grant someone a mortgage.

There are various options to address the use of opaque algorithms including properly testing the algorithm, filleting the input data to avoid discrimination, or producing counterfactuals. We consider these issues in the Safe use section.

These are all issues you should address in your data protection impact assessment or legitimate interests assessment – see Impact assessments.

Automated decisions – “The computer says no”

The GDPR contains controls on the use of automated decision making, i.e.:

“a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. 56

Guidance from regulators suggests that this will include a range of different activities, such as deciding on loan applications or changing credit card limits. Automated decision making is only permitted in the following situations:

>> Human involvement – If a human is involved in the decision-making process it will not be a decision based solely on automated processing. However, that involvement would have to be meaningful and substantive. It must be more than just rubber-stamping the machine’s decision.

>> Consent – Automated decision making is permitted where the individual has provided explicit consent. While this sounds like an attractive option, the GDPR places a very high threshold on consent and this will only be valid where the relevant decision-making process has been clearly explained and agreed to.

>> Performance of contract – Automated decision making is also permitted where it is necessary for the performance of a contract or in order to enter into a contract. An example might be carrying out credit checks on a new customer.

>> Authorised by law – Finally, automated decision-making processing is permitted where it is authorised by law.

Even where automated decisions are permitted, you must put suitable safeguards in place to protect the individual’s interests. This means notifying the individual (see below) and giving them the right to a human evaluation of the decision and to contest the decision. The Information Commissioner also recommends the use of counterfactuals to help the individual understand how the decision was made (see Verification and counterfactuals).

Beyond the technical requirements of data protection law, there is a wider ethical question of whether it is appropriate to delegate the final decision about an individual to a machine. As the human rights organisation Liberty submitted to a recent Select Committee hearing: “where algorithms are used in areas that would engage human rights, they should at best be advisory”. 57

Protection against automated decisions

A large employer uses an artificial intelligence solution to automatically shortlist candidates for interview. This constitutes automated decision making as the decision is made solely by automated means and significantly affects applicants.

The employer is permitted under the GDPR to make these automated decisions as they are taken with a view to entering an employment contract with the individual.58 However, in addition to the general steps described above to ensure fair and lawful processing, the employer must:

>> notify applicants that the decision not to shortlist them was taken using automated means; and

>> allow the applicant to contest the decision and ask for human evaluation.
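By way of illustration, a counterfactual of the kind referred to above (and discussed further under Verification and counterfactuals in the Safe use section) can be generated by holding everything else constant and varying one input until the decision changes. The model, threshold and step size in this Python sketch are invented for the example:

    def decision(loan_amount, income):
        # Stand-in for an opaque model: approve if income comfortably covers the loan.
        return income * 4 >= loan_amount

    def counterfactual_loan_amount(loan_amount, income, step=500):
        # Reduce the requested amount until the decision flips, then report
        # the minimum change needed - e.g. "approved if you borrow £2,000 less".
        reduced = loan_amount
        while reduced > 0 and not decision(reduced, income):
            reduced -= step
        return loan_amount - reduced

    print(counterfactual_loan_amount(loan_amount=102_000, income=25_000))  # 2000

A real system would need to search across several inputs at once, but even this simple form of explanation tells the individual what would have changed the outcome without disclosing the algorithm itself.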
Automated decisions – Transparency and a “right to an explanation”

Data protection law also requires you to tell individuals what information you hold about them and how it is being used. This means that if you are going to use artificial intelligence to process someone’s personal data, you normally need to tell them about it.

More importantly, where automated decision making takes place (see above) there is a “right of explanation”. You must tell affected individuals:

>> of the fact of automated decision making;

>> about the significance of the automated decision making; and

>> how the automated decision making operates.

The obligation is to provide “meaningful information about the logic involved”. This can be challenging if the algorithm is opaque. The logic used may not be easy to describe and might not even be understandable in the first place. These difficulties are recognised by regulators who do not expect organisations to provide a complex explanation of how the algorithm works or disclosure of the full algorithm itself. 59 However, you should provide as full a description about the data used in the decision-making process as possible, the broad aim of the processing and counterfactual scenarios (see Safe use) as an alternative.

Cyber threats

The security of the system will be essential. A breach could have serious consequences including:

>> Uncontrolled behaviour – The security breach could allow the hacker to take control of the system. One visceral example is someone hijacking a driverless car, which could result in personal injury or death. 60 However, it is easy to imagine other situations in which an out of control artificial intelligence could cause serious damage.

>> Unauthorised disclosure – If the security breach results in personal data being compromised, that could be a breach of the GDPR. If the security breach creates risks to individuals, you must tell the Information Commissioner within 72 hours. If it creates high risks to individuals, those individuals must also be told.

Security obligations arise under a range of laws, including the GDPR, the Network and Information Systems Regulations 2018 (in relation to critical infrastructure) and product liability laws. These laws do not generally require that a system must be unhackable. However, where a breach could lead to serious damage, personal injury or death they are likely to set a high threshold.

Anti-competitive pricing bots

You should also address the risk that an algorithm might result in anti-competitive behaviour. There are four areas of concern. 61

The first and least controversial is the messenger scenario; where the technology is intended to monitor or implement a cartel – i.e. it is a tool to execute a human intention to act anti-competitively. One example is two poster sellers who agreed not to undercut each other’s prices on Amazon’s UK website. That agreement was implemented using automated repricing software. 62

The second concern arises where more than one business is relying on the same pricing algorithm, a so-called (inadvertent) hub and spoke arrangement. In Eturas, 63 the administrator of an online travel booking system sent out a notice to travel agents informing them of a restriction on discount rates which had been built into the system. The Court of Justice decided that those travel agents could be liable if, knowing of this message, they failed to distance themselves from it. Neither this, nor the first scenario, necessarily involve artificial intelligence or stretch the boundaries of competition law.

The third, predictable agent, scenario is more interesting. This arises where a number of parties across an industry unilaterally deploy their own artificially intelligent systems based on fast, predictive and similar
analytics, each of which integrates competitors’ reactions drawn from data collected from past experience of price variations. This is described as “tacit collusion on steroids”, 64 leading to greater transparency between market players and enabling easy detection and punishment of price variations, which in turn could lead to pricing levels that are sustained at higher levels than would otherwise exist in a competitive market. This collusion might take place without any intention, or possibly knowledge, by the parties.

The $23 million textbook

Two companies were selling a textbook called The Making of a Fly. One of those sellers used an algorithm which matched its rival’s price. The other used an algorithm which always set a price 27% higher than the first.

As a result, the price spiralled upwards. By the time someone noticed what was going on the book was being offered at $23,698,655.93. The Making of a Fly is currently available, for £16.95 from all good booksellers.

Whether the parties are liable for this anti-competitive behaviour is an open issue. Competition law does not necessarily prohibit the use by a non-dominant undertaking of pricing algorithms that act independently; even if that results in tacit collusion that can lead to higher prices.

However, these algorithms are coming under greater scrutiny and EU Competition Commissioner Vestager has said that “pricing algorithms need to be built in a way that doesn’t allow them to collude”. 65 In particular, artificial intelligence tools are not really intelligent and do not appear by magic. They are domain-specific tools trained for particular tasks, typically through the use of data and a means to evaluate the success of the system’s decision. It will be difficult to blame the computer if the training data or success criteria predispose the system to anti-competitive behaviour. Indeed, Commissioner Vestager advocates a positive obligation to avoid these effects, so-called “antitrust compliance by design”.

The fourth situation is the digital eye. An all-seeing and fully intelligent artificial intelligence is able to survey the market and extend tacit collusion beyond oligopolistic markets to non-price factors. This type of artificial intelligence envisages users of the algorithm being able to tell it to “make me money” and, through a process of “learning by doing”, the algorithm reaches an optimal solution for achieving this aim. However, such advanced technology does not seem likely in the short term.

This is a developing area of law and one to be watched carefully: “companies can’t escape responsibility for collusion by hiding behind a computer program”. 66
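The feedback loop described in The $23 million textbook box can be reproduced in a few lines of code. The following Python sketch is purely illustrative – the starting prices and the daily repricing cadence are assumptions – but it shows how two simple, unsupervised pricing rules can drive prices upwards without anyone intending that result:

    # Illustrative simulation of the two repricing rules described in the box:
    # seller A matches seller B's price; seller B always prices 27% above A.
    price_a, price_b = 20.00, 25.00  # assumed starting prices in dollars

    for day in range(30):
        price_a = price_b            # A matches its rival
        price_b = price_a * 1.27     # B prices 27% higher than A

    print(f"Seller B's price after 30 days: ${price_b:,.2f}")

Left to run, the loop passes $1 million within a couple of months of daily repricing – exactly the kind of runaway behaviour that the circuit breakers and sanity limits discussed in the Safe use section are designed to catch.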
04 SAFE USE

Factor the use of artificial intelligence into your broader risk management framework.

Ensure artificial intelligence systems are properly tested before use.

Use counterfactuals to identify edge cases and use other tools to try and verify the system’s decision making.

Provide ongoing supervision during the operation of the tool, including the use of circuit breakers where the behaviour exceeds certain boundaries.

Ensure your staff can properly interpret and understand the decisions made by the system.
It is important to put the right systems and controls in place to ensure that live use of artificial intelligence systems is properly supervised.

As part of your normal risk management framework, you should:

>> Identify the risks associated with the use of artificial intelligence;

>> Assess those risks;

>> Embed suitable controls to mitigate and minimise those risks; and

>> Monitor and report on those risks.

This assessment should factor in the fact that the operation of the artificial intelligence system may be opaque and potentially unpredictable. You should specifically address:

>> Testing – As with any system, an artificial intelligence tool should be thoroughly tested before use.

>> Verification – This testing should ideally be supported with some degree of insight into how the artificial intelligence system is making decisions.

>> Supervision – It may also be necessary to carry out ongoing testing and supervision of the system. You should consider technical or contractual circuit breakers to limit the artificial intelligence if it acts beyond normal bounds.

We consider these safeguards below.

Does the algorithm work?

Like any information technology project, it is important to properly test your artificial intelligence tool before deploying it into a live environment.

However, testing an opaque algorithm is difficult. Without a proper understanding of the system there is a risk that the testing process will not adequately cover all of the required test scenarios. If the system behaves chaotically, even small changes in input variables can lead to very different outcomes.

In other words, the algorithm may react unusually or unpredictably in relation to particular combinations of input and this may not be detected during the testing process.

Doomsday warnings

Google Translate has recently been shown to provide bizarre translations when presented with repeated words. For example, asking Google to translate the word “dog” typed out twenty times from Maori to English gives the message “Doomsday Clock is three minutes to twelve We are experiencing characters and a dramatic developments in the world, which indicate that we are increasingly approaching the end of times and Jesus’ return.”

The reason for this curious translation is not clear. It might be because the translation tool was trained on religious texts (which are widely available in less common languages). Fed with nonsense, Google’s tool probably tried to formulate a response it recognised as a successful translation.
The answer to this is, partly, more data. The more data you have to train and test the system, the more confident you can be that it is working properly.

Dynamic and complex systems

An added complication is that the situations faced by the artificial intelligence system change over time. The system’s reaction to a change in environment may not be predictable.

Similarly, as artificial intelligence systems become more prevalent, it will be necessary to consider the potential interactions between these different systems. A system may well operate properly in an insulated test environment but generate complex and undesirable behaviours when combined with other systems (see The $23 million textbook).

Verification and counterfactuals

This testing process should, in some cases, be accompanied by some form of human verification of the artificial intelligence’s decision-making process.

For simple tasks, there might be easy ways to do this. For example, a picture classification algorithm might highlight the pixels that strongly influence the classification decision. This might help a human to gain some comfort that the artificial intelligence is working properly.

A ruler detector

An attempt to create an algorithm to recognise skin cancer instead created a ruler detector. This was because many of the photographs of larger skin cancers used to train the algorithm had a ruler next to them for scale. 67

Knowing which pixels were strongly influencing the algorithm’s decision would likely help the developers of the system to more quickly identify this issue.

Another way to assess the operation of an artificially intelligent system is to produce counterfactuals. For example, where a loan application is rejected by an artificially intelligent system it could provide the applicant not just with a rejection but also with an assessment of the minimum change needed for the application to be successful (e.g. the loan would be granted if it were for £2,000 less or the borrower’s income £5,000 more). These counterfactuals could be produced by varying the input data until a positive result was achieved.

These counterfactuals can be used to create a series of edge cases, i.e. situations in which there is a tipping point between a positive and negative decision. The edge cases will provide some insight into the decision-making process and will help to test the soundness of those decisions. Analysing these edge cases may need visualisation tools given the likely complex dependencies between the inputs to the system.

The UK Information Commissioner advocates the use of counterfactuals when conducting automated decision making about individuals. Appropriate counterfactuals should be provided to help individuals understand the basis for the decision. Another means recommended by the Information Commissioner is qualified transparency. This would involve the use of an expert to test the quality, validity and reliability of the machine’s decision making.

There are a number of tools to carry out some of these activities. For example, Google’s What-If Tool is an open source extension to TensorBoard that allows the creation of counterfactuals and tools to analyse performance and algorithmic fairness. However, further research into algorithmic interpretability will likely be needed as the role of artificial intelligence grows.

Supervision

Unlike a human, an artificially intelligent system cannot self-assess its own performance on the basis of common sense or ethical acceptability. These higher-level concepts are beyond the capability of the current generation of artificially intelligent systems.
This means that continuing supervision will be essential. That supervision could be provided in a number of ways:

>> Sampling & management information – A sample of outputs from the system should be reviewed on an ongoing basis to confirm the quality of its output, and to confirm it is not making discriminatory or inappropriate decisions. This should be backed up with management information about the overall performance of the system. The cost of this supervision will need to be built into the business case for the system.

>> Retraining – It may be necessary to retrain the system from time to time, particularly if there are changes in the scenarios it is having to deal with. Again, the cost will need to be built into the business case. This is an essential part of the maintenance of the system.

>> User alerts – It may be sensible to include a mechanism to allow users to trigger an alert if the system is behaving incorrectly and unpredictably.

>> Circuit breakers – It will usually be worth adding circuit breakers to the system so that if its outputs exceed certain limits, either a warning is triggered or the system suspended. Those limits might either be predefined or set by reference to a less sophisticated (and thus more predictable) decision-making system. There might also be a “kill switch” to allow a human to manually override the system.

Similar controls are already required for financial services firms carrying out algorithmic trading and high-frequency trading (see box: Algorithmic Trading).

Racist chatbots

Microsoft carried out an experiment to create a Twitter chatbot called “Tay”. Tay exchanged Tweets with other Twitter users and through that process was supposed to learn how to have conversations with humans on social networks.

However, within hours Twitter users trained Tay to start sending out racist and sexually-charged Tweets. This was partly because users asked Tay to repeat their own Tweets, but soon Tay started to say strange and offensive things on its own. Microsoft shut Tay down in under a day but in that time, it still managed to send over 90,000 Tweets.

Without the human qualities of common sense or ethical acceptability Tay rapidly ran out of control.

Contractual agents and circuit breakers

English law is flexible and has proven capable of adapting to the use of new technology to form contracts. 68 There is no obstacle, in principle, to using an artificially intelligent system to contract on your behalf.

However, there is potential for disputes where artificial intelligence systems behave in an unexpected manner – for example, might one party claim the contract is void for mistake? 69 The legal position is not entirely clear and is further complicated where two artificially intelligent systems contract with each other. Traditional concepts such as offer, acceptance and mistake are based on human knowledge and intention and are not easy to apply where no human is involved.

The best solution is to create a contractual framework with the relevant third parties with whom you contract via artificial intelligence to expressly deal with these issues. For example, it might expressly state that a party is bound by all contracts made by its artificially intelligent system in all instances.
Alternatively, the contracts might include circuit breakers – i.e. provisions to either delay the point at which a contract is formed (so it can be aborted if the system has gone rogue) or reserving the right to revoke the contract in certain circumstances. So long as that framework is clear, it is very likely it would be enforceable under English law.

Our paper on Smart Contracts and Distributed Ledger – A Legal Perspective, co-authored with the International Swaps and Derivatives Association, contains a detailed assessment of some of these issues. 70

Understanding and interpreting outputs

Much of the discussion in this section assumes the artificial intelligence system is, at least in part, the decision maker.

However, for many practical applications of the technology the artificial intelligence system will simply provide assistance to a human decision maker. For example, highlighting patients who may be developing a medical condition or transactions on a bank account that appear to be fraudulent.

Where the artificial intelligence system is providing this input, it is vital that the human understands the limits on that information and can interpret that information correctly. The example below illustrates the risk of misinterpreting the data.

I’m 98% sure it’s fraud 71

Take a hypothetical example. Consider an insurance company that develops an artificial intelligence tool to detect fraudulent claims. Assume that:

>> the tool is 98% accurate. This means 98% of fraudulent claims are picked up and 98% of valid claims are determined not to be fraudulent; and

>> one in 500 claims is actually fraudulent.

Question: What is the chance that a claim flagged by the tool as fraudulent is in fact fraudulent?

Answer: The answer is not 98%. In fact, it is only 9%. 72

In other words, the majority of the claims flagged as fraudulent will actually be valid. Knowing this is important to ensure not only that those claims are dealt with without an automatic assumption of guilt, but also that the large number of non-fraudulent claims being flagged is not necessarily a failure by the artificial intelligence system.
misinterpreting the data.
05 FINANCIAL SERVICES

Ensure appropriate systems and controls are in place.

Consider how the use of artificial intelligence fits into the senior manager regime.

Comply with the rules on algorithmic trading and high-frequency trading.
Financial services firms must ensure that their approach to artificial intelligence reflects the additional
regulatory requirements placed upon them. This toolkit does not provide an exhaustive review of the
implications of financial services regulation on artificial intelligence but simply highlights some of the
more important considerations.

Risk management framework

The starting point is that the use of artificial intelligence will need to be factored into the firm’s overall risk management framework. This means ensuring that the firm takes reasonable care to organise and control its affairs responsibly and effectively, with adequate risk management and appropriate systems and controls put in place. 73 This will include:

>> Governance: Putting in place a clear and formalised governance framework.

>> Compliance: Ensuring sufficient appropriately trained technical, legal, monitoring, risk and compliance staff with at least a general understanding of the artificially intelligent systems deployed.

>> Outsourcing: Where part of the artificial intelligence project is outsourced, the firm remains fully responsible for its regulatory obligations.

This is likely to require an assessment of the various issues addressed in the previous chapter on Safe use.

Financial services firms should consider whether their Compliance and Audit functions have the right skills and experience in order to undertake that supervision. Similarly, they would need to consider what documentation they need to demonstrate they have undertaken appropriate testing and supervision.

Finally, financial services firms should consider how they ensure that artificial intelligence used for trading only trades within the approved framework of the firm, and how they can ensure transactions entered into by artificial intelligence are legally enforceable (see Contractual agents and circuit breakers).

Senior Managers and Certification Regime

Similarly, it is important to identify where ultimate responsibility for the use of artificial intelligence should lie. The Senior Managers and Certification Regime is intended to enhance individual accountability within firms. Documentation must be provided to the regulators stating the responsibilities of Senior Managers. Certain firms must also provide a responsibilities map showing that there are no gaps in the allocation of responsibilities.

For firms subject to the Senior Managers and Certification Regime, senior management will need to consider how they intend to allocate responsibility for managing the risks associated with artificial intelligence. Depending on the type of firm, this may sit with the Senior Manager who performs the Chief Operations function and is responsible for managing the internal operations, systems and technology of a firm.
Regulatory sandbox

While the uncontrolled deployment of new technology could be harmful, regulators also appreciate the benefit that innovation could bring to firms and consumers.

One of the measures that the Financial Conduct Authority has taken to support innovation is the creation of a regulatory sandbox to give a range of businesses (not just authorised firms) the ability to test products and services in a controlled environment.

These sandbox tests are intended for projects that provide a public benefit and are conducted on a small scale, e.g. for a limited duration with a limited number of customers. The sandbox is also closely overseen by the Financial Conduct Authority and appropriate safeguards will be needed to protect consumers.

However, in return for these restrictions a number of tools are on offer, such as restricted authorisation, individual guidance, informal steers, waivers and no enforcement action letters. These help to reduce time-to-market.

Recent sandbox projects include:

>> Veridu Labs, which has a KYC and AML solution backed by machine learning and network analysis to facilitate onboarding and access to business banking; and

>> Multiply, a service that combines financial modelling and machine learning to provide financial plans and specific product recommendations directly to consumers.

The use of regulatory sandboxes is part of the Financial Conduct Authority’s wider response to new technology, which also includes TechSprints and industry roundtables. These have, amongst other things, considered the use of machine learning to tackle money laundering and financial crime.

FCA and Big Data

The Financial Conduct Authority has been considering artificial intelligence and data analytics for some time. In 2016, it carried out a review of Big Data in retail general insurance, issuing a feedback statement in September 2016 (FS16/5).

Overall, it found broadly positive consumer outcomes. Big Data provides a means to transform how consumers deal with firms, encourages innovation and streamlines the sales and claims processes. On that basis it decided not to launch an in-depth market study. However, there were two areas of concern:

>> Big Data allows increased risk segmentation, so some categories of customers may find it harder to obtain insurance.

>> Big Data could enhance firms’ ability to identify opportunities to charge certain types of customer more, for example charging customers more if they have a low sensitivity to prices and are less likely to shop around.
Algorithmic trading

The Markets in Financial Instruments Directive (2014/65/EU) (“MiFID II”) introduced specific rules for algorithmic trading and high-frequency trading to avoid the risk of rapid and significant market distortion. These restrictions are relevant to some artificial intelligence tools deployed by financial services firms and are also an interesting illustration of the sorts of legislative controls that might be imposed on this technology.

Under these rules, algorithmic trading is defined as trading where a computer algorithm automatically determines parameters of orders (e.g. initiation, timing, quantity or price), subject to certain exemptions. Where a firm conducts algorithmic trading, it must comply with the general MiFID II requirements and notify the relevant competent authorities. In addition:

>> Controls: Firms must put in place effective systems and risk controls to ensure that their trading systems are resilient and have sufficient capacity, are subject to appropriate trading thresholds and limits and prevent the sending of erroneous orders. This should include real-time monitoring of all activity under their trading code for signs of disorderly trading.

>> Market Abuse: Firms must put in place effective systems and risk controls to ensure the trading systems cannot be used for market abuse or in breach of the rules of a trading venue.

>> Resilience: Firms must put in place effective business continuity arrangements to deal with any failure of their trading systems and shall ensure that their systems are fully tested and properly monitored.

>> Kill functionality: Firms must have emergency ‘kill functionality’, allowing them to cancel all unexecuted orders with immediate effect (see the illustrative sketch at the end of this section).

>> Testing: The systems must be properly tested and deployed only with proper controls and authority.

Beyond these requirements, the regime does not regulate the outcome of the algorithmic trading strategy as such. In other words, the aim is not to ensure that the algorithms make good profitable decisions, rather it is to ensure an orderly market.
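By way of illustration only, the sketch below shows the general shape of the ‘kill functionality’ described above: a single control that cancels every unexecuted order with immediate effect and blocks new submissions. The class and method names are invented for this example; they are not taken from any particular trading platform or from the MiFID II technical standards.

```python
# Hypothetical sketch of an emergency kill switch for an algorithmic trading system.
# Names and structure are illustrative assumptions, not a real trading API.

class OrderGateway:
    def __init__(self):
        self.open_orders = {}       # order_id -> order details awaiting execution
        self.trading_enabled = True

    def submit(self, order_id, order):
        """Accept a new order only while trading is enabled."""
        if not self.trading_enabled:
            raise RuntimeError("Trading halted: kill switch engaged")
        self.open_orders[order_id] = order

    def cancel(self, order_id):
        """Cancel a single unexecuted order."""
        self.open_orders.pop(order_id, None)

    def kill_switch(self):
        """Cancel all unexecuted orders with immediate effect and stop new ones."""
        self.trading_enabled = False
        for order_id in list(self.open_orders):
            self.cancel(order_id)
```

In practice, any such control would operate alongside the pre-set thresholds, real-time monitoring and business continuity arrangements described above, and its use would be governed by the firm’s documented procedures.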
Robo advice

The Financial Conduct Authority has stated previously that in its view there is nothing particularly special about robo advice in comparison with other forms of financial advice.

Financial advice powered by artificial intelligence (or any form of automation) is subject to the same regulatory obligations as more traditional financial advice delivered by humans, and the obligations will fall on the firm offering the system rather than (for instance) a third-party provider who creates the relevant artificial intelligence. It is up to regulated firms to ensure that any advice offered by them using artificial intelligence is “suitable” for the client.

A well-designed model could potentially reduce the risk of mis-selling by removing human error or certain elements of discretion on the part of human advisers. However, equally firms will need to ensure that they maintain appropriate oversight of the activities of the robo advice and are able to validate the suitability of the advice in the same manner as they would for human advisers.

Demonstrating the potential pitfalls of automated advice, the Financial Conduct Authority conducted a review of firms 74 (published in May 2018) offering online discretionary management and retail investment advice through automated channels. This review found various deficiencies in relation to the provision of such services, including that several firms had failed to give adequate disclosures to clients, or to seek and maintain adequate information from clients that would be required to ensure suitability, due to the nature of their offerings.

It is clear from this that firms need to think carefully about their approach to complying with their regulatory obligations in the context of robo advice.

Julian Cunningham-Day
Partner, Global Co-head of Fintech

“Artificial intelligence is a key ingredient in the Fintech sector. We are seeing more and more clients looking to exploit this technology.”

Richard Hay
UK Head of Fintech

“The sandbox model has proven effective in the UK at fostering shared learning between industry and the regulator. That the model is being replicated at global level through the Global Financial Innovation Network (GFIN) is a testament to its effectiveness, and a sign of the growing importance of Fintech.”
ETHICS AND GOVERNMENT RESPONSES

This toolkit considers the legal issues associated with artificial intelligence. However, the law sometimes focuses on past problems and only provides a narrow view of the issues.

You should also take a broad, forward-looking approach to predict and anticipate the future impact of this technology. This should address:

>> Values – How will your approach to artificial intelligence reflect your company’s values and approach to corporate social responsibility?

>> Employees – What impact will artificial intelligence have on your workforce, both in terms of ensuring your employees have the right skill set and in terms of changes to your employees’ working environment?

>> Transparency – How will your use of artificial intelligence affect your reputation and how can you be transparent about your use of this technology?

This section also provides a brief overview of the various UK and EU initiatives to respond to, and regulate, artificial intelligence.

Your values – More than just a legal issue

Any business using artificial intelligence should be mindful of the wider ethical implications of using that technology. This means taking a broad, forward-looking view of the likely implications of artificial intelligence for your business, your employees, the environment, communities and countries.

Large businesses may also want to consider how this fits into their wider accountability framework. For example, which board committee should be tasked with assessing the wider impacts of artificial intelligence, and how can it ensure that it can access the right expertise to supervise this area?

Innovative governance structures

Some organisations have taken stronger and more innovative steps to provide accountability and transparency.

For example, DeepMind, the artificial intelligence company, appointed a number of public figures to independently review its healthcare business. These Independent Reviewers meet four times a year to assess and scrutinise DeepMind’s operation and issue a publicly available annual report outlining their findings. The Independent Reviewers’ latest report is available here 75 and sets out 12 ethical principles with which they consider DeepMind and other healthcare technology companies should comply.

Similarly, SAP has created an external artificial intelligence ethics board. The five-person committee includes technical experts and a theologian. It will ensure the adoption of artificial intelligence principles in collaboration with the AI steering committee at SAP.
Those values will be different for every business, but the UN Guiding Principles on Business and Human Rights, “Protect, Respect and Remedy”, provide a useful framework from which to conduct this analysis. For example, they mandate the use of impact assessments, transparency and remedies, which can all be used when assessing the use of artificial intelligence.

AI at Microsoft

Microsoft has issued a set of AI principles to ensure its work is built on ethical foundations. 76 There are four key principles:

1. Fairness. AI must maximise efficiencies without destroying dignity and guard against bias.

2. Accountability. AI must have algorithmic accountability.

3. Transparency. AI must be transparent.

4. Ethics. AI must assist humanity and be designed for intelligent privacy.

This is supported by five design principles:

1. Humans are the heroes. People first, technology second. Design experiences that augment and unlock human potential.

2. Know the context. Context defines meaning. Design for where and how people work, play, and live.

3. Balance EQ and IQ. Design experiences that bridge emotional and cognitive intelligence.

4. Evolve over time. Design for adaptation. Tailor experiences for how people use technology.

5. Honor societal values. Design to respect differences and celebrate a diversity of experiences.

Your employees – Robots in the workplace

The introduction of artificial intelligence into the workplace may have an impact on your workforce. This might include:

>> Workforce displacement – There are predictions that artificial intelligence will replace many white-collar jobs, in much the same way as the automation of manufacturing has greatly reduced the number of blue-collar manufacturing jobs. For example, the chief economist of the Bank of England has warned that “large swathes” of people may become “technologically unemployed” as artificial intelligence makes many jobs obsolete. 77 Alternatively, those who are displaced by artificial intelligence will move to lower skilled jobs that still need human intelligence, but against the background of a depressed labour market. This could lead to a “minimum wage economy” for many, with much greater inequality. Employers should be mindful of the opportunities to retrain and redeploy displaced employees and the overall impact on employee morale.

>> Skill sets – It will also be necessary to ensure that your workforce has the right skill set to adapt to a changing environment. This might involve reskilling your existing employees or hiring employees with different skill sets in the future.
>> Working environment – The working environment for some employees may change as a result of the introduction of artificial intelligence. Employers will need to consider the longer term emotional and psychological impacts on employees in an environment in which human-human interactions are increasingly replaced by human-robot interactions. For example, taxi drivers whose work patterns are dictated by apps and algorithms, and not the human contact afforded by a human dispatcher. 78

Your reputation – Too creepy?

You should also consider the public’s perception of the use of artificial intelligence, given increasing sensitivity to this and other data-heavy technologies.

The best way to respond is to be clear about your approach to artificial intelligence and communicate that in a transparent way.

This might involve consulting affected individuals and other stakeholders through workshops and citizens’ juries. This will not only help to properly inform them about your proposals but also allow you to better understand their concerns and improve your own processes. In particular, if you are carrying out a data protection impact assessment, the GDPR specifically recommends consultation with affected individuals.

Clarity and openness will help ensure better acceptance and buy-in for your project. This will avoid some of the problems that have arisen through misconceptions about the scope and nature of this sort of technology project (see box: Digital health: Apps, the GDPR and confidentiality).

Richard Cumbley
Partner, Global Head of TMT/IP

“Artificial intelligence is a powerful but emotive tool. You need to step back and ask yourself; ‘Am I doing the right thing?’, ‘Can I explain the purpose of this project?’. A project that does not respect individual rights or follow your business’ values will be problematic regardless of the technical legal analysis.”

Government and regulatory responses

Artificial intelligence has been a focus for the UK Government and UK Parliament for some time, with important recent reports including:

>> Algorithms in decision-making, House of Commons Select Committee (May 2018).

>> AI in the UK: ready, willing and able, House of Lords Select Committee (April 2018) and the Government’s response (June 2018).

>> AI Sector Deal, BEIS and DDCMS (April 2018).

>> The Hall/Pesenti Report, Growing the artificial intelligence industry in the UK by Dame Wendy Hall & Dr Jerome Pesenti (October 2017).

This has also been a topic for regulators, for example the Information Commissioner’s Guidance on AI 79 and the Financial Conduct Authority’s statement on Big Data. 80

The NHS has also issued a new code of conduct for artificial intelligence and other data-driven technologies to allow NHS patients to benefit from the latest innovations. The code has 10 principles setting out how the government will make it easier for companies to work with the NHS and what the NHS expects in return. 81
The UK Government is responding to the challenge of artificial intelligence by setting up three new organisations:

>> The Centre for Data Ethics and Innovation, which will strengthen the existing governance landscape and supply government with independent, expert advice.

>> The AI Council, which will bring together leading figures from industry and academia to provide strategic leadership, promote the growth of the sector and ensure delivery of the sector deal commitments.

>> The Office for AI, which will be the secretariat for the Council, made up of civil servants, and will drive implementation and lead co-ordination on artificial intelligence within government.

There are similar developments in the European Union, with a recent communication from the European Commission 82 and the appointment of a High-Level Expert Group on Artificial Intelligence. The European Commission is aiming to produce draft artificial intelligence ethics guidelines in 2018, taking into account the Charter of Fundamental Rights of the European Union.

Digital health: Apps, the GDPR and confidentiality

In 2015, the Royal Free Hospital in North London started a project with DeepMind to help detect acute kidney injury. This is a very serious condition, estimated to cause 40,000 deaths a year and to cost the NHS over £1 billion a year.

The project started to attract public interest and criticism in early 2016. This led to an investigation by the National Data Guardian and enforcement by the Information Commissioner, following which the Royal Free agreed to commission a third-party audit.

We carried out that audit. Our conclusion was that the Royal Free’s use of the App is lawful and complies with data protection laws, though there were some areas in which improvements could be made. The audit addresses a number of interesting legal issues. Contrary to press reports at the time 83:

>> The App does not use artificial intelligence. Instead, it implements a simple decision tree used across the whole of the NHS.

>> DeepMind only uses patient information for the purpose of providing the App. It does so under the direction of the Royal Free and in strictly controlled conditions. DeepMind is not permitted to use patient information for any other purpose.

Following the public interest in 2016, DeepMind took a number of measures to provide more information about its arrangements with the Royal Free. This includes publishing its agreements with the Royal Free and hosting various events with patients as part of its patient and public engagement strategy. If this level of transparency and openness had been provided from the outset, it is possible the initial controversy surrounding this arrangement could have been avoided.

Our audit report is available here. 84
GLOSSARY

Algorithm. A process or set of rules used by a computer to carry out calculations or other problem-solving operations.

Data controller. Someone who decides the purpose and means of processing, and so is subject to the whole of the GDPR, see Data protection – A quick overview.

Data processor. Someone who just processes personal data on the instructions of a data controller, and so is subject to more limited obligations under the GDPR, see Data protection – A quick overview.

Data protection impact assessment. A formal data protection assessment of a particular product or process, see Data protection – A quick overview.

GDPR. The EU General Data Protection Regulation, see Data protection – A quick overview.

Hall/Pesenti Report. The Hall/Pesenti Report, Growing the artificial intelligence industry in the UK by Dame Wendy Hall & Dr Jerome Pesenti.

Legitimate interest test. The statutory processing condition in the GDPR that permits processing of personal data where there is a legitimate interest not overridden by the rights of the individual, see Data protection – A quick overview.

Legitimate interests assessment. A short and informal assessment of whether the legitimate interest test is satisfied.

Opaque algorithms. Algorithms whose internal processes or rules are not clearly defined or understood.

Reinforcement learning. The system taking action and reinforcing those actions that help to achieve the system’s goals.

Supervised learning. Learning based on the use of data containing both inputs and desired outputs.

Unsupervised learning. The analysis of unlabelled data to spot clusters or groupings.
FOOTNOTES

1 Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig.

2 This might be a dynamic definition. Historically, once a task is easily accomplished by a computer it often ceases to be considered artificial intelligence (i.e. artificial intelligence is “anything computers still can’t do”). See What Computers Still Can’t Do: A Critique of Artificial Reason by Hubert L Dreyfus.

3 This challenge was identified in 1950 by Alan Turing. He proposed what has come to be known as the “Turing test”, in which a human would evaluate natural language conversations between a human and a machine. The machine will pass the test if the evaluator cannot tell machine from human.

4 Miller v Jackson [1977] QB 966: “In summertime village cricket is the delight of everyone. Nearly every village has its own cricket field where the young men play and the old men watch. In the village of Lintz in County Durham they have their own ground, where they have played these last 70 years. They tend it well.”

5 Donoghue v Stevenson [1932] A.C. 562: “For a manufacturer of aerated water to store his empty bottles in a place where snails can get access to them, and to fill his bottles without taking any adequate precautions by inspection or otherwise to ensure that they contain no deleterious foreign matter, may reasonably be characterised as carelessness without applying too exacting a standard.”

6 With apologies to Sir Martin Nourse (Tektrol Ltd v International Insurance Co of Hanover [2005] EWCA Civ 845).

7 Texts such as the Bible are used for non-European languages given it has been widely translated.

8 AlphaZero AI beats champion chess program after teaching itself in four hours, The Guardian, 7 December 2017.

9 What Artificial Experts Can and Cannot Do, Hubert L. Dreyfus & Stuart E. Dreyfus, 1992. This classic example features in many undergraduate computer science courses and demonstrates the problem is not new. However, it is also worth noting there is an ongoing debate as to whether this actually happened or is just an apocryphal story.

10 See Building safe artificial intelligence: specification, robustness and assurance, Pedro Ortega and Vishal Maini, 27 September 2018.

11 See footnote 10.

12 While not discussed in the paper, one assumes this particular problem could easily be fixed by not repopulating the waypoints.

13 See The quality of live subtitling, Ofcom, 17 May 2013.

14 See 2001: A Space Odyssey, Ex Machina and Avengers: Age of Ultron, respectively.

15 In some cases, the project could involve the creation of specialised hardware on which to run the algorithm, though this is likely to be rare.

16 Recital 7 and Article 1(3) of the Software Directive 2009/24/EC.

17 Copyright will also protect the object code for that software and any preparatory works. However, it does not protect the underlying ideas or any programming interfaces.

18 “Neural weights”: the relative importance ascribed to items within a dataset for the purpose of analysis or decision making.

19 The weightings within a neural net will be computer generated as part of the training process. Copyright does protect computer generated works but only if the work is a literary, dramatic, musical or artistic work (section 9(3) of the Copyright, Designs and Patents Act 1988). It is questionable if something like weightings in a neural network could be called a literary work.

20 Database copyright may also subsist in a database in which the selection or arrangement of the data constitutes the author’s own intellectual creation (for example, a large index of My favourite 20th century poems).

21 Though a computer program might be patentable if there is some technical contribution over and above that provided by the program itself. See the EPO’s Guidelines for Examination on Artificial Intelligence and Machine Learning.

22 Where the software is simply used as a tool, for example Microsoft Word, the person using that tool will be the author. Word does not supply any element of “originality”. In contrast, where an artificial intelligence algorithm creates a work, it may have a creative role and help provide the necessary ingredient of originality.

23 Section 9(3), Copyright, Designs and Patents Act 1998.

24 For example, the designer of a pool game was the person who made the arrangements for the creation of each individual frame of the game. The player of the game is not “an author of any of the artistic works created in the successive frame images. His input is not artistic in nature and he has contributed no skill or labour of an artistic kind. Nor has he undertaken any of the arrangements necessary for the creation of the frame images. All he has done is to play the game.” Nova v Mazooma [2006] EWHC 24.

25 Such as “Collective redress across the globe: a review in 19 jurisdictions” or “FAQs on the ISDA Benchmarks Supplement”.

26 Study finds gender and skin-type bias in commercial artificial-intelligence systems, MIT News, 11 February 2018.

27 This type of discrimination would be atypical: see Amazon scrapped ‘Sexist AI’ tool, BBC News, 10 October 2018.

28 In practice, these confidentiality duties are likely to arise in equity.

29 Such as guidance issued by the National Data Guardian and the various codes of practice issued by the NHS and HSCIC.

30 See section 251 of the National Health Act 2006 and the associated Health Service (Control of Patient Information) Regulations 2002.

31 See Article 28 of the GDPR.

32 For example, a licence agreement stated that each party “agrees to keep the terms of this Agreement confidential”. In addition, either party could terminate for a material breach and for this “purpose… breach of the confidentiality obligations...constitutes a non-remediable material breach”. One of the parties disclosed the agreement to a potential purchaser so the other party terminated the agreement. The Court of Appeal decided that the strict wording of the agreement applied, and the termination was justified. See Kason Kek-Gardner v Process Components [2017] EWCA Civ 2132.

33 The anonymisation process itself is a processing that must be justified under the GDPR but will normally be permitted so long as the personal data is truly anonymised.

34 A postcode identifies around 15 households (though some postcodes relate to a single property) so the combination of a postcode with other information, such as date of birth, will normally identify an individual. In rare cases, a postcode alone will identify an individual.

35 See section 171 of the DPA 2018.

36 Source: “How To Break Anonymity of the Netflix Prize Dataset” by Arvind Narayanan and Vitaly Shmatikov, https://arxiv.org/abs/cs/0610105v2.

37 See section 29A of the Copyright, Designs and Patents Act 1988.

38 IMS Health v. NDC Health (C-418/01).

39 The GDPR is supplemented by the ePrivacy Directive (2002/58/EC) which, amongst other things, imposes additional limitations on the use of traffic and location data. The EU is currently planning to replace this Directive with a new Regulation.
40 There has been some discussion about whether the artificial intelligence system might itself be an independent data controller. As such systems do not have legal or natural personality, and are not really “intelligent”, this seems unlikely for the time being.

41 This is not an exhaustive list. See our GDPR Survival Guide for more information – https://www.linklaters.com/en/insights/publications/2016/june/guide-to-the-general-data-protection-regulation.

42 Our GDPR Survival Guide contains a detailed summary of these rights, see footnote above.

43 This consists of racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic information, biometric information, health information or information about sex life or sexual orientation.

44 See Article 9 of the GDPR and Schedule 1 of the Data Protection Act 2018.

45 Article 6 of the GDPR.

46 Article 6(4) of the GDPR.

47 Guidelines on Data Protection Impact Assessment and determining whether processing is “likely to result in a high risk”, The Article 29 Working Party (WP 248 rev 01), October 2017.

48 Information Commissioner’s Examples of processing ‘likely to result in high risk’ read in light of EDPB Opinion 22/2018.

49 This report does not consider driverless cars.

50 In a medical scenario, the doctor’s use of an artificial intelligence tool would be subject to a duty of care. That duty is likely to be defined by the Bolam test. The key questions would likely be: (i) would a responsible professional use an artificially intelligent tool in this situation? (ii) what reliance would that professional place on the output of that tool?

51 Customs & Excise Commissioners v Barclays Bank plc [2007] 1 AC 181.

52 Primary liability falls on the manufacturer, “own brander” or importer, but distributors can also have liability in more limited circumstances. See the Product Liability Directive 85/374 implemented by the Consumer Protection Act 1987.

53 This is the test to determine if a product is defective. The court might consider that, for a car, those expectations are high, see Boston Scientific v AOK (C‑503/13).

54 See Peter Nowak v Data Protection Commissioner, Case C-434/16 and Johnson v Medical Defence Union [2007] EWCA Civ 262.

55 This inherently contradictory system may be impossible to develop.

56 Article 22(1), GDPR.

57 See statement by Silkie Carlo of Liberty in the House of Commons Science and Technology Committee’s report on Algorithms in decision making.

58 Guidelines on automated decision making and profiling, Article 29 Working Party (WP251 rev 01), February 2018.

59 See Guidelines on automated individual decision making and profiling, Article 29 Working Party (WP 251 rev 01).

60 For example, see Fiat Chrysler recalls 1.4 million cars after Jeep hack, BBC News, 24 July 2015.

61 See Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (2016), Maurice E. Stucke and Ariel Ezrachi.

62 Online seller admits breaking competition law, Competition and Markets Authority, July 2016.

63 “Eturas” UAB and Others v Lietuvos Respublikos konkurencijos taryba (C-74/14).

64 See footnote 61.

65 Bundeskartellamt 18th Conference on Competition, Berlin, 16 March 2017.

66 Commissioner Vestager. See footnote above.

67 See Neural networks, explained, Physics World, 9 July 2018.

68 For example, see Thornton v Shoe Lane Parking Ltd [1971] 1 All ER 686 in relation to a parking ticket machine: “The customer pays his money and gets a ticket. He cannot refuse it. He cannot get his money back. He may protest to the machine, even swear at it; but it will remain unmoved. He is committed beyond recall. He was committed at the very moment when he put his money into the machine. The contract was concluded at that time.”

69 For completeness, an artificial intelligence would likely be treated as a “mere tool” for contracting, and not a distinct agent under agency law.

70 See https://www.linklaters.com/en/about-us/news-and-deals/news/2017/smart-contracts-and-distributed-ledger--a-legal-perspective.

71 Example adapted from Decision time for AI: Sometimes accuracy is not your friend, The Register, 6 July 2018.

72 Assume you have a million claims. Of those, 998,000 will be valid and 2,000 will be fraudulent (one in five hundred). Of the fraudulent claims, 1,960 will be flagged (2,000 x 98%). Of the valid claims, 19,960 will be flagged (998,000 x 2%). Thus the percentage of flagged claims that are actually fraudulent is 8.9% (1,960 ÷ (1,960 + 19,960)).

73 See FCA Principle 3 and SYSC 8.

74 See https://www.fca.org.uk/publications/multi-firm-reviews/automated-investment-services-our-expectations.

75 See https://deepmind.com/applied/deepmind-health/transparency-independent-reviewers/independent-reviewers/.

76 See Microsoft AI principles, https://www.microsoft.com/en-us/ai/our-approach-to-ai.

77 Bank of England chief economist warns on AI jobs threat, BBC News, 20 August 2018.

78 See Driven to despair — the hidden costs of the gig economy, Financial Times, 22 September 2017.

79 Big Data, AI, Machine Learning, and Data Protection, Information Commissioner’s Office, September 2017.

80 FCA publishes feedback statement on Big Data Call for Input, Financial Conduct Authority, September 2016.

81 New guidance to help NHS patients benefit from digital technology, 5 September 2018.

82 Artificial Intelligence for Europe, European Commission, April 2018.

83 Revealed: Google AI has access to huge haul of NHS patient data, New Scientist, 29 April 2016.

84 See http://s3-eu-west-1.amazonaws.com/files.royalfree.nhs.uk/Reporting/Streams_Report.pdf.
CONTACTS

TECHNOLOGY & INTELLECTUAL PROPERTY

Richard Cumbley
Partner, Global Head of TMT/IP
Tel: +44 20 7456 4681
richard.cumbley@linklaters.com

Nemone Franks
Partner, Intellectual Property
Tel: +44 20 7456 5813
nemone.franks@linklaters.com

Marly Didizian
Partner, Healthcare Sector Leader
Tel: +44 20 7456 3258
marly.didizian@linklaters.com

Georgina Kon
Partner, Technology
Tel: +44 20 7456 5532
georgina.kon@linklaters.com

FINTECH

Julian Cunningham-Day
Partner, Global Co-head of Fintech
Tel: +44 20 7456 4048
julian.cunningham-day@linklaters.com

Richard Hay
UK Head of Fintech
Tel: +44 20 7456 2684
richard.hay@linklaters.com

CORPORATE AND BANKING

Edward Chan
Partner, Head of AI Working Group
Tel: +44 20 7456 4320
edward.chan@linklaters.com

Stuart Bedford
Partner, Technology M&A
Tel: +44 20 7456 3322
stuart.bedford@linklaters.com

COMPETITION

Christian Ahlborn
Partner, Competition
Tel: +44 20 7456 3570
christian.ahlborn@linklaters.com

EMPLOYMENT

David Speakman
Counsel, Employment
Tel: +44 20 7456 4691
david.speakman@linklaters.com

NAKHODA

Partha Mudgil
COO of Nakhoda
Tel: +44 20 7456 2180
partha.mudgil@linklaters.com
linklaters.com

© Linklaters LLP. All Rights reserved 2018

Linklaters LLP is a limited liability partnership registered in England and Wales with registered number OC326345. It is a law firm authorised and regulated by the Solicitors Regulation Authority.
The term partner in relation to Linklaters LLP is used to refer to a member of Linklaters LLP or an employee or consultant of Linklaters LLP or any of its affiliated firms or entities with equivalent standing and qualifications.
A list of the names of the members of Linklaters LLP and of the non-members who are designated as partners and their professional qualifications is open to inspection at its registered office, One Silk Street, London EC2Y 8HQ,
England or on www.linklaters.com and such persons are either solicitors, registered foreign lawyers or European lawyers.
Please refer to www.linklaters.com/regulation for important information on Linklaters LLP’s regulatory position.
