
Persuasive Speech


Alongside the rapid advancement of technology across governments, businesses, and surveillance, the
emergence of facial recognition raises many questions about the ethics and morality of using such
technology around us. While it can be incredibly useful as a security measure for our phones or at
airports, danger lies in its potential application as a tool of surveillance in society. These technologies
can jeopardise the privacy and basic rights of the general public, steering society towards mass
surveillance. The prominence of facial recognition as a standardised form of monitoring and security
control has materialised what was once regarded as a futuristic concept, causing great concern about
its uses and functions.

Much like the intrusive regime of George Orwell’s dystopian “1984”, facial recognition is an
advancing threat to our individual rights and civil liberties, and it compromises our safety through the
misuse of information about our identity, especially in a world where our digital footprints can
give anyone access to almost anything about us. Clearview AI, a company specialising in facial
recognition technology, stands out in a lucrative market for a system that can identify almost anyone
through its extensive database. The software works by taking an uploaded picture of a person and
matching it against images in its database, returning a name and face in seconds – whether it be a
protestor at a riot, an attractive stranger on the train home, or you. It operates with a database of
“3 billion images” collated from photos found online through a process known as “data scraping” –
over six times larger than that of the FBI – and the software is used by over 600 law enforcement
agencies worldwide, including the Australian Federal Police and Victoria Police. The company
promotes its software by claiming that it “helps law enforcement to accurately, reliably and lawfully
identify criminal suspects, as well as the victims upon whom they prey” - so why should we feel
scared?

Article 12 of the United Nations’ Universal Declaration of Human Rights enshrines the right to privacy
– a basic individual right that facial recognition technologies such as Clearview AI undermine.
When a single picture of your face can bring up all sorts of information about you without your
consent (including your name, car registration number, even your address), it raises the debate over
whether this technology should be allowed in a society where the internet knows more about us
than we know about ourselves. Operating without any clear legal or regulatory framework, the use
of such technologies slowly builds a path towards blanket surveillance, where one can be
indiscriminately monitored while going about their daily business. By allowing this technology to
continue being used for these purposes, we open the door for surveillance to grow until it seems
like Big Brother’s eyes are always watching.
Furthermore, the availability of such powerful capabilities is troubling in the wrong hands, where
the possibilities of weaponisation are endless; a clear example stems from the assassination of
Qasem Soleimani. With technology like this, it becomes far easier for governments to track and kill
the enemies of a state. When hacked by criminal or terrorist groups, this information could be used
to blackmail government agencies, threaten civilians, or aid in illegal trading. As Eric Goldman,
co-director of the High Tech Law Institute at Santa Clara University, explains: “Imagine a rogue law
enforcement officer who wants to stalk potential romantic partners, or a foreign government using
this to dig up secrets about people to blackmail them or throw them in jail”. Real evidence of this
can be seen in the Chinese Communist Party’s use of similar technologies to monitor Uighur
Muslims in the Xinjiang region, assisting in repressing and detaining millions in marginalised
“re-education camps”, with software using biometrics to search for distinct ethnic facial features.
China’s success in modelling racism into a technical framework presents huge implications for the
future of the world and of technology, and it should concern everyone. If this trend continues,
what is left of the meaning of privacy?

Written explanation:

This speech is written to provide a subjective view on the future of facial recognition, deeming it a
clear violation of our right to privacy. As this technology no longer belongs to imagined futuristic
worlds but to the imminent present, I take on the persona of a human rights activist to argue
against the rise of these technologies.

In the introduction, I present the topic of facial recognition surveillance in daily life with an allusion
to Orwell’s ‘1984’, painting a picture of the dystopian and incredibly imminent reality we face if we
implement this type of technology in our lives. I also present statistics showing that Clearview AI
holds approximately six times as many pictures of people as the FBI. With this, I question whether
it is really safe for all of this data to be stored in the cloud by a private company, given how
susceptible it is to being obtained by hackers.

Afterwards, I make the point that this type of technology could become the ‘norm’: a society where
every individual is given a ‘number’ and is indiscriminately monitored through “blanket
surveillance”, bringing in another Orwellian allusion, “Big Brother”.

In addition, for my third point I raise examples of where facial recognition can be used, and is
already used: governments could abuse this power to assassinate political enemies and people of
power, and terrorist groups could infiltrate these databases. With the real-life example of China’s
use of facial recognition in its social credit system and its alleged relentless persecution of Uighur
Muslims, I present the audience with a reality that was once futuristic but is now imminent.
https://theconversation.com/australian-police-are-using-the-clearview-ai-facial-recognition-system-with-no-accountability-132667

https://www.vox.com/recode/2020/2/11/21131991/clearview-ai-facial-recognition-database-law-enforcement

https://www.theguardian.com/australia-news/2020/jun/19/victoria-police-distances-itself-from-controversial-facial-recognition-firm-clearview-ai

https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
