Artificial Intelligence and The Survival of Human Society
BY
MOFE O. JEJE
COMPUTER SCIENCE
1.0 INTRODUCTION
What is Artificial Intelligence? In computer science, Artificial Intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. A more elaborate definition, according to Andreas and Michael (2019), characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation." In other words, it is the development of computer systems or devices that perform tasks normally requiring human intelligence: perceiving their environment and taking actions that maximize their chances of successfully achieving their goals. Examples include visual perception, speech recognition, decision-making, and translation between languages, among others.
In practice, the reality of AI is seen in hundreds of different and highly specialized types of smart software, solving millions of different problems in different products. They are embedded within software and hardware all around us, and have been since the birth of computers. They are the computational equivalents of cogs and springs in mechanical devices. Just as a cog or spring cannot magically turn itself into a murderous killing robot, our smart software, embedded within its products, cannot turn itself into a malevolent AI. Contrary to what is portrayed in novels and movies, there is no race toward superhuman robots on the horizon, and such robots may not even be possible. Real AI saves lives and changes our daily lives, almost entirely in ways that improve human health, safety, and productivity. The other kind of AI, comprising the super-intelligent rogue AIs that will kill us all, is a fiction
(Richardson, 2017). It is against this background that a research seminar on Artificial Intelligence and the Survival of Human Society is hereby proposed.
[Figure: A taxonomy of Artificial Intelligence, spanning image processing, speech recognition, computer vision, object recognition, natural language processing (NLP), reinforcement learning, and neural architectures such as CNNs and RNNs.]
1.5 CLASSIFICATION OF AI
Within those definitions, there are a number of technologies that fall within the rubric of Artificial Intelligence:
Machine Learning. Data systems that modify themselves by building, testing, and discarding models recursively in order to better identify or classify input data (a minimal sketch appears after this list).
Reinforcement Learning. The use of rewards to strengthen (or weaken) specific outcomes in systems that pursue objectives. This is frequently used with agent systems (also sketched after this list).
Deep Learning. Systems that specifically rely upon non-linear neural networks to build out machine learning systems, often using machine learning itself to model the system doing the modeling. It is a subset of machine learning with a specific emphasis on neural networks.
Agent Systems. Systems in which autonomous agents interact within a given environment in order to simulate emergent or
crowd-based behavior.
Self-Modifying Graph Systems. These include knowledge bases in which the state of the system changes due to system-contingent heuristics.
Knowledge Bases, Business Intelligence Systems and Expert Systems. These often form a spectrum from traditional data systems to aggregate semantic knowledge graphs, which may be human-curated or built by machine learning.
Chatbots and Intelligent Agents. These are computer systems that are able to parse written or spoken text, use it to retrieve relevant content or perform certain actions, and respond with appropriately constructed content (a toy pipeline is sketched after this list).
Visual/Audio Recognition Systems. These are systems that work by converting media into an encoded, compressed form, and then using algorithms, via either indexes or machine learning systems, to find the closest matches.
Fractal Visualization. The connection between fractals and AI runs deep, and, not surprisingly, one of the biggest areas for AI is the development of parameterized natural rendering: the movement of water, the roar of fire, the coarseness of rock, and the effects of smoke in the air, all of which have become standard fare in big Hollywood blockbusters.
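As a concrete illustration of the "build, test, and discard" loop behind machine learning, the following minimal sketch fits two candidate models and keeps only the better performer. It assumes the scikit-learn library; the dataset and model choices are illustrative, not prescriptive.

```python
# Machine learning as "build, test, discard": fit several candidate models,
# score each on held-out data, and keep only the best performer.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = [LogisticRegression(max_iter=1000), DecisionTreeClassifier()]
scored = []
for model in candidates:
    model.fit(X_train, y_train)                          # build
    scored.append((model.score(X_test, y_test), model))  # test

best_score, best_model = max(scored, key=lambda pair: pair[0])  # discard the rest
print(f"kept {type(best_model).__name__} with accuracy {best_score:.2f}")
```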
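Reinforcement learning's reward-driven strengthening of outcomes can likewise be illustrated with a toy two-armed bandit. The epsilon-greedy agent and the hidden reward probabilities below are invented purely for illustration.

```python
# A reward repeatedly strengthens (or weakens) the agent's estimate of each
# action's value; over time the agent favors the action that pays off more.
import random

reward_prob = [0.3, 0.7]   # hidden payout rate of each action (illustrative)
value = [0.0, 0.0]         # the agent's learned estimate of each action's worth
counts = [0, 0]
epsilon = 0.1              # fraction of the time spent exploring

for step in range(1000):
    if random.random() < epsilon:
        action = random.randrange(2)                    # explore
    else:
        action = max(range(2), key=lambda a: value[a])  # exploit
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    counts[action] += 1
    # incremental average: each reward nudges the action's estimated value
    value[action] += (reward - value[action]) / counts[action]

print("learned action values:", [round(v, 2) for v in value])
```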
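The parse-retrieve-respond pipeline of a chatbot can be sketched in a few lines. The keyword table here is a deliberately crude stand-in for the much richer language understanding and retrieval back-ends of real systems.

```python
# A toy chatbot: parse the text into tokens, retrieve matching content,
# and respond with appropriately constructed output.
import re

RESPONSES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "price": "Pricing information is available on our website.",
}

def reply(user_text: str) -> str:
    tokens = re.findall(r"[a-z]+", user_text.lower())  # parse
    for keyword, answer in RESPONSES.items():          # retrieve
        if keyword in tokens:
            return answer                              # respond
    return "Sorry, I did not understand. Could you rephrase?"

print(reply("What are your hours?"))
```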
It is also worth noting technologies that are not themselves AI systems, but often play in the same general "space":
Autonomous Vehicles. These make use of visual recognition systems and real time modeling in order to both anticipate
obstacles (static and moving) and to determine actions based upon objectives.
Drones. A drone is an autonomous vehicle without a passenger, and can be as small as a dragonfly or as large as a jet. Drones
can also act in a coordinated fashion, either by following swarm behavior (an agent system) or by following preprogrammed
instructions.
Data Science / Data Analytics. This is the use of data to identify patterns or predict behavior (a simple trend-fitting sketch appears after this list). It uses a combination of machine learning techniques and numeric statistical analysis, along with an increasingly large role for non-linear differential equations, without the use of higher-order functions or recursion.
Blockchain and Distributed Ledgers. These have applications throughout the AI space, especially in the realm of agent systems, even if they are not AI per se. Distributed ledger technology is playing a bigger and bigger role in tracking resources and transactions.
Internet of Things / Robotics. The Internet of Things is intended to provide network connectivity to devices so that they can communicate with other devices. Like robots, such devices end up managing their own state and rely upon AI-based systems for identifying signals and determining responses.
GPUs. Artificial intelligence is taking advantage of Graphics Processing Units in a big way, as their structure makes them ideal for both semantic analysis and recursive filter applications.
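As a toy illustration of data analytics as pattern identification and prediction, the sketch below fits a least-squares trend line and extrapolates it one step ahead. It assumes NumPy, and the monthly sales figures are invented.

```python
# Fit a simple statistical model to observed data to identify a pattern
# and predict future behavior.
import numpy as np

months = np.arange(1, 7)
sales = np.array([102, 110, 123, 129, 141, 150])  # invented figures

slope, intercept = np.polyfit(months, sales, deg=1)  # least-squares trend line
print(f"trend: +{slope:.1f} units per month")
print(f"forecast for month 7: {slope * 7 + intercept:.0f}")
```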
Inability to be flexible and reasonable: One of the big questions in artificial intelligence is to what extent machines can become intelligent, and to what extent they can mirror the capabilities of the human brain. Some debates on artificial intelligence begin with the Turing test, developed in the 20th century, which simply asks whether a computer could fool a human into thinking they were communicating with another human when, in fact, they were communicating with a machine. During the test, when asked to reason or think flexibly, machines fail, regardless of their depth of data and strength of processing power. Some companies, for instance DeepMind and OpenAI, claim their objective is to develop AGI that matches human intelligence. This may be a great objective, but we are still very, very far from it.
Inability to reproduce results like science: AI is ultimately a system run by computers, so its results should be subject to the scientific method. That is, we should be able to "see, test, and verify." In other words, any result should be reproducible. Currently, it is not. This may be one of AI's most substantial limitations; ultimately, the field's future depends on its results being reproducible. Random outcomes from the same data inputs are unacceptable for any system.
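One standard mitigation for this reproducibility problem is to fix every source of randomness up front, so that the same inputs always yield the same outputs. A minimal sketch, assuming NumPy, follows; in practice framework-specific seeding (e.g. for GPU kernels) may also be required.

```python
# Seed every random number generator the computation touches, so repeated
# runs over the same data produce identical results.
import random
import numpy as np

def reproducible_run(seed: int = 42) -> float:
    random.seed(seed)     # Python's built-in RNG
    np.random.seed(seed)  # NumPy's RNG
    data = np.random.normal(size=100)
    return float(data.mean())

assert reproducible_run() == reproducible_run()  # identical across runs
```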
Lethal Autonomous Weapons: Currently, more than 50 countries are researching battlefield robots, including the United States, China, Russia, and the United Kingdom. If this research succeeds, it may create military contexts in which it is rational to relinquish human control almost entirely to autonomous weapons. In this problem domain, the degree of complexity will be higher than in preventing the development and proliferation of nuclear weapons, because if humanity forces itself into an arms race on this new technological level, the race may resist political intervention and become difficult to curb (Hawking et al., 2015).
Malevolent AI: Humans should not assume that machines or robots would treat us favorably, because there is no a priori reason to believe they would be sympathetic to our system of morality, which evolved along with our particular biology (which AIs would not share). Hyper-intelligent software may not necessarily decide to support the continued existence of humanity, and may be extremely difficult to stop; if allowed to run unchecked, it may become a real source of risk to civilization, humans, and planet Earth (Charles, 2003).
Recursive Self-Improvement and the Technological Singularity: If research into AI produces sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software, in turn, may be better at improving itself, leading to recursive self-improvement. When this happens, the new intelligence could increase exponentially and dramatically surpass human intelligence. The technological singularity is the point at which accelerating progress in technologies causes a runaway effect, wherein artificial intelligence exceeds human intellectual capacity and control, thus radically changing or even ending civilization. Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable (Jacob, 2016).
Decrease in demand for human labor: The relationship between automation and employment is complicated. While automation
eliminates old jobs, it also creates new jobs through micro-economic and macro-economic effects. Unlike previous waves of
automation, many middle-class jobs may be eliminated by artificial intelligence. The Economist (2015) states that "the worry that AI
could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".
Rise of Authoritarian Regimes: In combination with Big Nudging and predictive user control, intelligent surveillance technology
could also increase global risks by locally helping to stabilize authoritarian regimes in an efficient manner.
1.8 CONCLUSIONS
In the coming years, as the public encounters new AI applications in domains such as transportation and healthcare, these applications must be introduced in ways that build trust and understanding, and respect human and civil rights. While encouraging innovation, policies and processes should address ethical, privacy, and security implications, and should also work to ensure that the benefits of AI technologies are spread broadly and fairly. Doing so will be critical if Artificial Intelligence research and its applications are to exert a positive influence on the survival of human society.
1.9 RECOMMENDATIONS
Ensure “Interpretability” of AI systems: It should be possible to understand the decisions made by an AI agent, especially when those decisions have implications for public safety or result in discriminatory practices.
Ensure Human Interpretability of Algorithmic Decisions: AI systems must be designed with the minimum requirement that the designer can account for an AI agent’s behaviors. Systems with potentially severe implications for public safety should also have the functionality to provide information in the event of an accident.
Empower Users: Providers of services that utilize AI need to incorporate the ability for the user to request and receive basic explanations as to why a decision was made (a minimal sketch follows below).
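What such a basic explanation might look like is sketched below, using a decision tree because its feature importances are directly inspectable. It assumes scikit-learn, and the loan-style feature names and data are invented for illustration.

```python
# Alongside each automated decision, report which inputs mattered most.
from sklearn.tree import DecisionTreeClassifier

features = ["income", "debt", "years_employed"]  # invented feature names
X = [[50, 10, 5], [20, 15, 1], [80, 5, 10], [30, 20, 2]]
y = [1, 0, 1, 0]  # 1 = approve, 0 = deny

model = DecisionTreeClassifier(random_state=0).fit(X, y)
decision = model.predict([[40, 12, 3]])[0]
print("decision:", "approve" if decision else "deny")
for name, weight in zip(features, model.feature_importances_):
    print(f"  {name}: importance {weight:.2f}")
```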
Public Empowerment: The public’s ability to understand AI-enabled services, and how they work, is key to ensuring trust in the
technology.
“Algorithmic Literacy” must be a basic skill: Whether it is the curation of information on social media platforms or the operation of self-driving cars, users need to be aware of, and have a basic understanding of, the role of algorithms and autonomous decision-making. Such skills will also be important in shaping societal norms around the use of the technology.
Provide the public with information: While full transparency around a service’s machine learning techniques and training
data is generally not advisable due to the security risk, the public should be provided with enough information to make it possible
for people to question its outcomes.
Responsible Deployment: The capacity of an AI agent to act autonomously, and to adapt its behavior over time without human
direction, calls for significant safety checks before deployment, and ongoing monitoring.
Humans must be in control: Any autonomous system must allow a human to interrupt an activity or shut down the system (an “off-switch”); a minimal pattern is sketched below. There may also be a need to incorporate human checks on new decision-making strategies in AI system design, especially where the risk to human life and safety is great.
Make safety a priority: Any deployment of an autonomous system should be extensively tested beforehand to ensure the AI
agent’s safe interaction with its environment (digital or physical), and that it functions as intended. Autonomous systems should
be monitored while in operation, and updated or corrected as needed.
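The “off-switch” requirement above can be realized with a very simple pattern: the autonomous loop checks a human-controlled stop signal before every action, so an operator can interrupt the system at any time. The sketch below is a minimal illustration; the action body is a placeholder.

```python
# The autonomous loop defers to a human-controlled stop signal.
import threading
import time

stop_signal = threading.Event()  # set by a human operator to halt the system

def autonomous_loop():
    while not stop_signal.is_set():  # the human check dominates autonomy
        time.sleep(0.1)              # placeholder for the agent's next action
    print("system halted by human operator")

worker = threading.Thread(target=autonomous_loop)
worker.start()
time.sleep(0.5)
stop_signal.set()  # the operator presses the off-switch
worker.join()
```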
Privacy is key: AI systems must be data responsible. They should use only what they need and delete it when it is no longer
needed (“data minimization”). They should encrypt data in transit and at rest, and restrict access to authorized persons (“access
control”). AI systems should only collect, use, share and store data in accordance with privacy and personal data laws and best
practices.
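A minimal sketch of the data minimization and encryption practices just described follows. It assumes the third-party cryptography package, and the record fields are invented for illustration.

```python
# Keep only the field that is needed, store it encrypted, delete it after use.
from cryptography.fernet import Fernet

record = {"name": "Ada", "email": "ada@example.com"}  # invented user record

needed = {"email": record["email"]}  # data minimization: keep only what is needed

key = Fernet.generate_key()
vault = Fernet(key)
stored = vault.encrypt(needed["email"].encode())  # encrypted at rest

print(vault.decrypt(stored).decode())  # ... use the data ...

del stored, key  # delete when no longer needed
```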
Think before you act: Careful thought should be given to the instructions and data provided to AI systems. AI systems should
not be trained with data that is biased, inaccurate, incomplete or misleading.
If they are connected, they must be secured: AI systems that are connected to the Internet should be secured not only for their own protection, but also to protect the Internet from malfunctioning or malware-infected AI systems that could become the next generation of botnets. High standards of device, system and network security should be applied.
Responsible disclosure: Security researchers acting in good faith should be able to responsibly test the security of AI systems
without fear of prosecution or other legal action. At the same time, researchers and others who discover security vulnerabilities
or other design flaws should responsibly disclose their findings to those who are in the best position to fix the problem.
Ensuring Accountability: Legal accountability has to be ensured when human agency is replaced by decisions of AI agents.
Ensure legal certainty: Governments should ensure legal certainty on how existing laws and policies apply to algorithmic
decision-making, and the use of autonomous systems to ensure a predictable legal environment. This includes working with
experts from all disciplines to identify potential gaps and run legal scenarios. Similarly, those designing and using AI should be
in compliance with existing legal frameworks.
Put users first: Policymakers need to ensure that any laws applicable to AI systems and their use put users’ interests at the center. This must include the ability for users to challenge autonomous decisions that adversely affect their interests.
Assign liability up-front: Governments working with all stakeholders need to make some difficult decisions now about who
will be liable in the event that something goes wrong with an AI system, and how any harm suffered will be remedied.
REFERENCES
Andreas, K. and Michael, H. The interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1): 15–25, 2019.
Charles, R. Artificial intelligence and human nature. The New Atlantis, 1: 88–100, 2003.
Hawking, S., Musk, E., and Gates, B. Warn about artificial intelligence. Observer, 19 August 2015.
Jacob, R. Thinking machines: The search for artificial intelligence. Distillations, 2(2): 14–23, 2016.
Matt, S. Andrew Yang’s presidential bid is so very 21st century. Wired, 2019. Available online: www.wired.com
Richardson, J. Three ways artificial intelligence is good for society. IQ Magazine, Intel, 2017. Available online: https://iq.intel.com/artificial-intelligence-is-good-for-society/
Siegel, E. Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie or Die. John Wiley & Sons, Inc., 2013. ISBN: 978-1-118-35685-2.
Thrall, P. H., Bever, J. D., and Burdon, J. J. Evolutionary change in agriculture: the past, present and future. Evolutionary Applications, 3(5-6): 405–408, 2012. doi: 10.1111/j.1752-4571.2010.00155.x