PSDA RESEARCH REPORT

ARTIFICIAL INTELLIGENCE
SUBMITTED TO MS. NIDHI KAUSHIK BY PRIYA KHANDELWAL,
BBA LLB SEC-E, A3221519300
Research Report
Artificial Intelligence: process, tools, technologies

INTRODUCTION

Artificial intelligence is claimed to play an increasing role in research in the management science and operational research areas. Intelligence is commonly considered the ability to collect knowledge and to reason about that knowledge in order to solve complex problems. In the near future, intelligent machines will replace human capabilities in many areas. Artificial intelligence is the intelligence exhibited by machines or software, and it is a subfield of computer science. It has become a popular field in computer science because it has enhanced human life in many areas. Over the last two decades, artificial intelligence has greatly improved the performance of manufacturing and service systems, and study in the area has given rise to the rapidly growing technology known as expert systems.

Artificial intelligence is the study and development of intelligent machines and software that can reason, learn, gather knowledge, communicate, manipulate, and perceive objects. John McCarthy coined the term in 1956 for the branch of computer science concerned with making computers behave like humans. It is the study of the computation that makes it possible to perceive, reason, and act.

Artificial intelligence differs from psychology in its emphasis on computation, and from computer science in its emphasis on perception, reasoning, and action. It makes machines smarter and more useful. It works with the help of artificial neurons (artificial neural networks) and scientific theorems (if-then statements and logic).

AI is a revolution that is transforming how humans live and work. It is a broad concept of machines being able to carry out tasks in a way that humans would consider "smart". The term goes back some 70 years to Alan Turing, who in 1950 defined a test, the Turing Test, to measure a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The revolution has many complex moving parts.

My goal is to simplify and to provide a perspective on how these complex parts come together as a three-layered cake. The top layer is the AI services, real-world applications solving real problems; the middle layer consists of the foundational ML algorithms; and the bottom layer is the ML platform that enables the top two.

First, the basic definitions. Artificial intelligence (AI) is the intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. Machine learning (ML) is a subset of AI based on the idea that we should be able to give machines access to data and let them learn for themselves. A neural network (NN) is in turn a subset of ML, in which a computer system is designed to classify information in a way analogous to a human brain. Deep learning (DL) is likewise a subset of ML that uses multi-layered artificial neural networks to solve complex problems such as object detection, speech recognition, and language translation.

AI technologies have matured to the point of offering real practical benefits in many of their applications. Major artificial intelligence areas are expert systems, natural language processing, speech understanding, robotics and sensory systems, computer vision and scene recognition, intelligent computer-aided instruction, and neural computing.

Of these, expert systems are a rapidly growing technology that is having a huge impact on various fields of life. The main techniques applied in artificial intelligence are neural networks, fuzzy logic, evolutionary computing, and hybrid artificial intelligence. Artificial intelligence has advantages over natural intelligence: it is more permanent and consistent, less expensive, easy to duplicate and disseminate, can be documented, and can perform certain tasks much faster and better than a human.
HISTORY OF AI:

The idea of inanimate objects coming to life as intelligent beings has been around
for a long time. The ancient Greeks had myths about robots, and Chinese and
Egyptian engineers built automatons. The beginnings of modern AI can be traced to
classical philosophers' attempts to describe human thinking as a symbolic system.
But the field of AI wasn't formally founded until 1956, at a conference at
Dartmouth College, in Hanover, New Hampshire, where the term "artificial
intelligence" was coined.

MIT cognitive scientist Marvin Minsky and others who attended the conference
were extremely optimistic about AI's future. "Within a generation [...] the problem
of creating 'artificial intelligence' will substantially be solved," Minsky is quoted as
saying in the book "AI: The Tumultuous Search for Artificial Intelligence" (Basic
Books, 1994).

But achieving an artificially intelligent being wasn't so simple. After several reports criticizing progress in AI, government funding and interest in the field dropped off, a period from 1974 to 1980 that became known as the "AI winter." The field later revived in the 1980s when the British government started funding it again, in part to compete with efforts by the Japanese.

The field experienced another major winter from 1987 to 1993, coinciding with the
collapse of the market for some of the early general-purpose computers, and
reduced government funding.

But research began to pick up again after that, and in 1997, IBM's Deep Blue
became the first computer to beat a chess champion when it defeated Russian
grandmaster Garry Kasparov. And in 2011, the computer giant's question-
answering system Watson won the quiz show "Jeopardy!" by beating reigning
champions Brad Rutter and Ken Jennings.
In 2014, the talking computer "chatbot" Eugene Goostman captured headlines for tricking judges into thinking it was a real flesh-and-blood human during a Turing test, a competition based on the test proposed by British mathematician and computer scientist Alan Turing in 1950 as a way to assess whether a machine is intelligent. But the accomplishment has been controversial, with artificial intelligence experts noting that only a third of the judges were fooled, and pointing out that the bot was able to dodge some questions by claiming it was an adolescent who spoke English as a second language.

Many experts now believe the Turing test isn't a good measure of artificial intelligence. "The vast majority of people in AI who've thought about the matter, for the most part, think it's a very poor test, because it only looks at external behavior," Perlis told Live Science. In fact, some scientists now plan to develop an updated version of the test. But the field of AI has become much broader than just the pursuit of true, humanlike intelligence.

PHILOSOPHY OF AI

Artificial intelligence has close connections with philosophy because both use
concepts that have the same names and these include intelligence, action,
consciousness, epistemology, and even free will.[1] Furthermore, the technology is
concerned with the creation of artificial animals or artificial people (or, at least,
artificial creatures; see Artificial life) so the discipline is of considerable interest to
philosophers.[2] These factors contributed to the emergence of the philosophy of
artificial intelligence. Some scholars argue that the AI community's dismissal of
philosophy is detrimental.[3]

The philosophy of artificial intelligence attempts to answer questions such as the following:
 Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
 Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
 Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are?

Questions like these reflect the divergent interests of AI researchers, cognitive scientists, and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.

Important propositions in the philosophy of AI include:

 Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.
 The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."
 Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
 John Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
 Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..."
AREAS OF ARTIFICIAL INTELLIGENCE

A. Language understanding: The ability to "understand" and respond to natural language, to translate from spoken language to a written form, and to translate from one natural language to another. 1.1 Speech Understanding; 1.2 Semantic Information Processing (Computational Linguistics); 1.3 Question Answering; 1.4 Information Retrieval; 1.5 Language Translation

B. Learning and adaptive systems: The ability to adapt behavior based on previous experience, and to develop general rules concerning the world based on such experience. 2.1 Cybernetics; 2.2 Concept Formation

C. Problem solving: The ability to formulate a problem in a suitable representation, to plan for its solution, and to know when new information is needed and how to obtain it. 3.1 Inference (Resolution-Based Theorem Proving, Plausible Inference and Inductive Inference); 3.2 Interactive Problem Solving; 3.3 Automatic Program Writing; 3.4 Heuristic Search

D. Perception (visual): The ability to analyze a sensed scene by relating it to an internal model which represents the perceiving organism's "knowledge of the world." The result of this analysis is a structured set of relationships between entities in the scene. 4.1 Pattern Recognition; 4.2 Scene Analysis

E. Modeling: The ability to develop an internal representation and set of transformation rules which can be used to predict the behavior of, and relationships between, some set of real-world objects or entities. 5.1 The Representation Problem for Problem Solving Systems; 5.2 Modeling Natural Systems (Economic, Sociological, Ecological, Biological, etc.); 5.3 Robot World Modeling (Perceptual and Functional Representations)

F. Robots: A combination of most or all of the above abilities with the ability to move over terrain and manipulate objects. 6.1 Exploration; 6.2 Transportation/Navigation; 6.3 Industrial Automation (e.g., Process Control, Assembly Tasks, Executive Tasks); 6.4 Security; 6.5 Other (Agriculture, Fishing, Mining, Sanitation, Construction, etc.); 6.6 Military; 6.7 Household

G. Games: The ability to accept a formal set of rules for games such as Chess, Go,
Kalah, Checkers, etc., and to translate these rules into a representation or structure
which allows problem-solving and learning abilities to be used in reaching an
adequate level of performance. 7.1 Particular Games (Chess, Go, Bridge, etc.)

WHAT ARE THE DIFFERENT TYPES OF AI?

At a very high level, artificial intelligence can be split into two broad types: narrow AI and general AI. Narrow AI is what we see all around us in computers today: intelligent systems that have been taught, or have learned, how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition
of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems
on self-driving cars, in the recommendation engines that suggest products you
might like based on what you bought in the past. Unlike humans, these systems can
only learn or be taught how to do specific tasks, which is why they are called
narrow AI.
WHAT IS MACHINE LEARNING?

There is a broad body of research in AI, much of which feeds into and complements the rest. Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.
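
As a very small sketch of this idea, assuming the scikit-learn library and a toy fruit dataset invented purely for illustration, a model can be handed examples rather than explicit rules:

    # Minimal sketch of machine learning: the system is given examples
    # rather than rules, and infers how to map inputs to outputs.
    # Assumes scikit-learn; the tiny dataset is invented for illustration.
    from sklearn.tree import DecisionTreeClassifier

    # Each row is [weight_in_grams, surface_smoothness_0_to_1]
    X = [[150, 0.9], [170, 0.8], [140, 0.95],   # apples
         [120, 0.2], [130, 0.3], [110, 0.25]]   # oranges
    y = ["apple", "apple", "apple", "orange", "orange", "orange"]

    model = DecisionTreeClassifier()
    model.fit(X, y)                      # "learn" the task from the data
    print(model.predict([[160, 0.85]]))  # -> ['apple']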

WHAT ARE NEURAL NETWORKS?

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers.

During training of these neural networks, the weights attached to different inputs
will continue to be varied until the output from the neural network is very close to
what is desired, at which point the network will have 'learned' how to carry out a
particular task.
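
The loop below is a minimal sketch of that training process, assuming only NumPy; the toy task (learning the logical OR of two inputs) and the learning rate are chosen purely for illustration:

    # A single artificial neuron: weights on the inputs are repeatedly
    # adjusted until the output is very close to the desired output.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy task: learn the logical OR of two binary inputs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 1])

    rng = np.random.default_rng(0)
    w = rng.normal(size=2)   # one weight per input
    b = 0.0                  # bias term

    for epoch in range(5000):
        out = sigmoid(X @ w + b)                        # forward pass
        error = out - y                                 # distance from desired output
        grad = error * out * (1 - out)                  # gradient of squared error
        w -= 0.5 * (X.T @ grad)                         # adjust the weights
        b -= 0.5 * np.sum(grad)

    print(np.round(sigmoid(X @ w + b), 2))  # close to [0, 1, 1, 1]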

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fuelled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks, with different strengths and
weaknesses. Recurrent neural networks are a type of neural net particularly well
suited to language processing and speech recognition, while convolutional neural
networks are more commonly used in image recognition. The design of neural
networks is also evolving, with researchers recently refining a more effective form
of deep neural network called long short-term memory or LSTM, allowing it to
operate fast enough to be used in on-demand systems like Google Translate.
Another area of AI research is evolutionary computation, which borrows from
Darwin's theory of natural selection, and sees genetic algorithms undergo random
mutations and combinations between generations in an attempt to evolve the
optimal solution to a given problem.
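
A minimal sketch of a genetic algorithm in this spirit, with the target phrase, population size, and mutation rate all invented for illustration, might look like this:

    # Candidate solutions undergo random mutation and crossover each
    # generation, and the fittest survive, gradually evolving a solution.
    import random

    TARGET = "ARTIFICIAL INTELLIGENCE"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(candidate):
        # Fitness = number of characters matching the target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    def crossover(p1, p2):
        cut = random.randrange(len(TARGET))
        return p1[:cut] + p2[cut:]

    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(200)]

    for generation in range(500):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        parents = population[:50]  # the fittest quarter breed
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(200)]

    print(generation, population[0])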

This approach has even been used to help design AI models, effectively using AI
to help build AI. This use of evolutionary algorithms to optimize neural networks
is called neuroevolution, and could have an important role to play in helping
design efficient AI as the use of intelligent systems becomes more prevalent,
particularly as demand for data scientists often outstrips supply. The technique was
recently showcased by Uber AI Labs, which released papers on using genetic
algorithms to train deep neural networks for reinforcement learning problems.

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing the machine to mimic the behaviour of a human expert in a specific domain. An example of these knowledge-based systems might be an autopilot system flying a plane.
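
A minimal sketch of such a rule-based system, with entirely invented rules and thresholds standing in for real aviation expertise, might look like this:

    # Hand-written if-then rules map several inputs to a decision,
    # mimicking a human expert. Rules and numbers are illustrative only.
    def autopilot_advice(altitude_ft, airspeed_kt, pitch_deg):
        if airspeed_kt < 120:
            return "increase thrust"   # rule 1: too slow, risk of stall
        if altitude_ft < 1000 and pitch_deg < 0:
            return "pitch up"          # rule 2: descending while too low
        if pitch_deg > 15:
            return "pitch down"        # rule 3: climbing too steeply
        return "hold course"           # default: no rule fired

    print(autopilot_advice(altitude_ft=900, airspeed_kt=180, pitch_deg=-2))
    # -> 'pitch up'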

WHAT ARE THE ELEMENTS OF MACHINE LEARNING?

As mentioned, machine learning is a subset of AI and is generally split into two main categories, supervised and unsupervised learning, with reinforcement learning often treated as a third.

Supervised learning

A common technique for teaching AI systems is to train them using a very large number of labelled examples. These machine-learning systems are fed huge amounts of data which has been annotated to highlight the features of interest. These might be photos labelled to indicate whether they contain a dog, or written sentences that have footnotes to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning, and the role of labelling these examples is commonly carried out by online workers employed through platforms like Amazon Mechanical Turk.
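
A minimal supervised-learning sketch of the 'bass' example above, assuming scikit-learn and a tiny hand-annotated dataset invented for illustration:

    # Labelled sentences teach the model which sense of 'bass' is meant.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    sentences = ["he played a bass solo on stage",
                 "the bass line drives the song",
                 "he caught a huge bass in the lake",
                 "bass swim near the river bed"]
    labels = ["music", "music", "fish", "fish"]   # the human annotations

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(sentences)       # words -> feature counts

    model = MultinomialNB()
    model.fit(X, labels)                          # learn from the labels

    test = vectorizer.transform(["he caught a bass in the river"])
    print(model.predict(test))                    # -> ['fish']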

Training these systems typically requires vast amounts of data, with some systems
needing to scour millions of examples to learn how to carry out a task effectively --
although this is increasingly possible in an age of big data and widespread data
mining. Training datasets are huge and growing in size -- Google's Open Images
Dataset has about nine million images, while its labelled video
repository YouTube-8M links to seven million labelled videos.

ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through Amazon Mechanical Turk -- who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power. In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves. This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data. An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, for example Google News grouping together stories on similar topics each day.
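
A minimal sketch of this kind of clustering, assuming scikit-learn and toy fruit weights invented for illustration:

    # No labels are given; the algorithm simply groups items by similarity.
    from sklearn.cluster import KMeans

    weights = [[120], [125], [118],        # lighter fruit
               [300], [310], [295]]        # heavier fruit

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
    groups = kmeans.fit_predict(weights)

    print(groups)                   # e.g. [0 0 0 1 1 1]: two clusters found
    print(kmeans.cluster_centers_)  # roughly [[121.], [301.67]]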

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick. In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen. By also looking at the score achieved in each game, the system builds a model of which action will maximize the score in different circumstances -- for instance, in the case of the video game Breakout, where the paddle should be moved in order to intercept the ball.
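
The sketch below shows tabular Q-learning, the simple ancestor of DeepMind's Deep Q-network, rather than the deep version itself; the toy corridor environment and the parameters are invented for illustration:

    # Trial and error plus rewards gradually build a table of which
    # action is best in each state.
    import random

    N_STATES = 6          # corridor cells 0..5; the reward waits at cell 5
    ACTIONS = (-1, +1)    # step left or step right

    # Q-table: estimated long-term value of each action in each state.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def best_action(state):
        # Greedy choice, breaking ties randomly so early episodes explore.
        return max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))

    for episode in range(500):
        state = 0
        while state != N_STATES - 1:
            action = (random.choice(ACTIONS) if random.random() < 0.1
                      else best_action(state))
            next_state = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            # Nudge the estimate toward the reward plus the discounted
            # value of the best follow-up action.
            target = reward + 0.9 * max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += 0.5 * (target - q[(state, action)])
            state = next_state

    # The learned policy: move right (+1) from every cell.
    print([best_action(s) for s in range(N_STATES - 1)])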

WHICH AI SERVICES/TOOLS ARE AVAILABLE?

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and
Google Cloud Platform -- provide access to GPU arrays for training and running
machine learning models, with Google also gearing up to let users use its Tensor
Processing Units -- custom chips whose design is optimized for training and
running machine-learning models.
All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires no machine-learning expertise from the user.

Cloud-based machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training machine-learning models.

For those firms that don't want to build their own machine learning models but
instead want to consume AI-powered, on-demand services -- such as voice, vision,
and language recognition -- Microsoft Azure stands out for the breadth of services
on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile,
IBM, alongside its more general on-demand offerings, is also attempting to sell
sector-specific AI services aimed at everything from healthcare to retail, grouping
these offerings together under its IBM Watson umbrella -- and recently investing
$2bn in buying The Weather Channel to unlock a trove of data to augment its AI
services.
HOW WILL AI CHANGE THE WORLD?

Robots and driverless cars

The desire for robots to be able to act autonomously and to understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, AI is helping robots move into new areas such as self-driving cars and delivery robots, as well as helping robots learn new skills.

General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone's voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people's images, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognise what people are saying with an accuracy of almost 95 percent. Recently, Microsoft's Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.
Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, provided the face is clear enough in the video. While police forces in western countries have generally only trialled facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior; they are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it's likely this more intrusive
use of AI technology -- including AI that can recognize emotions -- will gradually
become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases, and identifying molecules that could lead to more effective drugs. There have been trials of AI-related technology in hospitals across the world. These include IBM's Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK's National Health Service, where they will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.
WILL AN AI STEAL YOUR JOB?

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future possibility. While AI won't replace all jobs, what seems certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn't have the potential to
impact. As AI expert Andrew Ng puts it: "many people are doing routine,
repetitive jobs. Unfortunately, technology is especially good at automating routine,
repetitive work", saying he sees a "significant risk of technological unemployment
over the next few decades".

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers simply take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon again is leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses.

However, Amazon and small robotics firms are working to automate the remaining
manual jobs in the warehouse, so it's not a given that manual and robotic labor will
continue to grow hand-in-hand. Fully autonomous self-driving vehicles aren't a
reality yet, but by some predictions the self-driving trucking industry alone is
poised to take over 1.7 million jobs in the next decade, even without considering
the impact on couriers and taxi drivers.
Yet some of the easiest jobs to automate won't even require robotics. At present
there are millions of people working in administration, entering and copying data
between systems, chasing and booking appointments for companies. As software
gets better at automatically updating systems and flagging the information that's
important, so the need for administrators will fall.

As with every technological shift, new jobs will be created to replace those lost.
However, what's uncertain is whether these new roles will be created rapidly
enough to offer employment to those displaced, and whether the newly
unemployed will have the necessary skills or temperament to fill these emerging
roles.

Not everyone is a pessimist. For some, AI is a technology that will augment, rather than replace, workers. Not only that, but they argue there will be a commercial imperative not to replace people outright, as an AI-assisted worker -- think of a human concierge with an AR headset that tells them exactly what a client wants before they ask for it -- will be more productive or effective than an AI working on its own.

Among AI experts there's a broad range of opinion about how quickly artificially intelligent systems will surpass human capabilities. Oxford University's Future of Humanity Institute asked several hundred machine-learning experts to predict AI capabilities over the coming decades.

Notable dates included AI writing essays that could pass for being written by a
human by 2026, truck drivers being made redundant by 2027, AI surpassing
human capabilities in retail by 2031, writing a best-seller by 2049, and doing a
surgeon's work by 2053. They estimated there was a relatively high chance that AI
beats humans at all tasks within 45 years and automates all human jobs within 120
years.
