
Artificial Intelligence

** 1 INTRODUCTION
 A branch of computer science that aims to create or replicate human
intelligence in machines.

Artificial Intelligence is:

 An intelligent entity created by humans.


 Capable of performing tasks intelligently without being explicitly
instructed.
 Capable of thinking and acting rationally and humanely.

Applications of Artificial Intelligence


1. Google’s AI-powered predictions (Google Maps)
2. Ride-sharing applications (Uber, Lyft)
3. AI Autopilot in Commercial Flights
4. Spam filters on Emails
5. Plagiarism checkers and tools
6. Facial Recognition
7. Search recommendations
8. Voice-to-text features
9. Smart personal assistants (Siri, Alexa)
10. Fraud protection and prevention

Goals
1. Problem-solving
2. Knowledge representation
3. Planning
4. Learning
5. Social Intelligence

Advantages 
 Reduction in human error
 Available 24×7
 Helps in repetitive work
 Digital assistance 
 Faster decisions
 Rational Decision Maker
 Medical applications
 Improves Security
 Efficient Communication

Disadvantages
 Cost overruns
 Dearth of talent
 Lack of practical products
 Lack of standards in software development
 Potential for misuse
 Highly dependent on machines
 Requires Supervision

What are the branches of AI?

Artificial Intelligence can be divided mainly into six branches: Machine Learning, Neural Networks, Deep Learning, Computer Vision, Natural Language Processing, and Cognitive Computing.

**2 Artificial Intelligence problems


1. Lack of technical knowledge

2. The price factor


Small and mid-sized organizations struggle a lot when it comes to adopting AI technologies, as it is a costly affair.

3. Data acquisition and storage

AI works best when it has a good amount of quality data available to it. The algorithm becomes stronger and performs better as the relevant data grows. The AI system fails badly when enough quality data is not fed into it.

4. Rare and expensive workforce

AI experts are expensive and rare in the current marketplace.

5. Issue of responsibility

The implementation of an AI application comes with great responsibility: a specific individual must bear the burden of any sort of hardware malfunction.

6. Ethical challenges

One of the major AI problems yet to be tackled is ethics and morality. Developers are technically grooming AI bots to the point where they can flawlessly imitate human conversations, making it increasingly tough to spot the difference between a machine and a real customer service rep.

7. Lack of computation speed

AI, machine learning and deep learning solutions require a high degree of computation speed, offered only by high-end processors.

8. Legal Challenges
An AI application with an erroneous algorithm or poor data governance can cause legal challenges for the company.

9. AI Myths & Expectation:

There’s a quite discrepancy between the actual potential of the AI system and
the expectations of this generation.
However, in most of the occasions (particularly in highly specialized roles), AI
cannot substitute the caliber of the human brain and what it brings to the
table.

** 3. AI Technique

1. Heuristics.
2. Support Vector Machines.
3. Artificial Neural Networks.
4. Markov Decision Process.
5. Natural Language Processing.
Heuristics

 It is one of the most popular search algorithms used in Artificial Intelligence.
 It is implemented to solve problems faster than the classic methods, or to find solutions for which the classic methods cannot.
 Heuristic techniques basically employ a heuristic for their moves and are used to reduce the total number of alternatives for the results.
 This technique is one of the most basic techniques used for AI and is based on the principle of trial and error. It learns from its mistakes.
 Heuristics is one of the best options for solving difficult problems. For instance, to know the shorter route to any destination, the best way is to identify all the possible routes and then identify the shortest one.
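
As a small illustration of the idea (a hypothetical grid example, not taken from the text above), a heuristic can be a cheap estimate such as the Manhattan distance to the goal, used to rank which move to try first:

# A minimal sketch of a heuristic: estimate the remaining cost with the
# Manhattan distance on a grid, then use it to rank candidate moves.

def manhattan(state, goal):
    """Estimated cost (number of grid steps) from state to goal."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

def best_move(state, goal):
    """Greedily pick the neighbouring cell with the lowest heuristic value."""
    x, y = state
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return min(neighbours, key=lambda n: manhattan(n, goal))

print(best_move((0, 0), (3, 2)))   # -> (1, 0): the step that lowers the estimate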
Support Vector Machines
 Support Vector Machine is a supervised machine learning algorithm used for regression challenges or classification problems.
 However, in the majority of cases it is used for classification only. For instance, email systems use support vector machines to classify email as Social, Promotion or any other category; each mail is categorized accordingly.
 This technique is widely used for face recognition, text recognition and image recognition systems.
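
A minimal sketch of SVM classification in Python, assuming the scikit-learn library is available; the tiny training set below is made up purely for illustration:

# Minimal SVM classification sketch using scikit-learn (assumed available).
from sklearn import svm

# Toy training data: two numeric features per email (e.g. counts of two words)
# and a label: 0 = "Promotion", 1 = "Social".  The numbers are invented.
X_train = [[5, 0], [4, 1], [0, 6], [1, 5]]
y_train = [0, 0, 1, 1]

clf = svm.SVC(kernel="linear")   # a linear support vector classifier
clf.fit(X_train, y_train)

print(clf.predict([[4, 0], [0, 4]]))   # expected: [0 1]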


Artificial Neural Network

 Neural networks are generally found in the brains of living organisms.
 These are basically the neural circuits which help living beings to transmit and process information.
 For this purpose, there are billions of neurons which make up the neural systems used for making decisions in day-to-day life and for learning new things.
 These natural neural networks have inspired the design of the Artificial Neural Network. Instead of neurons, Artificial Neural Networks are composed of nodes.
 These networks help in identifying patterns in the data and then learn from them.
 For this purpose, they use different learning methods such as supervised learning, unsupervised learning and reinforcement learning.
 From an application perspective, they are used in machine learning, deep learning and pattern recognition.
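
As a rough sketch (the weights here are arbitrary and untrained), a single forward pass through a tiny network of nodes can be written with NumPy:

# A tiny artificial neural network: one hidden layer, one output node.
# The weights are arbitrary; training (supervised, unsupervised, reinforcement)
# would normally adjust them from data.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.2])                # input features
W1 = np.array([[0.1, 0.4],              # input -> hidden weights (2x2)
               [0.8, 0.3]])
W2 = np.array([0.6, 0.9])               # hidden -> output weights

hidden = sigmoid(W1 @ x)                # activations of the hidden nodes
output = sigmoid(W2 @ hidden)           # network prediction in (0, 1)
print(output)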
Markov Decision Process
 A Markov Decision Process (MDP) is a framework for decision-making modeling where in some situations the outcome is partly random and partly based on the input of the decision maker.
 Another application where MDP is used is optimized planning. The basic goal of MDP is to find a policy for the decision maker, indicating what particular action should be taken in what state.
 An MDP model consists of the following parts:

1. A set of possible states: for example, this can refer to a grid world of a robot or the states of a door (open or closed).
2. A set of possible actions: a fixed set of actions that e.g. a robot can take, such as going north, left, south or west; or, with respect to a door, closing or opening it.
3. Transition probabilities: the probability of going from one state to another. For example, what is the probability that the door is closed after the action of closing the door has been performed?
4. Rewards: these are used to direct the planning. For instance, a robot may want to move north to reach its destination, so actually going north will result in a higher reward.
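
These four parts can be combined in a small value-iteration sketch; the door example below, with made-up probabilities and rewards, is only an illustration of how a policy is computed:

# Value iteration for a toy MDP: a door that is either "open" or "closed".
# States, actions, transition probabilities and rewards follow the four parts above.
states = ["open", "closed"]
actions = ["open_it", "close_it"]

# P[(state, action)] -> list of (probability, next_state); numbers are invented
P = {
    ("closed", "open_it"):  [(0.8, "open"), (0.2, "closed")],
    ("closed", "close_it"): [(1.0, "closed")],
    ("open",   "open_it"):  [(1.0, "open")],
    ("open",   "close_it"): [(0.9, "closed"), (0.1, "open")],
}
R = {"open": 1.0, "closed": 0.0}   # reward for being in a state
gamma = 0.9                        # discount factor

V = {s: 0.0 for s in states}
for _ in range(50):                # value iteration
    V = {s: R[s] + gamma * max(sum(p * V[s2] for p, s2 in P[(s, a)])
                               for a in actions)
         for s in states}

# The policy: the action with the best expected value in each state.
policy = {s: max(actions, key=lambda a: sum(p * V[s2] for p, s2 in P[(s, a)]))
          for s in states}
print(policy)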
Natural Language Processing

 Basically, it is a technique used by computers to understand, interpret and manipulate human language. Going by its use, it is helpful for speech recognition and speech synthesis.
 This technique is already used for several applications by a myriad of companies. Apple's Siri, Google Assistant, Microsoft's Cortana and Amazon's Alexa are some of the applications which use Natural Language Processing techniques.
 Additionally, it is also used for parsing, part-of-speech tagging, and text recognition.
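
As a quick illustration, assuming the NLTK library is installed (and its tokenizer and tagger data have been downloaded), tokenization and part-of-speech tagging of a sentence look like this:

# Tokenization and part-of-speech tagging with NLTK (assumed installed).
import nltk
# One-time data download (package names may vary slightly by NLTK version):
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

sentence = "Siri and Alexa use natural language processing."
tokens = nltk.word_tokenize(sentence)     # split the sentence into words
tags = nltk.pos_tag(tokens)               # label each word with its part of speech
print(tags)                               # e.g. [('Siri', 'NNP'), ('and', 'CC'), ...]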
** 4 Define the problem as State Space Search.
The steps that are required to build a system to solve a particular problem are:

1. Problem Definition, which must include precise specifications of what the initial situation will be as well as what final situations constitute acceptable solutions to the problem.

2. Problem Analysis, which can have an immense impact on the appropriateness of the various possible techniques for solving the problem.

3. Selection of the best technique(s) for solving the particular problem.

State Space Search


A state space represents a problem in terms of states and operators
that change states. A state space consists of:
 A representation of the states the system can be in. For example,
in a board game, the board represents the current state of the game.
 A set of operators that can change one state into another state. In
a board game, the operators are the legal moves from any given
state. Often the operators are represented as programs that change
a state representation to represent the new state.
 An initial state.
 A set of final states; some of these may be desirable, others
undesirable. This set is often represented implicitly by a program
that detects terminal states.
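
For instance, the classic water-jug puzzle (used here only as an illustration, it is not part of the text above) maps directly onto these four parts:

# State space for the 4/3-litre water-jug puzzle.
# A state is a tuple (a, b): litres in the 4-litre and 3-litre jugs.
INITIAL = (0, 0)

def operators(state):
    """All legal successor states (the 'moves') from a given state."""
    a, b = state
    return {
        (4, b), (a, 3),                          # fill either jug
        (0, b), (a, 0),                          # empty either jug
        (min(4, a + b), max(0, b - (4 - a))),    # pour 3L jug into 4L jug
        (max(0, a - (3 - b)), min(3, a + b)),    # pour 4L jug into 3L jug
    } - {state}

def is_goal(state):
    """Final states: exactly 2 litres in the 4-litre jug."""
    return state[0] == 2

print(operators(INITIAL))   # successor states of the initial state (0, 0)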

5 PRODUCTION SYSTEMS

Definition

A production system or production rule system is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior, but it also includes the mechanism necessary to follow those rules as the system responds to states of the world.

Components
 Global Database: The global database is the central data structure used by the production system in Artificial Intelligence.

 Set of Production Rules: The production rules operate on the global database. Each rule usually has a precondition that is either satisfied or not by the global database. If the precondition is satisfied, the rule can usually be applied. The application of the rule changes the database.

 A Control System: The control system chooses which applicable rule should be applied and ceases computation when a termination condition on the database is satisfied. If multiple rules could fire at the same time, the control system resolves the conflict.
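
Putting the three components together, a minimal, purely illustrative production system might look like the following sketch (the facts and rules are made up):

# Minimal production system: a global database, production rules with a
# precondition and an action, and a simple control loop.
database = {"raw_material", "machine_idle"}          # global database (facts)

rules = [
    # (precondition facts, facts added when the rule fires)
    ({"raw_material", "machine_idle"}, {"part_made"}),
    ({"part_made"},                    {"part_packed"}),
]

def control_system(db, rules, goal):
    """Repeatedly apply the first applicable rule until the goal fact appears."""
    while goal not in db:
        applicable = [add for pre, add in rules if pre <= db and not add <= db]
        if not applicable:           # termination: no rule can change the database
            return db
        db = db | applicable[0]      # conflict resolution: pick the first rule
    return db

print(control_system(database, rules, "part_packed"))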

Advantages
 Provides excellent tools for structuring AI programs
 The system is highly modular because individual rules can be
added, removed or modified independently
 Separation of knowledge and control (the recognize-act cycle)
 A natural mapping onto state-space search, either data-driven or goal-driven
 The system uses pattern directed control which is
more flexible than algorithmic control
 Provides opportunities for heuristic control of the search
 A good way to model the state-driven nature of intelligent
machines
 Quite helpful in a real-time environment and applications.

Disadvantages:
 It is very difficult to analyze the flow of control within a production
system
 It describes the operations that can be performed in a search for a
solution to the problem.
 There is an absence of learning due to a rule-based production
system that does not store the result of the problem for future use.
 The rules in the production system should not have any type of conflict; when a new rule is added to the database, it should be ensured that it does not conflict with the existing rules.

6 Production system characteristics


1. Simplicity: The structure of each sentence in a production system is unique and uniform, as they all use the "IF-THEN" structure. This structure provides simplicity in knowledge representation and improves the readability of production rules.

2. Modularity: The production rules encode the knowledge in discrete pieces. Information can be treated as a collection of independent facts which may be added to or deleted from the system with essentially no deleterious side effects.

3. Modifiability: This means the facility for modifying rules. It allows production rules to be developed in a skeletal form first and then refined to suit a specific application.

4. Knowledge-intensive: The knowledge base of the production system stores pure knowledge. This part does not contain any control or programming information. Each production rule is normally written as an English sentence, and the problem of semantics is solved by the very structure of the representation.

7. Problem characteristics

1. Is the problem decomposable into a set of independent smaller or easier sub-problems?

2. Can solution steps be ignored or at least undone if they prove unwise?

3. Is the problem's universe predictable?

4. Is a good solution to the problem obvious without comparison to all other possible solutions?

5. Is the desired solution a state of the world or a path to a state?

6. What is the role of knowledge?

7. Does the task require interaction with a person?

Problem Classification

When actual problems are examined from the point of view of all these questions, it becomes apparent that there are several broad classes into which the problems fall.
Heuristic search techniques part 2 of unit 1
10 Generate and test:- Generate and Test Search is a heuristic search technique based on Depth First Search with backtracking, which guarantees to find a solution if done systematically and a solution exists. In this technique, all the solutions are generated and tested for the best solution. It ensures that the best solution is checked against all possible generated solutions.
It is also known as the British Museum Search Algorithm, as it is like looking for an exhibit at random, or finding an object in the British Museum by wandering randomly. The evaluation is carried out by the heuristic function, as all the solutions are generated systematically in the generate and test algorithm; but if there are some paths which are most unlikely to lead us to the result, then they are not considered.
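
A minimal sketch of generate and test for a toy problem (finding a digit combination with a given sum; the problem itself is an assumption made only for illustration):

# Generate and test: systematically generate candidate solutions and
# test each one until an acceptable solution is found.
from itertools import product

def generate_and_test(test, candidates):
    for candidate in candidates:          # generator of possible solutions
        if test(candidate):               # test against the goal condition
            return candidate
    return None                           # no solution exists

# Toy problem: find a 3-digit combination whose digits sum to 15.
solution = generate_and_test(lambda c: sum(c) == 15, product(range(10), repeat=3))
print(solution)   # first systematically generated combination that passes the test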

o Hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak value where no neighbor has a higher value.

o Hill climbing algorithm is a technique which is used for optimizing mathematical problems. One of the widely discussed examples of the hill climbing algorithm is the Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman.

o It is also called greedy local search as it only looks to its good immediate neighbor state and not beyond that.

o A node of the hill climbing algorithm has two components, which are state and value.

o Hill climbing is mostly used when a good heuristic is available.

o In this algorithm, we don't need to maintain and handle the search tree or graph, as it only keeps a single current state.

Features of Hill Climbing:


Following are some main features of Hill Climbing Algorithm:
o Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
o Greedy approach: Hill-climbing algorithm search moves in the direction
which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not
remember the previous states.
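
A minimal hill-climbing sketch over a made-up one-dimensional objective, illustrating the greedy, no-backtracking loop described above:

# Simple hill climbing: from the current state, move to the best neighbour
# as long as it improves the objective value; stop at a peak (no backtracking).
def hill_climb(start, value, neighbours):
    current = start
    while True:
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):   # no neighbour is higher: a peak
            return current
        current = best                      # greedy move uphill

# Toy objective with a peak at x = 3 (local maximum = global maximum here).
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, value, neighbours))     # -> 3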

Heuristic function: A heuristic is a function which is used in informed search, and it finds the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method, however, might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. The heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the pair of states. The value of the heuristic function is always positive.

Admissibility of the heuristic function is given as:

1. h(n) <= h*(n)  

Here h(n) is the heuristic (estimated) cost, and h*(n) is the actual (optimal) cost. Hence the heuristic cost should be less than or equal to the actual cost.

Pure Heuristic Search:

Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, an OPEN and a CLOSED list. In the CLOSED list, it places those nodes which have already been expanded, and in the OPEN list, it places nodes which have not yet been expanded.

On each iteration, the node n with the lowest heuristic value is expanded, all its successors are generated, and n is placed in the CLOSED list. The algorithm continues until a goal state is found.

In the informed search we will discuss two main algorithms which are given below:

o Best First Search Algorithm(Greedy search)

o A* Search Algorithm

1.) Best-first Search Algorithm (Greedy Search):

The greedy best-first search algorithm always selects the path which appears best at that moment. It is a combination of depth-first search and breadth-first search algorithms. It uses the heuristic function to guide the search. Best-first search allows us to take the advantages of both algorithms. With the help of best-first search, at each step we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by the heuristic function, i.e.

1. f(n) = h(n)   

where h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented using a priority queue.

Best-first search algorithm:

o Step 1: Place the starting node into the OPEN list.

o Step 2: If the OPEN list is empty, stop and return failure.

o Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.

o Step 4: Expand the node n, and generate the successors of node n.

o Step 5: Check each successor of node n, and find whether any node is a goal node or not. If any successor node is a goal node, then return success and terminate the search, else proceed to Step 6.

o Step 6: For each successor node, the algorithm checks its evaluation function f(n), and then checks if the node is already in either the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
o Step 7: Return to Step 2.
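
The steps above can be sketched with a priority queue (Python's heapq); the small graph and its heuristic values are hypothetical:

# Greedy best-first search: always expand the OPEN node with the lowest h(n).
import heapq

def best_first_search(graph, h, start, goal):
    open_list = [(h[start], start, [start])]      # priority queue ordered by h(n)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)  # node with the lowest h(n)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for succ in graph[node]:
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None                                   # OPEN list empty: failure

# Hypothetical graph and heuristic values (estimated costs to reach G).
graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
print(best_first_search(graph, h, "S", "G"))      # -> ['S', 'A', 'G']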

Advantages:
o Best first search can switch between BFS and DFS by gaining the advantages
of both the algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.

Disadvantages:
o It can behave as an unguided depth-first search in the worst case scenario.

o It can get stuck in a loop, like DFS.

o This algorithm is not optimal.

And-Or graph

PROBLEM REDUCTION:
So far we have considered search strategies for OR graphs, through which we want to find a single path to a goal. Such a structure represents the fact that we will know how to get from a node to a goal state if we can discover how to get from that node to a goal state along any one of the branches leaving it.

AND-OR GRAPHS
The AND-OR graph (or tree) is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems, all of which must then be solved. This decomposition, or reduction, generates arcs that we call AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved in order for the arc to point to a solution. Just as in an OR graph, several arcs may emerge from a single node, indicating a variety of ways in which the original problem might be solved. This is why the structure is called not simply an AND-graph but rather an AND-OR graph (which also happens to be an AND-OR tree).

ALGORITHM:
1. Let G be a graph with only the starting node INIT.
2. Repeat the following until INIT is labeled SOLVED or h(INIT) > FUTILITY:
a) Select an unexpanded node from the most promising path from INIT (call it NODE).
b) Generate the successors of NODE. If there are none, set h(NODE) = FUTILITY (i.e., NODE is unsolvable); otherwise, for each SUCCESSOR that is not an ancestor of NODE do the following:
   i. Add SUCCESSOR to G.
   ii. If SUCCESSOR is a terminal node, label it SOLVED and set h(SUCCESSOR) = 0.
   iii. If SUCCESSOR is not a terminal node, compute its h value.
c) Propagate the newly discovered information up the graph by doing the following: let S be the set of SOLVED nodes, or nodes whose h values have been changed and need to have their values propagated back to their parents. Initialize S to NODE. Until S is empty, repeat the following:
   i. Remove a node from S and call it CURRENT.
   ii. Compute the cost of each of the arcs emerging from CURRENT. Assign the minimum cost of its successors as its h.
   iii. Mark the best path out of CURRENT by marking the arc that had the minimum cost in step ii.
   iv. Mark CURRENT as SOLVED if all of the nodes connected to it through the newly labeled arc have been labeled SOLVED.
   v. If CURRENT has been labeled SOLVED or its cost was just changed, propagate its new cost back up through the graph, so add all of the ancestors of CURRENT to S.
Means-Ends Analysis in Artificial Intelligence
o We have studied strategies which can reason either forward or backward, but a mixture of the two directions is appropriate for solving a complex and large problem. Such a mixed strategy makes it possible to first solve the major part of a problem and then go back and solve the small problems that arise while combining the big parts of the problem. Such a technique is called Means-Ends Analysis.

o Means-Ends Analysis is a problem-solving technique used in Artificial Intelligence for limiting search in AI programs.

o It is a mixture of backward and forward search techniques.

o The MEA technique was first introduced in 1961 by Allen Newell and Herbert A. Simon in their problem-solving computer program, which was named the General Problem Solver (GPS).

o The MEA process is centered on the evaluation of the difference between the current state and the goal state.

How means-ends analysis Works:


The means-ends analysis process can be applied recursively to a problem. It is a strategy to control search in problem-solving. Following are the main steps which describe the working of the MEA technique for solving a problem:

a. First, evaluate the difference between the Initial State and the Final State.

b. Select the various operators which can be applied for each difference.

c. Apply the operator at each difference, which reduces the difference between the current state and the goal state.

Operator Subgoaling
In the MEA process, we detect the differences between the current state and the goal state. Once these differences are found, we can apply an operator to reduce them. But sometimes an operator cannot be applied to the current state, so we create a subproblem of the current state in which the operator can be applied. This type of backward chaining, in which operators are selected and then subgoals are set up to establish the preconditions of the operator, is called Operator Subgoaling.

Algorithm for Means-Ends Analysis:

Let us take the current state as CURRENT and the goal state as GOAL; then the following are the steps for the MEA algorithm.

o Step 1: Compare CURRENT to GOAL; if there are no differences between them, then return Success and exit.

o Step 2: Else, select the most significant difference and reduce it by doing the following steps until success or failure occurs.

a. Select a new operator O which is applicable to the current difference; if there is no such operator, then signal failure.

b. Attempt to apply operator O to CURRENT. Make a description of two states:
i) O-START, a state in which O's preconditions are satisfied.
ii) O-RESULT, the state that would result if O were applied in O-START.

c. If
(FIRST-PART ← MEA(CURRENT, O-START))
and
(LAST-PART ← MEA(O-RESULT, GOAL))
are successful, then signal Success and return the result of combining FIRST-PART, O, and LAST-PART.

The above-discussed algorithm is more suitable for a simple problem and not adequate for solving complex problems.
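
A very rough sketch of the MEA loop over set-valued states is shown below; the operators and facts are invented for illustration and this is not the full GPS algorithm:

# Means-Ends Analysis sketch: repeatedly pick a difference between the
# current state and the goal state, and apply an operator that reduces it.
# States are sets of facts; operators have preconditions, an add list and a delete list.
operators = [
    {"name": "paint",     "pre": {"have_paint"}, "add": {"wall_painted"}, "del": set()},
    {"name": "buy_paint", "pre": {"have_money"}, "add": {"have_paint"},   "del": {"have_money"}},
]

def mea(current, goal):
    plan = []
    while not goal <= current:
        difference = goal - current                      # what is still missing
        # select an operator whose add list reduces the difference
        # (raises StopIteration, i.e. failure, if no operator is relevant)
        op = next(o for o in operators if o["add"] & difference)
        if not op["pre"] <= current:                     # operator subgoaling:
            sub_plan, current = mea(current, op["pre"])  # first achieve its preconditions
            plan += sub_plan
        current = (current - op["del"]) | op["add"]      # apply the operator
        plan.append(op["name"])
    return plan, current

plan, _ = mea({"have_money"}, {"wall_painted"})
print(plan)   # -> ['buy_paint', 'paint']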

Unit 2:-


Mini-Max Algorithm in Artificial Intelligence
o The mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.

o The mini-max algorithm uses recursion to search through the game tree.

o The min-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other two-player games. This algorithm computes the minimax decision for the current state.
o In this algorithm two players play the game; one is called MAX and the other is called MIN.
o Both players fight it out, as the opponent player gets the minimum benefit while they get the maximum benefit.
o Both players of the game are opponents of each other, where MAX will select the maximized value and MIN will select the minimized value.
o The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
o The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.

Working of Min-Max Algorithm:

o The working of the minimax algorithm can be easily described using an example. Consider a game tree representing a two-player game.

o In this example there are two players, one called Maximizer and the other called Minimizer.

o The Maximizer will try to get the maximum possible score, and the Minimizer will try to get the minimum possible score.

o This algorithm applies DFS, so in this game tree we have to go all the way down to the leaves to reach the terminal nodes.

o At the terminal nodes, the terminal values are given, so we compare those values and backtrack up the tree until the initial state is reached; the values are propagated upward level by level, with MIN and MAX alternating.
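
A compact sketch of the recursive procedure is shown below; the terminal values of this hypothetical depth-2 game tree are arbitrary:

# Minimax: MAX picks the maximum of the children's values, MIN picks the minimum.
# The search walks depth-first down to the terminal nodes and backs the values up.
def minimax(node, is_max):
    if isinstance(node, int):                     # terminal node: its utility value
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# A hypothetical game tree of depth 2: MAX to move at the root, MIN below.
game_tree = [[3, 5], [2, 9]]
print(minimax(game_tree, True))                   # -> 3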

Properties of Mini-Max algorithm:

o Complete: The min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
o Optimal: The min-max algorithm is optimal if both opponents are playing optimally.
o Time complexity: As it performs DFS over the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
o Space complexity: The space complexity of the mini-max algorithm is also similar to DFS, which is O(bm).

Limitation of the minimax Algorithm:


The main drawback of the minimax algorithm is that it gets really slow for complex games such as chess, Go, etc. These games have a huge branching factor, and the player has many choices to decide between. This limitation of the minimax algorithm can be improved with alpha-beta pruning, which is discussed in the next topic.


Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.

o As we have seen in the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it in half. Hence there is a technique by which, without checking each node of the game tree, we can compute the correct minimax decision; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.

o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.

o The two parameters can be defined as:

a. Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.

b. Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
o Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm does, but it removes all the nodes which do not really affect the final decision but make the algorithm slow. Hence, by pruning these nodes, it makes the algorithm fast.
Note: To better understand this topic, kindly study the minimax algorithm.

Condition for Alpha-beta pruning:


The main condition required for alpha-beta pruning is:

1. α>=β  

Key points about alpha-beta pruning:


o The Max player will only update the value of alpha.

o The Min player will only update the value of beta.

o While backtracking the tree, the node values will be passed to upper nodes
instead of values of alpha and beta.
o We will only pass the alpha, beta values to the child nodes.
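
Following these key points, a compact sketch of minimax with alpha-beta pruning might look as follows (the example tree is hypothetical):

# Minimax with alpha-beta pruning.  MAX only raises alpha, MIN only lowers beta;
# a branch is cut off as soon as alpha >= beta.
import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    if isinstance(node, int):                 # terminal node
        return node
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)         # Max player updates alpha only
            if alpha >= beta:                 # pruning condition
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)           # Min player updates beta only
            if alpha >= beta:
                break
        return value

print(alphabeta([[3, 5], [2, 9]], True))      # -> 3 (the 9 leaf is never examined)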

Move Ordering in Alpha-Beta pruning:


The effectiveness of alpha-beta pruning is highly dependent on the order in which each node is examined. Move ordering is an important aspect of alpha-beta pruning.

It can be of two types:

o Worst ordering: In some cases the alpha-beta pruning algorithm does not prune any of the leaves of the tree and works exactly like the minimax algorithm. In this case it also consumes more time because of the alpha-beta bookkeeping; such an ordering is called worst ordering. In this case, the best move occurs on the right side of the tree. The time complexity for such an order is O(b^m).

o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. We apply DFS, so it searches the left of the tree first and goes twice as deep as the minimax algorithm in the same amount of time. The complexity for ideal ordering is O(b^(m/2)).

Rules to find good ordering:


Following are some rules to find good ordering in alpha-beta pruning:

o Try the best move from the shallowest node first.

o Order the nodes in the tree such that the best nodes are checked first.
o Use domain knowledge while finding the best move. Example: for chess, try this order: captures first, then threats, then forward moves, then backward moves.

o We can bookkeep the states, as there is a possibility that states may repeat.


Introduction to Iterative Deepening Depth-First Search
Iterative deepening depth-first search (IDDFS) is an algorithm that is an important part of an uninformed search strategy, just like BFS and DFS. We can define IDDFS as an amalgam of the BFS and DFS searching techniques. BFS and DFS have certain limitations, so in IDDFS we hybridize both procedures to eliminate the demerits in each of them individually. We do a limited depth-first search up to a fixed "limit depth", and then keep incrementing the depth limit, iterating the procedure until we have found the goal node or have traversed the whole tree, whichever is earlier.

Working Algorithm of IDDFS

In the uninformed searching strategy, BFS and DFS have not been ideal in searching for the element in optimum time and space. The algorithms only guarantee that the path will be found in exponential time and space. So we look for a method that combines the space efficiency of DFS with the optimal-solution approach of BFS, and from the two of them we develop a new method called iterative deepening. The main idea here lies in re-computing the entities of the boundary instead of storing them. Every re-computation is made up of DFS and thus uses less space. Now let us also consider using BFS in iterative deepening search.

 Consider making a breadth-first search into an iterative deepening search.

 We can do this by setting aside a DFS which will search up to a limit. It first searches to a pre-defined depth limit and generates routes of length 1.

 This is done by creating routes of length 1 in the DFS way. Next, it makes way for routes of depth limit 2, 3 and onwards.

 It can even delete all the preceding calculations each time at the beginning of the loop and iterate again. Hence, at some depth, the solution will eventually be found if there is any in the tree, because the enumeration takes place in order.

Pseudo-code for IDDFS

IDDFS(T):
    for d = 0 to infinity:
        if DLS(T, d):
            return 1            # goal found at some depth <= d
    return 0                    # no goal (only reachable with a finite depth bound)
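
A runnable version of the same idea in Python, with a hypothetical tree given as an adjacency dictionary, might look like this:

# Iterative deepening DFS: run a depth-limited DFS (DLS) with limits 0, 1, 2, ...
def dls(tree, node, goal, depth):
    if node == goal:
        return True
    if depth == 0:
        return False                       # depth bound reached
    return any(dls(tree, child, goal, depth - 1) for child in tree.get(node, []))

def iddfs(tree, root, goal, max_depth=20):
    for d in range(max_depth + 1):         # keep incrementing the depth limit
        if dls(tree, root, goal, d):
            return d                       # depth at which the goal was found
    return None

# Hypothetical tree: A is the root, R is the goal node.
tree = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": ["R"], "E": []}
print(iddfs(tree, "A", "R"))               # -> 3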

In order to implement the iterative deepening search we have to distinguish between two kinds of failure:

1. A breakdown because the depth-limit bound was attained.

2. A breakdown where the depth bound was not attained.

In the first case we try the search again with an increased depth limit, while in the second case, even if we keep on searching multiple times, no solution exists, so further iterations are simply a waste of time. Thus we come to the conclusion that in the first case the failure is failing unnaturally (cut off by the bound), and in the second case the failure is failing naturally (the space is exhausted).

Now let us focus on the procedure of IDDFS.


Example of Iterative Deepening Depth-First Search
Let us take an example to understand this. In the example tree, the starting node is A and the depth is initialized to 0. The goal node is R, for which we have to find the depth and the path to reach it; the depth of R in the example tree is 4. Here we consider the tree as a finite tree, but the same procedure works for an infinite tree as well. We know that in the IDDFS algorithm we first do DFS up to a specified depth and then increase the depth at each loop; this step forms the DLS, or Depth-Limited Search, part. Traversing the example tree in this manner gives us a glimpse of the IDDFS search pattern.

Advantages and Disadvantages of IDDFS

Below are the advantages and disadvantages:

Advantages
 IDDFS gives us the hope of finding the solution if it exists in the tree.

 When the solutions are found at lower depths, say n, the algorithm proves to be efficient and fast.

 A great advantage of IDDFS is found in game-tree searching, where the IDDFS search operation tries to improve the depth definition, heuristics, and scores of the searched nodes so as to make the search algorithm efficient.

 Another major advantage of the IDDFS algorithm is its quick responsiveness. Early result indications are a plus point of this algorithm, followed by multiple refinements after each individual iteration is completed.

 Though more work is done here, the performance of IDDFS is still better than a single BFS or DFS operating exclusively.

 The space complexity is O(d), where d is the goal depth, while the time complexity remains exponential.

 Let us consider the run time of IDDFS with branching factor b and depth limit k. The b^k nodes at depth k are generated only once, the nodes at depth k-1 are generated twice, those at depth k-2 three times, and so on, so the nodes at depth 1 are generated k times. The total number of generated nodes is therefore still O(b^k).

Disadvantages
 The time taken is exponential in the depth of the goal node.

 The main problem with IDDFS is the time and the wasted calculations that take place at each depth.

 The situation is not as bad as we may think, especially when the branching factor is high.

 The IDDFS might fail when the BFS fails. When we want multiple answers from IDDFS, it returns a success node and its path only once, even if the node is found again in later iterations; to stop, the depth bound is simply not increased further.

Conclusion
Iterative deepening depth-first search is a hybrid algorithm emerging out of BFS and DFS. IDDFS might not be used directly in many applications of computer science, yet the strategy is used in searching data of infinite spaces by incrementing the depth limit progressively and iteratively. This is quite useful and has applications in AI and the emerging data science industry.


What is knowledge representation?

Humans are best at understanding, reasoning, and interpreting knowledge. Humans know things, and as per their knowledge they perform various actions in the real world. How machines do all of these things comes under knowledge representation and reasoning. Hence we can describe knowledge representation as follows:

o Knowledge representation and reasoning (KR, KRR) is the part of Artificial Intelligence which is concerned with how AI agents think and how thinking contributes to the intelligent behavior of agents.

o It is responsible for representing information about the real world so that a computer can understand it and can utilize this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.

o It is also a way which describes how we can represent knowledge in artificial intelligence. Knowledge representation is not just storing data in some database; it also enables an intelligent machine to learn from that knowledge and experience so that it can behave intelligently like a human.

What to Represent:
Following are the kinds of knowledge which need to be represented in AI systems:

o Object: All the facts about objects in our world domain. E.g., guitars contain strings, trumpets are brass instruments.
o Events: Events are the actions which occur in our world.

o Performance: It describes behavior which involves knowledge about how to do things.

o Meta-knowledge: It is knowledge about what we know.

o Facts: Facts are the truths about the real world and what we represent.

o Knowledge-Base: The central component of knowledge-based agents is the knowledge base. It is represented as KB. The knowledge base is a group of sentences (here, "sentence" is used as a technical term and is not identical to a sentence in the English language).
Knowledge: Knowledge is awareness or familiarity gained by experience of facts, data, and situations. Following are the types of knowledge in artificial intelligence:

Types of knowledge
Following are the various types of knowledge:

1. Declarative Knowledge:

o Declarative knowledge is to know about something.

o It includes concepts, facts, and objects.

o It is also called descriptive knowledge and is expressed in declarative sentences.

o It is simpler than procedural knowledge.

2. Procedural Knowledge

o It is also known as imperative knowledge.

o Procedural knowledge is a type of knowledge which is responsible for knowing how to do something.
o It can be directly applied to any task.

o It includes rules, strategies, procedures, agendas, etc.

o Procedural knowledge depends on the task on which it can be applied.


3. Meta-knowledge:

o Knowledge about the other types of knowledge is called Meta-knowledge.

4. Heuristic knowledge:

o Heuristic knowledge represents the knowledge of some experts in a field or subject.
o Heuristic knowledge consists of rules of thumb based on previous experience and awareness of approaches, which are likely to work but are not guaranteed.

5. Structural knowledge:

o Structural knowledge is basic knowledge used in problem-solving.

o It describes relationships between various concepts such as kind of, part of,
and grouping of something.
o It describes the relationship that exists between concepts or objects.

The relation between knowledge and intelligence:

Knowledge of the real world plays a vital role in intelligence, and the same holds for creating artificial intelligence. Knowledge plays an important role in demonstrating intelligent behavior in AI agents. An agent is only able to act accurately on some input when it has some knowledge or experience about that input.

Suppose you meet some person who is speaking a language which you don't know; then how will you be able to act on that? The same thing applies to the intelligent behavior of agents.

There is one decision maker which acts by sensing the environment and using knowledge. But if the knowledge part is not present, then it cannot display intelligent behavior.
AI knowledge cycle:
An Artificial intelligence system has the following components for displaying
intelligent behavior:

o Perception

o Learning

o Knowledge Representation and Reasoning

o Planning

o Execution
The AI knowledge cycle shows how an AI system can interact with the real world and which components help it to show intelligence. An AI system has a Perception component by which it retrieves information from its environment; this can be visual, audio or another form of sensory input. The Learning component is responsible for learning from the data captured by the Perception component. In the complete cycle, the main components are Knowledge Representation and Reasoning. These two components are involved in showing the intelligence of machine-like humans. They are independent of each other but also coupled together. Planning and Execution depend on the analysis of knowledge representation and reasoning.

Approaches to knowledge representation:


There are mainly four approaches to knowledge representation, which are given below:

1. Simple relational knowledge:


o It is the simplest way of storing facts which uses the relational method, and each fact about a set of objects is set out systematically in columns.
o This approach of knowledge representation is famous in database systems
where the relationship between different entities is represented.
o This approach has little opportunity for inference.

Example: The following is the simple relational knowledge representation.

Player     Weight
Player1    65
Player2    58
Player3    75
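
In code, such relational facts are naturally a table of rows, for example a list of (player, weight) tuples; note that the representation supports little inference beyond direct lookup:

# Simple relational knowledge: each fact is a row in a table.
weights = [
    ("Player1", 65),
    ("Player2", 58),
    ("Player3", 75),
]

# The only "inference" available is direct lookup over the rows.
print(dict(weights)["Player2"])            # -> 58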

2. Inheritable knowledge:
o In the inheritable knowledge approach, all data must be stored in a hierarchy of classes.

o All classes should be arranged in a generalized form or a hierarchical manner.

o In this approach, we apply the inheritance property.

o Elements inherit values from other members of a class.

o This approach contains inheritable knowledge which shows a relation between an instance and a class, called the instance relation.

o Every individual frame can represent a collection of attributes and their values.

o In this approach, objects and values are represented in boxed nodes.

o We use arrows which point from objects to their values.

o Example:
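
As a rough sketch of the idea (the class names are invented), isa corresponds to subclassing and instance to object creation, so attribute values are inherited down the hierarchy:

# Inheritable knowledge: classes form an "isa" hierarchy and objects are
# "instance" links; attribute values are inherited unless overridden.
class Person:
    legs = 2                      # value inherited by everything below

class Athlete(Person):            # Athlete isa Person
    is_fit = True

class Cricketer(Athlete):         # Cricketer isa Athlete
    plays = "cricket"

sachin = Cricketer()              # sachin is an instance of Cricketer
print(sachin.legs, sachin.is_fit, sachin.plays)   # inherited: 2 True cricket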

3. Inferential knowledge:
o Inferential knowledge approach represents knowledge in the form of formal
logics.
o This approach can be used to derive more facts.

o It guarantees correctness.

o Example: Let's suppose there are two statements:

a. Marcus is a man

b. All men are mortal


Then it can be represented as:

man(Marcus)
∀x: man(x) → mortal(x)
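
A tiny sketch of how such a rule can be applied mechanically to derive a new fact (a hand-rolled forward-chaining step, not a real theorem prover):

# Inferential knowledge: derive mortal(Marcus) from man(Marcus) and
# the rule "for all x, man(x) -> mortal(x)".
facts = {("man", "Marcus")}
rules = [(("man",), ("mortal",))]          # premise predicate -> conclusion predicate

derived = set(facts)
for (premise,), (conclusion,) in rules:
    derived |= {(conclusion, x) for pred, x in facts if pred == premise}

print(("mortal", "Marcus") in derived)     # -> True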
4. Procedural knowledge:
o Procedural knowledge approach uses small programs and codes which
describes how to do specific things, and how to proceed.
o In this approach, one important rule is used, which is the If-Then rule.

o For this knowledge, we can use various coding languages such as LISP and Prolog.

o We can easily represent heuristic or domain-specific knowledge using this


approach.

o But it is not necessary that we can represent all cases in this approach.

Requirements for a knowledge representation system:

A good knowledge representation system must possess the following properties.

1. Representational Accuracy:
The KR system should have the ability to represent all kinds of required knowledge.

2. Inferential Adequacy:
The KR system should have the ability to manipulate the representational structures to produce new knowledge corresponding to existing structures.

3. Inferential Efficiency:
The ability to direct the inferential knowledge mechanism in the most productive directions by storing appropriate guides.

4. Acquisitional Efficiency:
The ability to acquire new knowledge easily using automatic methods.

Issues in knowledge representation

The main objective of knowledge representation is to draw conclusions from the knowledge, but there are many issues associated with the use of knowledge representation techniques.

Some of them are listed below:

1. Important attributes
There are two attributes of prime importance, instance and isa, since these attributes support the property of inheritance.

2. Relationships among attributes

Basically, the attributes used to describe objects are themselves entities. However, the attributes of an object do not depend on the encoded specific knowledge.

3. Choosing the granularity of representation

While deciding the granularity of representation, it is necessary to know the following:

i. What are the primitives, and at what level should the knowledge be represented?

ii. What should be the number (small or large) of low-level primitives or high-level facts?

High-level facts may be insufficient to draw the conclusion, while low-level primitives may require a lot of storage.
For example, suppose that we are interested in the following fact:
John spotted Alex.

This could be represented as "Spotted(agent(John), object(Alex))".

Such a representation makes it easy to answer questions such as: Who spotted Alex?

Suppose we want to know: "Did John see Sue?"

Given only the one fact, the user cannot discover that answer.

Hence, the user can add other facts, such as "Spotted(x, y) → Saw(x, y)".

4. Representing sets of objects

There are some properties of objects which are satisfied by the set as a whole but not by each individual member.

Example: Consider the assertions made in the sentences "There are more sheep than people in Australia" and "English speakers can be found all over the world."
These facts can be described by attaching an assertion to the sets representing people, sheep, and English speakers.

5. Finding the right structure as needed

To describe a particular situation, it is always important to find access to the right structure. This can be done by selecting an initial structure and then revising the choice.

While selecting and revising the right structure, it is necessary to solve the following problems. They include the process of how to:

 Select an initial appropriate structure.
 Fill in the necessary details from the current situation.
 Determine a better structure if the initially selected structure is not appropriate to fulfill other conditions.
 Find a solution if none of the available structures is appropriate.
 Create and remember a new structure for the given condition.

There is no specific way to solve these problems, but some of the effective knowledge representation techniques have the potential to solve them.

First-Order Logic in Artificial Intelligence

In the topic of propositional logic, we have seen how to represent statements using propositional logic. But unfortunately, in propositional logic we can only represent facts which are either true or false. PL is not sufficient to represent complex sentences or natural language statements. Propositional logic has very limited expressive power. Consider the following sentences, which we cannot represent using PL logic:

o "Some humans are intelligent", or

o "Sachin likes cricket."

To represent the above statements, PL logic is not sufficient, so we require some more powerful logic, such as first-order logic.

First-Order Logic:
o First-order logic is another way of knowledge representation in artificial intelligence. It is an extension of propositional logic.

o FOL is sufficiently expressive to represent natural language statements in a concise way.

o First-order logic is also known as predicate logic or first-order predicate logic. First-order logic is a powerful language that develops information about objects in an easier way and can also express the relationships between those objects.

o First-order logic (like natural language) does not only assume that the world contains facts, as propositional logic does, but also assumes the following things in the world:

o Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ...

o Relations: These can be unary relations such as red, round, is adjacent, or n-ary relations such as the sister of, brother of, has color, comes between.
o Functions: father of, best friend, third inning of, end of, ...

o As in a natural language, first-order logic also has two main parts:

a. Syntax

b. Semantics

Syntax of First-Order Logic:

The syntax of FOL determines which collections of symbols form logical expressions in first-order logic. The basic syntactic elements of first-order logic are symbols. We write statements in short-hand notation in FOL.

Basic Elements of First-order logic:

Following are the basic elements of FOL syntax:

Constants      1, 2, A, John, Mumbai, cat, ...
Variables      x, y, z, a, b, ...
Predicates     Brother, Father, >, ...
Functions      sqrt, LeftLegOf, ...
Connectives    ∧, ∨, ¬, ⇒, ⇔
Equality       ==
Quantifiers    ∀, ∃

Atomic sentences:
o Atomic sentences are the most basic sentences of first-order logic. These sentences are formed from a predicate symbol followed by a parenthesis with a sequence of terms.
o We can represent atomic sentences as Predicate(term1, term2, ......, term n).

Example: Ravi and Ajay are brothers: => Brothers(Ravi, Ajay).
                Chinky is a cat: => cat(Chinky).

Complex Sentences:
o Complex sentences are made by combining atomic sentences using connectives.

First-order logic statements can be divided into two parts:

o Subject: The subject is the main part of the statement.

o Predicate: A predicate can be defined as a relation which binds two atoms together in a statement.

Consider the statement "x is an integer."; it consists of two parts. The first part, x, is the subject of the statement, and the second part, "is an integer," is known as the predicate.
Quantifiers in First-order logic:
o A quantifier is a language element which generates quantification, and quantification specifies the quantity of specimens in the universe of discourse.
o These are the symbols that permit us to determine or identify the range and scope of the variable in a logical expression. There are two types of quantifier:

a. Universal Quantifier (for all, everyone, everything)

b. Existential Quantifier (for some, at least one).

Universal Quantifier:

The universal quantifier is a symbol of logical representation which specifies that the statement within its range is true for every instance of a particular thing.

The universal quantifier is represented by the symbol ∀, which resembles an inverted A.

Note: With the universal quantifier we use the implication "→".

If x is a variable, then ∀x is read as:

o For all x

o For each x

o For every x.

Example:

All men drink coffee.

Let x be a variable which refers to a man, so all x can be represented in UOD as below:
∀x man(x) → drink(x, coffee).

It will be read as: For all x, where x is a man, x drinks coffee.

Existential Quantifier:
Existential quantifiers are the type of quantifiers which express that the statement within their scope is true for at least one instance of something.

It is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.

Note: With the existential quantifier we always use the AND or conjunction symbol (∧).

If x is a variable, then the existential quantifier will be ∃x or ∃(x). It will be read as:

o There exists an 'x.'

o For some 'x.'

o For at least one 'x.'

Example:
Some boys are intelligent.

∃x: boys(x) ∧ intelligent(x)

It will be read as: There are some x, where x is a boy who is intelligent.

Points to remember:
o The main connective for universal quantifier ∀ is implication →.

o The main connective for existential quantifier ∃ is and ∧.

Properties of Quantifiers:
o In universal quantifier, ∀x∀y is similar to ∀y∀x.

o In Existential quantifier, ∃x∃y is similar to ∃y∃x.

o ∃x∀y is not similar to ∀y∃x.

Some Examples of FOL using quantifier:

1. All birds fly.


In this question the predicate is "fly(bird)."
And since there are all birds who fly so it will be represented as follows.
              ∀x bird(x) →fly(x).

2. Every man respects his parent.


In this question, the predicate is "respect(x, y)," where x=man, and y= parent.
Since there is every man so will use ∀, and it will be represented as follows:
              ∀x man(x) → respects (x, parent).

3. Some boys play cricket.

In this question, the predicate is "play(x, y)," where x = boys and y = game. Since there are some boys, we will use ∃, and it will be represented as:
              ∃x boys(x) ∧ play(x, cricket).

4. Not all students like both Mathematics and Science.

In this question, the predicate is "like(x, y)," where x = student and y = subject.
Since not all students are included, we will use ∀ with negation, so the representation is:
              ¬∀(x) [ student(x) → like(x, Mathematics) ∧ like(x, Science)].

5. Only one student failed in Mathematics.

In this question, the predicate is "failed(x, y)," where x = student and y = subject.
Since there is only one student who failed in Mathematics, we will use the following representation:
              ∃(x) [ student(x) ∧ failed(x, Mathematics) ∧ ∀(y) [¬(x==y) ∧ student(y) → ¬failed(y, Mathematics)]].

Free and Bound Variables:

The quantifiers interact with variables which appear in a suitable way. There are two types of variables in first-order logic, which are given below:

Free Variable: A variable is said to be a free variable in a formula if it occurs outside the scope of the quantifier.

          Example: ∀x ∃(y) [P(x, y, z)], where z is a free variable.

Bound Variable: A variable is said to be a bound variable in a formula if it occurs within the scope of the quantifier.

          Example: ∀x ∀y [A(x) ∧ B(y)], here x and y are the bound variables.
