
UNIT-2 Problem Solving by Search

Agents and environments:

Fig 2.1: Agents and Environments

Agent:
An Agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.

 A human agent has eyes, ears, and other organs for sensors and hands, legs,
mouth, and other body parts for actuators.
 A robotic agent might have cameras and infrared range finders for sensors and
various motors for actuators.
 A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen,
writing files, and sending network packets.

Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.

Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has ever perceived.

Agent function:
Mathematically speaking, we say that an agent's behavior is described by the agent
function that maps any given percept sequence to an action.
Agent program
Internally, the agent function for an artificial agent will be implemented by an agent
program. It is important to keep these two ideas distinct. The agent function is an
abstract mathematical description; the agent program is a concrete implementation,
running on the agent architecture.

To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in Fig 2.1.5. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Fig 2.1.6.

Fig 2.1.5: A vacuum-cleaner world with just two locations.


Agent function

Percept sequence                          Action

[A, Clean]                                Right
[A, Dirty]                                Suck
[B, Clean]                                Left
[B, Dirty]                                Suck
[A, Clean], [A, Clean]                    Right
[A, Clean], [A, Dirty]                    Suck

Fig 2.1.6: Partial tabulation of a simple agent function for the example: vacuum-cleaner world
shown in the Fig 2.1.5

function REFLEX-VACUUM-AGENT([location, status]) returns an action

    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left

Fig 2.1.6(i): The REFLEX-VACUUM-AGENT program is invoked for each new percept
(location, status) and returns an action each time.
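The agent program above can be rendered directly in Python. This is a minimal sketch, assuming the percept arrives as a (location, status) pair such as ('A', 'Dirty'):

# Minimal sketch of the reflex vacuum agent; the ('A', 'Dirty') percept format is an assumption.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:                                        # location == 'B'
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))       # -> Suck
print(reflex_vacuum_agent(('B', 'Clean')))       # -> Left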
Strategies of Solving Tic-Tac-Toe Game

Tic-Tac-Toe Game Playing:

Tic-Tac-Toe is a simple yet interesting board game. Researchers have used various
approaches to study the Tic-Tac-Toe game. For example, Fok and Ong and Grim et al. have
used artificial neural network based strategies to play it. Citrenbaum and Yakowitz
discuss games like Go-Moku, Hex and Bridg-It, which share some similarities with
Tic-Tac-Toe.

A Formal Definition of the Game:

The board used to play the Tic-Tac-Toe game consists of 9 cells laid out in the form of a
3x3 matrix (Fig. 1). The game is played by 2 players and either of them can start. Each of
the two players is assigned a unique symbol (generally 0 and X). Each player alternately
gets a turn to make a move. Making a move is compulsory and cannot be deferred. In each
move a player places the symbol assigned to him/her in a hitherto blank cell.

Let a track be defined as any row, column or diagonal on the board. Since the board is
a square matrix with 9 cells, all rows, columns and diagonals have exactly 3 cells. It can be
easily observed that there are 3 rows, 3 columns and 2 diagonals, and hence a total of 8
tracks on the board (Fig. 1). The goal of the game is to fill all three cells of any track
on the board with one's assigned symbol before the opponent does the same with his/her
symbol. At any point of the game, if there exists a track whose three cells have all been
marked with the same symbol, then the player to whom that symbol has been assigned wins
and the game terminates. If no track has all of its cells marked with the same symbol
when there are no more blank cells on the board, then the game is drawn.

Let the priority of a cell be defined as the number of tracks passing through it. The
priorities of the nine cells on the board according to this definition are tabulated in Table
1. Alternatively, let the priority of a track be defined as the sum of the priorities of its
three cells. The priorities of the eight tracks on the board according to this definition are
tabulated in Table 2. The prioritization of the cells and the tracks lays the foundation of
the heuristics to be used in this study. These heuristics are somewhat similar to those
proposed by Rich and Knight.

Strategy 1:
Algorithm:
1. View the vector as a ternary number. Convert it to a decimal number.
2. Use the computed number as an index into Move-Table and access the vector stored
there.
3. Set the new board to that vector.

Procedure:

1) Elements of vector:

0: Empty

1: X

2: O

→ the vector is a ternary number

2) Store inside the program a move-table (look-up table):

a) Elements in the table: 19683 (3^9)

b) Element = a vector which describes the most suitable move from the current board
configuration (see the sketch after the comments below).

Comments:

1. A lot of space to store the Move-Table.

2. A lot of work to specify all the entries in the Move-Table.

3. Difficult to extend
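
To make steps 1-2 of Strategy 1 concrete, here is a minimal Python sketch of the index computation; the 19683-entry move table itself is omitted, and the 0/1/2 cell encoding follows the procedure above:

# Sketch of Strategy 1: read the 9-cell board (0 = empty, 1 = X, 2 = O) as a ternary number.
def board_to_index(board):
    index = 0
    for cell in board:                     # most significant ternary digit first
        index = index * 3 + cell
    return index

# move_table would hold 3**9 = 19683 entries, one successor board per index:
# new_board = move_table[board_to_index(board)]
print(board_to_index([0] * 9))             # empty board -> 0
print(board_to_index([1] + [0] * 8))       # X in the first cell -> 3**8 = 6561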
Explanation of Strategy 2 of Solving Tic-Tac-Toe

Strategy 2:

Data Structure:

1) Use a vector, called board, as in Strategy 1.

2) However, the elements of the vector are:
   2: Empty
   3: X
   5: O

3) Turn of move: indexed by an integer 1, 2, 3, etc.

Function Library:

1. Make2:

a) Returns a location on the game board.
   IF (board[5] = 2)
       RETURN 5;   // the center cell
   ELSE
       RETURN any cell that is not at the board's corner;   // (cells 2, 4, 6, 8)

b) Let P represent X or O.

c) can_win(P):
   P has already filled at least two cells on a straight line (horizontal, vertical, or diagonal).

d) cannot_win(P) = NOT(can_win(P))

2. Posswin(P):
   IF (cannot_win(P))
       RETURN 0;
   ELSE
       RETURN the index of the empty cell on the line of can_win(P)

   Let odd numbers be the turns of X and even numbers be the turns of O.

3. Go(n): make a move.

Algorithm:

1. Turn = 1: (X moves)
   Go(1)   // make a move at the top-left cell

2. Turn = 2: (O moves)
   IF board[5] is empty THEN
       Go(5)
   ELSE
       Go(1)

3. Turn = 3: (X moves)
   IF board[9] is empty THEN
       Go(9)
   ELSE
       Go(3)

4. Turn = 4: (O moves)
   IF Posswin(X) <> 0 THEN
       Go(Posswin(X))   // prevent the opponent from winning
   ELSE
       Go(Make2)

5. Turn = 5: (X moves)
   IF Posswin(X) <> 0 THEN
       Go(Posswin(X))   // win for X
   ELSE IF Posswin(O) <> 0 THEN
       Go(Posswin(O))   // prevent the opponent from winning
   ELSE IF board[7] is empty THEN
       Go(7)
   ELSE
       Go(3)
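
A minimal Python sketch of Posswin(P) as defined in the function library above, using the 2/3/5 cell encoding; the 10-element board with index 0 unused is an assumption made here so that cell numbers match the 1-9 numbering used in the algorithm:

# The eight tracks (rows, columns, diagonals) on a board numbered 1..9.
LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),      # rows
         (1, 4, 7), (2, 5, 8), (3, 6, 9),      # columns
         (1, 5, 9), (3, 5, 7)]                 # diagonals

def posswin(board, p):
    # Return the index of the empty cell on a line P can complete, else 0.
    for line in LINES:
        values = [board[i] for i in line]
        if values.count(p) == 2 and values.count(2) == 1:   # two of P's marks, one empty cell
            return line[values.index(2)]
    return 0

board = [0] + [2] * 9          # index 0 unused; all nine cells empty (2)
board[1] = board[2] = 3        # X (3) on cells 1 and 2
print(posswin(board, 3))       # -> 3: X can win by playing cell 3
print(posswin(board, 5))       # -> 0: O cannot win yet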

Comments:

1. Not efficient in time, as it has to check several conditions before making each
move.

2. Easier to understand the program’s strategy.

3. Hard to generalize.
Breadth First Search:

Let us discuss these strategies using the water jug problem; they may be applied to any
search problem. Construct a tree with the initial state as its root.

Generate all the offspring of the root by applying each of the applicable rules to the initial state.

Now for each leaf node, generate all its successors by applying all the rules that are appropriate.

8 Puzzle Problem.

The 8 puzzle consists of eight numbered, movable tiles set in a 3x3 frame. One cell of the
frame is always empty thus making it possible to move an adjacent numbered tile into the
empty cell. Such a puzzle is illustrated in following diagram.

The program is to change the initial configuration into the goal configuration. A solution
to the problem is an appropriate sequence of moves, such as "move tile 5 to the right,
move tile 7 to the left, move tile 6 down", and so on.

Solution:

To solve a problem using a production system, we must specify the global database, the
rules, and the control strategy. For the 8-puzzle problem, the elements that correspond to
these three components are the problem states, the moves and the goal. In this problem
each tile configuration is a state. The set of all configurations forms the space of problem
states, or the problem space; there are only 362,880 different configurations of the 8 tiles
and the blank space. Once the problem states have been conceptually identified, we must
construct a computer representation, or description, of them; this description is then used
as the database of a production system. For the 8-puzzle, a straightforward description
is a 3x3 array (matrix) of numbers. The initial global database is this description of the
initial problem state. Virtually any kind of data structure can be used to describe states.

A move transforms one problem state into another state. The 8-puzzle is conveniently
interpreted as having the following four moves: move the empty space (blank) to the left,
move the blank up, move the blank to the right, and move the blank down. These moves are
modeled by production rules that operate on the state descriptions in the appropriate manner.

The rules each have preconditions that must be satisfied by a state description in order
for them to be applicable to that state description. Thus the precondition for the rule
associated with “move blank up” is derived from the requirement that the blank space
must not already be in the top row.
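
The state description and the blank-moving rules described above can be sketched in Python. This is a minimal illustration, not the only possible encoding: a state is assumed to be a 3x3 tuple of tuples with 0 standing for the blank.

# Production rules for the 8-puzzle: move the blank up, down, left or right.
MOVES = {'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1)}

def blank_position(state):
    for r, row in enumerate(state):
        for c, tile in enumerate(row):
            if tile == 0:
                return r, c

def apply_move(state, move):
    # Apply one rule, or return None when its precondition fails.
    r, c = blank_position(state)
    dr, dc = MOVES[move]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < 3 and 0 <= nc < 3):      # precondition: the blank stays on the board
        return None
    grid = [list(row) for row in state]
    grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]   # slide the adjacent tile into the blank
    return tuple(tuple(row) for row in grid)

start = ((1, 2, 3), (4, 0, 6), (7, 5, 8))
print(apply_move(start, 'up'))                             # blank moves into the top row
print(apply_move(((0, 1, 2), (3, 4, 5), (6, 7, 8)), 'up')) # -> None: blank is already in the top row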

The problem goal condition forms the basis for the termination condition of the
production system. The control strategy repeatedly applies rules to state descriptions
until a description of a goal state is produced. It also keeps track of rules that have been
applied so that it can compose them into sequence representing the problem solution. A
solution to the 8-puzzle problem is given in the following figure.

Example: Depth-First-Search traversal and Breadth-First-Search traversal


Exhaustive Searches, BFS and DFS
Search is the systematic examination of states to find a path from the start/root state to the goal state.

Many traditional search algorithms are used in AI applications. For complex problems,
the traditional algorithms are unable to find the solution within some practical time and
space limits. Consequently, many special techniques are developed; using heuristic
functions. The algorithms that use heuristic functions are called heuristic algorithms.
Heuristic algorithms are not really intelligent; they appear to be intelligent because they
achieve better performance.

Heuristic algorithms are more efficient because they take advantage of feedback from the
data to direct the search path.

Uninformed search

Also called blind, exhaustive or brute-force search, uses no information about the
problem to guide the search and therefore may not be very efficient.

Informed Search:

Also called heuristic or intelligent search, uses information about the problem to guide
the search, usually a guess of the distance to a goal state, and is therefore efficient;
but such a search may not always be possible.

Uninformed Search Methods:


Breadth- First -Search:
Consider the state space of a problem that takes the form of a tree. Now, if we search the
goal along each breadth of the tree, starting from the root and continuing up to the
largest depth, we call it breadth first search.

• Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:

   a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit.

   b. For each way that each rule can match the state described in E, do:

      i. Apply the rule to generate a new state.
      ii. If the new state is a goal state, quit and return this state.
      iii. Otherwise, add the new state to the end of NODE-LIST.
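
The procedure above maps directly onto a FIFO queue. The following is a minimal Python sketch, assuming the problem supplies a successors function and a goal test (the names are illustrative); like the pseudocode, it does not check for repeated states:

from collections import deque

def breadth_first_search(initial_state, successors, is_goal):
    node_list = deque([initial_state])            # NODE-LIST as a FIFO queue
    while node_list:                              # stop when NODE-LIST is empty
        state = node_list.popleft()               # remove the first element, E
        for new_state in successors(state):       # apply every applicable rule
            if is_goal(new_state):
                return new_state                  # goal state found
            node_list.append(new_state)           # add to the end of NODE-LIST
    return None                                   # no goal state was found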
BFS illustrated:

Step 1: Initially fringe contains only one node corresponding to the source state A.

FRINGE: A

Step 2: A is removed from fringe. The node is expanded, and its children B and C are
generated. They are placed at the back of fringe.

Figure 2



FRINGE: B C

Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated
and put at the back of fringe.

Figure 3
FRINGE: C D E

Step 4: Node C is removed from fringe and is expanded. Its children D and G are
added to the back of fringe.

Figure 4
FRINGE: D E D G

Step 5: Node D is removed from fringe. Its children C and F are generated and added
to the back of fringe.



Figure 5
FRINGE: E D G C F

Step 6: Node E is removed from fringe. It has no children.

Figure 6
FRINGE: D G C F

Step 7: Node D is removed from fringe and expanded; its children B and F are added to the back of fringe.

Figure 7
FRINGE: G C F B F

Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm
returns the path A C G by following the parent pointers of the node corresponding to G.
The algorithm terminates.



Breadth first search is:

 One of the simplest search strategies


 Complete. If there is a solution, BFS is guaranteed to find it.
 If there are multiple solutions, then a minimal solution will be found
 The algorithm is optimal (i.e., admissible) if all operators have the same
cost. Otherwise, breadth first search finds a solution with the shortest path
length.
 Time complexity: O(b^d)
 Space complexity: O(b^d)
 Optimality: Yes
b – branching factor (maximum number of successors of any node)
d – depth of the shallowest goal node
m – maximum length of any path in the search space
Advantages: Finds the path of minimal length to the goal.
Disadvantages:
Requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node.
The breadth-first search algorithm cannot be effectively used unless the search space is quite small.

Depth- First- Search.


We may sometimes search the goal along the largest depth of the tree, and move up only
when further traversal along the depth is not possible. We then attempt to find
alternative offspring of the parent of the node (state) last visited. If we visit the nodes of
a tree using the above principles to search the goal, the traversal made is called depth
first traversal and consequently the search strategy is called depth first search.

• Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:

   a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit.

   b. For each way that each rule can match the state described in E, do:

      i. Apply the rule to generate a new state.
      ii. If the new state is a goal state, quit and return this state.
      iii. Otherwise, add the new state to the front of NODE-LIST.
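
For comparison, a minimal Python sketch of the depth-first procedure under the same assumptions as the BFS sketch; the only change is that new states go to the front of NODE-LIST, here a Python list used as a stack (successors are pushed in reverse so the leftmost child is expanded first):

def depth_first_search(initial_state, successors, is_goal):
    node_list = [initial_state]                   # NODE-LIST as a LIFO stack
    while node_list:
        state = node_list.pop()                   # remove the element at the front
        for new_state in reversed(successors(state)):
            if is_goal(new_state):
                return new_state
            node_list.append(new_state)           # add to the front of NODE-LIST
    return None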
DFS illustrated:

A State Space Graph

Step 1: Initially fringe contains only the node for A.

Figure 1
FRINGE: A

Step 2: A is removed from fringe. A is expanded and its children B and C are put in
front of fringe.

Figure 2
FRINGE: B C
Step 3: Node B is removed from fringe, and its children D and E are pushed in front of fringe.

Figure 3
FRINGE: D E C

Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.
Figure 4
FRINGE: C F E C

Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.

Figure 5
FRINGE: G F E C
Step 6: Node G is expanded and found to be a goal node.


The solution path A-B-D-C-G is returned and the algorithm terminates.

Depth first search:

1. The algorithm takes exponential time.

2. If N is the maximum depth of a node in the search space, in the worst case the algorithm
will take time O(b^N).
3. The space taken is linear in the depth of the search tree, O(b·N).

Note that the time taken by the algorithm is related to the maximum depth of the search
tree. If the search tree has infinite depth, the algorithm may not terminate. This can
happen if the search space is infinite. It can also happen if the search space contains
cycles. The latter case can be handled by checking for cycles in the algorithm. Thus
Depth First Search is not complete.

Exhaustive Searches - Iterative Deepening DFS

Description:

It is a search strategy that results from combining BFS and DFS, thereby combining
the advantages of each strategy: the completeness and optimality of BFS and the
modest memory requirements of DFS.

IDS works by looking for the best search depth d. It starts with depth limit 0 and
performs a depth-limited DFS; if that search fails to find a goal, it increases the
depth limit by 1 and tries a depth-limited DFS again with depth 1, and so on – first
d = 0, then 1, then 2 and so on – until a depth d is reached at which a goal is found.
Algorithm:

procedure IDDFS(root)
    for depth from 0 to ∞
        found ← DLS(root, depth)
        if found ≠ null
            return found

procedure DLS(node, depth)
    if depth = 0 and node is a goal
        return node
    else if depth > 0
        foreach child of node
            found ← DLS(child, depth−1)
            if found ≠ null
                return found
    return null
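
The same procedure in Python, as a minimal sketch under the same assumptions as the earlier BFS/DFS sketches; like the pseudocode, it tests for a goal only at the depth limit and keeps deepening forever if no goal exists:

import itertools

def dls(node, depth, successors, is_goal):
    # Depth-limited search: return a goal node exactly `depth` steps away, else None.
    if depth == 0:
        return node if is_goal(node) else None
    for child in successors(node):
        found = dls(child, depth - 1, successors, is_goal)
        if found is not None:
            return found
    return None

def iddfs(root, successors, is_goal):
    for depth in itertools.count():               # depth = 0, 1, 2, ...
        found = dls(root, depth, successors, is_goal)
        if found is not None:
            return found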

Performance Measure:
o Completeness: IDS, like BFS, is complete when the branching factor b is finite.

o Optimality: IDS, like BFS, is optimal when all steps have the same cost.

Time Complexity:

o One may find it wasteful to generate nodes multiple times, but it is actually not
that costly compared to BFS, because most of the generated nodes are in the deepest
level reached. Consider searching a binary tree with a depth limit of 4: the nodes
generated in the last level = 2^4 = 16, while the nodes generated in all levels before
the last = 2^0 + 2^1 + 2^2 + 2^3 = 15.
o Imagine this scenario: we are performing IDS and the depth limit reaches depth d.
From the way IDS expands nodes, nodes at depth d are generated once, nodes at depth
d−1 are generated 2 times, nodes at depth d−2 are generated 3 times, and so on, until
depth 1, which is generated d times. We can view the total number of generated nodes
in the worst case as:
 N(IDS) = (d)b + (d−1)b^2 + (d−2)b^3 + … + (2)b^(d−1) + (1)b^d = O(b^d)

o If the same search were done with BFS, the total number of generated nodes in the
worst case would be:
 N(BFS) = b + b^2 + b^3 + b^4 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
o If we use realistic numbers, b = 10 and d = 5, the numbers of generated nodes in
BFS and IDS are:
 N(IDS) = 50 + 400 + 3000 + 20000 + 100000 = 123450
 N(BFS) = 10 + 100 + 1000 + 10000 + 100000 + 999990 = 1111100

 BFS generates about 9 times as many nodes as IDS.



Space Complexity:

o IDS is like DFS in its space complexity, taking O(b·d) of memory.

Weblinks:

i. https://www.youtube.com/watch?v=7QcoJjSVT38

ii. https://mhesham.wordpress.com/tag/iterative-deepening-depth-first-search

Conclusion:

We can conclude that IDS is a hybrid search strategy between BFS and DFS
inheriting their advantages.
IDS is faster than BFS and DFS.
It is said that "IDS is the preferred uninformed search method when there is a
large search space and the depth of the solution is not known".

Heuristic Searches:

A heuristic technique helps in solving problems, even though there is no guarantee that
it will never lead in the wrong direction. There are heuristics of general applicability
as well as domain-specific ones. The strategies here are general-purpose heuristics; in
order to use them in a specific domain they are coupled with domain-specific heuristics.
There are two major ways in which domain-specific heuristic information can be
incorporated into a rule-based search procedure.

A heuristic function is a function that maps problem state descriptions to measures of
desirability, usually represented as numbers (weights). The value of a heuristic function
at a given node in the search process gives a good estimate of whether that node is on
the desired path to a solution.



Greedy Best First Search

Greedy best-first search tries to expand the node that is closest to the goal, on the
grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by
using just the heuristic function:
f(n) = h(n)



Taking the example of Route-finding problems in Romania, the goal is to reach Bucharest
starting from the city Arad. We need to know the straight-line distances to Bucharest from
various cities as shown in Figure 8.1. For example, the initial state is In (Arad), and the
straight line distance heuristic hSLD (In (Arad)) is found to be 366. Using the straight-line
distance heuristic hSLD, the goal state can be reached faster.
Arad 366 Mehadia 241 Hirsova 151
Bucharest 0 Neamt 234 Urziceni 80
Craiova 160 Oradea 380 Iasi 226
Drobeta 242 Pitesti 100 Vaslui 199
Eforie 161 Rimnicu Vilcea 193 Lugoj 244
Fagaras 176 Sibiu 253 Zerind 374
Giurgiu 77 Timisoara 329
Figure 8.1: Values of hSLD - straight-line distances to Bucharest.
The Initial State

After Expanding Arad

After Expanding Sibiu



After Expanding Fagaras

Figure 8.2: Stages in a greedy best-first search for Bucharest using the straight-line distance
heuristic
hSLD. Nodes are labeled with their h-values.

Figure 8.2 shows the progress of greedy best-first search using hSLD to find a path from
Arad to Bucharest. The first node to be expanded from Arad will be Sibiu, because it is
closer to Bucharest than either Zerind or Timisoara. The next node to be expanded will
be Fagaras, because it is closest.
Fagaras in turn generates Bucharest, which is the goal.
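
This walkthrough can be reproduced with a few lines of Python. The sketch below is illustrative only: the h values are the straight-line distances from Figure 8.1, while the adjacency list is a partial fragment of the standard Romania road map assumed for this example.

import heapq

graph = {
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Zerind': ['Arad'], 'Timisoara': ['Arad'], 'Oradea': ['Sibiu'],
    'Rimnicu Vilcea': ['Sibiu'], 'Bucharest': [],
}
h = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253, 'Oradea': 380,
     'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Bucharest': 0}

def greedy_best_first(start, goal):
    frontier = [(h[start], start, [start])]       # ordered by f(n) = h(n)
    visited = set()
    while frontier:
        _, city, path = heapq.heappop(frontier)   # most promising node first
        if city == goal:
            return path
        if city in visited:
            continue
        visited.add(city)
        for nxt in graph[city]:
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))

print(greedy_best_first('Arad', 'Bucharest'))
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'], matching the expansion order above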

Best First Search:

 A combination of depth first and breadth first searches.


 Depth first is good because a solution can be found without computing all nodes,
and breadth first is good because it does not get trapped in dead ends.
 The best first search allows us to switch between paths, thus gaining the benefit of
both approaches. At each step the most promising node is chosen. If one of the
nodes chosen generates nodes that are less promising, it is possible to choose
another at the same level, and in effect the search changes from depth to breadth.
If on analysis these are no better, the previously unexpanded node and branch are
not forgotten, and the search method reverts to them.

OPEN is a priority queue of nodes that have been evaluated by the heuristic function but
which have not yet been expanded into successors. The most promising nodes are at the
front.

CLOSED is a list of nodes that have already been generated; these nodes must be stored
because a graph is being used in preference to a tree.

Algorithm:

1. Start with OPEN holding the initial state.

2. Until a goal is found or there are no nodes left on OPEN, do:

 Pick the best node on OPEN
 Generate its successors
 For each successor do:
• If it has not been generated before, evaluate it, add it to OPEN and record its parent.

• If it has been generated before, change the parent if this new path is better, and
in that case update the cost of getting to any successor nodes.

3. If a goal is found or no more nodes are left in OPEN, quit; else return to 2.



Properties of greedy best-first search:

1. It is not optimal.
2. It is incomplete because it can start down an infinite path and never return to
try other possibilities.
3. The worst-case time complexity for greedy search is O(b^m), where m is the
maximum depth of the search space.
4. Because greedy search retains all nodes in memory, its space complexity is the
same as its time complexity.


A* Algorithm

The Best First algorithm is a simplified form of the A* algorithm.

The A* search algorithm (pronounced "Ay-star") is a tree search algorithm that finds a
path from a given initial node to a given goal node (or one passing a given goal test). It
employs a "heuristic estimate" which ranks each node by an estimate of the best route
that goes through that node. It visits the nodes in order of this heuristic estimate.

Similar to greedy best-first search but is more accurate because A* takes into account the
nodes that have already been traversed.

From A* we note that f = g + h, where

g is a measure of the distance/cost to go from the initial node to the current node, and

h is an estimate of the distance/cost from the current node to the solution.

Thus f is an estimate of the distance/cost of going from the initial node to the solution
through the current node.

Algorithm:

1. Initialize: Set OPEN = {s}; CLOSED = { }; g(s) = 0, f(s) = h(s)

2. Fail: If OPEN = { }, terminate and fail.

3. Select: Select the minimum-cost state, n, from OPEN; save n in CLOSED.

4. Terminate: If n ∈ G, terminate with success and return f(n).

5. Expand: For each successor, m, of n:

   a) If m ∉ [OPEN ∪ CLOSED]
      Set g(m) = g(n) + c(n, m)
      Set f(m) = g(m) + h(m)
      Insert m in OPEN

   b) If m ∈ [OPEN ∪ CLOSED]
      Set g(m) = min{ g(m), g(n) + c(n, m) }
      Set f(m) = g(m) + h(m)
      If f(m) has decreased and m ∈ CLOSED, move m to OPEN.
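
The same steps in Python, as a minimal sketch: successors(n) is assumed to return (m, cost) pairs, and h and is_goal are supplied by the problem; pushing m back into the priority queue with its improved cost plays the role of "move m to OPEN" in step 5(b).

import heapq, itertools

def a_star(start, successors, h, is_goal):
    tie = itertools.count()                       # tie-breaker so the heap never compares states
    g = {start: 0}
    parent = {start: None}
    open_heap = [(h(start), next(tie), start)]    # ordered by f(n) = g(n) + h(n)
    closed = set()
    while open_heap:                              # Fail: OPEN is empty, no solution
        _, _, n = heapq.heappop(open_heap)        # Select: the minimum-cost state n
        if is_goal(n):                            # Terminate: rebuild and return the path
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return list(reversed(path))
        closed.add(n)                             # save n in CLOSED
        for m, cost in successors(n):             # Expand each successor m of n
            tentative_g = g[n] + cost             # g(m) = g(n) + c(n, m)
            if m not in g or tentative_g < g[m]:  # m is new, or a cheaper path to m was found
                g[m] = tentative_g
                parent[m] = n
                heapq.heappush(open_heap, (tentative_g + h(m), next(tie), m))  # (re)insert m in OPEN
    return None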

Description:

 A* begins at a selected node. Applied to this node is the "cost" of entering this
node (usually zero for the initial node). A* then estimates the distance to the goal
node from the current node. This estimate and the cost added together are the
heuristic which is assigned to the path leading to this node. The node is then added
to a priority queue, often called "open".
 The algorithm then removes the next node from the priority queue (because of the
way a priority queue works, the node removed will have the lowest heuristic). If
the queue is empty, there is no path from the initial node to the goal node and the
algorithm stops. If the node is the goal node, A* constructs and outputs the
successful path and stops.
 If the node is not the goal node, new nodes are created for all admissible adjoining
nodes; the exact way of doing this depends on the problem at hand. For each
successive node, A* calculates the "cost" of entering the node and saves it with
the node. This cost is calculated from the cumulative sum of costs stored with its
ancestors, plus the cost of the operation which reached this new node.
 The algorithm also maintains a 'closed' list of nodes whose adjoining nodes have
been checked. If a newly generated node is already in this list with an equal or
lower cost, no further processing is done on that node or with the path associated
with it. If a node in the closed list matches the new one, but has been stored with
a higher cost, it is removed from the closed list, and processing continues on the
new node.



 Next, an estimate of the new node's distance to the goal is added to the cost to
form the heuristic for that node. This is then added to the 'open' priority queue,
unless an identical node is found there.
 Once the above three steps have been repeated for each new adjoining node, the
original node taken from the priority queue is added to the 'closed' list. The next
node is then popped from the priority queue and the process is repeated.

The heuristic costs from each city to Bucharest are the straight-line distances given in Figure 8.1.

A* search properties:

 The algorithm A* is admissible. This means that provided a solution exists, the
first solution found by A* is an optimal solution. A* is admissible under the
following conditions:

 Heuristic function: for every node n, h(n) ≤ h*(n).

 A* is also complete.

 A* is optimally efficient for a given heuristic.

 A* is much more efficient than uninformed search.

Iterative Deepening A*

Algorithm:

Let L be the list of visited but not yet expanded nodes, and C the maximum depth.

1) Let C = 0.

2) Initialize L to the initial state (only).

3) If L is empty, increase C and go to 2); else extract a node n from the front of L.

4) If n is a goal node, SUCCEED and return the path from the initial state to n.

5) Remove n from L. If its level is smaller than C, insert at the front of L all the
children n' of n with f(n') ≤ C.

6) Go to 3).
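
The threshold-based view of IDA* summarized in the notes below can be sketched in Python as follows; successors(n) is assumed to return (child, step_cost) pairs, and h and is_goal are supplied by the problem:

import math

def ida_star(start, successors, h, is_goal):
    threshold = h(start)                          # threshold starts at h(initial state)
    while True:
        path, pruned = bounded_dfs(start, 0, threshold, successors, h, is_goal)
        if path is not None:
            return path                           # goal found within the current threshold
        if pruned == math.inf:                    # nothing was pruned: no solution exists
            return None
        threshold = pruned                        # lowest f-cost pruned in this iteration

def bounded_dfs(node, g, threshold, successors, h, is_goal):
    # Depth-first search bounded by f = g + h; returns (path, smallest pruned f-value).
    f = g + h(node)
    if f > threshold:
        return None, f                            # cut this path off, remember its f-value
    if is_goal(node):
        return [node], math.inf
    smallest_pruned = math.inf
    for child, cost in successors(node):
        path, pruned = bounded_dfs(child, g + cost, threshold, successors, h, is_goal)
        if path is not None:
            return [node] + path, math.inf
        smallest_pruned = min(smallest_pruned, pruned)
    return None, smallest_pruned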

 IDA* is complete and optimal. Space usage is linear in the depth of the solution.
Each iteration is a depth-first search, and thus it does not require a priority queue.

 Iterative deepening A* (IDA*) eliminates the memory constraints of the A* search
algorithm without sacrificing solution optimality.

 Each iteration of the algorithm is a depth-first search that keeps track of the cost,
f(n) = g(n) + h(n), of each node generated.

 As soon as a node is generated whose cost exceeds a threshold for that iteration, its
path is cut off, and the search backtracks before continuing.

 The cost threshold is initialized to the heuristic estimate of the initial state, and
in each successive iteration is increased to the total cost of the lowest-cost node that
was pruned during the previous iteration.

 The algorithm terminates when a goal state is reached whose total cost does not exceed
the current threshold.
