Session 12

Informed Search Strategies
(Heuristic Search Techniques)

A* search
(Minimizing the total estimated solution cost)
Session Outcomes

• Students will learn general-purpose informed (heuristic) search algorithms that can be used to solve problems.
INFORMED SEARCH STRATEGIES
(Heuristic Search Techniques)

• An informed search strategy is one that uses problem-specific knowledge beyond the definition of the problem itself; it can find solutions more efficiently than an uninformed strategy.

• We learn the following two strategies as part of informed search. We discuss Greedy Best First Search (GBFS) in this session and the A* algorithm in the following session:

  • Greedy Best First Search
  • A* search
Greedy best-first search

• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).

• The algorithm is called "greedy" because at each step it tries to get as close to the goal as it can.

• For the Arad-to-Bucharest example used below, greedy best-first search using h_SLD (straight-line distance) finds a solution without ever expanding a node that is not on the solution path; hence, its search cost is minimal.
Greedy Best First Search (Continued…)

Algorithm:
1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes left on OPEN, do:
   a. Pick the best node on OPEN.
   b. Generate its successors.
   c. For each successor do:
      i. If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
      ii. If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.

A short Python sketch of this procedure follows.
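The sketch below is a minimal Python rendering of the algorithm above, assuming the graph is given as an adjacency dictionary and the heuristic as a table of h(n) values; the function and variable names (gbfs, romania, h_sld) are illustrative and not part of the original slides. For brevity it omits step 2(c)(ii), the re-parenting of already-generated nodes.

```python
import heapq

def gbfs(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the smallest h(n).

    graph: dict mapping a node to an iterable of successor nodes.
    h:     dict mapping a node to its heuristic estimate h(n).
    Returns the path from start to goal, or None if OPEN runs empty.
    """
    open_list = [(h[start], start)]          # OPEN, ordered by h(n) alone
    parent = {start: None}                   # records each node's parent
    while open_list:
        _, node = heapq.heappop(open_list)   # pick the best node on OPEN
        if node == goal:                     # goal found: rebuild path via parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for succ in graph.get(node, ()):     # generate successors
            if succ not in parent:           # not generated before
                parent[succ] = node
                heapq.heappush(open_list, (h[succ], succ))
    return None                              # no nodes left on OPEN: failure


# Fragment of the Romania map; the h values are the straight-line
# distances to Bucharest used later in the A* example.
romania = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti", "Craiova"],
    "Pitesti": ["Rimnicu Vilcea", "Craiova", "Bucharest"],
}
h_sld = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
         "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

print(gbfs(romania, h_sld, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- every expanded node lies on this path
```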
Greedy Best First Search (Continued…)

Explanation:
• It proceeds in steps, expanding one node at each step, until it generates a node that corresponds to a goal state.

• At each step, it picks the most promising of the nodes that have so far been generated but not expanded.

• It generates the successors of the chosen node, applies the heuristic function to them, and adds them to the list of open nodes, after checking to see if any of them have been generated before.

• By doing this check, we can guarantee that each node appears only once in the graph, although many nodes may point to it as a successor.
Greedy Best First Search (Continued…)

• The heuristic function h(n) is the estimated cost of the cheapest path from the state at node n to a goal state; here, h(n) is the straight-line distance to the goal (h_SLD).

• h_SLD cannot be computed from the problem description itself.

• Its search cost is minimal (very low, though not necessarily the lowest possible). However, it is not optimal.

• The worst-case time and space complexity for the tree version is O(b^m), where b is the branching factor and m is the maximum depth of the search space.
Greedy Best First Search (Continued…)

Example:
• Consider the following Romania map.
• The goal is to find a path from Arad to Bucharest.
Greedy Best First Search (Continued…)

Advantages:

• GBFS finds a solution without ever expanding a node that is not on the solution path; hence, its search cost is minimal.

• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly.

Disadvantages:

• Greedy best-first tree search is incomplete even in a finite state space. For example, consider the problem of getting from Iasi to Fagaras. The heuristic suggests that Neamt be expanded first because it is closest to Fagaras, but it is a dead end. A small illustration of this behaviour follows.
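To make this concrete, here is a hypothetical toy graph (the states S, D, V, G and their h values are made up for illustration; they are not the actual Iasi/Neamt distances) on which greedy best-first tree search oscillates between two states whose heuristic values look better than the state that actually leads to the goal.

```python
# Hypothetical graph: from S, the dead end D looks closer to the goal G
# than V does, and from D the only move is back to S.
graph = {"S": ["D", "V"], "D": ["S"], "V": ["G"], "G": []}
h = {"S": 2, "D": 1, "V": 4, "G": 0}

# Greedy best-first TREE search (no repeated-state check): expand lowest h(n).
frontier = [("S", ["S"])]
for _ in range(6):                    # bounded loop, just to show the behaviour
    frontier.sort(key=lambda item: h[item[0]])
    node, path = frontier.pop(0)      # node with the smallest h(n)
    print("expanding", node)
    if node == "G":
        print("goal reached via", path)
        break
    frontier += [(s, path + [s]) for s in graph[node]]
# Prints: expanding S, D, S, D, S, D ... -- the goal G is never reached.
```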
End of session
A* search (Best-first search)

• Minimizes the total estimated solution cost.

• It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get from the node to the goal: f(n) = g(n) + h(n).

• Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal, we have f(n) = estimated cost of the cheapest solution through n.

• If we are trying to find the cheapest solution, a reasonable thing to try first is the node with the lowest value of f(n) = g(n) + h(n).

• The algorithm is identical to UNIFORM-COST-SEARCH except that A* uses g + h instead of g.
A* search (Continued…)

Algorithm:
1. Start with OPEN containing only the initial node. Set that node's g value to 0, its h value to whatever the heuristic gives, and its f value to h + 0, i.e. h. Set CLOSED to the empty list.

2. Until a goal node is found, repeat the following procedure:
   I. If there are no nodes on OPEN, report failure.
   II. Otherwise, pick the node on OPEN with the lowest f value. Call it BESTNODE.
   III. Remove it from OPEN. Place it on CLOSED.
   IV. See if BESTNODE is a goal state. If so, exit and report a solution.
   V. Otherwise, generate the successors of BESTNODE, but do not set BESTNODE to point to them yet.
A* search (Continued…)

3. For each SUCCESSOR, do the following:
   a. Set SUCCESSOR to point back to BESTNODE. (These backward links make it possible to recover the path once a solution is found.)

   b. Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.

   c. See if SUCCESSOR is the same as any node on OPEN. If so, call that node OLD. See whether it is cheaper to get to OLD via its current parent or to SUCCESSOR via BESTNODE, by comparing their g values. If the path via BESTNODE is cheaper, reset OLD's parent to point to BESTNODE, record the new cheaper path in g(OLD), and update f(OLD).
A* search (Continued…)

   d. If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD and add OLD to the list of BESTNODE's successors. Check whether the new path is better. If so, set the parent link and the g and f values appropriately.

      We must then propagate the improvement to OLD's successors. OLD points to its successors; each successor, in turn, points to its own successors, and so forth, until each branch terminates with a node that either is still on OPEN or has no successors. To propagate the new cost downward, do a depth-first traversal of the tree starting at OLD, changing each node's g value (and thus also its f value), terminating each branch when you reach either a node with no successors or a node to which an equivalent or better path has already been found.

   e. If SUCCESSOR was not already on either OPEN or CLOSED, put it on OPEN and add it to the list of BESTNODE's successors. Compute f(SUCCESSOR) = g(SUCCESSOR) + h(SUCCESSOR).
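Below is a simplified Python sketch of A* on an explicit weighted graph; the names (a_star, romania, h_sld) are illustrative and not from the slides. Instead of the explicit OLD re-parenting and downward cost propagation of steps 3(c) and 3(d), it simply re-pushes a node onto the priority queue whenever a cheaper g value is found, which is a common simplification.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the smallest f(n) = g(n) + h(n).

    graph: dict mapping a node to a dict {successor: step cost}.
    h:     dict mapping a node to its heuristic estimate h(n).
    Returns (cost, path), or None if OPEN runs empty.
    """
    open_list = [(h[start], 0, start)]         # entries are (f, g, node)
    g_best = {start: 0}                        # cheapest g found so far
    parent = {start: None}
    closed = set()
    while open_list:
        f, g, node = heapq.heappop(open_list)  # BESTNODE: lowest f on OPEN
        if node in closed:
            continue                           # stale entry for an already-expanded node
        closed.add(node)
        if node == goal:                       # goal test when the node is expanded
            path = []
            while node is not None:            # follow parent links back to the start
                path.append(node)
                node = parent[node]
            return g, path[::-1]
        for succ, step in graph.get(node, {}).items():
            g_succ = g + step                  # g(SUCCESSOR) = g(BESTNODE) + step cost
            if g_succ < g_best.get(succ, float("inf")):
                g_best[succ] = g_succ          # cheaper path found: update parent and g
                parent[succ] = node
                heapq.heappush(open_list, (g_succ + h[succ], g_succ, succ))
    return None                                # no nodes left on OPEN: failure


# Fragment of the Romania map; the step costs are consistent with the
# g(n) values shown on the following slides.
romania = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
}
h_sld = {"Arad": 366, "Zerind": 374, "Timisoara": 329, "Sibiu": 253,
         "Oradea": 380, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

print(a_star(romania, h_sld, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```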


A* search (Continued…)

Example:

A* search applied to the map of Romania:

• The figure on the next slide shows the initial map of Romania.
• The values shown in red are the heuristic values h(n) (straight-line distance to Bucharest).
• The values shown in silver are the path-cost values g(n).
• The values shown in blue are the f(n) values, i.e., f(n) = g(n) + h(n).
A* search (Continued…)

Initial map of Romania


A* search (Continued…)

After expanding Arad:

• We have three new nodes: Zerind, Sibiu, and Timisoara.
• As we know, f(n) = g(n) + h(n).
• f(Sibiu) = 140 + 253 = 393 (g(n) = 140 and h(n) = 253).
• f(Zerind) = 75 + 374 = 449 (g(n) = 75 and h(n) = 374).
• f(Timisoara) = 118 + 329 = 447 (g(n) = 118 and h(n) = 329).
• Among these nodes, f(Sibiu) is the least, so Sibiu is expanded next.
A* search (Continued…)

After expanding Arad


A* search (Continued…)

After expanding Sibiu:

• After expanding Sibiu we have four new nodes: Arad, Oradea, Fagaras, and Rimnicu Vilcea.
• As we know, f(n) = g(n) + h(n).
• f(Arad) = 280 + 366 = 646 (g(n) = 280 and h(n) = 366).
• f(Oradea) = 291 + 380 = 671 (g(n) = 291 and h(n) = 380).
• f(Fagaras) = 239 + 176 = 415 (g(n) = 239 and h(n) = 176).
• f(Rimnicu Vilcea) = 220 + 193 = 413 (g(n) = 220 and h(n) = 193).
• Across all nodes on the frontier, f(Rimnicu Vilcea) = 413 is the least, so Rimnicu Vilcea is expanded next.
A* search (Continued…)

After expanding Sibiu


A* search (Continued…)

After expanding Rimnicu Vilcea:

• After expanding Rimnicu Vilcea we have three new nodes: Craiova, Pitesti, and Sibiu.
• As we know, f(n) = g(n) + h(n).
• f(Craiova) = 366 + 160 = 526 (g(n) = 366 and h(n) = 160).
• f(Pitesti) = 317 + 100 = 417 (g(n) = 317 and h(n) = 100).
• f(Sibiu) = 300 + 253 = 553 (g(n) = 300 and h(n) = 253).
• Across all nodes on the frontier, f(Fagaras) = 415 is now the least, so Fagaras is expanded next.
A* search (Continued…)

After expanding Rimnicu Vilcea


A* search (Continued…)

After expanding Fagaras:

• After expanding Fagaras we have two new nodes: Sibiu and Bucharest.
• As we know, f(n) = g(n) + h(n).
• f(Sibiu) = 338 + 253 = 591 (g(n) = 338 and h(n) = 253).
• f(Bucharest) = 450 + 0 = 450 (g(n) = 450 and h(n) = 0).
• Across all nodes on the frontier, f(Pitesti) = 417 is now the least, so Pitesti is expanded next; Bucharest has been generated, but with f = 450 it is not yet selected for expansion.
A* search (Continued…)

After expanding Fagaras


A* search (Continued…)

After expanding Pitesti:

• After expanding Pitesti we have three new nodes: Bucharest, Craiova, and Rimnicu Vilcea.
• As we know, f(n) = g(n) + h(n).
• f(Craiova) = 455 + 160 = 615 (g(n) = 455 and h(n) = 160).
• f(Bucharest) = 418 + 0 = 418 (g(n) = 418 and h(n) = 0).
• f(Rimnicu Vilcea) = 414 + 193 = 607 (g(n) = 414 and h(n) = 193).
• Across all nodes on the frontier, f(Bucharest) = 418 is now the least.
• Since Bucharest is the goal node, h(n) = 0 at Bucharest, and the search terminates with the cheaper path through Pitesti (cost 418) rather than the earlier one through Fagaras (cost 450).


A* search (Continued…)

After expanding Pitesti


A* search (Continued…)

Shortest path from Arad to Bucharest
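As a quick check of this result, the snippet below (illustrative, not from the slides) sums the step costs along the returned path; the step costs are those implied by the g(n) values on the preceding slides.

```python
# Step costs along Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest,
# as implied by the g(n) values above (140, 220, 317, 418).
steps = [("Arad", "Sibiu", 140),
         ("Sibiu", "Rimnicu Vilcea", 80),
         ("Rimnicu Vilcea", "Pitesti", 97),
         ("Pitesti", "Bucharest", 101)]
total = sum(cost for _, _, cost in steps)
print(total)  # 418 = g(Bucharest) = f(Bucharest), since h(Bucharest) = 0
```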


A* search (Continued…)

Advantages:

• A* search is complete, optimal, and optimally efficient among all such algorithms.

Disadvantages:

• For most problems, the number of states within the goal-contour search space is still exponential in the length of the solution.
A* search (Continued…)

• A* search is complete, optimal, and optimally efficient among all such algorithms, provided the heuristic satisfies the conditions below.

Conditions for optimality:

• Admissibility: The first condition we require for optimality is that h(n) be an admissible heuristic. An admissible heuristic is one that never overestimates the cost to reach the goal.

• Consistency: A second, slightly stronger condition, called consistency (or sometimes monotonicity), is required only for applications of A* to graph search.
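The two conditions can be stated compactly as follows; here h*(n) denotes the true cost of the cheapest path from n to a goal, and c(n, a, n') the step cost of reaching successor n' from n via action a (this notation is standard but is not introduced on the slides).

```latex
% Admissibility: the heuristic never overestimates the true remaining cost.
h(n) \le h^{*}(n) \qquad \text{for every node } n
% Consistency (monotonicity): a triangle inequality over each step.
h(n) \le c(n, a, n') + h(n') \qquad \text{for every successor } n' \text{ of } n
```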
The two figures below show search trees in which each node is labelled with its evaluation written as (h' + g), where h' is the heuristic estimate of the true remaining cost h.

Figure: h' underestimates h at node B.
• A has successors B (3+1), C (4+1), and D (5+1).
• Expanding B yields E (3+2); expanding E yields F (3+3).

Figure: h' overestimates h at node D.
• A has successors B (3+1), C (4+1), and D (5+1).
• Expanding B yields E (2+2); expanding E yields F (1+3); expanding F yields G (0+3).
Observations about A*

1. f(n) = g(n) + h(n).

2. If g = 0, then f(n) = h(n): the node that seems closest to the goal will be chosen (A* reduces to GBFS).

3. If g = 1 per step and h = 0, then the path involving the fewest number of steps will be chosen.

4. If h = 0, then f(n) = g(n): the search is controlled by g (it becomes uniform cost search).

5. If h = 0 and g = 0, then the search strategy is random.

A small sketch of these special cases follows.
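The special cases above can be captured by parameterising the evaluation function. The sketch below (the function name make_f and the sample values are illustrative, not from the slides) builds f from optional g and h components.

```python
def make_f(use_g=True, use_h=True, h=lambda n: 0):
    """Build an evaluation function f(node, g_n) from optional g and h parts.

    use_g=True,  use_h=True  -> f = g + h  (A*)
    use_g=False, use_h=True  -> f = h      (greedy best-first search)
    use_g=True,  use_h=False -> f = g      (uniform cost search)
    use_g=False, use_h=False -> f = 0      (no guidance: effectively random order)
    """
    def f(node, g_n):
        return (g_n if use_g else 0) + (h(node) if use_h else 0)
    return f

# Example with h(Pitesti) = 100 and g(Pitesti) = 317, as in the Romania trace:
h_table = {"Pitesti": 100}
f_astar  = make_f(use_g=True,  use_h=True,  h=lambda n: h_table.get(n, 0))
f_greedy = make_f(use_g=False, use_h=True,  h=lambda n: h_table.get(n, 0))
f_ucs    = make_f(use_g=True,  use_h=False)
print(f_astar("Pitesti", 317), f_greedy("Pitesti", 317), f_ucs("Pitesti", 317))
# 417 100 317
```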
