Topic - 4 (Informed Search and Exploration) (14.02.17)
Heuristic Functions
Problems
Hill-climbing search
Simulated annealing search
Local beam search
Genetic algorithms
Best-First Search
Goal reached
For this example, no node is expanded that is not
on the solution path
But the result is not optimal: the route through
Arad, Sibiu, Rimnicu Vilcea, and Pitesti is shorter
Md. Tarek Habib 9
Greedy Search: Evaluation
Complete or optimal: no
Minimizing h(n) can result in false starts, e.g.
going from Iasi to Fagaras
Check on repeated states
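As a concrete sketch, greedy best-first search can be written as below. The city names, road distances, and straight-line-distance heuristic H are assumed from the standard Romania example (only a subset of the map is included); the algorithm orders the frontier by h(n) alone and checks repeated states.

```python
import heapq

# Straight-line distances to Bucharest, h(n) (standard Romania example)
H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Bucharest": 0, "Oradea": 380, "Zerind": 374,
     "Timisoara": 329}

# Road distances between neighbouring cities (subset of the map)
ROADS = {
    "Arad": {"Sibiu": 140, "Zerind": 75, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
}

def greedy_best_first(start, goal):
    """Always expand the frontier node with the smallest h(n);
    the path cost g(n) is ignored entirely."""
    frontier = [(H[start], start, [start])]
    explored = set()                       # check on repeated states
    while frontier:
        _, city, path = heapq.heappop(frontier)
        if city == goal:
            return path
        if city in explored:
            continue
        explored.add(city)
        for nbr in ROADS.get(city, {}):
            if nbr not in explored:
                heapq.heappush(frontier, (H[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- cost 450, not the
# optimal route through Rimnicu Vilcea and Pitesti (cost 418)
```

Because only h(n) is minimized, the search is drawn to Fagaras (h = 176) rather than Rimnicu Vilcea (h = 193), even though the latter lies on the cheaper path.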
A* Search: Example
Complete: yes
Unless there are infinitely many nodes with
f ≤ f(G)
Optimal: yes
A* is also optimally efficient for any given
h(n). That is, no other optimal algorithm is
guaranteed to expand fewer nodes than A*.
Time complexity:
number of nodes expanded is still exponential
in length of solution
Space complexity:
All generated nodes are kept in memory
A* usually runs out of space before running
out of time
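A minimal sketch of A* on the same assumed subset of the Romania map, with straight-line distance to Bucharest as h(n); nodes are expanded in order of f(n) = g(n) + h(n), so the optimal route is found.

```python
import heapq

# h(n): straight-line distances to Bucharest (standard Romania example)
H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Bucharest": 0, "Oradea": 380, "Zerind": 374,
     "Timisoara": 329}

# Road distances between neighbouring cities (subset of the map)
ROADS = {
    "Arad": {"Sibiu": 140, "Zerind": 75, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80, "Oradea": 151},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
}

def a_star(start, goal):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n).
    All generated nodes stay in the frontier or explored set, which is
    why A* tends to run out of space before it runs out of time."""
    frontier = [(H[start], 0, start, [start])]   # (f, g, city, path)
    explored = set()
    while frontier:
        f, g, city, path = heapq.heappop(frontier)
        if city == goal:
            return path, g
        if city in explored:
            continue
        explored.add(city)
        for nbr, cost in ROADS.get(city, {}).items():
            if nbr not in explored:
                g2 = g + cost
                heapq.heappush(frontier, (g2 + H[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

path, cost = a_star("Arad", "Bucharest")
print(path, cost)
# ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] 418
```

Unlike the greedy search, A* briefly considers Fagaras (f = 415) but then expands Pitesti (f = 417) and reaches Bucharest at the optimal cost of 418.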
For the 8-puzzle:
h1 = number of tiles out of place. In the example,
h1 = 8
h2 = total Manhattan distance of the misplaced
tiles. In the example,
h2 = 3+1+2+2+2+3+3+2 = 18
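The two heuristics are easy to compute directly. The board below is the usual textbook example state (an assumption, since the slide's figure is not reproduced here); it yields exactly h1 = 8 and h2 = 18.

```python
# A board is a tuple of 9 entries read row by row, 0 = blank.
START = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
GOAL  = (0, 1, 2,
         3, 4, 5,
         6, 7, 8)

def h1(state, goal=GOAL):
    """Number of tiles out of place (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of the misplaced tiles: for each tile,
    the row distance plus the column distance to its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

print(h1(START), h2(START))  # 8 18
```

Both heuristics are admissible, and h2 dominates h1 (h2 ≥ h1 for every state), so A* with h2 never expands more nodes than A* with h1.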
Hill-Climbing Search
current ← MAKE-NODE(INITIAL-STATE[problem])
loop do
  neighbor ← a highest-valued successor of current
  if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
  current ← neighbor
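The loop above can be sketched in Python as follows; the quadratic objective and the ±1 step neighborhood are assumed toy choices, not from the slides.

```python
def hill_climbing(start, neighbors, value):
    """Steepest-ascent hill climbing, as in the pseudocode above: move
    to the highest-valued successor until no successor improves on
    the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current        # local maximum (or plateau) reached
        current = best

# Assumed toy objective: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(0, step, f))  # climbs 0 -> 1 -> 2 -> 3
```

On this single-peaked objective the climb reaches the global maximum at x = 3; on a multi-peaked landscape the same loop stops at whichever local maximum it reaches first.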
Stochastic hill-climbing
Random selection among the uphill moves.
The selection probability can vary with the
steepness of the uphill move.
First-choice hill-climbing
implements stochastic hill climbing by
generating successors randomly until a better
one is found.
Random-restart hill-climbing
Tries to avoid getting stuck in local maxima.
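Random restarts can be illustrated on a small assumed landscape (not from the slides) with one poor local maximum and one global maximum.

```python
import random

# Assumed toy landscape: values over the integers 0..10, with a local
# maximum at x = 2 (value 5) and the global maximum at x = 8 (value 9).
LANDSCAPE = [1, 3, 5, 4, 2, 4, 6, 8, 9, 7, 5]

def climb(x):
    """Steepest-ascent hill climbing from start position x."""
    while True:
        nbrs = [n for n in (x - 1, x + 1) if 0 <= n < len(LANDSCAPE)]
        best = max(nbrs, key=lambda n: LANDSCAPE[n])
        if LANDSCAPE[best] <= LANDSCAPE[x]:
            return x              # stuck on a local maximum
        x = best

def random_restart(n_restarts=20, seed=0):
    """Climb from several random start positions and keep the best
    local maximum found; enough restarts escape the poor peak at 2."""
    rng = random.Random(seed)
    results = [climb(rng.randrange(len(LANDSCAPE))) for _ in range(n_restarts)]
    return max(results, key=lambda n: LANDSCAPE[n])

print(climb(0), random_restart())  # a single climb from 0 stops at 2
```

A single climb starting at 0 gets trapped on the local maximum at x = 2, while restarting from random positions almost always finds the global maximum at x = 8.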
Simulated Annealing
current ← MAKE-NODE(INITIAL-STATE[problem])
for t ← 1 to ∞ do
  T ← schedule[t]
  if T = 0 then return current
  next ← a randomly selected successor of current
  ΔE ← VALUE[next] − VALUE[current]
  if ΔE > 0 then current ← next
  else current ← next only with probability e^(ΔE/T)
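The pseudocode translates directly into Python. The objective and the geometric cooling schedule below are assumed toy choices (not from the slides); a vanishingly small temperature is treated as T = 0.

```python
import math
import random

def simulated_annealing(start, successors, value, schedule, seed=0):
    """Simulated annealing per the pseudocode above: an uphill move
    (dE > 0) is always taken; a downhill move is taken with
    probability e^(dE/T), which shrinks as the temperature T falls."""
    rng = random.Random(seed)
    current = start
    t = 1
    while True:
        T = schedule(t)
        if T < 1e-9:              # treat a vanishing temperature as T = 0
            return current
        nxt = rng.choice(successors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or rng.random() < math.exp(dE / T):
            current = nxt
        t += 1

# Assumed toy problem: maximize f(x) = -(x - 3)^2 over the integers,
# cooling geometrically from T = 10.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
cool = lambda t: 10.0 * 0.95 ** t
print(simulated_annealing(0, step, f, cool))
```

At high temperatures the walk moves almost freely; as T approaches 0 the acceptance test becomes effectively greedy, so the final iterations behave like stochastic hill climbing toward the peak at x = 3.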