Heuristic Search
Outline
• Generate-and-test
• Hill climbing
• Best-first search
• Problem reduction
• Constraint satisfaction
Generate-and-Test
Algorithm
1. Generate a possible solution.
2. Test to see if this is actually a solution.
3. If a solution has been found, quit. Otherwise, return to step 1.
Generate-and-Test
• Exhaustive generate-and-test.
• Heuristic generate-and-test: does not consider paths that seem unlikely to lead to a solution.
• Plan generate-and-test:
  - Create a list of candidates.
  - Apply generate-and-test to that list.
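The loop above can be sketched in a few lines. This is a minimal illustration, assuming the caller supplies a candidate generator and a test predicate (the function names here are illustrative, not from the slides):

```python
# A minimal sketch of generate-and-test: generate candidates one at a
# time and test each against the goal condition.
def generate_and_test(generate, is_solution):
    """Return the first generated candidate that passes the test, or None."""
    for candidate in generate():       # 1. generate a possible solution
        if is_solution(candidate):     # 2. test whether it really is one
            return candidate           # 3. if so, quit
    return None                        # generator exhausted: no solution

# Usage: find a divisor of 91 by exhaustive generation.
divisor = generate_and_test(lambda: range(2, 91), lambda d: 91 % d == 0)
```

With an exhaustive generator this is exhaustive generate-and-test; plugging in a generator that skips unpromising candidates gives the heuristic variant.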
Hill Climbing
Simple Hill Climbing
Algorithm
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
   - Select and apply a new operator.
   - Evaluate the new state:
     - If it is a goal state, quit.
     - If it is better than the current state, make it the new current state.
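The algorithm above can be sketched as follows. This is a minimal illustration, assuming the caller supplies the operators (as a successor function), an evaluation function, and a goal test:

```python
# A minimal sketch of simple hill climbing: take the first successor
# that improves on the current state, stop when none does.
def simple_hill_climbing(state, successors, value, is_goal):
    while True:
        if is_goal(state):
            return state
        moved = False
        for nxt in successors(state):      # apply operators one at a time
            if is_goal(nxt):
                return nxt
            if value(nxt) > value(state):  # first better successor wins
                state, moved = nxt, True
                break
        if not moved:                      # no operator improved the state
            return state

# Usage: climb toward the maximum of f(x) = -(x - 5)^2 over the integers.
result = simple_hill_climbing(
    0,
    lambda x: [x - 1, x + 1],
    lambda x: -(x - 5) ** 2,
    lambda x: x == 5,
)
```

Note that the first improving successor is taken immediately, which is exactly what distinguishes this from the steepest-ascent variant below.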
Steepest-Ascent Hill Climbing
(Gradient Search)
• Considers all the moves from the current state.
• Selects the best one as the next state.
Steepest-Ascent Hill Climbing
(Gradient Search)
Algorithm
1. Evaluate the initial state.
2. Loop until a solution is found or a complete iteration produces no change to the current state:
   - Let SUCC be a state such that any possible successor of the current state will be better than SUCC (the worst state).
   - For each operator that applies to the current state, evaluate the new state:
     - If it is a goal state, quit.
     - If it is better than SUCC, set SUCC to this state.
   - If SUCC is better than the current state, set the current state to SUCC.
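A sketch of the steepest-ascent variant, under the same illustrative interface as the simple version (caller supplies successors, evaluation, and goal test):

```python
# A minimal sketch of steepest-ascent hill climbing (gradient search):
# examine ALL successors, then move to the best one, SUCC.
def steepest_ascent(state, successors, value, is_goal):
    while True:
        if is_goal(state):
            return state
        succ = None                      # SUCC: best successor seen so far
        for nxt in successors(state):
            if is_goal(nxt):
                return nxt
            if succ is None or value(nxt) > value(succ):
                succ = nxt               # better than SUCC: remember it
        if succ is None or value(succ) <= value(state):
            return state                 # complete iteration, no improvement
        state = succ                     # SUCC beats the current state

# Usage: climb toward the maximum of f(x) = -(x - 6)^2 over the integers.
best = steepest_ascent(
    0,
    lambda x: [x - 2, x - 1, x + 1, x + 2],
    lambda x: -(x - 6) ** 2,
    lambda x: x == 6,
)
```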
Hill Climbing: Disadvantages
Local maximum
A state that is better than all of its neighbours, but not
better than some other states far away.
Hill Climbing: Disadvantages
Plateau
A flat area of the search space in which all neighbouring
states have the same value.
Hill Climbing: Disadvantages
Ridge
The orientation of the high region, relative to the set of available moves, makes it impossible to climb up with any single move. However, two moves executed serially may increase the height.
Hill Climbing: Disadvantages
Ways Out
• Backtrack to some earlier node and try going in a different direction (good for local maxima).
• Make a big jump to try to get into a new section of the search space (good for plateaus).
• Apply two or more rules before doing the test, i.e. move in several directions at once (good for ridges).
Hill Climbing: Conclusion
Simulated Annealing
• A variation of hill climbing in which, at the beginning of the process, some downhill moves may be made.
• The idea is to do enough exploration of the whole space early on so that the final solution is relatively insensitive to the starting state.
Simulated Annealing
Physical Annealing
• Physical substances are melted and then gradually cooled until some solid state is reached.
• The goal is to produce a minimal-energy state.
• Annealing schedule: if the temperature is lowered sufficiently slowly, then the goal will be attained.
• Nevertheless, there is some probability of a transition to a higher energy state: e^(-ΔE/kT).
Simulated Annealing
Algorithm
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
   - Set T according to an annealing schedule.
   - Select and apply a new operator.
   - Evaluate the new state:
     - If it is a goal state, quit.
     - Otherwise, compute ΔE = Val(current state) - Val(new state).
     - If ΔE < 0, make it the new current state.
     - Else, make it the new current state with probability e^(-ΔE/kT).
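A sketch of the algorithm above, using a simple geometric cooling rule as the annealing schedule; the problem-specific names (neighbor function, evaluation function, schedule parameters) are illustrative assumptions, not from the slides:

```python
import math
import random

# A minimal sketch of simulated annealing: always accept uphill moves,
# occasionally accept downhill moves with probability e^(-dE/T),
# where T falls according to a geometric annealing schedule.
def simulated_annealing(state, neighbor, value, t0=10.0, cooling=0.95, steps=2000):
    random.seed(0)                       # fixed seed so the sketch is repeatable
    best = state
    t = t0
    for _ in range(steps):
        t = max(t * cooling, 1e-9)       # annealing schedule: lower T slowly
        nxt = neighbor(state)
        delta_e = value(state) - value(nxt)
        if delta_e < 0:                  # nxt is better: always accept
            state = nxt
        elif random.random() < math.exp(-delta_e / t):
            state = nxt                  # occasionally accept a downhill move
        if value(state) > value(best):
            best = state
    return best

# Usage: maximize f(x) = -(x - 3)^2 over integer states.
peak = simulated_annealing(20, lambda x: x + random.choice([-1, 1]),
                           lambda x: -(x - 3) ** 2)
```

Because T shrinks each step, downhill moves are common early (exploration) and all but impossible late, so the run settles into a maximum largely independent of the start state.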
Best-First Search
• Depth-first search: not all competing branches have to be expanded.
Best-First Search (OR Graphs)
(Figure: successive stages of best-first search on an OR graph; at each step the open node with the lowest heuristic value is expanded and its successors' values are added.)
Best-First Search
• OPEN: nodes that have been generated but have not yet been examined.
  This list is organized as a priority queue.
Best-First Search
Algorithm
1. Put the initial node on a list OPEN.
2. Loop until a goal is found or there are no nodes left in OPEN:
   - Pick the best node in OPEN.
   - Generate its successors.
   - For each successor:
     - If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
     - If it has been generated before, change the parent if this new path is better than the previous one, and update the cost of getting to this node and to any successors that this node may already have.
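The algorithm above can be sketched with OPEN as a priority queue. This simplified version orders OPEN by a heuristic h alone (i.e. greedy best-first) and omits the parent-rewriting step; the graph and heuristic values are illustrative:

```python
import heapq

# A minimal sketch of best-first search over an explicit graph, keeping
# OPEN as a priority queue ordered by the heuristic h(n).
def best_first_search(start, goal, successors, h):
    open_list = [(h(start), start)]          # OPEN: generated, not yet examined
    parent = {start: None}
    while open_list:
        _, node = heapq.heappop(open_list)   # pick the best node in OPEN
        if node == goal:
            path = []
            while node is not None:          # rebuild the path via parents
                path.append(node)
                node = parent[node]
            return path[::-1]
        for succ in successors(node):        # generate its successors
            if succ not in parent:           # new node: record parent, add to OPEN
                parent[succ] = node
                heapq.heappush(open_list, (h(succ), succ))
    return None

# Usage on a small labelled graph; h is a guess at the distance to 'G'.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': [], 'G': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 2, 'G': 0}
path = best_first_search('A', 'G', lambda n: graph[n], lambda n: h[n])
```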
Best-First Search
• Greedy search:
  h(n) = estimated cost of the cheapest path from node n to a goal state.
  Neither optimal nor complete.
• Uniform-cost search:
  g(n) = cost of the cheapest path from the initial state to node n.
  Optimal and complete, but very inefficient.
Algorithm A*
• Evaluation function (future): h'(n) estimates how far a particular node is from the goal.
• Cost function (past): g(n) measures how much has already been spent in reaching a particular node from the start.
f'(n) = g(n) + h'(n)
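Combining the two functions, A* expands nodes in order of f'(n) = g(n) + h'(n). A minimal sketch, with an illustrative weighted graph and heuristic:

```python
import heapq

# A minimal sketch of A*: expand nodes in order of f' = g + h',
# where g is the cost spent so far and h' the estimate to the goal.
def a_star(start, goal, neighbors, h):
    open_heap = [(h(start), 0, start, None)]    # entries: (f', g, node, parent)
    parent, g_cost = {}, {start: 0}
    while open_heap:
        f, g, node, par = heapq.heappop(open_heap)
        if node in parent:
            continue                            # already expanded via a better path
        parent[node] = par
        if node == goal:
            path = []
            while node is not None:             # rebuild the path via parents
                path.append(node)
                node = parent[node]
            return path[::-1], g
        for nxt, cost in neighbors(node):
            ng = g + cost
            if nxt not in g_cost or ng < g_cost[nxt]:
                g_cost[nxt] = ng                # better path: update g and re-queue
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, node))
    return None, float('inf')

# Usage: shortest path on a weighted graph with an admissible heuristic.
edges = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('G', 5)],
         'C': [('G', 1)], 'G': []}
h = {'A': 2, 'B': 2, 'C': 1, 'G': 0}
path, cost = a_star('A', 'G', lambda n: edges[n], lambda n: h[n])
```

Note that with h(n) = 0 everywhere this reduces to uniform-cost search, and dropping g reduces it to greedy search, the two special cases above.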
AND-OR Graphs
(Figure: an AND-OR graph in which the goals "Steal TV set", "Earn some money", and "Buy TV set" appear as alternative and joint subgoals.)
Problem Reduction
Problem Reduction: AO*
(Figure: successive stages of the AO* algorithm expanding an AND-OR graph; the cost estimates at the nodes are revised after each expansion.)
Problem Reduction: AO*
(Figure: an AO* example in which revising the cost of a shared node G from 5 to 10 propagates upward, raising the estimates of its ancestors.)
Constraint Satisfaction
Constraint Satisfaction
Two-step process:
1. Constraints are discovered and propagated as far as possible
throughout the system.
2. If there is still not a solution, then search begins. A guess
about something is made and new constraints are added.
Cryptarithmetic: SEND + MORE = MONEY
• No two letters have the same value.
• The sums of the digits must be as shown.

Initial state (constraint propagation):
M = 1; S = 8 or 9; O = 0; N = E + 1; C2 = 1; N + R > 8; E ≠ 9.
Guess E = 2. Then N = 3; R = 8 or 9; and 2 + D = Y or 2 + D = 10 + Y.
- Case C1 = 0: 2 + D = Y; N + R = 10 + E, so R = 9 and S = 8.
- Case C1 = 1: 2 + D = 10 + Y, so D = 8 + Y, i.e. D = 8 or 9; D = 8 gives Y = 0, and D = 9 gives Y = 1.
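The puzzle can also be solved by brute force once the first propagated constraints are applied. This sketch is not the propagation procedure itself: it fixes M = 1 and O = 0 (derived above) and simply enumerates the remaining six letters:

```python
from itertools import permutations

# Brute-force check of SEND + MORE = MONEY: after propagation pins
# M = 1 and O = 0, enumerate the remaining letters over the leftover
# digits, requiring all letters distinct and no leading zeros.
def solve_send_more_money():
    M, O = 1, 0                         # from the constraint propagation above
    rest = [d for d in range(10) if d not in (M, O)]
    for S, E, N, D, R, Y in permutations(rest, 6):
        if S == 0:
            continue                    # SEND has no leading zero
        send = 1000 * S + 100 * E + 10 * N + D
        more = 1000 * M + 100 * O + 10 * R + E
        money = 10000 * M + 1000 * O + 100 * N + 10 * E + Y
        if send + more == money:
            return dict(S=S, E=E, N=N, D=D, M=M, O=O, R=R, Y=Y)
    return None

sol = solve_send_more_money()
```

The unique solution is 9567 + 1085 = 10652; note it lies outside the E = 2 branch guessed in the trace above, which is why search must sometimes retract a guess and try another.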
Search Techniques in Game Playing
Game playing is one of the most practical direct applications
of the search problem-solving paradigm.
• The resulting game tree or graph is too deep and its branching factor too high for exhaustive search.
Game Playing
Knowledge Representation