Lecture 2: Solving Problems by Searching
Marco Chiarandini
Department of Mathematics & Computer Science
University of Southern Denmark
Slides by Stuart Russell and Peter Norvig
Last Time
Agents are used to provide a consistent viewpoint on various topics in the field of AI
Essential concepts:
Agents interact with the environment by means of sensors and actuators.
A rational agent does “the right thing” ≡ maximizes a performance measure
⇒ PEAS
Environment types: observable, deterministic, episodic, static, discrete, single agent
Agent types: table driven (rule based), simple reflex, model-based reflex, goal-based, utility-based, learning agent
Structure of Agents
Agent = Architecture + Program
Architecture
operating platform of the agent
computer system, specific hardware, possibly OS
Program
function that implements the mapping from percepts to actions
This course is about the program,
not the architecture
Outline
1. Problem Solving Agents
2. Search
3. Uninformed search algorithms
4. Informed search algorithms
5. Constraint Satisfaction Problem
Outline
♦ Problem-solving agents
♦ Problem types
♦ Problem formulation
♦ Example problems
♦ Basic search algorithms
Problem-solving agents
Restricted form of general agent:
function Simple-Problem-Solving-Agent(percept) returns an action
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem formulation
  state ← Update-State(state, percept)
  if seq is empty then
    goal ← Formulate-Goal(state)
    problem ← Formulate-Problem(state, goal)
    seq ← Search(problem)
  action ← Recommendation(seq, state)
  seq ← Remainder(seq, state)
  return action
Note: this is offline problem solving; solution executed “eyes closed.”
Online problem solving involves acting without complete knowledge.
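In Python, this control loop could look roughly as follows. This is only an illustrative sketch: the helpers update_state, formulate_goal, formulate_problem and search are placeholders for the slide's uppercase procedures, not code given in the slides.

def make_agent(update_state, formulate_goal, formulate_problem, search):
    # The four helpers stand in for the slide's uppercase procedures.
    state, seq = None, []                 # persistent ("static") agent memory

    def agent(percept):
        nonlocal state, seq
        state = update_state(state, percept)
        if not seq:                       # plan only when the old plan has been used up
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem) or []   # offline search returns a whole action sequence
        return seq.pop(0) if seq else None   # then execute it "eyes closed", one step per call
    return agent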
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest
Formulate goal:
be in Bucharest
Formulate problem:
states: various cities
actions: drive between cities
Find solution:
sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
State space problem formulation
A problem is defined by five items:
1. initial state e.g., “at Arad”
2. actions available in a state, e.g., Go(Zerind)
3. transition model res(x, a),
e.g., res(In(Arad), Go(Zerind)) = In(Zerind)
alternatively: set of action–state pairs:
{⟨(In(Arad), Go(Zerind)), In(Zerind)⟩, . . .}
4. goal test, can be
explicit, e.g., x = “at Bucharest”
implicit, e.g., NoDirt(x)
5. path cost (additive)
e.g., sum of distances, number of actions executed, etc.
c(x, a, y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions
leading from the initial state to a goal state
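These five items map naturally onto a small programming interface. A minimal Python sketch, assuming nothing beyond the definitions above (the class and method names are my own, not from the slides):

class Problem:
    # A state-space problem: initial state, actions, transition model res(x, a),
    # goal test, and an additive path cost with step cost c(x, a, y) >= 0.
    def __init__(self, initial):
        self.initial = initial

    def actions(self, state):             # actions applicable in state
        raise NotImplementedError

    def result(self, state, action):      # transition model res(x, a)
        raise NotImplementedError

    def goal_test(self, state):           # explicit or implicit goal test
        raise NotImplementedError

    def step_cost(self, state, action, result):
        return 1                          # default: cost 1 per action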
Selecting a state space
Real world is complex
⇒ state space must be abstracted for problem solving
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
e.g., “Arad → Zerind” represents a complex set
of possible routes, detours, rest stops, etc.
For guaranteed realizability, any real state “in Arad”
must get to some real state “in Zerind”
(Abstract) solution =
set of real paths that are solutions in the real world
Each abstract action should be “easier” than the original problem!
Atomic representation
Example: Vacuum world state space graph
[Figure: state-space graph of the two-cell vacuum world, with arcs labelled L (Left), R (Right), and S (Suck)]
states??: integer dirt and robot locations (ignore dirt amounts etc.)
actions??: Left, Right, Suck, NoOp
transition model??: arcs in the digraph
goal test??: no dirt
path cost??: 1 per action (0 for NoOp)
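As a usage example, the two-cell vacuum world could be written against the hypothetical Problem interface sketched earlier, with a state encoded as (robot location, dirt in left cell, dirt in right cell):

class VacuumWorld(Problem):
    # state = (loc, dirt_left, dirt_right); loc is 0 (left) or 1 (right)
    def actions(self, state):
        return ['Left', 'Right', 'Suck']

    def result(self, state, action):
        loc, dirt_left, dirt_right = state
        if action == 'Left':
            return (0, dirt_left, dirt_right)
        if action == 'Right':
            return (1, dirt_left, dirt_right)
        if action == 'Suck':
            return (loc, False, dirt_right) if loc == 0 else (loc, dirt_left, False)
        return state                      # any other action (e.g. NoOp) leaves the state unchanged

    def goal_test(self, state):
        _, dirt_left, dirt_right = state
        return not dirt_left and not dirt_right   # no dirt anywhere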
Example: The 8-puzzle
[Figure: Start State and Goal State of the 8-puzzle]
states??: integer locations of tiles (ignore intermediate positions)
actions??: move blank left, right, up, down (ignore unjamming etc.)
transition model??: effect of the actions
goal test??: = goal state (given)
path cost??: 1 per move
[Note: optimal solution of n-Puzzle family is NP-hard]
Example: robotic assembly
states??: real-valued coordinates of robot joint angles and of the parts of the object to be assembled
actions??: continuous motions of robot joints
goal test??: complete assembly with no robot included!
path cost??: time to execute
Problem types
Deterministic, fully observable, known, discrete =⇒ state space problem
Agent knows exactly which state it will be in; solution is a sequence
Non-observable =⇒ conformant problem
Agent may have no idea where it is; solution (if any) is a sequence
Nondeterministic and/or partially observable =⇒ contingency problem
percepts provide new information about current state
solution is a contingent plan or a policy
often interleave search, execution
Unknown state space =⇒ exploration problem (“online”)
Example: vacuum world
State space, start in #5. Solution??
[Right, Suck]
Non-observable, start in {1, 2, 3, 4, 5, 6, 7, 8}
e.g., Right goes to {2, 4, 6, 8}. Solution??
[Right, Suck, Left, Suck]
Contingency, start in #5
Murphy’s Law: Suck can dirty a clean carpet
Local sensing: dirt, location only.
Solution??
[Right, if dirt then Suck]
[Figure: the eight vacuum-world states, numbered 1–8]
Example Problems
Toy problems
vacuum cleaner agent
8-puzzle
8-queens
cryptarithmetic
missionaries and cannibals
Real-world problems
route finding
traveling salesperson
VLSI layout
robot navigation
assembly sequencing
Outline
1. Problem Solving Agents
2. Search
3. Uninformed search algorithms
4. Informed search algorithms
5. Constraint Satisfaction Problem
Objectives
Formulate appropriate problems in optimization and planning (sequence of actions to achieve a goal) as search tasks:
initial state, operators, goal test, path cost
Know the fundamental search strategies and algorithms
uninformed search:
breadth-first, depth-first, uniform-cost, iterative deepening, bidirectional
informed search
best-first (greedy, A*), heuristics, memory-bounded
Evaluate the suitability of a search strategy for a problem
completeness, optimality, time & space complexity
Searching for Solutions
Traversal of some search space
from the initial state to a goal state
legal sequence of actions as defined by operators
The search can be performed on:
a search tree derived by expanding the current state using the possible operators
(Tree-Search algorithm)
a graph representing the state space
(Graph-Search algorithm)
Implementation: states vs. nodes
A state is a (representation of) a physical configuration
A node is a data structure constituting part of a search tree
includes state, parent, action, path cost g(x)
States do not have parents, children, depth, or path cost!
[Figure: a state (an 8-puzzle configuration) vs. the node that contains it, with fields state, parent, action, depth = 6, g = 6]
The Expand function creates new nodes, filling in the various fields using the
Transition Model of the problem to create the corresponding states.
Implementation: general tree search
function Tree-Search(problem, fringe) returns a solution, or failure
  fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
  loop do
    if fringe is empty then return failure
    node ← Remove-Front(fringe)
    if Goal-Test(problem, State(node)) then return node
    fringe ← InsertAll(Expand(node, problem), fringe)

function Expand(node, problem) returns a set of nodes
  successors ← the empty set
  for each action, result in Successor-Fn(problem, State[node]) do
    s ← a new Node
    Parent-Node[s] ← node; Action[s] ← action; State[s] ← result
    Path-Cost[s] ← Path-Cost[node] + Step-Cost(State[node], action, result)
    Depth[s] ← Depth[node] + 1
    add s to successors
  return successors
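A direct Python transcription of Tree-Search and Expand, reusing the hypothetical Problem interface sketched earlier; the pop argument fixes how nodes leave the fringe and hence the strategy (all names are my own, not the slides'):

from collections import namedtuple

# A node bundles a state with bookkeeping; states themselves stay plain values.
Node = namedtuple('Node', 'state parent action path_cost depth')

def expand(node, problem):
    # Create the child nodes of node using the problem's transition model.
    children = []
    for action in problem.actions(node.state):
        s = problem.result(node.state, action)
        cost = node.path_cost + problem.step_cost(node.state, action, s)
        children.append(Node(s, node, action, cost, node.depth + 1))
    return children

def tree_search(problem, fringe, pop):
    # The strategy is fixed by how nodes are removed from the fringe (pop).
    fringe.append(Node(problem.initial, None, None, 0, 0))
    while fringe:
        node = pop(fringe)
        if problem.goal_test(node.state):
            return node          # follow parent links to recover the action sequence
        fringe.extend(expand(node, problem))
    return None                  # failure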
Search strategies
A strategy is defined by picking the order of node expansion
function Tree-Search(problem, fringe) returns a solution, or failure
  fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
  loop do
    if fringe is empty then return failure
    node ← Remove-Front(fringe)
    if Goal-Test(problem, State(node)) then return node
    fringe ← InsertAll(Expand(node, problem), fringe)
Strategies are evaluated along the following dimensions:
completeness—does it always find a solution if one exists?
time complexity—number of nodes generated/expanded
space complexity—maximum number of nodes in memory
optimality—does it always find a least path cost solution?
Time and space complexity are measured in terms of
b—maximum branching factor of the search tree
d—depth of the least-cost solution
m—maximum depth of the state space (may be ∞)
Outline
1. Problem Solving Agents
2. Search
3. Uninformed search algorithms
4. Informed search algorithms
5. Constraint Satisfaction Problem
Uninformed search strategies
Uninformed strategies use only the information available
in the problem definition
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
Bidirectional Search
Breadth-first search
Expand shallowest unexpanded node (shortest path in the frontier)
[Figure: breadth-first expansion order on a small binary tree rooted at A]
Implementation:
fringe is a FIFO queue, i.e., new successors go at end
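With the tree_search sketch given earlier, breadth-first search is obtained simply by making the fringe a FIFO queue; a minimal sketch:

from collections import deque

def breadth_first_search(problem):
    # FIFO fringe: new successors go at the end, the shallowest node comes out first.
    return tree_search(problem, deque(), lambda fringe: fringe.popleft())

# e.g. breadth_first_search(VacuumWorld((0, True, True))) with the earlier sketches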
Properties of breadth-first search
Complete?? Yes (if b is finite)
Optimal?? Yes (if cost = 1 per step); not optimal in general
Time?? 1 + b + b^2 + b^3 + ... + b^d + b(b^d − 1) = O(b^{d+1}), i.e., exponential in d
Space?? b^{d−1} + b^d = O(b^d) (explored + frontier)
Space is the big problem; can easily generate nodes at 100 MB/sec, so 24 hrs = 8640 GB.
Uniform-cost search
Expand the least-cost path first
(Equivalent to breadth-first if step costs are all equal)
Implementation:
fringe = priority queue ordered by path cost, lowest first
Complete?? Yes, if step cost ≥ ε
Optimal?? Yes—nodes expanded in increasing order of g(n)
Time?? # of nodes with g ≤ cost of optimal solution, O(b^{1+⌈C*/ε⌉}),
where C* is the cost of the optimal solution
Space?? # of nodes with g ≤ cost of optimal solution, O(b^{1+⌈C*/ε⌉})
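A uniform-cost search sketch using a binary heap as the priority queue ordered by path cost g(n); it reuses the Node and expand helpers from the tree-search sketch, and the tie counter only avoids comparing nodes with equal cost:

import heapq
from itertools import count

def uniform_cost_search(problem):
    start = Node(problem.initial, None, None, 0, 0)
    tie = count()                          # tie-breaker so equal-cost entries never compare nodes
    frontier = [(0, next(tie), start)]     # priority = g(n), the path cost so far
    while frontier:
        g, _, node = heapq.heappop(frontier)    # least-cost path first
        if problem.goal_test(node.state):       # goal test on expansion, not on generation
            return node
        for child in expand(node, problem):
            heapq.heappush(frontier, (child.path_cost, next(tie), child))
    return None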
Depth-first search
Expand deepest unexpanded node
[Figure: depth-first expansion order on a binary tree of depth 3]
Implementation:
fringe = LIFO queue, i.e., put successors at front
Properties of depth-first search
Complete?? No: fails in infinite-depth spaces, or spaces with loops
Modify to avoid repeated states along path
⇒ complete in finite spaces
Optimal?? No
Time?? O(b^m): terrible if m is much larger than d,
but if solutions are dense, may be much faster than breadth-first
Space?? O(bm), i.e., linear space!
Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors
Recursive implementation:
function Depth-Limited-Search(problem, limit) returns soln/fail/cutoff
  Recursive-DLS(Make-Node(Initial-State[problem]), problem, limit)

function Recursive-DLS(node, problem, limit) returns soln/fail/cutoff
  cutoff-occurred? ← false
  if Goal-Test(problem, State[node]) then return node
  else if Depth[node] = limit then return cutoff
  else for each successor in Expand(node, problem) do
    result ← Recursive-DLS(successor, problem, limit)
    if result = cutoff then cutoff-occurred? ← true
    else if result ≠ failure then return result
  if cutoff-occurred? then return cutoff else return failure
Iterative deepening search
function Iterative-Deepening-Search(problem) returns a solution
  inputs: problem, a problem
  for depth ← 0 to ∞ do
    result ← Depth-Limited-Search(problem, depth)
    if result ≠ cutoff then return result
  end
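Both procedures translate directly into Python; a sketch reusing the Node and expand helpers from the tree-search sketch (names are my own):

def depth_limited_search(problem, limit):
    def recursive_dls(node):
        if problem.goal_test(node.state):
            return node
        if node.depth == limit:
            return 'cutoff'
        cutoff_occurred = False
        for successor in expand(node, problem):
            result = recursive_dls(successor)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return result
        return 'cutoff' if cutoff_occurred else None   # None plays the role of failure

    return recursive_dls(Node(problem.initial, None, None, 0, 0))

def iterative_deepening_search(problem):
    depth = 0
    while True:                            # for depth = 0 to infinity
        result = depth_limited_search(problem, depth)
        if result != 'cutoff':
            return result                  # a solution node, or None if none exists
        depth += 1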
Iterative deepening search
[Figure: iterative deepening search on a binary tree, snapshots of the iteration with Limit = 3]
Properties of iterative deepening search
Complete?? Yes
Optimal?? Yes, if step cost = 1
Can be modified to explore uniform-cost tree
Time?? (d + 1)b^0 + d b^1 + (d − 1)b^2 + ... + b^d = O(b^d)
Space?? O(bd)
Numerical comparison in time for b = 10 and d = 5, solution at far right leaf:
N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100
IDS does better because other nodes at depth d are not expanded
BFS can be modified to apply the goal test when a node is generated
Iterative lengthening is not as successful as IDS
Bidirectional Search
Search simultaneously (using breadth-first search)
from goal to start
from start to goal
Stop when the two search trees intersect
Difficulties in Bidirectional Search
If applicable, may lead to substantial savings
Predecessors of a (goal) state must be generated
Not always possible, e.g., when we do not know the goal state explicitly
Search must be coordinated between the two search processes.
What if many goal states?
One search must keep all nodes in memory
Summary of algorithms
Criterion    Breadth-First   Uniform-Cost       Depth-First   Depth-Limited   Iterative Deepening
Complete?    Yes*            Yes*               No            Yes, if l ≥ d   Yes
Time         O(b^{d+1})      O(b^{1+⌈C*/ε⌉})    O(b^m)        O(b^l)          O(b^d)
Space        O(b^{d+1})      O(b^{1+⌈C*/ε⌉})    O(bm)         O(bl)           O(bd)
Optimal?     Yes*            Yes                No            No              Yes*
Summary
Problem formulation usually requires abstracting away real-world details
to define a state space that can feasibly be explored
Variety of uninformed search strategies
Iterative deepening search uses only linear space
and not much more time than other uninformed algorithms
Graph search can be exponentially more efficient than tree search
Outline
1. Problem Solving Agents
2. Search
3. Uninformed search algorithms
4. Informed search algorithms
5. Constraint Satisfaction Problem
Review: Tree search
function Tree-Search(problem, fringe) returns a solution, or failure
  fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
  loop do
    if fringe is empty then return failure
    node ← Remove-Front(fringe)
    if Goal-Test[problem] applied to State(node) succeeds then return node
    fringe ← InsertAll(Expand(node, problem), fringe)
A strategy is defined by picking the order of node expansion
Informed search strategy
Informed strategies use the agent's background information about the problem
map, costs of actions, approximation of solutions, ...
best-first search
greedy search
A∗ search
local search (not in this course)
Hill-climbing
Simulated annealing
Genetic algorithms
Local search in continuous spaces
Best-first search
Idea: use an evaluation function for each node
– estimate of “desirability”
⇒ Expand most desirable unexpanded node
Implementation:
fringe is a queue sorted in decreasing order of desirability
Special cases:
greedy search
A∗ search
Greedy search
Evaluation function h(n) (heuristic)
= estimate of cost from n to the closest goal
E.g., hSLD(n) = straight-line distance from n to Bucharest
Greedy search expands the node that appears to be closest to goal
Properties of greedy search
Complete?? No—can get stuck in loops, e.g., from Iasi to Fagaras:
Iasi → Neamt → Iasi → Neamt → ...
Complete in finite space with repeated-state checking
Optimal?? No
Time?? O(b^m), but a good heuristic can give dramatic improvement
Space?? O(b^m)
A∗ search
Idea: avoid expanding paths that are already expensive
Evaluation function f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost to goal from n
f(n) = estimated total cost of path through n to goal
A∗ search uses an admissible heuristic,
i.e., h(n) ≤ h∗(n) where h∗(n) is the true cost from n.
(Also require h(n) ≥ 0, so h(G) = 0 for any goal G.)
E.g., hSLD(n) never overestimates the actual road distance
Theorem: A∗ search is optimal
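Greedy search and A∗ differ only in the evaluation function used to order the fringe, so one best-first routine covers both. A minimal sketch, again reusing the Node and expand helpers from the tree-search sketch and taking the heuristic h as an argument:

import heapq
from itertools import count

def best_first_search(problem, f):
    # Expand the unexpanded node with the smallest f value first.
    start = Node(problem.initial, None, None, 0, 0)
    tie = count()
    frontier = [(f(start), next(tie), start)]
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.goal_test(node.state):
            return node
        for child in expand(node, problem):
            heapq.heappush(frontier, (f(child), next(tie), child))
    return None

def greedy_search(problem, h):
    return best_first_search(problem, lambda n: h(n.state))                # f(n) = h(n)

def a_star_search(problem, h):
    return best_first_search(problem, lambda n: n.path_cost + h(n.state))  # f(n) = g(n) + h(n)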
Optimality of A∗ (standard proof)
Suppose some suboptimal goal G2 has been generated and is in the queue.
Let n be an unexpanded node on a shortest path to an optimal goal G1.
[Figure: Start node, a suboptimal goal G2 in the queue, and an unexpanded node n on a shortest path to the optimal goal G1]

f(G2) = g(G2)   since h(G2) = 0
      > g(G1)   since G2 is suboptimal
      ≥ f(n)    since h is admissible

Since f(G2) > f(n), A∗ will never select G2 for expansion
Optimality of A∗ (more useful)
Lemma: A∗ expands nodes in order of increasing f value*
Gradually adds "f-contours" of nodes (cf. breadth-first adds layers)
Contour i contains all nodes with f = fi, where fi < fi+1
[Figure: Romania map with f-contours at 380, 400, and 420]
Properties of A∗
Complete?? Yes, unless there are infinitely many nodes with f ≤ f(G)
Optimal?? Yes—cannot expand fi+1 until fi is finished
A∗ expands all nodes with f(n) < C∗
A∗ expands some nodes with f(n) = C∗
A∗ expands no nodes with f(n) > C∗
Time?? Exponential in [relative error in h × length of solution]
Space?? Keeps all nodes in memory
Proof of lemma: Consistency
A heuristic is consistent if

h(n) ≤ c(n, a, n') + h(n')

[Figure: node n with successor n' reached by action a at cost c(n, a, n'), and goal G]

If h is consistent, we have

f(n') = g(n') + h(n')
      = g(n) + c(n, a, n') + h(n')
      ≥ g(n) + h(n)
      = f(n)

I.e., f(n) is nondecreasing along any path.
Admissible heuristics
E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)
[Figure: 8-puzzle Start State and Goal State]
h1(S) =??
h2(S) =??
Admissible heuristics
E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)
[Figure: 8-puzzle Start State and Goal State]
h1(S) =?? 6
h2(S) =?? 4+0+3+3+1+0+2+1 = 14
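Both heuristics are straightforward to compute if a puzzle state is stored as a tuple of nine entries read row by row, with 0 for the blank. An illustrative Python sketch (the goal layout is passed in explicitly):

def h1(state, goal):
    # Number of misplaced tiles (the blank, 0, is not counted).
    return sum(1 for tile, want in zip(state, goal) if tile != 0 and tile != want)

def h2(state, goal):
    # Total Manhattan distance of every tile from its goal square on the 3x3 board.
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total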
Dominance
If h2(n) ≥ h1(n) for all n (both admissible)
then h2 dominates h1 and is better for search
Typical search costs:
d = 14: IDS = 3,473,941 nodes; A∗(h1) = 539 nodes; A∗(h2) = 113 nodes
d = 24: IDS ≈ 54,000,000,000 nodes; A∗(h1) = 39,135 nodes; A∗(h2) = 1,641 nodes
Given any admissible heuristics ha, hb,
h(n) = max(ha(n), hb(n))
is also admissible and dominates ha, hb
Relaxed problems
Admissible heuristics can be derived from the exact
solution cost of a relaxed version of the problem
If the rules of the 8-puzzle are relaxed so that a tile can move
anywhere, then h1(n) gives the shortest solution
If the rules are relaxed so that a tile can move to any adjacent square,
then h2(n) gives the shortest solution
Key point: the optimal solution cost of a relaxed problem
is no greater than the optimal solution cost of the real problem
Memory-Bounded Heuristic Search
Try to reduce memory needs
Take advantage of heuristic to improve performance
Iterative-deepening A∗ (IDA∗)
SMA∗
Iterative Deepening A∗
Uninformed Iterative Deepening (repetition)
depth-first search where the max depth is iteratively increased
IDA∗
depth-first search, but expands only nodes with f-cost less than or equal to a cutoff;
the new cutoff is the smallest f-cost among the nodes pruned in the last iteration
was the best search algorithm for many practical problems
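A compact recursive IDA∗ sketch (tree-search version, reusing the Node and expand helpers from the tree-search sketch and a caller-supplied heuristic h); the cutoff returned by each iteration becomes the bound of the next:

def ida_star(problem, h):
    def f(node):
        return node.path_cost + h(node.state)

    def bounded_dfs(node, bound):
        # Returns a solution node, or the smallest f value that exceeded the bound.
        if f(node) > bound:
            return f(node)
        if problem.goal_test(node.state):
            return node
        smallest = float('inf')
        for child in expand(node, problem):
            result = bounded_dfs(child, bound)
            if isinstance(result, Node):          # a solution was found below
                return result
            smallest = min(smallest, result)
        return smallest

    root = Node(problem.initial, None, None, 0, 0)
    bound = f(root)
    while True:
        result = bounded_dfs(root, bound)
        if isinstance(result, Node):
            return result
        if result == float('inf'):
            return None                           # no solution exists
        bound = result                            # next cutoff: smallest f that exceeded the old one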
Properties of IDA∗
Complete?? Yes
Time complexity?? Still exponential
Space complexity?? linear
Optimal?? Yes. Also optimal in the absence of monotonicity
Simple Memory-Bounded A∗
Use all available memory
Follow the A∗ algorithm and fill memory with newly expanded nodes
If a new node does not fit:
remove the stored node with the worst f-value
propagate the f-value of the removed node to its parent
SMA∗ will regenerate a subtree only when it is needed:
the path through the subtree is unknown, but its cost is known
Properties of SMA∗
Complete?? yes, if there is enough memory for the shortest solution path
Time?? same as A∗ if there is enough memory to store the tree
Space?? uses all available memory
Optimal?? yes, if there is enough memory to store the best solution path
In practice, often better than A∗ and IDA∗:
trade-off between time and space requirements
Recursive Best First Search
[Figure: stages in an RBFS search for the shortest route to Bucharest.
(a) After expanding Arad, Sibiu, and Rimnicu Vilcea.
(b) After unwinding back to Sibiu and expanding Fagaras.
(c) After switching back to Rimnicu Vilcea and expanding Pitesti.]
Outline
1. Problem Solving Agents
2. Search
3. Uninformed search algorithms
4. Informed search algorithms
5. Constraint Satisfaction Problem
Constraint Satisfaction Problem (CSP)
Standard search problem:
state is a “black box”—any old data structure
that supports goal test, eval, successor
CSP:
state is defined by variables Xi with values from domain Di
goal test is a set of constraints specifying
allowable combinations of values for subsets of variables
Simple example of a formal representation language
Allows useful general-purpose algorithms with more power
than standard search algorithms
Standard search formulation
States are defined by the values assigned so far
♦ Initial state: the empty assignment, { }
♦ Successor function: assign a value to an unassigned variable
that does not conflict with current assignment.
=⇒ fail if no legal assignments (not fixable!)
♦ Goal test: the current assignment is complete
1) This is the same for all CSPs!
2) Every solution appears at depth n with n variables
=⇒ use depth-first search
3) Path is irrelevant, so can also use complete-state formulation
4) b = (n − ℓ)d at depth ℓ, hence n! · d^n leaves!
Backtracking search
Variable assignments are commutative, i.e.,
[WA = red then NT = green] same as
[NT = green then WA = red]
Only need to consider assignments to a single variable at each node
=⇒ b = d and there are d^n leaves
Depth-first search for CSPs with single-variable assignments
is called backtracking search
Backtracking search is the basic uninformed algorithm for CSPs
Can solve n-queens for n ≈ 25
Backtracking search
function Backtracking-Search(csp) returns solution/failure
  return Recursive-Backtracking({ }, csp)

function Recursive-Backtracking(assignment, csp) returns soln/failure
  if assignment is complete then return assignment
  var ← Select-Unassigned-Variable(Variables[csp], assignment, csp)
  for each value in Order-Domain-Values(var, assignment, csp) do
    if value is consistent with assignment given Constraints[csp] then
      add {var = value} to assignment
      result ← Recursive-Backtracking(assignment, csp)
      if result ≠ failure then return result
      remove {var = value} from assignment
  return failure
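The same scheme in Python, for a CSP given as variables, per-variable domains, and a consistency check; this representation is my own choice for illustration, not the slides':

def backtracking_search(variables, domains, consistent):
    # consistent(var, value, assignment) must check the constraints of the CSP.
    def backtrack(assignment):
        if len(assignment) == len(variables):
            return assignment                              # complete assignment
        var = next(v for v in variables if v not in assignment)   # select an unassigned variable
        for value in domains[var]:                         # order the domain values
            if consistent(var, value, assignment):
                assignment[var] = value
                result = backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]                        # undo and try the next value
        return None                                        # failure
    return backtrack({})

For map colouring, for example, consistent(var, value, assignment) would check that no already-assigned neighbour of var has the colour value.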