1
IT406
Artificial Intelligence
Problem Solving (Using Search)
Harris Chikunya
2
Problem Solving (Using Search)
• When the correct action to be taken is not immediately obvious, an
agent may need to plan ahead: to consider a sequence of actions
that form a path to a goal state.
• Such an agent is called a problem-solving agent, and the
computational process it undertakes is called search.
3
Problem Solving (Using Search)
• Search Terminology
 Problem Space: The environment in which the search takes place.
 Problem Instance: An initial state together with a goal state.
 Problem Space Graph: Represents the problem states; states are shown
by nodes and operators are shown by edges.
 Depth of a problem: The length of the shortest path (shortest
sequence of operators) from the initial state to a goal state.
 Branching Factor: The average number of child nodes in the problem
space graph.
4
The Importance of Search in AI
• It has already become clear that many of the tasks underlying AI can
be phrased in terms of a search for the solution to the problem at
hand.
• Many goal-based agents are essentially problem-solving agents
that decide what to do by searching for a sequence of actions
leading to their goals.
• For production systems, we have seen the need to search for a
sequence of rule applications that leads to the required fact or
action.
• For neural network systems, we need to search for the set of
connection weights that will result in the required input to output
mapping.
5
Problem Types
1. Single-State problem: deterministic, accessible
 Agent knows everything about the world (the exact state),
 Can calculate the optimal action sequence to reach the goal
state.
 E.g., playing chess: any action results in a known, exact
state
6
Problem Types
2. Multiple-state problem: deterministic, inaccessible
• Agent does not know the exact state (could be in any of the
possible states)
• May have no sensors at all
• Must reason about the set of states it could be in while working
towards the goal state.
• E.g., walking in a dark room
• If you are at the door, going straight will lead you to the kitchen
• If you are at the kitchen, turning left leads you to the bedroom
• …
7
Problem Types
3. Contingency problem: nondeterministic, inaccessible
• Must use sensors during execution
• Solution is a tree or policy
• Often interleave search and execution
• E.g., a new skater in an arena: sliding makes actions
unpredictable, and many other skaters are around
8
Problem Types
4. Exploration problem: unknown state space
• Discover and learn about the environment while taking
actions.
• E.g., a maze
9
Problem-solving Agents
• A simplified road map of part of Romania
10
Problem-Solving Agents
• GOAL FORMULATION: Goals organize behavior by limiting the
objectives and hence the actions to be considered. The agent adopts
the goal of reaching Bucharest.
• PROBLEM FORMULATION: The agent devises a description of the
states and actions necessary to reach the goal—an abstract model
of the relevant part of the world. For our agent, one good model is
to consider the actions of traveling from one city to an adjacent city.
• SEARCH: Before taking any action in the real world, the agent
searches until it finds a sequence of actions that reaches the goal
(such as going from Arad to Sibiu to Fagaras to Bucharest), or it
finds that no solution is possible. Such a sequence is called a
solution.
• EXECUTION: The agent can now execute the actions in the solution,
one at a time.
11
Search Problems and Solutions
A search problem can be defined formally as follows:
 A set of possible states that the environment can be in. We call this the state
space.
 The initial state that the agent starts in. For example: Arad.
 A set of one or more goal states. Sometimes there is one goal state (e.g.,
Bucharest), sometimes there is a small set of alternative goal states.
 The actions available to the agent. Given a state s, ACTIONS(s) returns a
finite set of actions that can be executed in s. For example:
ACTIONS(Arad) = {ToSibiu, ToTimisoara, ToZerind}.
 A transition model, which describes what each action does. RESULT(s,a)
returns the state that results from doing action a in state s. For example,
RESULT(Arad, ToZerind) = Zerind.
 An action cost function, denoted by ACTION-COST(s, a, s’), that gives the numeric
cost of applying action a in state s to reach state s’.
A sequence of actions forms a path, and a solution is a path from the initial state to
a goal state.
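To make the formal definition concrete, here is a minimal Python sketch (an illustration, not from the slides) of the route-finding problem on a small fragment of the Romania map; the method names mirror ACTIONS, RESULT, and ACTION-COST above, and the distances are the familiar step costs in km.

```python
# Sketch of a formally defined search problem: a fragment of the
# Romania map with road distances in km.
ROMANIA = {
    "Arad":           {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
}

class RouteProblem:
    def __init__(self, initial, goal, road_map):
        self.initial, self.goal, self.road_map = initial, goal, road_map

    def actions(self, s):
        # ACTIONS(s): the finite set of actions executable in state s.
        return [f"To{city}" for city in self.road_map[s]]

    def result(self, s, a):
        # RESULT(s, a): the transition model; "ToSibiu" leads to "Sibiu".
        return a[2:]

    def action_cost(self, s, a, s2):
        # ACTION-COST(s, a, s'): the numeric cost of this step.
        return self.road_map[s][s2]

    def is_goal(self, s):
        return s == self.goal

problem = RouteProblem("Arad", "Bucharest", ROMANIA)
print(problem.actions("Arad"))   # ['ToSibiu', 'ToTimisoara', 'ToZerind']
```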
12
Example Problems
• Problems can be either standardized problems or real-world
problems.
• A standardized problem is intended to illustrate or exercise various
problem-solving methods.
• It can be given a concise, exact description and hence is suitable as a
benchmark for researchers to compare the performance of
algorithms.
• A real-world problem, such as robot navigation, is one whose
solutions people actually use, and whose formulation is not
standardized, because, for example, each robot has different
sensors that produce different data.
13
Standardized Problem Example 1: 8-puzzle
• STATES:
• INITIAL STATE:
• ACTIONS:
• GOAL STATE:
• ACTION COST:
14
Standardized Problem Example 1: 8-puzzle
• STATES: The location of each of the eight tiles and the blank
• INITIAL STATE: Any reachable tile configuration
• ACTIONS: Move the blank Left, Right, Up, or Down
• GOAL STATE: A specified target configuration (goal test: does the
state match it?)
• ACTION COST: Each action costs 1
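The formulation above translates directly into code. Below is a sketch under stated assumptions: states are 9-tuples read row by row with 0 for the blank, and one common goal layout is used (the lecture's goal state may differ).

```python
# 8-puzzle sketch: a state is a 9-tuple listing the tiles row by row,
# with 0 standing for the blank. Actions slide the blank around.
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def actions(state):
    i = state.index(0)            # blank position, 0..8 on the 3x3 grid
    legal = []
    if i % 3 > 0: legal.append("Left")
    if i % 3 < 2: legal.append("Right")
    if i >= 3:    legal.append("Up")
    if i <= 5:    legal.append("Down")
    return legal

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]         # cell the blank swaps with
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # one common goal layout (assumed)
def is_goal(state):
    return state == GOAL              # each action costs 1
```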
15
Standardized Problem Example 1: 8-puzzle
Why search algorithms?
 The 8-puzzle has 9! = 362,880 tile arrangements (181,440 of them reachable)
 The 15-puzzle has roughly 10^13 states
 The 24-puzzle has roughly 10^25 states
So, we need a principled way to look for a solution in these huge
search spaces…
16
Standardized Problem Example 2: Vacuum World
• STATES:
• INITIAL STATE:
• ACTIONS:
• GOAL STATE:
• ACTION COST:
17
Standardized Problem Example 2: Vacuum World
• STATES: The agent’s location and which cells contain dirt
• INITIAL STATE: Any state
• ACTIONS: Left, Right, Suck
• GOAL STATE: Every cell is clean (no dirt)
• ACTION COST: Each action costs 1
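As a sketch of this formulation (cell names "A" and "B" are assumptions, not from the slides), the two-cell world has exactly the 8 states mentioned in the lecture notes:

```python
# Two-cell vacuum world sketch: a state is (agent_cell, dirty_cells).
def result(state, action):
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)
    if action == "Right":
        return ("B", dirt)
    if action == "Suck":
        return (loc, dirt - {loc})    # cleaning removes dirt from this cell
    raise ValueError(action)

def is_goal(state):
    return not state[1]               # goal: no dirt anywhere

STATES = [(loc, frozenset(d))
          for loc in ("A", "B")
          for d in (set(), {"A"}, {"B"}, {"A", "B"})]
print(len(STATES))                                    # 8
print(result(("A", frozenset({"A", "B"})), "Suck"))   # ('A', frozenset({'B'}))
```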
18
Real-World Problem Example 1: Route-Finding
Problem
• Consider the airline travel problems that must be solved by a travel-
planning Web site:
• STATES: ??
• INITIAL STATE: ??
• ACTIONS: ??
• GOAL STATE: ??
• ACTION COST: ??
19
Search Algorithms
• A search algorithm takes a search problem as input and returns a
solution, or an indication of failure.
• These algorithms use a search tree to form the various paths from
the initial state, trying to find a path that reaches a goal state.
• Each node in the search tree corresponds to a state in the state
space, and the edges in the search tree correspond to actions.
• The root of the tree corresponds to the initial state of the problem.
• The search tree may have multiple paths to (and thus multiple
nodes for) any given state, but each node in the tree has a unique
path back to the root (as in all trees).
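A common way to represent such a node in code is sketched below (field names are illustrative, not from the slides); the parent pointer is exactly what gives each node its unique path back to the root:

```python
# Search-tree node sketch: state plus a parent pointer, an action,
# and the accumulated path cost g(n).
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any
    parent: Optional["Node"] = None
    action: Optional[str] = None
    path_cost: float = 0.0

def solution_path(node):
    # Follow parent pointers back to the root to recover the path.
    states = []
    while node is not None:
        states.append(node.state)
        node = node.parent
    return list(reversed(states))
```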
20
Search tree Example
21
Evaluation of Search strategies
• We can evaluate an algorithm’s performance in four ways:
 COMPLETENESS: Is the algorithm guaranteed to find a solution when
there is one, and to correctly report failure when there is not?
 COST OPTIMALITY: Does it find a solution with the lowest path cost of
all solutions?
 TIME COMPLEXITY: How long does it take to find a solution? This can
be measured in seconds, or more abstractly by the number of states
and actions considered.
 SPACE COMPLEXITY: How much memory is needed to perform the
search?
22
Uninformed Search Strategies
• An uninformed search algorithm is given no clue about how close a
state is to the goal(s).
• In contrast, an informed agent who knows the location of each city
knows that Sibiu is much closer to Bucharest than Zerind or
Timisoara is, and thus more likely to be on the shortest path.
• Use only information available in the problem formulation
 Breadth-first
 Uniform-cost
 Depth-first
 Depth-limited
 Iterative deepening
23
Breadth-First Search
• Expand the shallowest unexpanded node
• Moves down level by level until a goal is reached
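A minimal BFS sketch (an illustration, not the lecture's code) on an unweighted fragment of the Romania map; a FIFO queue holds the frontier, and a parent map doubles as the reached set:

```python
from collections import deque

GRAPH = {  # assumed fragment of the Romania map, edges only
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Timisoara": ["Arad"], "Zerind": ["Arad"], "Bucharest": [],
}

def bfs(start, goal):
    frontier = deque([start])        # FIFO: shallowest node comes out first
    parent = {start: None}           # also serves as the reached set
    while frontier:
        s = frontier.popleft()
        if s == goal:                # rebuild the path via parent pointers
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for nxt in GRAPH[s]:
            if nxt not in parent:
                parent[nxt] = s
                frontier.append(nxt)
    return None                      # failure: no path exists

print(bfs("Arad", "Bucharest"))  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Note that BFS returns the path with the fewest edges (Arad–Sibiu–Fagaras–Bucharest), which is not the cheapest route once road lengths are taken into account.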
24
Example: Traveling from Arad to Bucharest
25
Breadth-first search
26
Breadth-first search
27
Breadth-first search
28
BFS Performance
• Search algorithms are commonly evaluated according to the following four criteria:
 Completeness: does it always find a solution if one exists?
 Time complexity: how long does it take, as a function of the number of nodes?
 Space complexity: how much memory does it require?
 Optimality: does it guarantee the least-cost solution?
• Time and space complexity are measured in terms of:
 b – max branching factor of the search tree
 d – depth of the least-cost solution
 m – max depth of the search tree (may be infinity)
 BFS performance measure
• Completeness: Yes, if b is finite
• Time complexity: 1 + b + b^2 + … + b^d = O(b^d), i.e., exponential in d
(e.g., with b = 10 and d = 5 that is 111,111 nodes)
• Space complexity: O(b^d), keeps every node in memory
• Optimality: Yes (assuming cost = 1 per step)
29
Dijkstra’s Algorithm or Uniform-Cost Search
• It explores paths in the increasing order of cost
• It always expands the least cost node
• It is identical to BFS if each transition has the same cost
• The path cost is usually taken to be the sum of the step costs
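A uniform-cost search sketch (illustrative, with step costs in km assumed from the map fragment); a priority queue keyed by g(n) always pops the cheapest frontier node:

```python
import heapq

ROADS = {  # assumed fragment of the Romania map with step costs in km
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
    "Timisoara": {}, "Zerind": {}, "Bucharest": {},
}

def ucs(start, goal):
    frontier = [(0, start, [start])]           # (g, state, path so far)
    best_g = {start: 0}
    while frontier:
        g, s, path = heapq.heappop(frontier)   # least-cost node first
        if s == goal:
            return g, path
        for nxt, step in ROADS[s].items():
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2, nxt, path + [nxt]))
    return None

print(ucs("Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Unlike BFS, UCS finds the 418 km route through Rimnicu Vilcea and Pitesti rather than the fewer-edged but more expensive 450 km route through Fagaras.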
30
Romania with step costs in Km
31
Uniform-cost search
32
Uniform-cost search
33
Uniform-cost search
34
UCS Performance
• Completeness: Yes
• Time complexity: number of nodes with g(n) ≤ cost of the optimal solution, ≈ O(b^d)
• Space complexity: number of nodes with g(n) ≤ cost of the optimal solution, ≈ O(b^d)
• Optimality: Yes (cost-optimal)
g(n) is the path cost to node n
Remember:
b = branching factor
d = depth of least-cost solution
35
Depth-first search
• Always expands the deepest node in the frontier first.
• The search proceeds immediately to the deepest level of the search
tree, where the nodes have no successors.
• The search then goes to the deepest node that still has unexpanded
successors.
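A DFS sketch (illustrative) on a small assumed binary tree; a LIFO stack makes the search dive to the deepest node first:

```python
GRAPH = {  # a small assumed tree: A is the root, D..G are leaves
    "A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
    "D": [], "E": [], "F": [], "G": [],
}

def dfs(start, goal):
    stack = [(start, [start])]           # LIFO: deepest node comes out first
    visited = set()
    while stack:
        s, path = stack.pop()
        if s == goal:
            return path
        if s in visited:
            continue
        visited.add(s)
        for nxt in reversed(GRAPH[s]):   # reversed so children pop in order
            stack.append((nxt, path + [nxt]))
    return None

print(dfs("A", "G"))   # ['A', 'C', 'G']
```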
36
DFS Example
37
Performance
• Completeness: No, fails in infinite state spaces (yes if the state
space is finite)
• Time complexity: O(b^m)
• Space complexity: O(bm), only the current path and its siblings are stored
• Optimality: No
Remember:
b = branching factor
m = max depth of search tree
38
Depth-limited search
• Designed to keep DFS from wandering down an infinite path
• A version of DFS in which we supply a depth limit, ℓ, and treat all
nodes at depth ℓ as if they had no successors
• The time complexity is O(b^ℓ)
• The space complexity is O(bℓ)
• Unfortunately, if we make a poor choice for ℓ the algorithm will fail
to reach the solution, making it incomplete again.
• Sometimes a good depth limit can be chosen based on knowledge
of the problem.
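A recursive sketch of depth-limited search (illustrative, on the same small assumed tree); returning a distinct "cutoff" value lets the caller tell "the limit stopped me" apart from "there is truly no solution":

```python
GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": [], "E": [], "F": [], "G": []}

def dls(state, goal, limit):
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"               # treat this node as having no successors
    cut = False
    for nxt in GRAPH[state]:
        sub = dls(nxt, goal, limit - 1)
        if sub == "cutoff":
            cut = True
        elif sub is not None:
            return [state] + sub
    return "cutoff" if cut else None  # None means a definite failure

print(dls("A", "G", 1))   # 'cutoff'  (G sits at depth 2)
print(dls("A", "G", 2))   # ['A', 'C', 'G']
```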
39
Depth-limited search
40
Iterative deepening search
• Combines the best of BFS and DFS strategies
• IDS solves the problem of picking a good value for ℓ by
trying all values: first 0, then 1, then 2, and so on—until
either a solution is found, or the depth-limited search
returns the failure value rather than the cutoff value
• NB: In general, iterative deepening is the preferred
uninformed search method when the search state space is
larger than can fit in memory and the depth of the solution
is not known.
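An IDS sketch (illustrative; GRAPH and dls are as in the depth-limited sketch above, repeated here so the block runs on its own):

```python
GRAPH = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": [], "E": [], "F": [], "G": []}

def dls(state, goal, limit):
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"
    cut = False
    for nxt in GRAPH[state]:
        sub = dls(nxt, goal, limit - 1)
        if sub == "cutoff":
            cut = True
        elif sub is not None:
            return [state] + sub
    return "cutoff" if cut else None

def ids(start, goal):
    limit = 0
    while True:
        outcome = dls(start, goal, limit)
        if outcome != "cutoff":
            return outcome           # a solution path, or None = no solution
        limit += 1                   # deepen the limit and try again

print(ids("A", "G"))   # ['A', 'C', 'G'], found once the limit reaches 2
```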
41
Iterative Deepening Search
42
Informed (Heuristic) Search Strategies
• Use domain-specific hints about the location of goals
• Can find solutions more efficiently than an uninformed strategy
• The hints come in the form of a heuristic function, denoted h(n)
• h(n) = estimated cost of the cheapest path from the state at
node n to a goal state.
• For example, in route-finding problems, we can estimate the
distance from the current state to a goal by computing the straight-
line distance on the map between the two points.
 Greedy Best first
 A*
 Heuristics
 Hill-climbing
 Simulated annealing
43
Greedy best-first search
• Estimation function:
h(n) = estimate of cost from n to goal (heuristic)
• For example:
hSLD(n) = straight-line distance from n to Bucharest
• Greedy search expands first the node that appears to be closest to
the goal, according to h(n).
• This is on the grounds that it is likely to lead to a solution quickly
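A greedy best-first sketch (illustrative): the frontier is ordered purely by h(n), using a subset of the standard hSLD table (Arad = 366 as on the next slide):

```python
import heapq

ROADS = {  # assumed fragment of the Romania map, edges only
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Oradea", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Timisoara": ["Arad"], "Zerind": ["Arad"],
    "Oradea": ["Sibiu"], "Bucharest": [],
}
H = {  # hSLD: straight-line distance to Bucharest (subset of the table)
    "Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
    "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Oradea": 380,
    "Bucharest": 0,
}

def greedy(start, goal):
    frontier = [(H[start], start, [start])]
    reached = {start}
    while frontier:
        _, s, path = heapq.heappop(frontier)   # smallest h(n) first
        if s == goal:
            return path
        for nxt in ROADS[s]:
            if nxt not in reached:
                reached.add(nxt)
                heapq.heappush(frontier, (H[nxt], nxt, path + [nxt]))
    return None

print(greedy("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- found quickly, but 32 km
# costlier than the optimal route (450 km vs. 418 km)
```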
44
Greedy best-first search
Values of hSLD: straight-line distances to Bucharest, e.g., hSLD(Arad) =
366.
45
Greedy best-first search
46
A* search
• Uses the evaluation function
f(n) = g(n) + h(n) where:
g(n) – cost so far to reach n
h(n) – estimated cost to goal from n
f(n) – estimated total cost of path through n to goal
• We are trying to find the cheapest solution, i.e., we expand the node
with the lowest value of g(n) + h(n)
• A* search is complete, and it is cost-optimal provided h(n) is
admissible (it never overestimates the true cost to the goal)
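An A* sketch (illustrative, same assumed map fragment and hSLD subset): ordering the frontier by f(n) = g(n) + h(n) makes the search hold back the Fagaras route (f = 450) and return the optimal 418 km route instead, matching the lecture's walkthrough:

```python
import heapq

ROADS = {  # assumed fragment of the Romania map with step costs in km
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
    "Timisoara": {}, "Zerind": {}, "Bucharest": {},
}
H = {  # hSLD: straight-line distance to Bucharest (subset of the table)
    "Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
    "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0,
}

def astar(start, goal):
    frontier = [(H[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, s, path = heapq.heappop(frontier)  # lowest f(n) first
        if s == goal:      # goal test on pop: optimal with admissible h
            return g, path
        for nxt, step in ROADS[s].items():
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + H[nxt], g2, nxt, path + [nxt]))
    return None

print(astar("Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```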
47
A* search
48
A* search
49
End

Editor's Notes

  • #2–3: Problem solving is an important aspect of Artificial Intelligence. A problem can be considered to consist of a goal and a set of actions that can be taken to lead to the goal. Search is a method that can be used by computers to examine a problem space in order to find a goal. Often, we want to find the goal as quickly as possible or without using too many resources.
  • #4: Searching is the universal technique of problem solving in AI. There are some single-player games such as tile games, Sudoku, crossword, etc. The search algorithms help you to search for a particular position in such games. When the correct action to take is not immediately obvious, an agent may need to plan ahead: to consider a sequence of actions that form a path to a goal state. Such an agent is called a problem-solving agent, and the computational process it undertakes is called search.
  • #10: Imagine an agent enjoying a touring vacation in Romania. The agent wants to take in the sights, improve its Romanian, enjoy the nightlife, avoid hangovers, and so on. The decision problem is a complex one. Now, suppose the agent is currently in the city of Arad and has a nonrefundable ticket to fly out of Bucharest the following day. The agent observes street signs and sees that there are three roads leading out of Arad: one toward Sibiu, one to Timisoara, and one to Zerind. None of these are the goal, so unless the agent is familiar with the geography of Romania, it will not know which road to follow. If the agent has no additional information—that is, if the environment is unknown—then the agent can do no better than to execute one of the actions at random. In the example we will assume our agents always have access to information about the world, such as the map
  • #11: Given information about the world, the agent can follow the four-phase problem-solving process:
  • #12: A problem-solving agent should use a cost function that reflects its own performance measure; for example, for route-finding agents, the cost of an action might be the length in miles, or it might be the time it takes to complete the action. We assume that action costs are additive; that is, the total cost of a path is the sum of the individual action costs. An optimal solution has the lowest path cost among all solutions.
  • #14: The 8-puzzle is a sliding-tile puzzle which consists of a 3x3 grid with eight numbered tiles and one blank space. The object is to reach a specified goal state, such as the one shown on the right. The standard formulation of the 8-puzzle is as follows
  • #17: The state-space graph for the 2-cell vacuum world. There are 8 states and three actions for each state: L=Left, R=Right, S=Suck
  • #18: STATES: A state of the world says which objects are in which cells. For the vacuum world, the objects are the agent and any dirt. In the simple two-cell version, the agent can be in either of the two cells, and each cell can either contain dirt or not
  • #19: STATES: Each state obviously includes a location (e.g., an airport) and the current time. Furthermore, because the cost of an action (a flight segment) may depend on previous segments, their fare bases, and their status as domestic or international, the state must record extra information about these “historical” aspects. INITIAL STATE: The user’s home airport. ACTIONS: Take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed. TRANSITION MODEL: The state resulting from taking a flight will have the flight’s destination as the new location and the flight’s arrival time as the new time. GOAL STATE: A destination city. Sometimes the goal can be more complex, such as “arrive at the destination on a nonstop flight.” ACTION COST: A combination of monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer reward points, and so on.
  • #20: Having formulated some problem, we now need to solve them Search algorithms work by considering possible action sequences Possible action sequences starting at initial state form a Search tree The initial state is at the root The branches are the actions The nodes are the states
  • #21: Three branches from the parent node to three child nodes We must choose which of these options to consider further, if the first choice is not a solution Follow up one option and put the others aside for later The process of expanding nodes continues until either A solution is found There are no more states to expand
  • #22: Before we get into the design of various search algorithms, we will consider the criteria used to choose among them.
  • #23: For example, consider our agent in Arad with the goal of reaching Bucharest. An uninformed agent with no knowledge of Romanian geography has no clue whether going to Zerind or Sibiu is a better first step. Search strategies are distinguished by the node expansion order
  • #24: At each stage, the node to be expanded next is indicated by the triangular marker.
  • #29: The algorithm has exponential time and space complexity Advantages Finds the path of minimal length to the goal disadvantages Requires the generation and storage of a tree whose size is exponential to the depth of the shallowest goal node.
  • #32: The node with the least cost is expanded first The path cost is usually taken to be the sum of the step costs
  • #37: The node to be expanded first is indicated by the arrow The goal for this example is M Attempt the Romanian Map using the DFS
  • #38: Exponential space and time complexity
  • #39: For example, on the map of Romania there are 20 cities. Therefore, 19 is a valid limit.
  • #41: Iterative deepening combines many of the benefits of depth-first and breadth-first search. Like depth-first search, its memory requirements are modest: when there is a solution, or on finite state spaces with no solution. Like breadth-first search, iterative deepening is optimal for problems where all actions have the same cost
  • #42: Four iterations of iterative deepening search for goal M on a binary tree, with the depth limit varying from 0 to 3. Note the interior nodes form a single path. The triangle marks the node to expand next; green nodes with dark outlines are on the frontier; the very faint nodes probably can’t be part of a solution with this depth limit.
  • #44: Depth-limited search
  • #45: Let us see how this works for route-finding problems in Romania; we use the straight-line distance heuristic, which we will call hSLD If the goal is Bucharest, we need to know the straight-line distances to Bucharest,
  • #46: Figure shows the progress of a greedy best-first search using hSLD to find a path from Arad to Bucharest. The first node to be expanded from Arad will be Sibiu because the heuristic says it is closer to Bucharest than is either Zerind or Timisoara. The next node to be expanded will be Fagaras because it is now closest according to the heuristic. Fagaras in turn generates Bucharest, which is the goal. For this particular problem, greedy best-first search using hSLD finds a solution without ever expanding a node that is not on the solution path. The solution it found does not have optimal cost, however: the path via Sibiu and Fagaras to Bucharest is 32 km longer than the path through Rimnicu Vilcea and Pitesti. This is why the algorithm is called “greedy”—on each iteration it tries to get as close to a goal as it can, but greediness can lead to worse results than being careful.
  • #49: Notice that Bucharest first appears on the frontier at step (e), but it is not selected for expansion (and thus not detected as a solution) because at f = 450 it is not the lowest-cost node on the frontier—that would be Pitesti, at f = 417 Another way to say this is that there might be a solution through Pitesti whose cost is as low as 417, so the algorithm will not settle for a solution that costs 450. At step (f), a different path to Bucharest is now the lowest-cost node, at f = 418, so it is selected and detected as the optimal solution.