LECTURE 3
Solving Problems by Searching
Instructor : Yousef Aburawi
Cs411 -Artificial Intelligence
Misurata University
Faculty of Information Technology
Spring 2022/2023
Outline
 Problem-solving agents.
 Search problems and solutions.
 Example problems.
 Formulating problems.
 Basic search algorithms.
2
Problem-Solving Agents
 When the correct action to take is not immediately obvious, an agent may
need to plan ahead: to consider a sequence of actions that form a path to a
goal state.
 Such an agent is called a problem-solving agent, and the computational
process it undertakes is called search.
 In this chapter, we consider only the simplest environments: episodic, single
agent, fully observable, deterministic, static, discrete, and known.
 This chapter describes one kind of goal-based agent, called a problem-solving agent.
3
Problem-Solving Agents (2)
 Four-phase problem-solving process:
 Goal formulation: Decide which world states count as achieving the goal.
 Problem formulation: Decide which actions and states to consider in reaching the goal.
 Search: Determine the possible sequences of actions that lead to states of
known value, then choose the best sequence that reaches the goal.
 Execution: Given the solution, perform its actions.
 The state space can be represented as a graph in which the vertices are states
and the directed edges between them are actions (as shown on the next slide).
4
e.g. Romania
5
e.g. Romania (2)
 On holiday in Romania; currently in Arad.
 Flight leaves tomorrow from Bucharest.
 Formulate goal:
 Be in Bucharest.
 Formulate problem:
 States: various cities.
 Actions: drive between cities.
 Find solution:
 Sequence of cities; e.g. Arad, Sibiu, Fagaras, Bucharest, … .
6
Search Problems and Solutions
 A search problem can be defined formally as follows:
 State space: A set of possible states that the environment can be in.
 The initial state that the agent starts in.
 A set of one or more goal states.
 The actions available to the agent.
 A transition model, which describes what each action does.
 An action cost function.
 A sequence of actions forms a path.
 A solution is a path from the initial state to a goal state.
 An optimal solution has the lowest path cost among all solutions.
7
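The formal definition above can be sketched as a minimal Python interface. All names here (SearchProblem, the adjacency-dict format, the method names) are illustrative, not from any particular library:

```python
# A minimal sketch of a search problem, assuming a hypothetical
# adjacency-dict format: state -> {action: (next_state, cost)}.

class SearchProblem:
    def __init__(self, initial, goal_states, graph):
        self.initial = initial                  # the initial state
        self.goal_states = set(goal_states)     # a set of one or more goal states
        self.graph = graph

    def actions(self, state):
        """The actions available to the agent in this state."""
        return list(self.graph.get(state, {}))

    def result(self, state, action):
        """Transition model: the state each action leads to."""
        return self.graph[state][action][0]

    def action_cost(self, state, action, next_state):
        """Action cost function c(s, a, s')."""
        return self.graph[state][action][1]

    def is_goal(self, state):
        return state in self.goal_states
```

A sequence of actions applied from the initial state then forms a path, and a path ending in a goal state is a solution.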
Example Problems
 A standardized problem is intended to illustrate or exercise various
problem-solving methods.
 It can be given a concise, exact description and hence is suitable as a
benchmark for researchers to compare the performance of algorithms.
 A real-world problem, such as robot navigation, is one whose solutions
people actually use, and whose formulation is idiosyncratic, not
standardized:
 for example, each robot has different sensors that produce different data.
8
Problem Formulation
 A problem is defined by:
 An initial state, e.g. Arad.
 Successor function: S(X) = set of <action, state> pairs.
 e.g. S(Arad) = {<Arad → Zerind, Zerind>, …}.
 initial state + successor function = state space.
 Goal test, can be
 Explicit : e.g. x=‘at Bucharest’.
 Implicit : e.g. checkmate(x).
 Path cost (additive).
 e.g. sum of distances, number of actions executed, …
 c(x,a,y) is the step cost, assumed to be >= 0.
9
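The successor function above can be sketched in Python on a small fragment of the Romania map (the distances are the standard ones from the textbook figure; the helper name `successors` is ours):

```python
# A fragment of the Romania road map: city -> {neighbor: distance in km}.
romania = {
    "Arad":    {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu":   {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
}

def successors(state):
    """S(X) = set of <action, state> pairs, e.g. <Arad -> Zerind, Zerind>."""
    return {(f"{state} -> {nbr}", nbr) for nbr in romania.get(state, {})}
```

The initial state plus this successor function implicitly define the whole state space.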
Problem Formulation (2)
 A solution is a sequence of actions from initial to goal state.
 Optimal solution has the lowest path cost.
10
e.g. Vacuum World
 States??
 8 states: agent in one of 2 locations, each location with or without dirt (2 × 2 × 2).
 Initial state??
 Any state can be initial.
 Actions??
 {Left, Right, Suck}.
 Goal test??
 Check whether squares are clean.
 Path cost??
 Number of actions to reach goal.
11
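The 8-state count can be checked with a short enumeration; the tuple encoding used here is just one possible choice:

```python
from itertools import product

# Each state: (agent location, square A dirty?, square B dirty?).
# 2 locations x 2 x 2 dirt configurations = 8 states.
vacuum_states = [(loc, dirt_a, dirt_b)
                 for loc, dirt_a, dirt_b
                 in product(("A", "B"), (True, False), (True, False))]
```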
e.g. 8-Puzzle
12
 States??
 The location of each tile.
 Initial state??
 Any state can be initial.
 Actions??
 {Left, Right, Up, Down}.
 Goal test??
 Check whether goal configuration is reached.
 Path cost??
 Number of actions to reach goal.
Search Algorithms
 A search algorithm takes a search problem as input and returns a solution, or an
indication of failure.
 We will study algorithms that superimpose a search tree over the state-space graph,
forming various paths from the initial state, trying to find a path that reaches a goal
state.
 Each node in the search tree corresponds to a state in the state space and the edges
in the search tree correspond to actions.
 The root of the tree corresponds to the initial state of the problem.
13
State Space Vs. Search Tree
 A state is a (representation of) a physical configuration.
 The state space describes the (possibly infinite) set of states in the world, and
the actions that allow transitions from one state to another.
 The search tree describes paths between these states, reaching towards the
goal.
 The search tree may have multiple paths to (and thus multiple nodes for) any given
state, but each node in the tree has a unique path back to the root (as in all trees).
14
Search Tree
 A node is a data structure belonging to a search tree.
 A node has a parent, children, … , and includes path cost, depth, …
 Here node= <state, parent-node, action, path-cost, depth>
 FRINGE = contains generated nodes which are not yet expanded.
 White nodes with black outline.
15
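The node tuple <state, parent-node, action, path-cost, depth> can be sketched as a small data structure (the class and helper names are illustrative):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """node = <state, parent-node, action, path-cost, depth>."""
    state: Any
    parent: Optional["Node"] = None
    action: Optional[str] = None
    path_cost: float = 0.0
    depth: int = 0

def child_node(parent, action, state, step_cost):
    """Generate a successor node, accumulating path cost and depth."""
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)

def path_to_root(node):
    """Each tree node has a unique path back to the root."""
    states = []
    while node is not None:
        states.append(node.state)
        node = node.parent
    return states[::-1]
```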
Search Tree (2)
16
 The root node of the search tree is at the
initial state, Arad.
 We can expand the node, by considering
the available ACTIONS for that state,
using the RESULT function to see where
those actions lead to, and generating a
new node (called a child node or
successor node) for each of the resulting
states.
 Each child node has Arad as its parent
node.
e.g. Romania
17
Graph Tree
18
Uninformed Search Strategies
 Uninformed (blind) search uses only the information available in the problem definition.
 When a strategy can determine whether one non-goal state is more promising than
another, it is an informed search.
 Categories defined by expansion algorithm:
 Breadth-first search.
 Uniform-cost search.
 Depth-first search.
 Depth-limited search.
 Iterative deepening search.
 Bidirectional search.
19
Breadth-First Search
 When all actions have the same cost, an appropriate strategy is breadth-first
search.
 The root node is expanded first, then all the successors of the root node are
expanded next, then their successors, and so on.
 A special case of best-first search in which the evaluation function is the depth of the node.
 Implementation: fringe is a FIFO queue.
20
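A minimal sketch of breadth-first search with a FIFO fringe, on a plain adjacency-dict graph (the graph format and function name are our own, not from the slides):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand shallowest nodes first; the fringe is a FIFO queue of paths."""
    if start == goal:
        return [start]
    fringe = deque([[start]])   # FIFO queue
    reached = {start}           # avoid revisiting states
    while fringe:
        path = fringe.popleft()
        for nbr in graph.get(path[-1], []):
            if nbr not in reached:
                if nbr == goal:
                    return path + [nbr]   # early goal test on generation
                reached.add(nbr)
                fringe.append(path + [nbr])
    return None                 # failure
```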
Breadth-First Search on a Simple Binary Tree
21
Uniform-Cost Search
 When actions have different costs, an obvious choice is to use best-first search where
the evaluation function is the cost of the path from the root to the current node.
 This is called Dijkstra’s algorithm by the theoretical computer science community,
and uniform-cost search by the AI community.
 Extension of BF-search: Expand node with lowest path cost.
 Implementation: fringe = queue ordered by path cost.
 UC-search is the same as BF-search when all step-costs are equal.
23
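Uniform-cost search differs only in the fringe: a priority queue ordered by path cost g(n). A sketch, assuming the graph format state -> {neighbor: step_cost}:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the cheapest path first; the fringe is a priority queue."""
    fringe = [(0, start, [start])]      # (path_cost, state, path)
    best = {start: 0}                   # cheapest known cost per state
    while fringe:
        cost, state, path = heapq.heappop(fringe)
        if state == goal:               # goal test on expansion, not generation
            return cost, path
        for nbr, step in graph.get(state, {}).items():
            new_cost = cost + step
            if nbr not in best or new_cost < best[nbr]:
                best[nbr] = new_cost
                heapq.heappush(fringe, (new_cost, nbr, path + [nbr]))
    return None                         # failure
```

Testing the goal only when a node is expanded (not when it is generated) is what makes the algorithm optimal for general action costs.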
Depth-First Search
 Expands the deepest node in the frontier first.
 Implementation: fringe is a LIFO queue (=stack).
 Depth-first search is not cost-optimal; it returns the first solution it finds, even
if it is not cheapest.
24
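A sketch of depth-first search: the same skeleton as breadth-first search, but with a LIFO stack as the fringe, so it returns the first solution it finds rather than the cheapest:

```python
def depth_first_search(graph, start, goal):
    """Expand the deepest node first; the fringe is a LIFO stack of paths."""
    fringe = [[start]]                 # Python list used as a stack
    while fringe:
        path = fringe.pop()            # deepest path first
        state = path[-1]
        if state == goal:
            return path
        for nbr in graph.get(state, []):
            if nbr not in path:        # avoid cycles along the current path
                fringe.append(path + [nbr])
    return None                        # failure
```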
25
Depth-First Search on a Simple Binary Tree
 A dozen steps (left to
right, top to bottom) in
the progress of a depth-
first search on a binary
tree from start state A to
goal M.
Depth-Limited Search
 Depth-Limited search is Depth-First search with depth limit 𝑙.
 i.e. nodes at depth 𝑙 have no successors.
 Problem knowledge can be used to choose 𝑙.
 Solves the infinite-path problem.
 If 𝑙 < 𝑑 then the search is incomplete.
 If 𝑙 > 𝑑 then the solution found may not be optimal.
 Time complexity: 𝑂(𝑏^𝑙)
 Space complexity: 𝑂(𝑏𝑙)
26
Iterative Deepening Search
 A general strategy for finding the best depth limit 𝑙.
 The goal is found at depth 𝑑, the depth of the shallowest goal node.
 Iterative deepening search solves the problem of picking a good value for 𝑙 by trying
all values: first 0, then 1, then 2, and so on—until either a solution is found, or the depth-
limited search returns the failure value.
 Combines benefits of Depth-First and Breadth-First search.
27
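The two strategies above can be sketched together: a recursive depth-limited search that distinguishes "cutoff" (the limit was hit) from genuine failure, wrapped in a loop that raises the limit. Names and the tuple-path encoding are our own:

```python
def depth_limited_search(graph, state, goal, limit, path=()):
    """Depth-first search that treats nodes at depth `limit` as leaves.
    Returns a solution path, the string "cutoff", or None (failure)."""
    path = path + (state,)
    if state == goal:
        return list(path)
    if limit == 0:
        return "cutoff"                # a deeper goal may have been missed
    cutoff = False
    for nbr in graph.get(state, []):
        if nbr not in path:            # avoid cycles along the current path
            result = depth_limited_search(graph, nbr, goal, limit - 1, path)
            if result == "cutoff":
                cutoff = True
            elif result is not None:
                return result
    return "cutoff" if cutoff else None

def iterative_deepening_search(graph, start, goal):
    """Try limits 0, 1, 2, ... until a solution or a cutoff-free failure."""
    limit = 0
    while True:
        result = depth_limited_search(graph, start, goal, limit)
        if result != "cutoff":
            return result              # a solution, or None if unreachable
        limit += 1
```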
28
Iterative Deepening Search on a Simple Binary Tree
29
Iterative Deepening Search on a Simple Binary Tree (2)
Bidirectional Search
 Bidirectional search simultaneously searches forward from the initial state and
backwards from the goal state(s), hoping that the two searches will meet.
 The motivation: 𝑏^(𝑑/2) + 𝑏^(𝑑/2) ≪ 𝑏^𝑑.
 Check whether the node belongs to the other fringe before expansion.
 Space complexity is the most significant weakness.
 Complete and optimal if both searches are BF.
 The predecessor of each node should be efficiently computable,
 i.e. when actions are easily reversible.
30
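A sketch of bidirectional breadth-first search on an undirected graph (where actions are trivially reversible), checking each generated node against the other fringe as the slide suggests:

```python
from collections import deque

def _walk(parents, node):
    """Follow parent pointers from node back to that search's root."""
    out = []
    while node is not None:
        out.append(node)
        node = parents[node]
    return out

def bidirectional_search(graph, start, goal):
    """Grow one frontier from start and one from goal until they meet."""
    if start == goal:
        return [start]
    fwd, bwd = {start: None}, {goal: None}   # parent maps for each half
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        # expand one node from each frontier in turn
        for q, mine, other, forward in ((qf, fwd, bwd, True),
                                        (qb, bwd, fwd, False)):
            if not q:
                continue
            state = q.popleft()
            for nbr in graph.get(state, []):
                if nbr in other:             # the two frontiers meet here
                    if forward:
                        return _walk(mine, state)[::-1] + _walk(other, nbr)
                    return _walk(other, nbr)[::-1] + _walk(mine, state)
                if nbr not in mine:
                    mine[nbr] = state
                    q.append(nbr)
    return None                              # failure
```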
Summary
 We introduced search algorithms that an agent can use to select action sequences in a
wide variety of environments - as long as they are episodic, single-agent, fully
observable, deterministic, static, discrete, and completely known.
 There are tradeoffs to be made between the amount of time the search takes, the
amount of memory available, and the quality of the solution.
 Before an agent can start searching, a well-defined problem must be formulated.
 A problem consists of five parts: the initial state, a set of actions, a transition model
describing the results of those actions, a set of goal states, and an action cost
function.
31
Summary (2)
 The environment of the problem is represented by a state space graph. A path
through the state space (a sequence of actions) from the initial state to a goal state is
a solution.
 Search algorithms generally treat states and actions as atomic, without any internal
structure.
 Search algorithms are judged on the basis of completeness, cost optimality, time
complexity, and space complexity.
 Uninformed search methods have access only to the problem definition. Algorithms
build a search tree in an attempt to find a solution. Algorithms differ based on which
node they expand first:
32
Summary (3)
 BEST-FIRST SEARCH selects nodes for expansion according to an evaluation function.
 BREADTH-FIRST SEARCH expands the shallowest nodes first; it is complete, optimal for
unit action costs, but has exponential space complexity.
 UNIFORM-COST SEARCH expands the node with lowest path cost, 𝑔(𝑛), and is optimal for
general action costs.
 DEPTH-FIRST SEARCH expands the deepest unexpanded node first. It is neither complete
nor optimal, but has linear space complexity. Depth-limited search adds a depth bound.
 ITERATIVE DEEPENING SEARCH calls depth-first search with increasing depth limits until a
goal is found. It is complete when full cycle checking is done, optimal for unit action costs,
has time complexity comparable to breadth-first search, and has linear space complexity.
33
Summary (4)
 BIDIRECTIONAL SEARCH expands two frontiers, one around the initial state and one
around the goal, stopping when the two frontiers meet.
34
Readings
 Chapter 3 of the textbook.
35
The End
