Artificial Intelligence
Solving problems by searching
            Fall 2008

     Professor: Luigi Ceccaroni
Problem solving
• We want:
  – To automatically solve a problem


• We need:
  – A representation of the problem
  – Algorithms that use some strategy to solve
    the problem defined in that representation
Problem representation
• General:
  – State space: a problem is divided into a set
    of resolution steps from the initial state to the
    goal state
  – Reduction to sub-problems: a problem is
    arranged into a hierarchy of sub-problems
• Specific:
  – Game resolution
  – Constraint satisfaction
States
• A problem is defined by its elements and their
  relations.
• At each instant in the resolution of a problem,
  those elements have specific descriptors (how
  to select them?) and relations.
• A state is a representation of those elements in
  a given moment.
• Two special states are defined:
  – Initial state (starting point)
  – Final state (goal state)
State modification:
           successor function
• A successor function is needed to move
  between different states.
• A successor function is a description of the
  possible actions: a set of operators. It is a
  transformation function on a state
  representation, which converts it into another
  state.
• The successor function defines a relation of
  accessibility among states.
• Representation of the successor function:
  – Conditions of applicability
  – Transformation function
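As an illustrative sketch (Python, not from the original slides; all names are assumptions), an operator can be represented as a pair of functions, its condition of applicability and its transformation, and the successor function then applies every applicable operator to the current state:

# Illustrative sketch: an operator = (condition_of_applicability, transformation) pair.
def successor_function(state, operators):
    # Return the states reachable from 'state' by applying every applicable operator.
    return [transform(state) for is_applicable, transform in operators if is_applicable(state)]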
State space
• The state space is the set of all states
  reachable from the initial state.
• It forms a graph (or map) in which the nodes are
  states and the arcs between nodes are actions.
• A path in the state space is a sequence of
  states connected by a sequence of actions.
• The solution of the problem is part of the map
  formed by the state space.
Problem solution
• A solution in the state space is a path from the
  initial state to a goal state or, sometimes, just a
  goal state.
• Path/solution cost: function that assigns a
  numeric cost to each path, the cost of applying
  the operators to the states
• Solution quality is measured by the path cost
  function, and an optimal solution has the
  lowest path cost among all solutions.
• Solutions: any, an optimal one, all. Cost is
  important depending on the problem and the
  type of solution sought.
Problem description
• Components:
  – State space (explicitly or implicitly defined)
  – Initial state
  – Goal state (or the conditions it has to fulfill)
  – Available actions (operators to change state)
  – Restrictions (e.g., cost)
  – Elements of the domain which are relevant to the
    problem (e.g., incomplete knowledge of the starting
    point)
  – Type of solution:
      • Sequence of operators or goal state
      • Any, an optimal one (cost definition needed), all
Example: 8-puzzle


    1   2   3
    4   5   6
    7   8
Example: 8-puzzle
• State space: configuration of the eight
  tiles on the board
• Initial state: any configuration
• Goal state: tiles in a specific order
• Operators or actions: “blank moves”
  – Condition: the move is within the board
  – Transformation: blank moves Left, Right, Up,
    or Down
• Solution: optimal sequence of operators
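As a hedged sketch of the operators just described (the 9-tuple representation and helper names are assumptions, not the course's code), a state can be stored as a tuple read row by row with 0 for the blank; the condition checks that the blank's move stays on the board and the transformation swaps the blank with the neighbouring tile:

# Sketch: 8-puzzle states as 9-tuples read row by row, 0 = blank.
MOVES = {"Left": (0, -1), "Right": (0, 1), "Up": (-1, 0), "Down": (1, 0)}

def puzzle_successors(state):
    successors = []
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, (dr, dc) in MOVES.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:                                # condition: the move is within the board
            new = list(state)
            new[blank], new[r * 3 + c] = new[r * 3 + c], new[blank]  # transformation: blank moves
            successors.append((action, tuple(new)))
    return successors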
Example: n queens
               (n = 4, n = 8)
• State space: configurations from 0 to n queens
  on the board with only one queen per row and
  column
• Initial state: configuration without queens on the
  board
• Goal state: configuration with n queens such that
  no queen attacks any other
• Operators or actions: place a queen on the
  board
  – Condition: the new queen is not attacked by any
    other already placed
  – Transformation: place a new queen in a particular
    square of the board
• Solution: one solution (cost is not considered)
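A minimal sketch of these operators (the tuple-of-columns representation is an assumption): a state is the sequence of column positions of the queens already placed, one per row, so placing a queen only has to check the applicability condition against earlier rows.

# Sketch: a state is a tuple of column indices, one per already-filled row.
def not_attacked(state, col):
    # Condition: the new queen (next row, column 'col') is not attacked by any queen already placed.
    row = len(state)
    return all(c != col and abs(c - col) != row - r for r, c in enumerate(state))

def queens_successors(state, n):
    # Transformation: place a new queen in a particular square of the next row.
    return [state + (col,) for col in range(n) if not_attacked(state, col)]

def is_goal(state, n):
    return len(state) == n        # n queens placed; none attacks another by construction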
Structure of the state space
• Data structures:
  – Trees: only one path to a given node
  – Graphs: several paths to a given node
• Operators: directed arcs between nodes
• The search process explores the state
  space.
• In the worst case all possible paths
  between the initial state and the goal state
  are explored.
Search as goal satisfaction
• Satisfying a goal
  – Agent knows what the goal is
  – Agent cannot evaluate intermediate solutions
    (uninformed)
  – The environment is:
     • Static
     • Observable
     • Deterministic
Example: holiday in Romania
• On holiday in Romania; currently in Arad
• Flight leaves tomorrow from Bucharest at
  13:00
• Let’s configure this to be an AI problem
Romania
• What’s the problem?
  – Accomplish a goal
    • Reach Bucharest by 13:00




• So this is a goal-based problem
Romania
• What’s an example of a non-goal-based
  problem?
  – Live long and prosper
  – Maximize the happiness of your trip to
    Romania
  – Don’t get hurt too much
Romania
• What qualifies as a solution?
  – You can/cannot reach Bucharest by 13:00
  – The actions one takes to travel from Arad to
    Bucharest along the shortest (in time) path
Romania
• What additional information does one
  need?
  – A map
More concrete problem definition
• A state space: Which cities could you be in?
• An initial state: Which city do you start from?
• A goal state: Which city do you aim to reach?
• A function defining state transitions: When in city foo, the following cities can be reached
• A function defining the “cost” of a state sequence: How long does it take to travel through a city sequence?
More concrete problem definition
• A state space: choose a representation
• An initial state: choose an element from the representation
• A goal state: create goal_function(state) such that TRUE is returned upon reaching the goal
• A function defining state transitions: successor_function(state_i) = {<action_a, state_a>, <action_b, state_b>, …}
• A function defining the “cost” of a state sequence: cost(sequence) = number
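Following this scheme, a sketch of the Romania problem in Python. The road map below contains only the fragment that appears in the search-tree example later in these slides (Arad's and Sibiu's neighbours, with the distances shown there); a real solver would need the full map, and all function names are assumptions.

# Partial road map (only edges appearing in the later example; distances as labelled there).
ROAD_MAP = {
    "Arad":  [("Zerind", 75), ("Timisoara", 118), ("Sibiu", 140)],
    "Sibiu": [("Oradea", 151), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
}

initial_state = "Arad"

def goal_function(state):                    # TRUE is returned upon reaching the goal
    return state == "Bucharest"

def successor_function(state):               # {<action, state>, ...}
    return [("go to " + city, city) for city, _ in ROAD_MAP.get(state, [])]

def cost(sequence):                          # cost(sequence) = number (total distance)
    # Only defined for city sequences whose edges are in the partial map above.
    return sum(dict(ROAD_MAP[a])[b] for a, b in zip(sequence, sequence[1:]))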
Important notes about this
          example
– Static environment (available states,
  successor function, and cost functions don’t
  change)
– Observable (the agent knows where it is)
– Discrete (the actions are discrete)
– Deterministic (successor function is always
  the same)
Tree search algorithms
• Basic idea:
  – Simulated exploration of state space by generating
    successors of already explored states (AKA expanding states)
  Sweep out from start (breadth)
Tree search algorithms
• Basic idea:
  – Simulated exploration of state space by generating
    successors of already explored states (AKA expanding states)
  Go East, young man! (depth)
Implementation: general search
          algorithm
Algorithm General Search
  Open_states.insert(Initial_state)
  Current = Open_states.first()
  while not is_final?(Current) and not Open_states.empty?() do
    Open_states.delete_first()
    Closed_states.insert(Current)
    Successors = generate_successors(Current)
    Successors = process_repeated(Successors, Closed_states, Open_states)
    Open_states.insert(Successors)
    Current = Open_states.first()
  end while
end Algorithm
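The same loop, transcribed as a hedged Python sketch that operates directly on states and receives the problem-specific pieces as functions (all names are assumptions). With a FIFO frontier, as written, it explores in breadth-first order; other orderings of Open_states give the other strategies discussed below.

from collections import deque

def general_search(initial_state, is_final, generate_successors, process_repeated):
    open_states = deque([initial_state])
    closed_states = set()
    current = open_states[0]
    while not is_final(current) and open_states:
        open_states.popleft()                            # Open_states.delete_first()
        closed_states.add(current)                       # Closed_states.insert(Current)
        successors = generate_successors(current)
        successors = process_repeated(successors, closed_states, open_states)
        open_states.extend(successors)                   # Open_states.insert(Successors)
        if not open_states:
            return None                                  # no states left to explore: no solution
        current = open_states[0]                         # Current = Open_states.first()
    return current                                       # goal state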
Example: Arad → Bucharest
Algorithm General Search
 Open_states.insert (Initial_state)




                      Arad
Example: Arad → Bucharest
• Current= Open_states.first()




                     Arad
Example: Arad → Bucharest
while not is_final?(Current) and not Open_states.empty?() do
   Open_states.delete_first()
   Closed_states.insert(Current)
   Successors= generate_successors(Current)
   Successors= process_repeated(Successors, Closed_states,
      Open_states)
   Open_states.insert(Successors)


                                     Arad




                 Zerind (75)    Timisoara (118)   Sibiu (140)
Example: Arad → Bucharest
• Current= Open_states.first()




                              Arad




           Zerind (75)   Timisoara (118)   Sibiu (140)
Example: Arad → Bucharest
while not is_final?(Current) and not Open_states.empty?() do
   Open_states.delete_first()
   Closed_states.insert(Current)
   Successors= generate_successors(Current)
   Successors= process_repeated(Successors, Closed_states,
      Open_states)
   Open_states.insert(Successors)


                                                   Arad




               Zerind (75)       Timisoara (118)          Sibiu (140)




                             Oradea (151)             Fagaras (99)      Rimnicu Vilcea (80)
Implementation: states vs.
            nodes
• State
  – (Representation of) a physical configuration
• Node
  – Data structure constituting part of a search
    tree
     • Includes parent, children, depth, path cost g(x)
• States do not have parents, children,
  depth, or path cost!
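A minimal Node sketch consistent with the bullet above (the exact field layout is an assumption); it stores the bookkeeping that a state by itself does not carry, and the solution path can be recovered by following parent links:

from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Node:
    state: Any                                      # (representation of) a physical configuration
    parent: Optional["Node"] = None
    children: list = field(default_factory=list)
    depth: int = 0
    path_cost: float = 0.0                          # g(x)

    def path(self):
        # Recover the sequence of states from the root to this node via parent links.
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))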
Search strategies
• A strategy is defined by picking the order of node
  expansion
• Strategies are evaluated along the following
  dimensions:
   – Completeness – does it always find a solution if
     one exists?
   – Time complexity – number of nodes
     generated/expanded
   – Space complexity – maximum nodes in memory
   – Optimality – does it always find a least-cost
     solution?
Search strategies
• Time and space complexity are measured in
  terms of:
   – b – maximum branching factor of the search tree
     (may be infinite)
   – d – depth of the least-cost solution
   – m – maximum depth of the state space (may be
     infinite)
Uninformed Search Strategies
• Uninformed strategies use only the
  information available in the problem
  definition
  –   Breadth-first search
  –   Uniform-cost search
  –   Depth-first search
  –   Depth-limited search
  –   Iterative deepening search
Nodes
• Open nodes:
  – Generated, but not yet explored
  – Explored, but not yet expanded


• Closed nodes:
  – Explored and expanded



Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
  – A FIFO queue, i.e., new successors go at end
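A hedged breadth-first sketch built on the Node sketch from the "states vs. nodes" slide; the goal test and successor function are passed in because they depend on the problem, and the frontier is a FIFO queue so the shallowest node is always expanded first.

from collections import deque

def breadth_first_search(initial_state, is_goal, successors):
    frontier = deque([Node(initial_state)])         # FIFO queue: new successors go at the end
    explored = set()
    while frontier:
        node = frontier.popleft()                   # shallowest unexpanded node
        if is_goal(node.state):
            return node.path()
        explored.add(node.state)
        for action, state in successors(node.state):
            if state not in explored and all(n.state != state for n in frontier):
                frontier.append(Node(state, parent=node, depth=node.depth + 1))
    return None                                     # no solution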
Space cost of BFS




• Because you must be able to generate the path upon
  finding the goal state, all visited nodes must be stored
• O(b^(d+1))
Properties of breadth-first
                search
• Complete?
   – Yes (if b, the maximum branching factor, is finite)
• Time?
   – 1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1)), i.e., exponential in d
• Space?
   – O(b^(d+1)) (keeps every node in memory)
• Optimal?
   – Only if cost = 1 per step; otherwise not optimal in general
• Space is the big problem: BFS can easily generate nodes at
  10 MB/s, so 24 hrs = 860 GB!
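To make the exponential growth concrete, a small helper (illustrative only) that evaluates the sum above for a given branching factor b and solution depth d:

def bfs_nodes_generated(b, d):
    # 1 + b + b^2 + ... + b^d nodes up to depth d, plus the b^(d+1) - b nodes
    # generated at depth d+1 when the goal happens to be the last node at depth d.
    return sum(b**i for i in range(d + 1)) + (b**(d + 1) - b)

print(bfs_nodes_generated(10, 6))   # about 1.1 * 10^7 nodes already for b = 10, d = 6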
Depth-first search
• Expand deepest unexpanded node
• Implementation:
  – A LIFO queue, i.e., a stack
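The depth-first counterpart differs from the breadth-first sketch only in the frontier: a LIFO stack (a plain Python list here) instead of a FIFO queue. Note that, as the next slide warns, this plain version does not check for repeated states and can loop forever in state spaces with cycles.

def depth_first_search(initial_state, is_goal, successors):
    frontier = [Node(initial_state)]                # list used as a LIFO stack
    while frontier:
        node = frontier.pop()                       # deepest unexpanded node
        if is_goal(node.state):
            return node.path()
        for action, state in successors(node.state):
            frontier.append(Node(state, parent=node, depth=node.depth + 1))
    return None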
Depth-first search
•   Complete?
    –   No: fails in infinite-depth spaces and in spaces with loops.
    –   Can be modified to avoid repeated states along the path →
        complete in finite spaces
•   Time?
    –   O(b^m): terrible if m is much larger than d, but if solutions are
        dense, may be much faster than breadth-first
•   Space?
    –   O(bm), i.e., linear space!
•   Optimal?
    –   No
Depth-limited search
• It is depth-first search with an imposed
  limit on the depth of exploration, to
  guarantee that the algorithm ends.
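A recursive sketch of depth-limited search (names are assumptions): it is depth-first search that refuses to expand below the given limit, so it always terminates.

def depth_limited_search(state, is_goal, successors, limit):
    if is_goal(state):
        return [state]                              # path consisting of the goal state itself
    if limit == 0:
        return None                                 # cutoff: do not expand below the limit
    for action, child in successors(state):
        result = depth_limited_search(child, is_goal, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None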




Treatment of repeated states
• Breadth-first:
   – If the repeated state is in the structure of closed or open
     nodes, the current path has equal or greater depth than the
     repeated state and can be forgotten.
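A sketch of process_repeated for breadth-first search following this rule, assuming the Node sketch from earlier: any successor whose state already appears in open or closed is dropped.

def process_repeated_bfs(successors, closed_states, open_states):
    seen = {n.state for n in closed_states} | {n.state for n in open_states}
    return [n for n in successors if n.state not in seen]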




Treatment of repeated states
• Depth-first:
   – If the repeated state is in the structure of closed nodes, the
     current path is kept if its depth is less than the repeated state's.
   – If the repeated state is in the structure of open nodes, the
     current path always has greater depth than the repeated state
     and can be forgotten.
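The depth-first version needs the depths, per the two rules above (again a sketch assuming Node objects that record their depth):

def process_repeated_dfs(successors, closed_states, open_states):
    open_seen = {n.state for n in open_states}
    closed_depth = {n.state: n.depth for n in closed_states}
    kept = []
    for n in successors:
        if n.state in open_seen:
            continue        # the new path is always deeper than the open copy: forget it
        if n.state in closed_depth and n.depth >= closed_depth[n.state]:
            continue        # the new path is not shallower than the closed copy: forget it
        kept.append(n)
    return kept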




Iterative deepening search
• The algorithm consists of iterative, depth-first
  searches, with a maximum depth that increases at
  each iteration. Maximum depth at the beginning is 1.
• Behavior similar to BFS, but without the spatial
  complexity.
• Only the current path is kept in memory; nodes are
  regenerated at each iteration.
• DFS problems related to infinite branches are
  avoided.
• To guarantee that the algorithm ends if there is no
  solution, a general maximum depth of exploration can
  be defined.
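A sketch of iterative deepening built on the depth-limited sketch above: the limit starts at 1 and grows until a solution is found or an overall maximum depth is reached (which guarantees termination when there is no solution).

def iterative_deepening_search(initial_state, is_goal, successors, max_depth=50):
    for limit in range(1, max_depth + 1):           # maximum depth at the beginning is 1
        result = depth_limited_search(initial_state, is_goal, successors, limit)
        if result is not None:
            return result
    return None                                     # no solution within max_depth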
Summary
– All uninformed searching techniques are more alike
  than different.
– Breadth-first has space issues, and possibly optimality
  issues.
– Depth-first has time and optimality issues, and possibly
  completeness issues.
– Depth-limited search has optimality and completeness
  issues.
– Iterative deepening is the best uninformed search we
  have explored.
Uninformed vs. informed
• Blind (or uninformed) search algorithms:
  – Solution cost is not taken into account.


• Heuristic (or informed) search algorithms:
  – A solution cost estimation is used to guide the
    search.
  – Neither the optimal solution, nor even a solution, is
    guaranteed.

