Chapter 4  Informed Search and Exploration
Outline: Informed (heuristic) search strategies – (greedy) best-first search, A* search. (Admissible) heuristic functions – relaxed problems, subproblems. Local search algorithms – hill-climbing search, simulated annealing search, local beam search, genetic algorithms. Online search* – online local search, learning in online search.
Informed search strategies. Informed search uses problem-specific knowledge beyond the problem definition, and finds solutions more efficiently than uninformed search. Best-first search uses an evaluation function f(n) for each node, e.g., a measure of distance to the goal – expand the node with the lowest evaluation. Implementation: the fringe is a queue sorted in increasing order of f-values. Can we really expand the best node first? No! Only the one that appears to be best based on f(n). Heuristic function h(n): the estimated cost of the cheapest path from node n to a goal node. Specific algorithms: greedy best-first search and A* search (see the sketch below).
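A minimal sketch of this skeleton in Python, assuming a hypothetical `problem` object with `initial`, `is_goal(state)`, and `successors(state)` (names invented for illustration, not from the slides); plugging in f = h gives greedy best-first search, and f = g + h gives A*:

```python
import heapq
from itertools import count

def best_first_search(problem, f):
    """Generic best-first search: always expand the fringe node with the
    lowest f-value. f = h gives greedy best-first; f = g + h gives A*."""
    tie = count()                      # tie-breaker so the heap never compares states
    frontier = [(f(problem.initial), next(tie), problem.initial)]
    explored = set()                   # repeated-state checking
    while frontier:
        _, _, node = heapq.heappop(frontier)   # the node that *appears* best
        if problem.is_goal(node):
            return node
        if node in explored:
            continue
        explored.add(node)
        for child in problem.successors(node):
            if child not in explored:
                heapq.heappush(frontier, (f(child), next(tie), child))
    return None                        # failure: fringe exhausted
```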
Greedy best-first search expands the node that appears closest to the goal, i.e., f(n) = h(n), e.g., the straight-line distance heuristic.
Greedy best-first search example
Properties of greedy best-first search. Complete? No – it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt; yes with repeated-state checking in a finite state space. Optimal? No. Time? O(b^m) (m = maximum depth of the search space), but a good heuristic function can give dramatic improvement. Space? O(b^m) – it keeps all nodes in memory.
A* search: evaluation function f(n) = g(n) + h(n), where g(n) = the cost to reach node n, h(n) = the estimated cost from n to the goal, and f(n) = the estimated total cost of the path through n to the goal. An admissible (optimistic) heuristic never overestimates the cost to reach the goal – it estimates the cost of solving the problem as less than it actually is, e.g., straight-line distance never overestimates the actual road distance. A* using tree search is optimal if h(n) is admissible; it could return suboptimal solutions using graph search, which might discard the optimal path to a repeated state if it is not the first one generated. A simple fix is to discard the more expensive of any two paths found to the same node (at the cost of extra memory).
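A corresponding A* sketch, now assuming `successors(state)` yields (child, step_cost) pairs so that g can be accumulated (again an assumed interface); the `best_g` check implements the slide's simple fix of discarding the more expensive of two paths to the same node:

```python
import heapq
from itertools import count

def a_star(problem, h):
    """A*: f(n) = g(n) + h(n); optimal with an admissible h (tree search)
    or a consistent h (graph search)."""
    tie = count()
    start = problem.initial
    frontier = [(h(start), next(tie), 0, start, [start])]   # (f, tie, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if problem.is_goal(state):
            return path, g                                   # solution path and its cost
        for child, step_cost in problem.successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(child, float("inf")):         # keep the cheaper path
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h(child), next(tie), g2, child, path + [child]))
    return None
```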
(Figure: the straight-line distance heuristic.)
A* search example
Optimality of A*. Consistency (monotonicity): h(n) ≤ c(n, a, n′) + h(n′), where n′ is any successor of n generated by action a – the general triangle inequality among n, n′, and the goal. A consistent heuristic is also admissible. A* using graph search is optimal if h(n) is consistent; the values of f(n) along any path are then nondecreasing.
Properties of A*. Suppose C* is the cost of the optimal solution path. A* expands all nodes with f(n) < C*; A* might expand some of the nodes with f(n) = C* on the “goal contour”; A* will expand no nodes with f(n) > C* – they are pruned! Pruning: eliminating possibilities from consideration without examining them. A* is optimally efficient for any given heuristic function: no other optimal algorithm is guaranteed to expand fewer nodes than A*, because an algorithm might miss the optimal solution if it does not expand all nodes with f(n) < C*. A* is complete. Time complexity: exponential – the number of nodes within the goal contour. Space complexity: A* keeps all generated nodes in memory, so it typically runs out of space long before it runs out of time.
Memory-bounded heuristic search. Iterative-deepening A* (IDA*) uses the f-value (g + h) as the cutoff. Recursive best-first search (RBFS) replaces the f-value of each node along the path with the best f-value of its children: it remembers the f-value of the best leaf in the “forgotten” subtree so that it can re-expand it later if necessary. RBFS is more efficient than IDA* but still generates excessive nodes – it changes its mind, going back to pick up the second-best path whenever extending the current best path increases its f-value. RBFS is optimal if h(n) is admissible; its space complexity is O(bd), and its time complexity depends on the accuracy of h(n) and on how often the current best path changes. Both IDA* and RBFS suffer exponential time complexity because they cannot check for repeated states other than those on the current path when searching graphs – they should have used more memory (to store the nodes visited)!
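An illustrative IDA* sketch under the same assumed interface: a depth-first search bounded by an f-cutoff, where the smallest f-value that exceeded the previous cutoff becomes the next one:

```python
def ida_star(problem, h):
    """Iterative-deepening A*: depth-first contour search with an f = g + h cutoff."""
    def dfs(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f                      # smallest f over the bound seeds the next cutoff
        if problem.is_goal(state):
            return path
        minimum = float("inf")
        for child, cost in problem.successors(state):
            if child not in path:         # repeated-state check on the current path only
                result = dfs(child, g + cost, bound, path + [child])
                if isinstance(result, list):
                    return result         # solution found below this node
                minimum = min(minimum, result)
        return minimum

    bound = h(problem.initial)
    while True:
        result = dfs(problem.initial, 0, bound, [problem.initial])
        if isinstance(result, list):
            return result                 # solution path
        if result == float("inf"):
            return None                   # no solution
        bound = result                    # raise the cutoff and restart
```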
(Figure: the straight-line distance heuristic.)
RBFS example
Memory-bounded heuristic search (cont’d). SMA* – simplified MA* (memory-bounded A*) – expands the best leaf node until memory is full, then drops the worst leaf node (the one with the highest f-value). It regenerates a subtree only when all other paths have been shown to look worse than the path it has forgotten. SMA* is complete and optimal if a solution is reachable, and it might be the best general-purpose algorithm for finding optimal solutions. If there is no way to balance the trade-off between time and memory, drop the optimality requirement!
(Admissible) Heuristic Functions for the 8-puzzle: h1 = the number of misplaced tiles, e.g., 7 tiles are out of position, so h1 = 7; h2 = the total Manhattan (city-block) distance, e.g., h2 = 4+0+3+3+1+0+2+1 = 14.
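Both heuristics are cheap to compute. A sketch, assuming a state is a tuple of 9 tiles in row-major order with 0 as the blank (the encoding and the sample configuration are illustrative, not the slides' figure):

```python
# 8-puzzle heuristics; a state is a tuple of 9 tiles in row-major order,
# 0 = the blank (this encoding and the sample state are assumptions).

def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan (city-block) distance of every tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1(start, goal), h2(start, goal))   # 8 18 for this sample state
```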
Effect of heuristic accuracy. Effective branching factor b*: if the total number of nodes generated by A* is N and the solution depth is d, then b* is the branching factor that a uniform tree of depth d containing N + 1 nodes would have: N + 1 = 1 + b* + (b*)^2 + … + (b*)^d. A well-designed heuristic would have a b* value close to 1; h2 is better than h1 based on b*. Domination: h2 dominates h1 if h2(n) ≥ h1(n) for every node n; A* using h2 will never expand more nodes than A* using h1, since every node with h(n) < C* − g(n) (i.e., f(n) < C*) will surely be expanded. The larger the better, as long as it does not overestimate!
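b* has no closed form, but it is easy to find numerically; a sketch that solves N + 1 = 1 + b* + … + (b*)^d by bisection:

```python
def effective_branching_factor(n_generated, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)**2 + ... + (b*)**d for b* by bisection."""
    def total(b):
        return sum(b ** i for i in range(depth + 1))
    lo, hi = 1.0, float(n_generated)          # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_generated + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g., 52 nodes generated at solution depth 5 gives b* ≈ 1.92 (Russell & Norvig)
print(round(effective_branching_factor(52, 5), 2))
```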
Inventing admissible heuristic functions. h1 and h2 are solutions to relaxed (simplified) versions of the puzzle: if the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1 gives the length of the shortest solution; if the rules are relaxed so that a tile can move to any adjacent square, then h2 gives the length of the shortest solution. Relaxed problem: a problem with fewer restrictions on the actions. Admissible heuristics for the original problem can be derived from the optimal (exact) solution to a relaxed problem. Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the original problem. Which should we choose if none of h1 … hm dominates the others? We can have the best of all worlds, i.e., use whichever function is most accurate on the current node: h(n) = max(h1(n), …, hm(n)) – see the sketch below. Subproblem*: admissible heuristics for the original problem can also be derived from the solution cost of a subproblem. Learning from experience*.
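The "best of all worlds" rule is a one-liner; a sketch that combines any set of admissible heuristics:

```python
def composite_heuristic(*heuristics):
    """h(n) = max(h1(n), ..., hm(n)): admissible whenever every hi is,
    and at least as accurate as each individual hi on every node."""
    return lambda state: max(h(state) for h in heuristics)

# e.g., combine the two 8-puzzle heuristics sketched above (h2 happens to
# dominate h1 here; the max matters when neither heuristic dominates)
h = composite_heuristic(lambda s: h1(s, goal), lambda s: h2(s, goal))
```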
Local search algorithms and optimization. Systematic search algorithms find the goal (or are given it) and find the path to that goal. In local search algorithms, the path to the goal is irrelevant, e.g., the n-queens problem: the state space is the set of “complete” configurations, and the algorithm keeps a single “current” state and tries to improve it, e.g., by moving to one of its neighbors. Key advantages: local search uses very little (constant) memory and finds reasonable solutions in large or infinite (continuous) state spaces. A (pure) optimization problem asks for the best state (optimal configuration) according to an objective function, e.g., reproductive fitness in Darwinian evolution; there is no goal test and no path cost.
Local search – example
Local search – state-space landscape: elevation = the value of the objective function (seek a global maximum) or of a heuristic cost function (seek a global minimum). A complete local search algorithm finds a solution if one exists; an optimal algorithm finds a global minimum or maximum.
Hill-climbing search moves in the direction of increasing value until it reaches a “peak”. The current-node data structure records only the state and its objective-function value; the algorithm neither remembers the history nor looks beyond the immediate neighbors – like climbing Mount Everest in thick fog with amnesia.
Hill-climbing search – example: the complete-state formulation of 8-queens. The successor function returns all possible states generated by moving a single queen to another square in the same column (8 × 7 = 56 successors for each state); the heuristic cost function h is the number of pairs of queens that are attacking each other. The best moves reduce h = 17 to h = 12; a local minimum has h = 1.
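A steepest-ascent hill-climbing sketch for this formulation, encoding a state as state[c] = row of the queen in column c (one queen per column; the encoding is an assumption):

```python
import random

def attacking_pairs(state):
    """h = number of pairs of queens attacking each other (same row or diagonal)."""
    n = len(state)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def hill_climb(n=8):
    """Steepest-ascent hill climbing: each successor moves a single queen
    within its own column; stop at a solution or a local minimum."""
    state = [random.randrange(n) for _ in range(n)]
    while True:
        h = attacking_pairs(state)
        if h == 0:
            return state                      # solution found
        best_h, best_state = h, None
        for col in range(n):
            original = state[col]
            for row in range(n):
                if row == original:
                    continue
                state[col] = row
                h_new = attacking_pairs(state)
                if h_new < best_h:
                    best_h, best_state = h_new, list(state)
            state[col] = original
        if best_state is None:
            return state                      # stuck at a local minimum / plateau
        state = best_state
```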
Hill-climbing search – greedy local search. Hill climbing, the greedy local search, often gets stuck. Local maxima: a peak that is higher than each of its neighboring states but lower than the global maximum. Ridges: a sequence of local maxima that is difficult to navigate. Plateau: a flat area of the state-space landscape – either a flat local maximum (no uphill exit exists) or a shoulder (progress is still possible). Hill climbing solves only 14% of 8-queens instances, but it is fast (4 steps on average to succeed, 3 to get stuck).
Hill-climbing search – improvements. Allow sideways moves, in the hope that the plateau is a shoulder; this could get stuck in an infinite loop at a flat local maximum, so limit the number of consecutive sideways moves. With sideways moves, hill climbing solves 94% of 8-queens instances, but more slowly (21 steps on average to succeed, 64 to fail). Variations: stochastic hill climbing chooses among uphill moves at random, with the probability of selection depending on the steepness; first-choice hill climbing randomly generates successors until it finds a better one. All the hill-climbing algorithms discussed so far are incomplete: they fail to find a goal when one exists because they get stuck on local maxima. Random-restart hill climbing conducts a series of hill-climbing searches from randomly generated initial states (see the sketch below). We may still have to give up global optimality: for NP-hard problems the landscape often consists of a large number of “porcupines on a flat floor”.
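Random restarts are a thin wrapper around the hill-climbing sketch above:

```python
def random_restart_hill_climb(n=8, max_restarts=100):
    """Random-restart hill climbing: repeat hill climbing from fresh random
    initial states; complete with probability approaching 1 as restarts grow."""
    best = None
    for _ in range(max_restarts):
        state = hill_climb(n)                 # hill_climb from the sketch above
        if attacking_pairs(state) == 0:
            return state                      # global optimum found
        if best is None or attacking_pairs(state) < attacking_pairs(best):
            best = state
    return best                               # best local optimum seen
```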
Simulated annealing search combines hill climbing (efficiency) with a random walk (completeness). Annealing: hardening metals by heating them to a high temperature and gradually cooling them. Picture getting a ping-pong ball into the deepest crevice of a bumpy surface: shake the surface to get the ball out of local minima, but not too hard, so as not to dislodge it from the global minimum. Simulated annealing starts by shaking hard (at a high temperature) and then gradually reduces the intensity of the shaking (lowers the temperature), escaping local minima by allowing some “bad” moves while gradually reducing their size and frequency.
Simulated annealing search – implementation. Always accept good moves. The probability of accepting a bad move is e^{ΔE/T} (with ΔE < 0): it decreases exponentially with the “badness” ΔE of the move, and it decreases as the “temperature” T goes down. Simulated annealing finds a global optimum with probability approaching 1 if the schedule lowers T slowly enough.
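A sketch of this acceptance rule, assuming `problem.value(state)` (higher is better) and `problem.successors(state)` returning a list; the cooling schedule shown is an assumed choice, not the slides' own:

```python
import itertools
import math
import random

def simulated_annealing(problem, schedule):
    """Simulated annealing: always accept a better successor; accept a worse
    one with probability exp(delta_e / T), where delta_e < 0 is the badness."""
    current = problem.initial
    for t in itertools.count(1):
        T = schedule(t)
        if T <= 0:
            return current                    # frozen: return the current state
        successor = random.choice(problem.successors(current))
        delta_e = problem.value(successor) - problem.value(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = successor

# e.g., an exponential cooling schedule (an assumed choice)
schedule = lambda t: 20.0 * 0.95 ** t
```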
Local beam search keeps track of k states rather than just one: it generates all the successors of all k states, selects the k best from the complete list, and repeats. It quickly abandons unfruitful searches and moves to the part of the space where the most progress is being made – “come over here, the grass is greener!” – but can suffer from a lack of diversity among the k states. Stochastic beam search chooses the k successors at random, with the probability of choosing a given successor an increasing function of its value – like natural selection, where the successors (offspring) of a state (organism) populate the next generation according to its value (fitness).
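A local beam search sketch, assuming hypothetical `problem.random_state()`, `problem.value(state)` (higher is better), `problem.successors(state)`, and `problem.is_goal(state)`:

```python
import heapq

def local_beam_search(problem, k=10, steps=1000):
    """Local beam search: keep the k best states among all successors of the
    current k states; information flows between the parallel search threads."""
    states = [problem.random_state() for _ in range(k)]
    for _ in range(steps):
        pool = [child for s in states for child in problem.successors(s)]
        if not pool:
            break
        for s in pool:
            if problem.is_goal(s):
                return s
        states = heapq.nlargest(k, pool, key=problem.value)   # keep the k best
    return max(states, key=problem.value)                     # best state found
```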
Genetic algorithms (GA): successor states are generated by combining two parent states. Population: a set of k randomly generated states; each state, called an individual, is represented as a string over a finite alphabet, e.g., a string of 0s and 1s (8-queens: 24 bits, or 8 digits for the queens’ positions). Fitness (evaluation) function: returns higher values for better states, e.g., the number of nonattacking pairs of queens. Pairs are chosen at random for reproduction with probability proportional to fitness score – and avoid converging on similar individuals too early.
Genetic algorithms (cont’d). Schema: a substring in which some of the positions can be left unspecified; instances: strings that match the schema. GA works best when schemas correspond to meaningful components of a solution. A crossover point is randomly chosen from the positions in the string; crossover takes larger steps in the state space early on and smaller steps later. Each location is subject to random mutation with a small independent probability.
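A minimal GA sketch for 8-queens that puts the pieces together: fitness-proportional selection, a random one-point crossover, and independent per-position mutation (the parameter values are assumptions):

```python
import random

def fitness(state):
    """Number of nonattacking pairs of queens (maximum 28 for 8 queens)."""
    n = len(state)
    attacking = sum(1 for a in range(n) for b in range(a + 1, n)
                    if state[a] == state[b] or abs(state[a] - state[b]) == b - a)
    return n * (n - 1) // 2 - attacking

def genetic_algorithm(k=20, n=8, generations=1000, p_mutation=0.1):
    """Each individual is a list of n digits: state[c] = row of queen in column c."""
    population = [[random.randrange(n) for _ in range(n)] for _ in range(k)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if fitness(best) == n * (n - 1) // 2:
            return best                                          # all pairs nonattacking
        weights = [fitness(s) for s in population]
        next_generation = []
        for _ in range(k):
            x, y = random.choices(population, weights=weights, k=2)  # selection
            c = random.randrange(1, n)                               # crossover point
            child = x[:c] + y[c:]
            for i in range(n):                                       # mutation
                if random.random() < p_mutation:
                    child[i] = random.randrange(n)
            next_generation.append(child)
        population = next_generation
    return max(population, key=fitness)
```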
