Problem-Solving Strategies in
Artificial Intelligence
1. Problem solving Methods
2. Search Strategies
3. Uninformed – Informed
4. Heuristics
5. Local Search Algorithms and Optimization Problems
Dr.J.SENTHILKUMAR
Assistant Professor
Department of Computer Science and Engineering
KIT-KALAIGNARKARUNANIDHI INSTITUTE OF TECHNOLOGY
Problem Solving Methods
• Solving a problem with AI involves defining the search space, deciding the
start and goal states, and then finding a path from the start state to the
goal state through the search space.
• The movement from the start state to the goal state is guided by a set of
rules specifically designed for that particular problem.
• Problem
• A problem is the question that is to be solved. To solve a problem, it needs to be
precisely defined.
• Defining a problem means specifying the start state, the goal state, the other valid
states, and the transitions between them.
• Finding the Solution
• After the problem and the related knowledge are represented in a suitable format, an
appropriate methodology is chosen that uses the knowledge to transform the start
state into the goal state.
• The techniques for finding the solution are called search techniques.
• Various search techniques have been developed for this purpose.
Representation of AI Problem
• An AI problem can be described in the following four parts:
• A Lexical part: determines which symbols are allowed in the representation of the
problem. Like a lexicon in the ordinary sense, this part abstracts the fundamental
features of the problem.
• A Structural part: describes constraints on how the symbols can be arranged. This
corresponds to working out the possibilities for joining these symbols and
generating higher-level structural units.
• A Procedural part: specifies the access procedures that enable one to create
descriptions, to modify them, and to answer questions using them.
• A Semantic part: establishes a way of associating meaning with the descriptions.
WELL-DEFINED PROBLEMS AND SOLUTIONS
• A problem can be defined formally by five components:
• INITIAL STATE: the state that the agent starts in.
• ACTIONS: a description of the possible actions available to the agent.
• TRANSITION MODEL: a description of what each action does, specified by a function
RESULT(s, a) that returns the state that results from doing action a in state s. We also use the term SUCCESSOR to
refer to any state reachable from a given state by a single action.
• Together, the initial state, actions, and transition model implicitly define the STATE SPACE of the problem—the set of all states reachable from the initial state by
any sequence of actions. The state space forms a directed network or GRAPH in which the nodes are states and the links between nodes are actions. A PATH in the
state space is a sequence of states connected by a sequence of actions.
• GOAL TEST: determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal
states, and the test simply checks whether the given state is one of them.
• PATH COST: a function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects
its own performance measure.
Formulating problems
• This formulation seems reasonable, but it is still a model—an abstract mathematical
description—and not the real thing.
• Agents can differ in the amount of knowledge they have about their actions and about the
state they are in.
• This depends on how the agent is connected to its environment through its percepts and
actions.
• We find that there are four essentially different types of problems:
• Single-state problems
• Multiple-state problems
• Contingency problems
• Exploration problems
• Toy problems: the vacuum world
• The two-location vacuum world can be formulated as a problem as follows:
• States: The state is determined by both the agent location and the dirt locations. The agent is in one of
two locations, each of which might or might not contain dirt. Thus, there are 2 × 2^2 = 8 possible world
states. A larger environment with n locations has n · 2^n states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger
environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the leftmost
square, moving Right in the rightmost square, and Sucking in a clean square have no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
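As a concrete illustration, here is one possible encoding of this formulation in Python; the state representation (agent location plus a frozenset of dirty squares) is an assumption, chosen so that the 2 × 2^2 = 8 states fall out directly.

```python
LOCATIONS = ("A", "B")
DIRT_SETS = [frozenset(s) for s in [(), ("A",), ("B",), ("A", "B")]]
STATES = [(loc, d) for loc in LOCATIONS for d in DIRT_SETS]   # 2 x 2^2 = 8 states

def actions(state):
    """Every state has the same three actions in this simple environment."""
    return ("Left", "Right", "Suck")

def result(state, action):
    """Transition model: Left/Right move the agent; Suck cleans the current square."""
    loc, dirt = state
    if action == "Left":
        return ("A", dirt)           # no effect if already in the leftmost square
    if action == "Right":
        return ("B", dirt)           # no effect if already in the rightmost square
    if action == "Suck":
        return (loc, dirt - {loc})   # no effect on an already clean square
    raise ValueError(action)

def goal_test(state):
    return not state[1]              # goal: no dirty squares remain
```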
The 8-puzzle:
• States: A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal
can be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space
Left, Right, Up, or Down. Different subsets of these are possible depending on where
the blank is.
• Transition model: Given a state and action, this returns the resulting state; for example,
if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank
switched.
• Goal test: This checks whether the state matches the goal configuration shown in Figure
3.4. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
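A hedged sketch of this 8-puzzle formulation, assuming a state is a 9-tuple read row by row with 0 marking the blank; the move-offset encoding is one common implementation choice, not the only one.

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)             # one possible goal configuration
MOVES = {"Up": -3, "Down": +3, "Left": -1, "Right": +1}

def actions(state):
    """Movements of the blank that stay on the 3x3 board."""
    i = state.index(0)
    acts = []
    if i >= 3:      acts.append("Up")           # blank not in the top row
    if i <= 5:      acts.append("Down")         # blank not in the bottom row
    if i % 3 != 0:  acts.append("Left")         # blank not in the left column
    if i % 3 != 2:  acts.append("Right")        # blank not in the right column
    return acts

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL
```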
Some more examples of AI problems:
• Tic-Tac-Toe
• 8-Queens Problem
• Chess Problem
• Tower of Hanoi
• Traveling Salesperson Problem
• Monkey and Banana Problem
• Cryptarithmetic Problem
• Blocks World Problem
Search:
• Search is the process of finding a path from the initial state to the desired goal state among all
possible future states. The major work in the field of search is to find the right search strategy
for a particular problem.
• There are two kinds of search, distinguished by whether they use information about the goal:
• Uninformed search algorithms—algorithms that are given no information about
the problem other than its definition. Although some of these algorithms can solve
any solvable problem, none of them can do so efficiently.
• Informed search algorithms, on the other hand, can do quite well given some
guidance on where to look for solutions.
• Search algorithms form the core of such Artificial Intelligence
programs. While we may be inclined to think that they have
limited applicability, confined to areas such as gaming and
puzzle-solving, such algorithms are in fact used in many more AI
areas, like route and cost optimization, action planning,
knowledge mining, robotics, autonomous driving, computational
biology, software and hardware verification, theorem proving, etc.
• In a way, many AI problems can be modelled as a search
problem where the task is to reach the goal from the initial state
via state transformation rules. The search space is defined as
a graph (or a tree), and the aim is to reach the goal from the
initial state via the shortest path, in terms of cost, length, a
combination of both, etc.
• All search methods can be broadly classified into two categories:
1. Uninformed (or Exhaustive or Blind) methods, where the search is carried out
without any additional information beyond what is provided in the problem statement.
Examples include Breadth-First Search and Depth-First Search (a minimal BFS
sketch follows this list).
2. Informed (or Heuristic) methods, where the search is carried out using additional
information to determine the next step towards finding the solution. Best-First Search
is an example of such an algorithm.
• Informed search methods are more efficient, lower in cost, and higher in performance
than uninformed search methods.
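For concreteness, here is a minimal breadth-first search written against the hypothetical Problem interface sketched earlier; it illustrates the "no additional information" character of uninformed search, expanding nodes strictly in order of depth.

```python
from collections import deque

def breadth_first_search(problem):
    """Uninformed search: expand shallowest nodes first (FIFO frontier)."""
    start = problem.initial_state
    if problem.goal_test(start):
        return [start]
    frontier = deque([[start]])            # queue of paths, shallowest first
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for action in problem.actions(path[-1]):
            child = problem.result(path[-1], action)
            if child not in explored:
                if problem.goal_test(child):
                    return path + [child]  # shortest path in number of steps
                explored.add(child)
                frontier.append(path + [child])
    return None                            # no solution exists
```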
UNINFORMED SEARCH STRATEGIES:
• Uninformed search is also called Blind Search. The term means that the strategies have
no additional information about states beyond that provided in the problem
definition.
• All they can do is generate successors and distinguish a goal state from a
non-goal state.
• All search strategies are distinguished by the order in which nodes are
expanded.
• Strategies that know whether one non-goal state is “more promising” than
another are called informed search or heuristic search strategies.
• Informed(Heuristic) Search Strategies
• An informed search strategy is one that uses problem-specific knowledge beyond the definition
of the problem itself. It can find solutions more efficiently than an uninformed strategy.
• Best-first search
• Greedy Best-first search
• A* Search
• AO* Search
Best-First Search:
• It always selects the path which appears best at that moment.
• It is a combination of DFS and BFS.
• It uses a heuristic function h(n) with h(n) <= h*(n), where h(n) is the estimated cost from
node n to the goal and h*(n) is the actual cheapest cost.
• The greedy best-first algorithm is implemented using a priority queue.
• The node with the lowest evaluation is selected for expansion, because the evaluation
measures the distance to the goal (a sketch follows).
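A minimal sketch of this priority-queue implementation, assuming the hypothetical Problem interface from earlier and a caller-supplied heuristic h; the tie-breaking counter is an implementation detail added so that states themselves are never compared.

```python
import heapq
from itertools import count

def greedy_best_first_search(problem, h):
    """Expand the frontier node whose heuristic value h(n) is lowest."""
    tie = count()                          # breaks ties between equal h values
    start = problem.initial_state
    frontier = [(h(start), next(tie), start, [start])]
    explored = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored:
                heapq.heappush(frontier, (h(child), next(tie), child, path + [child]))
    return None
```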
Problem solving in Artificial Intelligence.pptx
Advantages:
• Best-first search can switch between BFS and DFS, thereby gaining the
advantages of both algorithms.
• This algorithm is more efficient than the BFS and DFS algorithms.
Disadvantages:
• It can behave like an unguided depth-first search in the worst case.
• It can get stuck in a loop, like DFS.
• This algorithm is not optimal.
• Heuristic functions
• A heuristic function, or simply a heuristic, is a function that ranks alternatives in
various search algorithms at each branching step, based on the available
information, in order to decide which branch to follow during the
search.
• The key component of a best-first search algorithm is a heuristic function,
denoted by h(n):
h(n) = estimated cost of the cheapest path from node n to a goal node.
• Heuristic functions are the most common form in which additional knowledge is
imparted to the search algorithm.
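As one concrete heuristic of this form, the Manhattan distance for the 8-puzzle sums each tile's grid distance from its goal square; this particular h(n) is a well-known illustrative choice, not taken from the slides, and it reuses the 9-tuple encoding sketched earlier.

```python
def h_manhattan(state, goal=(0, 1, 2, 3, 4, 5, 6, 7, 8)):
    """Estimated cost to goal: sum of each tile's grid distance to its goal square."""
    dist = 0
    for i, tile in enumerate(state):
        if tile == 0:                   # the blank is not counted
            continue
        j = goal.index(tile)            # where this tile belongs
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist
```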
• Greedy Best-first search
• Greedy best-first search tries to expand the node that is closest to the goal, on
the grounds that this is likely to lead to a solution quickly.
• It evaluates nodes using the heuristic function alone: f(n) = h(n).
• Using the straight-line distance heuristic h_SLD, the goal state can be reached
faster.
• Properties of greedy search
• Complete? No – it can get stuck in loops. It is complete in a finite space with
repeated-state checking.
• Time? O(b^m), but a good heuristic can give dramatic improvement.
• Space? O(b^m) – it keeps all nodes in memory.
• Optimal? No.
• Greedy best-first search is not optimal, and it is incomplete.
• The worst-case time and space complexity is O(b^m), where m is the maximum depth of the
search space.
• A* Search:
• The A* search algorithm finds the shortest path through the search space using a
heuristic function.
• It uses h(n) together with g(n), the cost to reach node n from the start state.
• This algorithm expands a smaller search tree and provides an optimal result faster.
• It is similar to Uniform Cost Search (UCS) except that it orders the queue by g(n) + h(n)
instead of g(n).
• A* uses the search heuristic as well as the cost to reach the node; hence we combine
both costs as
• f(n) = g(n) + h(n) {the fitness number}
• f(n) – estimated cost of the cheapest solution through n.
• g(n) – cost to reach node n from the start state.
• h(n) – estimated cost to reach a goal node from node n.
• Algorithm: a minimal sketch is given below.
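A minimal A* sketch under the same assumptions as the earlier searches: the hypothetical Problem interface, a caller-supplied heuristic h, and a best_g table so a state is re-queued only when a cheaper path to it is found.

```python
import heapq
from itertools import count

def a_star_search(problem, h):
    """Expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    tie = count()                                  # tie-breaker for equal f values
    start = problem.initial_state
    frontier = [(h(start), next(tie), 0, start, [start])]   # (f, tie, g, state, path)
    best_g = {start: 0}                            # cheapest known g per state
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, child)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h(child), next(tie), g2, child, path + [child]))
    return None
```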
•Advantages:
• It performs better than the other search algorithms.
• It is optimal and complete (given an admissible heuristic).
• It can solve very complex problems.
•Disadvantages:
• With an inadmissible heuristic, it does not always produce the shortest path.
• It is not practical for various large-scale problems, largely because it keeps all
generated nodes in memory.
•AO* Algorithm:
• AO* is the best-known algorithm for solving problems represented as AND-OR graphs.
• The problem is divided into a set of sub-problems, where each sub-problem
can be solved separately.
• State Space Landscape:
• A landscape has both “location” (defined by the state) and “elevation” (defined by the
value of the heuristic cost function or objective function).
• If elevation corresponds to cost, then the aim is to find the lowest valley – a global
minimum; if elevation corresponds to an objective function, then the aim is to find the
highest peak – a global maximum.
• Local search algorithms explore this landscape. A complete local search algorithm always
finds a goal if one exists; an optimal algorithm always finds a global
minimum/maximum.
• Simple Hill Climbing:
• Algorithm:
• Evaluate the initial state. If it is also a goal state, then return it and quit.
Otherwise, Continue with the initial state as the current state.
• Loop until a solution is found or until there are no new operators left to
be applied in the current state:
• Select an operator that has not yet been applied to the current state and apply it
to produce a new state.
• Evaluate the new state.
• If it is a goal state, then return it and quit.
• If it is not a goal state but it is better than the current state, then make it the
current state.
• If it is not better than the current state, then continue in the loop.
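The loop above translates almost line for line into Python. In this sketch, successors, evaluate, and is_goal are assumed helper functions standing in for the problem's operators, objective function, and goal test; higher evaluate values are taken to mean better states.

```python
def simple_hill_climbing(start, successors, evaluate, is_goal):
    """Take the FIRST successor that improves on the current state."""
    current = start
    while True:
        if is_goal(current):
            return current
        improved = False
        for new_state in successors(current):   # operators not yet applied
            if is_goal(new_state):
                return new_state
            if evaluate(new_state) > evaluate(current):
                current = new_state             # first better move wins
                improved = True
                break
        if not improved:
            return current                      # no operator helps: stuck
```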
• Steepest-Ascent Hill Climbing:
• A variation of simple hill climbing that considers all the moves from the current
state and selects the best one as the next state is called steepest-ascent hill
climbing.
• Algorithm:
• Evaluate the initial state. If it is also a goal state, then return it and quit.
Otherwise, Continue with the initial state as the current state.
• Loop until a solution is found or until a complete iteration produces no
changes to current state:
• Let SUCC be a state such that any possible successor of the current state will be better than
SUCC.
• For each operator that applies to current state do:
• Apply the operator and generate a new state.
• Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to
SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone.
• If the SUCC is better than current state, then set current state to SUCC.
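The steepest-ascent variant, under the same assumed helpers as the previous sketch; the only change is that all successors are examined before a move is made.

```python
def steepest_ascent_hill_climbing(start, successors, evaluate, is_goal):
    """Examine ALL successors and move to the best one, if it is an improvement."""
    current = start
    while True:
        if is_goal(current):
            return current
        succ, succ_value = None, float("-inf")  # worse than any real successor
        for new_state in successors(current):
            if is_goal(new_state):
                return new_state
            v = evaluate(new_state)
            if v > succ_value:                  # keep the best successor seen
                succ, succ_value = new_state, v
        if succ is None or succ_value <= evaluate(current):
            return current                      # a full iteration changed nothing
        current = succ
```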
LOCAL SEARCH IN CONTINUOUS SPACES
• We have so far considered algorithms that work only in discrete environments, but real-world
environments are continuous.
• Local search amounts to maximizing a continuous objective function in a multi-
dimensional vector space.
• This is hard to do in general.
• One immediate retreat:
• Discretize the space near each state.
• Apply a discrete local search strategy (e.g., stochastic hill climbing, simulated
annealing).
• The objective often resists a closed-form solution:
• Fake up an empirical gradient.
• This amounts to greedy hill climbing in the discretized state space (sketched below).
• The Newton-Raphson method can be employed to find maxima.
• Continuous problems have the same difficulties as discrete ones: plateaus, ridges, local maxima, etc.
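One way to "fake up an empirical gradient" is to probe a small step along each coordinate and climb in the best discretized direction, refining the grid when no probe improves the objective. A hedged sketch, assuming a real-valued objective f to be maximized over a list of coordinates x:

```python
def discretized_hill_climb(f, x, step=0.1, min_step=1e-4):
    """Greedy ascent on a grid: try +/- step along each coordinate,
    take the best improving move, and refine the grid when stuck."""
    x = list(x)
    fx = f(x)
    while step > min_step:
        best_x, best_f = None, fx
        for i in range(len(x)):
            for d in (step, -step):     # empirical gradient probe on axis i
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy > best_f:
                    best_x, best_f = y, fy
        if best_x is None:
            step /= 2                   # plateau or ridge: use a finer grid
        else:
            x, fx = best_x, best_f
    return x, fx
```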
• Online Search Agents and Unknown Environments
• Online search problems
• Offline search (all algorithms so far):
• Compute a complete solution, ignoring the environment; then carry out the action sequence.
• Online search:
• Interleave computation and action:
• Compute—Act—Observe—Compute—…
• Online search is good:
• For dynamic, semi-dynamic, and stochastic domains.
• Whenever offline search would yield exponentially many contingencies.
• Online search is necessary for exploration problems:
• States and actions are unknown to the agent.
• The agent uses actions as experiments to determine what to do.
Examples:
• A robot exploring an unknown building.
• A classical hero escaping a labyrinth.
• Assume the agent knows:
• The actions available in state s.
• The step-cost function c(s, a, s′).
• Whether state s is a goal state.
• When it has visited a state s previously.
• An admissible heuristic function h(s).
• Note that the agent doesn’t know the outcome state (s′) of a given action (a) until it tries the
action, and it doesn’t know all the actions available from a state s until it reaches that state.
• The competitive ratio compares the actual cost with the cost the agent would incur if it knew the
search space.
• No agent can avoid dead ends in all state spaces.
• Robotics examples: staircase, ramp, cliff, arbitrary terrain.
• Assume the state space is safely explorable—some goal state is always reachable.
Online Search Agents
• Interleaving planning and acting hamstrings offline search:
• A* expands arbitrary nodes without waiting for the outcome of an action; an online
algorithm can expand only the node it physically occupies, so it is best to explore
nodes in physically local order.
• This suggests using depth-first search:
• The next node expanded is always a child of the current one.
• When all actions in a state have been tried, the agent can’t just drop the state; it
must physically backtrack.
• Online Depth-First Search:
• May have an arbitrarily bad competitive ratio (wandering past the goal). Okay for
exploration; bad for minimizing path cost.
• Online Iterative-Deepening Search:
• The competitive ratio stays small when the state space is a uniform tree.
Online Local Search
• Hill Climbing Search:
• Hill climbing also has physical locality in its node expansions; it is, in fact, already
an online search algorithm.
• Local maxima are problematic: we can’t randomly transport the agent to a new
state in an effort to escape a local maximum.
• Random Walk as an alternative:
• Select an action at random from the current state.
• It will eventually find a goal node in a finite space.
• It can be very slow, especially if “backward” steps are as common as “forward” ones.
• Hill Climbing with Memory instead of randomness:
• Store a “current best estimate” of the cost to the goal at each visited state; the
starting estimate is just h(s).
• Augment the estimate based on experience in the state space. This tends to
“flatten out” local minima, allowing progress. Employ optimism under uncertainty:
• Untried actions are assumed to have the least possible cost, which encourages
exploration of untried paths. A compressed sketch follows.
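A compressed, assumption-laden sketch of this "hill climbing with memory" idea (essentially one decision step of an LRTA*-style agent). It assumes the agent has already recorded, for visited states, the available actions, their outcomes, and step costs (the actions, result, and cost callables here); H is the table of learned cost-to-goal estimates, seeded by the heuristic h.

```python
def lrta_step(s, H, h, actions, result, cost):
    """One decision of hill climbing with memory (LRTA*-style).
    H is the dict of learned cost-to-goal estimates, updated in place."""
    def estimate(s2):
        # Optimism under uncertainty: a state never estimated before is
        # assumed to cost only its plain heuristic value h(s2).
        return H.get(s2, h(s2))

    # Move toward the neighbour that looks cheapest overall.
    a = min(actions(s), key=lambda act: cost(s, act) + estimate(result(s, act)))
    # Learning step: raise the estimate for s to the best one-step lookahead.
    # Over repeated visits this "flattens out" local minima, allowing progress.
    H[s] = cost(s, a) + estimate(result(s, a))
    return a
```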
Learning in Online Search
• Rampant ignorance is a ripe opportunity for learning. The agent
learns a “map” of the environment:
• The outcome of each action in each state.
• Local search agents improve the accuracy of their evaluation function:
• Update the estimate of the value at each visited state.
• We would also like to infer a higher-level domain model.
• Example: “Up” in maze search increases the y-coordinate.
• This requires:
• A formal way to represent and manipulate such general rules
(so far, we have hidden the rules within the successor function).
• Algorithms that can construct general rules based on
observations of the effects of actions.