Artificial Intelligence Unit 2
BY: SURBHI SAROHA
Assistant Professor
 Searching for solutions
 Uninformed search strategies
 Informed search strategies
 Local search algorithms and optimization problems
 Adversarial search
 Search for games
 Alpha-Beta pruning
 Searching is a universal technique of problem solving in AI.
 There are single-player games such as tile puzzles, Sudoku, and crosswords.
 Search algorithms help you find a particular goal position in such games.
 Artificial Intelligence is the study of building agents that act rationally.
 Most of the time, these agents run some kind of search algorithm in the background in order to achieve their tasks.
 A search problem consists of:
 A State Space: the set of all possible states you can be in.
 A Start State: the state from which the search begins.
 A Goal Test: a function that looks at the current state and returns whether or not it is the goal state.
 The solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.
 This plan is found by search algorithms; a minimal sketch of such a problem is given below.
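As an illustration only (this code is not from the slides), here is a minimal Python sketch of a search problem in exactly the sense just defined: a state space, a start state, a goal test, and a breadth-first search that returns a plan. The particular states, actions, and names are assumptions made for the example.

from collections import deque

# Toy search problem: states are the integers 0..9, the start state is 0,
# and the goal test checks for state 7. The actions "+1" and "+3" move
# between states. Everything here is illustrative, not from the slides.
START = 0

def goal_test(state):
    return state == 7

def successors(state):
    # (action, next_state) pairs reachable in one step
    for action, nxt in (("+1", state + 1), ("+3", state + 3)):
        if nxt <= 9:
            yield action, nxt

def breadth_first_plan(start):
    """Return a plan (sequence of actions) from start to a goal state."""
    frontier = deque([(start, [])])      # (state, actions taken so far)
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None                          # no plan exists

print(breadth_first_plan(START))         # ['+1', '+3', '+3']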
 So far we have talked about uninformed search algorithms, which look through the search space for all possible solutions of the problem without any additional knowledge about the search space.
 An informed search algorithm, by contrast, uses additional knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc.
 This knowledge helps agents explore less of the search space and find the goal node more efficiently.
 Informed search is especially useful for large search spaces.
 Informed search uses the idea of a heuristic, so it is also called heuristic search. (A small example of a heuristic function is sketched below.)
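As a concrete illustration of a heuristic (the example and names are mine, not from the slides), the Manhattan distance is a common choice when states are cells on a grid, as in the path-finding example later in these notes.

# Manhattan distance: a common heuristic for grid path-finding when moves
# are restricted to the four axis directions. `cell` and `goal` are
# (row, column) pairs; the names are illustrative.
def manhattan_h(cell, goal):
    (r1, c1), (r2, c2) = cell, goal
    return abs(r1 - r2) + abs(c1 - c2)

print(manhattan_h((0, 0), (3, 4)))  # 7: at least 7 grid moves are needed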
 In this section, we will discuss the following search algorithms:
 Greedy Search
 A* Tree Search
 A* Graph Search
 In greedy search, we expand the node closest to the goal node. The “closeness” is estimated by a heuristic h(x).
 Heuristic: a heuristic h is defined as
h(x) = estimate of the distance of node x from the goal node.
The lower the value of h(x), the closer the node is to the goal.
 Strategy: expand the node closest to the goal state, i.e. expand the node with the lowest h value.
 Question. Find the path from S to G using greedy search.
 The heuristic value h of each node is written below the node’s name.
 Solution. Starting from S, we can traverse to A (h=9) or D (h=5). We choose D, as it has the lower heuristic cost.
 Now from D, we can move to B (h=4) or E (h=3). We choose E, which has the lower heuristic cost.
 Finally, from E, we go to G (h=0). This entire traversal is shown in the search tree below, in blue.
[Figure: greedy search tree with the traversal S → D → E → G highlighted in blue]
 Path: S -> D -> E -> G
 Advantage: works well on informed search problems, often reaching a goal in fewer steps.
 Disadvantage: can turn into unguided DFS in the worst case.
 A short code sketch reproducing this traversal is given below.
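This Python sketch reconstructs the example. The graph edges and heuristic values are taken from the worked example above (S connects to A and D, D to B and E, E to G); h(S) is a placeholder since it is never compared against anything, and edge costs are not needed because greedy search looks only at h. It is an illustrative reconstruction, not code from the slides.

import heapq

# Graph and heuristic values reconstructed from the worked example above.
graph = {"S": ["A", "D"], "A": [], "D": ["B", "E"], "B": [], "E": ["G"], "G": []}
h = {"S": 10, "A": 9, "D": 5, "B": 4, "E": 3, "G": 0}   # h(S) = 10 is assumed

def greedy_best_first(start, goal):
    """Always expand the frontier node with the lowest heuristic value h."""
    frontier = [(h[start], start, [start])]   # (h, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph[node]:
            heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first("S", "G"))            # ['S', 'D', 'E', 'G']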
 A* is used to approximate the shortest path in real-life situations, such as maps and games, where there can be many obstacles.
 We can consider a 2D grid with several obstacles, starting from a source cell (coloured red in the figure) and trying to reach a goal cell (coloured green).
[Figure: 2D grid with obstacles, source cell in red, goal cell in green]
 The A* search algorithm is one of the best-known and most popular techniques used in path-finding and graph traversal.
 Informally speaking, A* search, unlike blind traversal techniques, has “brains”.
 That is, it uses problem-specific knowledge, which is what separates it from conventional uninformed algorithms.
 This is explained in detail in the sections below.
 It is also worth mentioning that many games and web-based maps use this algorithm to find shortest paths (or good approximations of them) very efficiently.
 A* is a best-first search that prefers shorter (cheaper) paths over longer ones.
 A* is optimal as well as complete.
 What do Optimal and Complete mean here?
 Optimal means that A* is sure to find a least-cost path from the source to the destination (given an admissible, i.e. non-overestimating, heuristic), and Complete means that it is guaranteed to find a path to the destination whenever one exists.
 So does that make A* the best algorithm?
 Well, in most cases, yes.
 But A* can be slow, and it also requires a lot of memory, since it keeps all the generated paths on the frontier. This gives other, faster algorithms an upper hand in some settings, but A* is nevertheless one of the best algorithms out there. A sketch of A* on a grid follows.
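A minimal, illustrative A* sketch on a small grid, tying back to the 2D-grid example earlier. It orders the frontier by f(n) = g(n) + h(n), where g is the cost so far and h is the Manhattan-distance heuristic. The grid layout, unit step costs, and all names are assumptions made for this example, not taken from the slides.

import heapq

# 0 = free cell, 1 = obstacle; the layout is chosen only for illustration.
GRID = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def h(cell, goal):                      # Manhattan-distance heuristic
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def neighbors(cell):
    r, c = cell
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
            yield (nr, nc)

def a_star(start, goal):
    """Expand the frontier entry with the lowest f = g + h."""
    frontier = [(h(start, goal), 0, start, [start])]   # (f, g, cell, path)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for nbr in neighbors(cell):
            ng = g + 1                                  # unit step cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(frontier, (ng + h(nbr, goal), ng, nbr, path + [nbr]))
    return None

print(a_star((0, 0), (3, 3)))   # a shortest path of 6 moves from (0, 0) to (3, 3)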
 Generic search algorithm: given a graph, start
nodes, and goal nodes, incrementally explore
paths from the start nodes.
 Maintain a frontier of paths from the start
node that have been explored.
 As search proceeds, the frontier expands into
the unexplored nodes until a goal node is
encountered.
[Figure: a search graph with the explored region, the frontier, and the unexplored nodes]
 The way in which the frontier is expanded
defines the search strategy.
 Input: a graph; a set of start nodes; a Boolean procedure goal(n) that tests whether n is a goal node.

frontier := {⟨s⟩ : s is a start node}
while frontier is not empty:
    select and remove a path ⟨n0, ..., nk⟩ from frontier
    if goal(nk): return ⟨n0, ..., nk⟩
    for every neighbour n of nk: add ⟨n0, ..., nk, n⟩ to frontier
end while
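A direct Python rendering of this generic loop (illustrative; the adjacency-dict graph representation and all names are mine, not from the slides). The selection step is left as a parameter because, as the surrounding slides note, which path is selected from the frontier is exactly what defines the search strategy.

def generic_search(graph, start_nodes, goal, select_index=0):
    """Generic graph search: maintain a frontier of paths and expand it
    until a goal node is found.  `select_index` chooses which path is
    taken from the frontier (0 = FIFO, BFS-like; -1 = LIFO, DFS-like)."""
    frontier = [[s] for s in start_nodes]        # frontier of paths
    while frontier:
        path = frontier.pop(select_index)        # select and remove a path
        last = path[-1]
        if goal(last):
            return path                          # the path ⟨n0, ..., nk⟩
        for n in graph.get(last, []):            # every neighbour of nk
            frontier.append(path + [n])          # add ⟨n0, ..., nk, n⟩
    return None

# Example use on a tiny hypothetical graph:
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
print(generic_search(graph, ["S"], lambda n: n == "G"))  # ['S', 'A', 'G']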
 After the algorithm returns, it can be asked
for more answers and the procedure
continues.
 Which value is selected from the frontier
defines the search strategy.
 The neighbor relationship defines the graph.
 The goal function defines what is a solution.
 Yes, local search algorithms also work for pure optimization problems.
 A pure optimization problem is one in which every node (state) is a potential solution.
 The target is to find the best state of all, according to an objective function.
 Note: an objective function is a function whose value is either minimized or maximized, depending on the context of the optimization problem. In the case of search algorithms, an objective function can be, for example, the path cost for reaching the goal node.
 Let’s understand the working of a local search algorithm with the help of an example.
 Consider the state-space landscape below, which has both:
 Location: defined by the state.
 Elevation: defined by the value of the objective function or heuristic cost function.
[Figure: state-space landscape, with elevation (objective value) plotted against state]
 The local search algorithm explores the above landscape by looking for the following two points:
 Global minimum: if the elevation corresponds to a cost, then the task is to find the lowest valley, which is known as the global minimum.
 Global maximum: if the elevation corresponds to an objective function, then the task is to find the highest peak, which is known as the global maximum. It is the highest point in the landscape.
 A small hill-climbing sketch follows.
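To illustrate how a local search moves over such a landscape, here is a minimal hill-climbing sketch (hill climbing is used as a representative local search method; it is not named on the slide). It repeatedly moves to the best neighbouring state and stops at a peak, which may be only a local maximum. The objective function and neighbourhood below are toy assumptions.

# Toy landscape: states are the integers 0..20 and the objective function
# (the "elevation") is a bumpy curve chosen only for illustration.
def objective(x):
    return -(x - 13) ** 2 + 5 * (x % 4)

def neighbours(x):
    return [n for n in (x - 1, x + 1) if 0 <= n <= 20]

def hill_climb(start):
    """Move to the best neighbour while it improves the objective."""
    current = start
    while True:
        best = max(neighbours(current), key=objective)
        if objective(best) <= objective(current):
            return current              # a peak: local or global maximum
        current = best

print(hill_climb(0), hill_climb(14))    # different starts may end on different peaks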
 Adversarial search is search in which we examine the problems that arise when we try to plan ahead in a world where other agents are planning against us.
 In previous topics, we studied search strategies that involve only a single agent aiming to find a solution, often expressed as a sequence of actions.
 But there might be situations where more than one agent is searching for a solution in the same search space; this situation usually occurs in game playing.
 An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the others and plays against them.
 Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
 So, searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as games.
 Games are modelled as a search problem together with a heuristic evaluation function; these are the two main factors that help model and solve games in AI.
 Adversarial search is a game-playing technique in which the agents operate in a competitive environment.
 Conflicting goals are given to the agents (multi-agent setting). The agents compete with one another and try to defeat one another in order to win the game.
 Such conflicting goals give rise to adversarial search.
 Here, game playing means games in which human intelligence and logic are used, excluding factors such as luck.
 Tic-tac-toe, chess, checkers, etc. are games of this type, where no luck is involved and only reasoning matters.
 Mathematically, this search is based on the concept of game theory.
 According to game theory, a game is played between two players.
 To complete the game, one player has to win, and the other loses automatically.
 We will study a classic AI problem: games. The simplest scenarios, which we will focus on for the sake of clarity, are two-player, perfect-information games such as tic-tac-toe and chess.
 Example: playing tic-tac-toe.
 Maxine and Minnie are true game enthusiasts.
 They just love games.
 Especially two-person, perfect information
games such as tic-tac-toe or chess.
 One day they were playing tic-tac-toe. Maxine,
or Max as her friends call her, was playing with X.
 Minnie, or Min as her friends call her, had the
Os.
 Min had just played her turn and the board
looked as follows:
[Figure: the tic-tac-toe board after Min’s move]
 Max was looking at the board and
contemplating her next move, as it was her
turn, when she suddenly buried her face in
her hands in despair, looking quite like Garry
Kasparov playing Deep Blue in 1997.
 Yes, Min was close to getting three Os on the
top row, but Max could easily put a stop to
that plan.
 Alpha-beta pruning is a modified version of the minimax algorithm.
 It is an optimization technique for the minimax algorithm, and it is also called the Alpha-Beta algorithm. (A plain minimax sketch is given below for reference.)
 Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only tree leaves but entire subtrees.
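For reference, a minimal plain-minimax sketch (illustrative; the nested-list tree representation and names are my assumptions, not from the slides). Alpha-beta pruning, described next, adds cutoffs to exactly this recursion so that parts of the tree need not be examined.

# Minimax on a game tree given as nested lists: an int is a leaf value,
# a list is an internal node whose children are its elements.
def minimax(node, maximizing):
    if isinstance(node, int):          # leaf: its utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny hypothetical tree with MAX to move at the root.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # 3: MIN yields 3 or 2, MAX picks the 3 branch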
 The word ‘pruning’ means cutting down branches and leaves.
 To those familiar with data science, it is a much-used term that refers to post- and pre-pruning in decision trees and random forests.
 Alpha-beta pruning is essentially the pruning of useless branches: branches of the game tree that cannot affect the final decision.
 Alpha: the best choice (highest value) that we have found so far at any point along the path of the Maximizer. The initial value of alpha is −∞.
 Beta: the best choice (lowest value) that we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
 The condition for alpha-beta pruning is α >= β.
 Each node has to keep track of its alpha and beta values.
 Alpha can be updated only when it is MAX’s turn and, similarly, beta can be updated only when it is MIN’s turn.
 MAX will update only alpha values, and the MIN player will update only beta values.
 While backing up the tree, node values (not alpha and beta values) are passed to the parent nodes.
 Alpha and beta values are passed down only to child nodes.
 We will first start with the initial move. We initially define the alpha and beta values as the worst case, i.e. α = −∞ and β = +∞.
 We prune a node only when alpha becomes greater than or equal to beta.
[Figure: the example game tree with initial values α = −∞ and β = +∞ at the root]
 2. Since the initial value of alpha is less than beta, we do not prune.
 Now it is MAX’s turn, so at node D the value of alpha will be calculated.
 The value of alpha at node D will be max(2, 3), so the value at node D will be 3.
 3. The next move is at node B, and it is MIN’s turn now.
 So, at node B, beta will be min(3, +∞) = 3; the values at node B are alpha = −∞ and beta = 3.
[Figure: the game tree after step 3, with α = −∞ and β = 3 at node B]
 In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = −∞ and β = 3 are passed down to it.
 4. Now it is MAX’s turn, so at node E we take the MAX.
 The current value of alpha at E is −∞, and it is compared with 5.
 So max(−∞, 5) = 5, and at node E we have alpha = 5 and beta = 3.
 Now alpha is greater than beta, which satisfies the pruning condition, so we can prune the right successor of node E (it is not traversed), and the value at node E will be 5.
[Figure: the game tree after pruning the right successor of node E; node E’s value is 5]
 6. In the next step, the algorithm comes back to node A from node B.
 At node A, alpha is updated to max(−∞, 3) = 3. So now the values of alpha and beta at node A are (3, +∞), and they are passed down to node C. The same values are then passed on to node F.
 7. At node F, the value of alpha is compared with the left child, which is 0.
 So max(0, 3) = 3; it is then compared with the right child, which is 1, and max(3, 1) = 3. α therefore remains 3, but the node value of F becomes 1.
[Figure: the game tree after evaluating node F, whose value is 1]
 8. Node F returns the node value 1 to C, where it is compared with the beta value at C.
 It is MIN’s turn, so beta = min(+∞, 1) = 1. Now at node C, α = 3 and β = 1, and alpha is greater than beta, which again satisfies the pruning condition.
 So the next successor of node C, i.e. G, is pruned, and the algorithm does not compute the entire subtree under G.
[Figure: the game tree after pruning the subtree under node G]
 Now C returns its node value 1 to A, and the best value at A is max(1, 3) = 3.
 The tree shown above is the final tree; it indicates which nodes were computed and which were pruned and never computed.
 So, for this example, the optimal value for the maximizer is 3.
 A short code sketch that reproduces this result is given below.
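A minimal alpha-beta sketch applied to a reconstruction of the example tree above (MAX at the root A; MIN at B and C; MAX at D, E, F, G; leaves 2 and 3 under D, 5 and an unknown value under E, 0 and 1 under F, unknown values under G). The leaves that the walkthrough prunes are given placeholder values, marked in the comments, since the algorithm never examines them; the function and variable names are mine, not from the slides.

import math

# Game tree as nested lists: ints are leaf utilities, lists are internal nodes.
# Levels alternate MAX, MIN, MAX, ... starting from the root (node A).
# Leaves marked "placeholder" were pruned in the walkthrough, so any value works.
tree_A = [
    [               # node B (MIN)
        [2, 3],     # node D (MAX) -> value 3
        [5, 99],    # node E (MAX): right leaf 99 is a placeholder (pruned)
    ],
    [               # node C (MIN)
        [0, 1],     # node F (MAX) -> value 1
        [99, 99],   # node G (MAX): placeholders (the whole subtree is pruned)
    ],
]

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta cutoffs: MAX updates alpha, MIN updates beta,
    and a branch is cut as soon as alpha >= beta."""
    if isinstance(node, int):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:           # pruning condition
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:           # pruning condition
                break
        return value

print(alphabeta(tree_A, maximizing=True))   # 3, matching the worked example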
 Thank you 
