ARTIFICIAL INTELLIGENCE
UNIT-2
PROBLEM SOLVING BY SEARCHING
 Problem formulation is the process of deciding what actions and
states to consider, given a goal.
 Goal formulation, based on the current situation and the agent’s
performance measure, is the first step in problem solving.
Steps to build a system to solve a particular problem—
o Problem Definition – It includes precise specifications of what the
initial situation will be as well as what final situations constitute
acceptable solutions to the problem.
o Problem Analysis – Analyse the various possible techniques for
solving the problem.
o Knowledge Representation - Isolate and represent the task
knowledge that is necessary to solve the problem.
Steps to define a problem--
 Define a state space that contains all the possible configurations
of the relevant objects.
 Specify one or more states within that space that describe
possible situations from which the problem solving process may
start.
 Specify one or more states (goal states) that would be acceptable
as solutions to the problem.
 Specify a set of rules that describe the actions available.
o Selection - Choose the best problem solving technique and apply it to
the particular problem.
Characteristics of AI problem
 Is the problem decomposable into smaller or easier sub-problems?
 Are solution steps ignorable, recoverable or irrecoverable?
 Does the program use an internally consistent knowledge base?
 Is the program able to learn?
An AI problem can be defined by using following components--
 Initial state – one or more states within that space that
describe possible situations from which the problem solving
process may start.
 Description of the possible actions available to the agent.
 State space definition -- contains all the possible configurations
of the relevant objects.
 Path – is the sequence of states connected by a sequence of
actions.
 Goal states -- one or more states that would be acceptable as
solutions to the problem.
 Goal test – determines whether a given state is a goal state.
 Path cost – function that assigns a numeric cost to each path.
 Step cost – the cost of an individual action; the path cost is the
sum of the step costs along the path.
 Solution– a path from the initial state to a goal state.
8-puzzle problem
 States– state description specifies location of each of the eight
tiles and the blank in one of the nine squares.
 Initial state – any state
 Successor function-- generates the legal states that result from
trying actions such as moving blank tile to the left, right, up or
down.
 Goal test – checks whether the state matches the goal
configuration.
 Path cost – each step costs 1, so the path cost is the number of
steps in the path.
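The successor function above can be sketched directly; a minimal Python illustration (the tuple encoding of the board and the function name are assumptions, with 0 marking the blank square):

```python
def puzzle_successors(state):
    """Legal states from sliding the blank left, right, up, or down.
    `state` is a tuple of 9 entries read row by row; 0 is the blank."""
    b = state.index(0)                    # position of the blank
    row, col = divmod(b, 3)
    moves = []
    for dr, dc in ((0, -1), (0, 1), (-1, 0), (1, 0)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:     # stay on the board
            t = r * 3 + c
            new = list(state)
            new[b], new[t] = new[t], new[b]   # swap blank with that tile
            moves.append(tuple(new))
    return moves
```

A corner blank yields two successors, a centre blank four, matching the legal-move description above.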
Route finding problem
 States– each state is represented by a location and the current
time.
 Initial state – problem specific.
 Successor function-- returns the states resulting from taking any
scheduled flight, leaving later than the current time plus the
within-airport transit time, from the current airport to another.
 Goal test – is the agent at the destination airport by some
predefined time?
 Path cost– monetary cost, waiting time, flight time, customs, seat
quality, time, day, type of airplane, and so on.
8-queens problem
 States– any arrangement of 0 to 8 queens on the board is a state.
 Initial state – no queens on the board.
 Successor function-- add a queen to any empty square.
 Goal test – 8 queens are on the board, none attacked.
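The successor function and goal test above translate naturally into code. A minimal sketch, assuming states are tuples of row indices, one per filled column — a common simplification that places at most one queen per column rather than on "any empty square":

```python
def queens_successors(state):
    """Add a queen to any row of the next empty column."""
    return [state + (row,) for row in range(8)]

def queens_goal_test(state):
    """8 queens on the board, none attacking another."""
    if len(state) != 8:
        return False
    for i in range(8):
        for j in range(i + 1, 8):
            if state[i] == state[j]:                  # same row
                return False
            if abs(state[i] - state[j]) == j - i:     # same diagonal
                return False
    return True
```

The one-queen-per-column encoding shrinks the state space considerably while leaving the goal test unchanged.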
Water-jug problem
You are given two jugs, a 4-litre one and a 3-litre one. Neither has
any measuring markers on it. There is a pump that can be used to
fill the jugs with water. How can you get exactly 2 litres of
water into the 4-litre jug?
 State space – can be described as the set of ordered pairs of
integers (x,y),
where
x -- the quantity of water in the 4-litre jug, i.e. x = 0, 1, 2, 3, or 4
y -- the quantity of water in the 3-litre jug, i.e. y = 0, 1, 2, or 3
 Initial state – (0,0)
 Goal state – (2,n) for any value of n
One solution path (X = litres in the 4-litre jug, Y = litres in the 3-litre jug):
X  Y
0  0
0  3
3  0
3  3
4  2
0  2
2  0
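A solution path like the table above can be found mechanically by an uninformed search over (x, y) states. A breadth-first sketch, with the fill/empty/pour rule set written out explicitly (the rule set and function names are illustrative, not from the slides):

```python
from collections import deque

def jug_successors(x, y, cap_x=4, cap_y=3):
    """All states reachable in one move: fill, empty, or pour either jug."""
    pour_xy = min(x, cap_y - y)           # amount pourable from X into Y
    pour_yx = min(y, cap_x - x)           # amount pourable from Y into X
    return {
        (cap_x, y), (x, cap_y),           # fill a jug from the pump
        (0, y), (x, 0),                   # empty a jug on the ground
        (x - pour_xy, y + pour_xy),       # pour X into Y
        (x + pour_yx, y - pour_yx),       # pour Y into X
    }

def solve_jugs(start=(0, 0), target_x=2):
    """BFS; returns the path from start to the first state with x == target_x."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == target_x:
            path = []                     # reconstruct via parent links
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in jug_successors(*state):
            if nxt not in parent:         # each state visited at most once
                parent[nxt] = state
                frontier.append(nxt)
    return None
```

Because BFS expands states level by level, the returned path uses the fewest moves.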
SEARCHING FOR SOLUTIONS
 Search tree – generated by the initial state and the successor
function.
 Search node– the root of the search tree corresponds to the
initial state. A node can be in one of three states—
o Closed – not visited yet,
o Generated – visited but not yet explored or expanded,
o Expanded – the successor function has been applied.
Basically a node is a data structure with five components:
 STATE – the state in the state space to which the node corresponds
 PARENT – the node in the search tree that generated this node.
 ACTION – the action that was applied to the parent to generate
the node.
 PATH-COST – the cost of the path from initial state to the node. It
is denoted by g(n).
 DEPTH – the number of steps along the path from the initial state.
Performance Measurement
Following four criteria—
 Completeness
 Optimality
 Time complexity
 Space complexity
PRODUCTION SYSTEM
(Searching of right rule among the set of rules)
A production system consists of—
 A set of rules, each consisting of a left side (pattern) that determines the
applicability of the rule and a right side that describes the operations to be
performed if the rule is applied.
 One or more knowledge bases that contain whatever information is
appropriate for the particular task.
 A control strategy that specifies the order in which the rules will be compared
to the database and a way of resolving the conflicts that arise when several
rules match at once.
 A rule applier.
CONTROL STRATEGY
Requirements are—
 First – a good control strategy should cause motion.
 Second – a good control strategy should be systematic.
UNINFORMED SEARCH STRATEGIES
No additional information about states, beyond that given in the problem
definition, is available.
BREADTH-FIRST SEARCH
Algorithm-
1. Create a variable called NODE-LIST and set it to the initial
state.
2. Until a goal state is found or NODE-LIST is empty do:
a. Remove the first element from NODE-LIST and call it E.
If NODE-LIST was empty, quit.
b. For each way that each rule can match the state described in
E do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the end of the
NODE-LIST.
 Can be implemented by calling TREE-SEARCH with an
empty fringe that is a FIFO queue.
 The FIFO queue puts all newly generated successors at the end
of the queue.
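The NODE-LIST algorithm with its FIFO queue can be sketched as follows; `successors` and `is_goal` are assumed callables standing in for the rule set and the goal test:

```python
from collections import deque

def breadth_first_search(initial, successors, is_goal):
    """Expand states in FIFO order; newly generated states go to the rear."""
    if is_goal(initial):
        return initial
    node_list = deque([initial])          # the FIFO fringe (NODE-LIST)
    seen = {initial}                      # avoid revisiting states
    while node_list:
        state = node_list.popleft()       # step a: take the first element
        for new_state in successors(state):   # step b: apply every rule
            if is_goal(new_state):
                return new_state          # step b.ii: goal found
            if new_state not in seen:
                seen.add(new_state)
                node_list.append(new_state)   # step b.iii: add at the end
    return None
```

For example, `breadth_first_search(1, lambda s: [s + 1, s * 2], lambda s: s == 10)` reaches 10 from 1 via increment and doubling moves.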
Performance Criteria
 Complete – if the shallowest goal node is at some finite
depth d.
 Optimal – if the path cost is a nondecreasing function of the depth
of the node.
 Time Complexity – O(b^d), where b is the branching factor and d is
the goal depth:
b nodes at depth 1
b*b nodes at depth 2
b*b*b nodes at depth 3 and so on…
b + b^2 + b^3 + ... + b^d = O(b^d)
 Space Complexity -- O(b^d)
All nodes at a given depth must be stored in order to generate the
nodes at the next depth, so O(b^(d-1)) nodes must be stored at depth
d-1 to generate the O(b^d) nodes at depth d.
DEPTH-FIRST SEARCH
 Always expands the deepest node in the current fringe of the
search tree.
 Can be implemented by TREE-SEARCH with a LIFO queue
(STACK).
DEPTH-FIRST SEARCH
Algorithm-
1. If the initial state is a goal state, quit and return success.
2. Otherwise do the following until success or failure is
signalled—
a. Generate a successor, called E, of the initial state. If there
are no more successors, signal failure.
b. Call DEPTH-FIRST SEARCH with E as the initial state.
c. If success is returned, signal success. Otherwise continue
in this loop.
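The recursive algorithm above can be sketched as follows; the `limit` parameter is an added safeguard against infinite descent, not part of the original algorithm:

```python
def depth_first_search(state, successors, is_goal, limit=20):
    """Recursive DFS mirroring the algorithm above; `limit` bounds
    the recursion depth (an assumption added for safety)."""
    if is_goal(state):                    # step 1
        return state
    if limit == 0:
        return None                       # treat the cut-off as failure
    for succ in successors(state):        # step 2a: generate a successor E
        result = depth_first_search(succ, successors, is_goal, limit - 1)
        if result is not None:            # step 2c: propagate success
            return result
    return None                           # no successor led to a goal
```

Only the current path is kept on the call stack, which is what gives DFS its modest space requirement.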
 Time Complexity – O(b^d), if the depth cut-off is d.
 Space Complexity -- O(d)
DEPTH-FIRST ITERATIVE DEEPENING SEARCH
 Performed as a form of repeated depth-first search, moving to
a successively deeper depth with each iteration.
 Begins by performing a depth-first search to a depth of one.
 It then discards all nodes generated and starts over doing a search
to a depth of two.
 If no goal has been found, it discards all nodes generated and
does a depth first search to a depth of three.
 Process continues until a goal node is found or some maximum
depth is reached.
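The iteration described above, a depth-limited search restarted at increasing depths, can be sketched as:

```python
def iterative_deepening(initial, successors, is_goal, max_depth=50):
    """Repeated depth-limited DFS, deepening by one level per iteration.
    `max_depth` stands in for 'some maximum depth' in the text."""
    def dls(state, depth):
        if is_goal(state):
            return state
        if depth == 0:
            return None                   # cut-off reached
        for succ in successors(state):
            found = dls(succ, depth - 1)
            if found is not None:
                return found
        return None

    for depth in range(max_depth + 1):    # depth 0, 1, 2, ... until a goal
        result = dls(initial, depth)      # earlier nodes are simply discarded
        if result is not None:
            return result
    return None
```

Regenerating the shallow levels each round costs little, since the deepest level dominates the node count.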
Uniform Cost Search
 When all step costs are equal, Breadth-first search is optimal
because it always expands the shallowest unexpanded node.
 Uniform-cost search expands the node n with the lowest path cost
g(n).
 It does not care about the number of steps a path has, but only
about its total cost.
 It is guided by path costs rather than depths, so its complexity is not
easily characterized in terms of b and d.
 It expands nodes in order of their optimal path cost.
There are two significant differences from breadth-first search –
• First difference is that the goal test is applied to a node when it is
selected for expansion rather than when it is first generated.
• The second difference is that a test is added in case a better path is
found to a node currently on the frontier.
• When all step costs are the same, uniform-cost search is similar to
BFS, except that the BFS stops as soon as it generates a goal,
whereas uniform-cost search examines all the nodes at the goal’s
depth to see if one has a lower cost.
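A sketch of uniform-cost search reflecting both differences above (goal test on expansion, and replacement of costlier paths to frontier nodes); `successors` is assumed to yield (state, step-cost) pairs:

```python
import heapq

def uniform_cost_search(start, successors, is_goal):
    """Expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start)]               # priority queue ordered by g
    best_g = {start: 0}
    while frontier:
        g, state = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                      # stale entry: a cheaper path exists
        if is_goal(state):                # goal test on expansion, not generation
            return g, state
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g       # better path to a frontier node
                heapq.heappush(frontier, (new_g, nxt))
    return None
```

On a graph where A reaches C directly for 5 but via B for 1 + 1, the search returns the cost-2 route even though it has more steps.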
Bidirectional Search
 It is used when a problem has a single clearly stated goal state and
all node generation operators have inverses.
 It is performed by searching forward from the initial node and
backward from the goal node simultaneously.
 Thus, the program must store the nodes generated by both paths
until a common node is found.
 Uninformed search methods can be used to perform bidirectional
searching with some modifications.
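A minimal sketch of the idea, alternating one breadth-first level from each end until a node generated on one side has already been seen by the other (move reversibility is assumed, as the slide requires; the function name is illustrative):

```python
from collections import deque

def bidirectional_search(start, goal, neighbours):
    """Alternate BFS levels from both ends; return the common node."""
    if start == goal:
        return start
    seen_f, seen_b = {start}, {goal}
    front_f, front_b = deque([start]), deque([goal])
    while front_f and front_b:
        for _ in range(len(front_f)):     # expand one full level
            state = front_f.popleft()
            for nxt in neighbours(state):
                if nxt in seen_b:
                    return nxt            # the two searches have met
                if nxt not in seen_f:
                    seen_f.add(nxt)
                    front_f.append(nxt)
        front_f, front_b = front_b, front_f   # swap roles: expand other side
        seen_f, seen_b = seen_b, seen_f
    return None
```

Both frontiers (and their seen sets) must be stored until a common node is found, which is the memory cost mentioned above.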
HEURISTIC SEARCH
 A Heuristic is a technique that improves the efficiency of a search
process, possibly by sacrificing the claims of completeness.
 Heuristics are tour guides.
 They improve the quality of the paths that are explored.
 Using good heuristic, one can get good solutions to hard problems.
Travelling salesman problem –
Heuristic – the nearest-neighbour heuristic
Procedure –
Step 1. Arbitrarily select a starting city.
Step 2. To select the next city, look at all cities not yet visited, and
select the one closest to the current city. Go to it next.
Step 3. Repeat step 2 until all cities have been visited.
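The three steps above as a sketch; `dist` is an assumed distance function and the city labels are arbitrary:

```python
def nearest_neighbour_tour(cities, dist, start):
    """Greedy tour: repeatedly visit the closest unvisited city."""
    unvisited = set(cities) - {start}     # step 1: start city chosen by caller
    tour = [start]
    while unvisited:                      # step 3: repeat until all visited
        current = tour[-1]
        nearest = min(unvisited, key=lambda c: dist(current, c))
        tour.append(nearest)              # step 2: go to the closest city
        unvisited.remove(nearest)
    return tour
```

The tour is built in a single greedy pass, so it is fast but not guaranteed optimal — exactly the trade-off heuristics accept.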
Heuristic Function
 A heuristic function is a function that maps from problem state
description to measures of desirability, usually represented as
numbers.
 The purpose of a heuristic function is to guide the search process in
the most profitable direction by suggesting which path to follow first
when more than one is available.
GENERATE-AND-TEST
 A depth-first search procedure, as complete solutions must be generated
before they can be tested.
 May also operate by generating solutions randomly, but then there is no
guarantee that a solution will ever be found.
 Also called British-Museum algorithm.
 Can be implemented as a depth-first search tree with backtracking.
Algorithm —
1. Generate a possible solution.
2. Test to see if this is actually a solution by comparing the chosen point or the
endpoint of the chosen path to the set of acceptable goal states.
Hill Climbing
 Is used when only a good heuristic function is available for evaluating
states and no other useful information is available.
 At each point in the search path, a successor node that appears to lead
most quickly to the top of the hill (goal state) is selected for
exploration.
 It is like depth-first searching where the most promising child is
selected for expansion.
 When the children have been generated, alternative choices are evaluated
using some type of heuristic function.
 The path that appears most promising is then chosen and no further
reference to the parent or other children is retained.
 This process continues from node-to-node with previously expanded
nodes being discarded.
Simple Hill Climbing
Algorithm
1. Evaluate the initial state. If it is also a goal state, then return it
and quit.
Otherwise continue with the initial state as the current state.
2. Loop until a solution is found or until there are no new operators
left to be applied in the current state—
(a) Select an operator that has not yet been applied to the current
state and apply it to produce a new state.
(b) Evaluate the new state—
(i) If it is a goal state, then return it and quit.
(ii) If it is not a goal state but it is better than the current
state, then make it the current state.
(iii) If it is not better than the current state, then continue in
the loop.
Steepest-Ascent Hill Climbing
Algorithm
1. Evaluate the initial state. If it is also a goal state, then return it and quit.
Otherwise continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no
change to current state—
(a) Let SUCC be a state such that any possible successor of the current
state will be better than SUCC.
(b) For each operator that applies to the current state do--
(i) Apply the operator and generate a new state.
(ii) Evaluate the new state. If it is a goal state, then return it and quit.
If not, compare it to SUCC. If it is better, then set SUCC to this
state. If it is not better, leave SUCC alone.
(iii) If SUCC is better than the current state, then set the current
state to SUCC.
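The steepest-ascent loop above can be sketched as follows, for a maximization problem with an assumed `value` heuristic function (names are illustrative):

```python
def steepest_ascent(initial, successors, value, is_goal):
    """Move to the single best successor each iteration; stop at a goal
    or when no successor improves on the current state."""
    current = initial
    if is_goal(current):                  # step 1
        return current
    while True:                           # step 2
        succs = list(successors(current))
        if not succs:
            return current                # no operators left to apply
        best = max(succs, key=value)      # SUCC: best successor this round
        if is_goal(best):                 # step 2(b)(ii)
            return best
        if value(best) <= value(current):
            return current                # foothill, plateau, or ridge: stop
        current = best                    # step 2(b)(iii)
```

On the concave function value(x) = -(x - 5)^2 with moves x ± 1, the loop climbs monotonically to x = 5 and halts there.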
Problems with hill climbing —
 Foothill -
o When local maxima or peaks are found. A local maximum
is a state that is better than all its neighbors but is not
better than some other states farther away.
o All children have less promising goal distances than the
parent node.
 Ridge – a special kind of local maximum. When several
adjoining nodes have higher values than the surrounding
nodes.
 Plateau – a flat area of the search space. All neighboring
nodes have the same value.
BRANCH AND BOUND SEARCH
When more than one alternative path may exist between two nodes.
Algorithm –
Step 1. Place the start node of zero path length on the queue.
Step 2. Until the queue is empty or a goal node has been found—
(a) determine if the first path in the queue contains a goal node.
(b) if the first path contains a goal node exit with success,
(c) if the first path does not contain the goal node remove the path
from the queue and form new paths by extending the removed path
by one step,
(d) compute the costs of the new paths and add them to the queue,
(e) Sort the paths on the queue with lowest cost paths in front.
Step 3. Otherwise exit with failure.
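The queue-of-paths algorithm above, sketched directly; `successors` is assumed to yield (state, step-cost) pairs:

```python
def branch_and_bound(start, successors, is_goal):
    """Keep a queue of partial paths sorted by cost; always extend the
    cheapest path first."""
    queue = [(0, [start])]                # step 1: start path of zero length
    while queue:
        cost, path = queue.pop(0)         # step 2(a): first (cheapest) path
        if is_goal(path[-1]):
            return cost, path             # step 2(b): exit with success
        for nxt, step in successors(path[-1]):   # step 2(c): extend one step
            if nxt not in path:           # avoid looping back along this path
                queue.append((cost + step, path + [nxt]))
        queue.sort(key=lambda cp: cp[0])  # step 2(e): lowest-cost paths first
    return None                           # step 3: exit with failure
```

A priority queue would replace the explicit sort in practice; the sort is kept here to mirror step 2(e) literally.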
SIMULATED ANNEALING
 Often used to solve problems in which the number of moves from
a given state is very large, and
 It may not make sense to try all possible moves.
 The best way to select an annealing schedule is to try several and
observe the effect on both the quality of the solution that is found
and the rate at which the process converges.
For implementation, an annealing schedule having three components is
required:
 initial value to be used for temperature of the system,
 criteria to decide when the temperature should be reduced,
 amount by which the temperature should be reduced each time it is
changed.
Algorithm –
Step 1. Evaluate the initial state. If it is also a goal state, then return it and quit.
Otherwise continue with the initial state as the current state.
Step 2. Initialize BEST-SO-FAR to the current state.
Step 3. Initialize T according to the annealing schedule
Step 4. Loop until a solution is found or until there are no new operators left
to be applied in the current state.
(a) Select an operator that has not yet been applied to the current state
and apply it to produce a new state.
(b) Evaluate the new state. Compute—
Δ E = (value of the current state) – (value of the new state)
 If the new state is the goal state, then return it and quit.
 If not, but it is better than the current state, then make it the
current state. Set BEST-SO-FAR to this new state.
 If it is not better than the current state, then make it the current state
with probability p’.
(c) Revise T as necessary according to the annealing schedule.
Step 5. Return BEST-SO-FAR as the answer.
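A sketch of the algorithm above for a maximization problem. The slides leave p' unspecified; the standard choice p' = e^(-ΔE/T) is assumed here, along with a caller-supplied list of temperatures as the annealing schedule:

```python
import math, random

def simulated_annealing(initial, successors, value, schedule):
    """Maximize `value`. ΔE = value(current) - value(new); worse moves
    are accepted with probability p' = exp(-ΔE / T)."""
    current = initial
    best_so_far = current                 # step 2
    for T in schedule:                    # steps 3 / 4(c): T from the schedule
        if T <= 0:
            break
        new = random.choice(list(successors(current)))   # step 4(a)
        delta_e = value(current) - value(new)            # step 4(b)
        if delta_e < 0:                   # new state is better: always move
            current = new
            if value(current) > value(best_so_far):
                best_so_far = current
        elif random.random() < math.exp(-delta_e / T):   # worse: move with p'
            current = new
    return best_so_far                    # step 5
```

High early temperatures make most downhill moves acceptable; as T decays the behaviour converges to plain hill climbing.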
BEST-FIRST SEARCH
 The advantage of BFS is that it does not get trapped on dead-end paths.
 The advantage of DFS is that it allows a solution to be found without
all competing branches having to be expanded.
 Best-first search is a way of combining the advantages of both BFS
and DFS into a single searching method.
Procedure --
 At each step, the most promising of the nodes generated so far is
selected by applying an appropriate heuristic function to each of the
nodes.
 The chosen node is then expanded by using the rules to generate its
successors. A bit of depth-first searching occurs at this point.
 If one of them is the solution, then quit. If not, all these nodes are
added to the set of nodes generated so far.
 At this point, the current branch may start to look less promising than one
of the top-level branches that had been ignored.
 Thus the more promising, previously ignored branch will be
explored now. But the old branch is not forgotten. Its last node
remains in the set of generated but unexpanded nodes.
 Again the most promising node is selected and the process
continues.
 Two lists of nodes are needed to implement Best-first search—
 OPEN – nodes that have been generated and have had the heuristic
function applied to them but which have not been examined.
 CLOSED – nodes that have already been examined.
Algorithm--
1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes left on OPEN do—
a) Pick the best node on OPEN.
b) Generate its successors.
c) For each successor do—
i. If it has not been generated before, evaluate it, add it to
OPEN and record its parent.
ii. If it has been generated before, change the parent if this
new path is better than the previous one and update the
cost of getting to this node.
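The OPEN/CLOSED algorithm above can be sketched with a priority queue ordered by heuristic value. Step (ii)'s parent/cost update is omitted in this simplification, since pure best-first ranks nodes by h alone:

```python
import heapq

def best_first_search(initial, successors, h, is_goal):
    """Always expand the OPEN node with the lowest heuristic value h."""
    open_list = [(h(initial), initial)]   # OPEN: generated, not yet examined
    closed = set()                        # CLOSED: already examined
    while open_list:
        _, state = heapq.heappop(open_list)   # step 2a: best node on OPEN
        if is_goal(state):
            return state
        if state in closed:
            continue                      # already examined via another entry
        closed.add(state)
        for succ in successors(state):    # step 2b: generate its successors
            if succ not in closed:
                heapq.heappush(open_list, (h(succ), succ))  # step 2c
    return None
```

Previously ignored branches stay on OPEN, so the search can fall back to them whenever the current branch stops looking promising.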
A* ALGORITHM
 Best-first search is a simplification of the A* algorithm.
 A* algorithm was presented by Hart et al.
Algorithm--
1. Start with OPEN containing only the initial state and set CLOSED
to an empty list. For the initial node, f ’ = 0 + h ’.
2. Until a goal node is found, do the following:
i. If there are no nodes on OPEN, report failure. Otherwise, pick
the node on the OPEN list with the lowest f ’ value and call it
BESTNODE. Remove it from OPEN and place it on CLOSED.
Check if it is the goal node. If so, then exit and report success.
Otherwise generate the successors of BESTNODE. Do not set
BESTNODE to point to them yet. For each SUCCESSOR do
the following:
a. Set SUCCESSOR to point back to BESTNODE.
b. Compute g(SUCCESSOR) = g(BESTNODE) + cost of getting
from BESTNODE to SUCCESSOR.
c. If the SUCCESSOR has already been generated but not
processed, then call this node OLD. Add OLD to the list of
BESTNODE successors. Check whether it is cheaper to get to
OLD via its current parent or to SUCCESSOR via BESTNODE
by comparing their g values:
 If OLD is cheaper then do nothing.
 If SUCCESSOR is cheaper then reset OLD’s parent link to
point to BESTNODE.
 Record the new cheaper path in g(OLD) and update f `(OLD).
d. Check if the new path or the old path is better and set the parent link
and g & f ’ values appropriately. If a better path to OLD has been
found, then do the propagation in the-
 forward direction, with OLD pointing to its successors, each
successor in turn pointing to its successors, till each branch
terminates with a node that either is still on OPEN or has no
successors, and
 backward direction, following each node’s parent link back to its
best known parent. If a node’s parent link points to the node the
propagation came from, then continue the propagation. Otherwise
stop the propagation.
e. If SUCCESSOR was not already on either OPEN or CLOSED, then
put it on OPEN, and add it to the list of BESTNODE’s successors.
Compute –
f ’(SUCCESSOR) = g(SUCCESSOR) + h’(SUCCESSOR)
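A sketch of the algorithm above. Rather than propagating improvements through CLOSED descendants (step d), this simplification re-inserts a node whenever a cheaper path to it is found — a common shortcut that gives the same result when the heuristic is consistent:

```python
import heapq

def a_star(start, successors, h, is_goal):
    """f'(n) = g(n) + h'(n). `successors(state)` yields (next, step_cost)."""
    open_list = [(h(start), 0, start)]    # (f', g, state); f' = 0 + h' at start
    parent, best_g = {start: None}, {start: 0}
    closed = set()
    while open_list:
        f, g, best_node = heapq.heappop(open_list)
        if best_node in closed:
            continue                      # stale entry for an expanded node
        if is_goal(best_node):            # success: rebuild path via parents
            path = []
            while best_node is not None:
                path.append(best_node)
                best_node = parent[best_node]
            return g, path[::-1]
        closed.add(best_node)
        for succ, cost in successors(best_node):
            new_g = g + cost              # step 2.b: g(SUCCESSOR)
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g      # cheaper path: reset the parent link
                parent[succ] = best_node
                heapq.heappush(open_list, (new_g + h(succ), new_g, succ))
    return None
```

With an admissible h', the first goal node removed from OPEN lies on an optimal path, which is what the admissibility condition below formalizes.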
Properties of heuristic search algorithms:
 Admissibility Condition– Algorithm A is admissible if it is
guaranteed to return an optimal solution when one exists.
 Completeness Condition– Algorithm A is complete if it always
terminates with a solution when one exists.
 Dominance Property– Let A1 and A2 be admissible algorithms with
heuristic estimation functions h1* and h2* respectively. A1 is said to
be more informed than A2 whenever h1*(n) > h2*(n) for all n; A1 is
then said to dominate A2.
 Optimality Property– Algorithm A is optimal over a class of
algorithms if A dominates all members of the class.
PROBLEM REDUCTION– AND-OR GRAPHS
Algorithm
1. Initialize the graph to the starting node.
2. Loop until the starting node is labeled SOLVED or until its cost
goes above FUTILITY.
a) Traverse the graph, starting at the initial node and following the
current best path, and accumulate the set of nodes that are on
that path and have not yet been expanded or labeled as solved.
b) Pick one of these unexpanded nodes and expand it. If there are
no successors, assign FUTILITY as the value of this node.
Otherwise add its successors to the graph and for each of them
compute f’.
c) Change the f’ estimate of the newly expanded node to reflect the
new information provided by its successors. Propagate this change
backward through the graph. If any node contains a successor arc
whose descendants are all solved, label the node itself as SOLVED.
AO* ALGORITHM
Algorithm
1. Place the start node S on OPEN.
2. Using the search tree constructed thus far, compute the most
promising solution tree T0.
3. Select a node N that is both on OPEN and a part of T0. Remove N from
OPEN and place it on CLOSED.
4. If N is a terminal goal node, label N as solved. If the solution of N
results in any of N’s ancestors being SOLVED, label all the
ancestors as SOLVED. If the start node S is solved, exit with
success where T0 is the solution tree. Remove from OPEN all
nodes with a SOLVED ancestor.
5. If N is not solvable, label N as unsolvable. If the start node
is labeled as unsolvable, exit with failure. If any of N’s
ancestors become unsolvable because N is, label them
unsolvable as well. Remove from OPEN all nodes with an
unsolvable ancestor.
6. Otherwise expand node N, generating all of its successors.
For each such successor node that contains more than one
subproblem, generate their successors to give individual
subproblems. Attach to each newly generated node a back
pointer to its predecessor.
7. Return to step 2.

More Related Content

PPTX
AI_03_Solving Problems by Searching.pptx
PPTX
AI UNIT 2 PPT AI UNIT 2 PPT AI UNIT 2 PPT.pptx
PPT
AI Lecture 3 (solving problems by searching)
PPTX
problem space and problem definition in ai
PDF
problem solving in Artificial intelligence .pdf
PPTX
Moduleanaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaad-II.pptx
PPT
2.Problems Problem Spaces and Search.ppt
PPTX
Lec#2
AI_03_Solving Problems by Searching.pptx
AI UNIT 2 PPT AI UNIT 2 PPT AI UNIT 2 PPT.pptx
AI Lecture 3 (solving problems by searching)
problem space and problem definition in ai
problem solving in Artificial intelligence .pdf
Moduleanaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaad-II.pptx
2.Problems Problem Spaces and Search.ppt
Lec#2

Similar to Artificial intelligence with the help of (20)

PDF
Lecture 3 problem solving
PPTX
Chapter 3.pptx
PPT
22sch AI Module 2.ppt 22sch AI Module 2.ppt
PPT
22sch AI Module 2.ppt 22sch AI Module 2.ppt
PPTX
CHAPTER 5.pptx of the following of our discussion
PPTX
3. ArtificialSolving problems by searching.pptx
PPT
02-solving-problems-by-searching-(us).ppt
PPT
Artificial intelligent Lec 3-ai chapter3-search
PPT
Unit 2 for the Artificial intelligence and machine learning
PPTX
unit 2.pptx
PPTX
Popular search algorithms
PPT
m3-searchAbout AI About AI About AI1.ppt
PPT
state-spaces29Sep06.ppt
PPT
Ch4: Searching Techniques 6_2018_12_25!05_35_25_PM.ppt
PPT
3.AILec5nkjnkjnkjnkjnkjnjhuhgvkjhbkhj-6.ppt
PDF
problem solve and resolving in ai domain , probloms
PPTX
chapter 3 Problem Solving using searching.pptx
PPTX
PROBLEM SOLVING AGENTS - SEARCH STRATEGIES
PPTX
AI_Lecture2.pptx
PPTX
Searching is the universal technique of problem solving in Artificial Intelli...
Lecture 3 problem solving
Chapter 3.pptx
22sch AI Module 2.ppt 22sch AI Module 2.ppt
22sch AI Module 2.ppt 22sch AI Module 2.ppt
CHAPTER 5.pptx of the following of our discussion
3. ArtificialSolving problems by searching.pptx
02-solving-problems-by-searching-(us).ppt
Artificial intelligent Lec 3-ai chapter3-search
Unit 2 for the Artificial intelligence and machine learning
unit 2.pptx
Popular search algorithms
m3-searchAbout AI About AI About AI1.ppt
state-spaces29Sep06.ppt
Ch4: Searching Techniques 6_2018_12_25!05_35_25_PM.ppt
3.AILec5nkjnkjnkjnkjnkjnjhuhgvkjhbkhj-6.ppt
problem solve and resolving in ai domain , probloms
chapter 3 Problem Solving using searching.pptx
PROBLEM SOLVING AGENTS - SEARCH STRATEGIES
AI_Lecture2.pptx
Searching is the universal technique of problem solving in Artificial Intelli...
Ad

Recently uploaded (20)

PPTX
mbdjdhjjodule 5-1 rhfhhfjtjjhafbrhfnfbbfnb
PDF
Mega Projects Data Mega Projects Data
PPTX
ALIMENTARY AND BILIARY CONDITIONS 3-1.pptx
PDF
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
PPTX
Data_Analytics_and_PowerBI_Presentation.pptx
PPTX
IBA_Chapter_11_Slides_Final_Accessible.pptx
PPTX
Microsoft-Fabric-Unifying-Analytics-for-the-Modern-Enterprise Solution.pptx
PPT
Reliability_Chapter_ presentation 1221.5784
PPTX
Acceptance and paychological effects of mandatory extra coach I classes.pptx
PDF
Lecture1 pattern recognition............
PPTX
01_intro xxxxxxxxxxfffffffffffaaaaaaaaaaafg
PDF
Recruitment and Placement PPT.pdfbjfibjdfbjfobj
PPTX
The THESIS FINAL-DEFENSE-PRESENTATION.pptx
PPTX
MODULE 8 - DISASTER risk PREPAREDNESS.pptx
PDF
annual-report-2024-2025 original latest.
PDF
Galatica Smart Energy Infrastructure Startup Pitch Deck
PPTX
Qualitative Qantitative and Mixed Methods.pptx
PPTX
STUDY DESIGN details- Lt Col Maksud (21).pptx
PDF
Clinical guidelines as a resource for EBP(1).pdf
mbdjdhjjodule 5-1 rhfhhfjtjjhafbrhfnfbbfnb
Mega Projects Data Mega Projects Data
ALIMENTARY AND BILIARY CONDITIONS 3-1.pptx
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
Data_Analytics_and_PowerBI_Presentation.pptx
IBA_Chapter_11_Slides_Final_Accessible.pptx
Microsoft-Fabric-Unifying-Analytics-for-the-Modern-Enterprise Solution.pptx
Reliability_Chapter_ presentation 1221.5784
Acceptance and paychological effects of mandatory extra coach I classes.pptx
Lecture1 pattern recognition............
01_intro xxxxxxxxxxfffffffffffaaaaaaaaaaafg
Recruitment and Placement PPT.pdfbjfibjdfbjfobj
The THESIS FINAL-DEFENSE-PRESENTATION.pptx
MODULE 8 - DISASTER risk PREPAREDNESS.pptx
annual-report-2024-2025 original latest.
Galatica Smart Energy Infrastructure Startup Pitch Deck
Qualitative Qantitative and Mixed Methods.pptx
STUDY DESIGN details- Lt Col Maksud (21).pptx
Clinical guidelines as a resource for EBP(1).pdf
Ad

Artificial intelligence with the help of

  • 2. 2 PROBLEM SOLVING BY SEARCHING  Problem formulation is the process of deciding what actions and states to consider, given a goal.  Goal formulation, based on current situation and the agent’s performance measure, is the first step in problem solving. Steps to build a system to solve a particular problem— o Problem Definition – It Includes precise specifications of what the initial situation will be as well as what final situations constitute acceptable solutions to the problem. o Problem Analysis – Analyse the various possible techniques for solving the problem. o Knowledge Representation - Isolate and represent the task knowledge that is necessary to solve the problem.
  • 3. 3 Steps to define a problem--  Define a state space that contains all the possible configurations of the relevant objects.  Specify one or more states within that space that describes possible situations from which the problem solving process may start.  Specify one or more states (goal states) that would be acceptable as solutions to the problem.  Specify a set of rules that describe the actions available. o Selection - Choose the best problem solving technique and apply it to the particular problem.
  • 4. 4 Characteristics of AI problem  Is the problem decomposable into smaller or easier sub-problems?  Are solution steps ignorable, recoverable or irrecoverable?  Are programs use internally consistent knowledge base?  Is program able to learn?
  • 5. 5 An AI problem can be defined by using following components--  Initial state – specify one or more states within that space that describes possible situations from which the problem solving process may start.  Description of the possible actions available to the agent.  State space definition -- contains all the possible configurations of the relevant objects.  Path – is the sequence of states connected by a sequence of actions.
  • 6. 6  Goal states -- one or more states that would be acceptable as solutions to the problem.  Goal test – determines whether a given state is a goal state.  Path cost – function that assigns a numeric cost to each path.  Step cost – sum of costs of the individuals actions along the path.  Solution– a path form the initial state to a goal state.
  • 7. 7 8-puzzle problem  States– state description specifies location of each of the eight tiles and the blank in one of the nine squares.  Initial state – any state  Successor function-- generates the legal states that result from trying actions such as moving blank tile to the left, right, up or down.  Goal test – checks whether the states checks the goal configuration.  Path cost – each step costs 1, so the path cost is the number of steps in the path.
  • 8. 8
  • 9. 9 Route finding problem  States– each state is represented by a location and the current time.  Initial state – problem specific.  Successor function-- returns the states resulting form taking any scheduled flight, leaving later than the current time and the with- in airport transit time, from the current airport to another.  Goal test – is flight is at the destination by some predefined time?  Path cost– monetary cost, waiting time, flight time, customs, seat quality, time, day, type of airplane, and so on.
  • 10. 10 8-queens problem  States– any arrangement of 0 to 8 queens on the board is a state.  Initial state – no queens on the board.  Successor function-- add a queen to any empty square.  Goal test – 8 queens are on the board , none attacked.
  • 11. 11
  • 12. 12 Water-jug problem You are given two jugs, a 4-litre one and a 3-litre one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water in jugs. How can you get exactly 2 litre of water into the 4-liter jug.  State space – can be described as the set of ordered pairs of integers (x,y), where x -- the quantity of water in 4 litre jug, i,e. x=0,1,2,3, or 4 y---the quantity of water in 3-litre jug, i.e. y=0,1,2, or 3  Initial state – (0,0)  Goal state – (2,n)
  • 13. 13 X Y 0 0 0 3 3 0 3 3 4 2 0 2 2 0
  • 14. 14 SEARCHING FOR SOLUTIONS  Search tree – generated by the initial state and the successor function.  Search node– root of the search tree and corresponds to the initial node. Three states— o Closed – not visited yet, o Generated – visited but not explored or expended, o Expanded – successor function has been applied. Basically a node is a data structure with five components:  STATE – the state in the state space to which the node corresponds PARENT –the node in the search tree that generated the node.
  • 15. 15  ACTION – the action that was applied to the parent to generate the node.  PATH-COST – the cost of the path from initial state to the node. It is denoted by g(n).  DEPTH – the number of steps along the path from the initial state.
  • 16. 16 Performance Measurement Following four criteria—  Completeness  Optimality  Time complexity  Space complexity
  • 17. 17 PRODUCTION SYSTEM (Searching of right rule among the set of rules) A production system consists of—  A set of rules, each consisting of a left side (pattern) that determines the applicability of the rule and a right side that describes the operations to be performed if the rule is applied.  One or more knowledge base that contain whatever information is appropriate for the particular task.  A control strategy that specifies the order in which the rules will be compared to the database and a way of resolving the conflicts that arise when several rules match at once.  A rule applier.
  • 18. 18 CONTROL STRATEGY Requirements are—  First – a good control strategy should cause motion.  Second – a good control strategy should be systematic. UNINFORMED SEARCH STRATEGIES No information about states is available beyond that provided in the problem definition. BREADTH-FIRST SEARCH Algorithm- 1. Create a variable called NODE-LIST and set it to contain just the initial state. 2. Until a goal state is found or NODE-LIST is empty do:
  • 19. 19 a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit. b. For each way that each rule can match the state described in E do: i. Apply the rule to generate a new state. ii. If the new state is a goal state, quit and return this state. iii. Otherwise, add the new state to the end of the NODE-LIST.
  • 20. 20  Can be implemented by calling the TREE-SEARCH with an empty fringe that is a FIFO queue.  The FIFO queue puts all newly generated successors at the end of the queue.
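The NODE-LIST procedure above can be sketched in Python; the small example graph and the function names are hypothetical:

```python
from collections import deque

def breadth_first_search(initial, successors, is_goal):
    """The slide's NODE-LIST procedure: a FIFO queue of states."""
    if is_goal(initial):
        return initial
    node_list = deque([initial])          # 1. NODE-LIST starts with the initial state
    seen = {initial}
    while node_list:                      # 2. until a goal is found or NODE-LIST is empty
        e = node_list.popleft()           # a. remove the first element and call it E
        for new_state in successors(e):   # b. apply each matching rule to E
            if new_state in seen:
                continue
            seen.add(new_state)
            if is_goal(new_state):        # ii. if the new state is a goal, return it
                return new_state
            node_list.append(new_state)   # iii. otherwise add it to the end of NODE-LIST
    return None

# Hypothetical example graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': []}
found = breadth_first_search('A', lambda s: graph.get(s, []), lambda s: s == 'G')
```

The deque gives the FIFO behaviour the next slide describes: newly generated successors always go to the end of the queue.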
  • 21. 21 Performance Criteria  Complete – if the shallowest goal node is at some finite depth d.  Optimal – if the path cost is a nondecreasing function of the depth of the node.  Time Complexity – b nodes at depth 1, b*b nodes at depth 2, b*b*b nodes at depth 3, and so on, giving b + b^2 + b^3 + … + b^d = O(b^d), where d is the goal depth.
  • 22. 22  Space Complexity -- All nodes at a given depth must be stored in order to generate the nodes at the next depth, so O(b^(d-1)) nodes must be stored at depth d-1 to generate the nodes at depth d, giving a space complexity of O(b^d). DEPTH-FIRST SEARCH  Always expands the deepest node in the current fringe of the search tree.  Can be implemented by TREE-SEARCH with a LIFO queue (stack).
  • 23. 23 DEPTH-FIRST SEARCH Algorithm- 1. If the initial state is a goal state, quit and return success. 2. Otherwise do the following until success or failure is signalled— a. Generate a successor, called E, of the initial state. If there are no more successors, signal failure. b. Call DEPTH-FIRST SEARCH with E as the initial state. c. If success is returned, signal success. Otherwise continue in this loop.
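A minimal recursive sketch of this algorithm in Python (the example graph is hypothetical, and a visited set is added so that the search terminates on graphs with shared states):

```python
def depth_first_search(state, successors, is_goal, visited=None):
    """Recursive DFS following the slide: try one successor branch at a time."""
    if visited is None:
        visited = set()
    visited.add(state)
    if is_goal(state):                  # 1. the current state is a goal state
        return [state]
    for succ in successors(state):      # 2a. generate a successor E
        if succ in visited:
            continue
        result = depth_first_search(succ, successors, is_goal, visited)
        if result is not None:          # 2c. success returned from the recursive call
            return [state] + result
    return None                         # no more successors: failure on this branch

# Hypothetical example graph: the B branch dead-ends, so DFS backtracks to C
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': []}
path = depth_first_search('A', lambda s: graph.get(s, []), lambda s: s == 'G')
```

Note how the dead-end branch A→B→D is fully explored and abandoned before A→C→G succeeds, which is exactly the backtracking behaviour the algorithm describes.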
  • 24. 24  Time Complexity – O(b^d)  Space Complexity – O(d), if the depth cut-off is d.
  • 30. 30 DEPTH-FIRST ITERATIVE DEEPENING SEARCH  Performed as a series of depth-first searches, moving to a successively deeper depth limit with each iteration.
  • 31. 31  Begins by performing a depth-first search to a depth of one.  It then discards all nodes generated and starts over doing a search to a depth of two.  If no goal has been found, it discards all nodes generated and does a depth first search to a depth of three.  Process continues until a goal node is found or some maximum depth is reached.
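The discard-and-restart process above can be sketched in Python; the depth-limited helper, the `max_depth` cap, and the example graph are all illustrative assumptions:

```python
def depth_limited(state, successors, is_goal, limit):
    """Depth-first search that gives up below the depth cut-off."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for succ in successors(state):
        result = depth_limited(succ, successors, is_goal, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening(initial, successors, is_goal, max_depth=20):
    """Repeat depth-first search with cut-offs 0, 1, 2, ...; all nodes from one
    iteration are discarded before the next, deeper iteration starts."""
    for depth in range(max_depth + 1):
        result = depth_limited(initial, successors, is_goal, depth)
        if result is not None:
            return result
    return None                          # no goal within max_depth

# Hypothetical example graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': []}
result = iterative_deepening('A', lambda s: graph.get(s, []), lambda s: s == 'G')
```

The early iterations repeat work, but because the tree grows geometrically with depth, the total overhead is small, and the space used at any moment is only that of a single depth-first search.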
  • 32. 32 Uniform Cost Search  When all step costs are equal, Breadth-first search is optimal because it always expands the shallowest unexpanded node.  Uniform-cost search expands the node n with the lowest path cost g(n).  It does not care about the number of steps a path has, but only about their total cost.  It is guided by path costs rather than depths, so its complexity is not easily characterized in terms of b and d.  It expands nodes in order of their optimal path cost.
  • 33. 33 There are two significant differences from breadth-first search – • First difference is that the goal test is applied to a node when it is selected for expansion rather than when it is first generated. • The second difference is that a test is added in case a better path is found to a node currently on the frontier. • When all step costs are the same, uniform-cost search is similar to breadth-first search, except that breadth-first search stops as soon as it generates a goal, whereas uniform-cost search examines all the nodes at the goal’s depth to see if one has a lower cost.
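Both differences noted above — the goal test at expansion time, and the replacement of frontier entries when a cheaper path appears — can be sketched in Python with a priority queue (the weighted example graph is hypothetical):

```python
import heapq

def uniform_cost_search(start, goal, neighbours):
    """Expand the frontier node with the lowest path cost g(n).

    `neighbours(state)` yields (next_state, step_cost) pairs."""
    frontier = [(0, start, [start])]       # priority queue ordered by g(n)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                  # goal test on expansion, not generation
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                       # stale entry: a cheaper path was found
        for nxt, step in neighbours(state):
            new_g = g + step
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g        # better path to a frontier node
                heapq.heappush(frontier, (new_g, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph: the cheapest S-to-G route is S-A-B-G with cost 4
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 1), ('G', 7)], 'B': [('G', 2)]}
result = uniform_cost_search('S', 'G', lambda s: edges.get(s, []))
```

Here the direct edge S→B (cost 4) is superseded by the cheaper route S→A→B (cost 2), illustrating the frontier-replacement test.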
  • 35. 35 Bidirectional Search  It is used when a problem has a single clearly stated goal state and all node generation operators have inverses.  It is performed by searching forward from the initial node and backward from the goal node simultaneously.  Thus, the program must store the nodes generated by both paths until a common node is found.  Uninformed search methods can be used to perform the bidirectional searching with some modifications.
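A minimal sketch in Python, assuming the successor operators and their inverses are both available as functions; the strict layer-by-layer alternation is one of several possible schedules:

```python
from collections import deque

def bidirectional_search(start, goal, successors, predecessors):
    """Alternate breadth-first layers forward from start and backward from goal
    until the two frontiers share a common node."""
    if start == goal:
        return start
    fwd, bwd = {start}, {goal}                       # nodes stored by each search
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])
    while fwd_frontier and bwd_frontier:
        for _ in range(len(fwd_frontier)):           # expand one layer forward
            for nxt in successors(fwd_frontier.popleft()):
                if nxt in bwd:
                    return nxt                       # the frontiers meet here
                if nxt not in fwd:
                    fwd.add(nxt)
                    fwd_frontier.append(nxt)
        for _ in range(len(bwd_frontier)):           # one layer backward, via inverses
            for prv in predecessors(bwd_frontier.popleft()):
                if prv in fwd:
                    return prv
                if prv not in bwd:
                    bwd.add(prv)
                    bwd_frontier.append(prv)
    return None

# Hypothetical example: integers on a line, operators +1 with inverse -1
meeting = bidirectional_search(0, 6, lambda n: [n + 1], lambda n: [n - 1])
```

In the example the two searches meet roughly halfway between 0 and 6, which is the point of the method: two searches of depth d/2 store far fewer nodes than one search of depth d.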
  • 37. 37 HEURISTIC SEARCH  A Heuristic is a technique that improves the efficiency of a search process, possibly by sacrificing claims of completeness.  Heuristics are like tour guides.  They improve the quality of the paths that are explored.  Using a good heuristic, one can get good solutions to hard problems. Travelling salesman problem – Heuristic – nearest-neighbour heuristic
  • 38. 38 Procedure – Step 1. Arbitrarily select a starting city. Step 2. To select the next city, look at all cities not yet visited, and select the one closest to the current city. Go to it next. Step 3. Repeat step 2 until all cities have been visited. Heuristic Function  A heuristic function is a function that maps from problem state description to measures of desirability, usually represented as numbers.  The purpose of a heuristic function is to guide the search process in the most profitable direction by suggesting which path to follow first when more than one is available.
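The three steps of the nearest-neighbour procedure translate directly into Python; the distance table in the usage example is invented for illustration:

```python
def nearest_neighbour_tour(cities, dist, start):
    """Greedy TSP heuristic: always travel to the closest unvisited city."""
    tour = [start]                                   # Step 1: arbitrary starting city
    unvisited = set(cities) - {start}
    while unvisited:                                 # Step 3: repeat until all visited
        current = tour[-1]
        nearest = min(unvisited,                     # Step 2: closest unvisited city
                      key=lambda c: dist[current][c])
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

# Hypothetical symmetric distance table
dist = {'A': {'B': 1, 'C': 4, 'D': 3},
        'B': {'A': 1, 'C': 1, 'D': 5},
        'C': {'A': 4, 'B': 1, 'D': 2},
        'D': {'A': 3, 'B': 5, 'C': 2}}
tour = nearest_neighbour_tour('ABCD', dist, 'A')
```

The heuristic runs in time proportional to the square of the number of cities, but the tour it returns is not guaranteed to be the shortest one — the trade of completeness claims for efficiency described above.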
  • 39. 39 GENERATE-AND-TEST  A depth-first search procedure, since complete solutions must be generated before they can be tested.  It can also operate by generating solutions randomly, but then there is no guarantee that a solution will ever be found.  Also called the British Museum algorithm.  Can be implemented as a depth-first search tree with backtracking. Algorithm — 1. Generate a possible solution. 2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of the chosen path to the set of acceptable goal states.
  • 40. 40 Hill Climbing  Is used when only a good heuristic function is available for evaluating states, and no other useful information is available.  At each point in the search path, a successor node that appears to lead most quickly to the top of the hill (goal state) is selected for exploration.  It is like depth-first searching in that the most promising child is selected for expansion.  When the children have been generated, alternative choices are evaluated using some type of heuristic function.  The path that appears most promising is then chosen and no further reference to the parent or other children is retained.  This process continues from node to node with previously expanded nodes being discarded.
  • 41. 41 Simple Hill Climbing Algorithm 1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise continue with the initial state as the current state. 2. Loop until a solution is found or until there are no new operators left to be applied in the current state— (a)Select an operator that has not yet been applied to the current state and apply it to produce a new state. (b) Evaluate the new state— (i) If it is a goal state, then return it and quit. (ii) If it is not a goal state but it is better than the current state, then make it the current state. (iii) If it is not better than the current state, then continue in the loop.
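A compact Python sketch of the simple hill-climbing loop; the numeric example (climbing toward x = 5 on a one-dimensional surface) is hypothetical, and for brevity the operator list is simply re-tried from the start after every move:

```python
def simple_hill_climbing(state, operators, value, is_goal):
    """Take the first operator whose result improves on the current state."""
    while True:
        if is_goal(state):
            return state                 # 1. / 2(b)(i): goal reached
        for op in operators:
            new_state = op(state)        # 2(a): apply an untried operator
            if is_goal(new_state):
                return new_state         # 2(b)(i)
            if value(new_state) > value(state):
                state = new_state        # 2(b)(ii): better, make it current
                break                    # restart the operator list
        else:
            return state                 # no operator improves: stuck (local maximum)

# Hypothetical example: maximize -(x - 5)^2 over the integers
operators = [lambda x: x + 1, lambda x: x - 1]
value = lambda x: -(x - 5) ** 2
is_goal = lambda x: x == 5
```

Starting from 0 the loop climbs upward one step at a time; starting from 9 it climbs downward. Either way it stops at the peak, which is also where the foothill, ridge, and plateau problems of the later slide would arise on a less well-behaved surface.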
  • 42. 42 Steepest-Ascent Hill Climbing Algorithm 1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise continue with the initial state as the current state. 2. Loop until a solution is found or until a complete iteration produces no change to current state— (a) Let SUCC be a state such that any possible successor of the current state will be better than SUCC. (b) For each operator that applies to the current state do-- (i) Apply the operator and generate a new state. (ii) Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone. (iii) If the SUCC is better than current state, then set current state to SUCC.
  • 43. 43 Problems with hill climbing —  Foothill - o When local maxima or peaks are found. A local maximum is a state that is better than all its neighbors but is not better than some other states farther away. o All children have less promising goal distances than the parent node.  Ridge – a special kind of local maximum, where several adjoining nodes have higher values than the surrounding nodes.  Plateau – a flat area of the search space, where all neighboring nodes have the same value.
  • 45. 45 BRANCH AND BOUND SEARCH When more than one alternative path may exist between two nodes. Algorithm – Step 1. Place the start node of zero path length on the queue. Step 2. Until the queue is empty or a goal node has been found— (a) determine if the first path in the queue contains a goal node. (b) if the first path contains a goal node exit with success, (c) if the first path does not contain the goal node remove the path from the queue and form new paths by extending the removed path by one step, (d) compute the cost of the new path and add them to the queue, (e) Sort the paths on the queue with lowest cost paths in front. Step 3. Otherwise exit with failure.
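The queue-of-paths algorithm above can be sketched in Python; the weighted example graph is hypothetical:

```python
def branch_and_bound(start, goal, neighbours):
    """Keep a queue of paths sorted by cost; always extend the cheapest path.

    `neighbours(state)` yields (next_state, step_cost) pairs."""
    queue = [(0, [start])]                          # Step 1: start path of zero length
    while queue:                                    # Step 2
        cost, path = queue.pop(0)                   # (a) cheapest path is in front
        if path[-1] == goal:
            return cost, path                       # (b) exit with success
        for nxt, step in neighbours(path[-1]):      # (c) extend the path by one step
            if nxt not in path:                     # avoid cycling within a path
                queue.append((cost + step, path + [nxt]))   # (d) cost of the new path
        queue.sort(key=lambda item: item[0])        # (e) lowest-cost paths in front
    return None                                     # Step 3: exit with failure

# Hypothetical weighted graph with two alternative S-to-G routes
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 1), ('G', 7)], 'B': [('G', 2)]}
result = branch_and_bound('S', 'G', lambda s: edges.get(s, []))
```

Re-sorting the whole queue on every step is faithful to the algorithm as stated but costly; a priority queue, as in uniform-cost search, is the usual refinement.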
  • 47. 47 SIMULATED ANNEALING  Often used to solve problems in which the number of moves from a given state is very large, and  It may not make sense to try all possible moves.  The best way to select an annealing schedule is to try several and observe the effect on both the quality of the solution that is found and the rate at which the process converges. For implementation, an annealing schedule having three components is required:  initial value to be used for the temperature of the system,  criteria to decide when the temperature should be reduced,  amount by which the temperature should be reduced each time it is changed.
  • 48. 48 Algorithm – Step 1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise continue with the initial state as the current state. Step 2. Initialize BEST-SO-FAR to the current state. Step 3. Initialize T according to the annealing schedule. Step 4. Loop until a solution is found or until there are no new operators left to be applied in the current state. (a) Select an operator that has not yet been applied to the current state and apply it to produce a new state. (b) Evaluate the new state. Compute— Δ E = (value of the current state) – (value of new state)  If the new state is the goal state, then return it and quit.  If it is not the goal state but is better than the current state, then make it the current state. Set BEST-SO-FAR to this new state.
  • 49. 49  If it is not better than the current state, then make it the current state with probability p’. (c) Revise T as necessary according to the annealing schedule. Step 5. Return BEST-SO-FAR as the answer. BEST-FIRST SEARCH  Advantage of BFS is that it does not get trapped on dead-end paths.  Advantage of DFS is that it allows a solution to be found without all competing branches having to be expanded.  Best-first search is a way of combining the advantages of both BFS and DFS into a single searching method.
  • 50. 50 Procedure --  At each step, the most promising of the nodes generated so far is selected by applying an appropriate heuristic function to each of the nodes.  The chosen node is then expanded by using the rules to generate its successors. A bit of depth-first searching occurs at this point.  If one of them is the solution, then quit. If not, all these nodes are added to the set of nodes generated so far.  At this point, the branch will start to look less promising than one of the top-level branches that had been ignored.
  • 51. 51  Thus the more promising, previously ignored branch will be explored now. But the old branch is not forgotten. Its last node remains in the set of generated but unexpanded nodes.  Again the most promising node is selected and the process continues.  Two lists of nodes are needed to implement Best-first search—  OPEN – nodes that have been generated and have had the heuristic function applied to them but which have not been examined.  CLOSED – nodes that have already been examined.
  • 53. 53 Algorithm-- 1. Start with OPEN containing just the initial state. 2. Until a goal is found or there are no nodes left on OPEN do— a) Pick the best node on OPEN. b) Generate its successors. c) For each successor do— i. If it has not been generated before, evaluate it, add it to open and record its parent. ii. If it has been generated before, change the parent if this new path is better than the previous one and update the cost of getting to this node.
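A simplified Python sketch of best-first search using a heap as OPEN; step c.ii (re-parenting a previously generated node when a better path is found) is omitted for brevity, and the example graph and heuristic values are invented:

```python
import heapq

def best_first_search(start, successors, h, is_goal):
    """Always expand the OPEN node that looks best under the heuristic h."""
    open_list = [(h(start), start, [start])]    # OPEN: generated, not yet examined
    closed = set()                              # CLOSED: already examined
    while open_list:
        _, state, path = heapq.heappop(open_list)   # a) pick the best node on OPEN
        if is_goal(state):
            return path
        if state in closed:
            continue
        closed.add(state)
        for succ in successors(state):          # b) generate its successors
            if succ not in closed:
                heapq.heappush(open_list, (h(succ), succ, path + [succ]))
    return None

# Hypothetical example: heuristic values steer the search through A rather than B
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G']}
hvals = {'S': 5, 'A': 1, 'B': 3, 'G': 0}
path = best_first_search('S', lambda s: graph.get(s, []),
                         lambda s: hvals[s], lambda s: s == 'G')
```

Because A's heuristic value (1) beats B's (3), the B branch is set aside on OPEN — not forgotten — exactly as the preceding slides describe.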
  • 54. 54 A* ALGORITHM  Best-first-search is a simplification of A* algorithm.  A* algorithm was presented by Hart et al. Algorithm-- 1. Start with OPEN containing only the initial state and set CLOSED to empty list. For the initial node f ‘ = 0 + h ‘. 2. Until a goal node is found, do the following: i. If there are no nodes on OPEN, report failure. Otherwise, pick the node on OPEN list with the lowest f ‘ value and Call it BESTNODE. Remove it from OPEN and place it on CLOSED. Check if it is goal node. If so, then exit and report success. Otherwise generate the successors of BESTNODE. Do not set BESTNODE to point to them yet. For each SUCCESSOR do
  • 55. 55 the following: a. Set SUCCESSOR to point back to BESTNODE. b. Compute g(SUCCESSOR) = g(BESTNODE) + cost of getting from BESTNODE to SUCCESSOR. c. If the SUCCESSOR has already been generated but not processed, then call this node as OLD. Add OLD to the list of BESTNODE successors. Check whether it is cheaper to get to OLD via its current parent or to SUCCESSOR via BESTNODE by comparing their g values:  If OLD is cheaper then do nothing.  If SUCCESSOR is cheaper then reset OLD’s parent link to point to BESTNODE.  Record the new cheaper path in g(OLD) and update f `(OLD)
  • 56. 56 d. Check if the new path or the old path is better and set the parent link and g & f ’ values appropriately. If a better path to OLD has been found, then do the propagation in-  forward direction by OLD pointing to its successors, each successor in turn pointing to its successors, and the process continues till each branch terminates with a node that either is still on OPEN or has no successors, and  backward direction as each node’s parent link points back to its best known parent. If a node’s parent link points to the node from which the current propagation came, then continue the propagation. Otherwise stop the propagation. e. If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN, and add it to the list of BESTNODE’s successors. Compute – f ’ (SUCCESSOR) = g’ (SUCCESSOR) + h’ (SUCCESSOR)
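A condensed Python sketch of A*; instead of explicit parent-link propagation (steps c and d above), stale OPEN entries are simply skipped once a cheaper g value has been recorded, which is a common simplification. The example graph and heuristic are hypothetical:

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A*: always expand the OPEN node with the lowest f' = g + h'.

    `neighbours(state)` yields (next_state, step_cost) pairs."""
    open_heap = [(h(start), 0, start, [start])]   # entries are (f', g, state, path)
    best_g = {start: 0}
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)   # BESTNODE: lowest f' on OPEN
        if state == goal:
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                              # stale entry: a cheaper path exists
        for nxt, step in neighbours(state):
            new_g = g + step                      # g(SUCCESSOR) = g(BESTNODE) + cost
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g               # record the cheaper path to SUCCESSOR
                heapq.heappush(open_heap,
                               (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph with an admissible heuristic (h never overestimates)
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 1), ('G', 7)], 'B': [('G', 2)]}
hvals = {'S': 3, 'A': 2, 'B': 2, 'G': 0}
result = a_star('S', 'G', lambda s: edges.get(s, []), lambda s: hvals[s])
```

With the admissible heuristic shown, A* is admissible in the sense of the next slide: the first time the goal is selected for expansion, its path cost (4, via S-A-B-G) is optimal.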
  • 57. 57 Properties of heuristic search algorithms:  Admissibility Condition – Algorithm A is admissible if it is guaranteed to return an optimal solution when one exists.  Completeness Condition – Algorithm A is complete if it always terminates with a solution when one exists.  Dominance Property – Let A1 and A2 be admissible algorithms with heuristic estimation functions h1* and h2* respectively. A1 is said to be more informed than A2 whenever h1*(n) > h2*(n) for all n. A1 is then said to dominate A2.  Optimality Property – Algorithm A is optimal over a class of algorithms if A dominates all members of the class.
  • 58. 58 PROBLEM REDUCTION– AND-OR GRAPHS Algorithm 1. Initialize the graph to the starting node. 2. Loop until the starting node is labeled SOLVED or until its cost goes above FUTILITY. a) Traverse the graph, starting at the initial node and following the current best path, and accumulate the set of nodes that are on that path and have not yet been expanded or labeled as solved. b) Pick one of these unexpanded nodes and expand it. If there are no successors, assign FUTILITY as the value of this node. Otherwise add its successors to the graph and compute f’ for each of them.
  • 59. 59 c) Change the f’ estimate of the newly expanded node to reflect the new information provided by its successors. Propagate this change backward through the graph. If any node contains a successor arc whose descendants are all solved, label the node itself as SOLVED. AO* ALGORITHM Algorithm 1. Place the start node S on OPEN. 2. Using the search tree constructed thus far, compute the most promising solution tree T0. 3. Select a node N that is both on OPEN and a part of T0. Remove N from OPEN and place it on CLOSED. 4. If N is a terminal goal node, label N as SOLVED. If the solution of N results in any of N’s ancestors being SOLVED, label all those ancestors as SOLVED. If the start node S is solved, exit with success, where T0 is the solution tree. Remove from OPEN all nodes with a SOLVED ancestor.
  • 60. 60 5. If N is not solvable, label N as unsolvable. If the start node is labeled as unsolvable, exit with failure. If any of N’s ancestors become unsolvable because N is, label them unsolvable as well. Remove from OPEN all nodes with an unsolvable ancestor. 6. Otherwise, expand node N, generating all of its successors. For each successor node that contains more than one subproblem, generate its successors corresponding to the individual subproblems. Attach to each newly generated node a back pointer to its predecessor. 7. Return to step 2.