Artificial Intelligence
Dr. Umang Soni
NSUT
What is AI?
• Artificial Intelligence (AI) is a branch of science which deals with helping machines find solutions to complex problems in a more human-like fashion.
• “Artificial Intelligence (AI) is the part of computer science concerned with designing intelligent computer systems, that
is, systems that exhibit characteristics we associate with intelligence in human behaviour – understanding language,
learning, reasoning, solving problems, and so on.”
• It is the science and engineering of making intelligent machines, especially intelligent computer programs.
• “AI is the ability of a machine to think, learn and serve. It helps the machine perform tasks which require human intelligence, such as visual perception, speech recognition, learning, planning, natural language understanding and decision-making, among others.”
• Objective: to solve real-world problems using AI techniques such as knowledge representation, learning, rule systems, search, and so on.
Objectives of AI?
• Knowledge representation
• Problem solving
• Learning methods in AI for engineering intelligent systems
• Assess the applicability, strengths and weaknesses of these methods in solving engineering problems.
Different perspectives
• From the perspective of intelligence: artificial intelligence is making machines "intelligent", that is, acting as we would expect people to act.
• A test based on the inability to distinguish computer responses from human responses is called the Turing test. Intelligence requires knowledge.
• From a business perspective, AI is a set of very powerful tools and methodologies for using those tools to solve business problems.
• From a programming perspective, AI includes the study of symbolic programming, problem solving, and search.
• Artificial Intelligence has identifiable roots in a number of older disciplines, particularly: Philosophy, Logic/Mathematics, Computation, Psychology, Biology, Neuroscience and Evolution.
AI prehistory
• Philosophy: logic, methods of reasoning, mind as a physical system, foundations of learning, language, rationality
• Mathematics: formal representation and proof, algorithms, computation
• Economics: utility, decision theory
• Neuroscience: physical substrate for mental activity
• Psychology: phenomena of perception and motor control, experimental techniques
• Computer engineering: building fast computers
• Control theory: design of systems that maximize an objective function over time
• Linguistics: knowledge representation, grammar
Abridged history of AI
• 1943: McCulloch & Pitts: Boolean circuit model of the brain
• 1950: Turing's "Computing Machinery and Intelligence"
• 1956: the term "Artificial Intelligence" adopted
• 1950s: early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
• 1965: Robinson's complete algorithm for logical reasoning
• 1966–73: AI discovers computational complexity; neural network research almost disappears
• 1969–79: early development of knowledge-based systems
• 1986 onwards: neural networks return to popularity
• 1987 onwards: AI becomes a science
• 1995 onwards: the emergence of intelligent agents
What is AI?
Views of AI fall into four categories:
• Thinking humanly
• Thinking rationally
• Acting humanly
• Acting rationally
Acting humanly: Turing Test
• Turing (1950) "Computing machinery and intelligence":
• "Can machines think?"  "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation Game
• Predicted that by 2000, a machine might have a 30% chance of fooling a lay person for 5 minutes
• Anticipated all major arguments against AI in following 50 years
• Suggested major components of AI: knowledge, reasoning, language understanding, learning
Turing test: summary
• British scientist Alan Turing said that if humans could be duped by a computer into thinking they were talking to another human, the machine could be called 'intelligent'.
• A proposal for a test of a machine's capability to perform human-like conversation.
• Assume that both the human and the machine try to appear human.
• To keep the test setting simple and universal, the conversation is limited to a text-only channel.
AI applications
• Expert systems
• Machine learning
• Natural language interfacing
• Speech processing
• Vision processing
• Intelligent tutoring
• Game playing
• Robot programming
Typical AI problems
• Intelligent entities (agents) should be able to perform both "mundane" and "expert" tasks.
• Mundane tasks:
Planning
Recognizing people and objects
Communicating (through natural language)
Navigating around obstacles
• Expert tasks:
Medical diagnosis
Mathematical problem solving
Four foundations of AI
• Representation,
• Search,
• Reasoning, and
• Learning.
Representation – includes
• a symbolic description of a room for a mobile robot, and
• a description of a person with a disease for a classification program;
• it also covers all the knowledge, including basic programs for testing and measuring the structure in question, plus the programs for transforming that structure into another one in ways appropriate to the task.
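As an illustrative aside (not from the slides), such a symbolic description can be written down directly as a small data structure; all field names and values below are hypothetical:

```python
# Hypothetical sketch of a symbolic room description for a mobile robot.
# The attribute names (width, robot_pos, obstacles, goal) are illustrative only.
room_state = {
    "width": 5, "height": 4,        # room measured in grid cells
    "robot_pos": (0, 0),            # robot's current cell
    "obstacles": {(2, 1), (3, 3)},  # cells the robot cannot enter
    "goal": (4, 3),                 # cell the robot must reach
}

def is_free(state, cell):
    """Test whether a cell lies inside the room and is not blocked."""
    x, y = cell
    return (0 <= x < state["width"] and 0 <= y < state["height"]
            and cell not in state["obstacles"])
```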
Search – heuristic search
In heuristic search, extra information is used to guide the search. Search methods include:
• blind breadth-first and depth-first search (BFS and DFS);
• generate-and-test (a sequence of candidates is generated, each being tested for a solution);
• hill climbing (a measure of progress is used to guide each step; see the sketch after this list);
• means-ends analysis (the difference between the desired situation and the present one is used to select the next step).
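For instance, hill climbing can be sketched in a few lines of Python; the neighbours and score functions here are assumed to be supplied by the problem and are not defined in the slides:

```python
def hill_climbing(start, neighbours, score):
    """Greedy local search: repeatedly move to the best-scoring neighbour
    and stop when no neighbour improves on the current state."""
    current = start
    while True:
        candidates = list(neighbours(current))
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):
            return current          # local optimum reached (may not be global)
        current = best
```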
Reasoning
1. Typically rule-based systems, developed using human expertise, to identify the rules of the problem;
2. Alternatively, a database of previous problems and solutions is searched for the closest match to the current problem.
Learning
AI systems use the ability to adapt, or learn, based on the history, or
knowledge, of the system.
Learning takes the form of updating knowledge, adjusting the search,
reconfiguring the representation, and augmenting the reasoning.
Learning methods may use statistical learning: the frequencies of the different types of historical events are used to bias future action or to develop inductive hypotheses, typically assuming that events follow some known distribution of occurrence.
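A minimal sketch of the frequency-counting idea described above; the event labels in history are made up for illustration:

```python
from collections import Counter

# Hypothetical event history; in practice this would come from the system's own records.
history = ["success", "failure", "success", "success", "failure"]

counts = Counter(history)                     # frequencies of each event type
most_likely = counts.most_common(1)[0][0]     # bias future action toward this outcome
p_success = counts["success"] / len(history)  # simple inductive estimate
print(most_likely, p_success)
```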
Terminologies
Initial state: the description of the starting configuration of the agent.
Action / Operator: takes the agent from one state to another. A state can have a number of successor states.
Plan: a sequence of actions that the agent can take.
Goal: a description of a set of desirable states of the world. Goal states are often specified by a goal test which any goal state must satisfy.
Path cost: the sum of the step costs.
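These terms map naturally onto a small problem-definition interface. The sketch below is an assumed formulation; the class and field names are not taken from the slides:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

@dataclass
class SearchProblem:
    initial_state: object                                         # starting configuration
    move_gen: Callable[[object], Iterable[Tuple[object, float]]]  # operators: (successor state, step cost)
    goal_test: Callable[[object], bool]                           # does a state satisfy the goal?

def path_cost(step_costs: List[float]) -> float:
    """Path cost is the sum of the step costs."""
    return sum(step_costs)
```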
AI in Problem Solving
• [Diagram] A given state is transformed into the desired state through knowledge-based search, drawing on memory-based and rule-based decisions.
State Space Search
• Search: the process of imagining sequences of operators applied to the initial state, and checking which sequence reaches a goal state.
• A search problem is represented using a directed graph:
• the states are represented as nodes;
• the allowed actions are represented as arcs (edges).
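One convenient encoding of such a directed graph is an adjacency dictionary. The graph below is a hypothetical example (it is not the figure used later in the slides); the search sketches that follow assume this format:

```python
# States are nodes; allowed actions are directed edges (arcs).
# Hypothetical state space, used only to illustrate the representation.
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["G"],
    "D": ["C", "F"],
    "E": [],
    "F": [],
    "G": [],
}

def move_gen(state):
    """Return the successor states reachable from `state`."""
    return graph.get(state, [])

def goal_test(state):
    return state == "G"   # assumed goal for this toy example
```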
Searching Process
• Check the current state.
• Execute the allowable actions to move to the next state.
• Check if the new state is a solution state.
• If it is not, then the new state becomes the current state and the process is repeated until a solution is found,
OR
• the state space is exhausted.
EX1: Pegs and Disks
[Figures: an initial configuration and a goal configuration, each showing disks 1, 2 and 3 distributed over Peg A, Peg B and Peg C]
Operators: One may move the topmost disk on any peg to the topmost position on any other peg
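A hedged sketch of the state and the single operator for this example; the dictionary-of-lists encoding (topmost disk last) and the particular initial and goal configurations are assumptions:

```python
# State: each peg maps to a list of disks, bottom first, topmost last.
# Assumed initial state: all three disks on peg A.
state = {"A": [3, 2, 1], "B": [], "C": []}

def move_top_disk(state, src, dst):
    """Operator: move the topmost disk of peg `src` onto the top of peg `dst`.
    Returns a new state and leaves the original untouched."""
    if not state[src]:
        raise ValueError(f"peg {src} is empty")
    new_state = {peg: disks[:] for peg, disks in state.items()}
    new_state[dst].append(new_state[src].pop())
    return new_state

goal = {"A": [], "B": [], "C": [3, 2, 1]}   # assumed goal configuration
```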
8-Queens problem
• Place 8 queens on a chessboard so that no two queens are in the
same row, column or diagonal.
8-Queens problem: no two queens are in the same row, column or diagonal.
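The constraint can be checked with a short function. The encoding below (a list where the index is the column and the value is the row of the queen in that column) is an assumed one:

```python
def conflicts(queens):
    """Return True if any two queens share a row, column or diagonal.
    `queens` is a list of rows, one per column (e.g. queens[0] is the row
    of the queen in column 0); columns are distinct by construction."""
    for c1 in range(len(queens)):
        for c2 in range(c1 + 1, len(queens)):
            same_row = queens[c1] == queens[c2]
            same_diag = abs(queens[c1] - queens[c2]) == abs(c1 - c2)
            if same_row or same_diag:
                return True
    return False

# Example: a known 8-queens solution has no conflicts.
print(conflicts([0, 4, 7, 5, 2, 6, 1, 3]))   # False
```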
Key issues in basic search algorithm
• The search space may be unbounded because of loops,
OR
• because of an infinite state space.
Evaluating search strategies
• Completeness: is the strategy guaranteed to find a solution if one exists?
• Optimality: does the strategy find a minimal-cost solution?
Simple Search 1
• OPEN ← { s }
• Pick some node N from OPEN
• While GoalTest(N) ≠ True
• OPEN ← OPEN ∪ MoveGen(N)
• Pick some node N from OPEN
• Return N
• What's the limitation of this algorithm?
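A rough Python transcription of Simple Search 1, as a hedged sketch: move_gen and goal_test are assumed to be supplied by the problem, and the picked node is removed from OPEN. Because visited states are never remembered, the same states can be regenerated again and again, which is the limitation asked about above:

```python
def simple_search_1(start, move_gen, goal_test):
    """Naive search: keeps no record of visited states, so it may loop forever."""
    open_list = [start]
    while open_list:
        n = open_list.pop(0)           # pick some node N from OPEN (and remove it)
        if goal_test(n):
            return n
        open_list.extend(move_gen(n))  # OPEN <- OPEN U MoveGen(N)
    return None                        # OPEN exhausted without finding a goal
```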
Key Issues
• Corresponding to a search algorithm, we get a search tree which
contains the generated and the explored nodes.
• The search tree may be unbounded. This may happen if the state
space is infinite.
• This can also happen if there are loops in the search space. How can
we handle loops?
• Corresponding to a search algorithm, should we return a path or a
node?
• The answer to this depends on the problem.
Simple Search 2
• OPEN ← { s }
• CLOSED ← { }
• While OPEN is not empty:
• Pick some node N from OPEN and add it to CLOSED
• If GoalTest(N) then return N
• Else OPEN ← OPEN ∪ ( MoveGen(N) − CLOSED )
• Configuration problems: the solution is the state itself,
e.g. N-Queens.
• Planning problems: the solution is the path,
e.g. the 8-puzzle (path information must be stored).
• Instead of storing the entire path in each search node, we can store a search node as a pair:
(current node, parent node)
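For planning problems the path matters, so each node can remember its parent. A sketch (under the same assumptions as above) of storing parent pointers and reconstructing the path once a goal is found:

```python
def reconstruct_path(goal, parent):
    """Walk parent pointers back from the goal to the start state."""
    path = [goal]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return list(reversed(path))

def search_with_parents(start, move_gen, goal_test):
    open_list = [start]
    parent = {start: None}             # each node remembers how it was reached
    while open_list:
        n = open_list.pop(0)
        if goal_test(n):
            return reconstruct_path(n, parent)
        for s in move_gen(n):
            if s not in parent:        # also doubles as a "seen" check
                parent[s] = n
                open_list.append(s)
    return None
```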
State Space Search (SSS)
• We see that in the basic search algorithm, we have to select a node
for expansion.
• Which node should we select? Alternatively, how would we place the
newly generated nodes in the fringe?
• We will subsequently explore various search strategies and discuss
their properties.
• Node pair ← Head(OPEN) /* node as a variable name; pick the first node from the OPEN list, i.e. its head */
• ----
• ------
• ------
• OPEN ← Append(New, Tail(OPEN))
DFS
• OPEN ← Append(New, Tail(OPEN))
• Equivalent to a STACK (LIFO)
• This algorithm picks the newest node first
• It moves deeper into the search space until it finds a dead end (a node with no further successors)
• i.e. the deepest node is expanded first
• DEPTH FIRST SEARCH (DFS)
DFS
• The depth first search algorithm puts newly generated nodes in the
front of OPEN.
• This results in expanding the deepest node first. Thus the nodes in
OPEN follow a LIFO order (Last In First Out).
• OPEN is thus implemented using a stack data structure.
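A hedged DFS sketch in the adjacency-dictionary format assumed earlier. OPEN is a Python list used as a stack (LIFO); whole paths are stored so the solution path can be returned:

```python
def depth_first_search(start, move_gen, goal_test):
    """DFS: newly generated nodes go to the front of OPEN (a stack)."""
    stack = [[start]]                  # store whole paths to recover the solution
    visited = set()
    while stack:
        path = stack.pop()             # LIFO: deepest path first
        node = path[-1]
        if node in visited:
            continue
        visited.add(node)
        if goal_test(node):
            return path
        for child in move_gen(node):
            if child not in visited:
                stack.append(path + [child])
    return None

# Example with the hypothetical graph sketched earlier:
# depth_first_search("A", move_gen, goal_test)
```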
DFS illustrated
Search Tree
• Let us now run depth first search on the search space given in the figure and trace its progress.
• Step 1: Initially the fringe contains only the node for A.
FRINGE: A
• Step 2: A is removed from fringe. A is expanded and its children B and
C are put in front of fringe.
FRINGE: B C
• Step 3: Node B is removed from fringe, and its children D and E are pushed to the front of fringe.
FRINGE: D E C
• Step 4: Node D is removed from fringe. C and F are pushed in front of
fringe
FRINGE: C F E C
• Step 5: Node C is removed from fringe. Its child G is pushed in front of
fringe.
FRINGE: G F E C
• Node G is expanded and found to be a goal node. The solution path A-
B-D-C-G is returned and the algorithm terminates.
Properties of Depth First Search
• The algorithm takes exponential time.
• The time taken by the algorithm is related to the maximum depth of
the search tree. If the search tree has infinite depth, the algorithm
may not terminate.
BFS
• This will choose the shallowest node first (the one closest to the start node)
• The first layer is completed, then the second layer, and so on
• OPEN ← Append(Tail(OPEN), New)
• Equivalent to a QUEUE (FIFO)
• BREADTH FIRST SEARCH (BFS)
• In breadth first search the newly generated nodes are put at the back of the fringe, or the OPEN list. This implies that the nodes will be expanded in a FIFO (First In First Out) order. The node that enters OPEN earlier will be expanded earlier. This amounts to expanding the shallowest nodes first.
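The matching BFS sketch; the only essential change from DFS is that OPEN behaves as a queue (FIFO), here a collections.deque, so the shallowest nodes are expanded first:

```python
from collections import deque

def breadth_first_search(start, move_gen, goal_test):
    """BFS: newly generated nodes go to the back of OPEN (a queue)."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()         # FIFO: shallowest path first
        node = path[-1]
        if goal_test(node):
            return path                # shortest path in number of steps
        for child in move_gen(node):
            if child not in visited:
                visited.add(child)
                queue.append(path + [child])
    return None
```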
BFS illustrated
• We will now consider the search space in Figure 1, and show how
breadth first search works on this graph
• Step 1: Initially List contains only one node, corresponding to the source state A.
List: A
• Step 2: A is removed from List. The node is expanded, and its children
B and C are generated. They are placed at the back of List.
List : B C
• Step 3: Node B is removed from List and is expanded. Its children D and E are generated and put at the back of List.
List: C D E
• Step 4: Node C is removed from List and is expanded. Its children D and G are added to the back of List.
List: D E D G
• Step 5: Node D is removed from List. Its children C and F are generated and added to the back of List.
List: E D G C F
• Step 6: Node E is removed from List. It has no children.
List: D G C F
• Step 7: D is expanded; B and F are put at the back of List.
List: G C F B F
• Step 8: G is selected for expansion. It is found to be a goal node, so the algorithm terminates.
Properties of Breadth-First Search
• Breadth first search is:
• Complete.
• Optimal (i.e., admissible) if all operators have the same cost; otherwise it finds the solution with the shortest path length, which need not be the cheapest one.
• Exponential in both time and space complexity.
• Neither BFS nor DFS has any sense of direction; these algorithms are blind.
• They use no information about the goal state.
• We call such algorithms blind search / uninformed search algorithms.
Informed State Space Search
• So far we have looked at different blind search strategies. Uninformed search methods lack problem-specific knowledge. Such methods are prohibitively inefficient in many cases.
• Using problem-specific knowledge can dramatically improve the search speed.
• We will now study some informed search algorithms that use problem-specific heuristics.
• Heuristics use domain-specific knowledge to estimate the quality or potential of partial solutions.
• The word derives from the Greek heuriskein, "to find" or "to discover".
• Uninformed search methods systematically explore the state space
and find the goal. They are inefficient in most cases.
• Informed search methods use problem specific knowledge, and may
be more efficient.
• At the heart of such algorithms is the concept of a heuristic function.
Heuristics
• Heuristic means “rule of thumb”.
• To quote Judea Pearl, “Heuristics are criteria, methods or principles for
deciding which among several alternative courses of action promises
to be the most effective in order to achieve some goal”.
• In heuristic search or informed search, heuristics are used to identify
the most promising search path.
Example of Heuristic Function
• A heuristic function at a node n is an estimate of the optimum cost from
the current node to a goal. It is denoted by h(n).
• h(n) = estimated cost of the cheapest path from node n to a goal node.
• It is a function that returns a number indicating how easy or difficult it is likely to be to reach the goal state from the given state.
• Example 1: we want a path from Kolkata to Guwahati.
• A heuristic for this problem may be the straight-line distance between a city and Guwahati:
• h(Kolkata) = EuclideanDistance(Kolkata, Guwahati)
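A minimal sketch of such a straight-line heuristic, assuming each city is represented by (x, y) map coordinates; the coordinates themselves would be inputs and are not fixed here:

```python
import math

def straight_line_h(city_xy, goal_xy):
    """Euclidean (straight-line) distance used as h(n)."""
    return math.dist(city_xy, goal_xy)

# h(Kolkata) = EuclideanDistance(Kolkata, Guwahati), with assumed coordinates:
# straight_line_h(kolkata_xy, guwahati_xy)
```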
• 8-puzzle: What can be the heuristic function?
• Misplaced tiles: the heuristic is the number of tiles out of place.
• The first picture shows the current state n, and the second picture the
goal state.
• h(n) = 5;
• because the tiles 2, 8, 1, 6 and 7 are out of place.
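A sketch of the misplaced-tiles heuristic. The puzzle pictures are not reproduced here, so the two boards below are assumed example states (3x3 tuples with 0 for the blank, which is not counted):

```python
def misplaced_tiles(state, goal):
    """Number of tiles (excluding the blank, 0) not in their goal position."""
    return sum(
        1
        for r in range(3)
        for c in range(3)
        if state[r][c] != 0 and state[r][c] != goal[r][c]
    )

# Assumed example boards (not the ones pictured in the original slides):
goal_board  = ((1, 2, 3), (8, 0, 4), (7, 6, 5))
start_board = ((2, 8, 3), (1, 6, 4), (7, 0, 5))
print(misplaced_tiles(start_board, goal_board))   # 4 for these assumed boards
```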
• Manhattan Distance Heuristic:
• Another heuristic for 8-puzzle is the Manhattan distance heuristic.
• This heuristic sums the distances by which the tiles are out of place. The distance of a tile is measured by the sum of the differences in its x-position and y-position from the goal.
• For the above example, using the Manhattan distance heuristic:
• h(n) = 1 + 1 + 0 + 0 + 0 + 1 + 1 + 2 = 6
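The Manhattan-distance heuristic under the same assumed board encoding; for each tile it adds the horizontal and vertical distances to that tile's goal cell:

```python
def manhattan_distance(state, goal):
    """Sum over all tiles (excluding the blank, 0) of |dx| + |dy| to the tile's goal cell."""
    goal_pos = {goal[r][c]: (r, c) for r in range(3) for c in range(3)}
    total = 0
    for r in range(3):
        for c in range(3):
            tile = state[r][c]
            if tile != 0:
                gr, gc = goal_pos[tile]
                total += abs(r - gr) + abs(c - gc)
    return total

# Same assumed example boards as in the misplaced-tiles sketch:
goal_board  = ((1, 2, 3), (8, 0, 4), (7, 6, 5))
start_board = ((2, 8, 3), (1, 6, 4), (7, 0, 5))
print(manhattan_distance(start_board, goal_board))   # 5 for these assumed boards
```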
Greedy Search
• In greedy search, the idea is to expand the node with the smallest
estimated cost to reach the goal.
• We use a heuristic function
• f(n) = h(n)
• h(n) estimates the distance remaining to a goal.
• Greedy algorithms often perform very well. They tend to find good
solutions quickly, although not always optimal ones.
Best First Search
• The algorithm maintains a priority queue of nodes to be explored.
• It’s a heuristic search algorithm.
• A cost function f(n) is applied to each node. The nodes are put in
OPEN in the order of their f values.
• Nodes with smaller f(n) values are expanded earlier.
Generic Best First Search
Let fringe be a priority queue containing the initial state
Loop
if fringe is empty return failure
Node  remove-first (fringe)
if Node is a goal
then return the path from initial state to Node
else generate all successors of Node, and
put the newly generated nodes into fringe according to their f values
End Loop
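A hedged Python rendering of this generic loop using a priority queue (heapq). f is passed in, so choosing f(n) = h(n) gives the greedy search described earlier; move_gen and goal_test are assumed problem-specific functions:

```python
import heapq
import itertools

def best_first_search(start, move_gen, goal_test, f):
    """Expand the node with the smallest f value first (priority-queue fringe)."""
    counter = itertools.count()              # tie-breaker so heapq never compares states
    fringe = [(f(start), next(counter), [start])]
    visited = set()
    while fringe:
        _, _, path = heapq.heappop(fringe)   # remove-first: lowest f value
        node = path[-1]
        if node in visited:
            continue
        visited.add(node)
        if goal_test(node):
            return path                      # path from the initial state to the goal
        for child in move_gen(node):
            if child not in visited:
                heapq.heappush(fringe, (f(child), next(counter), path + [child]))
    return None                              # fringe empty: failure

# Greedy best-first search: best_first_search(start, move_gen, goal_test, f=h)
```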
Best First Search algorithm
• OPEN ← ( start , Nil )
• While OPEN is not empty
• Node pair ← Head(OPEN)
• ---------
• ---------
• ----------
• OPEN ← Sort( Append( New, Tail(OPEN) ) )
• It will pick the best node first: the one with the lowest heuristic value.
• We will now consider different ways of defining the function f. This leads to different search algorithms.
• Time complexity: depends on h
• Space complexity: depends on h
• Quality: not necessarily optimal
• Completeness: complete (it sorts the OPEN list)