22PCOAM11
INTRODUCTION TO ARTIFICIAL
INTELLIGENCE
B.TECH
II YEAR – IV SEM (R22)
(2024-2025)
Prepared
By
Asst. Prof. M. Gokilavani
Department of Artificial Intelligence and Machine Learning
R22 B.Tech. CSE (AI and ML) Syllabus JNTU Hyderabad
INTRODUCTION TO ARTIFICIAL INTELLIGENCE
B.Tech. II Year II Sem. L T P C
3 0 0 3
Prerequisite: Knowledge of Data Structures.
Course Objectives:
● To learn the distinction between optimal reasoning vs. human-like reasoning.
● To understand the concepts of state space representation, exhaustive search, and heuristic search together with the time and space complexities.
● To learn different knowledge representation techniques.
● To understand the applications of AI, namely game playing, theorem proving, and machine
learning.
Course Outcomes:
● Learn the distinction between optimal reasoning vs. human-like reasoning and formulate an
efficient problem space for a problem expressed in natural language. Also select a search
algorithm for a problem and estimate its time and space complexities.
● Apply AI techniques to solve problems of game playing, theorem proving, and machine
learning.
● Learn different knowledge representation techniques.
● Understand the concepts of state space representation, exhaustive search, heuristic search
together with the time and space complexities.
● Comprehend the applications of Probabilistic Reasoning and Bayesian Networks.
● Analyze Supervised Learning vs. Learning Decision Trees.
UNIT - I
Introduction to AI - Intelligent Agents, Problem-Solving Agents,
Searching for Solutions - Breadth-first search, Depth-first search, Hill-climbing search, Simulated
annealing search, Local Search in Continuous Spaces.
UNIT-II
Games - Optimal Decisions in Games, Alpha–Beta Pruning, Defining Constraint Satisfaction Problems,
Constraint Propagation, Backtracking Search for CSPs, Knowledge-Based Agents, Logic-
Propositional Logic, Propositional Theorem Proving: Inference and proofs, Proof by resolution, Horn
clauses and definite clauses.
UNIT-III
First-Order Logic - Syntax and Semantics of First-Order Logic, Using First Order Logic, Knowledge
Engineering in First-Order Logic. Inference in First-Order Logic: Propositional vs. First-Order Inference,
Unification, Forward Chaining, Backward Chaining, Resolution.
Knowledge Representation: Ontological Engineering, Categories and Objects, Events.
UNIT-IV
Planning - Definition of Classical Planning, Algorithms for Planning with State Space Search, Planning
Graphs, other Classical Planning Approaches, Analysis of Planning approaches. Hierarchical Planning.
UNIT-V
Probabilistic Reasoning:
Acting under Uncertainty, Basic Probability Notation Bayes’ Rule and Its Use, Probabilistic Reasoning,
Representing Knowledge in an Uncertain Domain, The Semantics of Bayesian Networks, Efficient
Representation of Conditional Distributions, Approximate Inference in Bayesian Networks, Relational
and First-Order Probability.
TEXT BOOK:
1. Artificial Intelligence: A Modern Approach, Third Edition, Stuart Russell and Peter Norvig,
Pearson Education.
REFERENCE BOOKS:
1. Artificial Intelligence, 3rd Edn., E. Rich and K. Knight (TMH)
2. Artificial Intelligence, 3rd Edn., Patrick Henry Winston, Pearson Education.
3. Artificial Intelligence, Shivani Goel, Pearson Education.
4. Artificial Intelligence and Expert systems – Patterson, Pearson Education.
UNIT I
Introduction to AI - Intelligent Agents, Problem-Solving Agents
Searching for Solutions - Breadth-first search, Depth-first search, Hill-climbing search,
Simulated annealing search, Local Search in Continuous Spaces.
1. INTRODUCTION TO AI:
 AI is one of the most fascinating and universal fields of computer science, and it has great scope in the future. AI holds a tendency to make a machine work like a human.
 Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."
 Artificial Intelligence exists when a machine can have human-based skills such as learning, reasoning, and solving problems.
 With Artificial Intelligence you do not need to pre-program a machine to do some work; instead, you can create a machine with programmed algorithms which can work with its own intelligence, and that is the awesomeness of AI.
 It is believed that AI is not a new technology; some people say that, as per Greek myth, there were mechanical men in early days which could work and behave like humans.
Turing Test in AI:
 In 1950, Alan Turing introduced a test to check whether a machine can think like a human or not; this test is known as the Turing Test.
 In this test, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.
 The Turing Test was introduced by Turing in his 1950 paper, "Computing Machinery and Intelligence," which considered the question, "Can machines think?".
 The Turing test is based on a party game, the "Imitation Game," with some modifications.
 This game involves three players: one player is a computer, another player is a human responder, and the third player is a human interrogator, who is isolated from the other two players and whose job is to find which of the two is the machine.
 Consider: Player A is a computer, Player B is a human, and Player C is the interrogator. The interrogator is aware that one of them is a machine, but he needs to identify which on the basis of questions and their responses.
 The conversation between all players is via keyboard and screen, so the result does not depend on the machine's ability to render words as speech.
 The test result does not depend on each answer being correct, but only on how closely the responses resemble human answers. The computer is permitted to do everything possible to force a wrong identification by the interrogator.
 The questions and answers can be like:
o Interrogator: Are you a computer?
o Player A (Computer): No
o Interrogator: Multiply two large numbers such as (256896489*456725896)
o Player A: (long pause, then gives a wrong answer)
 In this game, if the interrogator is not able to identify which is the machine and which is the human, then the computer passes the test successfully, and the machine is said to be intelligent and able to think like a human.
 "In 1991, the New York businessman Hugh Loebner announces the prize competition,
offering a $100,000 prize for the first computer to pass the Turing test. However, no AI
program to till date, come close to passing an undiluted Turing test".
Goals of Artificial Intelligence:
Following are the main goals of Artificial Intelligence:
1. Replicate human intelligence
2. Solve Knowledge-intensive tasks
3. An intelligent connection of perception and action
4. Building a machine which can perform tasks that require human intelligence, such as:
o Proving a theorem
o Playing chess
o Plan some surgical operation
o Driving a car in traffic
5. Creating some system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its user.
Application of AI:
 Artificial Intelligence has various applications in today's society. It is becoming essential for our time because it can solve complex problems in an efficient way across multiple industries, such as healthcare, entertainment, finance, education, etc. AI is making our daily life more comfortable and fast.
 Following are some sectors which have the application of Artificial Intelligence:
1. AI in Astronomy
 Artificial Intelligence can be very useful to solve complex universe problems. AI technology
can be helpful for understanding the universe such as how it works, origin, etc.
2. AI in Healthcare
 In the last five to ten years, AI has become more advantageous for the healthcare industry and is going to have a significant impact on this industry.
 Healthcare Industries are applying AI to make a better and faster diagnosis than humans.
 AI can help doctors with diagnoses and can inform them when patients are worsening so that medical help can reach the patient before hospitalization.
3. AI in Gaming
 AI can be used for gaming purposes. AI machines can play strategic games like chess, where the machine needs to think through a large number of possible positions.
4. AI in Finance
 AI and finance industries are the best matches for each other.
 The finance industry is implementing automation, chatbots, adaptive intelligence, algorithmic trading, and machine learning into financial processes.
5. AI in Data Security
 The security of data is crucial for every company and cyber-attacks are growing very rapidly
in the digital world. AI can be used to make your data more safe and secure.
 Some examples, such as the AEG bot and the AI2 platform, are used to detect software bugs and cyber-attacks in a better way.
6. AI in Social Media
 Social media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need to be stored and managed in a very efficient way.
 AI can organize and manage massive amounts of data.
 AI can analyze lots of data to identify the latest trends, hashtags, and the requirements of different users.
7. AI in Travel & Transport
 AI is in high demand in the travel industry.
 AI is capable of doing various travel-related tasks, from making travel arrangements to suggesting hotels, flights, and the best routes to customers.
 The travel industry is using AI-powered chatbots which can have human-like interactions with customers for better and faster responses.
8. AI in Automotive Industry
 Some automotive companies are using AI to provide virtual assistants to their users for better performance. For example, Tesla has introduced TeslaBot, an intelligent virtual assistant.
 Various companies are currently working on developing self-driving cars which can make your journey safer and more secure.
9. AI in Robotics:
 Artificial Intelligence has a remarkable role in Robotics.
 Usually, general robots are programmed to perform some repetitive task, but with the help of AI, we can create intelligent robots which can perform tasks from their own experience without being pre-programmed.
 Humanoid robots are the best examples of AI in robotics; recently the intelligent humanoid robots named Erica and Sophia have been developed, which can talk and behave like humans.
10. AI in Entertainment
 We are currently using some AI-based applications in our daily life with entertainment services such as Netflix or Amazon.
 With the help of ML/AI algorithms, these services show the recommendations for programs
or shows.
11. AI in Agriculture
 Agriculture is an area which requires various resources, labor, money, and time for the best results. Nowadays agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI in the form of agricultural robotics, soil and crop monitoring, and predictive analysis. AI in agriculture can be very helpful for farmers.
12. AI in E-commerce
 AI is providing a competitive edge to the e-commerce industry, and it is in increasing demand in the e-commerce business.
 AI is helping shoppers to discover associated products with recommended size, color, or
even brand.
13. AI in education:
 AI can automate grading so that the tutor can have more time to teach.
 An AI chatbot can communicate with students as a teaching assistant.
 In the future, AI could work as a personal virtual tutor for students, accessible easily at any time and any place.
2. INTELLIGENT AGENTS:
Types of AI Agents:
 Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better actions over time. They are given below:
o Simple Reflex Agent
o Model-based reflex agent
o Goal-based agents
o Utility-based agent
o Learning agent
i. Simple Reflex agent:
 The Simple reflex agents are the simplest agents. These agents take decisions on the basis
of the current percepts and ignore the rest of the percept history.
 These agents only succeed in the fully observable environment.
 The Simple reflex agent does not consider any part of the percept history during its decision and action process.
 The Simple reflex agent works on the condition-action rule, which means it maps the current state to an action. An example is a room-cleaner agent: it works only if there is dirt in the room.
 Problems for the simple reflex agent design approach:
o They have very limited intelligence
o They do not have knowledge of non-perceptual parts of the current state
o The condition-action rules are mostly too big to generate and to store.
o Not adaptive to changes in the environment.
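As an illustration of the condition-action rule above, here is a minimal Python sketch of a simple reflex agent for a two-location vacuum world; the location names "A" and "B" and the percept format are illustrative assumptions, not from the text.
Example (Python):
# A minimal sketch of a simple reflex agent for a two-location vacuum
# world. It acts only on the current percept and keeps no history.
# The location names "A"/"B" and the percept format are assumptions.
def simple_reflex_vacuum_agent(percept):
    location, status = percept       # current percept only
    if status == "Dirty":            # condition-action rules
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck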
ii. Model-based reflex agent:
 The Model-based agent can work in a partially observable environment, and track the
situation.
 A model-based agent has two important factors:
o Model: It is knowledge about "how things happen in the world," so it is called a Model-
based agent.
o Internal State: It is a representation of the current state based on percept history.
 These agents have the model, "which is knowledge of the world" and based on the model
they perform actions.
 Updating the agent state requires information about:
o How the world evolves
o How the agent's action affects the world.
iii. Goal-based agents
 The knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
 The agent needs to know its goal which describes desirable situations.
 Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
 They choose an action, so that they can achieve the goal.
 These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not.
 Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
iv. Utility-based agents
 These agents are similar to the goal-based agent but provide an extra component of utility
measurement which makes them different by providing a measure of success at a given state.
 Utility-based agents act based not only on goals but also on the best way to achieve the goal.
 The Utility-based agent is useful when there are multiple possible alternatives, and an agent
has to choose in order to perform the best action.
 The utility function maps each state to a real number to check how efficiently each action
achieves the goals.
v. Learning Agents
 A learning agent in AI is the type of agent which can learn from its past experiences, or it
has learning capabilities.
 It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
 A learning agent has mainly four conceptual components, which are:
1. Learning element: It is responsible for making improvements by learning from the environment.
2. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting the external action.
4. Problem generator: This component is responsible for suggesting actions that will lead to
new and informative experiences.
 Hence, learning agents are able to learn, analyze performance, and look for new ways to
improve the performance.
AGENTS:
 An AI system can be defined as the study of the rational agent and its environment. The
agents sense the environment through sensors and act on their environment through actuators.
An AI agent can have mental properties such as knowledge, belief, intention, etc.
What is an Agent?
An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting.
An agent can be:
o Human Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and a vocal tract which work as actuators.
o Robotic Agent: A robotic agent can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.
o Software Agent: A software agent can have keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
Hence the world around us is full of agents, such as thermostats, cell phones, and cameras; even we ourselves are agents. Before moving forward, we should first know about sensors, effectors, and actuators.
 Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.
 Actuators: Actuators are the components of machines that convert energy into motion. The actuators are only responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
 Effectors: Effectors are the devices which affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and display screen.
What is an Intelligent Agent?
 An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.
Following are the main four rules for an AI agent:
o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observation must be used to make decisions.
o Rule 3: Decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.
What is Rational Agent?
 A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.
 A rational agent is said to perform the right things. AI is about creating rational agents, using game theory and decision theory, for various real-world scenarios.
 For an AI agent, rational action is most important because in AI reinforcement learning algorithms, for each best possible action the agent gets a positive reward, and for each wrong action the agent gets a negative reward.
Define Rationality.
 The rationality of an agent is measured by its performance measure. Rationality can be
judged on the basis of following points:
o Performance measure which defines the success criterion.
o The agent's prior knowledge of its environment.
o Best possible actions that an agent can perform.
o The sequence of percepts.
Structure of an AI Agent
 The task of AI is to design an agent program which implements the agent function. The
structure of an intelligent agent is a combination of architecture and agent program. It
can be viewed as:
Agent = Architecture + Agent program
Following are the main three terms involved in the structure of an AI agent:
 Architecture: Architecture is machinery that an AI agent executes on.
 Agent Function: Agent function is used to map a percept to an action.
F: P* → A
 Agent program: Agent program is an implementation of agent function. An agent
program executes on the physical architecture to produce function f.
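As a hedged illustration of Agent = Architecture + Agent program, the sketch below implements the agent function F: P* → A as a table-driven agent program; the table entries are made-up vacuum-world examples, not from the text.
Example (Python):
# A sketch of a table-driven agent program: the agent function
# F: P* -> A is implemented by looking up the full percept sequence.
# The table entries below are made-up vacuum-world examples.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []  # the percept history P*

def table_driven_agent(percept):
    percepts.append(percept)                   # record the new percept
    return table.get(tuple(percepts), "NoOp")  # map percept sequence to action

print(table_driven_agent(("A", "Dirty")))  # -> Suck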
Define PEAS Representation.
 PEAS is a type of model on which an AI agent works. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four terms:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
Example: PEAS for self-driving cars:
Let's consider a self-driving car; then the PEAS representation will be:
 Performance: Safety, time, legal drive, comfort
 Environment: Roads, other vehicles, road signs, pedestrian
 Actuators: Steering, accelerator, brake, signal, horn
 Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
Properties of Task Environment:
 An environment is everything in the world which surrounds the agent, but it is not a part of an
agent itself. An environment can be described as a situation in which an agent is present.
 The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon. An environment is mostly said to be non-deterministic.
Features of Environment
 As per Russell and Norvig, an environment can have various features from the point of view
of an agent:
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
1. Fully observable vs Partially Observable:
• When an agent's sensors are capable of sensing or accessing the complete state of the environment at each point in time, it is said to be a fully observable environment; otherwise it is partially observable.
• Maintaining a fully observable environment is easy as there is no need to keep track of the
history of the surrounding.
• An environment is called unobservable when the agent has no sensors at all.
Examples:
• Chess – the board is fully observable, and so are the opponent’s moves.
• Driving – the environment is partially observable because what’s around the corner is not
known.
2. Static vs Dynamic:
 An environment that keeps constantly changing itself while the agent is deliberating or acting is said to be dynamic.
 A roller coaster ride is dynamic as it is set in motion and the environment keeps changing
every instant.
 An idle environment with no change in its state is called a static environment.
 An empty house is static as there’s no change in the surroundings when an agent enters.
3. Discrete vs Continuous:
 If an environment consists of a finite number of actions that can be deliberated in the
environment to obtain the output, it is said to be a discrete environment.
 The game of chess is discrete as it has only a finite number of moves. The number of moves
might vary with every game, but still, it’s finite.
 The environment in which the actions are performed cannot be numbered i.e. is not discrete,
is said to be continuous.
 Self-driving cars are an example of continuous environments as their actions are driving,
parking, etc. which cannot be numbered.
4. Deterministic vs Stochastic:
• When the agent's current state and chosen action completely determine the next state of the agent, the environment is said to be deterministic.
• A stochastic environment is random in nature; the next state is not unique and cannot be completely determined by the agent.
Examples:
• Chess – there would be only a few possible moves for a piece in the current state, and these moves can be determined.
• Self-driving cars – the actions of a self-driving car are not unique; they vary from time to time.
5. Single-agent vs Multi-agent:
 An environment consisting of only one agent is said to be a single-agent environment.
 A person left alone in a maze is an example of the single-agent system.
 An environment involving more than one agent is a multi-agent environment.
 The game of football is multi-agent as it involves 11 players in each team.
6. Episodic vs sequential:
 In an Episodic task environment, each of the agent’s actions is divided into atomic incidents
or episodes. There is no dependency between current and previous incidents. In each incident,
an agent receives input from the environment and then performs the corresponding action.
 Example: Consider an example of Pick and Place robot, which is used to detect defective
parts from the conveyor belts. Here, every time robot (agent) will make the decision on the
current part i.e. there is no dependency between current and previous decisions.
 In a Sequential environment, the previous decisions can affect all future decisions. The next action of the agent depends on what action it has taken previously and what action it is supposed to take in the future.
Example:
Checkers- Where the previous move can affect all the following moves.
7. Known vs Unknown:
 In a known environment, the outcomes of all probable actions are given. Obviously, in the case of an unknown environment, for an agent to make a decision, it has to gain knowledge about how the environment works.
3. PROBLEM-SOLVING AGENTS:
 In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents, or problem-solving agents, in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use atomic representations. In this topic, we will learn various problem-solving search algorithms.
Well-defined problems and solutions:
A problem can be defined formally by five components:
• The initial state that the agent starts in.
• A description of the possible actions available to the agent.
• A description of what each action does; the formal name for this is the transition model.
• The goal test, which determines whether a given state is a goal state. Sometimes there is an
explicit set of possible goal states, and the test simply checks whether the given state is one
of them.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent
chooses a cost function that reflects its own performance measure.
Example: 1 Romania
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest
• Formulate goal:
• be in Bucharest
• Formulate problem:
• states: various cities
• actions: drive between cities
• Find solution:
• sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
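A minimal Python sketch of the five problem components for this Romania example follows; the road map is a small subset of the well-known AIMA Romania map, and the helper names are assumptions for illustration.
Example (Python):
# A sketch of the five problem components for the Romania example.
# The adjacency map is a small subset of the AIMA Romania road map.
romania = {
    "Arad": {"Sibiu": 140, "Zerind": 75, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Zerind": {"Arad": 75},
    "Timisoara": {"Arad": 118},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

initial_state = "Arad"            # 1. initial state
def actions(state):               # 2. possible actions (drive to a neighbor)
    return list(romania[state])
def result(state, action):        # 3. transition model
    return action
def goal_test(state):             # 4. goal test
    return state == "Bucharest"
def step_cost(state, action):     # 5. path cost: sum of road distances
    return romania[state][action]

print(actions("Arad"))  # -> ['Sibiu', 'Zerind', 'Timisoara']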
Example: 2 Toy problems
• Those intended to illustrate or exercise various problem-solving methods
• E.g., puzzles, chess, etc.
Example: 3 Real-world problems
• Tend to be more difficult and whose solutions people actually care about
• E.g., Design, planning, etc.
Example: 4 Toy problems
Possible states of the vacuum cleaner world: (diagram omitted)
The 8-queens problem: the goal is to place eight queens on a chessboard such that no queen attacks any other.
• States: Any arrangement of 0 to 8
queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty
square.
• Transition model: Returns the board
with a queen added to the specified
square.
• Goal test: 8 queens are on the board,
none attacked.
Example: 5 The 8-puzzle problem: (diagram omitted)
Possible moves for the 8-queens problem: (diagram omitted)
4. SEARCHING FOR SOLUTIONS:
• Finding a solution is done by searching through the state space.
• All problems are transformed into a search tree generated by the initial state and the successor function.
Search Tree:
• Initial state: the root of the search tree is a search node.
• Expanding: applying the successor function to the current state, thereby generating a new set of states.
• Leaf nodes: the states having no successors.
Fringe: Set of search nodes that have not been expanded yet.
Search tree Components:
• A node is having five components:
• STATE: which state it is in the state space
• PARENT-NODE: from which node it is generated
• ACTION: which action applied to its parent-node to generate it
• PATH-COST: the cost, g(n), from initial state to the node n itself
• DEPTH: number of steps along the path from the initial state
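The five components above map directly onto a small data structure; here is a hedged Python sketch (the child_node helper is an illustrative assumption).
Example (Python):
from dataclasses import dataclass
from typing import Optional

# A direct sketch of the five node components listed above.
@dataclass
class Node:
    state: object                     # STATE in the state space
    parent: Optional["Node"] = None   # PARENT-NODE from which it was generated
    action: object = None             # ACTION applied to the parent node
    path_cost: float = 0.0            # PATH-COST g(n) from the initial state
    depth: int = 0                    # DEPTH: steps from the initial state

def child_node(parent, action, state, cost):
    return Node(state, parent, action,
                parent.path_cost + cost, parent.depth + 1)

root = Node("Arad")
child = child_node(root, "Drive", "Sibiu", 140)
print(child.path_cost, child.depth)  # -> 140.0 1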
Search Algorithm Terminologies:
 Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
a. Search Space: Search space represents a set of possible solutions, which a system
may have.
b. Start State: It is the state from where the agent begins the search.
c. Goal test: It is a function which observes the current state and returns whether the goal state is achieved or not.
 Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
 Actions: It gives the description of all the available actions to the agent.
 Transition model: A description of what each action does; it can be represented as a transition model.
 Path Cost: It is a function which assigns a numeric cost to each path.
 Solution: It is an action sequence which leads from the start node to the goal node.
 Optimal Solution: A solution that has the lowest cost among all solutions.
i. Properties of Search Algorithms (or Measuring Problem-Solving Performance):
Following are the four essential properties of search algorithms to compare the efficiency of
these algorithms:
 Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any input.
 Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all other solutions, then such a solution is said to be an optimal solution.
 Time Complexity: Time complexity is a measure of time for an algorithm to complete
its task.
 Space Complexity: It is the maximum storage space required at any point during the search, expressed in terms of the complexity of the problem.
5. TYPES OF SEARCH ALGORITHMS:
Based on the search problem, we can classify search algorithms into uninformed search (blind search) and informed search (heuristic search) algorithms.
i. Uninformed/Blind Search:
 The uninformed search does not use any domain knowledge, such as closeness to, or the location of, the goal.
 It operates in a brute-force way as it only includes information about how to traverse the tree
and how to identify leaf and goal nodes.
 Uninformed search applies a strategy in which the search tree is searched without any information about the search space, such as the initial state, operators, and the test for the goal, so it is also called blind search.
 It examines each node of the tree until it achieves the goal node.
It can be divided into five main types:
o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional Search
ii. Informed Search
 Informed search algorithms use domain knowledge.
 In an informed search, problem information is available which can guide the search.
 Informed search strategies can find a solution more efficiently than an uninformed search
strategy. Informed search is also called a Heuristic search.
 A heuristic is a technique which might not always find the best solution, but it is guaranteed to find a good solution in a reasonable time.
 Informed search can solve much more complex problems which could not be solved in another way.
 An example problem for informed search algorithms is the traveling salesman problem.
1. Greedy Search
2. A* Search
6. BREADTH-FIRST SEARCH (BFS):
 Breadth-first search is the most common search strategy for traversing a tree or graph.
This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
 BFS algorithm starts searching from the root node of the tree and expands all successor
nodes at the current level before moving to nodes of next level.
 The breadth-first search algorithm is an example of a general-graph search algorithm.
 Breadth-first search is implemented using a FIFO queue data structure.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G
• Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until QUEUE is empty
• Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
• Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
• Step 6: EXIT
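A hedged Python sketch of the algorithm above follows, using a FIFO queue of paths; the adjacency map is illustrative data, not one of the figures from the examples.
Example (Python):
from collections import deque

# A sketch of BFS with a FIFO queue, mirroring the steps above.
# `graph` is an illustrative adjacency map (an assumption).
def bfs(graph, start, goal):
    queue = deque([[start]])          # queue of paths, FIFO order
    visited = {start}                 # STATUS bookkeeping
    while queue:                      # Step 3: loop until the queue is empty
        path = queue.popleft()        # Step 4: dequeue a node
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph[node]:  # Step 5: enqueue ready neighbors
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"],
         "C": [], "D": ["G"], "G": []}
print(bfs(graph, "S", "G"))  # -> ['S', 'B', 'D', 'G']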
Example: 1
In the tree structure below (diagram omitted), the traversal of the tree using the BFS algorithm from root node S to goal node K is shown. The BFS algorithm traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:
Solution: S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Example: 2 (queue trace; diagram omitted)
Solution: 40, 10, 20, 30, 60, 50, 70
Example: 3 Practice problem (diagram omitted)
Advantages:
 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will provide the minimal
solution which requires the least number of steps.
Disadvantages:
 It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
 BFS needs lots of time if the solution is far away from the root node.
7. DEPTH-FIRST SEARCH (DFS):
 Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
 It is called the depth-first search because it starts from the root node and follows each path
to its greatest depth node before moving to the next path.
 DFS uses a stack data structure for its implementation.
 The process of the DFS algorithm is similar to the BFS algorithm.
Implementation steps for DFS:
• First, create a stack with the total number of vertices in the graph.
• Now, choose any vertex as the starting point of traversal, and push that vertex into the stack.
• After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) to the top
of the stack.
• Now, repeat steps 3 and 4 until no vertices are left to visit from the vertex on the stack's top.
• If no vertex is left, go back and pop a vertex from the stack.
• Repeat steps 2, 3, and 4 until the stack is empty.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G
• Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until STACK is empty
• Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
• Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS
= 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
• Step 6: EXIT
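A hedged Python sketch of DFS with an explicit stack follows; the adjacency map is an assumption chosen so that the visit order matches the S, A, B, D, E, C, G order described in Example 1 below, since the original diagram is not shown.
Example (Python):
# A sketch of DFS with an explicit stack, mirroring the push/pop steps
# above. The adjacency map is an assumption chosen so that the visit
# order matches the S, A, B, D, E, C, G order described in Example 1.
def dfs(graph, start, goal):
    stack = [start]                  # Step 2: push the starting node
    visited, order = set(), []
    while stack:                     # Step 3: loop until the stack is empty
        node = stack.pop()           # Step 4: pop the top node
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        if node == goal:
            return order
        # Step 5: push unvisited neighbors; reversed so the leftmost
        # child is explored first, as in the tree example
        for neighbor in reversed(graph[node]):
            if neighbor not in visited:
                stack.append(neighbor)
    return order

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"],
         "C": ["G"], "D": [], "E": [], "G": []}
print(dfs(graph, "S", "G"))  # -> ['S', 'A', 'B', 'D', 'E', 'C', 'G']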
Example 1:
In the search tree below (diagram omitted), the flow of depth-first search is shown; it follows the order:
Root node ---> left node ---> right node.
It will start searching from root node S and traverse A, then B, then D and E; after traversing E, it will backtrack the tree, as E has no other successor and the goal node has still not been found. After backtracking, it will traverse node C and then G, where it will terminate, having found the goal node.
Solution: S,A,B,D,E,C,G,H,I,K,J.
Advantage:
 DFS requires less memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
 It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantage:
 There is a possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
 The DFS algorithm goes deep down in its search, and sometimes it may enter an infinite loop.
Difference between BFS and DFS: (comparison table omitted)
8. BEYOND CLASSICAL SEARCH:
• We have seen methods that systematically explore the search space, possibly using principled pruning (e.g., A*).
What if we have much larger search spaces?
• Search spaces for some real-world problems may be much larger, e.g., 10^30,000 states, as in certain reasoning and planning tasks.
• Some of these problems can be solved by Iterative Improvement Methods.
Local search algorithm and optimization problem:
• In many optimization problems the goal state itself is the solution.
• The state space is a set of complete configurations.
• Search is about finding the optimal configuration (as in TSP) or just a feasible configuration (as in scheduling problems).
• In such cases, one can use iterative improvement, or local search, methods.
• An evaluation, or objective, function h must be available that measures the quality of each state.
• Main Idea: Start with a random initial configuration and make small, local changes to it that improve its quality.
Hill Climbing Algorithm:
• In the hill-climbing technique, starting at the base of a hill, we walk upwards until we reach the top of the hill.
• In other words, we start with an initial state and keep improving the solution until it is optimal.
• It is a variation of the generate-and-test algorithm which discards all states which do not look promising or seem unlikely to lead us to the goal state.
• To take such decisions, it uses heuristics (an evaluation function) which indicate how close the current state is to the goal state.
Hill-Climbing = generate-and-test + heuristics
Features of the Hill Climbing Algorithm:
Following are some main features of Hill Climbing Algorithm:
• Generate and Test variant: Hill climbing is a variant of the generate-and-test method. The generate-and-test method produces feedback which helps to decide which direction to move in the search space.
• Greedy approach: Hill-climbing search moves in the direction which optimizes the cost.
• No backtracking: It does not backtrack in the search space, as it does not remember the previous states.
State-space Diagram for Hill Climbing:
 The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
 On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis.
 If the function on the Y-axis is cost, then the goal of the search is to find the global minimum and local minima.
 If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum and local maxima.
• Local Maximum: A local maximum is a state which is better than its neighbor states, but there is also another state which is higher than it.
• Global Maximum: The global maximum is the best possible state of the state-space landscape. It has the highest value of the objective function.
• Current state: It is the state in the landscape diagram where an agent is currently present.
• Flat local maximum: It is a flat space in the landscape where all the neighbor states of the current state have the same value.
• Shoulder: It is a plateau region which has an uphill edge.
Types of Hill Climbing Algorithm:
o Simple hill Climbing
o Steepest-Ascent hill-climbing
o Stochastic hill Climbing
1. Simple Hill Climbing:
 Simple hill climbing is the simplest way to implement a hill-climbing algorithm.
 It evaluates only one neighbor node state at a time, selects the first one which optimizes the current cost, and sets it as the current state.
 It checks only one successor state, and if it finds it better than the current state, then it moves; otherwise it stays in the same state.
 This algorithm has the following features:
o Less time consuming
o Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
o Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
o Step 2: Loop Until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
a. If it is the goal state, then return success and quit.
b. Else, if it is better than the current state, then assign the new state as the current state.
c. Else, if it is not better than the current state, then return to Step 2.
o Step 5: Exit.
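A minimal Python sketch of the algorithm above on a one-dimensional objective follows; the objective function and step size are illustrative assumptions.
Example (Python):
# A sketch of simple hill climbing on a 1-D objective with a single
# peak at x = 3. The objective and step size are assumptions.
def objective(x):
    return -(x - 3) ** 2 + 9

def simple_hill_climbing(x, step=0.1, max_iters=1000):
    for _ in range(max_iters):
        # Steps 2-4: examine successors one at a time and take the
        # FIRST one that improves on the current state
        for neighbor in (x + step, x - step):
            if objective(neighbor) > objective(x):
                x = neighbor
                break
        else:
            return x   # no operator improves the state: stop
    return x

print(round(simple_hill_climbing(0.0), 1))  # -> 3.0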
Example:
• The key point while solving any hill-climbing problem is to choose an appropriate heuristic function.
• Let's define such a function h:
• h(x) = +1 for each block in the support structure if the block is correctly positioned, otherwise -1 for each block in the support structure.
Solution: (block-world diagram omitted)
2. Steepest-Ascent Hill Climbing:
 The steepest-ascent algorithm is a variation of the simple hill-climbing algorithm.
 This algorithm examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state.
 This algorithm consumes more time, as it searches multiple neighbors.
Algorithm for Steepest-Ascent Hill Climbing:
o Step 1: Evaluate the initial state; if it is the goal state, then return success and stop, else make the current state the initial state.
o Step 2: Loop until a solution is found or the current state does not change.
a. Let SUCC be a state such that any successor of the current state will be better than it.
b. For each operator that applies to the current state:
a. Apply the new operator and generate a new state.
b. Evaluate the new state.
c. If it is the goal state, then return it and quit; else compare it to SUCC.
d. If it is better than SUCC, then set the new state as SUCC.
e. If SUCC is better than the current state, then set the current state to SUCC.
o Step 3: Exit.
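For contrast with the simple variant, here is a hedged Python sketch of steepest ascent: it evaluates all neighbors, keeps the best as SUCC, and stops when no neighbor beats the current state. The objective and step size are the same illustrative assumptions as before.
Example (Python):
# A sketch of steepest-ascent hill climbing: evaluate every neighbor,
# keep the best one as SUCC, and stop when no neighbor is better.
def steepest_ascent(x, objective, step=0.1):
    while True:
        neighbors = [x - step, x + step]
        succ = max(neighbors, key=objective)   # best successor (SUCC)
        if objective(succ) <= objective(x):
            return x                           # current state is a peak
        x = succ

print(round(steepest_ascent(0.0, lambda x: -(x - 3) ** 2 + 9), 1))  # -> 3.0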
3. Stochastic Hill Climbing:
 Stochastic hill climbing does not examine all its neighbors before moving.
 Rather, this search algorithm selects one neighbor node at random and decides whether to choose it as the current state or examine another state.
9. SIMULATED ANNEALING:
 A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum.
 If the algorithm instead applies a random walk, by moving to a randomly chosen successor, it may be complete but not efficient.
 Simulated annealing is an algorithm which yields both efficiency and completeness.
 In mechanical terms, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state.
 The same process is used in simulated annealing, in which the algorithm picks a random move instead of picking the best move.
 If the random move improves the state, then it follows the same path. Otherwise, the algorithm follows the path with a probability of less than 1, or it moves downhill and chooses another path.
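A hedged Python sketch of simulated annealing follows: improving moves are always accepted, and downhill moves are accepted with a probability less than 1 that shrinks as the temperature cools. The objective, cooling schedule, and step size are illustrative assumptions.
Example (Python):
import math
import random

# A sketch of simulated annealing on a 1-D objective. Downhill moves
# are accepted with probability exp(delta / temp) < 1, which shrinks
# as the temperature cools. Schedule values are assumptions.
def simulated_annealing(x, objective, temp=1.0, cooling=0.995, step=0.1):
    while temp > 1e-3:
        neighbor = x + random.uniform(-step, step)   # pick a random move
        delta = objective(neighbor) - objective(x)
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = neighbor      # accept: uphill always, downhill sometimes
        temp *= cooling       # gradual cooling, as in annealing a metal
    return x

x = simulated_annealing(0.0, lambda x: -(x - 3) ** 2 + 9)
print(round(x, 1))  # typically close to 3.0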
10. LOCAL SEARCH IN CONTINUOUS SPACE:
 The distinction between discrete and continuous environments matters because most real-world environments are continuous.
 A discrete variable, or categorical variable, is a type of statistical variable that can assume only a fixed number of distinct values.
 A continuous variable, as the name suggests, is a random variable that can assume all the possible values in a continuum.
 Classical search produces a solution path that leads to the goal node.
 Beyond these "classical search algorithms," we have some "local search algorithms," where the path cost does not matter and the focus is only on the solution state needed to reach the goal node.
o Example: greedy best-first search.
 A local search algorithm completes its task by working on a single current node rather than multiple paths, generally exploring the neighbors of that node.
o Example: Hill climbing and simulated annealing can be adapted to continuous state and action spaces, even though such spaces have infinite branching factors.
Solutions for Continuous Space:
• One way to avoid continuous problems is simply to discretize the neighborhood of each state.
• Many methods attempt to use the gradient of the landscape to find a maximum. The gradient of the objective function is a vector ∇f that gives the magnitude and direction of the steepest slope.
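As a hedged illustration of the gradient idea, the sketch below performs gradient ascent, stepping in the direction of ∇f; the objective, its derivative, and the step size α are illustrative assumptions.
Example (Python):
# A sketch of gradient ascent: repeatedly step in the direction of the
# gradient, x <- x + alpha * grad_f(x). Function and alpha are assumed.
def gradient_ascent(x, grad_f, alpha=0.1, iters=200):
    for _ in range(iters):
        x = x + alpha * grad_f(x)
    return x

# f(x) = -(x - 3)^2 + 9 has gradient f'(x) = -2 * (x - 3)
print(round(gradient_ascent(0.0, lambda x: -2 * (x - 3)), 2))  # -> 3.0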
Local search in continuous space:
Does the local search algorithm work for a pure optimized problem?
• Yes, the local search algorithm works for pure optimization problems.
• A pure optimization problem is one where all the nodes can give a solution, but the target is to find the best state of all according to the objective function.
• Unfortunately, the pure optimization problem fails to find high-quality solutions to reach the goal state from the current state.
• Note: An objective function is a function whose value is either minimized or maximized in different contexts of the optimization problems.
• In the case of search algorithms, an objective function can be the path cost for reaching the goal node, etc.
Working of a local search algorithm: (diagram omitted)
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but there is another state also present which is higher than the local maximum.
Solution: The backtracking technique can be a solution to the local maximum in the state-space landscape. Create a list of the promising paths so that the algorithm can backtrack the search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm cannot find any best direction to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution for the plateau is to take big steps or very little steps while searching. Randomly select a state which is far away from the current state, so it is possible that the algorithm will find a non-plateau region.
3. Ridges: A ridge is a special form of local maximum. It has an area which is higher than its surrounding areas, but it itself has a slope and cannot be reached in a single move.
Solution: With the use of bidirectional search, or by moving in different directions, we can mitigate this problem.
Conclusion:
• Local search often works well on large problems:
– it does not guarantee optimality;
– it always has some answer available (the best found so far).
QUESTION BANK
UNIT 1 QUESTIONS & ANSWERS
2 MARKS
1. Define AI. What is AI? (May 03, 04)
Artificial intelligence is the branch of computer science that deals with the automation of intelligent behavior. AI gives a basis for developing human-like programs which can be useful for solving real-life problems and thereby become useful to mankind.
2. Define PEAS representation.
PEAS → P- Performance measure
E - Environment
A- Actuators
S – Sensors
Example:
Performance measure: maximize student's score on test.
Environment: set of students, testing agency.
Actuators: display exercises, suggestions, corrections.
Sensors: keyboard entry.
3. What is meant by robotic agent? May 05
A machine that looks like a human being and performs various complex acts of a human being. It can do the task efficiently and repeatedly without fault. It works on the basis of a program fed to it; it can have previously stored knowledge from the environment through its sensors. It acts with the help of actuators.
4. Define an agent? May 03, Dec-09
An agent is anything (a program, a machine assembly) that can be viewed as perceiving its
environment through sensors and acting upon that environment through actuators.
5. Define rational agent? Dec-05,11, May-10
A rational agent is one that does the right thing. Here the right thing is one that will cause the agent to be more successful. That leaves us with the problem of deciding how and when to evaluate the agent's success.
6. List down the characteristics of intelligent agent? May-11
• The IA must learn and improve through interaction with the environment.
• The IA must adapt online and in real-time situations.
• The IA must accommodate new problem-solving rules incrementally.
• The IA must have memory which exhibits storage and retrieval capabilities.
7. State the concept of rationality? May-12
• Rationality is the capacity to generate maximally successful behavior given the available
information. Rationality also indicates the capacity to compute the perfectly rational decision
given the initially available information.
• The capacity to select the optimal combination of computation – sequence plus the action,
under the constraint that the action must be selected by the computation is also rationality.
• Perfect rationality constrains an agent's actions to provide the maximum expectation of success given the information available.
8. What are the functionalities of the agent function? Dec-12
• Agent function is a mathematical function which maps each and every possible percept
sequence to a possible action.
• The major functionality of the agent function is to generate the possible action to each and
every percept.
• It helps the agent to get the list of possible actions the agent can take.
• Agent function can be represented in the tabular form.
9. Define structure of agent program. May-13
The task of AI is to design an agent program which implements the agent function. The structure of
an intelligent agent is a combination of architecture and agent program. It can be viewed as:
Agent = Architecture + Agent program
Following are the main three terms involved in the structure of an AI agent:
Architecture: Architecture is machinery that an AI agent executes on.
Agent Function: Agent function is used to map a percept to an action.
F: P* → A
Agent program: Agent program is an implementation of agent function. An agent program
executes on the physical architecture to produce function f.
10. What are the four components to define a problem? Define them. May-13
• Initial state: state in which agent starts in.
• A description of possible actions: description of possible actions which are available to the
agent.
• The goal test: it is the test that determines whether a given state is goal state.
• A path cost function: it is the function that assigns a numeric cost (value) to each path. The
problem-solving agent is expected to choose a cost function that reflects its own performance
measure.
11. What is meant by Turing test?
• To conduct this test, we need two people and one machine.
• One person will be an interrogator, i.e., a questioner, who will ask questions of one person and one machine. The three of them will be in separate rooms.
• The interrogator knows them only as A and B, so it has to identify which is the person and which is the machine.
• The goal of the machine is to make the interrogator believe that its answers are a person's answers. If the machine succeeds in fooling the interrogator, the machine acts like a human. Programming a computer to pass the Turing test is very difficult.
12. What are the factors that a rational agent should depend on at any given time?
• The performance measure that defines the degree of success.
• Everything that the agent has perceived so far. We call this complete perceptual history the percept sequence.
• What the agent knows about the environment.
• The actions that the agent can perform.
13. List the various type of agent program.
• Simple reflex agent program.
• Model based reflex agent.
• Goal based agent program.
• Utility based agent program
14. Give the structure of agent in an environment.
An agent interacts with the environment through sensors and actuators. An agent is anything that can be viewed as perceiving, i.e., understanding, its environment through sensors and acting upon that environment through actuators.
15. Define Agent Function.
It is a mathematical description which deals with the agent’s behavior that maps the given percept
sequence into an action.
16. Define Agent Program.
Agent function for an agent will be implemented by agent program.
17. Define problem solving agent.
Problem-solving agent is one kind of goal-based agent, where the agent should select one action from a sequence of actions which leads to desirable states.
18. List the steps involved in simple problem-solving technique.
i. Goal formulation
ii. Problem formulation
iii. Search
iv. Solution
v. Execution phase
19. What are the components of a problem?
A problem can be defined formally by five components:
• The initial state that the agent starts in.
• A description of the possible actions available to the agent.
• A description of what each action does; the formal name for this is the transition model.
• The goal test, which determines whether a given state is a goal state. Sometimes there is
an explicit set of possible goal states, and the test simply checks whether the given state
is one of them.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent
chooses a cost function that reflects its own performance measure.
20. Give example problems for Artificial Intelligence.
• Toy problems
• Real world problem
21. Define search tree.
The tree which is constructed for the search process over the state space is called search tree.
22. Define search node.
The root of the search tree that is the initial state of the problem is called search node.
23. Define fringe.
The collection of nodes that have been generated but not yet expanded is called the fringe or frontier.
24. List out the possible states of Vacuum Cleaner with neat diagram.
25. Difference Between DFS and BFS with suitable examples.
26. What do you mean by local maxima with respect to search technique?
• A local maximum is a peak that is higher than each of its neighbor states, but lower than the global maximum, i.e., a local maximum is a tiny hill on the surface whose peak is not as high as the main peak (which is the optimal solution).
• Hill climbing fails to find the optimum solution when it encounters a local maximum. Any small move from there also makes things worse (temporarily).
• At a local maximum, all the search effort turns out to be wasted. It is like a dead end.
27. Sketch hill climbing state space diagram.
• Local Maximum: A local maximum is a state which is better than its neighbor states, but there is also another state which is higher than it.
• Global Maximum: The global maximum is the best possible state of the state-space landscape. It has the highest value of the objective function.
• Current state: It is the state in a landscape diagram where an agent is currently present.
• Flat local maximum: It is a flat space in the landscape where all the neighbor states of the current state have the same value.
• Shoulder: It is a plateau region which has an uphill edge.
28. State the reason when hill climbing often gets stuck.
• Local maxima are the states where the hill-climbing algorithm is sure to get stuck.
• A local maximum is a peak that is higher than each of its neighbor states, but lower than the global maximum.
• So, we have missed the better state here.
• All the search effort turns out to be wasted here. It is like a dead end.
29. How can we avoid ridge and plateau in hill climbing?
• Ridges and plateaus in hill climbing can be avoided using methods like backtracking and making big jumps.
• Backtracking and making big jumps help to avoid plateaus, whereas the application of multiple rules helps to avoid the problem of ridges.
30.List out the components of search tree.
A node is having five components:
• STATE: which state it is in the state space.
• PARENT-NODE: from which node it is generated.
• ACTION: which action applied to its parent-node to generate it.
• PATH-COST: the cost, g(n), from initial state to the node n itself.
• DEPTH: number of steps along the path from the initial state.
31. Does the local search algorithm work for a pure optimized problem?
• Yes, the local search algorithm works for pure optimized problems.
• A pure optimization problem is one where all the nodes can give a solution. But the target
is to find the best state out of all according to the objective function.
• Unfortunately, the pure optimization problem fails to find high-quality solutions to reach
the goal state from the current state.
• Note: An objective function is a function whose value is either minimized or maximized
in different contexts of the optimization problems.
• In the case of search algorithms, an objective function can be the path cost for reaching the
goal node, etc.
5- & 10-MARK QUESTIONS & ANSWERS
1. What is an agent? How does it interact with the environment? Explain. (10 marks) (Dec
2023)
Properties of Task Environment:
• An environment is everything in the world which surrounds the agent, but it is not a part of an
agent itself. An environment can be described as a situation in which an agent is present.
• The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon. An environment is mostly said to be non-deterministic.
Features of Environment
• As per Russell and Norvig, an environment can have various features from the point of view of
an agent:
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs multi-agent
6. Episodic vs sequential
7. Known vs Unknown
i. Fully observable vs Partially Observable:
• When an agent's sensors are capable of sensing or accessing the complete state of the environment at each point in time, it is said to be a fully observable environment; otherwise it is partially observable.
• Maintaining a fully observable environment is easy as there is no need to keep track of the
history of the surrounding.
• An environment is called unobservable when the agent has no sensors at all.
Examples:
• Chess – the board is fully observable, and so are the opponent’s moves.
• Driving – the environment is partially observable because what’s around the corner is not
known.
ii. Static vs Dynamic:
• An environment that keeps constantly changing itself while the agent is deliberating or acting is said to be dynamic.
• A roller coaster ride is dynamic as it is set in motion and the environment keeps changing
every instant.
• An idle environment with no change in its state is called a static environment.
• An empty house is static as there’s no change in the surroundings when an agent enters.
iii. Discrete vs Continuous:
• If an environment consists of a finite number of actions that can be deliberated in the
environment to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete, as it has only a finite number of moves. The number of moves
might vary with every game, but it is still finite.
• An environment in which the actions cannot be counted, i.e. one that is not discrete,
is said to be continuous.
• Self-driving cars are an example of continuous environments, as their actions (driving,
parking, etc.) cannot be enumerated.
iv. Deterministic vs Stochastic:
• When the agent's current state and chosen action completely determine the next state of the
environment, the environment is said to be deterministic.
• A stochastic environment is random in nature; the next state is not unique and cannot be
completely determined by the agent.
Examples:
• Chess – there are only a few possible moves for a piece in the current state, and these
moves can be determined.
• Self-driving cars – the actions of a self-driving car are not unique; they vary from time to time.
v.Single-agent vs multi-agent:
•An environment consisting of only one agent is said to be a single-agent environment.
•A person left alone in a maze is an example of the single-agent system.
•An environment involving more than one agent is a multi-agent environment.
•The game of football is multi-agent as it involves 11 players in each team.
vi. Episodic vs sequential:
•In an Episodic task environment, each of the agent’s actions is divided into atomic incidents
or episodes. There is no dependency between current and previous incidents. In each
incident, an agent receives input from the environment and then performs the corresponding
action.
•Example: Consider an example of Pick and Place robot, which is used to detect defective
parts from the conveyor belts. Here, every time robot (agent) will make the decision on the
current part i.e. there is no dependency between current and previous decisions.
•In a Sequential environment, previous decisions can affect all future decisions. The next
action of the agent depends on what actions it has taken previously and what actions it is
supposed to take in the future.
Example:
Checkers- Where the previous move can affect all the following moves.
vii.Known vs Unknown: In a known environment, the output for all probable actions is given.
Obviously, in case of unknown environment, for an agent to make a decision, it has to gain
knowledge about how the environment works.
2. Test the Turing machine for human and machine intelligence. (5 marks)
Ans:
Turing Test in AI:
• In 1950, Alan Turing introduced a test to check whether a machine can think like a human;
this test is known as the Turing Test.
• In this test, Turing proposed that a computer can be said to be intelligent if it can mimic
human responses under specific conditions.
• The Turing Test was introduced by Turing in his 1950 paper, "Computing Machinery and
Intelligence," which considered the question, "Can machines think?".
• The Turing test is based on a party game, the "imitation game," with some modifications.
• This game involves three players: one player is a computer, another is a human responder,
and the third is a human interrogator, who is isolated from the other two players and whose
job is to identify which of the two is the machine.
• Consider: Player A is a computer, Player B is a human, and Player C is the interrogator.
The interrogator knows that one of them is a machine but must identify which, on the basis
of questions and their responses.
• The conversation between all players is via keyboard and screen, so the result does not
depend on the machine's ability to render words as speech.
• The test result does not depend on each answer being correct, but on how closely the
computer's responses resemble those of a human. The computer is permitted to do everything
possible to force a wrong identification by the interrogator.
• The questions and answers can be like:
o Interrogator: Are you a computer?
o Player A (Computer): No
o Interrogator: Multiply two large numbers such as (256896489*456725896)
o Player A: Long pause and give the wrong answer.
• In this game, if the interrogator is unable to identify which player is the machine and which
is the human, the computer passes the test successfully, and the machine is said to be
intelligent and able to think like a human.
3. Discuss any 5 Applications of AI in detail. (10 marks)
Ans: Artificial Intelligence has various applications in today's society. It is becoming essential
for today's world because it can solve complex problems in an efficient way across multiple
industries, such as healthcare, entertainment, finance, and education. AI is making our daily
life more comfortable and faster. The following are some sectors that apply Artificial
Intelligence:
i. AI in Astronomy
• Artificial Intelligence can be very useful for solving complex problems about the universe. AI
technology can help in understanding the universe: how it works, its origin, and so on.
ii. AI in Healthcare
• Over the last five to ten years, AI has become more advantageous for the healthcare industry
and is going to have a significant impact on it.
• Healthcare industries are applying AI to make better and faster diagnoses than humans.
• AI can help doctors with diagnoses and can give warning when a patient is worsening, so that
medical help can reach the patient before hospitalization.
iii. AI in Gaming
• AI can be used for gaming purposes. AI machines can play strategic games like chess,
where the machine needs to think about a large number of possible positions.
iv. AI in Finance
• The AI and finance industries are the best match for each other.
• The finance industry is implementing automation, chatbots, adaptive intelligence,
algorithmic trading, and machine learning in financial processes.
v. AI in Data Security
• The security of data is crucial for every company, and cyber-attacks are growing very
rapidly in the digital world. AI can be used to make data safer and more secure.
• Examples such as the AEG bot and the AI2 platform are used to detect software bugs and
cyber-attacks more effectively.
vi. AI in Social Media
• Social media sites such as Facebook, Twitter, and Snapchat contain billions of user
profiles, which need to be stored and managed in a very efficient way.
• AI can organize and manage massive amounts of data.
• AI can analyze lots of data to identify the latest trends, hashtags, and the requirements of
different users.
vii. AI in Travel & Transport
• AI is in high demand in the travel industry.
• AI can perform various travel-related tasks, from making travel arrangements to suggesting
hotels, flights, and the best routes to customers.
4. List the basic Kinds of intelligent agents and explain any two agents with neat schematic
diagram. (10 marks) (Nov 2021, May 2022, 2023)
(or)
Illustrate in detail about Types of agents with neat sketch. (10 marks) (Nov 2023)
(or)
What are the four basic types of agent program in any intelligent system? Explain how you
would convert them into learning agents. (10 marks) (May 2022)
Types of AI Agents:
• Agents can be grouped into five classes based on their degree of perceived intelligence and
capability. All these agents can improve their performance and generate better actions over
time. They are given below:
i.Simple Reflex Agent
ii.Model-based reflex agent
iii.Goal-based agents
iv.Utility-based agent
v.Learning agent
i. Simple Reflex agent:
• The Simple reflex agents are the simplest agents. These agents take decisions on the basis of
the current percepts and ignore the rest of the percept history.
• These agents only succeed in the fully observable environment.
• The Simple reflex agent does not consider any part of percepts history during their decision and
action process.
• The Simple reflex agent works on the condition-action rule, which means it maps the current
state to an action, such as a room-cleaner agent that works only if there is dirt in the room
(a minimal sketch of such a rule appears after this list).
• Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They have no knowledge of non-perceptual parts of the current state.
o The condition-action rule table is mostly too big to generate and to store.
o They are not adaptive to changes in the environment.
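As a hedged sketch (the percept format and names are assumptions, not a standard API), a condition-action rule for a two-square room-cleaner agent can be written as:

def simple_reflex_vacuum_agent(percept):
    location, status = percept      # only the current percept, no history
    if status == "Dirty":           # condition-action rules
        return "Suck"
    elif location == "A":
        return "MoveRight"
    else:
        return "MoveLeft"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck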
ii. Model-based reflex agent:
• The Model-based agent can work in a partially observable environment, and track the
situation.
• A model-based agent has two important factors:
oModel: It is knowledge about "how things happen in the world," so it is called a Model-
based agent.
oInternal State: It is a representation of the current state based on percept history.
• These agents have the model, "which is knowledge of the world" and based on the model
they perform actions.
• Updating the agent state requires information about:
o How the world evolves
o How the agent's action affects the world.
iii. Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an agent to
decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether
the goal is achieved or not.
• Such consideration of different scenarios is called searching and planning, which makes an
agent proactive.
iv. Utility-based agents
• These agents are similar to the goal-based agent but provide an extra component of utility
measurement, which distinguishes them by providing a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve them.
• The utility-based agent is useful when there are multiple possible alternatives, and the agent
has to choose the best action to perform.
• The utility function maps each state to a real number to check how efficiently each action
achieves the goals (see the sketch after this list).
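A minimal sketch of such a utility-driven choice, where the result and utility functions are problem-specific assumptions made for illustration:

def choose_action(state, actions, result, utility):
    # Pick the action whose successor state has the highest utility.
    return max(actions, key=lambda a: utility(result(state, a)))

# Tiny numeric example: states are numbers; utility rewards closeness to 10.
utility = lambda s: -abs(10 - s)
result = lambda s, a: s + a
print(choose_action(7, [-1, +1, +2], result, utility))   # -> 2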
v. Learning Agents
• A learning agent in AI is the type of agent that can learn from its past experiences, i.e. it has
learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt automatically through
learning.
• A learning agent has mainly four conceptual components, which are:
1. Learning element: It is responsible for making improvements by learning from the environment.
2. Critic: The learning element takes feedback from the critic, which describes how well the agent
is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external actions.
4. Problem generator: This component is responsible for suggesting actions that will lead to new
and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways to
improve it.
5. Evaluate the structure of an agent in detail. (5 marks) (May 2023)
Structure of an AI Agent
• The task of AI is to design an agent program which implements the agent function. The
structure of an intelligent agent is a combination of architecture and agent program. It can
be viewed as:
Agent = Architecture + Agent program
Following are the three main terms involved in the structure of an AI agent:
• Architecture: The machinery that an AI agent executes on.
• Agent Function: The agent function maps a percept sequence to an action:
F: P* → A
• Agent program: The agent program is an implementation of the agent function. It
executes on the physical architecture to produce the function f (a table-driven sketch follows).
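A table-driven sketch of the mapping F: P* → A; the percept format and the lookup table here are assumptions made purely for illustration:

percept_sequence = []
table = {
    (("A", "Clean"),): "MoveRight",
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    # Append the new percept and look up the whole sequence in the table.
    percept_sequence.append(percept)
    return table.get(tuple(percept_sequence), "NoOp")

print(table_driven_agent(("A", "Clean")))   # -> MoveRight
print(table_driven_agent(("B", "Dirty")))   # -> Suck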
Define PEAS Representation.
• PEAS is a type of model upon which an AI agent works. When we define an AI agent or
rational agent, we can group its properties under the PEAS representation model. It is made
up of four words:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
Example: PEAS for self-driving cars:
Let's suppose a self-driving car then PEAS representation will be:
• Performance: Safety, time, legal drive, comfort
• Environment: Roads, other vehicles, road signs, pedestrian
• Actuators: Steering, accelerator, brake, signal, horn
• Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
6. Demonstrate in detail a well-defined problem-solving agent. (5 marks) (Nov 2023).
Well-defined problems and solutions:
A problem can be defined formally by five components:
• The initial state that the agent starts in.
• A description of the possible actions available to the agent.
• A description of what each action does; the formal name for this is the transition model.
• The goal test, which determines whether a given state is a goal state. Sometimes there is an
explicit set of possible goal states, and the test simply checks whether the given state is one of
them.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent
chooses a cost function that reflects its own performance measure.
Examples: toy problems (worked figures not reproduced).
7. Predict the components and state of Vacuum cleaner with neat sketch. (5 marks) (Nov 2023).
Toy problem: the vacuum-cleaner world (sketch not reproduced). Its formulation is:
• States: the agent's location and the dirt status of each square. With two squares there are
2 × 2² = 8 states.
• Initial state: any state can be designated as the initial state.
• Actions: Left, Right, and Suck.
• Transition model: the actions have their expected effects, except that moving Left in the
leftmost square, moving Right in the rightmost square, and Sucking in a clean square have
no effect.
• Goal test: checks whether all squares are clean.
• Path cost: each step costs 1, so the path cost is the number of steps in the path.
8. Define the component of 8 puzzle problem with suitable example. (5 marks) (May 2019)
• State: A state description specifies the location of each of the eight tiles and the blank in one
of the nine squares.
• Initial state: Any state can be designated as the initial state.
• Actions: The simplest formulation defines the actions as movements of the blank space: Left,
Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
• Transition model: Given a state and an action, this returns the resulting state.
• Goal test: This checks whether the state matches the goal configuration. (Other goal
configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
• Conclusion: The right formulation makes a big difference to the size of the search space.
A small sketch of this formulation follows.
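The formulation above can be sketched in Python; the state encoding (a tuple of nine entries with 0 as the blank) and the function names are assumptions chosen for illustration:

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def actions(state):
    # Moves of the blank allowed from its current square.
    i = state.index(0)
    moves = []
    if i % 3 > 0: moves.append("Left")
    if i % 3 < 2: moves.append("Right")
    if i >= 3: moves.append("Up")
    if i <= 5: moves.append("Down")
    return moves

def result(state, action):
    # Transition model: swap the blank with the neighbouring tile.
    i = state.index(0)
    j = {"Left": i - 1, "Right": i + 1, "Up": i - 3, "Down": i + 3}[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL            # path cost: 1 per step (implicit)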
9. Explain about Search Algorithm Terminologies.
Search Algorithm Terminologies:
• Search: Searching is a step-by-step procedure to solve a search-problem in a given search
space. A search problem can have three main factors:
a. Search Space: Search space represents a set of possible solutions, which a system
may have.
b. Start State: It is the state from which the agent begins the search.
c. Goal test: It is a function which observes the current state and returns whether
the goal state is achieved or not.
• Search tree: A tree representation of a search problem is called a search tree. The root of the
search tree is the root node, which corresponds to the initial state.
• Actions: It gives the description of all the available actions to the agent.
• Transition model: A description of what each action does; it can be represented as a transition
model.
• Path Cost: It is a function which assigns a numeric cost to each path.
• Solution: It is an action sequence which leads from the start node to the goal node.
• Optimal Solution: A solution that has the lowest cost among all solutions.
10. What are uninformed search techniques? Explain anyone. (10 marks) (Dec 2023)
(or)
Outline the uninformed search strategies like breadth-first search and depth-first search
with examples. (10 marks) (May 2023)
(or)
Discuss in detail the uninformed search strategies and compare the analysis of various
searches. (10 marks) (Nov 2018)
Ans:
Uninformed/Blind Search:
• Uninformed search does not use any domain knowledge, such as the closeness or location
of the goal.
• It operates in a brute-force way, as it only includes information about how to traverse the tree
and how to identify leaf and goal nodes.
• Uninformed search explores the search tree without any information about the search space
beyond the initial state, the operators, and the goal test, so it is also called blind search.
• It examines each node of the tree until it reaches the goal node.
Two of its main types are:
o Breadth-first search
o Depth-first search
i.BREADTH-FIRST SEARCH (BFS):
• Breadth-first search is the most common search strategy for traversing a tree or graph. The
algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
• The BFS algorithm starts searching from the root node of the tree and expands all successor
nodes at the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general-graph search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G
• Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until QUEUE is empty
• Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
• Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1)
and set their STATUS = 2 (waiting state)
[END OF LOOP]
• Step 6: EXIT
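A minimal Python sketch of the algorithm above, where a FIFO queue plus a visited set play the role of the STATUS flags; the adjacency-list graph is an assumption matching Example 2 below (whose figure is not reproduced):

from collections import deque

def bfs(graph, start):
    visited = {start}               # nodes already enqueued (STATUS 2/3)
    queue = deque([start])          # Step 2: enqueue the starting node
    order = []
    while queue:                    # Step 3
        node = queue.popleft()      # Step 4: dequeue and process
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:    # Step 5: ready-state neighbours
                visited.add(neighbour)
                queue.append(neighbour)
    return order

graph = {40: [10, 20, 30], 10: [60], 20: [50], 30: [70],
         60: [], 50: [], 70: []}
print(bfs(graph, 40))               # [40, 10, 20, 30, 60, 50, 70]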
Example: 1
In the tree structure below (figure not reproduced), we show the traversal of the tree using the
BFS algorithm from the root node S to the goal node K. BFS traverses level by level, so it
follows the path shown by the dotted arrow, and the traversed path will be:
Solution: S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
Example 2: (tree figure not reproduced) Tracing the queue contents during the traversal gives:
Solution: 40, 10, 20, 30, 60, 50, 70
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the
minimal solution which requires the least number of steps.
Disadvantages:
• It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
ii. DEPTH-FIRST SEARCH (DFS):
• Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• It is called the depth-first search because it starts from the root node and follows each path to its
greatest depth node before moving to the next path.
• DFS uses a stack data structure for its implementation.
• The process of the DFS algorithm is similar to the BFS algorithm.
Implementation steps for DFS:
• First, create a stack with the total number of vertices in the graph.
• Now, choose any vertex as the starting point of traversal, and push that vertex into the
stack.
• After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) to the
top of the stack.
• Now, repeat steps 3 and 4 until no vertices are left to visit from the vertex on the stack's
top.
• If no vertex is left, go back and pop a vertex from the stack.
• Repeat steps 2, 3, and 4 until the stack is empty.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G
• Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until STACK is empty
• Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
• Step 5: Push on the stack all the neighbors of N that are in the ready state (whose
STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
• Step 6: EXIT
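A minimal iterative Python sketch of the stack-based algorithm above; the adjacency-list graph format is an assumption matching the narrative of Example 1 below:

def dfs(graph, start):
    visited = set()
    stack = [start]                 # Step 2: push the starting node
    order = []
    while stack:                    # Step 3
        node = stack.pop()          # Step 4: pop and process
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Step 5: push unvisited neighbours; reversed so that the leftmost
        # neighbour ends up on top of the stack and is visited first.
        for neighbour in reversed(graph[node]):
            if neighbour not in visited:
                stack.append(neighbour)
    return order

graph = {"S": ["A"], "A": ["B", "C"], "B": ["D", "E"],
         "C": ["G"], "D": [], "E": [], "G": []}
print(dfs(graph, "S"))              # ['S', 'A', 'B', 'D', 'E', 'C', 'G']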
Example 1:
In the search tree below (figure not reproduced), the flow of depth-first search follows the order:
root node ---> left node ---> right node.
It starts searching from root node S and traverses A, then B, then D and E. After traversing E, it
backtracks, as E has no other successor and the goal node has not yet been found. After
backtracking, it traverses node C and then G, where it terminates, as it has found the goal node.
Solution: S, A, B, D, E, C, G.
Advantages:
• DFS requires very little memory, as it only needs to store a stack of the nodes on the path
from the root node to the current node.
• It takes less time to reach the goal node than the BFS algorithm (if it traverses the
right path).
Disadvantages:
• There is a possibility that many states keep re-occurring, and there is no guarantee of
finding a solution.
• The DFS algorithm goes deep down in its search, and it may sometimes enter an infinite
loop.
11. Difference between BFS and DFS. (5 marks)
• Definition: BFS stands for Breadth-First Search; DFS stands for Depth-First Search.
• Data structure: BFS uses a queue and finds the shortest path in an unweighted graph; DFS
uses a stack and does not guarantee the shortest path.
• Source: BFS is better when the target is closer to the source; DFS is better when the target
is far from the source.
• Suitability for decision trees: As BFS considers all neighbours, it is not suitable for the
decision trees used in puzzle games; DFS is more suitable, as one decision is followed deeper
until a conclusion is reached.
• Speed: BFS is slower than DFS; DFS is faster than BFS.
• Time complexity: Both BFS and DFS have time complexity O(V + E), where V is the number
of vertices and E the number of edges.
• Memory: BFS requires more memory space; DFS requires less memory space.
• Trapping in loops: BFS does not get trapped in infinite loops; DFS (without a visited set) may
get trapped in infinite loops.
• Principle: BFS is implemented using the FIFO (First In, First Out) principle; DFS is
implemented using the LIFO (Last In, First Out) principle.
12. Draw a state-space representation for Hill Climbing. (5 marks) (Nov 2022)
State-space Diagram for Hill Climbing:
• The state-space landscape is a graphical representation of the hill-climbing algorithm,
showing a graph between the various states of the algorithm and the objective function/cost.
• On the Y-axis we take the function, which can be an objective function or a cost function,
and the state space on the X-axis.
• If the function on the Y-axis is cost, the goal of the search is to find the global minimum
and local minima.
• If the function on the Y-axis is an objective function, the goal of the search is to find the
global maximum and local maxima.
• Local maximum: a state which is better than its neighbor states, but there is also another
state which is higher than it.
• Global maximum: the best possible state in the state-space landscape. It has the highest
value of the objective function.
• Current state: the state in the landscape diagram where the agent is currently present.
• Flat local maximum: a flat region of the landscape where all the neighbor states of the
current state have the same value.
• Shoulder: a plateau region which has an uphill edge.
13. What is a heuristic search technique in AI? How does heuristic search work? Explain its
advantages and disadvantages. (5 marks) (Nov 2021, May 2023)
(or)
Explain heuristic search technique with example. (or) Hill climbing with example. (10 marks)
(May 2019).
What is a Heuristic Function?
A heuristic function is a function that ranks the possible alternatives at any branching step in a
search algorithm based on available information. It helps the algorithm select the best route among
various possible paths, thus guiding the search towards a good solution efficiently.
Simple Hill Climbing:
• Simple hill climbing is the simplest way to implement a hill-climbing algorithm.
• It evaluates one neighbor node state at a time, selects the first one that improves the
current cost, and sets it as the current state.
• It checks only one successor state, and if that successor is better than the current state, it
moves; otherwise it stays in the same state.
• This algorithm has the following features:
o Less time consuming
o Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
o Step 1: Evaluate the initial state, if it is goal state then return success and stop.
o Step 2: Loop Until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check new state:
a. If it is goal state, then return success and quit.
b. Else if it is better than the current state then assigns new state as a current
state.
c. Else if it is not better than the current state, then return to step 2.
o Step 5: Exit.
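A hedged Python sketch of simple hill climbing as described above: evaluate one successor at a time and move as soon as a better one is found. The successors and value functions are problem-specific assumptions:

def simple_hill_climbing(initial, successors, value, max_steps=1000):
    current = initial
    for _ in range(max_steps):                     # Step 2: loop
        improved = False
        for candidate in successors(current):      # Step 3: apply an operator
            if value(candidate) > value(current):  # Step 4b: better state
                current = candidate                # take it immediately
                improved = True
                break
        if not improved:                           # no operator improves
            return current                         # Step 5: exit
    return current

# Example: maximize f(x) = -(x - 3)^2 over integer states.
f = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(0, neighbours, f))      # climbs to 3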
Example:
• The key point while solving any hill-climbing problem is to choose an appropriate
heuristic function.
• Let us define such a function h: h(x) = +1 for each block in the support structure that is
correctly positioned, and -1 for each block in the support structure that is incorrectly positioned.
Solution: (worked figure not reproduced)
Advantages of Hill Climbing Algorithm
1. Simplicity and Ease of Implementation: Hill Climbing is a simple and intuitive algorithm that
is easy to understand and implement, making it accessible for developers and researchers alike.
Problems in Hill Climbing Algorithm:
i. Local Maximum: A local maximum is a peak state in the landscape which is better than each of
its neighboring states, but there is another state also present which is higher than the local maximum.
Solution: The backtracking technique can be a solution to the local maximum in the state-space
landscape. Keep a list of promising paths so that the algorithm can backtrack in the search space
and explore other paths as well.
ii. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the
current state have the same value; because of this, the algorithm cannot find any best direction in
which to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution to the plateau is to take big steps, or very small steps, while searching.
Randomly select a state far away from the current state, so that the algorithm may find a
non-plateau region.
iii. Ridges: A ridge is a special form of local maximum. It is an area higher than its
surrounding areas which itself has a slope, and it cannot be reached in a single move.
Solution: With the use of bidirectional search, or by moving in different directions, we can
overcome this problem.
14. Describe the local search algorithm with a neat sketch. (10 marks) (Nov 2020, May 2023)
(or)
Illustrate the working of local search algorithms in continuous space.
• The distinction between discrete and continuous environments matters because most real-
world environments are continuous.
• A discrete (or categorical) variable is a type of statistical variable that can assume only a
fixed number of distinct values.
• A continuous variable, as the name suggests, is a random variable that can assume all
possible values in a continuum.
• Classical search algorithms find a sequence of states leading to the goal node.
• Beyond these "classical search algorithms," we have "local search algorithms," where the
path cost does not matter and the focus is only on the solution state needed to reach the goal node.
o Example: Greedy best-first search algorithm.
• A local search algorithm completes its task by working on a single current node, rather than
multiple paths, and generally follows the neighbors of that node.
o Example: Hill climbing and simulated annealing can handle continuous state and action
spaces, because they have infinite branching factors.
Solution for Continuous Space:
• One way to avoid continuous problems is simply to discretize the neighborhood of each
state.
• Many methods attempt to use the gradient of the landscape to find a maximum. The
gradient of the objective function is a vector ∇f that gives the magnitude and direction of
the steepest slope.
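A minimal gradient-ascent sketch of this idea; the objective f(x, y) = -(x² + y²) and the step size are assumptions chosen purely for illustration:

def grad_f(x, y):
    # Gradient of f(x, y) = -(x^2 + y^2) is (-2x, -2y).
    return (-2 * x, -2 * y)

x, y, alpha = 4.0, -3.0, 0.1        # start state and step size
for _ in range(100):
    gx, gy = grad_f(x, y)
    x, y = x + alpha * gx, y + alpha * gy    # move uphill along the gradient
print(x, y)                          # both values approach 0, the global maximum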
Does the local search algorithm work for a pure optimization problem?
• Yes, local search algorithms work for pure optimization problems.
• A pure optimization problem is one in which every node is a solution; the target is to find
the best state of all according to the objective function.
• However, local search may fail to find the highest-quality solution on the way from the
current state to the goal state.
• Note: An objective function is a function whose value is either minimized or maximized in
different contexts of optimization problems.
• In the case of search algorithms, an objective function can be, for example, the path cost for
reaching the goal node.
Working of a local search algorithm:
Hill climbing, a typical local search algorithm, suffers from the local maximum, plateau, and
ridge problems (and admits the same remedies) described under Question 13 above.
Conclusion:
• Local search often works well on large problems.
• It trades optimality for availability: some answer (the best found so far) is always at hand.

More Related Content

PDF
AI_Unit I notes .pdf
PDF
AI3391 ARTIFICIAL INTELLIGENCE Unit I notes.pdf
PDF
What Is Artificial Intelligence,How It Is Used and Its Future.pdf
PDF
Introduction to Artificial Intelligence Material
PPTX
Chapter Three, four, five and six.ppt ITEtx
PPTX
IoT Sensor Nodes with AI - Lecture Notes
PPT
Chapter -3- Artificial Intelligence (AI).ppt
PPTX
aman presentation 2.pptx
AI_Unit I notes .pdf
AI3391 ARTIFICIAL INTELLIGENCE Unit I notes.pdf
What Is Artificial Intelligence,How It Is Used and Its Future.pdf
Introduction to Artificial Intelligence Material
Chapter Three, four, five and six.ppt ITEtx
IoT Sensor Nodes with AI - Lecture Notes
Chapter -3- Artificial Intelligence (AI).ppt
aman presentation 2.pptx

Similar to 22PCOAM11_Introduction to artificial intelligence _Unit_1_and_Question_Bank (20)

PDF
Lecture 1-Introduction to AI and its application.pdf
PPTX
AI ML Unit-1 in machine learning techniques.pptx.
PPTX
Artificial intelligence ppt
DOCX
Artificial intelligence-full -report.doc
PPTX
Expository Writing presentation on renewable energy
PDF
project-report-on-artificial-intelligence_compress (1).pdf
PPTX
Unit-II-Introduction of Artifiial Intelligence.pptx
PPTX
LEC_2_AI_INTRODUCTION - Copy.pptx
PPTX
Artificial Intelligence fundamentals | Machine Learning | Deep Learning
PPT
Artificial Intelligence
PPTX
Intro to ET [Chapter 03 chgjkljg uyt ftyf f yyf 7].pptx
PPTX
Selected topics in Computer Science
PPTX
UNIT I - AI.pptx
PDF
Chapter three - Artificial intelligence.pdf
PPTX
Unit 1 AI.pptx
PPTX
Emerging Technology chapter 3.pptx
PDF
Applications of Artificial Intelligence & Associated Technologies
PPTX
Introduction to Artificial intelligence.
PPT
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
Lecture 1-Introduction to AI and its application.pdf
AI ML Unit-1 in machine learning techniques.pptx.
Artificial intelligence ppt
Artificial intelligence-full -report.doc
Expository Writing presentation on renewable energy
project-report-on-artificial-intelligence_compress (1).pdf
Unit-II-Introduction of Artifiial Intelligence.pptx
LEC_2_AI_INTRODUCTION - Copy.pptx
Artificial Intelligence fundamentals | Machine Learning | Deep Learning
Artificial Intelligence
Intro to ET [Chapter 03 chgjkljg uyt ftyf f yyf 7].pptx
Selected topics in Computer Science
UNIT I - AI.pptx
Chapter three - Artificial intelligence.pdf
Unit 1 AI.pptx
Emerging Technology chapter 3.pptx
Applications of Artificial Intelligence & Associated Technologies
Introduction to Artificial intelligence.
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
Ad

More from Guru Nanak Technical Institutions (20)

PPTX
22PCOAM21 Data Quality Session 3 Data Quality.pptx
PPTX
22PCOAM21 Session 1 Data Management.pptx
PPTX
22PCOAM21 Session 2 Understanding Data Source.pptx
PDF
III Year II Sem 22PCOAM21 Data Analytics Syllabus.pdf
PDF
22PCOAM16 _ML_Unit 3 Notes & Question bank
PDF
22PCOAM16 Machine Learning Unit V Full notes & QB
PDF
22PCOAM16_MACHINE_LEARNING_UNIT_IV_NOTES_with_QB
PDF
22PCOAM16 ML Unit 3 Full notes PDF & QB.pdf
PPTX
22PCOAM16 Unit 3 Session 23 Different ways to Combine Classifiers.pptx
PPTX
22PCOAM16 Unit 3 Session 22 Ensemble Learning .pptx
PPTX
22PCOAM16 Unit 3 Session 24 K means Algorithms.pptx
PPTX
22PCOAM16 ML Unit 3 Session 18 Learning with tree.pptx
PPTX
22PCOAM16 ML Unit 3 Session 21 Classification and Regression Trees .pptx
PPTX
22PCOAM16 ML Unit 3 Session 20 ID3 Algorithm and working.pptx
PPTX
22PCOAM16 ML Unit 3 Session 19 Constructing Decision Trees.pptx
PDF
22PCOAM16 ML UNIT 2 NOTES & QB QUESTION WITH ANSWERS
PDF
22PCOAM16 _ML_ Unit 2 Full unit notes.pdf
PDF
22PCOAM16_ML_Unit 1 notes & Question Bank with answers.pdf
PDF
22PCOAM16_MACHINE_LEARNING_UNIT_I_NOTES.pdf
PPTX
22PCOAM16 Unit 2 Session 17 Support vector Machine.pptx
22PCOAM21 Data Quality Session 3 Data Quality.pptx
22PCOAM21 Session 1 Data Management.pptx
22PCOAM21 Session 2 Understanding Data Source.pptx
III Year II Sem 22PCOAM21 Data Analytics Syllabus.pdf
22PCOAM16 _ML_Unit 3 Notes & Question bank
22PCOAM16 Machine Learning Unit V Full notes & QB
22PCOAM16_MACHINE_LEARNING_UNIT_IV_NOTES_with_QB
22PCOAM16 ML Unit 3 Full notes PDF & QB.pdf
22PCOAM16 Unit 3 Session 23 Different ways to Combine Classifiers.pptx
22PCOAM16 Unit 3 Session 22 Ensemble Learning .pptx
22PCOAM16 Unit 3 Session 24 K means Algorithms.pptx
22PCOAM16 ML Unit 3 Session 18 Learning with tree.pptx
22PCOAM16 ML Unit 3 Session 21 Classification and Regression Trees .pptx
22PCOAM16 ML Unit 3 Session 20 ID3 Algorithm and working.pptx
22PCOAM16 ML Unit 3 Session 19 Constructing Decision Trees.pptx
22PCOAM16 ML UNIT 2 NOTES & QB QUESTION WITH ANSWERS
22PCOAM16 _ML_ Unit 2 Full unit notes.pdf
22PCOAM16_ML_Unit 1 notes & Question Bank with answers.pdf
22PCOAM16_MACHINE_LEARNING_UNIT_I_NOTES.pdf
22PCOAM16 Unit 2 Session 17 Support vector Machine.pptx
Ad

Recently uploaded (20)

PPTX
additive manufacturing of ss316l using mig welding
PPTX
Foundation to blockchain - A guide to Blockchain Tech
PPTX
Lecture Notes Electrical Wiring System Components
PDF
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
PPTX
OOP with Java - Java Introduction (Basics)
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PPTX
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
PDF
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
PDF
Structs to JSON How Go Powers REST APIs.pdf
PPTX
Construction Project Organization Group 2.pptx
PPT
Project quality management in manufacturing
PPTX
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PPTX
web development for engineering and engineering
PPTX
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
PDF
Arduino robotics embedded978-1-4302-3184-4.pdf
PDF
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
PPTX
UNIT 4 Total Quality Management .pptx
PDF
Operating System & Kernel Study Guide-1 - converted.pdf
PPTX
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
additive manufacturing of ss316l using mig welding
Foundation to blockchain - A guide to Blockchain Tech
Lecture Notes Electrical Wiring System Components
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
OOP with Java - Java Introduction (Basics)
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
Structs to JSON How Go Powers REST APIs.pdf
Construction Project Organization Group 2.pptx
Project quality management in manufacturing
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
web development for engineering and engineering
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
Arduino robotics embedded978-1-4302-3184-4.pdf
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
UNIT 4 Total Quality Management .pptx
Operating System & Kernel Study Guide-1 - converted.pdf
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx

22PCOAM11_Introduction to artificial intelligence _Unit_1_and_Question_Bank

  • 1. 22PCOAM11 INTRODUCTION TO ARTIFICAL INTELLIGENCE B.TECH II YEAR – IV SEM (R22) (2024-2025) Prepared By Asst.Prof.M.Gokilavani Department of Artificial Intelligence and Machine Learning
  • 2. R22 B.Tech. CSE (AI and ML) Syllabus JNTU Hyderabad INTRODUCTION TO ARTIFICIAL INTELLIGENCE B.Tech. II Year II Sem. L T P C 3 0 0 3 Prerequisite: Knowledge on Data Structures. Course Objectives: ● To learn the distinction between optimal reasoning Vs. human like reasoning. ● To understand the concepts of state space representation, exhaustive search, heuristic ● search together with the time and space complexities. ● To learn different knowledge representation techniques. ● To understand the applications of AI, namely game playing, theorem proving, and machine learning. Course Outcomes: ● Learn the distinction between optimal reasoning Vs human like reasoning and formulate an efficient problem space for a problem expressed in natural language. Also select a search algorithm for a problem and estimate its time and space complexities. ● Apply AI techniques to solve problems of game playing, theorem proving, and machine learning. ● Learn different knowledge representation techniques. ● Understand the concepts of state space representation, exhaustive search, heuristic search together with the time and space complexities. ● Comprehend the applications of Probabilistic Reasoning and Bayesian Networks. ● Analyze Supervised Learning Vs. Learning Decision Trees UNIT - I Introduction to AI - Intelligent Agents, Problem-Solving Agents, Searching for Solutions - Breadth-first search, Depth-first search, Hill-climbing search, Simulated annealing search, Local Search in Continuous Spaces. UNIT-II Games - Optimal Decisions in Games, Alpha–Beta Pruning, Defining Constraint Satisfaction Problems, Constraint Propagation, Backtracking Search for CSPs, Knowledge-Based Agents, Logic- Propositional Logic, Propositional Theorem Proving: Inference and proofs, Proof by resolution, Horn clauses and definite clauses. UNIT-III First-Order Logic - Syntax and Semantics of First-Order Logic, Using First Order Logic, Knowledge Engineering in First-Order Logic. Inference in First-Order Logic: Propositional vs. First-Order Inference, Unification, Forward Chaining, Backward Chaining, Resolution. Knowledge Representation: Ontological Engineering, Categories and Objects, Events. UNIT-IV Planning - Definition of Classical Planning, Algorithms for Planning with State Space Search, Planning Graphs, other Classical Planning Approaches, Analysis of Planning approaches. Hierarchical Planning. UNIT-V Probabilistic Reasoning: Acting under Uncertainty, Basic Probability Notation Bayes’ Rule and Its Use, Probabilistic Reasoning, Representing Knowledge in an Uncertain Domain, The Semantics of Bayesian Networks, Efficient
  • 3. R22 B.Tech. CSE (AI and ML) Syllabus JNTU Hyderabad Representation of Conditional Distributions, Approximate Inference in Bayesian Networks, Relational and First- Order Probability. TEXT BOOK: 1. Artificial Intelligence: A Modern Approach, Third Edition, Stuart Russell and Peter Norvig, Pearson Education. REFERENCE BOOKS: 1. Artificial Intelligence, 3rd Edn., E. Rich and K. Knight (TMH) 2. Artificial Intelligence, 3rd Edn., Patrick Henny Winston, Pearson Education. 3. Artificial Intelligence, Shivani Goel, Pearson Education. 4. Artificial Intelligence and Expert systems – Patterson, Pearson Education.
  • 4. UNIT I Introduction to AI - Intelligent Agents, Problem-Solving Agents Searching for Solutions - Breadth-first search, Depth-first search, Hill-climbing search, Simulated annealing search, Local Search in Continuous Spaces. 1. INTRODUCTION TO AI:  AI is one of the fascinating and universal fields of Computer science which has a great scope in future. AI holds a tendency to cause a machine to work as a human.  Artificial Intelligence is composed of two words Artificial and Intelligence, where Artificial defines "man-made," and intelligence defines "thinking power", hence AI means "a man-made thinking power."  Artificial Intelligence exists when a machine can have human based skills such as learning, reasoning, and solving problems.  With Artificial Intelligence you do not need to preprogram a machine to do some work, despite that you can create a machine with programmed algorithms which can work with own intelligence, and that is the awesomeness of AI.  It is believed that AI is not a new technology, and some people says that as per Greek myth, there were Mechanical men in early days which can work and behave like humans. Turing Test in AI:  In 1950, Alan Turing introduced a test to check whether a machine can think like a human or not, this test is known as the Turing Test.  In this test, Turing proposed that the computer can be said to be an intelligent if it can mimic human response under specific conditions.
  • 5.  Turing Test was introduced by Turing in his 1950 paper, "Computing Machinery and Intelligence," which considered the question, "Can Machine think?".  The Turing test is based on a party game "Imitation game," with some modifications.  This game involves three players in which one player is Computer, another player is human responder, and the third player is a human Interrogator, who is isolated from other two players and his job is to find that which player is machine among two of them.  Consider, Player A is a computer, Player B is human, and Player C is an interrogator. Interrogator is aware that one of them is machine, but he needs to identify this on the basis of questions and their responses.  The conversation between all players is via keyboard and screen so the result would not depend on the machine's ability to convert words as speech.  The test result does not depend on each correct answer, but only how closely its responses like a human answer. The computer is permitted to do everything possible to force a wrong identification by the interrogator.  The questions and answers can be like: o Interrogator: Are you a computer? o Player A (Computer): No o Interrogator: Multiply two large numbers such as (256896489*456725896) o Player A: Long pause and give the wrong answer.  In this game, if an interrogator would not be able to identify which is a machine and which is human, then the computer passes the test successfully, and the machine is said to be intelligent and can think like a human.
  • 6.  "In 1991, the New York businessman Hugh Loebner announces the prize competition, offering a $100,000 prize for the first computer to pass the Turing test. However, no AI program to till date, come close to passing an undiluted Turing test". Goals of Artificial Intelligence: Following are the main goals of Artificial Intelligence: 1. Replicate human intelligence 2. Solve Knowledge-intensive tasks 3. An intelligent connection of perception and action 4. Building a machine which can perform tasks that requires human intelligence such as: o Proving a theorem o Playing chess o Plan some surgical operation o Driving a car in traffic 5. Creating some system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and can advise to its user. Application of AI:  Artificial Intelligence has various applications in today's society. It is becoming essential for today's time because it can solve complex problems with an efficient way in multiple industries, such as Healthcare, entertainment, finance, education, etc. AI is making our daily life more comfortable and fast.  Following are some sectors which have the application of Artificial Intelligence: 1. AI in Astronomy  Artificial Intelligence can be very useful to solve complex universe problems. AI technology can be helpful for understanding the universe such as how it works, origin, etc. 2. AI in Healthcare  In the last, five to ten years, AI becoming more advantageous for the healthcare industry and going to have a significant impact on this industry.  Healthcare Industries are applying AI to make a better and faster diagnosis than humans.  AI can help doctors with diagnoses and can inform when patients are worsening so that medical help can reach to the patient before hospitalization.
  • 7. 3. AI in Gaming  AI can be used for gaming purpose. The AI machines can play strategic games like chess, where the machine needs to think of a large number of possible places. 4. AI in Finance  AI and finance industries are the best matches for each other.  The finance industry is implementing automation, Chabot, adaptive intelligence, algorithm trading, and machine learning into financial processes. 5. AI in Data Security  The security of data is crucial for every company and cyber-attacks are growing very rapidly in the digital world. AI can be used to make your data more safe and secure.  Some examples such as AEG bot, AI2 Platform, are used to determine software bug and cyber-attacks in a better way. 6. AI in Social Media  Social Media sites such as Face book, Twitter, and Snap chat contain billions of user profiles, which need to be stored and managed in a very efficient way.  AI can organize and manage massive amounts of data.  AI can analyze lots of data to identify the latest trends, hash tag, and requirement of different users. 7. AI in Travel & Transport  AI is becoming highly demanding for travel industries.  AI is capable of doing various travel related works such as from making travel arrangement to suggesting the hotels, flights, and best routes to the customers.
  • 8.  Travel industries are using AI-powered chat bots which can make human-like interaction with customers for better and fast response. 8. AI in Automotive Industry  Some Automotive industries are using AI to provide virtual assistant to their user for better performance. Such as Tesla has introduced TeslaBot, an intelligent virtual assistant.  Various Industries are currently working for developing self-driven cars which can make your journey more safe and secure. 9. AI in Robotics:  Artificial Intelligence has a remarkable role in Robotics.  Usually, general robots are programmed such that they can perform some repetitive task, but with the help of AI, we can create intelligent robots which can perform tasks with their own experiences without pre-programmed.  Humanoid Robots are best examples for AI in robotics, recently the intelligent Humanoid robot named as Erica and Sophia has been developed which can talk and behave like humans. 10. AI in Entertainment  We are currently using some AI based applications in our daily life with some entertainment services such as Netflix or Amazon.  With the help of ML/AI algorithms, these services show the recommendations for programs or shows. 11. AI in Agriculture  Agriculture is an area which requires various resources, labor, money, and time for best result. Now a day's agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI as agriculture robotics, solid and crop monitoring, predictive analysis. AI in agriculture can be very helpful for farmers. 12. AI in E-commerce  AI is providing a competitive edge to the e-commerce industry, and it is becoming more demanding in the e-commerce business.  AI is helping shoppers to discover associated products with recommended size, color, or even brand. 13. AI in education:  AI can automate grading so that the tutor can have more time to teach.  AI Chabot can communicate with students as a teaching assistant.  AI in the future can be work as a personal virtual tutor for students, which will be accessible easily at any time and any place. 2. INTELLIGENT AGENTS: Types of AI Agents:
  • 9.  Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better action over the time. These are given below: o Simple Reflex Agent o Model-based reflex agent o Goal-based agents o Utility-based agent o Learning agent i. Simple Reflex agent:  The Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history.  These agents only succeed in the fully observable environment.  The Simple reflex agent does not consider any part of percepts history during their decision and action process.  The Simple reflex agent works on Condition-action rule, which means it maps the current state to action. Such as a Room Cleaner agent, it works only if there is dirt in the room.  Problems for the simple reflex agent design approach: o They have very limited intelligence o They do not have knowledge of non-perceptual parts of the current state o Mostly too big to generate and to store. o Not adaptive to changes in the environment. ii. Model-based reflex agent:  The Model-based agent can work in a partially observable environment, and track the situation.  A model-based agent has two important factors: o Model: It is knowledge about "how things happen in the world," so it is called a Model- based agent. o Internal State: It is a representation of the current state based on percept history.
  • 10.  These agents have the model, "which is knowledge of the world" and based on the model they perform actions.  Updating the agent state requires information about: o How the world evolves o How the agent's action affects the world. iii. Goal-based agents  The knowledge of the current state environment is not always sufficient to decide for an agent to what to do.  The agent needs to know its goal which describes desirable situations.  Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.  They choose an action, so that they can achieve the goal.  These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not.  Such considerations of different scenario are called searching and planning, which makes an agent proactive.
  • 11. iv. Utility-based agents  These agents are similar to the goal-based agent but provide an extra component of utility measurement which makes them different by providing a measure of success at a given state.  Utility-based agent act based not only goals but also the best way to achieve the goal.  The Utility-based agent is useful when there are multiple possible alternatives, and an agent has to choose in order to perform the best action.  The utility function maps each state to a real number to check how efficiently each action achieves the goals.
  • 12. v. Learning Agents  A learning agent in AI is the type of agent which can learn from its past experiences, or it has learning capabilities.  It starts to act with basic knowledge and then able to act and adapt automatically through learning.  A learning agent has mainly four conceptual components, which are: 1. Learning element: It is responsible for making improvements by learning from environment 2. Critic: Learning element takes feedback from critic which describes that how well the agent is doing with respect to a fixed performance standard. 3. Performance element: It is responsible for selecting external action 4. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.  Hence, learning agents are able to learn, analyze performance, and look for new ways to improve the performance. AGENTS:  An AI system can be defined as the study of the rational agent and its environment. The agents sense the environment through sensors and act on their environment through actuators. An AI agent can have mental properties such as knowledge, belief, intention, etc. What is an Agent? An agent can be anything that perceive its environment through sensors and act upon that environment through actuators. An Agent runs in the cycle of perceiving, thinking, and acting.
  • 13. An agent can be: o Human-Agent: A human agent has eyes, ears, and other organs which work for sensors and hand, legs, vocal tract work for actuators. o Robotic Agent: A robotic agent can have cameras, infrared range finder, NLP for sensors and various motors for actuators. o Software Agent: Software agent can have keystrokes, file contents as sensory input and act on those inputs and display output on the screen. Hence the world around us is full of agents such as thermostat, cell phone, camera, and even we are also agents. Before moving forward, we should first know about sensors, effectors, and actuators.  Sensor: Sensor is a device which detects the change in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.  Actuators: Actuators are the component of machines that converts energy into motion. The actuators are only responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.  Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screen. What is Intelligent Agents?  An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators for achieving goals. An intelligent agent may learn from the environment to achieve their goals. A thermostat is an example of an intelligent agent. Following are the main four rules for an AI agent: o Rule 1: An AI agent must have the ability to perceive the environment. o Rule 2: The observation must be used to make decisions. o Rule 3: Decision should result in an action. o Rule 4: The action taken by an AI agent must be a rational action.
  • 14. What is Rational Agent?  A rational agent is an agent which has clear preference, models uncertainty, and acts in a way to maximize its performance measure with all possible actions.  A rational agent is said to perform the right things. AI is about creating rational agents to use for game theory and decision theory for various real-world scenarios.  For an AI agent, the rational action is most important because in AI reinforcement learning algorithm, for each best possible action, agent gets the positive reward and for each wrong action, an agent gets a negative reward. Define Rationality.  The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of following points: o Performance measure which defines the success criterion. o Agent prior knowledge of its environment. o Best possible actions that an agent can perform. o The sequence of percepts. Structure of an AI Agent  The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as: Agent = Architecture + Agent program Following are the main three terms involved in the structure of an AI agent:  Architecture: Architecture is machinery that an AI agent executes on.  Agent Function: Agent function is used to map a percept to an action. F: P* → A  Agent program: Agent program is an implementation of agent function. An agent program executes on the physical architecture to produce function f. Define PEAS Representation.  PEAS is a type of model on which an AI agent works upon. When we define an AI agent or rational agent, then we can group its properties under PEAS representation model. It is made up of four words: o P: Performance measure o E: Environment o A: Actuators o S: Sensors Here performance measure is the objective for the success of an agent's behavior.
  • 15. Example: PEAS for self-driving cars: Let's suppose a self-driving car then PEAS representation will be:  Performance: Safety, time, legal drive, comfort  Environment: Roads, other vehicles, road signs, pedestrian  Actuators: Steering, accelerator, brake, signal, horn  Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar. Properties of Task Environment:  An environment is everything in the world which surrounds the agent, but it is not a part of an agent itself. An environment can be described as a situation in which an agent is present.  The environment is where agent lives, operate and provide the agent with something to sense and act upon it. An environment is mostly said to be non-feministic. Features of Environment  As per Russell and Norvig, an environment can have various features from the point of view of an agent: 1. Fully observable vs Partially Observable 2. Static vs Dynamic 3. Discrete vs Continuous 4. Deterministic vs Stochastic 5. Single-agent vs Multi-agent 6. Episodic vs sequential 7. Known vs Unknown 1. Fully observable vs Partially Observable: • When an agent sensor is capable to sense or access the complete state of an agent at each point in time, it is said to be a fully observable environment else it is partially observable. • Maintaining a fully observable environment is easy as there is no need to keep track of the history of the surrounding. • An environment is called unobservable when the agent has no sensors in all environments. Examples: • Chess – the board is fully observable, and so are the opponent’s moves. • Driving – the environment is partially observable because what’s around the corner is not known. 2. Static vs Dynamic:  An environment that keeps constantly changing itself when the agent is up with some action is said to be dynamic.  A roller coaster ride is dynamic as it is set in motion and the environment keeps changing every instant.  An idle environment with no change in its state is called a static environment.  An empty house is static as there’s no change in the surroundings when an agent enters.
• 16. 3. Discrete vs Continuous:
 If an environment consists of a finite number of actions that can be deliberated in the environment to obtain the output, it is said to be a discrete environment.
 The game of chess is discrete as it has only a finite number of moves. The number of moves might vary with every game, but still, it’s finite.
 An environment in which the actions cannot be enumerated, i.e., is not discrete, is said to be continuous.
 Self-driving cars are an example of continuous environments, as their actions (driving, parking, etc.) cannot be enumerated.
4. Deterministic vs Stochastic:
• When the agent’s current state and action completely determine the next state of the environment, the environment is said to be deterministic.
• A stochastic environment is random in nature; the next state is not unique and cannot be completely determined by the agent.
Examples:
• Chess – there are only a few possible moves for a piece in the current state, and these moves can be determined.
• Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary from time to time.
5. Single-agent vs Multi-agent:
 An environment consisting of only one agent is said to be a single-agent environment.
 A person left alone in a maze is an example of a single-agent system.
 An environment involving more than one agent is a multi-agent environment.
 The game of football is multi-agent as it involves 11 players in each team.
6. Episodic vs Sequential:
 In an episodic task environment, each of the agent’s actions is divided into atomic incidents or episodes. There is no dependency between current and previous incidents. In each incident, an agent receives input from the environment and then performs the corresponding action.
 Example: Consider a pick-and-place robot, which is used to detect defective parts on conveyor belts. Here, the robot (agent) makes a decision on the current part every time, i.e., there is no dependency between current and previous decisions.
 In a sequential environment, the previous decisions can affect all future decisions. The next action of the agent depends on what action it has taken previously and what action it is supposed to take in the future. Example: Checkers – where the previous move can affect all the following moves.
• 17. 7. Known vs Unknown:
 In a known environment, the outcomes of all probable actions are given. In an unknown environment, for an agent to make a decision, it has to gain knowledge about how the environment works.
3. PROBLEM-SOLVING AGENTS:
 In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use an atomic representation. In this topic, we will learn various problem-solving search algorithms.
Well-defined problems and solutions:
A problem can be defined formally by five components:
• The initial state that the agent starts in.
• A description of the possible actions available to the agent.
• A description of what each action does; the formal name for this is the transition model.
• The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure.
Example: 1 Romania (see the sketch below)
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest.
• Formulate goal: be in Bucharest.
• Formulate problem: states: various cities; actions: drive between cities.
• Find solution: a sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.
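As a sketch of the five components above, here is a minimal, hypothetical formulation of the Romania route-finding problem in Python. The road map is a small fragment with assumed distances, and the names (RouteProblem, ROADS) are illustrative:

```python
# A minimal sketch of the five problem components for the Romania example.
# The road map is a small hypothetical fragment, not the full map.

ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial                  # initial state
        self.goal = goal

    def actions(self, state):                   # possible actions
        return list(ROADS[state])

    def result(self, state, action):            # transition model
        return action                           # driving to a city puts us there

    def goal_test(self, state):                 # goal test
        return state == self.goal

    def step_cost(self, state, action):         # path cost sums these steps
        return ROADS[state][action]

problem = RouteProblem("Arad", "Bucharest")
print(problem.actions("Arad"))   # -> ['Sibiu', 'Timisoara', 'Zerind']
```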
• 18. Example: 2 Toy problems
• Problems intended to illustrate or exercise various problem-solving methods.
• E.g., puzzles, chess, etc.
Example: 3 Real-world problems
• Problems that tend to be more difficult and whose solutions people actually care about.
• E.g., design, planning, etc.
Example: 4 Toy Problem
Possible states of a vacuum cleaner (toy problem):
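The vacuum-cleaner toy problem can be sketched directly. Assuming the usual two-square world (squares A and B), a state is the agent's location plus the dirt status of each square, giving 2 × 2 × 2 = 8 states; the code below is an illustrative sketch, not a fixed API:

```python
# A small sketch of the vacuum-world toy problem: a state is the agent's
# location plus the dirt status of squares A and B (8 states in total).

from itertools import product

STATES = [(loc, dirt_a, dirt_b)
          for loc, dirt_a, dirt_b
          in product(["A", "B"], [True, False], [True, False])]

def result(state, action):
    """Transition model for the actions Left, Right, and Suck."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    return state

def goal_test(state):
    return not state[1] and not state[2]    # both squares clean

print(len(STATES))                           # -> 8
print(result(("A", True, True), "Suck"))     # -> ('A', False, True)
```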
  • 19. The goal of the 8-queens problem is to place eight queens on a chess-board such that no queen attacks any other. • States: Any arrangement of 0 to 8 queens on the board is a state. • Initial state: No queens on the board. • Actions: Add a queen to any empty square. • Transition model: Returns the board with a queen added to the specified square. • Goal test: 8 queens are on the board, none attacked.
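A minimal sketch of this incremental 8-queens formulation follows, with the common row-by-row refinement of placing one queen per row; the helper names are illustrative assumptions:

```python
# Sketch of the incremental 8-queens formulation above: a state is a list
# of columns, one queen per row placed so far.

def attacks(qs, col):
    """Would placing a queen in `col` on the next row attack any queen?"""
    row = len(qs)
    return any(c == col or abs(c - col) == abs(r - row)
               for r, c in enumerate(qs))

def actions(qs):
    """Add a queen to any non-attacked column of the next row."""
    return [c for c in range(8) if not attacks(qs, c)]

def goal_test(qs):
    return len(qs) == 8            # 8 queens placed, none attacking

# Usage: a queen at row 0, column 0 attacks column 1 (diagonal) but not 2.
print(attacks([0], 1))   # -> True
print(attacks([0], 2))   # -> False
```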
• 20. Example: 8-Puzzle Problem. Possible moves for the 8-queens problem (figure):
4. SEARCHING FOR SOLUTIONS:
• A solution is found by searching through the state space.
• A problem is transformed into a search tree generated by the initial state and the successor function.
Search Tree:
• Initial state: the root of the search tree is a search node.
• Expanding: applying the successor function to the current state, thereby generating a new set of states.
• Leaf nodes: the states having no successors.
• 21. Fringe: the set of search nodes that have not been expanded yet.
Search tree components:
A node has five components:
• STATE: which state it is in the state space.
• PARENT-NODE: the node from which it is generated.
• ACTION: the action applied to its parent node to generate it.
• PATH-COST: the cost, g(n), from the initial state to the node n itself.
• DEPTH: the number of steps along the path from the initial state.
Search Algorithm Terminologies:
 Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
a. Search space: Search space represents the set of possible solutions which a system may have.
b. Start state: The state from which the agent begins the search.
c. Goal test: A function which observes the current state and returns whether the goal state is achieved or not.
 Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
 Actions: A description of all the actions available to the agent.
 Transition model: A description of what each action does, represented as a transition model.
 Path cost: A function which assigns a numeric cost to each path.
 Solution: An action sequence which leads from the start node to the goal node.
 Optimal solution: A solution that has the lowest cost among all solutions.
i. Properties of Search Algorithms (or) Measuring problem-solving performance:
Following are the four essential properties of search algorithms, used to compare their efficiency:
 Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any random input.
 Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, then it is said to be an optimal solution.
 Time complexity: A measure of the time an algorithm takes to complete its task.
 Space complexity: The maximum storage space required at any point during the search, expressed in terms of the complexity of the problem.
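The five node components listed at the start of this section map directly onto a small data structure. This is an illustrative Python sketch, not a standard library class:

```python
# Minimal sketch of a search-tree node with the five components:
# STATE, PARENT-NODE, ACTION, PATH-COST g(n), and DEPTH.

class Node:
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state                               # STATE
        self.parent = parent                             # PARENT-NODE
        self.action = action                             # ACTION
        self.path_cost = (parent.path_cost if parent else 0) + step_cost  # g(n)
        self.depth = parent.depth + 1 if parent else 0   # DEPTH

    def solution(self):
        """Trace back through parents to recover the action sequence."""
        node, path = self, []
        while node.parent:
            path.append(node.action)
            node = node.parent
        return list(reversed(path))

# Usage: root -> child reached by action "Right" with step cost 1.
root = Node("S")
child = Node("A", parent=root, action="Right", step_cost=1)
print(child.depth, child.path_cost, child.solution())   # -> 1 1 ['Right']
```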
• 22. 5. TYPES OF SEARCH ALGORITHMS:
Based on the search problems, we can classify search algorithms into uninformed (blind) search and informed (heuristic) search algorithms.
i. Uninformed/Blind Search:
 An uninformed search does not use any domain knowledge, such as the closeness or location of the goal.
 It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes.
 Uninformed search traverses the search tree without any information about the search space beyond the initial state, operators, and goal test, so it is also called blind search.
 It examines each node of the tree until it achieves the goal node. It can be divided into five main types:
o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional search
ii. Informed Search
 Informed search algorithms use domain knowledge.
 In an informed search, problem information is available which can guide the search.
• 23.  Informed search strategies can find a solution more efficiently than an uninformed search strategy. Informed search is also called heuristic search.
 A heuristic is a technique which is not guaranteed to find the best solution, but is designed to find a good solution in a reasonable time.
 Informed search can solve much more complex problems than could be solved otherwise.
 An example problem for informed search algorithms is the traveling salesman problem.
1. Greedy search
2. A* search
6. BREADTH-FIRST SEARCH (BFS):
 Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
 The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
 The breadth-first search algorithm is an example of a general graph-search algorithm.
 Breadth-first search is implemented using a FIFO queue data structure.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G
• Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until QUEUE is empty
• Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
• Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
• Step 6: EXIT
Example: 1 In the tree structure below, we show the traversal of the tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:
Solution: S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
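A minimal BFS sketch with a FIFO queue, matching the algorithm above; the parents map plays the role of the ready/waiting STATUS flags. The graph literal is a hypothetical adjacency list loosely following the S-to-K example:

```python
# A minimal BFS sketch using a FIFO queue (collections.deque).

from collections import deque

def bfs(graph, start, goal):
    frontier = deque([start])              # FIFO queue (waiting state)
    parents = {start: None}                # also serves as the visited set

    while frontier:
        node = frontier.popleft()          # dequeue and process
        if node == goal:
            path = []
            while node is not None:        # reconstruct path via parents
                path.append(node)
                node = parents[node]
            return list(reversed(path))
        for nb in graph.get(node, []):
            if nb not in parents:          # enqueue only "ready" nodes
                parents[nb] = node
                frontier.append(nb)
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G", "H"],
         "C": [], "D": [], "G": ["I"], "H": ["K"], "I": [], "K": []}
print(bfs(graph, "S", "K"))   # -> ['S', 'B', 'H', 'K']
```

Because the frontier is expanded level by level, the first path found to the goal is also the shallowest one, which is why BFS returns the minimal-step solution noted below.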
• 26. Practice problem: (figure)
Advantages:
 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:
 It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the next level.
 BFS needs a lot of time if the solution is far away from the root node.
7. DEPTH-FIRST SEARCH (DFS):
 Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
 It is called depth-first search because it starts from the root node and follows each path to its greatest-depth node before moving to the next path.
 DFS uses a stack data structure for its implementation.
 The process of the DFS algorithm is similar to the BFS algorithm.
Implementation steps for DFS:
• First, create a stack that can hold the total number of vertices in the graph.
• Now, choose any vertex as the starting point of traversal, and push that vertex onto the stack.
• After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) onto the top of the stack.
• Now, repeat steps 3 and 4 until no vertices are left to visit from the vertex on the stack's top.
• If no vertex is left, go back and pop a vertex from the stack.
• Repeat steps 2, 3, and 4 until the stack is empty.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G
• Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until STACK is empty
• Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
• 27. • Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
• Step 6: EXIT
Example 1: In the search tree below, we show the flow of depth-first search; it follows the order: root node ---> left node ----> right node. It starts searching from the root node S and traverses A, then B, then D and E; after traversing E it backtracks the tree, as E has no other successor and the goal node is still not found. After backtracking it traverses node C and then G, where it terminates as it has found the goal node.
Solution: S, A, B, D, E, C, G.
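A minimal DFS sketch with an explicit stack, matching the steps above. Neighbours are pushed in reverse so the leftmost child is explored first; the graph literal is a hypothetical tree following Example 1:

```python
# A minimal DFS sketch using an explicit stack of (node, path) pairs.

def dfs(graph, start, goal):
    stack = [(start, [start])]             # stack holds node plus its path
    visited = set()

    while stack:
        node, path = stack.pop()           # pop the top node and process it
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nb in reversed(graph.get(node, [])):
            if nb not in visited:          # push only unvisited neighbours
                stack.append((nb, path + [nb]))
    return None

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"],
         "C": ["G"], "D": [], "E": [], "G": []}
# Returns the path ['S', 'C', 'G']; the visit order is S, A, B, D, E, C, G,
# matching the backtracking behaviour described in Example 1.
print(dfs(graph, "S", "G"))
```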
• 28. Example 2:
Example 3:
Advantages:
 DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
 It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right path).
Disadvantages:
 There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
 The DFS algorithm goes for deep-down searching, and it may sometimes enter an infinite loop.
• 30. 8. BEYOND CLASSICAL SEARCH:
• We have seen methods that systematically explore the search space, possibly using principled pruning (e.g., A*). What if we have much larger search spaces?
• Search spaces for some real-world problems may be much larger, e.g., 10^30,000 states, as in certain reasoning and planning tasks.
• Some of these problems can be solved by iterative improvement methods.
Local search algorithms and optimization problems:
• In many optimization problems the goal state itself is the solution.
• The state space is a set of complete configurations.
• Search is about finding the optimal configuration (as in TSP) or just a feasible configuration (as in scheduling problems).
• In such cases, one can use iterative improvement, or local search, methods.
• An evaluation, or objective, function h must be available that measures the quality of each state.
• Main idea: start with a random initial configuration and make small, local changes to it that improve its quality.
Hill Climbing Algorithm:
• In the hill-climbing technique, starting at the base of a hill, we walk upwards until we reach the top of the hill.
• In other words, we start with an initial state and keep improving the solution until it is optimal.
• It is a variation of the generate-and-test algorithm which discards all states that do not look promising or seem unlikely to lead us to the goal state.
• To take such decisions, it uses heuristics (an evaluation function) which indicate how close the current state is to the goal state.
Hill-Climbing = generate-and-test + heuristics
Features of the hill-climbing algorithm:
• Generate-and-test variant: Hill climbing is a variant of the generate-and-test method. The generate-and-test method produces feedback which helps to decide which direction to move in the search space.
• Greedy approach: Hill-climbing search moves in the direction which optimizes the cost.
• No backtracking: It does not backtrack the search space, as it does not remember previous states.
• 31. State-space Diagram for Hill Climbing:
 The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
 On the Y-axis we take the function, which can be an objective function or a cost function, and state space on the X-axis.
 If the function on the Y-axis is cost, then the goal of the search is to find the global minimum and local minima.
 If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum and local maxima.
• Local maximum: a state which is better than its neighbor states, but there is also another state which is higher than it.
• Global maximum: the best possible state of the state-space landscape. It has the highest value of the objective function.
• Current state: the state in the landscape diagram where the agent is currently present.
• Flat local maximum: a flat space in the landscape where all the neighbor states of the current state have the same value.
• Shoulder: a plateau region which has an uphill edge.
Types of Hill Climbing Algorithm:
o Simple hill climbing
o Steepest-ascent hill climbing
o Stochastic hill climbing
• 32. 1. Simple Hill Climbing:
 Simple hill climbing is the simplest way to implement a hill-climbing algorithm.
 It evaluates only one neighbor node state at a time and selects the first one which improves the current cost, setting it as the current state.
 It checks only one successor state at a time; if that state is better than the current state, it moves there, else it stays in the same state.
 This algorithm has the following features:
o Less time consuming
o Less optimal solution, and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
o Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
a. If it is the goal state, then return success and quit.
b. Else, if it is better than the current state, then assign the new state as the current state.
c. Else, if it is not better than the current state, then return to step 2.
o Step 5: Exit.
Example (a code sketch of the loop follows):
• The key point while solving any hill-climbing problem is to choose an appropriate heuristic function.
• Let's define such a function h: h(x) = +1 for each block in the support structure that is correctly positioned, otherwise -1 for each block in the support structure.
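A minimal sketch of the simple hill-climbing loop just described, assuming the caller supplies a neighbours function and a heuristic h to maximize; the toy objective in the usage line is an illustrative stand-in for a real problem:

```python
# Simple hill climbing: take the FIRST neighbour that improves h;
# stop when no neighbour does (possibly at a local maximum).

def simple_hill_climbing(start, neighbours, h):
    current = start
    while True:
        improved = False
        for candidate in neighbours(current):
            if h(candidate) > h(current):   # first improving successor wins
                current = candidate
                improved = True
                break
        if not improved:                    # no better neighbour: stop
            return current

# Usage: maximize h(x) = -(x - 3)^2 over integer states.
result = simple_hill_climbing(0, lambda x: [x - 1, x + 1],
                              lambda x: -(x - 3) ** 2)
print(result)   # -> 3
```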
• 33. Solution: (figure)
2. Steepest-Ascent Hill Climbing:
 The steepest-ascent algorithm is a variation of the simple hill-climbing algorithm.
 This algorithm examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state.
 This algorithm consumes more time, as it searches multiple neighbors.
Algorithm for Steepest-Ascent Hill Climbing:
o Step 1: Evaluate the initial state; if it is the goal state, then return success and stop, else make the initial state the current state.
o Step 2: Loop until a solution is found or the current state does not change.
a. Let SUCC be the best successor found so far (initially worse than any successor).
b. For each operator that applies to the current state:
a. Apply the operator and generate a new state.
b. Evaluate the new state.
c. If it is the goal state, then return it and quit; else compare it to SUCC.
d. If it is better than SUCC, then set the new state as SUCC.
e. If SUCC is better than the current state, then set the current state to SUCC.
• 34. o Step 5: Exit.
3. Stochastic Hill Climbing:
 Stochastic hill climbing does not examine all of its neighbors before moving.
 Rather, this search algorithm selects one neighbor node at random and decides whether to choose it as the current state or examine another state.
9. SIMULATED ANNEALING:
 A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum.
 If the algorithm instead applies a random walk, by moving to a random successor, then it may be complete but not efficient.
 Simulated annealing is an algorithm which yields both efficiency and completeness.
 In mechanical terms, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state.
 The same idea is used in simulated annealing, in which the algorithm picks a random move instead of picking the best move.
 If the random move improves the state, it is accepted; otherwise the algorithm accepts the downhill move only with a probability less than 1, and if the move is not accepted, it chooses another one (see the sketch after this slide).
10. LOCAL SEARCH IN CONTINUOUS SPACE:
 The distinction between discrete and continuous environments matters because most real-world environments are continuous.
 A discrete variable, or categorical variable, is a type of statistical variable that can assume only a fixed number of distinct values.
 A continuous variable, as the name suggests, is a random variable that can assume all the possible values in a continuum.
 Classical search leads to a solution state required to reach the goal node.
 But beyond these "classical search algorithms," we have "local search algorithms," where the path cost does not matter and the focus is only on the solution state needed to reach the goal node.
o Example: greedy best-first search.
 A local search algorithm completes its task by operating on a single current node rather than multiple paths, generally following the neighbors of that node.
o Example: Hill climbing and simulated annealing can handle continuous state and action spaces, because these spaces have infinite branching factors.
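A minimal simulated-annealing sketch of the idea described in section 9 above: pick a random move, always accept improvements, accept worsening moves with probability e^(ΔE/T), and cool the temperature gradually. The schedule and parameters below are illustrative choices, not prescribed values:

```python
# Simulated annealing: random moves, probabilistic acceptance of
# downhill moves, and a gradually cooling temperature T.

import math
import random

def simulated_annealing(start, neighbours, value,
                        t0=1.0, cooling=0.995, t_min=1e-4):
    current = start
    t = t0
    while t > t_min:
        candidate = random.choice(neighbours(current))
        delta = value(candidate) - value(current)
        # Accept improvements outright; accept worse moves with
        # probability e^(delta / t), which shrinks as t cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling                       # gradual cooling schedule
    return current

# Usage: the same toy objective as in the hill-climbing sketch.
print(simulated_annealing(0, lambda x: [x - 1, x + 1],
                          lambda x: -(x - 3) ** 2))
```

Early on, high temperature lets the search escape local maxima; as T falls the algorithm behaves more and more like plain hill climbing, which is what gives it both completeness and efficiency.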
• 35. Solution for Continuous Space:
• One way to avoid continuous problems is simply to discretize the neighborhood of each state.
• Many methods attempt to use the gradient of the landscape to find a maximum. The gradient of the objective function is a vector ∇f that gives the magnitude and direction of the steepest slope.
Local search in continuous space: (figure)
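A sketch of the gradient idea just stated: repeatedly move the state along ∇f, i.e., x ← x + α∇f(x). The step size α, iteration count, and toy objective are illustrative assumptions:

```python
# Gradient-based local search in a continuous space: x <- x + alpha * grad_f(x).

def gradient_ascent(x, grad_f, alpha=0.1, steps=100):
    for _ in range(steps):
        x = x + alpha * grad_f(x)      # move along the steepest slope
    return x

# Usage: maximize f(x) = -(x - 3)^2, whose gradient is -2(x - 3).
print(gradient_ascent(0.0, lambda x: -2 * (x - 3)))   # -> approximately 3.0
```

If α is too large the update can overshoot and oscillate; too small and convergence is slow, which is why step-size tuning matters in continuous local search.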
• 36. Does the local search algorithm work for a pure optimization problem?
• Yes, the local search algorithm works for pure optimization problems.
• A pure optimization problem is one where every node can give a solution, but the target is to find the best state of all according to the objective function.
• Unfortunately, pure optimization problems may fail to find high-quality solutions to reach the goal state from the current state.
• Note: An objective function is a function whose value is either minimized or maximized in different contexts of optimization problems.
• In the case of search algorithms, an objective function can be the path cost for reaching the goal node, etc.
Working of a Local Search Algorithm:
Problems in the Hill Climbing Algorithm:
1. Local maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but there is another state present which is higher than the local maximum.
• 37. Solution: The backtracking technique can be a solution to the local maximum in the state-space landscape. Create a list of promising paths so that the algorithm can backtrack the search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm cannot find the best direction in which to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution to the plateau is to take big steps or very little steps while searching. Randomly select a state which is far away from the current state, so it is possible that the algorithm will find a non-plateau region.
3. Ridges: A ridge is a special form of local maximum. It is an area which is higher than its surrounding areas, but which itself has a slope, and which cannot be reached in a single move.
Solution: With the use of bidirectional search, or by moving in different directions, we can alleviate this problem.
Conclusion:
• Local search often works well on large problems: it does not guarantee optimality, but it always has some answer available (the best found so far).
• 38. QUESTION BANK
UNIT 1 QUESTIONS & ANSWERS
2 MARKS
1. Define A.I., or what is A.I.? (May 03, 04)
Artificial intelligence is the branch of computer science that deals with the automation of intelligent behavior. AI gives a basis for developing human-like programs which can be useful for solving real-life problems and thereby become useful to mankind.
2. Define PEAS representation.
PEAS → P - Performance measure, E - Environment, A - Actuators, S - Sensors
Example:
Performance measure: maximizes student's score on test.
Environment: set of students, testing agency.
Actuators: display of exercises, suggestions, corrections.
Sensors: keyboard entry.
3. What is meant by a robotic agent? (May 05)
A machine that looks like a human being and performs various complex acts of a human being. It can do a task efficiently and repeatedly without fault. It works on the basis of a program fed to it; it can have previously stored knowledge from the environment through its sensors. It acts with the help of actuators.
4. Define an agent. (May 03, Dec 09)
An agent is anything (a program, a machine assembly) that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
5. Define a rational agent. (Dec 05, 11, May 10)
A rational agent is one that does the right thing. Here the right thing is the one that will cause the agent to be more successful. That leaves us with the problem of deciding how and when to evaluate the agent's success.
6. List down the characteristics of an intelligent agent. (May 11)
• The IA must learn and improve through interaction with the environment.
• The IA must adapt online and in real-time situations.
• The IA must accommodate new problem-solving rules incrementally.
• The IA must have memory which must exhibit storage and retrieval capabilities.
7. State the concept of rationality. (May 12)
• Rationality is the capacity to generate maximally successful behavior given the available information. Rationality also indicates the capacity to compute the perfectly rational decision given the initially available information.
• The capacity to select the optimal combination of computation-sequence plus action, under the constraint that the action must be selected by the computation, is also rationality.
• Perfect rationality constrains an agent's actions to provide the maximum expectation of success given the information available.
• 39. 8. What are the functionalities of the agent function? (Dec 12)
• The agent function is a mathematical function which maps each and every possible percept sequence to a possible action.
• The major functionality of the agent function is to generate a possible action for each and every percept.
• It helps the agent to get the list of possible actions the agent can take.
• The agent function can be represented in tabular form.
9. Define the structure of an agent program. (May 13)
The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
Agent = Architecture + Agent program
Following are the three main terms involved in the structure of an AI agent:
Architecture: the machinery that an AI agent executes on.
Agent function: used to map a percept sequence to an action. F: P* → A
Agent program: an implementation of the agent function. An agent program executes on the physical architecture to produce the function F.
10. What are the four components to define a problem? Define them. (May 13)
• Initial state: the state in which the agent starts.
• A description of possible actions: the actions which are available to the agent.
• The goal test: the test that determines whether a given state is a goal state.
• A path cost function: the function that assigns a numeric cost (value) to each path. The problem-solving agent is expected to choose a cost function that reflects its own performance measure.
11. What is meant by the Turing test?
• To conduct this test, we need two people and one machine.
• One person, the interrogator (i.e., questioner), will ask questions to one person and one machine. The three of them will be in separate rooms.
• The interrogator knows them only as A and B, so it has to identify which is the person and which is the machine.
• The goal of the machine is to make the interrogator believe that its answers are a person's answers. If the machine succeeds in fooling the interrogator, the machine acts like a human. Programming a computer to pass the Turing test is very difficult.
12. What are the factors that a rational agent should depend on at any given time?
• The performance measure that defines the degree of success.
• Everything that the agent has perceived so far. We call this complete perceptual history the percept sequence.
• What the agent knows about the environment.
• The actions that the agent can perform.
13. List the various types of agent programs.
• 40. • Simple reflex agent program.
• Model-based reflex agent program.
• Goal-based agent program.
• Utility-based agent program.
14. Give the structure of an agent in an environment.
An agent interacts with the environment through sensors and actuators. An agent is anything that can be viewed as perceiving (i.e., understanding) its environment through sensors and acting upon that environment through actuators.
15. Define Agent Function.
It is a mathematical description of the agent's behavior that maps a given percept sequence into an action.
16. Define Agent Program.
The agent function for an agent is implemented by the agent program.
17. Define problem-solving agent.
A problem-solving agent is one kind of goal-based agent, where the agent should select one action from a sequence of actions which leads to desirable states.
18. List the steps involved in a simple problem-solving technique.
i. Goal formulation
ii. Problem formulation
iii. Search
iv. Solution
v. Execution phase
19. What are the components of a problem?
A problem can be defined formally by five components:
• The initial state that the agent starts in.
• A description of the possible actions available to the agent.
• A description of what each action does; the formal name for this is the transition model.
• The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure.
20. Give example problems for Artificial Intelligence.
• Toy problems
• Real-world problems
• 41. 21. Define search tree.
The tree which is constructed for the search process over the state space is called the search tree.
22. Define search node.
A node in the search tree is called a search node; the root of the search tree corresponds to the initial state of the problem.
23. Define fringe.
The collection of nodes that have been generated but not yet expanded is called the fringe or frontier.
24. List out the possible states of a vacuum cleaner with a neat diagram.
25. Difference between DFS and BFS with suitable examples.
• 42. 26. What do you mean by local maxima with respect to search techniques?
• A local maximum is a peak that is higher than each of its neighbor states but lower than the global maximum, i.e., a local maximum is a tiny hill on the surface whose peak is not as high as the main peak (which is the optimal solution).
• Hill climbing fails to find the optimum solution when it encounters local maxima. Any small move from here also makes things worse (temporarily).
• At a local maximum, all the search effort turns out to be wasted; it is like a dead end.
27. Sketch the hill climbing state-space diagram.
• Local maximum: a state which is better than its neighbor states, but there is also another state which is higher than it.
• Global maximum: the best possible state of the state-space landscape. It has the highest value of the objective function.
• Current state: the state in the landscape diagram where the agent is currently present.
• Flat local maximum: a flat space in the landscape where all the neighbor states of the current state have the same value.
• Shoulder: a plateau region which has an uphill edge.
28. State the reason why hill climbing often gets stuck.
• Local maxima are the states where the hill-climbing algorithm is sure to get stuck.
• A local maximum is a peak that is higher than each of its neighbor states, but lower than the global maximum.
• So we have missed the better state here.
• All the search effort turns out to be wasted here; it is like a dead end.
29. How can we avoid ridges and plateaus in hill climbing?
• Ridges and plateaus in hill climbing can be avoided using methods like backtracking and making big jumps.
• Backtracking and making big jumps help to avoid plateaus, whereas the application of multiple rules helps to avoid the problem of ridges.
• 43. 30. List out the components of a search tree.
A node has five components:
• STATE: which state it is in the state space.
• PARENT-NODE: the node from which it is generated.
• ACTION: the action applied to its parent node to generate it.
• PATH-COST: the cost, g(n), from the initial state to the node n itself.
• DEPTH: the number of steps along the path from the initial state.
31. Does the local search algorithm work for a pure optimization problem?
• Yes, the local search algorithm works for pure optimization problems.
• A pure optimization problem is one where every node can give a solution, but the target is to find the best state of all according to the objective function.
• Unfortunately, pure optimization problems may fail to find high-quality solutions to reach the goal state from the current state.
• Note: An objective function is a function whose value is either minimized or maximized in different contexts of optimization problems.
• In the case of search algorithms, an objective function can be the path cost for reaching the goal node, etc.
5- & 10-MARK QUESTIONS & ANSWERS
1. What is an agent? How does it interact with the environment? Explain. (10 marks) (Dec 2023)
Properties of Task Environment:
• An environment is everything in the world which surrounds the agent, but it is not a part of the agent itself. An environment can be described as a situation in which an agent is present.
• The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon. An environment is mostly said to be non-deterministic.
Features of Environment
• As per Russell and Norvig, an environment can have various features from the point of view of an agent:
1. Fully observable vs Partially observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs Sequential
7. Known vs Unknown
i. Fully observable vs Partially observable:
• When an agent's sensors can sense or access the complete state of the environment at each point in time, it is said to be a fully observable environment; otherwise it is partially observable.
• Maintaining a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
Examples:
• Chess – the board is fully observable, and so are the opponent’s moves.
• Driving – the environment is partially observable because what’s around the corner is not known.
• 44. ii. Static vs Dynamic:
• An environment that keeps constantly changing while the agent is acting is said to be dynamic.
• A roller coaster ride is dynamic as it is set in motion and the environment keeps changing every instant.
• An idle environment with no change in its state is called a static environment.
• An empty house is static as there’s no change in the surroundings when an agent enters.
iii. Discrete vs Continuous:
• If an environment consists of a finite number of actions that can be deliberated in the environment to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete as it has only a finite number of moves. The number of moves might vary with every game, but still, it’s finite.
• An environment in which the actions cannot be enumerated, i.e., is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments, as their actions (driving, parking, etc.) cannot be enumerated.
iv. Deterministic vs Stochastic:
• When the agent’s current state and action completely determine the next state of the environment, the environment is said to be deterministic.
• A stochastic environment is random in nature; the next state is not unique and cannot be completely determined by the agent.
Examples:
• Chess – there are only a few possible moves for a piece in the current state, and these moves can be determined.
• Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary from time to time.
v. Single-agent vs Multi-agent:
• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of a single-agent system.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent as it involves 11 players in each team.
vi. Episodic vs Sequential:
• In an episodic task environment, each of the agent’s actions is divided into atomic incidents or episodes. There is no dependency between current and previous incidents. In each incident, an agent receives input from the environment and then performs the corresponding action.
• Example: Consider a pick-and-place robot, which is used to detect defective parts on conveyor belts. Here, the robot (agent) makes a decision on the current part every time, i.e., there is no dependency between current and previous decisions.
• In a sequential environment, the previous decisions can affect all future decisions. The next action of the agent depends on what action it has taken previously and what action it is supposed to take in the future. Example: Checkers – where the previous move can affect all the following moves.
vii. Known vs Unknown:
In a known environment, the outcomes of all probable actions are given. In an unknown environment, for an agent to make a decision, it has to gain knowledge about how the environment works.
• 45. 2. Explain the Turing test for human and machine intelligence. (5 marks)
Ans: Turing Test in AI:
• In 1950, Alan Turing introduced a test to check whether a machine can think like a human or not; this test is known as the Turing Test.
• In this test, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.
• The Turing Test was introduced by Turing in his 1950 paper, "Computing Machinery and Intelligence," which considered the question, "Can machines think?".
• The Turing test is based on a party game, the "imitation game," with some modifications.
• This game involves three players: one player is a computer, another is a human responder, and the third is a human interrogator, who is isolated from the other two players and whose job is to find which of the two players is the machine.
• Consider: Player A is a computer, Player B is a human, and Player C is the interrogator. The interrogator is aware that one of them is a machine, but he needs to identify this on the basis of questions and their responses.
• The conversation between all players is via keyboard and screen, so the result does not depend on the machine's ability to convert words into speech.
• The test result does not depend on each correct answer, but only on how closely the machine's responses resemble human answers. The computer is permitted to do everything possible to force a wrong identification by the interrogator.
• The questions and answers can be like:
o Interrogator: Are you a computer?
o Player A (Computer): No
o Interrogator: Multiply two large numbers such as (256896489 * 456725896)
o Player A: Long pause, then gives a wrong answer.
• In this game, if the interrogator is not able to identify which is the machine and which is the human, then the computer passes the test successfully, and the machine is said to be intelligent and able to think like a human.
• 46. 3. Discuss any 5 applications of AI in detail. (10 marks)
Ans: Artificial Intelligence has various applications in today's society. It is becoming essential for today's time because it can solve complex problems in an efficient way in multiple industries, such as healthcare, entertainment, finance, education, etc. AI is making our daily life more comfortable and faster. Following are some sectors which have applications of Artificial Intelligence:
i. AI in Astronomy
• Artificial Intelligence can be very useful for solving complex universe problems. AI technology can be helpful for understanding the universe, such as how it works, its origin, etc.
ii. AI in Healthcare
• In the last five to ten years, AI has become more advantageous for the healthcare industry and is going to have a significant impact on it.
• Healthcare industries are applying AI to make better and faster diagnoses than humans.
• AI can help doctors with diagnoses and can give warning when patients are worsening, so that medical help can reach the patient before hospitalization.
iii. AI in Gaming
• AI can be used for gaming purposes. AI machines can play strategic games like chess, where the machine needs to think about a large number of possible positions.
iv. AI in Finance
• The AI and finance industries are the best matches for each other.
• The finance industry is implementing automation, chatbots, adaptive intelligence, algorithmic trading, and machine learning in financial processes.
• 47. v. AI in Data Security
• The security of data is crucial for every company, and cyber-attacks are growing very rapidly in the digital world. AI can be used to make data more safe and secure.
• Some examples, such as the AEG bot and the AI2 platform, are used to detect software bugs and cyber-attacks in a better way.
vi. AI in Social Media
• Social media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which need to be stored and managed in a very efficient way.
• AI can organize and manage massive amounts of data.
• AI can analyze lots of data to identify the latest trends, hashtags, and the requirements of different users.
vii. AI in Travel & Transport
• AI is becoming highly in demand in the travel industry.
• AI is capable of doing various travel-related tasks, from making travel arrangements to suggesting hotels, flights, and the best routes to customers.
4. List the basic kinds of intelligent agents and explain any two agents with a neat schematic diagram. (10 marks) (Nov 2021, May 2022, 2023) (or) Illustrate in detail the types of agents with a neat sketch. (10 marks) (Nov 2023) (or) What are the four basic types of agent program in any intelligent system? Explain how you would convert them into learning agents. (10 marks) (May 2022)
Types of AI Agents:
• Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better actions over time. These are given below:
i. Simple reflex agent
ii. Model-based reflex agent
iii. Goal-based agent
iv. Utility-based agent
v. Learning agent
i. Simple Reflex Agent:
• Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history.
• These agents only succeed in a fully observable environment.
• The simple reflex agent does not consider any part of the percept history during its decision and action process.
• The simple reflex agent works on the condition-action rule, which means it maps the current state to an action, such as a room-cleaner agent that works only if there is dirt in the room.
• Problems with the simple reflex agent design approach:
o They have very limited intelligence.
o They do not have knowledge of non-perceptual parts of the current state.
o They are mostly too big to generate and to store.
• 48. o They are not adaptive to changes in the environment.
ii. Model-Based Reflex Agent:
• The model-based agent can work in a partially observable environment and track the situation.
• A model-based agent has two important factors:
o Model: knowledge about "how things happen in the world," which is why it is called a model-based agent.
o Internal state: a representation of the current state based on the percept history.
• These agents have the model, "which is knowledge of the world," and based on the model they perform actions.
• Updating the agent state requires information about:
o How the world evolves.
o How the agent's actions affect the world.
iii. Goal-Based Agents
• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not.
• 49. • Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
iv. Utility-Based Agents
• These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
• The utility-based agent is useful when there are multiple possible alternatives and an agent has to choose the best action to perform.
• The utility function maps each state to a real number to check how efficiently each action achieves the goals.
v. Learning Agents
• A learning agent in AI is the type of agent which can learn from its past experiences, i.e., it has learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has mainly four conceptual components, which are:
• 50. 1. Learning element: responsible for making improvements by learning from the environment.
2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
5. Evaluate the structure of an agent in detail. (5 marks) (May 2023)
Structure of an AI Agent
• The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
Agent = Architecture + Agent program
Following are the three main terms involved in the structure of an AI agent:
• Architecture: the machinery that an AI agent executes on.
• Agent function: used to map a percept sequence to an action. F: P* → A
• Agent program: an implementation of the agent function. An agent program executes on the physical architecture to produce the function F.
Define PEAS Representation.
• PEAS is a type of model on which an AI agent works. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four words:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here the performance measure is the objective for the success of an agent's behavior.
• 51. Example: PEAS for a self-driving car. For a self-driving car, the PEAS representation will be:
• Performance: Safety, time, legal drive, comfort
• Environment: Roads, other vehicles, road signs, pedestrians
• Actuators: Steering, accelerator, brake, signal, horn
• Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar
6. Demonstrate in detail a well-defined problem-solving agent. (5 marks) (Nov 2023)
Well-defined problems and solutions:
A problem can be defined formally by five components:
• The initial state that the agent starts in.
• A description of the possible actions available to the agent.
• A description of what each action does; the formal name for this is the transition model.
• The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure.
Example: 4 Toy Problem
• 52. 7. Predict the components and states of the vacuum cleaner with a neat sketch. (5 marks) (Nov 2023)
Toy Problem:
• 53. 8. Define the components of the 8-puzzle problem with a suitable example. (5 marks) (May 2019)
• States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state.
• Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
(Figure: goal state and initial state.)
• Transition model: Given a state and an action, this returns the resulting state.
• Goal test: This checks whether the state matches the goal configuration. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
(Figure: goal state and initial state.)
• Conclusion: The right formulation makes a big difference to the size of the search space.
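A minimal sketch of this 8-puzzle formulation, representing a state as a tuple of nine tiles with 0 for the blank; the goal configuration and helper names are illustrative assumptions:

```python
# Sketch of the 8-puzzle components: states, actions, transition model,
# goal test. Each step costs 1, so path cost = number of steps.

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}   # blank offsets

def actions(state):
    b = state.index(0)                 # blank position 0..8 (3x3 board)
    acts = []
    if b >= 3: acts.append("Up")
    if b <= 5: acts.append("Down")
    if b % 3 != 0: acts.append("Left")
    if b % 3 != 2: acts.append("Right")
    return acts

def result(state, action):             # transition model: swap blank and tile
    b = state.index(0)
    t = b + MOVES[action]
    s = list(state)
    s[b], s[t] = s[t], s[b]
    return tuple(s)

def goal_test(state, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
    return state == goal

# Usage: one move away from the assumed goal configuration.
s = (1, 2, 3, 4, 5, 6, 7, 0, 8)
print(actions(s))                        # -> ['Up', 'Left', 'Right']
print(goal_test(result(s, "Right")))     # -> True
```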
• 54. 9. Explain the search algorithm terminologies.
Search Algorithm Terminologies:
• Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
a. Search space: Search space represents the set of possible solutions which a system may have.
b. Start state: The state from which the agent begins the search.
c. Goal test: A function which observes the current state and returns whether the goal state is achieved or not.
• Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
• Actions: A description of all the actions available to the agent.
• Transition model: A description of what each action does, represented as a transition model.
• Path cost: A function which assigns a numeric cost to each path.
• Solution: An action sequence which leads from the start node to the goal node.
• Optimal solution: A solution that has the lowest cost among all solutions.
10. What are uninformed search techniques? Explain any one. (10 marks) (Dec 2023) (or) Outline the uninformed search strategies like breadth-first search and depth-first search with examples. (10 marks) (May 2023) (or) Discuss in detail the uninformed search strategies and compare the analysis of various searches. (10 marks) (Nov 2018)
Ans: Uninformed/Blind Search:
• An uninformed search does not use any domain knowledge, such as the closeness or location of the goal.
• It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes.
• Uninformed search traverses the search tree without any information about the search space beyond the initial state, operators, and goal test, so it is also called blind search.
• It examines each node of the tree until it achieves the goal node. The two types discussed here are:
o Breadth-first search
o Depth-first search
i. BREADTH-FIRST SEARCH (BFS):
• Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general graph-search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
• 55. Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G
• Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until QUEUE is empty
• Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
• Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
• Step 6: EXIT
Example: 1 In the tree structure below, we show the traversal of the tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:
Solution: S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Example: 2
• 56. Queue structure: Solution: 40, 10, 20, 30, 60, 50, 70
Example 3:
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:
• It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the next level.
• BFS needs a lot of time if the solution is far away from the root node.
ii. DEPTH-FIRST SEARCH (DFS):
• Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• 57. • It is called depth-first search because it starts from the root node and follows each path to its greatest-depth node before moving to the next path.
• DFS uses a stack data structure for its implementation.
• The process of the DFS algorithm is similar to the BFS algorithm.
Implementation steps for DFS:
• First, create a stack that can hold the total number of vertices in the graph.
• Now, choose any vertex as the starting point of traversal, and push that vertex onto the stack.
• After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) onto the top of the stack.
• Now, repeat steps 3 and 4 until no vertices are left to visit from the vertex on the stack's top.
• If no vertex is left, go back and pop a vertex from the stack.
• Repeat steps 2, 3, and 4 until the stack is empty.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G
• Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until STACK is empty
• Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
• Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
• Step 6: EXIT
Example 1: In the search tree below, we show the flow of depth-first search; it follows the order: root node ---> left node ----> right node. It starts searching from the root node S and traverses A, then B, then D and E; after traversing E it backtracks the tree, as E has no other successor and the goal node is still not found. After backtracking it traverses node C and then G, where it terminates as it has found the goal node.
Solution: S, A, B, D, E, C, G.
• 58. Example 2:
Example 3:
Advantages:
• DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
• It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right path).
Disadvantages:
• There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
• The DFS algorithm goes for deep-down searching, and it may sometimes enter an infinite loop.
11. Difference between BFS and DFS. (5 marks)
• Definition: BFS stands for Breadth First Search; DFS stands for Depth First Search.
• Data structure: BFS uses a queue and can find the shortest path; DFS uses a stack.
• Source: BFS is better when the target is closer to the source; DFS is better when the target is far from the source.
• 59. • Suitability for decision trees: As BFS considers all neighbours, it is not suitable for the decision trees used in puzzle games; DFS is more suitable for decision trees, since with one decision we need to traverse further to augment it, and if we reach the conclusion, we have won.
• Speed: BFS is slower than DFS; DFS is faster than BFS.
• Time complexity: The time complexity of BFS is O(V+E), where V is vertices and E is edges; the time complexity of DFS is also O(V+E).
• Memory: BFS requires more memory space; DFS requires less memory space.
• Trapping in loops: In BFS there is no problem of getting trapped in infinite loops; in DFS we may be trapped in infinite loops.
• Principle: BFS is implemented using the FIFO (First In, First Out) principle; DFS is implemented using the LIFO (Last In, First Out) principle.
12. Draw a state-space representation for hill climbing. (5 marks) (Nov 2022)
State-space Diagram for Hill Climbing:
• The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
• On the Y-axis we take the function, which can be an objective function or a cost function, and state space on the X-axis.
• If the function on the Y-axis is cost, then the goal of the search is to find the global minimum and local minima.
• If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum and local maxima.
• Local maximum: a state which is better than its neighbouring states, but for which another, higher state also exists.
• Global maximum: the best possible state in the state-space landscape; it has the highest value of the objective function.
• Current state: the state in the landscape diagram where the agent is currently present.
• Flat local maximum: a flat region of the landscape where all the neighbouring states of the current state have the same value.
• Shoulder: a plateau region which has an uphill edge.

13. What is a heuristic search technique in AI? How does heuristic search work? Explain its advantages and disadvantages. (5 marks) (Nov 2021, May 2023) (or) Explain the heuristic search technique with an example. (or) Hill climbing with an example. (10 marks) (May 2019).

What is a Heuristic Function?
A heuristic function ranks the possible alternatives at any branching step of a search algorithm based on the available information. It helps the algorithm select the best route among the various possible paths, guiding the search towards a good solution efficiently.

Simple Hill Climbing:
• Simple hill climbing is the simplest way to implement a hill-climbing algorithm.
• It evaluates one neighbouring node state at a time and selects the first one that improves the current cost, setting it as the current state.
• It checks only one successor state; if that successor is better than the current state, it moves there, otherwise it stays in the same state.
• This algorithm has the following features:
o Less time consuming
o Less optimal solution, and the solution is not guaranteed

Algorithm for Simple Hill Climbing:
o Step 1: Evaluate the initial state; if it is the goal state, return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
a. If it is the goal state, return success and quit.
b. Else, if it is better than the current state, assign the new state as the current state.
c. Else, if it is not better than the current state, return to Step 2.
o Step 5: Exit.

Example:
• A key point when solving any hill-climbing problem is to choose an appropriate heuristic function.
• Let's define such a function h: h(x) = +1 for every block in the support structure that is correctly positioned, and -1 for every block in the support structure that is incorrectly positioned.
Solution: (figure)

Advantages of the Hill Climbing Algorithm:
1. Simplicity and ease of implementation: hill climbing is a simple and intuitive algorithm that is easy to understand and implement, making it accessible to developers and researchers alike.

Problems in the Hill Climbing Algorithm:
i. Local maximum: a local maximum is a peak state in the landscape which is better than each of its neighbouring states, but another state exists which is higher than the local maximum (see the sketch after this list).
Solution: the backtracking technique can be a solution to the local maximum in the state-space landscape. Create a list of promising paths so that the algorithm can backtrack through the search space and explore other paths as well.
ii. Plateau: a plateau is a flat area of the search space in which all the neighbouring states of the current state have the same value, so the algorithm cannot find any best direction in which to move. A hill-climbing search might get lost in the plateau area.
Solution: the remedy for a plateau is to take either big steps or very small steps while searching. Alternatively, randomly select a state far away from the current state, so that the algorithm may find a non-plateau region.
iii. Ridges: a ridge is a special form of local maximum. It is an area higher than its surrounding areas, but it has a slope of its own and cannot be ascended in a single move.
Solution: bidirectional search, or moving in several different directions at once, can mitigate this problem.
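To make Steps 1–5 of simple hill climbing and the local-maximum trap concrete, here is a minimal Python sketch. The one-dimensional landscape, the objective function, and the one-step neighbours operator are illustrative assumptions, not part of the syllabus text.

```python
# Simple hill climbing on an illustrative 1-D landscape with a local
# maximum at x = 2 (value 0) and a global maximum at x = 8 (value 20).

def objective(x):
    return -(x - 2) ** 2 if x < 5 else 20 - (x - 8) ** 2

def neighbours(x):
    return [x - 1, x + 1]          # operators: move one step left or right

def simple_hill_climbing(start):
    current = start
    while True:
        # Steps 3-4: take the FIRST neighbour that improves the current state.
        better = [n for n in neighbours(current)
                  if objective(n) > objective(current)]
        if not better:
            return current         # Step 5: no uphill move left, so stop
        current = better[0]

print(simple_hill_climbing(0))     # stops at 2: trapped on the local maximum
print(simple_hill_climbing(6))     # reaches 8: the global maximum
```

Starting from x = 0 the climb halts at the local maximum, which is exactly the failure mode that backtracking, random restarts, or simulated annealing are meant to address.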
14. Describe the local search algorithm with a neat sketch. (10 marks) (Nov 2020, May 2023) (or) Illustrate the working of local search algorithms in continuous space.
• The distinction between discrete and continuous environments matters because most real-world environments are continuous.
• A discrete (categorical) variable is a statistical variable that can assume only a fixed number of distinct values.
• A continuous variable, as the name suggests, is a random variable that can assume all possible values in a continuum.
• Classical search algorithms find a path of states leading to the goal node.
• Beyond these "classical search algorithms", we have "local search algorithms", in which the path cost does not matter; they focus only on the solution state needed to reach the goal.
o Example: the greedy best-first search algorithm.
• A local search algorithm works on a single current node rather than on multiple paths, and generally moves only among the neighbours of that node.
o Example: hill climbing and simulated annealing can handle continuous state and action spaces, which have infinite branching factors.

Solution for Continuous Space:
• One way to avoid continuous problems is simply to discretize the neighbourhood of each state.
• Many methods attempt to use the gradient of the landscape to find a maximum. The gradient of the objective function is a vector ∇f that gives the magnitude and direction of the steepest slope.
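As a sketch of the gradient idea above: gradient ascent repeatedly steps in the direction of ∇f until the gradient vanishes. The step size alpha, the iteration limit, and the example objective f(x) = -(x - 3)^2 are illustrative assumptions.

```python
# Gradient ascent in a continuous 1-D space: x <- x + alpha * f'(x).

def gradient_ascent(grad_f, x0, alpha=0.1, steps=1000, tol=1e-8):
    x = x0
    for _ in range(steps):
        g = grad_f(x)
        if abs(g) < tol:           # gradient ~ 0: at a (local) maximum
            break
        x += alpha * g             # move uphill along the steepest slope
    return x

# f(x) = -(x - 3)^2 has gradient f'(x) = -2 * (x - 3) and its maximum at x = 3.
print(gradient_ascent(lambda x: -2 * (x - 3), x0=0.0))   # converges to ~3.0
```

Discretizing the neighbourhood, by contrast, would replace the gradient step with a fixed grid of neighbouring states, reducing the continuous problem to the discrete hill climbing seen earlier.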
Does the local search algorithm work for a pure optimization problem?
• Yes, the local search algorithm works for pure optimization problems.
• A pure optimization problem is one in which every node can be a solution; the target is to find the best state of all according to the objective function.
• Unfortunately, pure optimization can fail to find high-quality solutions for reaching the goal state from the current state.
• Note: an objective function is a function whose value is either minimized or maximized, depending on the context of the optimization problem.
• In the case of search algorithms, an objective function can be, for example, the path cost for reaching the goal node.

Working of a local search algorithm: (figure)
The problems that arise here (local maxima, plateaus, and ridges) and their solutions are the same as those described for hill climbing in Question 13 above.
Conclusion:
• Local search often works well on large problems.
• Optimality: local search always has some answer available (the best state found so far), even though that answer may not be optimal.