A machine that could organize your
cupboard as you like, or serve every
member of a household a customized
cup of coffee, would make your day easier
These are the products of artificial intelligence
Artificial intelligence
is the simulation of
human intelligence
processes by
machines, especially
computer systems
These machines are
artificially endowed
with human-like
intelligence to perform
tasks as we do
This intelligence is built using complex
algorithms and mathematical functions
Presentation on Artificial Intelligence.
In spite of variations in lighting, landscape,
and dimensions of the field, the AI robot
must perform as expected
The ability to react appropriately to a new
situation is called generalized learning
A robot at a crossroad: one path is paved
and the other is rocky
The robot should determine which
path to take based on the circumstances
This portrays the robot’s reasoning
ability
The robot encounters a stream
that it cannot swim across
Using the plank provided
as an input, the robot is
able to cross the stream
AI provides machines with the capability
to adapt, reason, and provide
solutions
Weak AI, also called narrow AI, focuses solely on
one task
AlphaGo is a maestro
of the game of Go, but
you cannot expect it to
be remotely good at
chess
The AI in Avengers is an
ideal example of strong AI
because it is self-aware
and eventually develops
emotions
This makes the AI’s responses
unpredictable
Deep learning: algorithms that mimic the human
brain to incorporate intelligence into machines
Machine learning: algorithms that incorporate
intelligence into machines by automatically learning
from data
Artificial intelligence: the ability of machines to
imitate human intelligence
Search provides a way
of solving problems for
which no more direct
approach is available as
well as a framework into
which any direct
techniques that are
available can be
embedded.
Abstraction finds a way of
separating important features
and variations from the
unimportant ones that would
otherwise confuse the process
This technique provides a way
of solving complex problems
by exploiting the structure of
the objects that are involved
Defining the problem
Analyzing the problem
Identifying solutions
Choosing the solution
Implementation
 Structured Problem
 Well structured – yields a definite right answer (1 + 2 = 3)
 Ill structured – does not yield a particular answer
(Which economic system allows for the most human flourishing?)
 Unstructured Problem
 Very hard to formulate the problem
 Ambiguous in nature
 Linear Problem
 Has a clear solution
 All kinds of classification problems
 Non-linear Problem
 The relationship between input and output is non-linear
 Decisions cannot be derived as directly as in linear problems
• Semiotic Models
- Based on sign processes (signification and communication).
- A specific code gives meaning to each sign, based on the sounds or letters that humans use to form
words, or on movements.
• Statistical Models
- Refer to the representation and formulation of relationships through statistical techniques.
- A statistical model employs probabilistic approaches and is typically a collection of
probability density functions and distribution functions.
Deterministic or
observable
(single-state)
Non-observable
(multiple-state)
Non-deterministic or
partially observable
Unknown state space
• Each state is fully observable, and the world moves to one definite state after any
action.
• Here, the goal state is reachable in a single action or a sequence of actions.
• Deterministic environments ignore uncertainty.
Ex: a vacuum cleaner with a sensor.
• The problem-solving agent does not have any information about the state.
• A solution may or may not be reached.
Ex: for a vacuum cleaner, the goal state is a clean floor. The action is to suck if there
is dirt. In a non-observable setting, as there is no sensor, the cleaner will have to
suck irrespective of whether the dirt is to its right or left. Here, the solution space
is the set of states specifying its movement across the floor.
• The effect of an action is not clear.
• Percepts provide new information about the current state.
Ex: take the vacuum cleaner and now assume a sensor is attached to it; then it will
suck only if there is dirt. The cleaner’s movement will be based on its current percept.
• Typically exploration problems.
• The states and the impact of actions are not known.
Ex: online search, which involves acting without complete knowledge of the next
state, or scheduling without a map.
1. Is the problem decomposable?
2. Can solution steps be ignored or undone?
3. Is the universe predictable?
4. Is a good solution absolute or relative?
5. Is the solution a state or a path?
6. What is the role of knowledge?
7. Does the task require interaction with a person?
• In this case, the problem is divided into smaller problems.
• The smaller problems are solved independently.
• Finally, the result is merged to get the final result.
• In the Theorem Proving problem, a lemma that has been proved can be ignored
for the next steps.
• Such problems are called Ignorable problems.
• In the 8-Puzzle, Moves can be undone and backtracked.
• Such problems are called Recoverable problems.
• In playing bridge, we cannot know exactly where all the cards are or
what the other players will do on their turns.
• Uncertain outcome!
• For certain-outcome problems, planning can be used to generate a
sequence of operators that is guaranteed to lead to a solution.
• For uncertain-outcome problems, a sequence of generated operators
can only have a good probability of leading to a solution. Plan revision is
made as the plan is carried out and the necessary feedback is provided.
• The facts can be represented using a formal language
called “Predicate Logic”.
• There may be “n” different solutions. If one solution is
found, there is no need to go back and see if some other
path might also lead to a solution.
Absolute solution:
Ex: P -> Q
If marks = 92 -> Grade = O
Relative solution:
Ex: TSP
(explore all possible solutions)
● To solve a problem of finding an interpretation,
we need to produce only the interpretation
itself. A record of how the interpretation was
arrived at is NOT required.
Ex: The President ate food with a fork.
● In contrast, for path problems we must produce
the final state along with the path we found to
that state, i.e., the sequence of operations
that produces the final state.
• Now consider the problem of scanning daily newspapers to decide which are
supporting the Democrats and which are supporting the Republicans in some
upcoming election. Again assuming unlimited computing power, how much
knowledge would be required by a computer trying to solve this problem? This time
the answer is a great deal.
• It would have to know such things as:
 The names of the candidates in each party.
 The fact that if the major thing you want to see done is have taxes lowered, you
are probably supporting the Republicans.
 The fact that if the major thing you want to see done is improved education for
minority students, you are probably supporting the Democrats.
 The fact that if you are opposed to big government, you are probably supporting
the Republicans.
• The solitary problem, in which there is no intermediate communication
and no demand for an explanation of the reasoning process.
• The conversational problem, in which there is intermediate communication
to provide either additional assistance to the computer or additional
information to the user.
Problem:
• A problem is a specific task or challenge that requires finding a solution or making a decision.
• In artificial intelligence, problems can vary in complexity and scope, ranging from simple tasks
like arithmetic calculations to complex challenges such as image recognition, natural language
processing, game playing, and optimization.
• Each problem has a defined set of initial states, possible actions or moves, and a goal state that
needs to be reached or achieved.
• For example, in a game of chess, the problem is to find a sequence of moves that leads to
checkmate, while in route planning, the problem is to find the shortest path between two
locations on a map.
Problem Space:
• The problem space is the set of all possible states, actions, and transitions that can be encountered
while attempting to solve a specific problem. It represents the entire landscape of potential solutions
and paths from the initial state to the goal state.
• In other words, the problem space defines all the possible configurations or arrangements of elements
involved in the problem and the set of valid moves or actions that can be taken at each state. Each
state in the problem space represents a specific configuration, and each action represents a possible
move or step from one state to another.
• For example, in the problem of route planning, the problem space includes all possible locations on the
map as states and all valid roads or paths between them as actions.
Search:
• Search is the process of exploring the problem space to find a sequence of actions or moves that lead to the goal
state or a satisfactory solution. In AI, search algorithms are used to systematically navigate through the problem
space and discover paths or solutions that satisfy the problem’s constraints and objectives.
• The search process involves starting from the initial state and exploring possible actions to generate new states.
These states are then evaluated based on certain criteria (e.g., distance to the goal, cost, or utility) to determine
the most promising states to explore further. The process continues iteratively until the goal state is reached or a
satisfactory solution is found.
• There are various search algorithms used in AI, such as depth-first search, breadth-first search, A* search, and
heuristic search. Each algorithm has its strengths and weaknesses, and the choice of search algorithm depends
on the problem’s characteristics, size of the problem space, and the resources available.
The 8-Puzzle (Sliding Block Puzzle)
States: A state description specifies the location of each of the eight tiles
and the blank in one of the nine squares.
Initial state: Any state can be designated as the initial state.
Successor function: This generates the legal states that result from trying
the four actions (blank moves Left, Right, Up, or Down).
Goal test: This checks whether the state matches the goal configuration
(Other goal configurations are possible.)
Path cost: Each step costs 1, so the path cost is the number of steps in the
path.
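This formulation can be sketched in Python (a minimal sketch; a state is a 9-tuple read row by row, with 0 as the blank):

```python
# Minimal sketch of the 8-puzzle successor function described above.
# A state is a tuple of 9 numbers read row by row; 0 marks the blank.

MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def successors(state):
    """Yield (action, new_state) for every legal move of the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        # Reject moves that would wrap around the edge of the board.
        if (action == "Left" and col == 0) or (action == "Right" and col == 2):
            continue
        if (action == "Up" and row == 0) or (action == "Down" and row == 2):
            continue
        target = blank + delta
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        yield action, tuple(new)
```

Each step has cost 1, so the path cost of a solution is simply the number of moves taken.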
The 8-Puzzle (Sliding Block Puzzle) - Solution
• Heuristic: h = +1 for every tile already in its correct position.
• The solution to this problem is a sequence of tile movements that
reaches the goal state.
• The transition function (legal move) is the movement of any one tile
by one space in any direction.
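The heuristic above (+1 for every square already in its correct position) can be sketched as follows; the goal layout chosen here is an assumption:

```python
# Heuristic sketch: score +1 for every square that already matches the
# goal configuration. The goal layout chosen here is an assumption.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank

def h_correct(state, goal=GOAL):
    """Count squares (including the blank) already in their goal place."""
    return sum(1 for s, g in zip(state, goal) if s == g)
```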
The 4-Queens Problem
Toy Problem- 7
Missionaries and Cannibals
Let a missionary be denoted by ‘M’ and a cannibal by ‘C’.
These rules are described below:
All or some of these production rules will have to be used in a particular
sequence to find the solution of the problem.
Toy Problem- 7
Missionaries and Cannibals
Rules applied and their sequence in Missionaries and Cannibals
problem
Toy Problem- 7
Formalization of the M&C Problem
State space: triples (x,y,z) with 0 ≤ x,y ≤ 3 and 0 ≤ z ≤ 1, where x, y, and z
represent the number of missionaries, cannibals, and boats
currently on the original bank.
Initial State: (3,3,1)
Successor function: From each state, either bring one
missionary, one cannibal, two missionaries, two cannibals,
or one of each type to the other bank.
Note: Not all states are attainable (e.g., (0,0,1)), and some
are illegal.
Goal State: (0,0,0)
Path Cost: 1 unit per crossing
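The formalization above can be sketched as a successor function plus a legality test (a minimal sketch):

```python
# Sketch of the Missionaries-and-Cannibals formalization above.
# A state (m, c, b) counts missionaries, cannibals, and boats on the
# original bank; the goal is (0, 0, 0) and each crossing costs 1.

BOAT_LOADS = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]  # (missionaries, cannibals)

def is_legal(m, c):
    """Missionaries may never be outnumbered on either bank."""
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    left_ok = m == 0 or m >= c                # original bank
    right_ok = 3 - m == 0 or 3 - m >= 3 - c   # far bank
    return left_ok and right_ok

def mc_successors(state):
    """Yield every attainable, legal state one crossing away."""
    m, c, b = state
    direction = -1 if b == 1 else 1  # the boat leaves the bank it is on
    for dm, dc in BOAT_LOADS:
        nm, nc = m + direction * dm, c + direction * dc
        if is_legal(nm, nc):
            yield (nm, nc, 1 - b)
```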
What is an Agent?
• An agent can be anything that perceives its environment through sensors and acts upon
that environment through actuators.
• An agent runs in a cycle of perceiving, thinking, and acting.
Human Agent: A human agent has eyes, ears, and other organs, which work as
sensors, and hands, legs, and a vocal tract, which work as actuators.
Robotic Agent: A robotic agent can have cameras, an infrared range finder, and NLP
for sensors, and various motors for actuators.
Software Agent: A software agent can have keystrokes and file contents as sensory
input, act on those inputs, and display output on the screen.
Sensor: A sensor is a device that detects changes in the environment and sends the information to
other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of a machine that convert energy into motion. Actuators
are responsible for moving and controlling a system. An actuator can be an electric motor, gears,
rails, etc.
Effectors: Effectors are the devices that affect the environment. Effectors can be legs, wheels, arms,
fingers, wings, fins, and display screens.
An intelligent agent is an autonomous entity that acts upon an environment using
sensors and actuators to achieve goals. An intelligent agent may learn from the
environment to achieve its goals. A thermostat is an example of an intelligent
agent.
Following are the four main rules for an AI agent:
Rule 1: An AI agent must have the ability to perceive the environment.
Rule 2: The observations must be used to make decisions.
Rule 3: Decisions should result in an action.
Rule 4: The action taken by an AI agent must be a rational action.
Autonomy: An AI agent is capable of performing tasks independently, without requiring
constant human intervention or input.
Perception: The agent senses and interprets the environment it operates in through
various sensors, such as cameras or microphones.
Reactivity: An AI agent can assess the environment and respond accordingly to achieve its goals.
Reasoning and decision-making: AI agents are intelligent tools that can analyze data and make
decisions to achieve goals. They use reasoning techniques and algorithms to process information and
take appropriate actions.
Learning: They can learn and enhance their performance through machine, deep, and reinforcement
learning elements and techniques.
Communication: AI agents can communicate with other agents or humans using different methods,
like understanding and responding to natural language, recognizing speech, and exchanging messages
through text.
Goal-oriented: They are designed to achieve specific goals, which can be pre-defined or learned
through interactions with the environment.
• A rational agent is an agent that has clear preferences, models uncertainty, and
acts in a way that maximizes its performance measure over all possible actions.
• A rational agent is said to do the right thing. AI is about creating rational
agents, used in game theory and decision theory for various real-world
scenarios.
• For an AI agent, rational action is most important because in reinforcement
learning, the agent gets a positive reward for each best possible action
and a negative reward for each wrong action.
The rationality of an agent is measured by its performance measure. Rationality can
be judged on the basis of the following points:
• The performance measure, which defines the success criterion.
• The agent’s prior knowledge of its environment.
• The best possible actions that the agent can perform.
• The sequence of percepts.
Rationality differs from omniscience: an omniscient agent knows the actual
outcome of its actions and acts accordingly, which is not possible in reality.
Artificial Intelligence agents perform these functions continuously:
• Perceiving dynamic conditions in the environment
• Acting to affect conditions in the environment
• Using reasoning to interpret perceptions
• Problem-solving
• Drawing inferences
• Determining actions and their outcomes
• The task of AI is to design an agent program that implements the agent function.
The structure of an intelligent agent is a combination of architecture and agent
program:
Agent = Architecture + Agent Program
• Following are the three main terms involved in the structure of an AI agent:
Architecture: The machinery that the AI agent executes on.
Agent Function: The agent function maps a percept sequence to an action:
f : P* → A
Agent Program: An implementation of the agent function. The agent
program executes on the physical architecture to produce the function f.
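A minimal sketch of an agent program implementing f : P* → A as a lookup over the percept history (the percepts and actions here are hypothetical examples):

```python
# Minimal sketch of an agent program realizing the agent function
# f: P* -> A as a table lookup over the percept sequence seen so far.
# The percepts and actions below are hypothetical examples.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table    # maps percept sequences to actions
        self.percepts = []    # the percept history P*

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

agent = TableDrivenAgent({
    ("dirty",): "Suck",
    ("dirty", "clean"): "MoveRight",
})
```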
There are five different types of intelligent agents used in AI.
• Simple Reflex Agents
• Model-based Agents
• Goal-based agents
• Utility-based agents
• Learning agents
• A simple reflex agent is an AI system that follows pre-defined rules to make
decisions. It only responds to the current situation without considering the past or
future ramifications.
• A simple reflex agent is suitable for environments with stable rules and
straightforward actions, as its behavior is purely reactive and responsive to
immediate environmental changes.
Example
A rule-based system developed to
support automated customer support
interactions. The system can
automatically generate a predefined
response containing instructions on
resetting the password if a customer’s
message contains keywords
indicating a password reset.
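Such a keyword-triggered reflex rule can be sketched as follows; the rules and canned responses are illustrative assumptions:

```python
# Sketch of the keyword-matching customer-support reflex agent
# described above; the rules and responses are illustrative.

RULES = [
    ({"password", "reset"}, "Here are the steps to reset your password ..."),
    ({"refund"}, "To request a refund, please ..."),
]

def reflex_respond(message):
    """React to the current message only; no history is kept."""
    words = set(message.lower().split())
    for keywords, response in RULES:
        if keywords <= words:  # all of the rule's keywords are present
            return response
    return "Forwarding you to a human agent."
```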
A model-based reflex agent performs actions based on the current percept and an
internal state representing the unobservable world. It updates its internal state
based on two factors:
• How the world evolves independently of the agent
• How the agent’s actions affect the world
A cautionary model-based reflex agent is a variant of a model-based reflex agent that
also considers the possible consequences of its actions before executing them.
The model-based reflex agent operates in four stages:
Sense: It perceives the current state of the world with its sensors.
Model: It constructs an internal model of the world from what it sees.
Reason: It uses its model of the world to decide how to act based on a set of predefined rules or
heuristics.
Act: The agent carries out the action that it has chosen.
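The four stages can be sketched for a two-square vacuum world (a hypothetical example: locations "A" and "B", with the model tracking the last known status of each square):

```python
# Sketch of the sense-model-reason-act loop above for a hypothetical
# two-square vacuum world; the internal model remembers the last
# known status of each square.

class ModelBasedVacuum:
    def __init__(self):
        self.model = {}  # internal state: location -> last known status

    def step(self, location, status):
        self.model[location] = status  # Sense + Model
        other = "B" if location == "A" else "A"
        if status == "dirty":          # Reason + Act: rules consult
            return "Suck"              # the model, not just the percept
        if self.model.get(other) != "clean":
            return "MoveRight" if location == "A" else "MoveLeft"
        return "NoOp"                  # both squares known clean
```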
Example
One of the finest examples of a cautionary model-based reflex agent is Amazon Bedrock
Amazon Bedrock is a service that uses foundational models to simulate operations, gain insights, and make
informed decisions for effective planning and optimization.
By relying on various models, Bedrock gains insights, predicts outcomes, and makes informed decisions. It
continuously refines its models with real-world data, allowing it to adapt and optimize its operations.
Amazon Bedrock then plans for different scenarios and selects optimal strategies through simulations and
adjustments to model parameters.
• Goal-based agents are AI agents that use information from their environment to achieve specific
goals. They employ search algorithms to find the most efficient path towards their objectives
within a given environment.
• These agents are also known as rule-based agents, as they follow predefined rules to accomplish
their goals and take specific actions based on certain conditions.
• Goal-based agents are easy to design and can handle complex tasks. They can be used in various
applications like robotics, computer vision, and natural language processing.
• Unlike basic models, a goal-based agent can determine the optimal course of decision-making and
action-taking processes depending on its desired outcome or goal.
The working pattern of the goal-based agent can be divided into five steps:
Perception: The agent perceives its environment using sensors or other input
devices to collect information about its surroundings.
Reasoning: The agent analyzes the information collected and decides on the best
course of action to achieve its goal.
Action: The agent takes actions to achieve its goal, such as moving or manipulating
objects in the environment.
Evaluation: After taking action, the agent evaluates its progress towards the goal and
adjusts its actions, if necessary.
Goal Completion: Once the agent has achieved its goal, it either stops working or
begins working on a new goal.
Example
Google Bard is a goal-based agent. No doubt, it is also a learning agent.
As a goal-based agent, it has a goal or objective to provide high-quality responses to user queries.
It chooses actions that are likely to assist users in finding the information they seek and achieving their
desired goal of obtaining accurate and helpful responses.
• Utility-based agents are AI agents that make decisions based on maximizing a
utility function or value. They choose the action with the highest expected utility,
which measures how good the outcome is.
• This helps them deal with complex and uncertain situations more flexibly and
adaptively. Utility-based agents are often used in applications where they have to
compare and select among multiple options, such as resource allocation,
scheduling, and game-playing.
• A utility-based agent aims to choose actions that lead to a high utility state. To
achieve this, it needs to model its environment, which can be simple or complex.
• Then, it evaluates the expected utility of each possible outcome based on the
probability distribution and the utility function.
• Finally, it selects the action with the highest expected utility and repeats this
process at each time step.
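This selection rule can be sketched as follows; the actions, probabilities, and utilities are illustrative assumptions:

```python
# Sketch of expected-utility action selection: each action has a
# probability distribution over outcomes, and the agent picks the
# action with the highest expected utility. Numbers are illustrative.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """actions: dict mapping action name -> list of (p, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "safe_route": [(1.0, 10)],
    "fast_route": [(0.8, 20), (0.2, -50)],  # risk of heavy traffic
}
```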
Example
Anthropic Claude, an AI tool whose goal is to help card members maximize the rewards and benefits
of using their cards, is a utility-based agent.
To achieve its goal, it employs a utility function that assigns numerical values representing
success or happiness to different states (situations card members face, such as making purchases,
paying bills, or redeeming rewards). It then compares the outcomes of different actions in each
state and makes trade-off decisions based on their utility values.
An AI learning agent is a software agent that can learn from past experiences and
improve its performance. It initially acts with basic knowledge and adapts
automatically through machine learning.
The learning agent comprises four main components:
Learning Element: Responsible for learning and making improvements based on
the experience it gains from its environment.
Critic: Provides feedback to the learning element by comparing the agent’s
performance against a predefined standard.
Performance Element: Selects and executes external actions based on
information from the learning element and the critic.
Problem Generator: Suggests actions that create new and informative experiences
for the learning element, to improve its performance.
AI learning agents follow a cycle of observing, learning, and acting based on feedback. They interact
with their environment, learn from feedback, and modify their behavior for future interactions.
Observation: The learning agent observes its environment through sensors or other inputs.
Learning: The agent analyzes data using algorithms and statistical models, learning from feedback on
its actions and performance.
Action: Based on what it has learned, the agent decides how to behave and acts in its environment.
Feedback: The agent receives feedback about its actions and performance through rewards,
penalties, or environmental cues.
Adaptation: Using feedback, the agent changes its behavior and decision-making processes,
updating its knowledge and adapting to its environment.
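One minimal way to sketch this cycle is a reward-averaging agent that explores untried actions and then exploits the best one (an illustrative design, not the only one):

```python
# Sketch of the observe-learn-act-feedback cycle above as a tiny
# reward-averaging agent: it tracks the average reward of each action
# and prefers the best one. Actions and rewards are illustrative.

import random

class LearningAgent:
    def __init__(self, actions):
        self.totals = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def act(self):
        # Explore untried actions first, then exploit the best average.
        untried = [a for a, n in self.counts.items() if n == 0]
        if untried:
            return random.choice(untried)
        return max(self.totals, key=lambda a: self.totals[a] / self.counts[a])

    def feedback(self, action, reward):
        # Adaptation: fold the observed reward into the agent's knowledge.
        self.totals[action] += reward
        self.counts[action] += 1
```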
Example
AutoGPT analyzes the pros and cons of the top ten smartphones by exploring various websites and
sources. It evaluates the authenticity of websites using a sub-agent program. Finally, it generates a
detailed report summarizing the findings and listing the pros and cons of the top ten smartphone
companies.
PEAS is a model on which an AI agent works. When we define an AI agent or rational
agent, we can group its properties under the PEAS representation model. It is an
acronym of four words:
P: Performance measure
E: Environment
A: Actuators
S: Sensors
Performance: Safety, time, lawful driving,
comfort
Environment: Roads, other vehicles, road
signs, pedestrian
Actuators: Steering, accelerator, brake, signal,
horn
Sensors: Camera, GPS, speedometer,
odometer, accelerometer, sonar.
PEAS examples:

Medical Diagnosis
• Performance measure: healthy patient, minimized cost
• Environment: patient, hospital, staff
• Actuators: tests, treatments
• Sensors: keyboard (entry of symptoms)

Vacuum Cleaner
• Performance measure: cleanliness, efficiency, battery life, security
• Environment: room, table, wood floor, carpet, various obstacles
• Actuators: wheels, brushes, vacuum extractor
• Sensors: camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor

Part-picking Robot
• Performance measure: percentage of parts in correct bins
• Environment: conveyor belt with parts, bins
• Actuators: jointed arms, hand
• Sensors: camera, joint angle sensors
• Fully Observable vs Partially Observable
• Deterministic vs Stochastic
• Competitive vs Collaborative
• Single-agent vs Multi-agent
• Static vs Dynamic
• Discrete vs Continuous
• Episodic vs Sequential
• Known vs Unknown
• When an agent’s sensors can access the complete state of the environment at each point in
time, the environment is fully observable; otherwise it is partially observable.
• A fully observable environment is easy to maintain, as there is no need to keep track of the history
of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
Examples:
• Chess – the board is fully observable, and so are the opponent’s moves.
• Driving – the environment is partially observable because what’s around the corner is not known.
• When the agent’s current state and chosen action completely determine the next state,
the environment is said to be deterministic.
• A stochastic environment is random in nature: the next state is not unique and cannot be
completely determined by the agent.
Examples:
• Chess – there is only a finite set of possible moves for a piece in the current state, and these moves
can be determined.
• Self-driving cars – the actions of a self-driving car are not unique; they vary from time to time.
• An agent is said to be in a competitive environment when it competes against another agent to
optimize the output.
• The game of chess is competitive as the agents compete with each other to win the game which is
the output.
• An agent is said to be in a collaborative environment when multiple agents cooperate to produce
the desired output.
• When multiple self-driving cars are on the road, they cooperate with each other to avoid
collisions and reach their destinations, which is the desired output.
• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of the single-agent system.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent as it involves 11 players in each team.
• An environment that keeps constantly changing itself when the agent is up with some action is said
to be dynamic.
• A roller coaster ride is dynamic as it is set in motion and the environment keeps changing every
instant.
• An idle environment with no change in its state is called a static environment.
• An empty house is static as there’s no change in the surroundings when an agent enters.
• If an environment consists of a finite number of actions that can be deliberated in the environment
to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete as it has only a finite number of moves. The number of moves might
vary with every game, but still, it’s finite.
• An environment in which the actions performed cannot be numbered, i.e., is not discrete, is said
to be continuous.
• Self-driving cars are an example of continuous environments as their actions are driving, parking,
etc. which cannot be numbered.
• In an Episodic task environment, each of the agent’s actions is divided into atomic incidents or
episodes. There is no dependency between current and previous incidents. In each incident, an
agent receives input from the environment and then performs the corresponding action.
• Example: consider a pick-and-place robot used to detect defective parts on a conveyor belt.
Every time, the robot (agent) makes a decision about the current part only, i.e.,
there is no dependency between the current and previous decisions.
• In a sequential environment, previous decisions can affect all future decisions. The agent’s next
action depends on what action it has taken previously and what action it is supposed to
take in the future.
• Example:
Checkers – where a previous move can affect all the following moves.
• In a known environment, the output for all probable actions is given.
• In an unknown environment, for an agent to make a decision, it has
to gain knowledge about how the environment works.
• A constraint satisfaction problem (CSP) is a type of AI problem whose goal is to
find a solution that meets a set of constraints.
• The aim is to find values for a group of variables that satisfy a set of
restrictions or rules.
• CSPs are frequently employed in AI for tasks such as resource allocation,
planning, scheduling, and decision-making.
Variables: The things that need to be determined. Variables in a CSP are
the objects that must have values assigned to them in order to satisfy a particular set
of constraints. Boolean, integer, and categorical variables are just a few of the
possible types. In a Sudoku puzzle, for instance, variables could stand for the
puzzle cells that need to be filled with numbers.
Domains: The range of potential values that a variable can take. Depending
on the problem, a domain may be finite or infinite. For instance, in
Sudoku, the set of numbers from 1 to 9 can serve as the domain of a variable
representing a puzzle cell.
Constraints: The rules that govern how variables relate to one another.
Constraints in a CSP restrict the values that variables may take.
Unary constraints, binary constraints, and higher-order constraints are
a few of the possible kinds. For instance, in Sudoku,
the constraints are that each row, column, and 3×3 box can only have one
instance of each number from 1 to 9.
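For the Sudoku example, the variables-domains-constraints formulation can be sketched as:

```python
# Sketch of the CSP formulation above for Sudoku: variables are cells,
# domains are the digits 1-9, and constraints forbid repeating a value
# in any row, column, or 3x3 box.

VARIABLES = [(r, c) for r in range(9) for c in range(9)]
DOMAIN = set(range(1, 10))

def conflicts(assignment, cell, value):
    """True if placing value in cell violates a Sudoku constraint."""
    r, c = cell
    for (r2, c2), v in assignment.items():
        same_row = r2 == r
        same_col = c2 == c
        same_box = (r2 // 3, c2 // 3) == (r // 3, c // 3)
        if v == value and (same_row or same_col or same_box):
            return True
    return False
```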
• The backtracking algorithm is a depth-first search algorithm that methodically
investigates the search space of potential solutions up until a solution is discovered
that satisfies all the restrictions.
• The method begins by choosing a variable and giving it a value before repeatedly
attempting to give values to the other variables.
• The method returns to the prior variable and tries a different value if at any time a
variable cannot be given a value that fulfills the requirements.
• Once all assignments have been tried or a solution that satisfies all constraints has
been discovered, the algorithm ends.
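The procedure described above can be sketched for a generic CSP; the map-coloring instance at the end is illustrative:

```python
# Sketch of backtracking search for a generic CSP given as variables,
# per-variable domains, and a consistency check.

def backtrack(assignment, variables, domains, consistent):
    if len(assignment) == len(variables):
        return assignment                  # every variable has a value
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(assignment, var, value):
            assignment[var] = value        # try this value ...
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]            # ... and undo it on failure
    return None                            # no value works: backtrack

# Illustrative use: 3-coloring three mutually adjacent regions.
ADJACENT = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

def no_conflict(assignment, var, value):
    return all(assignment.get(n) != value for n in ADJACENT[var])

solution = backtrack({}, ["A", "B", "C"],
                     {v: ["red", "green", "blue"] for v in "ABC"},
                     no_conflict)
```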
• The forward-checking algorithm is a variant of the backtracking algorithm that
shrinks the search space using a form of local consistency.
• For each unassigned variable, the method keeps a set of remaining values and
applies local constraints to eliminate inconsistent values from these sets.
• After a variable is given a value, the algorithm examines the variable's neighbors
to see whether any of their remaining values have become inconsistent, and removes
them from the sets if they have.
• The algorithm backtracks if, after forward checking, a variable has no remaining
values.
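A minimal sketch of the pruning step, assuming binary all-different constraints between neighboring variables; the names and the two-variable example are illustrative:

```python
import copy

def forward_check(domains, var, value, neighbors):
    """After assigning var = value, prune value from each neighbor's domain.
    Returns the pruned domains, or None if some domain becomes empty."""
    pruned = copy.deepcopy(domains)
    pruned[var] = {value}
    for n in neighbors[var]:
        pruned[n].discard(value)               # remove the inconsistent value
        if not pruned[n]:
            return None                        # wipe-out: force a backtrack
    return pruned

# Two mutually constrained (all-different) variables with domains {1, 2}:
domains = {"x": {1, 2}, "y": {1, 2}}
neighbors = {"x": ["y"], "y": ["x"]}
print(forward_check(domains, "x", 1, neighbors))   # {'x': {1}, 'y': {2}}
```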
• Constraint-propagation algorithms are a class of algorithms that use local
consistency and inference to shrink the search space. They operate by propagating
constraints between variables and using the information obtained to remove
inconsistent values from the variables' domains.
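A well-known algorithm of this class is arc consistency (AC-3), which the notes do not name explicitly. The sketch below assumes binary constraints given as directed-arc predicates; the variable names and the x < y example are illustrative:

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3: repeatedly remove values with no support in a neighboring domain.
    `constraints` maps each directed arc (x, y) to a predicate(vx, vy)."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        revised = False
        for vx in set(domains[x]):
            # vx needs at least one supporting value vy in y's domain
            if not any(constraints[(x, y)](vx, vy) for vy in domains[y]):
                domains[x].discard(vx)
                revised = True
        if revised:
            if not domains[x]:
                return False                   # inconsistent CSP
            # re-examine arcs pointing at x
            queue.extend((a, b) for (a, b) in constraints if b == x)
    return True

# x < y with domains {1, 2, 3}: propagation prunes x = 3 and y = 1.
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
constraints = {("x", "y"): lambda a, b: a < b,
               ("y", "x"): lambda a, b: b < a}
ac3(domains, constraints)
print(domains)   # {'x': {1, 2}, 'y': {2, 3}}
```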
Presentation on Artificial Intelligence.
  • 1. A machine that could organize your cupboard as you like, or serve every member of a household a customized cup of coffee, makes your day easier. These are products of artificial intelligence. Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.
  • 2. These machines are artificially endowed with human-like intelligence to perform tasks as we do. This intelligence is built using complex algorithms and mathematical functions.
  • 4. In spite of variations in lighting, landscape, and dimensions of the field, the AI robot must perform as expected. The ability to react appropriately to new situations is called generalized learning.
  • 5. A robot at a crossroad: one path is paved and the other is rocky. The robot should determine which path to take based on the circumstances. This portrays the robot's reasoning ability.
  • 6. The robot encounters a stream that it cannot swim across. Using the plank provided as an input, the robot is able to cross the stream.
  • 7. AI provides machines with the capability to adapt, reason, and provide solutions. Weak AI, also called narrow AI, focuses solely on one task. AlphaGo is a maestro of the game Go, but you can't expect it to be remotely good at chess. Avengers is an ideal example of a strong AI because it is self-aware and eventually develops emotions. This makes the AI's responses unpredictable.
  • 9. An algorithm that mimics the human brain to incorporate intelligence into a machine. An algorithm that incorporates intelligence into a machine by automatically learning from data. AI is the ability of a machine to imitate human intelligence.
  • 12. Search provides a way of solving problems for which no more direct approach is available, as well as a framework into which any direct techniques that are available can be embedded. Abstraction finds a way of separating important features and variations from the unimportant ones that would otherwise confuse any process. This technique provides a way of solving complex problems by exploiting the structure of the objects involved.
  • 13. 1. Defining the problem 2. Analyzing the problem 3. Identification of solutions 4. Choosing the solution 5. Implementation
  • 20.  Structured Problem  Well structured – Yields a definite answer (1+2=3)  Ill structured – Does not yield a particular answer (Which economic system allows for the most human flourishing?)  Unstructured Problem  Very hard to formulate the problem  Ambiguous in nature
  • 21.  Linear Problem  Has a clear solution  All kinds of classification problems  Non-linear Problem  The relationship between input and output is non-linear  Further decisions can't be taken as in linear problems
  • 22. 23 • Semiotic Models - Based on sign processes: signification and communication. - The code is specific; it gives meaning to each sign based on the sounds or letters that humans use to form words or movements.
  • 23. 24 • Statistical Models - Refer to the representation and formulation of relationships through statistical techniques. - A statistical model employs probabilistic approaches and is typically a collection of probability density functions and distribution functions.
  • 39. 40 • Each state is fully observable and it goes to one definite state after any action. • Here, the goal state is reachable in one single action or sequence of actions. • Deterministic environments ignore uncertainty. Ex- Vacuum cleaner with sensor
  • 40. 41 • The problem-solving agent does not have any information about the state. • A solution may or may not be reached. Ex- In the case of the vacuum cleaner, the goal state is a clean floor. The action is to suck if there is dirt. So, in the non-observable condition, as there is no sensor, it will have to suck, irrespective of whether the dirt is towards the right or the left. Here, the solution space is the set of states specifying its movement across the floor.
  • 41. 42 • The effect of an action is not clear. • Percepts provide new information about the current state. Ex- If we take the vacuum cleaner and now assume that a sensor is attached to it, then it will suck only if there is dirt. The movement of the cleaner will be based on its current percept.
  • 42. 43 • Typically exploration problems • The states and the impact of actions are not known Ex- online search that involves acting without complete knowledge of the next state, or scheduling without a map.
  • 43. 44 1. Is the problem decomposable? 2. Can solution steps be ignored or undone? 3. Is the universe predictable? 4. Is a good solution absolute or relative? 5. Is the solution a state or a path? 6. What is the role of knowledge? 7. Does the task require interaction with a person?
  • 44. 45 • In this case, the problem is divided into smaller problems. • The smaller problems are solved independently. • Finally, the result is merged to get the final result.
  • 45. 46 • In the Theorem Proving problem, a lemma that has been proved can be ignored for the next steps. • Such problems are called ignorable problems. • In the 8-Puzzle, moves can be undone and backtracked. • Such problems are called recoverable problems.
  • 46. 47 • In playing Bridge, we cannot know exactly where all the cards are or what the other players will do on their turns. • Uncertain outcome! • For certain-outcome problems, planning can be used to generate a sequence of operators that is guaranteed to lead to a solution. • For uncertain-outcome problems, a sequence of generated operators can only have a good probability of leading to a solution. Plan revision is made as the plan is carried out and the necessary feedback is provided.
  • 47. 48 • The facts can be represented using a formal language called “Predicate Logic”. • There may be “n” different solutions. If one solution is found, there is no need to go back and see if some other path might also lead to a solution. Absolute solution: Ex: P -> Q If marks = 92 -> Grade = O Relative solution: Ex. TSP (Explore all possible solutions)
  • 48. 49 ● To solve the problem of finding an interpretation, we need to produce only the interpretation itself; a record of how the interpretation was arrived at is NOT required. Ex: The President ate food with a fork. ● In contrast, in some problems we must produce the final state along with the path we found to that state, i.e., the sequence of operations that produces the final state.
  • 49. 50 • Now consider the problem of scanning daily newspapers to decide which are supporting the Democrats and which are supporting the Republicans in some upcoming election. Again assuming unlimited computing power, how much knowledge would be required by a computer trying to solve this problem? This time the answer is a great deal. • It would have to know such things as:  The names of the candidates in each party.  The fact that if the major thing you want to see done is have taxes lowered, you are probably supporting the Republicans.  The fact that if the major thing you want to see done is improved education for minority students, you are probably supporting the Democrats.  The fact that if you are opposed to big government, you are probably supporting the Republicans.
  • 50. 51 • The solitary problem, in which there is no intermediate communication and no demand for an explanation of the reasoning process. • The conversational problem, in which intermediate communication is to provide either additional assistance to the computer or additional information to the user.
  • 51. 52 Problem: • A problem is a specific task or challenge that requires finding a solution or making a decision. • In artificial intelligence, problems can vary in complexity and scope, ranging from simple tasks like arithmetic calculations to complex challenges such as image recognition, natural language processing, game playing, and optimization. • Each problem has a defined set of initial states, possible actions or moves, and a goal state that needs to be reached or achieved. • Example, in a game of chess, the problem is to find a sequence of moves that lead to checkmate, while in route planning, the problem is to find the shortest path between two locations on a map.
  • 52. 53 Problem Space: • The problem space is the set of all possible states, actions, and transitions that can be encountered while attempting to solve a specific problem. It represents the entire landscape of potential solutions and paths from the initial state to the goal state. • In other words, the problem space defines all the possible configurations or arrangements of elements involved in the problem and the set of valid moves or actions that can be taken at each state. Each state in the problem space represents a specific configuration, and each action represents a possible move or step from one state to another. • Example, in the problem of route planning, the problem space includes all possible locations on the map as states and all valid roads or paths between them as actions.
  • 53. 54 Search: • Search is the process of exploring the problem space to find a sequence of actions or moves that lead to the goal state or a satisfactory solution. In AI, search algorithms are used to systematically navigate through the problem space and discover paths or solutions that satisfy the problem’s constraints and objectives. • The search process involves starting from the initial state and exploring possible actions to generate new states. These states are then evaluated based on certain criteria (e.g., distance to the goal, cost, or utility) to determine the most promising states to explore further. The process continues iteratively until the goal state is reached or a satisfactory solution is found. • There are various search algorithms used in AI, such as depth-first search, breadth-first search, A* search, and heuristic search. Each algorithm has its strengths and weaknesses, and the choice of search algorithm depends on the problem’s characteristics, size of the problem space, and the resources available.
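The iterative search process described above can be made concrete with a minimal breadth-first search over a toy problem space; the states, actions, and goal here are invented for illustration:

```python
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    """Explore the problem space level by level; return a path of states."""
    frontier = deque([[initial]])              # each entry is a path
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                                # goal unreachable

# Toy problem space: states 0..7, actions +1 and *2, goal state 6.
path = breadth_first_search(0, lambda s: s == 6,
                            lambda s: [x for x in (s + 1, s * 2) if x <= 7])
print(path)   # [0, 1, 2, 3, 6]
```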
  • 54. The 8-Puzzle (Sliding Block Puzzle) States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares. Initial state: Any state can be designated as the initial state. Successor function: This generates the legal states that result from trying the four actions (blank moves Left, Right, Up, or Down). Goal test: This checks whether the state matches the goal configuration (Other goal configurations are possible.) Path cost: Each step costs 1, so the path cost is the number of steps in the path.
  • 55. The 8-Puzzle (Sliding Block Puzzle) - Solution • h = +1 for every tile in the correct position • The solution of this problem is a “movement of tiles” in order to reach the goal state. • The transition function or legal move is any one tile movement by one space in any direction.
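The successor function described above (blank moves Left, Right, Up, or Down) can be sketched directly; the 9-tuple encoding of the board is my own choice, not from the slides:

```python
def successors(state):
    """8-puzzle successor function: state is a 9-tuple, 0 marks the blank.
    Returns the states reached by sliding the blank Left/Right/Up/Down."""
    i = state.index(0)
    row, col = divmod(i, 3)
    result = []
    for dr, dc in ((0, -1), (0, 1), (-1, 0), (1, 0)):   # L, R, U, D
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            board = list(state)
            board[i], board[j] = board[j], board[i]     # swap blank and tile
            result.append(tuple(board))
    return result

# The blank in the centre has all four legal moves:
print(len(successors((1, 2, 3, 4, 0, 5, 6, 7, 8))))    # 4
```

Each step costs 1, so the path cost of a solution is simply the number of successor steps taken.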
  • 59. Toy Problem- 7 Missionaries and Cannibals Let a missionary be denoted by ‘M’ and a cannibal by ‘C’. These rules are described below: All or some of these production rules will have to be used in a particular sequence to find the solution of the problem.
  • 60. Toy Problem- 7 Missionaries and Cannibals Rules applied and their sequence in Missionaries and Cannibals problem
  • 61. Toy Problem- 7 Formalization of the M&C Problem State space: triple (x,y,z) with 0 ≤ x,y,z ≤ 3, where x,y, and z represent the number of missionaries, cannibals and boats currently on the original bank. Initial State: (3,3,1) Successor function: From each state, either bring one missionary, one cannibal, two missionaries, two cannibals, or one of each type to the other bank. Note: Not all states are attainable (e.g., (0,0,1)), and some are illegal. Goal State: (0,0,0) Path Costs: 1 unit per crossing
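The formalization above maps directly to code. The sketch below enumerates the legal crossings from each state (x, y, z) and finds a shortest plan with breadth-first search; the slides do not specify a search strategy, so BFS is my own choice here:

```python
from collections import deque

def mc_successors(state):
    """(x, y, z) = missionaries, cannibals, boats on the original bank."""
    x, y, z = state
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # possible boat loads
    sign = -1 if z == 1 else 1                          # boat direction
    for m, c in moves:
        nx, ny, nz = x + sign * m, y + sign * c, 1 - z
        if 0 <= nx <= 3 and 0 <= ny <= 3:
            # missionaries may never be outnumbered on either bank
            if (nx == 0 or nx >= ny) and (3 - nx == 0 or 3 - nx >= 3 - ny):
                yield (nx, ny, nz)

def solve(start=(3, 3, 1), goal=(0, 0, 0)):
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for s in mc_successors(path[-1]):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])

print(len(solve()) - 1)   # 11 crossings
```

With a path cost of 1 unit per crossing, the shortest solution found costs 11.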
  • 62. 63 What is an Agent? • An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. • An agent runs in a cycle of perceiving, thinking, and acting.
  • 63. 64 Human-Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract work as actuators. Robotic Agent: A robotic agent can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators. Software Agent: A software agent can have keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
  • 64. 65 Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors. Actuators: Actuators are the components of machines that convert energy into motion. The actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc. Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.
  • 65. 66 An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent. Following are the four main rules for an AI agent: Rule 1: An AI agent must have the ability to perceive the environment. Rule 2: The observation must be used to make decisions. Rule 3: Decisions should result in an action. Rule 4: The action taken by an AI agent must be a rational action.
  • 66. 67 Autonomy: An AI virtual agent is capable of performing tasks independently without requiring constant human intervention or input. Perception: The agent senses and interprets the environment it operates in through various sensors, such as cameras or microphones. Reactivity: An AI agent can assess the environment and respond accordingly to achieve its goals. Reasoning and decision-making: AI agents are intelligent tools that can analyze data and make decisions to achieve goals. They use reasoning techniques and algorithms to process information and take appropriate actions. Learning: They can learn and enhance their performance through machine, deep, and reinforcement learning elements and techniques. Communication: AI agents can communicate with other agents or humans using different methods, like understanding and responding to natural language, recognizing speech, and exchanging messages through text. Goal-oriented: They are designed to achieve specific goals, which can be pre-defined or learned through interactions with the environment.
  • 67. 68 • A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions. • A rational agent is said to do the right things. AI is about creating rational agents, using game theory and decision theory, for various real-world scenarios. • For an AI agent, rational action is most important because in AI reinforcement learning algorithms, for each best possible action the agent gets a positive reward, and for each wrong action the agent gets a negative reward.
  • 68. 69 The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the following points: • The performance measure, which defines the success criterion. • The agent's prior knowledge of its environment. • The best possible actions that the agent can perform. • The sequence of percepts. Rationality differs from omniscience, because an omniscient agent knows the actual outcome of its actions and acts accordingly, which is not possible in reality.
  • 69. 70 Artificial Intelligence agents perform these functions continuously: • Perceiving dynamic conditions in the environment • Acting to affect conditions in the environment • Using reasoning to interpret perceptions • Problem-solving • Drawing inferences • Determining actions and their outcomes
  • 70. 71 • The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. Agent = Architecture + Agent program • Following are the main three terms involved in the structure of an AI agent: Architecture: Architecture is the machinery that an AI agent executes on. Agent Function: The agent function is used to map a percept sequence to an action. f: P* → A Agent program: The agent program is an implementation of the agent function. An agent program executes on the physical architecture to produce the function f.
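The mapping f: P* → A can be illustrated with a table-driven agent program, a standard textbook construction (not named in the slides); the percepts and table entries here are invented for illustration:

```python
# f maps a whole percept sequence to an action. A table-driven agent
# program implements f with a lookup table over percept sequences.
table = {
    ("dirty",): "suck",
    ("clean",): "move_right",
    ("clean", "dirty"): "suck",
}

def table_driven_agent(percepts, table):
    """Return the action f(percept sequence); 'no_op' if the sequence is unlisted."""
    return table.get(tuple(percepts), "no_op")

print(table_driven_agent(["clean", "dirty"], table))   # suck
```

The table grows exponentially with the length of the percept sequence, which is why practical agent programs compute f rather than tabulate it.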
  • 71. 72 There are five different types of intelligent agents used in AI. • Simple Reflex Agents • Model-based Agents • Goal-based agents • Utility-based agents • Learning agents
  • 72. 73 • A simple reflex agent is an AI system that follows pre-defined rules to make decisions. It only responds to the current situation without considering the past or future ramifications. • A simple reflex agent is suitable for environments with stable rules and straightforward actions, as its behavior is purely reactive and responsive to immediate environmental changes. Example A rule-based system developed to support automated customer support interactions. The system can automatically generate a predefined response containing instructions on resetting the password if a customer’s message contains keywords indicating a password reset.
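The password-reset example above can be sketched as condition-action rules with no memory of past percepts; the keywords and canned replies are invented, not from any real system:

```python
# A simple reflex agent: condition -> action rules only, purely reactive.
RULES = [
    (lambda p: "password" in p, "Here are the steps to reset your password."),
    (lambda p: "refund" in p,   "I have opened a refund request for you."),
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches the percept."""
    for condition, action in RULES:
        if condition(percept.lower()):
            return action
    return "Let me connect you to a human agent."

print(simple_reflex_agent("I forgot my PASSWORD"))
# Here are the steps to reset your password.
```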
  • 73. 74 A model-based reflex agent performs actions based on the current percept and an internal state representing the unobservable world. It updates its internal state based on two factors: • How the world evolves independently of the agent • How the agent's actions affect the world A cautionary model-based reflex agent is a variant of the model-based reflex agent that also considers the possible consequences of its actions before executing them.
  • 74. 75 The model-based reflex agent operates in four stages: Sense: It perceives the current state of the world with its sensors. Model: It constructs an internal model of the world from what it sees. Reason: It uses its model of the world to decide how to act based on a set of predefined rules or heuristics. Act: The agent carries out the action that it has chosen. Example One of the finest examples of a cautionary model-based reflex agent is Amazon Bedrock. Amazon Bedrock is a service that uses foundation models to simulate operations, gain insights, and make informed decisions for effective planning and optimization. By relying on various models, Bedrock gains insights, predicts outcomes, and makes informed decisions. It continuously refines its models with real-world data, allowing it to adapt and optimize its operations. Amazon Bedrock then plans for different scenarios and selects optimal strategies through simulations and adjustments to model parameters.
  • 75. 76 • Goal-based agents are AI agents that use information from their environment to achieve specific goals. They employ search algorithms to find the most efficient path towards their objectives within a given environment. • These agents are also known as rule-based agents, as they follow predefined rules to accomplish their goals and take specific actions based on certain conditions. • Goal-based agents are easy to design and can handle complex tasks. They can be used in various applications like robotics, computer vision, and natural language processing. • Unlike basic models, a goal-based agent can determine the optimal course of decision-making and action-taking processes depending on its desired outcome or goal.
  • 76. 77 The working pattern of the goal-based agent can be divided into five steps: Perception: The agent perceives its environment using sensors or other input devices to collect information about its surroundings. Reasoning: The agent analyzes the information collected and decides on the best course of action to achieve its goal. Action: The agent takes actions to achieve its goal, such as moving or manipulating objects in the environment. Evaluation: After taking action, the agent evaluates its progress towards the goal and adjusts its actions, if necessary. Goal Completion: Once the agent has achieved its goal, it either stops working or begins working on a new goal. Example Google Bard is a goal-based agent. No doubt, it is also a learning agent. As a goal-based agent, it has a goal or objective to provide high-quality responses to user queries. It chooses its actions that are likely to assist users in finding the information they seek and achieving their desired goal of obtaining accurate and helpful responses.
  • 77. 78 • Utility-based agents are AI agents that make decisions based on maximizing a utility function or value. They choose the action with the highest expected utility, which measures how good the outcome is. • This helps them deal with complex and uncertain situations more flexibly and adaptively. Utility-based agents are often used in applications where they have to compare and select among multiple options, such as resource allocation, scheduling, and game-playing.
  • 78. 79 • A utility-based agent aims to choose actions that lead to a high-utility state. To achieve this, it needs to model its environment, which can be simple or complex. • Then, it evaluates the expected utility of each possible outcome based on the probability distribution and the utility function. • Finally, it selects the action with the highest expected utility and repeats this process at each time step. Example Anthropic Claude, an AI tool whose goal is to help card members maximize their rewards and benefits from using their cards, is a utility-based agent. To achieve its goal, it employs a utility function to assign numerical values representing success or happiness to different states (situations that card members face, such as purchasing, paying bills, redeeming rewards, etc.). It then compares the outcomes of different actions in each state and makes trade-off decisions based on their utility values.
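Expected-utility selection can be sketched in a few lines; the actions, probabilities, and utility values below are invented illustrations, not from any real system:

```python
# Utility-based action selection: pick the action whose outcomes have the
# highest expected utility, sum(probability * utility).
actions = {
    "redeem_rewards": [(0.9, 10), (0.1, -2)],   # (probability, utility) pairs
    "pay_bill_late":  [(1.0, -5)],
    "pay_bill_now":   [(1.0, 3)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)   # redeem_rewards
```

Here "redeem_rewards" wins because 0.9*10 + 0.1*(-2) = 8.8 exceeds the sure utilities of the other two actions.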
  • 79. 80 An AI learning agent is a software agent that can learn from past experiences and improve its performance. It initially acts with basic knowledge and adapts automatically through machine learning. The learning agent comprises four main components: Learning Element: It is responsible for learning and making improvements based on the experiences it gains from its environment. Critic: It provides feedback to the learning element by comparing the agent's performance against a predefined standard. Performance Element: It selects and executes external actions based on information from the learning element and the critic. Problem Generator: It suggests actions to create new and informative experiences for the learning element to improve its performance.
  • 81. 82 AI learning agents follow a cycle of observing, learning, and acting based on feedback. They interact with their environment, learn from feedback, and modify their behavior for future interactions. Observation: The learning agent observes its environment through sensors or other inputs. Learning: The agent analyzes data using algorithms and statistical models, learning from feedback on its actions and performance. Action: Based on what it has learned, the agent acts in its environment. Feedback: The agent receives feedback about its actions and performance through rewards, penalties, or environmental cues. Adaptation: Using feedback, the agent changes its behavior and decision-making processes, updating its knowledge and adapting to its environment. Example AutoGPT analyzes the pros and cons of the top ten smartphones by exploring various websites and sources. It evaluates the authenticity of websites using a sub-agent program. Finally, it generates a detailed report summarizing the findings and listing the pros and cons of the top ten smartphone companies.
  • 82. 83 PEAS is a model used to describe an AI agent. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four words: P: Performance measure E: Environment A: Actuators S: Sensors
  • 83. 84 Performance: Safety, time, lawful driving, comfort Environment: Roads, other vehicles, road signs, pedestrians Actuators: Steering, accelerator, brake, signal, horn Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
  • 84. 85 Agent Performance measure Environment Actuators Sensor Medical Diagnose Vacuum Cleaner Part -picking Robot
  • 85. 86 Medical Diagnose: Performance measure: healthy patient, minimized cost. Environment: patient, hospital, staff. Actuators: tests, treatments. Sensors: keyboard (entry of symptoms). Vacuum Cleaner: Performance measure: cleanness, efficiency, battery life, security. Environment: room, table, wood floor, carpet, various obstacles. Actuators: wheels, brushes, vacuum extractor. Sensors: camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor. Part-picking Robot: Performance measure: percentage of parts in correct bins. Environment: conveyor belt with parts, bins. Actuators: jointed arms, hand. Sensors: camera, joint angle sensors.
  • 86. 87 • Fully Observable vs Partially Observable • Deterministic vs Stochastic • Competitive vs Collaborative • Single-agent vs Multi-agent • Static vs Dynamic • Discrete vs Continuous • Episodic vs Sequential • Known vs Unknown
  • 87. 88 • When an agent's sensors are capable of sensing or accessing the complete state of the environment at each point in time, it is said to be a fully observable environment; otherwise it is partially observable. • A fully observable environment is easy to deal with, as there is no need to keep track of the history of the surroundings. • An environment is called unobservable when the agent has no sensors. Examples: • Chess – the board is fully observable, and so are the opponent's moves. • Driving – the environment is partially observable because what's around the corner is not known.
  • 88. 89 • When the agent's current state and chosen action completely determine the next state, the environment is said to be deterministic. • A stochastic environment is random in nature; the next state is not unique and cannot be completely determined by the agent. Examples: • Chess – there are only a few possible moves for a piece in the current state, and these moves can be determined. • Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary from time to time.
  • 89. 90 • An agent is said to be in a competitive environment when it competes against another agent to optimize the output. • The game of chess is competitive as the agents compete with each other to win the game which is the output. • An agent is said to be in a collaborative environment when multiple agents cooperate to produce the desired output. • When multiple self-driving cars are found on the roads, they cooperate with each other to avoid collisions and reach their destination which is the output desired.
  • 90. 91 • An environment consisting of only one agent is said to be a single-agent environment. • A person left alone in a maze is an example of a single-agent system. • An environment involving more than one agent is a multi-agent environment. • The game of football is multi-agent, as it involves 11 players on each team. • An environment that keeps changing while the agent is acting is said to be dynamic. • A roller coaster ride is dynamic: it is in motion, and the environment changes every instant. • An environment whose state does not change is called a static environment. • An empty house is static, as the surroundings do not change when an agent enters.
  • 91. 92 • If an environment offers only a finite number of distinct actions that can be performed to obtain the output, it is said to be a discrete environment. • The game of chess is discrete, as it has only a finite number of moves. The number of moves may vary from game to game, but it is still finite. • An environment in which the actions cannot be enumerated, i.e. is not discrete, is said to be continuous. • Self-driving cars operate in a continuous environment, as actions such as steering and parking range over continuous values and cannot be enumerated.
  • 92. 93 • In an episodic task environment, the agent’s experience is divided into atomic incidents, or episodes; there is no dependency between the current and previous incidents. In each episode, the agent receives input from the environment and then performs the corresponding action. • Example: consider a pick-and-place robot used to detect defective parts on a conveyor belt. Here, the robot (agent) makes each decision based on the current part alone, i.e. there is no dependency between current and previous decisions. • In a sequential environment, earlier decisions can affect all future decisions: the agent’s next action depends on what actions it has taken previously and what actions it may need to take in the future. • Example: checkers, where a previous move can affect all the following moves.
  • 93. 94 • In a known environment, the outcomes of all possible actions are given. • In an unknown environment, by contrast, the agent first has to learn how the environment works before it can make good decisions.
  • 94. 95 • Constraint satisfaction problems (CSPs) are a class of AI problems whose goal is to find a solution that meets a set of constraints. • More precisely, the aim is to find values for a group of variables that satisfy a set of restrictions or rules. • CSPs are frequently employed in AI for tasks such as resource allocation, planning, scheduling, and decision-making.
  • 95. 96
  • Variables: the things that need to be determined. Variables in a CSP are the objects that must have values assigned to them in order to satisfy a particular set of constraints. They can be of various types, such as Boolean, integer, or categorical. In a Sudoku puzzle, for instance, the variables are the cells that need to be filled with numbers.
  • Domains: the range of potential values that a variable can take. Depending on the problem, a domain may be finite or infinite. In Sudoku, the domain of a variable representing a cell is the set of numbers from 1 to 9.
  • Constraints: the rules that govern how variables relate to one another. Constraints in a CSP restrict which combinations of values the variables may take. They come in several forms, such as unary, binary, and higher-order constraints. In Sudoku, the constraints require that each row, column, and 3×3 box contain each number from 1 to 9 exactly once.
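The three components can be sketched concretely in code. The fragment below is illustrative only: it uses the classic Australia map-coloring problem instead of Sudoku (variables are regions, domains are colors, and the binary constraint is that adjacent regions must differ):

```python
# A minimal sketch of the three CSP components, using map coloring
# as an illustrative example.
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]

# Domains: every region may take any of three colors.
domains = {v: ["red", "green", "blue"] for v in variables}

# Binary constraints: each pair of adjacent regions must receive
# different colors, expressed here as an adjacency map.
neighbors = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}

def consistent(var, value, assignment):
    """A value is consistent if no already-assigned neighbor has it."""
    return all(assignment.get(n) != value for n in neighbors[var])
```

Any solver, whatever its search strategy, works over exactly these three pieces: it picks variables, tries values from their domains, and checks the constraints.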
  • 97. 98 • The backtracking algorithm is a depth-first search that methodically explores the space of potential solutions until it finds one that satisfies all the constraints. • It begins by choosing a variable and assigning it a value, then recursively attempts to assign values to the remaining variables. • If at any point a variable cannot be given a value that satisfies the constraints, the algorithm returns to the previous variable and tries a different value. • The algorithm ends once a solution satisfying all constraints has been found, or once every assignment has been tried.
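The steps above can be sketched as a short recursive function. This is an illustrative toy implementation, not a production solver; a tiny three-region map-coloring instance is embedded so the example is self-contained:

```python
# A minimal sketch of backtracking search on a tiny map-coloring CSP.
# Constraint: adjacent regions must receive different colors.
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
colors = ["red", "green", "blue"]

def backtrack(assignment):
    # All variables assigned: a solution satisfying every constraint.
    if len(assignment) == len(neighbors):
        return assignment
    # Choose the next unassigned variable.
    var = next(v for v in neighbors if v not in assignment)
    for value in colors:
        # Try a value that does not conflict with assigned neighbors.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[var]  # undo the assignment and try the next value
    return None  # no value works: signal the caller to backtrack

solution = backtrack({})
```

The `del assignment[var]` line is the "backtrack" itself: when the recursive call fails, the tentative assignment is undone before the next value is tried.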
  • 98. 99 • The forward-checking algorithm is a variant of backtracking that uses a form of local consistency to prune the search space. • For each unassigned variable, the algorithm keeps a set of remaining values, applying local constraints to eliminate inconsistent values from these sets. • After a variable is assigned a value, the algorithm examines the variable’s unassigned neighbors and removes from their sets any values that have become inconsistent. • If, after forward checking, some variable has no values left, the algorithm backtracks.
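A minimal sketch of forward checking, again on a tiny embedded map-coloring instance (illustrative names; the key difference from plain backtracking is that each assignment prunes the chosen value from unassigned neighbors' domains, and an emptied domain triggers an immediate retreat):

```python
# A minimal sketch of backtracking with forward checking on a tiny
# map-coloring CSP (adjacent regions must differ).
import copy

neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

def forward_check(assignment, domains):
    if len(assignment) == len(neighbors):
        return assignment
    var = next(v for v in neighbors if v not in assignment)
    for value in domains[var]:
        new_domains = copy.deepcopy(domains)
        new_domains[var] = [value]
        # Prune the chosen value from each unassigned neighbor's domain.
        wiped_out = False
        for n in neighbors[var]:
            if n not in assignment and value in new_domains[n]:
                new_domains[n].remove(value)
                if not new_domains[n]:
                    wiped_out = True  # a neighbor has no values left
        if wiped_out:
            continue  # this value fails forward checking; try the next one
        assignment[var] = value
        result = forward_check(assignment, new_domains)
        if result is not None:
            return result
        del assignment[var]  # backtrack
    return None

solution = forward_check({}, {v: ["red", "green", "blue"] for v in neighbors})
```

Because a doomed value is rejected as soon as it empties a neighbor's domain, forward checking typically fails earlier, and therefore searches less, than plain backtracking.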
  • 99. 100 • Constraint-propagation algorithms are a class of methods that use local consistency and inference to prune the search space. They work by propagating constraints between variables and using the inferred information to remove inconsistent values from the variable domains.
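A standard example of constraint propagation is the AC-3 algorithm, which enforces arc consistency. The sketch below is illustrative and assumes the only constraint is that neighboring variables must take different values:

```python
# A minimal sketch of constraint propagation via AC-3, specialized to
# the "neighboring variables must differ" constraint.
from collections import deque

def ac3(domains, neighbors):
    """Enforce arc consistency; return False if some domain is emptied."""
    queue = deque((x, y) for x in neighbors for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # Keep only values of x that have some consistent partner in y's domain.
        revised = [v for v in domains[x] if any(v != w for w in domains[y])]
        if len(revised) < len(domains[x]):
            domains[x] = revised
            if not domains[x]:
                return False  # inconsistency detected
            # Re-examine arcs pointing at x, since its domain shrank.
            for z in neighbors[x]:
                if z != y:
                    queue.append((z, x))
    return True

# Example: A and B must differ, and A is already fixed to "red".
domains = {"A": ["red"], "B": ["red", "green"]}
neighbors = {"A": ["B"], "B": ["A"]}
ok = ac3(domains, neighbors)  # prunes B's domain to ["green"]
```

Propagation of this kind is often run before or during search: every value it removes is one the search would otherwise have had to try and reject.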