Agents in Artificial Intelligence: Different Types of Agents
Agents and Environments
Agent:
An agent can be viewed as anything that perceives its environment through sensors and acts upon that environment through actuators.
For example, human beings perceive their surroundings through their sensory organs, which act as sensors, and take actions using their hands, legs, etc., which act as actuators.
Agents interact with the environment through sensors and actuators.
Terminologies:
Percept: the agent's perceptual inputs at any given instant.
Percept sequence: the complete history of everything the agent has ever perceived.
Behavior of an Agent
Mathematically, an agent's behavior can be described by:
• Agent Function: a mathematical function that maps a percept sequence to an action.
• Agent Program: the agent function of an artificial agent is implemented by a program called the agent program; the agent program is the concrete implementation running within some physical system (sketched below).
• The perception capability is usually called a sensor.
• The actions can depend on the most recent percept or on the entire history (the percept sequence).
• The part of the agent that takes an action is called an actuator.
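To make the distinction concrete, here is a minimal Python sketch (the names are illustrative, not from any standard library): the agent function is the abstract mapping from percept sequences to actions, while the agent program receives one percept at a time and must remember the history itself.

```python
from typing import Callable, List

Percept = str
Action = str

# The agent *function* is an abstract mapping from a percept sequence to an action.
AgentFunction = Callable[[List[Percept]], Action]

def make_agent_program(agent_function: AgentFunction) -> Callable[[Percept], Action]:
    """The agent *program* is the concrete implementation: it receives one
    percept at a time, so it must store the percept sequence itself."""
    history: List[Percept] = []

    def program(percept: Percept) -> Action:
        history.append(percept)        # remember the full percept sequence
        return agent_function(history)

    return program

# Hypothetical agent function: react to the most recent percept only.
program = make_agent_program(lambda seq: "react-to-" + seq[-1])
print(program("light"))   # react-to-light
print(program("noise"))   # react-to-noise
```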
Vacuum Cleaner World Example
There are two rooms and one vacuum cleaner.
There is dirt in both rooms.
The vacuum cleaner is present in one of the rooms.
Goal: clean both rooms.
(Diagram: Room 1 and Room 2 side by side, each containing dirt.)
Representation
The vacuum cleaner is the agent.
Possible actions:
• Move Left
• Move Right
• Clean Dirt
Example (cont.): there are 8 possible states.
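The count of 8 follows directly from the problem structure: the agent can be in either of 2 rooms, and each of the 2 rooms is independently dirty or clean, giving 2 x 2 x 2 = 8 states. A small sketch that enumerates them:

```python
from itertools import product

# Agent location (2) x Room 1 dirty? (2) x Room 2 dirty? (2) = 8 states.
states = [
    {"agent": loc, "room1_dirty": d1, "room2_dirty": d2}
    for loc, d1, d2 in product(["Room 1", "Room 2"], [True, False], [True, False])
]

for i, s in enumerate(states, 1):
    print(i, s)

assert len(states) == 8
```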
Concept of Rationality
Rational Agent: A rational agent is one that takes the right decision in every situation.
An ideal rational agent is one capable of taking the actions expected to maximize its performance measure, on the basis of:
• Its percept sequence
• Its built-in knowledge base
The rationality of an agent depends on the following:
• The performance measures, which determine the degree of success.
• The agent's percept sequence so far.
• The agent's prior knowledge about the environment.
• The actions that the agent can carry out.
A rational agent always performs the right action, where the right action is the one that causes the agent to be most successful, given the percept sequence.
Omniscient Agent: An omniscient agent is an agent that knows the actual outcome of its actions in advance. Such agents are impossible in the real world.
Rational agents differ from omniscient agents because a rational agent tries to get the best possible outcome with its current perception, which leads to imperfection. A chess AI is a good example of a rational agent because, for the current action, it is not possible to foresee every possible outcome, whereas a tic-tac-toe AI is omniscient, as it always knows the outcome in advance.
Nature of Environment
The environment is the place where the agent is going to work. To perform a task in an environment, the following important parameters need to be considered.
PEAS stands for Performance measures, Environment, Actuators, and Sensors.
• Performance measures: the parameters used to measure the performance of the agent, i.e., how well the agent is carrying out a particular assigned task.
• Environment: the task environment of the agent. The agent interacts with its environment: it takes perceptual input from the environment and acts on the environment using actuators.
• Actuators: the means of performing calculated actions on the environment.
• Sensors: the means of taking input from the environment.
Agent type     | Sensors                        | Actuators
Human Agent    | Eyes, ears, nose, ...          | Hands, joints, legs, vocal cords, ...
Robotic Agent  | Cameras, IR sensors, ...       | Motors
Software Agent | Keystrokes, file contents, ... | Writing files, displaying on screen, ...
Autonomous taxi (PEAS example):
• Performance measure: safe, fast, legal, comfortable
• Environment: roads, traffic, customers
• Actuators: steering, accelerator, brake, horn
• Sensors: camera, GPS, odometer, keyboard, microphone
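A PEAS description is just structured data, so it can be written down directly. A minimal sketch (the field names are my own, not a standard API) encoding the taxi example:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    """A PEAS description of a task environment (illustrative sketch)."""
    performance: List[str]   # how success is measured
    environment: List[str]   # what the agent operates in
    actuators: List[str]     # how the agent acts on the environment
    sensors: List[str]       # how the agent perceives the environment

autonomous_taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable"],
    environment=["roads", "traffic", "customers"],
    actuators=["steering", "accelerator", "brake", "horn"],
    sensors=["camera", "GPS", "odometer", "keyboard", "microphone"],
)
print(autonomous_taxi)
```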
Properties of task environments
1. Fully Observable vs Partially Observable:
If an agent's sensors give it access to the complete state of the environment at each point in time, then the task environment is fully observable; otherwise it is only partially observable. If the agent has no sensors at all, the environment is unobservable.
Chess is fully observable: a player gets to see the whole board.
Poker is partially observable: a player gets to see only his own cards, not the cards of anyone else in the game.
2. Single Agent vs Multi-Agent:
An agent operating by itself in an environment is a single agent; when other agents are present, the environment is multi-agent.
For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
• A person left alone in a maze is an example of a single-agent system.
• An environment involving more than one agent is a multi-agent environment.
• The game of football is multi-agent, as it involves 11 players on each team.
3. Competitive vs Collaborative:
• An agent is said to be in a competitive environment when it competes against another agent to optimize the output.
• The game of chess is competitive, as the agents compete with each other to win the game, which is the output.
• An agent is said to be in a collaborative environment when multiple agents cooperate to produce the desired output.
• When multiple self-driving cars are on the roads, they cooperate with each other to avoid collisions and reach their destinations, which is the desired output.
4. Deterministic vs Stochastic:
If the next state of the environment is completely determined by the current state and the actions of the agent, then the environment is deterministic; otherwise it is non-deterministic (stochastic).
Deterministic environment: the game of Tic-Tac-Toe.
Non-deterministic environment: self-driving vehicles.
5. Episodic vs Sequential (Non-Episodic):
In an episodic task environment, the agent's experience is divided into atomic episodes: each episode consists of the agent perceiving and then performing a single action.
There is no dependency between current and previous episodes. In each episode, the agent receives input from the environment and then performs the corresponding action.
Example: consider a pick-and-place robot used to detect defective parts on a conveyor belt. Each time, the robot (agent) makes its decision based on the current part alone, i.e., there is no dependency between current and previous decisions.
In a sequential environment, previous decisions can affect all future decisions. The next action of the agent depends on what actions it has taken previously and what actions it is supposed to take in the future.
Example: checkers, where a previous move can affect all following moves.
(Figure: a pick-and-place robot.)
6. Dynamic vs Static:
An environment that keeps changing while the agent is acting is said to be dynamic.
An idle environment with no change in its state is called a static environment: if the environment does not change while the agent is acting, then it is static.
Dynamic example: playing football; the other players make the environment dynamic, since for every action there will be a new reaction.
7. Discrete vs Continuous:
If an environment consists of a finite number of actions that can be deliberated in the environment to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete, as it has only a finite number of moves. The number of moves may vary from game to game, but it is still finite.
An environment in which the actions cannot be numbered, i.e., one that is not discrete, is said to be continuous.
• Self-driving cars operate in a continuous environment, as their actions, such as driving and parking, cannot be numbered.
8. Known vs Unknown:
In a known environment, the outcomes of all possible actions are given.
In an unknown environment, the agent has to learn how the environment works in order to make good decisions.
Example
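For instance, a few familiar task environments can be classified along the properties above. The assignment below is an illustrative sketch (some entries are debatable, e.g., chess played with a clock is sometimes called semi-dynamic):

```python
# Columns: observability, agents, determinism, episodic?, static?, discrete?
environments = {
    "Crossword":    ("fully",     "single", "deterministic", "sequential", "static",  "discrete"),
    "Chess":        ("fully",     "multi",  "deterministic", "sequential", "static",  "discrete"),
    "Poker":        ("partially", "multi",  "stochastic",    "sequential", "static",  "discrete"),
    "Taxi driving": ("partially", "multi",  "stochastic",    "sequential", "dynamic", "continuous"),
}

for name, props in environments.items():
    print(f"{name:12s}  " + "  ".join(props))
```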
Structure of Agents
The main goal of AI is to design an agent program that implements
the agent function (the mapping from percepts to actions).
Agent = physical sensors and actuators + program
More precisely: Agent = architecture + program, where the architecture is the physical machinery (the sensors and actuators together with the computing device on which the program runs).
Agent programs
The agent program takes the current percept as input from the sensors and returns an action to the actuators.
Note the difference: the agent program takes only the current percept as input, whereas the agent function takes the entire percept history. If the agent's actions need to depend on the entire percept sequence, the agent program will have to remember the percepts.
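One classic way to make this concrete is the table-driven agent: the program appends each percept to an internal list and looks the whole sequence up in a table of actions. The sketch below is illustrative (the table entries are hypothetical), and the approach is impractical for real problems because the table grows astronomically, but it shows why the program must remember the percept sequence.

```python
def make_table_driven_agent(table):
    """Implement an agent function given as an explicit lookup table."""
    percepts = []

    def program(percept):
        percepts.append(percept)                   # remember the history
        return table.get(tuple(percepts), "NoOp")  # look up the whole sequence

    return program

# Hypothetical two-entry table for a toy vacuum agent.
agent = make_table_driven_agent({
    ("dirty",): "Clean Dirt",
    ("dirty", "clean"): "Move Right",
})
print(agent("dirty"))   # Clean Dirt
print(agent("clean"))   # Move Right
```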
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agent
Simple reflex agents
• Simple reflex agents are the simplest agents. They make decisions on the basis of the current percept and ignore the rest of the percept history.
• These agents succeed only in fully observable environments. If the environment is partially observable, the agent function can enter infinite loops, which can be escaped only by randomizing its actions.
• The simple reflex agent does not consider any part of the percept history during its decision and action process.
• The simple reflex agent works on condition-action rules, meaning it maps the current state directly to an action: if the condition is true, the action is taken; otherwise it is not.
Simple reflex agents
Function Simple-Reflex-Agent(percept)
static: rules /* condition-action rules */
state <- Interpret_Input(percept)
rule <- Rule_Match(state, rules)
action <- Rule_Action(rule)
return(action)
The vacuum agent is a simple reflex agent because its decision is based only on the current location and on whether that location contains dirt.
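For concreteness, a minimal Python version of such a vacuum agent (the percept format and action names are assumptions based on the earlier vacuum-world slide):

```python
def vacuum_reflex_agent(percept):
    """Simple reflex vacuum agent: decides from the current percept alone.
    A percept is assumed to be a (location, status) pair."""
    location, status = percept
    if status == "dirty":
        return "Clean Dirt"
    elif location == "Room 1":
        return "Move Right"
    else:
        return "Move Left"

print(vacuum_reflex_agent(("Room 1", "dirty")))   # Clean Dirt
print(vacuum_reflex_agent(("Room 1", "clean")))   # Move Right
```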
Model-based reflex agents
A model-based agent can handle partially observable environments. It consists of two important parts: the model and the internal state.
The model provides knowledge of how things occur in the environment, so that the current situation can be assessed, a condition can be matched, and appropriate actions can then be performed by the agent.
The internal state uses the perceptual history to represent the current situation. The agent keeps track of this internal state, adjusting it with each percept; the internal state is what the agent stores to describe the unseen parts of the world.
The agent's state is updated using knowledge of how the environment evolves and of how the agent's actions affect the environment.
Model-based reflex agents
Function Reflex-Agent-With-State(percept)
static: state, /* description of the current world state */
rules /* set of condition-action rules */
state <- Update_State(state, percept)
rule <- Rule_Match(state, rules)
action <- Rule_Action(rule)
state <- Update_State(state, action)
return(action)
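A rough Python rendering of the same idea (the state representation, model, and rules are illustrative): each percept is folded into an internal state, and the condition-action rules are applied to that state rather than to the raw percept.

```python
def make_model_based_agent(update_state, rules):
    """rules is a list of (condition(state) -> bool, action) pairs."""
    state = {}

    def program(percept):
        nonlocal state
        state = update_state(state, percept)  # incorporate the new percept
        for condition, action in rules:
            if condition(state):
                return action
        return "NoOp"

    return program

# Hypothetical vacuum-world model: remember the last known status of each room.
def update_state(state, percept):
    location, status = percept
    return {**state, "location": location, location: status}

rules = [
    (lambda s: s.get(s["location"]) == "dirty", "Clean Dirt"),
    (lambda s: s["location"] == "Room 1",       "Move Right"),
    (lambda s: True,                            "Move Left"),
]

agent = make_model_based_agent(update_state, rules)
print(agent(("Room 1", "dirty")))   # Clean Dirt
print(agent(("Room 2", "clean")))   # Move Left
```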
Goal-based agents
These agents make decisions based on how far they currently are from their goal. Every action they take is intended to reduce the distance to the goal.
This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state.
The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning.
The goal-based agent is an improvement over the model-based agent: information about the goal is also included, because it is not always sufficient to know just the current state; knowledge of the goal is more beneficial.
Goal-based agents
Function Goal-Based-Agent(percept)
static: state, /* description of the current world state */
rules /* set of condition-action rules */
goal /* set of specific success states */
state <- Update_State(state, percept)
rule <- Rule_Match(state, rules)
action <- Rule_Action(rule)
state <- Update_State(state, action)
if (state in goal) then
return (action)
else
percept <- Obtain_Percept(state, goal)
return(Goal-Based-Agent(percept))
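Another common way to sketch the same idea is with lookahead: predict the result of each available action and pick the one whose outcome is closest to the goal. A toy Python version (one-step lookahead only; real goal-based agents typically search or plan over many steps):

```python
def goal_based_step(state, actions, result, goal_test, distance):
    """Pick the action whose predicted outcome is closest to the goal."""
    if goal_test(state):
        return "NoOp"
    return min(actions, key=lambda a: distance(result(state, a)))

# Toy example: walk along a number line toward position 3.
actions = ["left", "right"]
result = lambda s, a: s + (1 if a == "right" else -1)

print(goal_based_step(0, actions, result,
                      goal_test=lambda s: s == 3,
                      distance=lambda s: abs(s - 3)))   # right
```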
Utility-based agents
• The utility-based agent uses building blocks that help in taking the best actions and decisions when multiple alternatives are present.
• It is an improvement over the goal-based agent, as it considers not only the goal but also the way the goal can be achieved, so that the goal can be achieved in a quicker, safer, or cheaper way.
• When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state; the utility describes how "happy" the agent is.
• Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number that describes the associated degree of happiness or success.
Utility-based agents
Function Utility-Based-Agent(percept)
static: state, /* description of the current world state */
rules /* set of condition-action rules */
goal /* set of specific success states */
state <- Update_State(state, percept)
rule <- Rule_Match(state, rules)
action <- Rule_Action(rule)
state <- Update_State(state, action)
score <- Obtain_Score(state)
if (state in goal) and Best_Score(score) then
return(action)
else
percept <- Obtain_Percept(state, goal)
return(Utility-Based-Agent(percept))
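The core computation, choosing the action with the highest expected utility, fits in a few lines of Python. The outcome model and utility values below are made up for illustration:

```python
def expected_utility(action, outcomes, utility):
    """outcomes(action) yields (probability, resulting_state) pairs."""
    return sum(p * utility(s) for p, s in outcomes(action))

def utility_based_choice(actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Toy example: a reliable action beats a risky one with a higher best case.
outcomes = lambda a: {"safe":  [(1.0, 8)],
                      "risky": [(0.5, 10), (0.5, 0)]}[a]

print(utility_based_choice(["safe", "risky"], outcomes, utility=lambda s: s))
# -> safe  (expected utility 8.0 vs 5.0)
```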
Learning Agent
A learning agent, as the name suggests, has the capability to learn from past experiences and takes actions or decisions based on its learning.
It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
A learning agent has four main conceptual components (a skeleton wiring them together is sketched below):
• Learning element: responsible for making improvements by learning from the environment.
• Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
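The skeleton below shows one way the four components might be wired together; the interfaces are illustrative, not a standard API.

```python
class LearningAgent:
    """Skeleton learning agent wiring together the four components."""

    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # scores behaviour against a standard
        self.problem_generator = problem_generator      # proposes exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                             # how well are we doing?
        self.learning_element(self.performance_element, feedback)   # learn from the feedback
        exploratory_action = self.problem_generator(percept)        # maybe explore instead
        return exploratory_action or self.performance_element(percept)
```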
Thank You