Intelligent Agents
Chapter 2
Outline
• Agents and environments
• Rationality
• PEAS (Performance measure,
Environment, Actuators, Sensors)
• Environment types
• Agent types
Agents
• An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators
Agents and environments
• The agent function maps from percept histories
to actions:
f : P* → A
• The agent program runs on the physical
architecture to produce f
• agent = architecture + program
Vacuum-cleaner world
• Percepts: location and contents, e.g.,
[A,Dirty]
• Actions: Left, Right, Suck, NoOp
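As a sketch, the agent function for this two-square world can be written in a few lines of Python. The location and action names follow the slide; the percept format (a location/status pair) is an illustrative assumption:

```python
def reflex_vacuum_agent(percept):
    """Map the current percept (location, status) to an action.

    Sketch of the vacuum-cleaner world agent: suck if the current
    square is dirty, otherwise move to the other square.
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"
```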
Rational agents
• An agent should strive to "do the right thing",
based on what it can perceive and the actions it
can perform. The right action is the one that will
cause the agent to be most successful.
• Performance measure: An objective criterion for
success of an agent's behavior
• E.g., performance measure of a vacuum-cleaner
agent could be amount of dirt cleaned up,
amount of time taken, amount of electricity
consumed, amount of noise generated, etc.
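One way to make a performance measure concrete is to score a run of the vacuum world. The scoring rule below (one point per clean square per time step) is an assumed illustrative choice, not the only reasonable one:

```python
def performance(history):
    """Score a run: +1 for each clean square at each time step.

    `history` is a list of world states; each state maps a location
    to its status ("Clean" or "Dirty"). This rewards keeping squares
    clean over time rather than merely the total dirt sucked up.
    """
    return sum(1 for state in history
                 for status in state.values()
                 if status == "Clean")
```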
Rational agents
• Rational Agent: For each possible percept
sequence, a rational agent should select
an action that is expected to maximize its
performance measure, given the evidence
provided by the percept sequence and
whatever built-in knowledge the agent has.
Rational agents
• Rationality is distinct from omniscience
(being all-knowing, with infinite knowledge)
• Agents can perform actions in order to
modify future percepts so as to obtain
useful information (information gathering,
exploration)
• An agent is autonomous if its behavior is
determined by its own experience (with
ability to learn and adapt)
PEAS
• PEAS: Performance measure, Environment,
Actuators, Sensors
• Must first specify the setting for intelligent agent
design
• Consider, e.g., the task of designing an
automated taxi driver:
– Performance measure
– Environment
– Actuators
– Sensors
PEAS
• Must first specify the setting for intelligent agent
design
• Consider, e.g., the task of designing an
automated taxi driver:
– Performance measure: Safe, fast, legal, comfortable
trip, maximize profits
– Environment: Roads, other traffic, pedestrians,
customers
– Actuators: Steering wheel, accelerator, brake, signal,
horn
– Sensors: Cameras, sonar, speedometer, GPS,
odometer, engine sensors, keyboard
PEAS
• Agent: Medical diagnosis system
• Performance measure: Healthy patient,
minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions,
tests, diagnoses, treatments, referrals)
• Sensors: Keyboard (entry of symptoms,
findings, patient's answers)
PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of
parts in correct bins
• Environment: Conveyor belt with parts,
bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors
PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's
score on test
• Environment: Set of students
• Actuators: Screen display (exercises,
suggestions, corrections)
• Sensors: Keyboard
Environment types
• Fully observable (vs. partially observable): An agent's
sensors give it access to the complete state of the
environment at each point in time.
• Deterministic (vs. stochastic): The next state of the
environment is completely determined by the current
state and the action executed by the agent. (If the
environment is deterministic except for the actions of
other agents, then the environment is strategic)
• Episodic (vs. sequential): The agent's experience is
divided into atomic "episodes" (each episode consists of
the agent perceiving and then performing a single
action), and the choice of action in each episode
depends only on the episode itself.
Environment types
• Static (vs. dynamic): The environment is
unchanged while an agent is deliberating. (The
environment is semidynamic if the environment
itself does not change with the passage of time
but the agent's performance score does)
• Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and actions.
• Single agent (vs. multiagent): An agent
operating by itself in an environment.
Environment types
• Uncertain (vs. certain): An environment is
uncertain if it is not fully observable or not
deterministic.
• Fully observable
For example, a checkers game can be
classed as fully observable, because the
agent can observe the full state of the
game: how many pieces the opponent has,
how many pieces we have, etc.
• Partially observable
An example of this could be a poker game.
The agent may not know what cards the
opponent holds and must make the best
decision it can based on the cards the
opponent has played.
• Deterministic
Deterministic environments are those where
the agent's actions uniquely determine the
outcome. For example, if we move a pawn
from A2 to A3 while playing chess, that
move always works; there is no uncertainty
in its outcome.
• Episodic
Many classification tasks are episodic. For
example, an agent that has to spot
defective parts on an assembly line bases
each decision on the current part,
regardless of previous decisions;
moreover, the current decision doesn’t
affect whether the next part is defective.
• Sequential
In sequential environments, on the other hand, the
current decision could affect all future decisions.
Chess and taxi driving are sequential: in both
cases, short-term actions can have long-term
consequences.
Episodic environments are much simpler than
sequential environments because the agent
does not need to think ahead.
• Static environments
Static environments are easy to deal with
because the agent need not keep looking at
the world while it is deciding on an action,
nor need it worry about the passage of time.
Crossword puzzles are static.
• Dynamic environments
Dynamic environments are continuously asking
the agent what it wants to do; if it hasn't
decided yet, that counts as deciding to do
nothing.
Taxi driving is clearly dynamic: the other
cars and the taxi itself keep moving.
• Semidynamic
Chess, when played with a clock, is
semidynamic.
• Discrete
In discrete environments, we have a finite
number of action choices and a finite
number of things that we can sense. Using
our checkers example again, there are a
finite number of board positions and a
finite number of things we can do within
the checkers environment.
• Continuous
In continuous environments, percepts and
actions range over continuous values. To
apply this to a medical context, a patient's
temperature and blood pressure are continuous
variables, and can be sensed by medical agents
designed to capture vital signs from patients
and then recommend diagnostic action to
healthcare professionals.
• Discrete vs. continuous: The discrete/continuous
distinction applies to the state of the
environment, to the way time is handled, and to
the percepts and actions of the agent. Taxi
driving is a continuous-state and continuous-
time problem: the speed and location of the taxi
and of the other vehicles sweep through a range
of continuous values and do so smoothly over
time. Taxi-driving actions are also continuous
(steering angles, etc.).
• Single vs. multiagent
For example, an agent solving a crossword
puzzle by itself is clearly in a single-agent
environment, whereas an agent playing
chess is in a two-agent environment
• Multiagent environments
– Competitive: chess
– Cooperative: taxi driving
Environment types
                   Chess with   Chess without   Taxi
                   a clock      a clock         driving
Fully observable   Yes          Yes             No
Deterministic      Strategic    Strategic      No
Episodic           No           No              No
Static             Semi         Yes             No
Discrete           Yes          Yes             No
Single agent       No           No              No
• The environment type largely determines the agent design
• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, multi-agent
Class exercise
• Write PEAS descriptions and discuss all
environment types for the following:
– Internet book-shopping agent
– Robot soccer player
Agent functions and programs
• An agent is completely specified by the
agent function mapping percept
sequences to actions
• One agent function (or a small
equivalence class) is rational
• Aim: find a way to implement the rational
agent function concisely
Table-lookup agent
• Drawbacks:
– Huge table
– Take a long time to build the table
– No autonomy
– Even with learning, need a long time to learn
the table entries
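A minimal sketch makes the drawbacks above concrete: a table-lookup agent needs one entry per possible percept *sequence*, so the table grows exponentially with the length of the run. The names and percept format here are illustrative assumptions:

```python
def table_driven_agent_factory(table):
    """Return an agent that looks up its entire percept sequence.

    `table` maps tuples of percepts to actions. Any sequence with no
    entry falls back to "NoOp" -- in a complete table there would be
    an entry for every possible sequence, hence the huge size.
    """
    percepts = []  # internal record of everything perceived so far

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return agent

# Usage with a tiny hand-built fragment of a vacuum-world table:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = table_driven_agent_factory(table)
```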
Agent types
• Four basic types in order of increasing
generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Simple reflex agents
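A simple reflex agent chooses an action based only on the current percept, by matching it against condition-action rules. A sketch under assumed names (the rule representation as predicate/action pairs is not from the slide):

```python
def simple_reflex_agent_factory(rules):
    """Agent that fires the first condition-action rule that matches
    the current percept. No memory, no model of the world."""
    def agent(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return "NoOp"  # no rule matched
    return agent

# Vacuum-world rules expressed as (predicate, action) pairs:
rules = [
    (lambda p: p[1] == "Dirty", "Suck"),
    (lambda p: p[0] == "A", "Right"),
    (lambda p: p[0] == "B", "Left"),
]
agent = simple_reflex_agent_factory(rules)
```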
Model-based reflex agents
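A model-based reflex agent keeps internal state tracking the parts of the world it cannot currently see. A sketch for the vacuum world, assuming the agent remembers the last known status of each square (the stopping behavior when all squares are believed clean is an assumed extension):

```python
def model_based_vacuum_agent_factory():
    """Vacuum agent with internal state: remembers each square's
    last observed status, and stops once all are believed clean."""
    model = {"A": None, "B": None}  # None = status not yet observed

    def agent(percept):
        location, status = percept
        model[location] = status  # update internal state from percept
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in model.values()):
            return "NoOp"  # believes the whole world is clean
        return "Right" if location == "A" else "Left"

    return agent
```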
Goal-based agents
Utility-based agents
Learning agents