Chapter Two
Intelligent Agent
1
Objectives
 Gives ideas about agents, the agent function, the agent
program and architecture, environments, percepts,
sensors, and actuators (effectors)
 Gives an idea of how an agent should act
 How to measure agent success
 Rational agent, autonomous agent
 Types of environment
 Types of agent
2
Intelligent Agent
• I want to build a robot that will
– Clean my house
– Cook when I don’t want to
– Wash my clothes
– Cut my hair
– Fix my car (or take it to be fixed)
– Take a note when I am in a meeting
i.e. do the things that I don’t feel like doing…
• AI is the science of building machines (agents) that act
rationally with respect to a goal.
• AI is the study and construction of rational agents
– A rational agent is one that does the right thing
3
Agent
•An agent is something that perceives its environment through
SENSORS and acts upon that environment through
EFFECTORS.
•The agent is assumed to exist in an environment in which it
perceives and acts.
•An agent is rational if it does the right thing.
4
Agent
            Human beings         Agents
Sensors     Eyes, Ears, Nose     Cameras, Scanners
Effectors   Hands, Legs, Mouth   Various Motors
5
Examples of agents in different types of applications

Agent type                      | Percepts                              | Actions                                   | Goals                            | Environment
Medical diagnosis system        | Symptoms, findings, patient's answers | Questions, tests, treatments              | Healthy patients, minimize costs | Patient, hospital
Interactive English tutor       | Typed words                           | Print exercises, suggestions, corrections | Maximize student's score on test | Set of students
Part-picking robot              | Pixels of varying intensity           | Pick up parts and sort into bins          | Place parts in correct bins      | Conveyor belts with parts
Satellite image analysis system | Pixels of varying intensity, color    | Print a categorization of scene           | Correct categorization           | Images from orbiting satellite
Refinery controller             | Temperature, pressure readings        | Open, close valves; adjust temperature    | Maximize purity, yield, safety   | Refinery
6
Agents and environments
• The agent function maps from percept histories to
actions:
[f: P* → A]
• The agent program runs on the physical
architecture to produce f
• agent = architecture + program
• Ideal Example of Agent
– Vacuum-cleaner world
• Percepts: location and contents
– e.g., [A, Dirty]
• Actions: Left, Right, Suck
7
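The vacuum-cleaner world above can be sketched as a tiny agent program. This is a hedged illustration: the function name and the action strings are assumptions made for the example, not part of any specific library.

```python
# A minimal sketch of the vacuum-cleaner world agent function.
# The percept is a (location, status) pair, e.g. ("A", "Dirty").

def vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

# Tabulating the agent function f: P* -> A for single percepts:
for p in [("A", "Clean"), ("A", "Dirty"), ("B", "Clean"), ("B", "Dirty")]:
    print(p, "->", vacuum_agent(p))
```

Tabulating the function like this makes the point that an agent function is just a mapping from percepts (here, single percepts rather than full histories) to actions.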
Rationality vs. Omniscience
•A rational agent is an agent that does the right thing given the
data perceived from the environment.
•A rational agent acts so as to achieve its goals, given its
beliefs (one that does the right thing).
–What does "right thing" mean? An action that will cause the agent to be most
successful and is expected to maximize goal achievement, given the
available information
•An Omniscient agent knows the actual outcome of its actions,
and can act accordingly, but in reality omniscience is
impossible.
•A rational agent could make a mistake because of unpredictable
factors at the time of making a decision.
•An omniscient agent that acts and thinks rationally never makes a
mistake.
8
Example
– You are walking along the road to mebrat-haile;
you see an old friend across the street.
There is no traffic.
– So, being rational, you start to cross the
street.
– Meanwhile a big banner falls off from above
and before you finish crossing the road, you
are flattened.
Were you irrational to cross the street?
9
Rationality
• This points out that rationality is concerned
with expected success, given what has
been perceived.
• Crossing the street was rational, because
most of the time, the crossing would be
successful, and there was no way you
could have foreseen the falling banner.
10
Rational agent
•The EXAMPLE shows that we cannot blame an
agent for failing to take into account something it
could not perceive,
or for failing to take an action that it is incapable of
taking.
•In summary what is rational at any given point
depends on four things.
–Everything that the agent has perceived so far
–What an agent knows about the environment
–The actions that the agent can perform
–The performance measure that defines degrees of success;
11
Performance measure
• How do we decide whether an agent is
successful or not?
– Establish a standard of what it means to be
successful in an environment and use it to measure
the performance
– A rational agent should do whatever action is
expected to maximize its performance measure, on
the basis of the evidence provided by the percept
sequence and whatever built-in knowledge the agent
has.
• What is the performance measure for “crossing
the road”?
12
cont’d
 Performance measure (how?)
◦ Subjective measure using the agent
 How happy is the agent at the end of the action
 Agent should answer based on his opinion
 Some agents are unable to answer, some delude
themselves, some overestimate and some underestimate
their success
 Therefore, a subjective measure is not a reliable way.
 Objective Measure imposed by some authority is an
alternative
13
 Objective Measure
◦ Needs standard to measure success
◦ Provides quantitative value of success measure of
agent
◦ Involves factors that affect performance and a weight for
each factor
• E.g., performance measure of a vacuum-cleaner agent
could be
• Amount of dirt cleaned up,
• Amount of time taken,
• Amount of electricity consumed,
• Amount of noise generated, etc.
• The when of evaluating performance is also important
for success.
• It may include knowing the starting time, finishing time, duration of
the job, consistency, etc.
14
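An objective measure of the kind described above can be sketched as a weighted sum over the factors listed for the vacuum-cleaner agent. The factor names and weight values here are illustrative assumptions, not values from the source.

```python
# A hedged sketch of an objective performance measure for the
# vacuum-cleaner agent: a weighted sum of the factors listed above.
# Weights are illustrative: reward dirt cleaned, penalise time,
# electricity and noise.

def performance(dirt_cleaned, time_taken, electricity, noise,
                weights=(10.0, -0.5, -1.0, -0.2)):
    """Higher score means better performance under this measure."""
    w_dirt, w_time, w_elec, w_noise = weights
    return (w_dirt * dirt_cleaned + w_time * time_taken
            + w_elec * electricity + w_noise * noise)

score = performance(dirt_cleaned=8, time_taken=30, electricity=2, noise=5)
print(score)  # 80 - 15 - 2 - 1 = 62.0
```

Choosing the weights is itself a design decision by the authority that imposes the measure; different weights rank the same agents differently.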
cont’d
Factors to measure rationality of agents
1. Percept sequence perceived so far (do we have the
entire history of how the world evolved or not)
2. The set of actions that the agent can perform
(agents designed to do the same job with different
action set will have different performance)
3. Performance measures ( is it subjective or
objective? What are the factors and their weights)
4. The agent's knowledge about the environment (what
kind of sensors does the agent have? Does the agent
know everything about the environment or not)
15
Autonomous agent
•It is a system situated in, and a part of, an
environment, which senses that environment, and acts
on it, over time, in pursuit of its own agenda and so as
to effect what it senses in the future.
–This agenda evolves from drives (or programmed goals).
–exercises control over its own actions
•An agent is autonomous if its behavior is determined
by its own experience (with the ability to learn and adapt)
•An agent lacks autonomy if its actions are
based completely on built-in knowledge
16
Cont’d
•A fully autonomous robot has the ability to
–Gain information about the environment.
–Work for an extended period without human intervention.
–Move throughout its operating environment without
human assistance.
–Avoid situations that are harmful to people, property, or
itself.
–Learn or gain new capabilities like adjusting strategies for
accomplishing its task(s) or adapting to changing
surroundings.
–Autonomous robots still require regular maintenance, as
do other machines.
17
Cont.
• AI assistants, like Alexa and Siri,
are examples of intelligent agents: they
use sensors to perceive a request made by
the user and then automatically collect data
from the internet without the user's help.
• They can be used to gather information
about their perceived environment, such as
the weather and time.
18
Cont..
• Alexa is a smart voice assistant
developed by Amazon, which is capable of
a multitude of tasks. With a simple voice
command, Alexa can set alarms,
reminders, play music, answer questions,
search the internet, and control smart
home devices.
19
Designing an agent
• An agent has two parts: architecture + program
– This course concentrates on the program
• Architecture
– Runs the programs
– Makes the percept from the sensors available to the
programs
– Feeds the program’s action choices to the effectors
• Programs
– Accept percepts from an environment and generate
actions
• Before designing an agent program, we need to know the
possible percepts and actions
– By enabling a learning mechanism, the agent could have
a degree of autonomy, such that it can reason and make
decisions
20
function SKELETON-AGENT (percept) returns action
static: memory, the agent's memory of the world
memory ← UPDATE-MEMORY(memory, percept)
action ← CHOOSE-BEST-ACTION(memory)
memory ← UPDATE-MEMORY(memory, action)
return action
On each invocation, the agent’s memory is updated to reflect the
new percept, the best action is chosen, and the fact that the action
was taken is also stored in the memory. The memory persists from
one invocation to the next.
Program Skeleton of Agent
NOTE: Performance measure is not part of the agent
21
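The skeleton above can be transliterated into Python as a sketch. UPDATE-MEMORY and CHOOSE-BEST-ACTION are deliberately left as placeholders supplied by the caller; the trivial instantiation at the end is an assumption made purely to show the control flow.

```python
# The SKELETON-AGENT pseudocode as a Python sketch. The memory
# persists from one invocation to the next via a closure.

def make_skeleton_agent(update_memory, choose_best_action):
    memory = []  # the agent's memory of the world

    def agent(percept):
        nonlocal memory
        memory = update_memory(memory, percept)       # reflect new percept
        action = choose_best_action(memory)           # pick best action
        memory = update_memory(memory, action)        # record action taken
        return action

    return agent

# Trivial instantiation: remember everything, acknowledge the last percept.
agent = make_skeleton_agent(
    update_memory=lambda mem, item: mem + [item],
    choose_best_action=lambda mem: ("ack", mem[-1]),
)
print(agent("ping"))  # ('ack', 'ping')
```

Note that, as the slide says, the performance measure appears nowhere in the agent itself; it is applied externally.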
Cont’d
• Design of an intelligent agent needs prior knowledge of
• the Performance measure or Goal the agent is supposed
to achieve,
• what kind of Environment it operates in,
• what kind of Actuators it has (what are the possible
Actions),
• what kind of Sensors the agent has (what are the
possible Percepts)
• Performance measure, Environment, Actuators,
Sensors are abbreviated as PEAS
• Percepts, Actions, Goals, Environment are
abbreviated as PAGE
22
Cont’d
 Agent: Medical diagnosis system
◦ Environment: Patient, hospital, physician, nurses
◦ Sensors: Keyboard (percepts can be symptoms,
findings, patient's answers)
◦ Actuators: Screen display (actions can be questions,
tests, diagnoses, treatments, referrals)
◦ Performance measure: Healthy patient, minimize
costs, lawsuits
23
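A PEAS description is just structured data, so it can be recorded as plain Python. The sketch below records the medical diagnosis system from the slide; the field names are illustrative assumptions.

```python
# Recording a PEAS description as data (a sketch, not a standard API).
from dataclasses import dataclass, field

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

medical_diagnosis = PEAS(
    performance_measure=["healthy patient", "minimize costs", "minimize lawsuits"],
    environment=["patient", "hospital", "physician", "nurses"],
    actuators=["screen display (questions, tests, diagnoses, treatments, referrals)"],
    sensors=["keyboard (symptoms, findings, patient's answers)"],
)
print(medical_diagnosis.sensors)
```

Writing the description down like this before coding the agent program makes the design checklist on the previous slide concrete.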
Example
Example: Automating taxi driving.
– what are the sensors, effectors, goals,
environment and performance measure?
• Note:
– The taxi driver needs to know where it is?
What else is on the road? How fast it has to
drive? How to communicate with the
passengers and other vehicles?
24
Classes of Environments
•Actions are done by the agent on the environment.
Environments provide percepts to an agent.
•Agent perceives and acts in an environment.
•Properties of Environments:
–Fully observable vs. partially observable
–Deterministic vs. stochastic
–Episodic vs. non-episodic
–Static vs. Dynamic
–Discrete vs. continuous
–Single vs. multiagent
25
Fully observable vs. partially
observable
• Based on the portion of the environment
observable
• Do the agent's sensors see the complete
state of the environment?
– If an agent has access to the complete state of the
environment, then the environment is accessible or
fully observable.
• An environment is effectively accessible if the
sensors detect all aspects that are relevant to
the choice of action.
• Taxi driving is partially observable
26
Deterministic vs. stochastic
• Based on the effect of the agent action
• Is there a unique mapping from one state to
another state for a given action?
• The environment is deterministic if the next state
is completely determined by
– the current state of the environment and
– the actions selected by the agents.
• Taxi driving is non-deterministic (i.e. stochastic)
• Strategic: if the environment is deterministic
except for the actions of other agents, then
the environment is strategic
27
Cont.
• Most real situations are so complex that it is
impossible to keep track of all the unobserved
aspects; for practical purposes, they must be
treated as stochastic.
• Taxi driving is clearly stochastic in this sense,
because one can never predict the behavior of
traffic exactly;
• We say an environment is UNCERTAIN if it is
not fully observable or not deterministic.
28
Episodic vs. Sequential
• Based on loosely dependent sub-objectives
• Does the next “episode” depend on the actions taken in
previous episodes?
• In an episodic environment, the agent's
experience is divided into "episodes".
– Each episode consists of the agent perceiving and
then acting. The quality of its action depends just on
the episode itself.
• In sequential environment the current decision
could affect all future decisions
• Taxi driving is sequential
29
Static vs. Dynamic
• Based on the effect of time
• Can the world change while the agent is
thinking?
– If the environment can change while the agent
is deliberating, then we say the environment is
dynamic for that agent
– otherwise it is static.
• Taxi driving is dynamic
30
Discrete vs. Continuous
• Based on the state, action and percept space pattern
• Are the distinct percepts & actions limited or unlimited?
• Discrete: A limited number of distinct, clearly defined
state, percepts and actions.
• Continuous: state, percept and action are
continuously changing variables
– If there are a limited number of distinct, clearly
defined percepts and actions, we say the
environment is discrete.
• Taxi driving is continuous – speed, location and steering
angles are in a range of continuous values.
• Chess is discrete – there are a fixed number of possible
moves on each turn
31
Single vs. Multi
• Based on the number of agents involved
• Single-agent: a single agent operating by itself
in an environment.
• Multi-agent: multiple agents are involved in
the environment
32
Example
                  Chess with a clock   Chess without a clock   Taxi driving
Fully observable  Yes                  Yes                     No
Deterministic     Strategic            Strategic               No
Episodic          No                   No                      No
Static            Semi                 Yes                     No
Discrete          Yes                  Yes                     No
Single agent      No                   No                      No
• The environment type largely determines the agent design
• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, multi-agent
33
Environment Types
Below are lists of properties of a number of familiar environments

Problems             | Observable | Deterministic | Episodic | Static | Discrete
Crossword Puzzle     | Yes        | Yes           | No       | Yes    | Yes
Part-picking robot   | No         | No            | Yes      | No     | No
Web shopping program | No         | No            | No       | No     | Yes
Tutor                | No         | No            | No       | Yes    | Yes
Medical Diagnosis    | No         | No            | No       | No     | No
Taxi driving         | No         | No            | No       | No     | No

•Hardest case: an environment that is inaccessible, sequential,
non-deterministic, dynamic, continuous.
34
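The environment-property table above can be held as data so the "hardest case" can be checked programmatically. The dictionary keys and property ordering below are illustrative assumptions mirroring the table.

```python
# The environment-property table as data. Each tuple lists the
# properties in this order:
PROPERTIES = ("observable", "deterministic", "episodic", "static", "discrete")

ENVIRONMENTS = {
    "crossword puzzle":     (True,  True,  False, True,  True),
    "part-picking robot":   (False, False, True,  False, False),
    "web shopping program": (False, False, False, False, True),
    "tutor":                (False, False, False, True,  True),
    "medical diagnosis":    (False, False, False, False, False),
    "taxi driving":         (False, False, False, False, False),
}

def is_hardest_case(name):
    # Hardest case: inaccessible, non-deterministic, sequential,
    # dynamic, continuous -- i.e. every property is "No".
    return not any(ENVIRONMENTS[name])

print([n for n in ENVIRONMENTS if is_hardest_case(n)])
# ['medical diagnosis', 'taxi driving']
```

This matches the slide's point that the environment type largely determines the agent design: the bottom rows of the table need the most sophisticated agents.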
Types of agents
Simple reflex agents
•works by finding a rule whose condition matches the
current situation (as defined by the percept) and then
doing the action associated with that rule.
E.g. If the car in front brakes, and its brake lights
come on, then the driver should notice this and
initiate braking,
–Some processing is done on the visual input to establish the
condition. If "The car in front is braking"; then this triggers
some established connection in the agent program to the
action "initiate braking". We call such a connection a
condition-action rule, written as: If car-in-front-is-braking
then initiate-braking.
•Humans also have many such condition-action rules, some of which
are learned responses and some of which are innate (inborn)
responses
–Blinking when something approaches the eye.
35
[Diagram: Simple reflex agent. Sensors tell the agent "what the world
is like now"; condition-action rules determine "what action I should do
now"; effectors act on the environment.]
function SIMPLE-REFLEX-AGENT(percept) returns action
static: rules, a set of condition-action rules
state ← INTERPRET-INPUT (percept)
rule ← RULE-MATCH (state, rules)
action ← RULE-ACTION [rule]
return action
Structure of a simple reflex agent
36
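The SIMPLE-REFLEX-AGENT pseudocode can be sketched in Python with a single condition-action rule taken from the driving example. The rule table, the trivial INTERPRET-INPUT, and the "no-op" default are assumptions made for the illustration.

```python
# A sketch of a simple reflex agent with one condition-action rule.

def interpret_input(percept):
    # Trivial "vision": assume the percept already names the condition.
    return percept

RULES = {
    "car-in-front-is-braking": "initiate-braking",
}

def simple_reflex_agent(percept):
    state = interpret_input(percept)          # INTERPRET-INPUT
    action = RULES.get(state, "no-op")        # RULE-MATCH + RULE-ACTION
    return action

print(simple_reflex_agent("car-in-front-is-braking"))  # initiate-braking
```

The agent's weakness is visible in the code: it can only react to the current percept, because nothing is stored between calls.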
Model-Based Reflex Agent
•This is a reflex agent with internal state.
–It keeps track of the world that it can’t see now.
•It works by finding a rule whose condition matches the current
situation (as defined by the percept and the stored internal
state)
–If the car is a recent model, there is a centrally mounted brake light. With
older models there is no centrally mounted brake light, so what if the agent gets
confused?
Is it a parking light? Is it a brake light? Is it a turn signal light?
–Some sort of internal state should be maintained in order to choose an action.
–The camera should detect whether two red lights at the edge of the vehicle go ON
or OFF simultaneously.
•The driver should look in the rear-view mirror to check on the
location of nearby vehicles. In order to decide on a lane change,
the driver needs to know whether or not they are there. The
driver sees, combines this with already stored information, and then
does the action associated with that rule.
37
[Diagram: Model-based reflex agent. Sensors feed "what the world is
like now"; internal State, knowledge of how the world evolves and of
what my actions do, plus condition-action rules, determine "what action
I should do now"; effectors act on the environment.]
function REFLEX-AGENT-WITH-STATE (percept) returns action
static: state, a description of the current world state
rules, a set of condition-action rules
state ← UPDATE-STATE (state, percept)
rule ← RULE-MATCH (state, rules)
action ← RULE-ACTION [rule]
state ← UPDATE-STATE (state, action)
return action
Structure of a Model-Based reflex agent
38
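A minimal sketch of REFLEX-AGENT-WITH-STATE, using the brake-light confusion from the slide: the internal state tracks what the agent saw last time, so it can tell a light that just switched ON (braking) from one that was on all along (a parking light). The percept format is an assumption made for the example.

```python
# A sketch of a model-based reflex agent: internal state keeps track
# of the part of the world the agent cannot see in the current percept.

def make_model_based_agent():
    state = {"lights_were_on": False}  # what the last percept showed

    def agent(percept):
        lights_on = percept["red_lights_on"]
        # UPDATE-STATE: a light that just switched ON means braking,
        # not a parking light that was already on.
        just_braked = lights_on and not state["lights_were_on"]
        state["lights_were_on"] = lights_on
        return "initiate-braking" if just_braked else "no-op"

    return agent

agent = make_model_based_agent()
print(agent({"red_lights_on": True}))   # initiate-braking
print(agent({"red_lights_on": True}))   # no-op (lights already on)
```

A simple reflex agent given the same two percepts would brake twice, because it has no way to know the second percept shows nothing new.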
Goal based agents
•Choose actions that achieve the goal (an agent with
explicit goals)
•Involves consideration of the future:
–Knowing about the current state of the environment is not always
enough to decide what to do.
For example, at a road junction, the taxi can turn left, right or go
straight.
–The right decision depends on where the taxi is trying to get to. As
well as a current state description, the agent needs some sort of goal
information, which describes situations that are desirable. E.g. being
at the passenger's destination.
•The agent may need to consider long sequences, twists
and turns to find a way to achieve a goal.
39
[Diagram: Goal-based agent. Sensors feed "what the world is like now";
using knowledge of how the world evolves and what my actions do, the
agent predicts "what it will be like if I do action A" and compares
this with its Goals to decide "what action I should do now"; effectors
act on the environment.]
Decision making of this kind is fundamentally different from
the condition-action rules described earlier. It involves
•What will happen if I take such and such action?
•Will that enable me to reach the goal?
Structure of a goal-based agent
40
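The road-junction example above can be sketched as a one-step lookahead: the agent simulates "what will happen if I do action A" using a tiny world model and picks the action whose predicted outcome matches the goal. The junction map and place names are illustrative assumptions.

```python
# A sketch of goal-based decision making with a one-step world model.

JUNCTIONS = {  # state -> {action: predicted next state}
    "junction": {"left": "market", "right": "station", "straight": "school"},
}

def goal_based_agent(state, goal, model=JUNCTIONS):
    for action, next_state in model[state].items():
        if next_state == goal:   # will this action reach the goal?
            return action
    return "no-op"               # no available action achieves the goal

print(goal_based_agent("junction", goal="station"))  # right
```

Real goal-based agents extend this lookahead over long sequences of actions, which is exactly the search and planning problem.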
Utility based agents
•Goals are not really enough to generate high
quality behavior.
E.g., there are many action sequences that will get
the taxi to its destination, thereby achieving the goal.
Some are quicker, safer, more reliable, or cheaper than
others. We need to consider speed and safety.
•There may be several goals that the agent can
aim for, none of which can be achieved with
certainty. Utility provides a way in which the
likelihood of success can be weighed against
the importance of the goals.
•An agent that possesses an explicit utility
function can make rational decisions.
41
[Diagram: Utility-based agent. Sensors feed "what the world is like
now"; using knowledge of how the world evolves and what my actions do,
the agent predicts "what it will be like if I do action A", and its
Utility function estimates "how happy I will be in such a state" to
decide "what action I should do now"; effectors act on the
environment.]
Structure of a utility-based agent
42
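The taxi example above can be sketched numerically: several routes achieve the goal, and a utility function trades off speed against safety to rank them. All route names, numbers, and weights are illustrative assumptions.

```python
# A sketch of a utility-based choice among routes that all reach the
# destination (the goal), differing in speed and safety.

ROUTES = {
    # action: (expected_minutes, safety in [0, 1])
    "highway":   (20, 0.90),
    "back-road": (35, 0.99),
    "shortcut":  (15, 0.60),
}

def utility(minutes, safety, w_time=-1.0, w_safety=50.0):
    # Higher utility is better: penalise time, reward safety.
    return w_time * minutes + w_safety * safety

best = max(ROUTES, key=lambda a: utility(*ROUTES[a]))
print(best)  # highway
```

A goal-based agent would consider all three routes equally good, since each achieves the goal; only the utility function distinguishes them.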
Learning agents
43
Cont’d
• Learning agents are not really an
alternative agent type to those described
above.
• All of the above types can be learning
agents
• Performance element can be replaced
with any of the 4 types described above.
44
Cont’d
The Learning Element is responsible for
suggesting improvements to any part of
the performance element.
 it could suggest an improved condition-
action rule for a simple reflex agent
 it could suggest a modification to the knowledge of
how the world evolves in a model-based
agent.
45
Cont’d
The input to the learning element comes from
the Critic.
 analyses incoming percepts and decides if
the actions of the agent have been good or
not.
 it uses an external performance standard.
Example, in a chess-playing program, the
Critic will receive a percept and notice that
the opponent has been check-mated. It is the
performance standard that tells it that this is a
good thing.
46
Cont’d.
 Problem Generator is responsible for suggesting
actions that will result in new knowledge about the
world being acquired.
 these actions may not lead to any goals being achieved
in the short term,
 but they may result in percepts that the learning
element can use to update the performance
element. For example, the taxi-driving system may
suggest testing the brakes in wet conditions, so
that the part of the performance element that
deals with “what my actions do” can be updated.
47
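The four components described above (performance element, critic, learning element, problem generator) can be wired together in a minimal sketch around a simple reflex performance element. Every name, rule, and reward value here is an illustrative assumption.

```python
# A minimal sketch of the four learning-agent components.

rules = {"dirty": "suck"}          # the performance element's rules

def performance_element(percept):
    # A simple reflex performance element.
    return rules.get(percept, "no-op")

def critic(percept, action):
    # External performance standard: sucking on a dirty square is good.
    return 1 if (percept, action) == ("dirty", "suck") else 0

def learning_element(percept, action, reward):
    # Suggest an improved condition-action rule when the critic
    # reports failure on an unknown percept (crude, for the sketch).
    if reward == 0 and percept not in rules:
        rules[percept] = "suck"

def problem_generator():
    # Suggest an exploratory situation to learn from.
    return "wet-dirty"

percept = problem_generator()
action = performance_element(percept)   # no rule yet -> "no-op"
learning_element(percept, action, critic(percept, action))
print(performance_element(percept))     # now "suck"
```

This shows the slide's point that a learning agent is not a fifth agent type: the performance element here is a simple reflex agent, and any of the four types could take its place.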

More Related Content

PPTX
Lecture 2 Agents.pptx
PPTX
m2-agents.pptx
PPT
chapter -2 Intelligent Agents power Point .ppt
PPTX
A modern approach to AI AI_02_agents_Strut.pptx
PPTX
Intelligence Agent - Artificial Intelligent (AI)
PPTX
Intelligent Agents
PDF
Lecture 2 agent and environment
PPTX
1.1 What are Agent and Environment.pptx
Lecture 2 Agents.pptx
m2-agents.pptx
chapter -2 Intelligent Agents power Point .ppt
A modern approach to AI AI_02_agents_Strut.pptx
Intelligence Agent - Artificial Intelligent (AI)
Intelligent Agents
Lecture 2 agent and environment
1.1 What are Agent and Environment.pptx

Similar to Elective(Intellegent agent )__cha.Two.ppt (20)

PDF
AI week 2.pdf
PPTX
Intelligent Agents, A discovery on How A Rational Agent Acts
PPTX
AI: Artificial Agents on the Go and its types
PPTX
AIML Unit1 ppt for the betterment and help of students
PPTX
Unit-1.pptx
PPTX
artificial intelligence introduction slides
PPT
Lec 2-agents
PPT
Unit 1.ppt
PPTX
AI Basic.pptx
PDF
Reasoning under UncertaintyReasoning under Uncertainty.pdf
PPTX
Lecture 1 about the Agents in AI & .pptx
PDF
Lec 2 agents
PDF
Artificial Intelligence chapter 1 and 2(1).pdf
PPT
Intelligent agent artificial intelligent CSE 315
PDF
AI Module 1.pptx.pdf Artificial Intelligence Notes
PDF
Agents1
PPTX
2. Intelligent_Agents_ShgfutydtfxcfdxdfL.pptx
PPTX
AI_02_Intelligent Agents.pptx
PDF
Artificial Intelligence (Complete Notes).pdf
PPT
Lecture 2
AI week 2.pdf
Intelligent Agents, A discovery on How A Rational Agent Acts
AI: Artificial Agents on the Go and its types
AIML Unit1 ppt for the betterment and help of students
Unit-1.pptx
artificial intelligence introduction slides
Lec 2-agents
Unit 1.ppt
AI Basic.pptx
Reasoning under UncertaintyReasoning under Uncertainty.pdf
Lecture 1 about the Agents in AI & .pptx
Lec 2 agents
Artificial Intelligence chapter 1 and 2(1).pdf
Intelligent agent artificial intelligent CSE 315
AI Module 1.pptx.pdf Artificial Intelligence Notes
Agents1
2. Intelligent_Agents_ShgfutydtfxcfdxdfL.pptx
AI_02_Intelligent Agents.pptx
Artificial Intelligence (Complete Notes).pdf
Lecture 2
Ad

More from amarehope21 (14)

PPT
Chapter 3. Literature review_______.ppt
PDF
Backpropagation_Backpropagation a step by step.pdf
PDF
IP Address Routing _________________2_IP Routing.pdf
PDF
1_________Elements of Moderen Network.pdf
PDF
NLP_Chapter #2 Morphological Analysis.pdf
PPTX
Chapter #1 Introduction to NConfigure and administer Server LP.pptx
PPT
INTRODUCTION TO INFORMATION RETRIEVALChapter 1-IR.ppt
PPTX
chapter 4 enterprise/product/service .ppt.pptx
PPTX
Advanced Networking link layer..chap-5.pptx
PDF
introduction to computer network lecture one
PPTX
chapter 2(IO and stream)/chapter 2, IO and stream
PPTX
CHAPTER FIVE WEBBASED MARKET5.pptx/ WEBBASED MARKET5
PDF
wireless networking chapter three WAN.pdf
PPTX
basic research in it introduction BRMIT.ppt
Chapter 3. Literature review_______.ppt
Backpropagation_Backpropagation a step by step.pdf
IP Address Routing _________________2_IP Routing.pdf
1_________Elements of Moderen Network.pdf
NLP_Chapter #2 Morphological Analysis.pdf
Chapter #1 Introduction to NConfigure and administer Server LP.pptx
INTRODUCTION TO INFORMATION RETRIEVALChapter 1-IR.ppt
chapter 4 enterprise/product/service .ppt.pptx
Advanced Networking link layer..chap-5.pptx
introduction to computer network lecture one
chapter 2(IO and stream)/chapter 2, IO and stream
CHAPTER FIVE WEBBASED MARKET5.pptx/ WEBBASED MARKET5
wireless networking chapter three WAN.pdf
basic research in it introduction BRMIT.ppt
Ad

Recently uploaded (20)

PPTX
Understanding_Digital_Forensics_Presentation.pptx
PDF
Per capita expenditure prediction using model stacking based on satellite ima...
PDF
Encapsulation_ Review paper, used for researhc scholars
PPTX
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PDF
Empathic Computing: Creating Shared Understanding
PDF
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
PPTX
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
PDF
Network Security Unit 5.pdf for BCA BBA.
PPTX
Cloud computing and distributed systems.
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
DOCX
The AUB Centre for AI in Media Proposal.docx
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PDF
Review of recent advances in non-invasive hemoglobin estimation
PDF
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
PDF
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PPTX
sap open course for s4hana steps from ECC to s4
PDF
Approach and Philosophy of On baking technology
Understanding_Digital_Forensics_Presentation.pptx
Per capita expenditure prediction using model stacking based on satellite ima...
Encapsulation_ Review paper, used for researhc scholars
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
“AI and Expert System Decision Support & Business Intelligence Systems”
Empathic Computing: Creating Shared Understanding
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
Network Security Unit 5.pdf for BCA BBA.
Cloud computing and distributed systems.
Digital-Transformation-Roadmap-for-Companies.pptx
The AUB Centre for AI in Media Proposal.docx
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Review of recent advances in non-invasive hemoglobin estimation
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
sap open course for s4hana steps from ECC to s4
Approach and Philosophy of On baking technology

Elective(Intellegent agent )__cha.Two.ppt

  • 2. Objectives  Gives ideas about agent, agent function, agent program and architecture, environment, percept, sensor, actuator (effectors)  Gives idea on how agent should act  How to measure agent success  Rational agent, autonomous agent  Types of environment  Types of agent 2
  • 3. Intelligent Agent • I want to build a robot that will – Clean my house – Cook when I don’t want to – Wash my clothes – Cut my hair – Fix my car (or take it to be fixed) – Take a note when I am in a meeting i.e. do the things that I don’t feel like doing… • AI is the science of building machines (agents) that act rationally with respect to a goal. • AI is the study and construction of rational agents – A rational agent is one that does the right thing 3
  • 4. Agent •Agent is something that perceives its environment through SENSORS and acts upon that environment through EFFECTORS. •The agent is assumed to exist in an environment in which it perceives and acts •An agent is rational since it does the right thing. 4
  • 5. Agent Human beings Agents Sensors Eyes, Ears, Nose Cameras, Scanners Effectors Hands, Legs, Mouth Various Motors 5
  • 6. Agent type Percepts Actions Goals Environment Medical diagnosis system Symptoms, findings, patient's answers Questions, tests, treatments Healthy patients, minimize costs Patient, hospital Interactive English tutor Typed words Print exercises, suggestions, corrections Maximize student's score on test Set of students Part-picking robot Pixels of varying intensity Pick up parts and sort into bins Place parts in correct bins Conveyor belts with parts Satellite image analysis system Pixels of varying intensity, color Print a categorization of scene Correct categorization Images from orbiting satellite Examples of agents in different types of applications Refinery controller Temperature, pressure readings Open, close valves; adjust temperature Maximize purity, yield, safety Refinery 6
  • 7. Agents and environments • The agent function maps from percept histories to actions: [f: P*  A] • The agent program runs on the physical architecture to produce f • agent = architecture + program • Ideal Example of Agent – Vacuum-cleaner world • Percepts: location and contents – e.g., [A, Dirty] • Actions: Left, Right, Suck 7
  • 8. Rationality vs. Omniscience •A rational agent is an agent that does the right thing for the perceived data from the environment. •Rational agent acts so as to achieve one's goals, given one's beliefs (one that does the right thing). –What does right thing mean? one that will cause the agent to be most successful and is expected to maximize goal achievement, given the available information •An Omniscient agent knows the actual outcome of its actions, and can act accordingly, but in reality omniscience is impossible. •Rational agent could make a mistake because of unpredictable factors at the time of making decision. •Omniscient agent that act and think rationally never make a mistake 8
  • 9. Example – You are walking along the road to mebrat- haile; You see an old friend across the street. There is no traffic. – So, being rational, you start to cross the street. – Meanwhile a big banner falls off from above and before you finish crossing the road, you are flattened. Were you irrational to cross the street? 9
  • 10. Rationality • This points out that rationality is concerned with expected success, given what has been perceived. • Crossing the street was rational, because most of the time, the crossing would be successful, and there was no way you could have foreseen the falling banner. 10
  • 11. Rational agent •The EXAMPLE shows that we can not blame an agent for failing to take into account something it could not perceive. Or for failing to take an action that it is incapable of taking. •In summary what is rational at any given point depends on four things. –Everything that the agent has perceived so far –What an agent knows about the environment –The actions that the agent can perform –The performance measure that defines degrees of success; 11
  • 12. Performance measure • How do we decide whether an agent is successful or not? – Establish a standard of what it means to be successful in an environment and use it to measure the performance – A rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has. • What is the performance measure for “crossing the road”? 12
  • 13. cont’d  Performance measure (how?) ◦ Subjective measure using the agent  How happy is the agent at the end of the action  Agent should answer based on his opinion  Some agents are unable to answer, some are delude themselves, some over estimate and some under estimate their success  Therefore, subjective measure is not a better way.  Objective Measure imposed by some authority is an alternative 13
  • 14.  Objective Measure ◦ Needs standard to measure success ◦ Provides quantitative value of success measure of agent ◦ Involves factors that affect performance and weight to each factors • E.g., performance measure of a vacuum-cleaner agent could be • Amount of dirt cleaned up, • Amount of time taken, • Amount of electricity consumed, • Amount of noise generated, etc. • The when of evaluating performance is also important for success. • It may include knowing starting time, finishing time, duration of job, consistency etc 14 cont’d
  • 15. Factors to measure rationality of agents 1. Percept sequence perceived so far (do we have the entire history of how the world evolve or not) 2. The set of actions that the agent can perform (agents designed to do the same job with different action set will have different performance) 3. Performance measures ( is it subjective or objective? What are the factors and their weights) 4. The agent knowledge about the environment (what kind of sensor does the agent have? Does the agent knows every thing about the environment or not) 15
  • 16. Autonomous agent •It is a system situated in, and a part of, an environment, which senses that environment, and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future. –This agenda evolves from drives (or programmed goals). –exercises control over its own actions •An agent is autonomous autonomous if its behavior is determined by its own experience (with ability to learn and adapt) •Agent that lacks autonomous, lacks autonomous, if its actions are based completely on built-in knowledge 16
  • 17. Cont’d •A fully autonomous robot has the ability to –Gain information about the environment. –Work for an extended period without human intervention. –Move throughout its operating environment without human assistance. –Avoid situations that are harmful to people, property, or itself. –Learn or gain new capabilities like adjusting strategies for accomplishing its task(s) or adapting to changing surroundings. –Autonomous robots still require regular maintenance, as do other machines. 17
  • 18. Cont. • AI assistants, like Alexa and Siri, are examples of intelligent agents: they use sensors to perceive a request made by the user and then automatically collect data from the internet without the user's help. • They can be used to gather information about the perceived environment, such as weather and time. 18
  • 19. Cont.. • Alexa is a smart voice assistant developed by Amazon, which is capable of a multitude of tasks. With a simple voice command, Alexa can set alarms, reminders, play music, answer questions, search the internet, and control smart home devices. 19
  • 20. Designing an agent • An agent has two parts: architecture + program – This course concentrates on the program • Architecture – Runs the program – Makes the percepts from the sensors available to the program – Feeds the program’s action choices to the effectors • Program – Accepts percepts from the environment and generates actions • Before designing an agent program, we need to know the possible percepts and actions – By enabling a learning mechanism, the agent could have a degree of autonomy, such that it can reason and take decisions 20
  • 21. Program Skeleton of Agent

  function SKELETON-AGENT (percept) returns action
    static: memory, the agent’s memory of the world
    memory  UPDATE-MEMORY(memory, percept)
    action  CHOOSE-BEST-ACTION(memory)
    memory  UPDATE-MEMORY(memory, action)
    return action

  On each invocation, the agent’s memory is updated to reflect the new percept, the best action is chosen, and the fact that the action was taken is also stored in memory. The memory persists from one invocation to the next. NOTE: The performance measure is not part of the agent 21
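The skeleton translates almost line-for-line into Python. Since the slides do not define UPDATE-MEMORY or CHOOSE-BEST-ACTION, trivial stand-ins are used here:

```python
class SkeletonAgent:
    """Sketch of SKELETON-AGENT: memory persists across invocations."""

    def __init__(self):
        self.memory = []  # the agent's memory of the world

    def update_memory(self, item):
        # Stand-in: a real agent would maintain a world model here.
        self.memory.append(item)

    def choose_best_action(self):
        # Stand-in: react to the latest percept; real agents deliberate.
        return ("act-on", self.memory[-1])

    def __call__(self, percept):
        self.update_memory(percept)         # memory <- UPDATE-MEMORY(memory, percept)
        action = self.choose_best_action()  # action <- CHOOSE-BEST-ACTION(memory)
        self.update_memory(action)          # record that the action was taken
        return action
```

Note that, as the slide says, the performance measure appears nowhere in the agent itself.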
  • 22. Cont’d • Design of an intelligent agent needs prior knowledge of • the Performance measure or Goal the agent is supposed to achieve, • on what kind of Environment it operates, • what kind of Actuators it has (what are the possible Actions), • what kind of Sensors the agent has (what are the possible Percepts) • Performance measure  Environment  Actuators  Sensors are abbreviated as PEAS • Percepts  Actions  Goal  Environment are abbreviated as PAGE 22
  • 23. Cont’d  Agent: Medical diagnosis system ◦ Environment: Patient, hospital, physician, nurses ◦ Sensors: Keyboard (percepts can be symptoms, findings, patient's answers) ◦ Actuators: Screen display (actions can be questions, tests, diagnoses, treatments, referrals) ◦ Performance measure: Healthy patient, minimize costs, lawsuits 23
  • 24. Example: Automating taxi driving. – What are the sensors, effectors, goals, environment and performance measure? • Note: – The taxi driver needs to know where it is, what else is on the road, how fast it has to drive, and how to communicate with the passengers and other vehicles. 24
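One possible answer to the exercise, written as a small data structure. Every entry is illustrative, not a definitive specification of the taxi agent:

```python
# Hypothetical PEAS description of the automated taxi driver.
taxi_peas = {
    "Performance measure": ["safe trip", "fast", "legal", "comfortable ride",
                            "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering wheel", "accelerator", "brake", "signal", "horn"],
    "Sensors": ["cameras", "speedometer", "GPS", "odometer",
                "engine sensors", "keyboard"],
}
```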
  • 25. Classes of Environments •Actions are done by the agent on the environment. Environments provide percepts to an agent. •Agent perceives and acts in an environment. •Properties of Environments: –Fully observable vs. partially observable –Deterministic vs. stochastic –Episodic vs. non-episodic –Static vs. Dynamic –Discrete vs. continuous –Single vs. multiagent 25
  • 26. Fully observable vs. partially observable • Based on the portion of the environment observable • Does the agent’s sensory see the complete state of the environment? – If an agent has access to the complete state of the environment, then the environment is accessible or fully observable. • An environment is effectively accessible if the sensors detect all aspects that are relevant to the choice of action. • Taxi driving is partially observable 26
  • 27. Deterministic vs. stochastic • Based on the effect of the agent action • Is there a unique mapping from one state to another state for a given action? • The environment is deterministic if the next state is completely determined by – the current state of the environment and – the actions selected by the agents. • Taxi driving is non-deterministic (i.e. stochastic) • Strategic: If the environment is deterministic except for the actions of other agents, then the environment is strategic 27
  • 28. Cont. • Most real situations are so complex that it is impossible to keep track of all the unobserved aspects; for practical purposes, they must be treated as stochastic. • Taxi driving is clearly stochastic in this sense, because one can never predict the behavior of traffic exactly. • We say an environment is UNCERTAIN if it is not fully observable or not deterministic. 28
  • 29. Episodic vs. Sequential • Based on loosely dependent sub-objectives • Does the next “episode” depend on the actions taken in previous episodes? • In an episodic environment, the agent's experience is divided into "episodes". – Each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself. • In sequential environment the current decision could affect all future decisions • Taxi driving is sequential 29
  • 30. Static vs. Dynamic • Based on the effect of time • Can the world change while the agent is thinking? – If the environment can change while the agent is deliberating, then we say the environment is dynamic for that agent – otherwise it is static. • Taxi driving is dynamic 30
  • 31. Discrete vs. Continuous • Based on the state, action and percept space pattern • Are the distinct percepts & actions limited or unlimited? • Discrete: a limited number of distinct, clearly defined states, percepts and actions. • Continuous: states, percepts and actions are continuously changing variables – If there are a limited number of distinct, clearly defined percepts and actions, we say the environment is discrete. • Taxi driving is continuous – speed, location and steering angles range over continuous values. • Chess is discrete – there are a fixed number of possible moves on each turn 31
  • 32. Single vs. Multi • Based on the number of agents involved • Single agent: a single agent operating by itself in an environment. • Multi-agent: multiple agents are involved in the environment 32
  • 33. Example

  Property          Chess with a clock  Chess without a clock  Taxi driving
  Fully observable  Yes                 Yes                    No
  Deterministic     Strategic           Strategic              No
  Episodic          No                  No                     No
  Static            Semi                Yes                    No
  Discrete          Yes                 Yes                    No
  Single agent      No                  No                     No

  • The environment type largely determines the agent design • The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent 33
  • 34. Environment Types: properties of a number of familiar environments

  Problems              Observable  Deterministic  Episodic  Static  Discrete
  Crossword Puzzle      Yes         Yes            No        Yes     Yes
  Part-picking robot    No          No             Yes       No      No
  Web shopping program  No          No             No        No      Yes
  Tutor                 No          No             No        Yes     Yes
  Medical Diagnosis     No          No             No        No      No
  Taxi driving          No          No             No        No      No

  •Hardest case: an environment that is inaccessible, sequential, non-deterministic, dynamic, and continuous. 34
  • 35. Types of agents Simple reflex agents •Works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule. E.g. if the car in front brakes, and its brake lights come on, then the driver should notice this and initiate braking. –Some processing is done on the visual input to establish the condition. If "the car in front is braking", then this triggers some established connection in the agent program to the action "initiate braking". We call such a connection a condition-action rule, written as: If car-in-front-is-braking then initiate-braking. •Humans also have many such condition-action rules. Some are learned responses; some are innate (inborn) responses –e.g. blinking when something approaches the eye. 35
  • 36. Simple Reflex Agent

  [Diagram: sensors -> What the world is like now -> Condition-action rules -> What action I should do now -> effectors, all interacting with the Environment]

  function SIMPLE-REFLEX-AGENT(percept) returns action
    static: rules, a set of condition-action rules
    state  INTERPRET-INPUT (percept)
    rule  RULE-MATCH (state, rules)
    action  RULE-ACTION [rule]
    return action

  Structure of a simple reflex agent 36
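A minimal Python version of the simple reflex agent. The rule set is a toy placeholder, and INTERPRET-INPUT is reduced to the identity since the slides do not specify a perception model:

```python
def interpret_input(percept):
    # Stand-in for INTERPRET-INPUT: assume the percept already names the state.
    return percept

# Condition-action rules: state -> action (illustrative examples only).
rules = {
    "car-in-front-is-braking": "initiate-braking",
    "object-approaching-eye": "blink",
}

def simple_reflex_agent(percept):
    state = interpret_input(percept)    # state  <- INTERPRET-INPUT(percept)
    action = rules.get(state, "no-op")  # rule   <- RULE-MATCH(state, rules)
    return action                       # action <- RULE-ACTION[rule]
```

Note the agent is stateless: the same percept always produces the same action, which is exactly its limitation.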
  • 37. Model-Based Reflex Agent •This is a reflex agent with internal state. –It keeps track of the parts of the world that it can’t see now. •It works by finding a rule whose condition matches the current situation (as defined by the percept and the stored internal state). –If the car in front is a recent model, it has a centrally mounted brake light. Older models have no centrally mounted light, so the agent may get confused: is it a parking light? A brake light? A turn signal? –Some sort of internal state is needed in order to choose an action. –The camera should detect whether the two red lights at the edges of the vehicle go ON or OFF simultaneously. •The driver should look in the rear-view mirror to check on the location of nearby vehicles. In order to decide on a lane change, the driver needs to know whether or not they are there. The driver combines what is seen with the stored internal state, and then does the action associated with the matching rule. 37
  • 38. Model-Based Reflex Agent

  [Diagram: sensors -> What the world is like now -> Condition-action rules -> What action I should do now -> effectors; internal State tracks how the world evolves and what my actions do]

  function REFLEX-AGENT-WITH-STATE (percept) returns action
    static: state, a description of the current world state
            rules, a set of condition-action rules
    state  UPDATE-STATE (state, percept)
    rule  RULE-MATCH (state, rules)
    action  RULE-ACTION [rule]
    state  UPDATE-STATE (state, action)
    return action

  Structure of a Model-Based reflex agent 38
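The same skeleton with internal state, following the brake-light example from the previous slide. The state-update logic is a hedged sketch, not a real perception model:

```python
class ModelBasedReflexAgent:
    """Reflex agent with internal state: remembers what it cannot see now."""

    def __init__(self, rules):
        self.state = {}     # description of the current world state
        self.rules = rules  # condition-action rules

    def update_state(self, percept):
        # Toy model: two edge lights coming on together suggests braking,
        # disambiguating a brake light from a turn signal (illustrative only).
        if percept == "both-edge-lights-on":
            self.state["car-ahead-braking"] = True
        elif percept == "one-edge-light-blinking":
            self.state["car-ahead-braking"] = False

    def __call__(self, percept):
        self.update_state(percept)               # state <- UPDATE-STATE(state, percept)
        if self.state.get("car-ahead-braking"):  # rule  <- RULE-MATCH(state, rules)
            return self.rules["car-ahead-braking"]
        return "keep-driving"

agent = ModelBasedReflexAgent({"car-ahead-braking": "initiate-braking"})
```

Unlike the simple reflex agent, the action here depends on remembered state, not only on the current percept.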
  • 39. Goal based agents •Choose actions that achieve the goal (an agent with explicit goals) •Involves consideration of the future: –Knowing about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, right or go straight. –The right decision depends on where the taxi is trying to get to. As well as a current state description, the agent needs some sort of goal information, which describes situations that are desirable. E.g. being at the passenger's destination. •The agent may need to consider long sequences, twists and turns to find a way to achieve a goal. 39
  • 40. Goal-Based Agent

  [Diagram: sensors -> What the world is like now -> What it will be like if I do action A -> Goals -> What action I should do now -> effectors; internal State tracks how the world evolves and what my actions do]

  Decision making of this kind is fundamentally different from the condition-action rules described earlier. It involves •What will happen if I take such and such an action? •Will that enable me to reach the goal? Structure of a goal-based agent 40
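A goal-based agent can be sketched as one-step lookahead: simulate each action and keep one whose predicted outcome matches the goal. The road-junction transition model below is made up for illustration:

```python
# Toy world model: (location, action) -> next location (illustrative only).
transitions = {
    ("junction", "turn-left"): "market",
    ("junction", "turn-right"): "station",
    ("junction", "go-straight"): "airport",
}

def goal_based_agent(location, goal):
    """Pick the action whose predicted outcome reaches the goal."""
    for (loc, action), result in transitions.items():
        # "What will happen if I take this action? Does it reach the goal?"
        if loc == location and result == goal:
            return action
    return "no-op"  # no single action reaches the goal from here
```

A real agent would search over long action sequences ("twists and turns") rather than a single step.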
  • 41. Utility based agents •Goals alone are not really enough to generate high-quality behavior. For example, there are many action sequences that will get the taxi to its destination, thereby achieving the goal, but some are quicker, safer, more reliable, or cheaper than others. We need to consider speed and safety. •There may be several goals that the agent can aim for, none of which can be achieved with certainty. Utility provides a way in which the likelihood of success can be weighed against the importance of the goals. •An agent that possesses an explicit utility function can make rational decisions. 41
  • 42. Utility-Based Agent

  [Diagram: sensors -> What the world is like now -> What it will be like if I do action A -> How happy I will be in such a state (Utility) -> What action I should do now -> effectors; internal State tracks how the world evolves and what my actions do]

  Structure of a utility-based agent 42
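A utility-based agent replaces the yes/no goal test with a numeric utility and picks the action maximizing expected utility. The routes, probabilities and utility values below are made up purely to illustrate the trade-off between speed and safety:

```python
# Each action leads to outcome states with some probability (toy numbers).
outcomes = {
    "highway-route": [(0.8, {"fast": True, "safe": True}),
                      (0.2, {"fast": True, "safe": False})],
    "back-streets":  [(1.0, {"fast": False, "safe": True})],
}

def utility(state):
    # Hypothetical trade-off: a little for speed, a lot for safety.
    return (5 if state["fast"] else 0) + (10 if state["safe"] else -20)

def expected_utility(action):
    # Weigh each outcome's utility by its likelihood.
    return sum(p * utility(s) for p, s in outcomes[action])

def utility_based_agent():
    return max(outcomes, key=expected_utility)
```

With these numbers the fast-but-risky highway scores 0.8·15 + 0.2·(−15) = 9, the slow-but-safe back streets score 10, so the agent takes the back streets; adjust the weights and the decision flips.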
  • 44. Cont’d • Learning agents are not really an alternative agent type to those described above. • All of the above types can be learning agents • Performance element can be replaced with any of the 4 types described above. 44
  • 45. Cont’d The Learning Element is responsible for suggesting improvements to any part of the performance element.  It could suggest an improved condition-action rule for a simple reflex agent  It could suggest a modification to the knowledge of how the world evolves in a model-based agent. 45
  • 46. Cont’d The input to the learning element comes from the Critic.  It analyses incoming percepts and decides if the actions of the agent have been good or not.  It uses an external performance standard. For example, in a chess-playing program, the Critic will receive a percept and notice that the opponent has been check-mated. It is the performance standard that tells it that this is a good thing. 46
  • 47. Cont’d.  The Problem Generator is responsible for suggesting actions that will result in new knowledge about the world being acquired.  These actions may not lead to any goals being achieved in the short term,  but they may result in percepts that the learning element can use to update the performance element. For example, the taxi-driving system may suggest testing the brakes in wet conditions, so that the part of the performance element that deals with “what my actions do” can be updated. 47
