Unit 2: Agents and Environment LH 7
Presented By : Tekendra Nath Yogi
Tekendranath@gmail.com
College Of Applied Business And Technology
Contd…
• Contents:
– 2.1 Agent, Rational agent, and Intelligent Agent
– 2.2 Relationship between agents and environments
– 2.3 Environments and their properties (Fully observable vs. partially observable,
single agent vs. multi-agent, deterministic vs. stochastic, episodic vs.
sequential, static vs. dynamic, discrete vs. continuous, known vs. unknown)
– 2.4 Agent structures
• 2.4.1 Simple reflex agents
• 2.4.2 Model-based reflex agents
• 2.4.3 Goal-based agents
• 2.4.4 Utility-based agents
• 2.4.5 Learning agents
– 2.5 Performance evaluation of agents: PEAS description
12/2/2018 By: Tekendra Nath Yogi
Agent, Rational agent, intelligent agent
• Agent:
– An agent is just something that acts.
• Rational Agent:
– A Rational Agent is one that acts so as to achieve the best outcome or,
when there is uncertainty, the best expected outcome.
• I.e., one that behaves as well as possible.
– How well an agent can behave depends on the nature of the
environment; some environments are more difficult than others.
Contd…
• Intelligent agent:
– A successful system that behaves appropriately can be called an intelligent agent.
– The fundamental faculties of intelligence are: acting, sensing, understanding, reasoning, and learning.
– In order to act, an intelligent agent must sense. Blind action is not a characterization of intelligence, and understanding is essential to interpret the sensory percepts and decide on an action.
– Therefore, an intelligent agent must act, must sense, must be autonomous, and must be rational.
– Note: an intelligent agent does things based on reasoning, while a rational agent does the best action (or reaction) for a given situation.
– However, throughout this course we will use the terms agent, rational agent, and intelligent agent synonymously.
Basic terminology
• Percept: Refers to the agent's perceptual inputs at any given instant.
• Percept sequence:
– An agent's percept sequence is the complete history of everything the agent has
ever perceived.
– In general, an agent's choice of action at any given instant can depend on the entire
percept sequence observed to date.
• Agent Function:
– The agent function is a mathematical concept that maps every percept sequence to an action (the agent's behavior):
f : P* → A
• Agent Program: The agent program is a concrete implementation of the agent function, running on some physical architecture to produce f.
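The distinction between agent function and agent program can be made concrete with a small sketch (not from the slides; the table entries and names are hypothetical). Here the agent function f : P* → A is viewed as a partial table from percept sequences to actions, and the agent program realizes it by lookup:

```python
# Hypothetical partial table from percept sequences to actions for the
# two-square vacuum world used later in this unit.
AGENT_TABLE = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def agent_function(percept_sequence):
    """f : P* -> A, realized here by lookup over the full percept history."""
    return AGENT_TABLE.get(tuple(percept_sequence), "NoOp")
```

A table-driven program like this is conceptually clean but impractical: the table grows exponentially with the length of the percept sequence, which is why the agent structures later in this unit compute actions instead of storing them.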
What do we mean by sensors, percepts, effectors, and actions?
• For Humans
– Sensors:
• Eyes (vision), ears (hearing), skin (touch), tongue (gustation), nose
(olfaction).
– Percepts:
• At the lowest level – electrical signals from these sensors
• After preprocessing – objects in the visual field, auditory streams.
– Effectors:
• limbs, digits, eyes, tongue, …..
– Actions:
• lift a finger, turn left, walk, run, carry an object, …
Agents and Environments
• An agent is just something that acts.
• To act, an agent perceives its environment via sensors and acts rationally upon that
environment with its effectors (actuators).
• This simple idea is illustrated in the following figure:
Fig: Agents interact with environments through sensors and actuators
Contd…
• Examples of Agent:
– A human agent
• has eyes, ears, and other organs for sensors and hands, legs, mouth, and
other body parts for actuators.
– A robotic agent:
• might have cameras and infrared range finders for sensors and various
motors for actuators.
– A software agent:
• receives keystrokes, file contents, and network packets as sensory inputs
and acts on the environment by displaying on the screen, writing files, and
sending network packets.
Contd…
• Properties of an agent:
– An agent is just something that acts. Of course, all computer programs
do something, but computer agents are expected to do more:
• Operate autonomously, i.e., work on their own.
• Perceive and react to their environment.
• Be proactive (i.e., goal oriented).
• Be capable of taking on another's goals.
• Persist over a prolonged time period. And
• Adapt to change, i.e., have the ability to learn.
Contd..
The vacuum-cleaner world: Example of Agent
• To illustrate the idea of an intelligent agent, a very simple example, the vacuum-cleaner
world, is used, as shown in the figure below:
• This world is so simple that we can describe
everything that happens; it's also a made-up
world, so we can invent many variations.
• This particular world has just two locations: squares A and B.
– I.e. Environment: square A and B
• The vacuum agent perceives which square it is in and whether there is dirt in the
square.
– i.e., Percepts: [location and content] E.g. [A, Dirty]
• It can choose to move left, move right, suck up the dirt, or do nothing.
– i.e., Actions: left, right, suck, and no-op
Contd..
The vacuum-cleaner world: Example of Agent
• One very simple agent function is the following:
– if the current square is dirty, then suck, otherwise move to the other square.
• A partial tabulation of the agent function is shown in the table below:
• A simple agent program for this agent function is given in the next
slide.
Contd..
The vacuum-cleaner world: Example of Agent
• function Vacuum-Agent([location, status]) returns an action
– if status = Dirty then return Suck
– else if location = A then return Right
– else if location = B then return Left
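The slide's pseudocode translates almost line for line into runnable Python (a sketch; the string names for locations, statuses, and actions are assumptions):

```python
def vacuum_agent(location, status):
    """Reflex vacuum agent: suck if dirty, otherwise move to the other square."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"
```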
Good Behavior: The concept of rationality
• A rational agent is one that does the right thing i.e., one that behaves as
well as possible.
• The right thing is the action that will cause the agent to be most successful.
– For example: When an agent is in an environment, it generates a sequence of
actions according to the percepts it receives. This sequence of actions causes
the environment to go through a sequence of states. If the sequence is
desirable, then the agent has performed well.
• This notion of desirability is captured by a performance measure.
Contd.. Performance Measures
• A performance measure evaluates any given sequence of environment states and
determines the success of the agent.
• However, it is not an easy task to choose the performance measure of an agent,
because a suitable measure depends not just on the task and the agent but on the
circumstances.
• Therefore, it is better to design the performance measure according to what is
actually wanted in the environment, rather than according to how one thinks the
agents should behave.
– E.g., The possible performance measure of a vacuum-cleaner agent could be amount of
dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise
generated, etc.
– But suppose the performance measure for an automated vacuum cleaner is “the amount
of dirt cleaned within a certain time.” Then a rational agent can maximize this measure
by cleaning up the dirt, dumping it all back on the floor, cleaning it up again, and so
on.
– So “how clean the floor is” is a better choice of performance measure for the vacuum
cleaner.
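The dirt-dumping loophole can be made concrete with a toy comparison (illustrative only; the history format and agent behaviors are invented). An agent scored on dirt cleaned beats an honest one by re-dirtying the floor, while a clean-floor-per-time-step measure ranks them the other way:

```python
def dirt_cleaned(history):
    """Score: number of Suck actions taken on a dirty square (gameable)."""
    return sum(1 for status, action in history
               if status == "Dirty" and action == "Suck")

def clean_floor_time(history):
    """Score: number of time steps the floor is clean (harder to game)."""
    return sum(1 for status, _action in history if status == "Clean")

# A cheating agent dumps the dirt back out and sucks it up again;
# an honest agent cleans once and then idles.
cheater = [("Dirty", "Suck"), ("Clean", "Dump"), ("Dirty", "Suck")]
honest = [("Dirty", "Suck"), ("Clean", "NoOp"), ("Clean", "NoOp")]
```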
Contd…
• What is rational at any given time depends on four things:
– The performance measure that defines the criterion of success.
– The agent's prior knowledge of the environment.
– The actions that the agent can perform.
– The agent's percept sequence to date.
Omniscience versus rationality
• An omniscient agent knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality.
• Rationality is not the same as perfection. Rationality maximizes expected
performance, while perfection maximizes actual performance.
– For Example: I am walking along the Ring road one day and I see an old
friend across the road. There is no traffic nearby and I'm not otherwise
engaged, so, being rational, I start to cross the road. Meanwhile, at 33,000
feet, a cargo door falls off a passing airliner, and before I make it to the
other side of the road I am flattened.
Environments
• The first step to design a rational agent is the specification of its task
environment.
• Task environments are essentially the 'problems' to which rational agents are
the 'solutions'.
• Generally, task environments are specified using the following four
parameters:
– Performance
– Environment
– Actuators
– Sensors
• Therefore, the task environment specification is also called the PEAS description
of the environment.
Contd…
• Example: PEAS description of the task environment for a fully automated taxi
– Performance: The following can be the possible measures for the performance of
automated taxi:
• Getting to the correct destination (destination).
• Minimizing fuel consumption and wear and tear (damage that naturally occurs
as a result of aging), and minimizing trip time or cost (profit).
• Minimizing violations of traffic laws and disturbances to other
drivers (legality).
• Maximizing safety and passenger comfort (safety and comfort).
– Environment: This is the driving environment that the taxi will face
• Streets/freeways, other traffic, pedestrians, weather (rain, snow, etc.), police cars,
etc.
Contd…
• Actuators: The Actuators for an automated taxi include those available to a
human driver:
– Steering, accelerator, brake, horn, speaker/display,…
• Sensors: The basic sensors for the taxi include:
– One or more controllable video cameras to see the road.
– Infrared and sonar sensors to detect the distances to other cars and obstacles.
– A speedometer, to avoid speeding tickets.
– An accelerometer, to control the vehicle on curves.
– Engine sensors, to determine the mechanical state of the vehicle.
– GPS, so that it doesn't get lost.
– A keyboard or microphone for the passenger to request the destination.
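The four PEAS parameters can be kept together as a simple record; this sketch (the field names and record type are assumptions, not from the slides) just restates the taxi description above:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """One PEAS description: Performance, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

automated_taxi = PEAS(
    performance=["destination", "profit", "legality", "safety and comfort"],
    environment=["streets/freeways", "other traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["cameras", "infrared/sonar", "speedometer", "accelerometer",
             "engine sensors", "GPS", "keyboard/microphone"],
)
```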
Properties of environment (classes of environment)
• Following are the dimensions along which environments can be categorized:
– Fully observable versus partially observable
– Single agent versus multi-agent
– Deterministic versus stochastic
– Episodic versus sequential
– Static versus dynamic
– Discrete versus continuous
– Known versus unknown.
Contd…
• Fully observable versus partially observable:
– If an agent's sensors give it access to the complete state of the
environment at each point in time, then we say that the task
environment is fully observable.
• For example: chess playing.
– An environment might be partially observable because of noisy and
inaccurate sensors.
• For example:
– a vacuum agent with only a local dirt sensor cannot tell whether there
is dirt in other squares.
Contd…
• Single agent vs. multi-agent:
– If an agent operates by itself in an environment, it is a single-agent
environment; if other agents whose behavior affects its performance are also
present, it is a multi-agent environment.
– Example: an agent solving a crossword puzzle by itself is clearly in a
single-agent environment, whereas an agent playing chess is in a two-agent
environment.
• Multi-agent environment can be:
– Competitive: For example, in chess, the opponent entity B is trying to
maximize its performance measure, which, by the rules of chess, minimizes
agent A's performance measure. Thus, chess is a competitive multi-agent
environment.
– Cooperative: In the taxi-driving environment, on the other hand, avoiding
collisions maximizes the performance measure of all agents, so it is a partially
cooperative multi-agent environment.
Contd…
• Deterministic vs. stochastic:
– If the next state of the environment is completely determined by the
current state and the action executed by the agent, then we say the
environment is deterministic; otherwise, it is stochastic.
– The simple vacuum world is deterministic, whereas taxi driving is
clearly stochastic in this sense, because one can never predict the
behavior of traffic exactly; moreover, one's tires blow out and one's
engine seizes up without warning.
Contd…
• Episodic versus sequential:
– In episodic environments, the choice of action in each episode depends
only on the episode itself i.e., the next episode does not depend on the
actions taken in previous episodes.
• For example, an agent that has to spot defective parts on an assembly line
bases each decision on the current part, regardless of previous decisions;
moreover, the current decision doesn't affect whether the next part is
defective.
– In sequential environments, on the other hand, the current decision
could affect all future decisions.
• For example: Chess and taxi driving are sequential
Contd…
• Static vs. dynamic:
– If the environment can change while an agent is deliberating, then the
environment is dynamic for that agent; otherwise, it is static.
– Static environments are easy to deal with because the agent need not
keep looking at the world while it is deciding on an action, nor need it
worry about the passage of time.
• For Example: Crossword puzzles are static.
– Dynamic environments, on the other hand, are continuously asking the
agent what it wants to do; if it hasn't decided yet, that counts as deciding
to do nothing.
• For example: Taxi driving is dynamic because the other cars and the taxi
itself keep moving while the driving algorithm dithers about what to do next.
Contd…
• Discrete vs. continuous:
– The discrete/continuous distinction can be applied to the state of the
environment, to the way time is handled, and to the percepts and
actions of the agent.
• For example, a discrete-state environment such as a chess game
has a finite number of distinct states. Chess also has a discrete set
of percepts and actions.
• Example of continuous state environment includes Taxi driving :
the speed and location of the taxi sweep through a range of
continuous values and do so smoothly over time.
Contd…
• Known versus unknown:
– This distinction refers not to the environment itself but to the agent's
state of knowledge about the environment.
– In a known environment, the outcomes for all actions are given.
– Obviously, if the environment is unknown, the agent will have to learn
how it works in order to make good decisions.
The structure of the agents
• An agent's structure can be viewed as:
– Agent = Architecture + Agent Program
• Architecture = the machinery that an agent executes on.
• Agent Program = an implementation of an agent function.
Contd…
• Agents are grouped into five classes based on their degree of
perceived intelligence and capability:
– simple reflex agents
– model-based reflex agents
– goal-based agents
– utility-based agents
– learning agents
Contd…
• Simple reflex agents:
– Simple reflex agents act only on the basis of the current percept,
ignoring the rest of the percept history.
• For example: vacuum cleaner agent.
– First, the simple reflex agent perceives percepts from the environment
and interprets this input to generate an abstract description of the
current state.
– This generated state description is then matched against the condition
part of the rules in the rule set.
– The agent then acts according to the first rule whose condition matches
the current state, as defined by the percept.
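The interpret-then-match cycle just described can be sketched as a rule list, using the vacuum world (the rule set and helper names are hypothetical):

```python
# Condition-action rules: (condition on the state description, action).
RULES = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]

def interpret_input(percept):
    """Turn the raw percept into an abstract state description."""
    location, status = percept
    return {"location": location, "status": status}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    for condition, action in RULES:
        if condition(state):      # first rule whose condition matches wins
            return action
    return "NoOp"
```

Note that the agent consults only the current percept: nothing from earlier percepts survives between calls.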
Contd…
• The following figure shows the structure of the simple reflex agent
Fig: simple reflex agent
Contd…
• Characteristics:
– simple, but very limited intelligence.
– The simple reflex agent works only if the environment is fully
observable. Even a little bit of unobservability can cause serious
trouble.
– Lacking history, it can easily get stuck in infinite loops.
Contd…
• Model-based reflex agent:
• Maintains an internal state to keep track of the part of the world it cannot
see now.
• The internal state is based on the percept history and keeps two kinds of
knowledge:
• How the world evolves independently of the agent.
• How the agent's own actions affect the world.
• The agent combines the current percept with the old internal state to generate
an updated description of the current state.
• It then chooses an action in the same way as the simple reflex agent.
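A minimal sketch of this idea for the vacuum world (the class and attribute names are assumptions): the internal model remembers the status of the square the agent cannot currently see, and records how its own Suck action changes the world.

```python
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: last known status of each square.
        self.model = {"A": "Unknown", "B": "Unknown"}

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # how the percept updates the state
        if all(v == "Clean" for v in self.model.values()):
            return "NoOp"                      # model says everything is clean
        if status == "Dirty":
            self.model[location] = "Clean"     # model the effect of Suck
            return "Suck"
        return "Right" if location == "A" else "Left"
```

Unlike the simple reflex agent, which would shuttle left and right forever, the internal state lets this agent stop once both squares are known to be clean.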
Contd…
• The following figure shows the structure of the model based reflex agent
Fig: Model based reflex agent
Contd…
• Goal Based agent:
– Goal-based agents further expand on the capabilities of model-based
agents by using "goal" information.
– Goal information describes situations that are desirable. This allows the
agent a way to choose among multiple possibilities, selecting the one
which reaches a goal state.
– It is more flexible because the knowledge that supports its decisions is
represented explicitly and can be modified.
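Goal-based choice can be sketched as one-step lookahead with a transition model (the toy one-dimensional world and all names here are invented for illustration): simulate each action, and pick one whose predicted result satisfies the goal.

```python
def predict(position, action):
    """Toy transition model: the state is an integer position on a line."""
    return position + {"Left": -1, "Right": +1, "Stay": 0}[action]

def goal_based_agent(position, goal, actions=("Left", "Right", "Stay")):
    # Prefer an action whose predicted outcome is a goal state.
    for action in actions:
        if predict(position, action) == goal:
            return action
    # No single action reaches the goal; move toward it instead.
    return "Right" if goal > position else "Left"
```

The flexibility noted above shows up here: changing the goal changes the behavior with no change to the agent's code, because the goal is represented explicitly.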
Contd…
• The following figure shows the structure of the goal based agent
Fig: Goal-based agent
Contd…
• Utility based agent:
– Goal-based agents only distinguish between goal states and non-goal
states.
– It is possible to define a measure of how desirable a particular state is.
This measure can be obtained through the use of a utility
function which maps a state to a measure of the utility of the state.
– A more general performance measure (covering, for example, speed and
safety) should allow a comparison of different world states according to
exactly how happy they would make the agent. The term utility can be
used to describe how "happy" the agent is.
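A sketch of the contrast (the weights and state fields are invented): a goal test only answers yes or no, while a utility function ranks every state, so trade-offs such as speed versus safety can be resolved.

```python
def utility(state):
    """Map a state to a real number; here a weighted mix of speed and safety."""
    return 0.4 * state["speed"] + 0.6 * state["safety"]

def best_state(candidate_states):
    """Utility-based choice: prefer the reachable state of highest utility."""
    return max(candidate_states, key=utility)

fast_but_risky = {"speed": 1.0, "safety": 0.0}   # lower utility
slow_but_safe = {"speed": 0.0, "safety": 1.0}    # higher utility
```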
Contd…
• The following figure shows the structure of the utility based agent
Fig: Utility-based agent
Contd…
• Learning agents: A learning agent can be divided into four
conceptual components:
– The "learning element", which is responsible for making improvements.
– The "performance element" (the entire agent as described so far), which is
responsible for selecting external actions, i.e., it takes in percepts and decides
on actions.
– The learning element uses feedback from the "critic" on how the agent is
doing and determines how the performance element should be modified to
do better in the future.
– The last component of the learning agent is the "problem generator". It is
responsible for suggesting actions that will lead to new and informative
experiences.
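The four components can be wired together in a minimal sketch (everything here, including the single tunable parameter and the numbers, is invented for illustration):

```python
class LearningAgent:
    def __init__(self):
        self.threshold = 0.5            # knowledge the learning element tunes

    def performance_element(self, percept):
        """Select an external action from the current percept."""
        return "Act" if percept > self.threshold else "Wait"

    def critic(self, reward):
        """Feedback on how the agent is doing against a performance standard."""
        return reward

    def learning_element(self, feedback):
        """Modify the performance element so it does better in the future."""
        self.threshold -= 0.1 if feedback > 0 else -0.1

    def problem_generator(self):
        """Suggest an exploratory input that should yield a new experience."""
        return self.threshold           # probe right at the decision boundary
```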
Contd…
• The following figure shows the structure of the learning agent
Fig: Learning agent
Applications of the agents
• Intelligent agents are applied as automated online assistants, where they
function to perceive the needs of customers in order to perform
individualized customer service.
• Such an agent may basically consist of a dialog system, as well as an expert
system to provide specific expertise to the user.
• They can also be used to optimize coordination of human groups online.
Homework
• Define in your own words the following terms: agent, agent function, agent
program, rationality, autonomy, reflex agent, model-based agent, goal-
based agent, utility-based agent, learning agent.
• Both the performance measure and the utility function measure how well
an agent is doing. Explain the difference between the two.
• What are the differences between agent functions and agent programs?
• What does an agent comprise?
• What are the various task environments?
Thank You !
More Related Content

PPTX
Intelligent agent
PPT
2-Agents- Artificial Intelligence
PPTX
Problem solving in Artificial Intelligence.pptx
PDF
Lecture 2 agent and environment
PDF
Unit8: Uncertainty in AI
PPTX
Agents and environments
PDF
Unit1: Introduction to AI
PDF
Intelligent agent
2-Agents- Artificial Intelligence
Problem solving in Artificial Intelligence.pptx
Lecture 2 agent and environment
Unit8: Uncertainty in AI
Agents and environments
Unit1: Introduction to AI

What's hot (20)

PDF
Agent architectures
PPT
Heuristic Search Techniques {Artificial Intelligence}
PDF
Unit3:Informed and Uninformed search
PPTX
Structure of agents
PPT
AI Lecture 4 (informed search and exploration)
PPTX
Local search algorithm
PPTX
Intelligence Agent - Artificial Intelligent (AI)
PDF
CS8691 - Artificial Intelligence.pdf
PDF
Approximation Algorithms
PPTX
Reflex and model based agents
PPTX
PPTX
Artificial Intelligence
PPTX
Informed and Uninformed search Strategies
PDF
Unit9:Expert System
PPTX
AI_Session 10 Local search in continious space.pptx
PDF
Lecture 5 - Agent communication
PDF
Reinforcement learning, Q-Learning
PPTX
State space search
PDF
Unit7: Production System
PPT
HCI 3e - Ch 7: Design rules
Agent architectures
Heuristic Search Techniques {Artificial Intelligence}
Unit3:Informed and Uninformed search
Structure of agents
AI Lecture 4 (informed search and exploration)
Local search algorithm
Intelligence Agent - Artificial Intelligent (AI)
CS8691 - Artificial Intelligence.pdf
Approximation Algorithms
Reflex and model based agents
Artificial Intelligence
Informed and Uninformed search Strategies
Unit9:Expert System
AI_Session 10 Local search in continious space.pptx
Lecture 5 - Agent communication
Reinforcement learning, Q-Learning
State space search
Unit7: Production System
HCI 3e - Ch 7: Design rules
Ad

Similar to Unit2: Agents and Environment (20)

PDF
PDF
Reasoning under UncertaintyReasoning under Uncertainty.pdf
PPT
Agents_AI.ppt
PPTX
Artificial intelligence Agents lecture slides
PPTX
INTELLIGENT AGENTS.pptx
PPTX
Intelligent Agents, A discovery on How A Rational Agent Acts
PPT
Unit 1.ppt
PDF
ai-slides-1233566181695672-2 (1).pdf
PPTX
UNIT 1 INTELLIGENT AGENTS ARTIFICIAL INTELIGENCE
PPT
Elective(Intellegent agent )__cha.Two.ppt
PPTX
A modern approach to AI AI_02_agents_Strut.pptx
PPTX
2. Intelligent_Agents_ShgfutydtfxcfdxdfL.pptx
PPTX
AI_Ch2.pptx
PPTX
ARTIFICIAL INTELLIGENCE TO GENERATE BEST COMPILER DESIGN
PDF
Artificial Intelligence Course of BIT Unit 2
PPT
Artificial intelligence introduction
PPTX
Intelligent AGent class.pptx
PPTX
Introduction to Artificial Intelligence Agents.pptx
PDF
Lecture Note on Introduction to Intelligent Agents.pdf
PDF
agents.pdf
Reasoning under UncertaintyReasoning under Uncertainty.pdf
Agents_AI.ppt
Artificial intelligence Agents lecture slides
INTELLIGENT AGENTS.pptx
Intelligent Agents, A discovery on How A Rational Agent Acts
Unit 1.ppt
ai-slides-1233566181695672-2 (1).pdf
UNIT 1 INTELLIGENT AGENTS ARTIFICIAL INTELIGENCE
Elective(Intellegent agent )__cha.Two.ppt
A modern approach to AI AI_02_agents_Strut.pptx
2. Intelligent_Agents_ShgfutydtfxcfdxdfL.pptx
AI_Ch2.pptx
ARTIFICIAL INTELLIGENCE TO GENERATE BEST COMPILER DESIGN
Artificial Intelligence Course of BIT Unit 2
Artificial intelligence introduction
Intelligent AGent class.pptx
Introduction to Artificial Intelligence Agents.pptx
Lecture Note on Introduction to Intelligent Agents.pdf
agents.pdf
Ad

More from Tekendra Nath Yogi (20)

PDF
Unit5: Learning
PDF
Unit4: Knowledge Representation
PDF
Unit 6: Application of AI
PPTX
PDF
BIM Data Mining Unit5 by Tekendra Nath Yogi
PDF
BIM Data Mining Unit4 by Tekendra Nath Yogi
PDF
BIM Data Mining Unit3 by Tekendra Nath Yogi
PDF
BIM Data Mining Unit2 by Tekendra Nath Yogi
PDF
BIM Data Mining Unit1 by Tekendra Nath Yogi
PPTX
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
PPTX
B. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
PPTX
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
PPTX
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
PPTX
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
PPTX
B. SC CSIT Computer Graphics Unit 1.3 By Tekendra Nath Yogi
PPTX
B. SC CSIT Computer Graphics Unit1.2 By Tekendra Nath Yogi
Unit5: Learning
Unit4: Knowledge Representation
Unit 6: Application of AI
BIM Data Mining Unit5 by Tekendra Nath Yogi
BIM Data Mining Unit4 by Tekendra Nath Yogi
BIM Data Mining Unit3 by Tekendra Nath Yogi
BIM Data Mining Unit2 by Tekendra Nath Yogi
BIM Data Mining Unit1 by Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 1.3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit1.2 By Tekendra Nath Yogi

Recently uploaded (20)

PDF
Mastering Bioreactors and Media Sterilization: A Complete Guide to Sterile Fe...
PPTX
ANEMIA WITH LEUKOPENIA MDS 07_25.pptx htggtftgt fredrctvg
PDF
. Radiology Case Scenariosssssssssssssss
PDF
MIRIDeepImagingSurvey(MIDIS)oftheHubbleUltraDeepField
PPTX
SCIENCE10 Q1 5 WK8 Evidence Supporting Plate Movement.pptx
PPTX
EPIDURAL ANESTHESIA ANATOMY AND PHYSIOLOGY.pptx
PPT
Chemical bonding and molecular structure
PPTX
Taita Taveta Laboratory Technician Workshop Presentation.pptx
PPTX
neck nodes and dissection types and lymph nodes levels
PPTX
G5Q1W8 PPT SCIENCE.pptx 2025-2026 GRADE 5
PDF
SEHH2274 Organic Chemistry Notes 1 Structure and Bonding.pdf
PPTX
Vitamins & Minerals: Complete Guide to Functions, Food Sources, Deficiency Si...
PDF
HPLC-PPT.docx high performance liquid chromatography
PPT
protein biochemistry.ppt for university classes
PPTX
microscope-Lecturecjchchchchcuvuvhc.pptx
PPTX
2. Earth - The Living Planet Module 2ELS
PPTX
INTRODUCTION TO EVS | Concept of sustainability
PDF
ELS_Q1_Module-11_Formation-of-Rock-Layers_v2.pdf
PPTX
Introduction to Fisheries Biotechnology_Lesson 1.pptx
PPTX
GEN. BIO 1 - CELL TYPES & CELL MODIFICATIONS
Mastering Bioreactors and Media Sterilization: A Complete Guide to Sterile Fe...
ANEMIA WITH LEUKOPENIA MDS 07_25.pptx htggtftgt fredrctvg
. Radiology Case Scenariosssssssssssssss
MIRIDeepImagingSurvey(MIDIS)oftheHubbleUltraDeepField
SCIENCE10 Q1 5 WK8 Evidence Supporting Plate Movement.pptx
EPIDURAL ANESTHESIA ANATOMY AND PHYSIOLOGY.pptx
Chemical bonding and molecular structure
Taita Taveta Laboratory Technician Workshop Presentation.pptx
neck nodes and dissection types and lymph nodes levels
G5Q1W8 PPT SCIENCE.pptx 2025-2026 GRADE 5
SEHH2274 Organic Chemistry Notes 1 Structure and Bonding.pdf
Vitamins & Minerals: Complete Guide to Functions, Food Sources, Deficiency Si...
HPLC-PPT.docx high performance liquid chromatography
protein biochemistry.ppt for university classes
microscope-Lecturecjchchchchcuvuvhc.pptx
2. Earth - The Living Planet Module 2ELS
INTRODUCTION TO EVS | Concept of sustainability
ELS_Q1_Module-11_Formation-of-Rock-Layers_v2.pdf
Introduction to Fisheries Biotechnology_Lesson 1.pptx
GEN. BIO 1 - CELL TYPES & CELL MODIFICATIONS

Unit2: Agents and Environment

  • 1. Unit 2: Agents and Environment LH 7 Presented By : Tekendra Nath Yogi Tekendranath@gmail.com College Of Applied Business And Technology
  • 2. Contd… • Contents: – 2.1 Agent, Rational agent, and Intelligent Agent – 2.2 Relationship between agents and environments – 2.3 Environments and its properties(Fully observable vs. partially observable, single agent vs. multi-agent, deterministic vs. stochastic, episodic vs. sequential, static vs. dynamic, discrete vs. continuous, known vs. unknown) – 2.4 Agent structures • 2.4.1 Simple reflex agents • 2.4.2 Model-based reflex agents • 2.4.3 Goal-based agents • 2.4.4 Utility-based agents • 2.4.5 Learning agents – 2.5 Performance evaluation of agents: PEAS description 212/2/2018 By: Tekendra Nath Yogi
  • 3. Agent, Rational agent, intelligent agent • Agent: – An agent is just something that acts. • Rational Agent: – A Rational Agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. • I.e., one that behaves as well as possible. – How well an agent can behave depends on the nature of the environment; some environments are more difficult than others. 312/2/2018 By: Tekendra Nath Yogi
  • 4. Contd… • Intelligent agent: – A Successful system can be called intelligent agent. – Fundamental faculties of intelligence are: Acting, Sensing, Understanding ,Reasoning, Learning – In order to act intelligent agent must sense. Blind actions is not characterization of intelligence. Understanding is essential to interpret the sensory percepts and decide on an action. – Therefore, Intelligent agent: Must act, Must sense, Must be autonomous, Must be rational. – Note: intelligent agent means it does things based on reasoning, while rational agent means it does the best action (or reaction) for a given situation. – However, Throughout this course we will use the term agent, rational agent and intelligence agent synonymously. 412/2/2018 By: Tekendra Nath Yogi
  • 5. Basic terminology • Percept: Refer to the agent's perceptual inputs at any given instant. • percept sequence: – An agent's percept sequence is the complete history of everything the agent has ever perceived. – In general, an agent's choice of action at any given instant can depend on the entire percept sequence observed to date. • Agent Function: – The agent function is mathematical concept that maps percept sequence to actions(agent‟s behavior). f : P* A • Agent Program: The agent program is a concrete implementation of agent function ,running within some physical architecture to produce f. 512/2/2018 By: Tekendra Nath Yogi
  • 6. What do you mean, sensors, percepts effectors and actions? • For Humans – Sensors: • Eyes (vision), ears (hearing), skin (touch), tongue (gestation), nose (olfaction). – Percepts: • At the lowest level – electrical signals from these sensors • After preprocessing – objects in the visual field, auditory streams . – Effectors: • limbs, digits, eyes, tongue, ….. – Actions: • lift a finger, turn left, walk, run, carry an object, … 612/2/2018 By: Tekendra Nath Yogi
  • 7. Agents and Environments • An agent is just something that acts. • To act an agent perceives its environment via sensors and acts rationally upon that environment with its effectors (actuators). • This simple idea is illustrated in the following figure: Fig: Agents interact with environments through sensors and actuators Presented By: Tekendra Nath Yogi 7
  • 8. Contd… • Examples of Agent: – A human agent • has eyes, ears, and other organs for sensors and hands, legs, mouth, and other body parts for actuators. – A robotic agent: • might have cameras and infrared range finders for sensors and various motors for actuators. – A software agent: • receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets. 812/2/2018 By: Tekendra Nath Yogi
  • 9. Contd… • Properties of the agent: – An agent is just something that act. Of course, all computer programs do something, but computer agent are expected to do more: • Operate autonomously i.e., can work on their own. • Perceive and react to their environment. • Pro- active(i.e., should be goal oriented) • capable of taking on another„s goal. • They are persistent over a prolonged time period. And • Adapt to change i.e., They should have ability to learn. 912/2/2018 By: Tekendra Nath Yogi
  • 10. Contd.. The vacuum-cleaner world: Example of Agent • To illustrate the intelligent agent, a very simple example-the vacuum-cleaner world is used as shown in Figure below: • This world is so simple that we can describe everything that happens; it's also a made-up world, so we can invent many variations. • This particular world has just two locations: squares A and B. – I.e. Environment: square A and B • The vacuum agent perceives which square it is in and whether there is dirt in the square. – i.e., Percepts: [location and content] E.g. [A, Dirty] • It can choose to move left, move right, suck up the dirt, or do nothing. – i.e., Actions: left, right, suck, and no-op Presented By: Tekendra Nath Yogi 10
• 11. Contd… The vacuum-cleaner world: an example agent
  • One very simple agent function is the following:
    – If the current square is dirty, then suck; otherwise, move to the other square.
  • A partial tabulation of this agent function is shown in the table below.
  • A simple agent program for this agent function is given on the next slide.
• 12. Contd… The vacuum-cleaner world: an example agent
  • Function VacuumAgent([location, status]) returns an action
    – If status = Dirty then return Suck
    – Else if location = A then return Right
    – Else if location = B then return Left
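The pseudocode above can be written as a short runnable program. This is a minimal sketch; the function name and string constants are illustrative choices, not from any standard library:

```python
# Minimal runnable version of the vacuum-agent program.
# Names and string constants are illustrative choices.

def vacuum_agent(location, status):
    """Map the current percept [location, status] to an action."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(vacuum_agent("A", "Dirty"))  # Suck
print(vacuum_agent("A", "Clean"))  # Right
```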
• 13. Good behavior: the concept of rationality
  • A rational agent is one that does the right thing, i.e., one that behaves as well as possible.
  • The right thing is the one that will cause the agent to be most successful.
    – For example: when an agent is placed in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well.
  • This notion of desirability is captured by a performance measure.
• 14. Contd… Performance measures
  • A performance measure evaluates any given sequence of environment states and determines the success of the agent.
  • It is not an easy task to choose the performance measure for an agent, because a suitable measure depends not only on the task and the agent but also on the circumstances.
  • Therefore, it is better to design the performance measure according to what is wanted in the environment rather than according to how the agent should behave.
    – E.g., possible performance measures for a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.
    – But if the performance measure for an automated vacuum cleaner is "the amount of dirt cleaned within a certain time", then a rational agent can maximize this measure by cleaning up the dirt, dumping it all back on the floor, cleaning it up again, and so on.
    – So "how clean the floor is" is a better choice of performance measure for the vacuum cleaner.
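The dump-and-reclean exploit can be made concrete with a tiny sketch. The run histories below are made-up illustrations: "honest" cleans once and keeps the floor clean, while "exploit" dumps dirt back and re-cleans it every step.

```python
# Sketch contrasting the two candidate performance measures.
# All data structures here are illustrative.

def dirt_cleaned(history):
    """Total units of dirt sucked up over the run."""
    return sum(step["cleaned"] for step in history)

def average_cleanliness(history):
    """Mean number of clean squares per time step."""
    return sum(step["clean_squares"] for step in history) / len(history)

honest = [{"cleaned": 1, "clean_squares": 2}] + \
         [{"cleaned": 0, "clean_squares": 2}] * 3
exploit = [{"cleaned": 1, "clean_squares": 1}] * 4

# The exploiting agent wins on "dirt cleaned" (4 vs. 1) but loses on
# "how clean the floor is" (average 1 vs. 2 clean squares).
```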
• 15. Contd…
  • What is rational at any given time depends on four things:
    – The performance measure that defines the criterion of success.
    – The agent's prior knowledge of the environment.
    – The actions that the agent can perform.
    – The agent's percept sequence to date.
• 16. Omniscience versus rationality
  • An omniscient agent knows the actual outcome of its actions and can act accordingly, but omniscience is impossible in reality.
  • Rationality is not the same as perfection: rationality maximizes expected performance, while perfection maximizes actual performance.
    – For example: I am walking along the Ring Road one day and I see an old friend across the road. There is no traffic nearby and I am not otherwise engaged, so, being rational, I start to cross the road. Meanwhile, at 33,000 feet, a cargo door falls off a passing airliner, and before I make it to the other side of the road I am flattened.
• 17. Environments
  • The first step in designing a rational agent is the specification of its task environment.
  • Task environments are essentially the "problems" to which rational agents are the "solutions".
  • Generally, task environments are specified using the following four parameters:
    – Performance
    – Environment
    – Actuators
    – Sensors
  • Therefore, the task environment specification is also called the PEAS description of the environment.
• 18. Contd…
  • Example: PEAS description of the task environment for a fully automated taxi.
    – Performance: the following are possible measures for the performance of an automated taxi:
      • Getting to the correct destination (destination).
      • Minimizing fuel consumption and wear and tear (damage that naturally occurs as a result of aging), and minimizing the trip time or cost (profit).
      • Minimizing violations of traffic laws and disturbances to other drivers (legality).
      • Maximizing safety and passenger comfort (safety and comfort).
    – Environment: the driving environment that the taxi will face:
      • Streets/freeways, other traffic, pedestrians, weather (rain, snow, etc.), police cars, etc.
• 19. Contd…
  • Actuators: the actuators for an automated taxi include those available to a human driver:
    – Steering, accelerator, brake, horn, speaker/display, …
  • Sensors: the basic sensors for the taxi include:
    – One or more controllable video cameras to see the road.
    – Infrared and sonar sensors to detect the distances to other cars and obstacles.
    – A speedometer, to avoid speeding tickets.
    – An accelerometer, to control the vehicle on curves.
    – Engine sensors, to determine the mechanical state of the vehicle.
    – GPS, so that it doesn't get lost.
    – A keyboard or microphone for the passenger to request a destination.
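A PEAS description is just structured data, so it can be recorded as a simple data structure. This is a sketch: the field names follow PEAS, and the entries summarize the two slides above (they are illustrative, not exhaustive):

```python
# Sketch: a PEAS description of the automated-taxi task environment
# stored as a dataclass. Entry strings are illustrative.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

automated_taxi = PEAS(
    performance=["correct destination", "low cost", "legality",
                 "safety and comfort"],
    environment=["streets/freeways", "other traffic", "pedestrians",
                 "weather", "police cars"],
    actuators=["steering", "accelerator", "brake", "horn",
               "speaker/display"],
    sensors=["cameras", "infrared/sonar", "speedometer",
             "accelerometer", "engine sensors", "GPS",
             "keyboard/microphone"],
)
```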
• 20. Properties of environments (classes of environments)
  • The following are the dimensions along which environments can be categorized:
    – Fully observable versus partially observable
    – Single-agent versus multi-agent
    – Deterministic versus stochastic
    – Episodic versus sequential
    – Static versus dynamic
    – Discrete versus continuous
    – Known versus unknown
• 21. Contd…
  • Fully observable versus partially observable:
    – If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable.
      • For example: chess playing.
    – An environment might be partially observable because of noisy and inaccurate sensors.
      • For example: a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares.
• 22. Contd…
  • Single-agent versus multi-agent:
    – An environment is single-agent if only one agent operates in it; it is multi-agent if other agents affect the agent's performance measure.
    – Example: an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
    – A multi-agent environment can be:
      • Competitive: for example, in chess, the opponent B is trying to maximize its performance measure, which, by the rules of chess, minimizes agent A's performance measure. Thus, chess is a competitive multi-agent environment.
      • Cooperative: in the taxi-driving environment, on the other hand, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multi-agent environment.
• 23. Contd…
  • Deterministic versus stochastic:
    – If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic.
    – The simple vacuum world is deterministic, whereas taxi driving is clearly stochastic in this sense, because one can never predict the behavior of traffic exactly; moreover, one's tires may blow out and one's engine may seize up without warning.
• 24. Contd…
  • Episodic versus sequential:
    – In episodic environments, the choice of action in each episode depends only on the episode itself, i.e., the next episode does not depend on the actions taken in previous episodes.
      • For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions; moreover, the current decision doesn't affect whether the next part is defective.
    – In sequential environments, on the other hand, the current decision could affect all future decisions.
      • For example: chess and taxi driving are sequential.
• 25. Contd…
  • Static versus dynamic:
    – If the environment can change while an agent is deliberating, then the environment is dynamic for that agent; otherwise, it is static.
    – Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.
      • For example: crossword puzzles are static.
    – Dynamic environments, on the other hand, are continuously asking the agent what it wants to do; if it hasn't decided yet, that counts as deciding to do nothing.
      • For example: taxi driving is dynamic, because the other cars and the taxi itself keep moving while the driving algorithm dithers about what to do next.
• 26. Contd…
  • Discrete versus continuous:
    – The discrete/continuous distinction can be applied to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.
      • For example, a discrete-state environment such as a chess game has a finite number of distinct states. Chess also has a discrete set of percepts and actions.
      • An example of a continuous-state environment is taxi driving: the speed and location of the taxi sweep through a range of continuous values and do so smoothly over time.
• 27. Contd…
  • Known versus unknown:
    – This distinction refers not to the environment itself but to the agent's state of knowledge about the environment.
    – In a known environment, the outcomes of all actions are given.
    – Obviously, if the environment is unknown, the agent will have to learn how it works in order to make good decisions.
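The example environments from the slides above can be classified along six of the seven dimensions (the known/unknown axis describes the agent's knowledge, not the environment, so it is omitted). This sketch stores the classifications as a lookup table; cells not stated explicitly on the slides follow common textbook usage:

```python
# Sketch: example environments classified along six dimensions.
# Table contents summarize the slides plus common textbook usage.

DIMENSIONS = ("observability", "agents", "determinism",
              "episodicity", "dynamics", "state")

ENVIRONMENTS = {
    "crossword puzzle": ("fully", "single", "deterministic",
                         "sequential", "static", "discrete"),
    "chess":            ("fully", "multi", "deterministic",
                         "sequential", "static", "discrete"),
    "taxi driving":     ("partially", "multi", "stochastic",
                         "sequential", "dynamic", "continuous"),
    "part inspection":  ("partially", "single", "stochastic",
                         "episodic", "dynamic", "continuous"),
}

def describe(name):
    """Return a dimension -> classification mapping for an environment."""
    return dict(zip(DIMENSIONS, ENVIRONMENTS[name]))
```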
• 28. The structure of agents
  • An agent's structure can be viewed as:
    – Agent = Architecture + Agent Program
      • Architecture = the machinery that the agent program executes on.
      • Agent Program = an implementation of an agent function.
• 29. Contd…
  • Agents are grouped into five classes based on their degree of perceived intelligence and capability:
    – Simple reflex agents
    – Model-based reflex agents
    – Goal-based agents
    – Utility-based agents
    – Learning agents
• 30. Contd…
  • Simple reflex agents:
    – Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history.
      • For example: the vacuum-cleaner agent.
    – First, the simple reflex agent perceives a percept from the environment and interprets this input to generate an abstract description of the current state.
    – This state description is then matched against the condition part of the rules in its rule set.
    – The agent then acts according to the first rule whose condition matches the current state, as defined by the percept.
• 31. Contd…
  • The following figure shows the structure of the simple reflex agent.
  Fig: Simple reflex agent
• 32. Contd…
  • Characteristics:
    – Simple, but very limited intelligence.
    – Simple reflex agents work only if the environment is fully observable; even a little unobservability can cause serious trouble.
    – Lacking history, they easily get stuck in infinite loops.
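The interpret-then-match loop described above can be sketched in a few lines. The rule representation (a list of condition/action pairs) is an illustrative choice:

```python
# Sketch of a simple reflex agent: interpret the current percept,
# then fire the first condition-action rule that matches.
# Rule format and names are illustrative.

RULES = [
    (lambda state: state["status"] == "Dirty", "Suck"),
    (lambda state: state["location"] == "A", "Right"),
    (lambda state: state["location"] == "B", "Left"),
]

def interpret_input(percept):
    """Build an abstract state description from the raw percept."""
    location, status = percept
    return {"location": location, "status": status}

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches."""
    state = interpret_input(percept)
    for condition, action in RULES:
        if condition(state):
            return action
    return "NoOp"
```

Note that the agent consults only the current percept; nothing from earlier steps is stored, which is exactly why such agents can loop forever in partially observable worlds.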
• 33. Contd…
  • Model-based reflex agents:
    – Maintain an internal state to keep track of the part of the world the agent cannot see now.
    – The internal state is based on the percept history and encodes two kinds of knowledge:
      • How the world evolves independently of the agent.
      • How the agent's own actions affect the world.
    – The agent combines the current percept with the old internal state to generate an updated description of the current state.
    – It then chooses an action in the same way as the simple reflex agent.
• 34. Contd…
  • The following figure shows the structure of the model-based reflex agent.
  Fig: Model-based reflex agent
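For the vacuum world, the internal state can be as small as the last-known status of each square; the model also encodes how the agent's own action (sucking) changes the world. This is an illustrative sketch, not a general architecture:

```python
# Sketch of a model-based reflex vacuum agent: it tracks a model of
# both squares so it can stop once both are believed clean.
# All names are illustrative.

class ModelBasedVacuum:
    def __init__(self):
        # Internal state: last-known status of each square.
        self.model = {"A": "Unknown", "B": "Unknown"}

    def act(self, percept):
        location, status = percept
        self.model[location] = status      # update model from percept
        if status == "Dirty":
            self.model[location] = "Clean" # effect of our own action
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                  # believed done: stop
        return "Right" if location == "A" else "Left"
```

Unlike the simple reflex agent, this one can issue NoOp once its model says both squares are clean, avoiding the infinite left-right loop.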
• 35. Contd…
  • Goal-based agents:
    – Goal-based agents further expand on the capabilities of model-based agents by using "goal" information.
    – Goal information describes situations that are desirable. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state.
    – A goal-based agent is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified.
• 36. Contd…
  • The following figure shows the structure of the goal-based agent.
  Fig: Goal-based agent
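Goal-based selection can be sketched as "simulate each action's predicted result and pick one that satisfies the goal test". The transition model and the number-line example below are illustrative:

```python
# Sketch of goal-based action selection: predict each action's
# result, then choose one whose result passes the goal test.
# All names and the toy model are illustrative.

def goal_based_agent(state, actions, result, goal_test):
    """Return an action predicted to reach a goal state, else None."""
    for action in actions:
        if goal_test(result(state, action)):
            return action
    return None

# Toy example: reach position 3 on a number line from position 2.
def move(state, action):
    return state + 1 if action == "Right" else state - 1

choice = goal_based_agent(2, ["Left", "Right"], move,
                          lambda s: s == 3)
# choice == "Right"
```

Changing the goal test changes the behavior without rewriting the agent, which is the flexibility the slide describes.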
• 37. Contd…
  • Utility-based agents:
    – Goal-based agents only distinguish between goal states and non-goal states.
    – It is possible to define a measure of how desirable a particular state is. This measure can be obtained through the use of a utility function, which maps a state to a measure of the utility of that state.
    – A more general performance measure (considering, for example, speed and safety) should allow a comparison of different world states according to exactly how happy they would make the agent. The term utility is used to describe how "happy" the agent is.
• 38. Contd…
  • The following figure shows the structure of the utility-based agent.
  Fig: Utility-based agent
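The difference from a goal-based agent is a one-line change: instead of a binary goal test, score each predicted state with a utility function and maximize. The toy trade-off below (progress versus a speed penalty) and its numbers are illustrative:

```python
# Sketch of utility-based selection: pick the action whose predicted
# result has the highest utility. Toy model and numbers are
# illustrative.

def utility_based_agent(state, actions, result, utility):
    """Choose the action maximizing the utility of its predicted result."""
    return max(actions, key=lambda a: utility(result(state, a)))

def utility(s):
    progress, speed = s
    return progress - 0.3 * speed ** 2  # reward progress, penalize speed

def result(s, a):
    # A larger action gives more progress but also more speed.
    return (s[0] + a, s[1] + a)

best = utility_based_agent((0, 0), [1, 2, 3], result, utility)
# best == 2: the middle action balances progress against the penalty.
```

This mirrors the taxi case: "get there" is the goal, but utility lets the agent trade speed against safety rather than treating all goal-reaching behaviors as equal.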
• 39. Contd…
  • Learning agents: a learning agent can be divided into four conceptual components:
    – A "learning element", which is responsible for making improvements.
    – A "performance element" (the entire agent so far), which is responsible for selecting external actions, i.e., it takes in percepts and decides on actions.
    – A "critic": the learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
    – A "problem generator", which is responsible for suggesting actions that will lead to new and informative experiences.
• 40. Contd…
  • The following figure shows the structure of the learning agent.
  Fig: Learning agent
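The four components can be mapped onto code in miniature. This is a hedged sketch only (a tiny bandit-style learner, not a general learning-agent architecture): the performance element acts greedily on learned action values, the learning element updates those values from the critic's feedback, and the problem generator proposes exploratory actions.

```python
# Illustrative sketch of the four learning-agent components.
# The value table, learning rate, and method names are assumptions.
import random

class LearningAgent:
    def __init__(self, actions):
        self.value = {a: 0.0 for a in actions}  # learned action values

    def performance_element(self):
        """Select the external action currently believed best."""
        return max(self.value, key=self.value.get)

    def problem_generator(self):
        """Suggest an exploratory action for informative experience."""
        return random.choice(list(self.value))

    def learning_element(self, action, feedback):
        """Use the critic's feedback to improve the performance element."""
        self.value[action] += 0.5 * (feedback - self.value[action])
```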
• 41. Applications of agents
  • Intelligent agents are applied as automated online assistants, where they perceive the needs of customers in order to perform individualized customer service.
  • Such an agent may consist of a dialog system as well as an expert system that provides specific expertise to the user.
  • They can also be used to optimize the coordination of human groups online.
• 42. Homework
  • Define in your own words the following terms: agent, agent function, agent program, rationality, autonomy, reflex agent, model-based agent, goal-based agent, utility-based agent, learning agent.
  • Both the performance measure and the utility function measure how well an agent is doing. Explain the difference between the two.
  • What is the difference between agent functions and agent programs?
  • What does an agent comprise?
  • What are the various task environments?
• 43. Thank You!