2. Content
• Introduction to agents
• Structure of an Intelligent Agent
• Characteristics of Intelligent Agents
• Types of Agents: Simple Reflex, Model-Based, Goal-Based, and
Utility-Based Agents
• Environment Types: Deterministic, Stochastic, Static, Dynamic,
Fully Observable, Partially Observable, Single Agent, Multi Agent
3. Agent
• An agent is anything that can be viewed as
perceiving its environment through sensors
and acting upon that environment through
actuators.
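• As a minimal Python sketch of this sense-decide-act cycle (the
Environment class and its percept/execute methods are hypothetical
names used only for illustration):

    # Minimal agent loop: sense -> decide -> act.
    def run_agent(environment, agent_program, steps=10):
        for _ in range(steps):
            percept = environment.percept()    # sensors observe the environment
            action = agent_program(percept)    # agent function picks an action
            environment.execute(action)        # actuators change the environment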
5. Vacuum-cleaner world
• This world has just two locations: squares A and B.
• The vacuum agent perceives which square it is in and
whether there is dirt in the square.
• It can choose to move left, move right, suck up the
dirt, or do nothing.
• One very simple agent function is the following: if
the current square is dirty, then suck; otherwise,
move to the other square.
• A partial tabulation of this agent function:
[A, Clean] → Right; [A, Dirty] → Suck; [B, Clean] → Left; [B, Dirty] → Suck.
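• The same agent function as a short Python sketch (assuming a percept
is encoded as a (location, status) pair):

    # Vacuum agent: suck if dirty, otherwise move to the other square.
    def vacuum_agent(percept):
        location, status = percept
        if status == 'Dirty':
            return 'Suck'
        elif location == 'A':
            return 'Right'
        else:                 # location == 'B'
            return 'Left'

    # e.g. vacuum_agent(('A', 'Dirty')) returns 'Suck'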
9. Structure of an AI Agent
• The task of AI is to design an agent program that implements the agent
function.
• The structure of an intelligent agent is a combination of architecture and
agent program. It can be viewed as:
• Agent = Architecture + Agent Program
• Architecture: the machinery that the agent program runs on.
• Agent function: maps a percept sequence to an action:
• f : P* → A
• Agent program: an implementation of the agent function. The agent
program executes on the physical architecture to produce the function f.
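• A table-driven sketch in Python makes f : P* → A concrete; the table
entries below are hypothetical and partial:

    # The agent program maps the percept *sequence* (P*) to an action (A).
    table = {
        (('A', 'Dirty'),): 'Suck',
        (('A', 'Clean'),): 'Right',
        (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
    }

    percept_history = []

    def table_driven_agent(percept):
        percept_history.append(percept)
        return table.get(tuple(percept_history), 'NoOp')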
10. Characteristics of Agents
• Situatedness
The agent receives some form of sensory input from its
environment, and it performs some action that changes its
environment in some way.
• Example of an environment: the rooms a vacuum-cleaner agent cleans.
• Autonomy
The agent can act without direct intervention by humans or other
agents, and it has control over its own actions and internal state.
11. • Adaptivity
The agent is capable of (1) reacting flexibly to changes in its
environment; (2) taking goal-directed initiative (i.e., is pro-active),
when appropriate; and (3) learning from its own experience, its
environment, and interactions with others.
• Sociability
The agent is capable of interacting in a peer-to-peer manner with
other agents or humans.
12. Types of Environment
• Fully observable vs. partially observable
• Single agent vs. multiagent
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Known vs. unknown
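• As an illustrative Python sketch, a task environment can be described
along these dimensions (the classifications below follow the examples
in the next slides; the data structure itself is hypothetical):

    environments = {
        'crossword puzzle': dict(observable='fully', agents='single',
                                 dynamics='deterministic', change='static',
                                 states='discrete'),
        'taxi driving':     dict(observable='partially', agents='multi',
                                 dynamics='stochastic', change='dynamic',
                                 states='continuous'),
    }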
13. 1. Fully observable vs. partially observable:
• If an agent's sensors give it access to the complete state of the
environment at each point in time, then we say that the task
environment is fully observable.
• Fully observable environments are convenient because the agent need not
maintain any internal state to keep track of the world.
• An environment may be partially observable because of noisy or inaccurate
sensors, or because parts of the state are missing from the sensor data.
• For example, a vacuum agent with only a local dirt sensor cannot tell
whether there is dirt in other squares, and an automated taxi cannot see
what other drivers are thinking.
15. 2. Single agent vs. multiagent:
• For example, an agent solving a crossword puzzle by itself is clearly
in a single-agent environment,
• whereas an agent playing chess is in a two-agent environment.
• Chess is a competitive multiagent environment.
• Carrom is multiagent.
16. 3. Deterministic vs. stochastic:
• If the next state of the environment is completely
determined by the current state and the action executed by
the agent, then we say the environment is deterministic;
• otherwise, it is stochastic.
• Playing chess is deterministic.
• Taxi driving is clearly stochastic because one can never
predict the behaviour of traffic.
17. 4. Episodic vs. sequential:
• In an episodic task environment, the agent’s experience is divided into
atomic episodes.
• In each episode the agent receives a percept and then performs a single
action. The next episode does not depend on the actions taken in
previous episodes. Many classification tasks are episodic.
• For example, an agent that has to spot defective parts on an assembly
line bases each decision on the current part, regardless of previous
decisions;
• In a sequential environment, the current decision could affect all future
decisions.
• Chess and taxi driving are sequential: in both cases, short-term actions
can have long-term consequences.
19. 5. Static vs. dynamic:
• If the environment can change while an agent is deliberating, then we
say the environment is dynamic for that agent; otherwise, it is static.
• E.g., taxi driving is clearly dynamic: the other cars and the taxi itself
keep moving.
• Crossword puzzles are static.
20. 6. Discrete vs. continuous:
• If the environment has a limited number of distinct states and clearly
defined percepts and actions, then the environment is discrete.
• For example, the chess environment has a finite number of
distinct states.
• Taxi driving is a continuous-state and continuous-time
problem: the speed and location of the taxi and of the other
vehicles sweep through a range of continuous values.
21. 7. Known vs. unknown:
• In a known environment, the outcomes for all actions are
given.
• Obviously, if the environment is unknown, the agent will have
to learn how it works in order to make good decisions.
• For example, in solitaire card games, the agent knows the rules but
still cannot see the cards that have not yet been turned over, so the
environment is known but only partially observable.
24. Types of Agents
Agents can be grouped into four classes based on their degree of perceived
intelligence and capability:
• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents; and
• Utility-based agents.
26. 1. Simple reflex agent:
• These agents select actions based on the current percept, ignoring the
rest of the percept history.
• For example, the vacuum agent is a simple reflex agent, because its
decision is based only on the current location and on whether that
location contains dirt.
• Its behaviour can be written as condition-action rules:
• if status = Dirty then Suck
• else if location = A then Right
• E.g., a medical diagnosis system: if the patient has a cold, fever,
cough, and breathing problems, then start the treatment for COVID.
27. The agent function is based on the condition-action rule. A condition-action rule
is a rule that maps a state (i.e., a condition) to an action. If the condition is true,
then the action is taken; otherwise, it is not.
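• A generic Python sketch of such a rule-driven agent (the rules shown
are the vacuum-world rules from above, encoded as hypothetical
(condition, action) pairs over a percept dictionary):

    # Simple reflex agent: scan the rules and fire the first one whose
    # condition matches the *current* percept (history is ignored).
    rules = [
        (lambda p: p['status'] == 'Dirty', 'Suck'),
        (lambda p: p['location'] == 'A', 'Right'),
        (lambda p: p['location'] == 'B', 'Left'),
    ]

    def simple_reflex_agent(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return 'NoOp'    # no rule matched: do nothing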
28. 2. Model-based reflex agent:
• It uses an internal model to keep track of the
current state of the world.
• A model-based agent can work in a partially
observable environment and track the situation.
• For driving tasks such as changing lanes, the agent
needs to keep track of where the other cars are if it
can't see them all at once.
29. ● Model: It is knowledge about "how things happen in the
world," so it is called a Model-based agent.
● Internal State: It is a representation of the current state
based on percept history.
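• A Python sketch of this model/state bookkeeping (the update rule here
is a hypothetical stand-in for a real world model):

    # Model-based reflex agent: keep an internal best guess of the world.
    state, last_action = {}, None

    def update_state(state, last_action, percept):
        # A real model would first predict the effect of last_action,
        # then fold in what the new percept reveals; here we only merge
        # the percept (a dict) into the state.
        state.update(percept)
        return state

    def model_based_reflex_agent(percept):
        global state, last_action
        state = update_state(state, last_action, percept)
        last_action = 'Suck' if state.get('status') == 'Dirty' else 'Right'
        return last_action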
31. 3. Goal-based agent:
● Knowledge of the current state of the environment is not always
sufficient for an agent to decide what to do.
● An agent knows the description of the current state, but it also needs
some sort of goal information that describes situations that are
desirable.
• The action selected depends on both the current state and the
goal state.
• A goal-based agent is flexible enough to handle more than one
destination.
• When a new destination is specified, the goal-based agent is
activated to come up with a new behaviour that reaches it.
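• A Python sketch of goal-directed action selection (result() is an
assumed one-step transition model, purely for illustration):

    # Goal-based agent: simulate each action and pick one that reaches
    # the goal state.
    def goal_based_agent(state, goal, actions, result):
        for action in actions:
            if result(state, action) == goal:
                return action
        return 'NoOp'    # no single action reaches the goal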
33. 4. Utility-based agent:
• These agents are similar to the goal-based agent but
add an extra component, a utility measurement, which
provides a measure of success in a given state.
• A utility-based agent acts based not only on goals but
also on the best way to achieve them.
• The utility-based agent is useful when there are
multiple possible alternatives and the agent has to
choose the best action among them.
• The utility function maps each state to a real
number, which measures how efficiently each action
achieves the goals.
34. Utility-based agent
• Goals alone are not enough to generate high-quality
behavior in most environments.
• For example, many action sequences will get the taxi
to its destination (thereby achieving the goal) but
some are quicker, safer, more reliable, or cheaper
than others.
• When there are conflicting goals, only some of which
can be achieved (for example, speed and safety),
the utility function specifies the appropriate tradeoff
or the most important goal.
• The agent chooses its action based on a preference
(utility) for each state.
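• A Python sketch of utility-driven choice (result() and utility() are
assumed helpers: a transition model and a state-to-real-number map):

    # Utility-based agent: pick the action whose predicted outcome has
    # the highest utility.
    def utility_based_agent(state, actions, result, utility):
        return max(actions, key=lambda a: utility(result(state, a)))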
36. Difference between goal-based
agents and utility-based agents
• Goal-based agents decide their actions based on goals, whereas
utility-based agents decide their actions based on utilities.
• Goal-based agents are more flexible, whereas utility-based agents
are less flexible.
• Goal-based agents are slower, whereas utility-based agents are
faster.
• Goal-based agents are not enough to generate high-quality
behavior in most environments, whereas utility-based agents are.
• Goal-based agents cannot specify the appropriate tradeoff,
whereas utility-based agents can.
• Goal-based agents are less safe, whereas utility-based agents are
safer.
37. Rational
• The dictionary meaning of rational is something
logical and sensible, not emotional.
• A rational agent is one that does the right
thing.
• The rationality of an agent is measured by its
performance measure.