Module-I: Introduction
Chapter-I: Introduction to Artificial Intelligence
What is AI ?
Foundations of Artificial Intelligence
History of Artificial Intelligence
Chapter-II:
Agent and Environment.
The concept of rationality
The nature of environment.
Structure of agent.
Problem solving agents.
What is Artificial Intelligence?
Homo Sapiens: the name is Latin for "wise man."
Philosophy of AI - "Can a machine think and behave like humans do?"
In simple words - Artificial Intelligence is a way of making a computer, a computer-
controlled robot, or software think intelligently, in a manner similar to the way intelligent
humans think.
Artificial intelligence (AI) is an area of computer science that emphasizes the creation of
intelligent machines that work and react like humans.
AI is accomplished by studying how the human brain thinks and how humans learn, decide,
and work while trying to solve a problem, and then using the outcomes of this study as a
basis for developing intelligent software and systems.
What is Artificial Intelligence?
Views of AI fall into four categories:
i. Thinking humanly
ii. Thinking rationally
iii. Acting humanly
iv. Acting rationally
In the figure below we see eight definitions of AI, laid out along two dimensions.
The definitions on top are concerned with thought processes and reasoning,
whereas the ones on the bottom address behavior.
The definitions on the left measure success in terms of fidelity to human performance,
whereas the ones on the right measure against an ideal performance measure, called
rationality. A system is rational if it does the "right thing," given what it knows.
Historically, all four approaches to AI have been followed, each by different people with
different methods. A human-centered approach must be in part an empirical science,
involving observations and hypotheses about human behavior. A rationalist approach
involves a combination of mathematics and engineering. The various groups have both
disparaged and helped each other.
Fig: The four categories arranged along two dimensions: thought processes and reasoning vs.
behavior (rows), and fidelity to human performance vs. rationality, i.e., doing the "right
thing" (columns).
I. Thinking humanly: The cognitive modeling approach
If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think.
We need to get inside the actual workings of human minds.
There are 3 ways to do it:
i. Through introspection
--Trying to catch our own thoughts as they go
ii. Through psychological experiments
--Observing a person in action.
iii. Through brain imaging
--Observing the brain in action
We can then compare the trace of a computer program's reasoning steps to traces of human
subjects solving the same problem.
Cognitive Science brings together computer models from AI and experimental techniques
from psychology to construct precise and testable theories of the workings of the human
mind.
Cognitive Science is now distinct from AI:
-- AI and Cognitive Science fertilize each other in the areas of vision and natural
language.
Once we have a sufficiently precise theory of the mind, it becomes possible to express
the theory as a computer program.
If the program’s input-output behavior matches corresponding human behavior, that is
evidence that the program’s mechanisms could also be working in humans.
For example, Allen Newell and Herbert Simon, who developed GPS, the "General
Problem Solver," compared the trace of its reasoning steps to traces of human subjects
solving the same problems.
II. Acting humanly: The Turing Test approach
Turing (1950) published "Computing Machinery and Intelligence," which asked:
"Can machines think?" or "Can machines behave intelligently?"
Operational test for intelligent behavior: the Imitation Game .
A computer passes the test if a human interrogator, after posing some written questions,
cannot tell whether the written responses come from a person or from a machine.
Suggested major components of AI: knowledge, reasoning, language understanding,
learning.
The computer would need to possess the following capabilities:
• Natural Language Processing: To enable it to communicate successfully in English.
• Knowledge representation: To store what it knows or hears.
• Automated reasoning: To use the stored information to answer questions and to draw
new conclusions.
• Machine Learning: To adapt to new circumstances and to detect and extrapolate
patterns.
To pass the Total Turing Test, the computer would also need:
• Computer vision: To perceive objects.
• Robotics: To manipulate objects and move about.
III. Thinking rationally: The “laws of thought” approach
The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking,"
that is, unquestionable reasoning processes. His syllogisms provided patterns for argument
structures that always yielded correct conclusions when given correct premises.
E.g.:
Socrates is a man;
all men are mortal;
therefore, Socrates is mortal. (logic)
There are two main obstacles to this approach.
1. It is not easy to take informal knowledge and state it in the formal terms required by
logical notation, particularly when the knowledge is less than 100% certain.
2. There is a big difference between solving a problem "in principle" and solving it
in practice.
IV. Acting rationally: The rational agent approach
An agent is just something that acts. Agent = Architecture + Program.
All computer programs do something, but computer agents are expected to do more:
operate autonomously, perceive their environment, persist over a prolonged time period,
adapt to change, and create and pursue goals.
A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
In the "laws of thought" approach to AI, the emphasis was on correct inferences.
On the other hand, correct inference is not all of rationality; in some situations, there is no
provably correct thing to do, but something must still be done.
For example, recoiling from a hot stove is a reflex action that is usually more successful
than a slower action taken after careful deliberation.
• What does it mean for a person/system to "behave rationally"?
Take the right/best action to achieve the goals, based on his/its knowledge and belief.
Example: Assume I don't like to get wet in the rain (my goal), so I bring an umbrella (my
action). Do I behave rationally?
The answer depends on my knowledge and belief:
o If I've heard the forecast for rain and I believe it, then bringing the umbrella is
rational.
o If I've not heard the forecast for rain and I do not believe that it is going to rain, then
bringing the umbrella is not rational.
"Behaving rationally" does not always achieve the goals successfully.
Example:
My goals –
(i) do not get wet if it rains;
(ii) do not look stupid (such as bringing an umbrella when it is not raining).
My knowledge/belief – the weather forecast is for rain and I believe it.
My rational behavior – bring an umbrella.
The outcome of my behavior:
If it rains, then my rational behavior achieves both goals; if it does not rain, my
rational behavior fails to achieve the 2nd goal.
The success of "behaving rationally" is limited by my knowledge and belief.
Foundations of Artificial Intelligence
I: Philosophy
• Can formal rules be used to draw valid conclusions (do effective formal rules exist)?
• How does the mind arise from a physical brain? Where does knowledge come from?
• How does knowledge lead to action?
• Aristotle was the first to formulate a precise set of laws governing the rational part of the
mind.
He developed an informal system of syllogisms (logical reasoning) for proper reasoning,
which in principle allowed one to generate conclusions mechanically, given initial premises.
For example:
All dogs are animals;
All animals have four legs;
Therefore, all dogs have four legs
• Descartes was a strong advocate of the power of reasoning in understanding the world, a
philosophy now called rationalism.
II: Mathematics
• What are the formal rules to draw valid conclusions? What can be computed?
• How do we reason with uncertain information?
• Formal representation and proof: algorithms, propositional logic.
• Computation: Turing tried to characterize exactly which functions are computable, i.e.,
capable of being computed.
• (Un)decidability: Gödel's incompleteness theorem showed that in any sufficiently
expressive formal theory, there are true statements that are undecidable, i.e., they have no
proof within the theory.
o E.g.: "a line can be extended infinitely in both directions"
• (In)tractability: a problem is called intractable (very difficult to solve and manage) if the
time required to solve instances of the problem grows exponentially with the size of the
instance.
• Probability: reasoning about and predicting the future.
III: Economics
• How should we make decisions so as to maximize payoff?
• Economics is the study of how people make choices that lead to preferred
outcomes (utility).
• Decision theory: combines probability theory with utility theory, providing a formal and
complete framework for decisions made under uncertainty.
IV: Neuroscience
• How do brains process information?
• Neuroscience is the study of the nervous system, particularly the brain.
• The brain consists of nerve cells, or neurons; there are about 10^11 of them.
• Neurons can be considered computational units.
V: Psychology
• The behaviorism movement, led by John Watson (1878-1958): behaviorists insisted on studying
only objective measures of the percepts (stimulus) given to an animal and its resulting actions (or
response). Behaviorism discovered a lot about rats and pigeons but had less success at
understanding humans.
• Cognitive psychology views the brain as an information-processing device. A common view
among psychologists is that a cognitive theory should be like a computer program (Anderson, 1980),
i.e., it should describe a detailed information-processing mechanism whereby some cognitive
function might be implemented.
VI: Computer engineering: How can we build an efficient computer?
• For artificial intelligence to succeed, we need two things: intelligence and an artifact. The
computer has been the artifact(object) of choice.
• The first operational computer was the electromechanical Heath Robinson, built in 1940 by
Alan Turing's team for a single purpose: deciphering German messages.
• The first operational programmable computer was the Z-3, invented by Konrad Zuse in
Germany in 1941.
• The first electronic computer, the ABC, was assembled by John Atanasoff and his student
Clifford Berry between 1940 and 1942 at Iowa State University
•The first programmable machine was a loom, devised in 1805 by Joseph Marie Jacquard
(1752-1834) that used punched cards to store instructions for the pattern to be woven.
VII: Control theory and cybernetics :
How can artifacts operate under their own control?
• Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water clock
with a regulator that maintained a constant flow rate. This invention changed the definition
of what an artifact could do.
• Modern control theory, especially the branch known as stochastic optimal control, has as its
goal the design of systems that maximize an objective function over time. This roughly
matches our view of AI: designing systems that behave optimally.
• Calculus and matrix algebra are the tools of control theory.
VIII: Linguistics: How does language relate to thought?
• In 1957, B. F. Skinner published Verbal Behavior, a comprehensive, detailed
account of the behaviorist approach to language learning, written by the foremost expert in
the field.
• Modern linguistics and AI were "born" at about the same time and grew up together,
intersecting in a hybrid field called computational linguistics or natural language processing.
• The problem of understanding language soon turned out to be considerably more complex
than it seemed in 1957.
• Understanding language requires an understanding of the subject matter and context,
not just an understanding of the structure of sentences.
History of Artificial Intelligence
1. The gestation of artificial intelligence (1943–1955):
The gestation of artificial intelligence (AI) during the period from 1943 to 1955
marked the early theoretical and conceptual groundwork for the field. This period laid the
foundation for the subsequent development of AI.
2. The birth of artificial intelligence (1956):
The birth of artificial intelligence (AI) in 1956 is commonly associated with the
Dartmouth Conference, a seminal event that took place at Dartmouth College in Hanover,
New Hampshire.
3. Early enthusiasm, great expectations (1952–1969):
The period from 1952 to 1969 in the history of artificial intelligence (AI) was
characterized by early enthusiasm and great expectations. Researchers during this time
were optimistic about the potential of AI and believed that significant progress could be
made in creating machines with human-like intelligence.
4. A dose of reality (1966–1973):
The period from 1966 to 1973 in the history of artificial intelligence (AI) is often
referred to as "A Dose of Reality." During this time, researchers faced challenges and setbacks
that led to a re-evaluation of the initial optimism and expectations surrounding AI.
5. Knowledge-based systems: The key to power? (1969–1979)
The period from 1969 to 1979 in the history of artificial intelligence (AI) is
characterized by a focus on knowledge-based systems, with researchers exploring the use of
symbolic representation of knowledge to address challenges in AI. This era saw efforts to build
expert systems, which were designed to emulate human expertise in specific domains.
6. AI becomes an industry (1980–present):
The period from 1980 to the present marks the evolution of artificial intelligence (AI)
into an industry, witnessing significant advancements, increased commercialization, and
widespread applications across various domains.
7. The return of neural networks (1986–present):
The period from 1986 to the present is characterized by the resurgence and dominance
of neural networks in the field of artificial intelligence (AI). This era is marked by significant
advancements in the development of neural network architectures, training algorithms, and the
widespread adoption of deep learning techniques.
8. AI adopts the scientific method (1987–present):
The period from 1987 to the present has seen the adoption of the scientific method in
the field of artificial intelligence (AI), reflecting a more rigorous and empirical approach to
research. This shift has involved the application of experimental methodologies,
reproducibility, and a greater emphasis on evidence-based practices.
9. The emergence of intelligent agents (1995–present):
The period from 1995 to the present has been marked by the emergence and evolution
of intelligent agents in the field of artificial intelligence (AI). Intelligent agents are
autonomous entities that perceive their environment, make decisions, and take actions to
achieve goals.
10. The availability of very large data sets (2001–present):
The period from 2001 to the present has been characterized by the availability and
utilization of very large datasets in the field of artificial intelligence (AI). This era has
witnessed an unprecedented growth in the volume and diversity of data, providing a
foundation for training and enhancing increasingly sophisticated AI models.
Chapter II: Intelligent Agents
Introduction:
•In this chapter we will see that the concept of rationality can be applied to a wide variety of
agents operating in any imaginable environment.
•Use this concept to develop a small set of design principles for building successful agents—
systems that can reasonably be called intelligent.
•We begin by examining agents, environments, and the coupling between them.
• The observation that some agents behave better than others leads naturally to the idea of a
rational agent—one that behaves as well as possible.
• How well an agent can behave depends on the nature of the environment; some
environments are more difficult than others.
•We give a crude categorization of environments and show how properties of an environment
influence the design of suitable agents for that environment.
Agents and environment
An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators. This simple idea is illustrated in the
figure below.
• Percept − The agent's perceptual input at a given instant.
• Percept Sequence − The complete history of everything the agent has perceived to date.
• Agent Function − A mapping from the percept sequence to an action.
• Performance Measure of Agent − The criterion that determines how successful an
agent is.
• Behavior of Agent − The action that the agent performs after any given sequence of percepts.
•We can imagine tabulating the agent function that describes any given agent; for most
agents, this would be a very large table—infinite, in fact, unless we place a bound on the
length of percept sequences we want to consider.
• Given an agent to experiment with, we can, in principle, construct this table by trying out
all possible percept sequences and recording which actions the agent does in response.
• The table is, of course, an external characterization of the agent. Internally, the agent
function for an artificial agent will be implemented by an agent program.
• It is important to keep these two ideas distinct.
• The agent function is an abstract mathematical description.
• The agent program is a concrete implementation, running within some physical system.
To illustrate these ideas, we use a very simple example—the vacuum-cleaner world
This particular world has just two locations:
squares A and B.
The vacuum agent perceives which square it is in and whether there is dirt in the square.
It can choose to move left, move right, suck up the dirt, or do nothing.
One very simple agent function is the following: if the current square is dirty, then suck;
otherwise, move to the other square.
A partial tabulation of this agent function is shown in Figure
Fig: A vacuum-cleaner world with just two locations, squares A and B.
Fig: Agent function for the reflex agent in the two-state vacuum environment.
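To make the tabulated function concrete, the following is a minimal Python sketch of this
agent function (the percept encoding and names are illustrative assumptions):

# Reflex vacuum agent for the two-location world described above.
# A percept is a (location, status) pair, e.g. ('A', 'Dirty').
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'            # clean the current square first
    return 'Right' if location == 'A' else 'Left'  # otherwise move over

# Reproduces the partial tabulation in the figure:
for percept in [('A', 'Clean'), ('A', 'Dirty'), ('B', 'Clean'), ('B', 'Dirty')]:
    print(percept, '->', reflex_vacuum_agent(percept))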
Concept of Rationality
A rational agent is one that does the right thing—conceptually speaking, every entry in the
table for the agent function is filled out correctly.
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
A definition of a rational agent:
"For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has."
Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the
other square if not; this is the agent function tabulated in Figure 2.3. Is this a rational agent?
That depends! First, we need to say what the performance measure is, what is known about the
environment, and what sensors and actuators the agent has. Let us assume the following:
• The performance measure awards one point for each clean square at each time step, over a
"lifetime" of 1000 time steps.
• The "geography" of the environment is known a priori (Figure 2.2) but the dirt distribution and
the initial location of the agent are not. Clean squares stay clean and sucking cleans the current
square. The Left and Right actions move the agent left and right except when this would take
the agent outside the environment, in which case the agent remains where it is.
• The only available actions are Left, Right, and Suck.
• The agent correctly perceives its location and whether that location contains dirt.
We claim that under these circumstances the agent is indeed rational.
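As a sanity check on this claim, here is a small simulation sketch in Python (assumed setup:
both squares start dirty and the agent starts in square A); it awards one point per clean
square at each time step over the 1000-step lifetime:

# Score the reflex vacuum agent under the performance measure stated above.
def reflex_vacuum_agent(location, status):
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

dirt = {'A': True, 'B': True}   # assumed initial dirt distribution
location, score = 'A', 0        # assumed initial location

for _ in range(1000):
    status = 'Dirty' if dirt[location] else 'Clean'
    action = reflex_vacuum_agent(location, status)
    if action == 'Suck':
        dirt[location] = False
    else:
        location = 'B' if action == 'Right' else 'A'
    score += sum(not dirty for dirty in dirt.values())  # +1 per clean square

print(score)  # 1998: A is clean from step 1 onward, B from step 3 onward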
The nature of environment
I: Specifying the task environment:
• In our discussion of the rationality of the simple vacuum-cleaner agent, we had to specify the
performance measure, the environment, and the agent's actuators and sensors (PEAS).
• We group all these under the heading of the task environment.
• Task environments are essentially the "problems" to which rational agents are the
"solutions."
• Specifying the performance measure, the environment, and the agent's actuators and sensors
is called the PEAS (Performance, Environment, Actuators, Sensors) description.
•In designing an agent, the first step must always be to specify the task environment as fully as
possible.
The vacuum world was a simple example; let us consider a more complex problem: an
automated taxi driver.
PEAS description of an automated taxi driver
Figure 2.4 summarizes the PEAS description for the taxi’s task environment.
First, what is the performance measure to which we would like our automated driver to aspire?
• Desirable qualities include getting to the correct destination; minimizing fuel
consumption and wear and tear;
• minimizing the trip time or cost;
• minimizing violations of traffic laws and disturbances to other drivers;
• maximizing safety and passenger comfort;
• maximizing profits. Obviously, some of these goals conflict, so tradeoffs will be required
Next, what is the driving environment that the taxi will face?
Any taxi driver must deal with a variety of roads, ranging from rural lanes and urban
alleys to 12-lane freeways. The roads contain other traffic, pedestrians, stray animals,
road works, police cars, puddles, and potholes.
•The actuators for an automated taxi include those available to a human driver: control over
the engine through the accelerator and control over steering and braking. In addition, it will
need output to a display screen or voice synthesizer to talk back to the passengers, and perhaps
some way to communicate with other vehicles, politely or otherwise.
•The basic sensors for the taxi will include one or more controllable video cameras so that it
can see the road; it might augment these with infrared or sonar sensors to detect distances to
other cars and obstacles. To avoid speeding tickets, the taxi should have a speedometer, and to
control the vehicle properly, especially on curves, it should have an accelerometer.
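For illustration, this PEAS description can also be written down as a plain data structure;
the entries below paraphrase the text above and are not exhaustive:

# PEAS description of the automated taxi driver, as a simple dictionary.
taxi_peas = {
    'Performance': ['safe', 'fast', 'legal', 'comfortable trip',
                    'maximize profits'],
    'Environment': ['roads', 'other traffic', 'pedestrians', 'customers'],
    'Actuators':   ['steering', 'accelerator', 'brake', 'signal', 'horn',
                    'display or voice synthesizer'],
    'Sensors':     ['cameras', 'infrared/sonar', 'speedometer',
                    'accelerometer', 'engine sensors'],
}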
II. Properties of task environments
•The range of task environments that might arise in AI is obviously vast.
•We can, however, identify a fairly small number of dimensions along which task
environments can be categorized.
•These dimensions determine, to a large extent, the appropriate agent design and the
applicability of each of the principal families of techniques for agent implementation.
• First, we list the dimensions; then we analyze several task environments to illustrate the
ideas. The definitions here are informal.
Fig: The basic PEAS elements for a number of additional agent types.
I. Fully observable vs. partially observable:
Fully observable:
• If an agent's sensors give it access to the complete state of the environment at each point
in time, then we say that the task environment is fully observable.
• A task environment is effectively fully observable if the sensors detect all aspects that are
relevant to the choice of action; relevance, in turn, depends on the performance measure.
• Fully observable environments are convenient because the agent need not maintain any
internal state to keep track of the world.
Partially observable:
• An environment might be partially observable because of noisy and inaccurate sensors or
because parts of the state are simply missing from the sensor data.
• For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt
in other squares, and an automated taxi cannot see what other drivers are thinking.
• If the agent has no sensors at all, then the environment is unobservable.
II. Single agent vs. Multi Agent:
•An agent solving a crossword puzzle by itself is clearly in a single-agent environment,
whereas an agent playing chess is in a two-agent environment.
•There are, however, some subtle issues. First, we have described how an entity may be
viewed as an agent, but we have not explained which entities must be viewed as agents.
• Does an agent A (the taxi driver for example) have to treat an object B (another vehicle) as
an agent, or can it be treated merely as an object behaving according to the laws of physics,
analogous to waves at the beach or leaves blowing in the wind?
•The key distinction is whether B’s behavior is best described as maximizing a performance
measure whose value depends on agent A’s behavior.
•For example, in chess, the opponent entity B is trying to maximize its performance measure,
which, by the rules of chess, minimizes agent A’s performance measure.
III. Deterministic vs. Stochastic
•If the next state of the environment is completely determined by the current state and the
action executed by the agent, then we say the environment is deterministic; otherwise, it is
stochastic.
•In principle, an agent need not worry about uncertainty in a fully observable, deterministic
environment. (In our definition, we ignore uncertainty that arises purely from the actions of
other agents in a multiagent environment; thus, a game can be deterministic even though
each agent may be unable to predict the actions of the others.)
•If the environment is partially observable, however, then it could appear to be stochastic.
Most real situations are so complex that it is impossible to keep track of all the unobserved
aspects; for practical purposes, they must be treated as stochastic. Taxi driving is clearly
stochastic in this sense, because one can never predict the behavior of traffic exactly
IV. Episodic vs. sequential :
•In an episodic task environment, the agent's experience is divided into atomic episodes.
•In each episode the agent receives a percept and then performs a single action. Crucially,
episodes do not depend on the actions taken in previous episodes, and they do not influence
future episodes.
•Ex: an agent that has to spot defective parts on an assembly line.
•In sequential environments, the current decision could affect all future decisions ⇒ actions
can have long-term consequences.
•Ex: chess, taxi driving, ... Episodic environments are much simpler than sequential ones:
no need to think ahead!
V. Static vs. dynamic :
•The task environment is dynamic if it can change while the agent is choosing an action,
and static otherwise ⇒ in a dynamic environment, the agent needs to keep looking at the
world while deciding on an action.
•Ex: crossword puzzles are static, taxi driving is dynamic. The task environment is semi-
dynamic if the environment itself does not change with time, but the agent's performance
score does.
•Ex: chess with a clock. Static environments are easier to deal with than [semi]dynamic ones.
VI. Discrete vs. continuous:
•The discrete/continuous distinction applies to the state of the environment, to the way time is
handled, and to the percepts and actions of the agent.
For example:
The chess environment has a finite number of distinct states (excluding the clock). Chess also
has a discrete set of percepts and actions.
Taxi driving is a continuous-state and continuous-time problem.
• Ex: Crossword puzzles: discrete state, time, percepts & actions
• Ex: Taxi driving: continuous state, time, percepts & actions.
Note:
The simplest environment is fully observable, single-agent, deterministic, episodic,
static and discrete. Ex: simple vacuum cleaner.
Most real-world situations are partially observable, multi-agent, stochastic, sequential,
dynamic, and continuous. Ex: taxi driving
Properties of the Agent’s State of Knowledge:
Known vs. unknown
•Describes the agent’s (or designer’s) state of knowledge about the “laws of physics” of the
environment
• if the environment is known, then the outcomes (or outcome probabilities if stochastic)
for all actions are given.
• if the environment is unknown, then the agent will have to learn how it works in order to
make good decisions.
•This distinction is orthogonal to the other task-environment properties: known is not the
same as fully observable.
• A known environment can be partially observable (Ex: solitaire card games).
• An unknown environment can be fully observable (Ex: a game whose rules I don't know).
Structure of agent
•So far we have talked about agents by describing behavior—the action that is performed
after any given sequence of percepts. Now we must bite the bullet and talk about how the
insides work.
•The job of AI is to design an agent program that implements the agent function— the
mapping from percepts to actions.
•We assume this program will run on some sort of computing device with physical sensors
and actuators—we call this the architecture.
agent = architecture + program
i. Agent programs
• AI job: design an agent program implementing the agent function.
• The agent program runs on some computing device with physical sensors and actuators:
the agent architecture.
• All agents have the same skeleton:
• Input: current percepts.
• Output: action.
• Program: manipulates input to produce output.
• The agent function takes the entire percept history as input.
• The agent program takes only the current percept as input.
• If the actions need to depend on the entire percept sequence, the agent will have to
remember the percepts.
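A minimal sketch of this skeleton in Python (illustrative names; the decision routine is a
placeholder):

# Skeleton shared by all agent programs: take the current percept, return an
# action. If actions must depend on the whole percept sequence, the program
# has to remember the percepts itself.
percepts = []  # the agent's memory of everything perceived so far

def choose_action(percept_sequence):
    return 'NoOp'  # placeholder; a real decision rule goes here

def agent_program(percept):
    percepts.append(percept)        # remember the percepts
    return choose_action(percepts)  # map the percept sequence to an action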
Example: The Table-Driven Agent
The table explicitly represents the agent function. Ex: the simple vacuum cleaner.
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability.
Note:
The function consists of a lookup table of actions to be taken for every possible state of the
environment.
This only works for a small number of possible environment states:
we as designers must construct a table that contains the appropriate action for every possible
percept sequence.
It is instructive to consider why the table-driven approach to agent construction is
doomed to failure.
Let P be the set of possible percepts and let T be the lifetime of the agent (the total number
of percepts it will receive).
The lookup table will contain Σ_{t=1..T} |P|^t entries.
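A quick worked computation of this table size, assuming the two-square vacuum world with
its 4 possible percepts, (A or B) x (Clean or Dirty):

# Number of lookup-table entries: sum over t = 1..T of |P|^t.
def table_entries(num_percepts, lifetime):
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

print(table_entries(4, 3))   # 4 + 16 + 64 = 84
print(table_entries(4, 10))  # 1,398,100 entries: exponential growth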
All these agents can improve their performance and generate better actions over time.
They are given below:
• Simple reflex agent
• Model-based reflex agent
• Goal-based agent
• Utility-based agent
• Learning agent
Simple reflex agents:
• Simple reflex agents are the simplest agents. These agents make decisions on the basis of
the current percept and ignore the rest of the percept history.
• These agents only succeed in a fully observable environment.
• The simple reflex agent does not consider any part of the percept history during its decision
and action process.
• The simple reflex agent works on condition-action rules, which means it maps the current
state to an action. An example is a room-cleaner agent that works only if there is dirt in the room.
• Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They do not have knowledge of non-perceptual parts of the current state.
• The rule sets are mostly too big to generate and to store.
• They are not adaptive to changes in the environment.
•Simple reflex behaviors occur even in more complex environments.
•Imagine yourself as the driver of the automated taxi.
•If the car in front brakes and its brake lights come on, then you should notice this and
initiate braking.
•In other words, some processing is done on the visual input to establish the condition we
call “The car in front is braking.” Then, this triggers some established connection in the
agent program to the action “initiate braking.”
• We call such a connection a condition–action rule, written as:
if car-in-front-is-braking then initiate-braking
Fig: Schematic diagram of a simple reflex agent.
•We use rectangles to denote the current internal state of the agent’s decision process, and
ovals to represent the background information used in the process.
• The agent program, which is also very simple, is shown in Figure 2.10.
•The INTERPRET-INPUT function generates an abstracted description of the current state
from the percept, and the RULE-MATCH function returns the first rule in the set of rules that
matches the given state description.
•Note that the description in terms of “rules” and “matching” is purely conceptual; actual
implementations can be as simple as a collection of logic gates implementing a Boolean
circuit.
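A minimal Python sketch of this agent program; the rule set, INTERPRET-INPUT, and
RULE-MATCH below are toy stand-ins for the conceptual pieces just described:

# Simple reflex agent: abstract the percept into a state, match a rule, act.
rules = {'car-in-front-is-braking': 'initiate-braking'}  # condition-action rules

def interpret_input(percept):
    # Here the percept is assumed to already be a state description string;
    # a real implementation would abstract raw sensor data into one.
    return percept

def rule_match(state, rules):
    return rules.get(state)  # the (only) rule whose condition matches, if any

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    action = rule_match(state, rules)
    return action or 'NoOp'

print(simple_reflex_agent('car-in-front-is-braking'))  # initiate-braking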
Model-based reflex agent:
The Model-based agent can work in a partially observable environment, and track the
situation.
A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
• Internal State: It is a representation of the current state based on percept
history.
These agents have the model, "which is knowledge of the world" and based on the model
they perform actions.
Updating the agent state requires information about:
• How the world evolves
• How the agent's action affects the world.
Fig: Schematic diagram of a model-based reflex agent (the percept is combined with the old
internal state to create an updated description of the current state, and a matching rule is
then returned from the rule set).
•For the braking problem, the internal state is not too extensive— just the previous frame
from the camera, allowing the agent to detect when two red lights at the edge of the vehicle
go on or off simultaneously.
•For other driving tasks such as changing lanes, the agent needs to keep track of where the
other cars are if it can’t see them all at once. And for any driving to be possible at all, the
agent needs to keep track of where its keys are.
•Updating this internal state information as time goes by requires two kinds of knowledge to
be encoded in the agent program.
• First, we need some information about how the world evolves independently of the agent
—for example, that an overtaking car generally will be closer behind than it was a moment
ago.
•Second, we need some information about how the agent’s own actions affect the world—
for example, that when the agent turns the steering wheel clockwise, the car turns to the
right, or that after driving for five minutes northbound on the freeway, one is usually about
five miles north of where one was five minutes ago.
•This knowledge about “how the world works”—whether implemented in simple Boolean
circuits or in complete scientific theories—is called a model of the world. An agent that uses
such a model is called a model-based agent.
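A minimal Python sketch of a model-based reflex agent (illustrative names; the "model"
here is deliberately trivial, while a real one would also predict the unobserved parts of
the state):

# Model-based reflex agent: keep an internal state, update it from the latest
# action and percept, then choose an action by rule matching on the state.
state = {}           # the agent's current conception of the world state
last_action = None   # most recent action, used when updating the state

def update_state(state, action, percept):
    new_state = dict(state)
    new_state.update(percept)  # trivial model: overwrite with observed facts
    return new_state

def rule_match(state):
    if state.get('car_in_front_braking'):
        return 'initiate-braking'
    return 'NoOp'

def model_based_reflex_agent(percept):
    global state, last_action
    state = update_state(state, last_action, percept)
    last_action = rule_match(state)
    return last_action

print(model_based_reflex_agent({'car_in_front_braking': True}))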
Goal-based agents
•Knowledge of the current state of the environment is not always sufficient for an agent to
decide what to do.
•The agent needs to know its goal which describes desirable situations.
•Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
• They choose an action, so that they can achieve the goal.
•These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not. Such consideration of different scenarios is called
searching and planning, and it makes an agent proactive.
• Sometimes goal-based action selection is straightforward: for example when goal satisfaction
results immediately from a single action.
•Sometimes it will be trickier: for example, when the agent has to consider long sequences of
twists and turns to find a way to achieve the goal.
•Search and planning are the subfields of AI devoted to finding action sequences that achieve
the agent’s goals.
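A minimal Python sketch of goal-based action selection for the easy case where a single
action can reach the goal (the names and toy transition model are assumptions; harder
cases need search and planning):

# Try each available action against the transition model; pick one whose
# predicted outcome satisfies the goal test.
def goal_based_choice(state, actions, transition_model, goal_test):
    for action in actions:
        if goal_test(transition_model(state, action)):
            return action
    return None  # no single action suffices: hand over to search/planning

# Toy usage: reach position 3 from position 2 on a number line.
model = lambda s, a: s + 1 if a == 'right' else s - 1
print(goal_based_choice(2, ['left', 'right'], model, lambda s: s == 3))  # right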
Utility-based agents
• These agents are similar to the goal-based agent but add an extra component of utility
measurement, which makes them different by providing a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve them.
• The utility-based agent is useful when there are multiple possible alternatives and an
agent has to choose the best action to perform.
• The utility function maps each state to a real number to check how efficiently each action
achieves the goals.
•Advantages of utility-based agents w.r.t. goal-based ones:
• With conflicting goals, utility specifies an appropriate tradeoff.
• With several goals, none of which can be achieved with certainty, utility selects a proper
tradeoff between the importance of the goals and the likelihood of success.
•Disadvantages:
• Still complicated to implement.
• Require sophisticated perception, reasoning, and learning.
• May require expensive computation.
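A minimal Python sketch of choosing the action with maximum expected utility (the outcome
probabilities and the utility function below are made-up illustrations):

# Expected utility of an action = sum of probability * utility over outcomes.
def utility(state):
    return -abs(state)  # assumed utility: states closer to 0 are better

def expected_utility(outcomes):
    return sum(p * utility(s) for p, s in outcomes)

def utility_based_choice(action_outcomes):
    # action_outcomes maps each action to its list of (probability, state).
    return max(action_outcomes,
               key=lambda a: expected_utility(action_outcomes[a]))

choices = {'left':  [(0.8, -1), (0.2, 0)],   # EU = -0.8
           'right': [(0.8, 1),  (0.2, 2)]}   # EU = -1.2
print(utility_based_choice(choices))  # left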
Learning Agents:
• Problem: the previous agent programs describe methods for selecting actions,
but how are these agent programs programmed?
• Programming by hand is inefficient and ineffective!
• Solution: build learning machines and then teach them (rather than instruct them).
• Advantage: robustness of the agent program toward initially unknown environments.
• Performance element: selects actions based on percepts; corresponds to the previous agent
programs.
• Learning element: introduces improvements; uses feedback from the critic on how the agent
is doing and determines improvements for the performance element.
• Critic: tells how the agent is doing w.r.t. a fixed performance standard.
• Problem generator: suggests actions that will lead to new and informative experiences;
forces exploration of new, stimulating scenarios.
•Example: Taxi Driving
• After the taxi makes a quick left turn across three lanes, the critic observes the
shocking language used by other drivers.
• From this experience, the learning element formulates a rule saying this was a
bad action.
• The performance element is modified by adding the new rule.
• The problem generator might identify certain areas of behavior in need of
improvement, and suggest trying out the brakes on different road surfaces under different
conditions.
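A minimal Python sketch of how the four components could fit together for this taxi
example (all names and behaviors are illustrative assumptions):

# Learning agent: the critic judges outcomes against a performance standard,
# and the learning element turns that feedback into a rule change.
class LearningAgent:
    def __init__(self):
        self.rules = {}  # the performance element's condition-action rules

    def performance_element(self, percept):
        return self.rules.get(percept, 'NoOp')  # the acting part of the agent

    def critic(self, percept, reaction):
        # Judge the world's reaction against the performance standard.
        if reaction == 'shocking-language':
            return (percept, 'avoid')  # feedback: that behavior was bad
        return None

    def learning_element(self, feedback):
        if feedback:
            percept, better_action = feedback
            self.rules[percept] = better_action  # modify performance element

    def problem_generator(self):
        return 'try-brakes-on-wet-road'  # suggest an informative experiment

agent = LearningAgent()
feedback = agent.critic('quick-left-across-three-lanes', 'shocking-language')
agent.learning_element(feedback)
print(agent.performance_element('quick-left-across-three-lanes'))  # avoid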
Problem solving agents
PROBLEM-SOLVING APPROACH IN ARTIFICIAL INTELLIGENCE PROBLEMS:
• The reflex agents are known as the simplest agents because they directly map states into actions.
• Unfortunately, these agents fail to operate in an environment where the mapping is too large to
store and learn.
•Goal-based agent, on the other hand, considers future actions and the desired outcomes.
•Here, we will discuss one type of goal-based agent known as a problem-solving agent, which
uses atomic representation with no internal states visible to the problem-solving algorithms.
Problem-solving agent:
A problem-solving agent operates by precisely defining problems and their possible solutions.
• According to psychology, "problem solving refers to a state where we wish to reach
a definite goal from a present state or condition."
• According to computer science, "problem solving is a part of artificial intelligence
that encompasses a number of techniques, such as algorithms and heuristics, to solve a
problem."
• Therefore, a problem-solving agent is a goal-driven agent that focuses on satisfying its
goal.
PROBLEM DEFINITION :
•To build a system to solve a particular problem, we need to do four things:
(i) Define the problem precisely. This definition must include a specification of the
initial situation and also of the final situations that constitute acceptable
solutions to the problem.
(ii) Analyze the problem, i.e., identify the important features that can have an immense
impact on the appropriateness of various techniques for solving the problem.
(iii) Isolate and represent the knowledge needed to solve the problem.
(iv) Choose the best problem-solving technique and apply it to the particular
problem.
Steps performed by Problem-solving agent :
•Goal Formulation: The first and simplest step in problem-solving. It organizes the
steps/sequence required to formulate one goal out of multiple goals, as well as the actions to
achieve that goal. Goal formulation is based on the current situation and the agent's
performance measure. (The steps to carry out to achieve the goal.)
•Problem Formulation: The most important step of problem-solving, which decides what
actions should be taken to achieve the formulated goal. The following five components are
involved in problem formulation (a sketch of them as a data structure follows below):
•Initial State: The starting state of the agent on the way towards its goal.
•Actions: A description of the possible actions available to the agent.
•Transition Model (Successor Function): Describes what each action does.
•Goal Test: Determines whether a given state is a goal state.
•Path Cost: Assigns a numeric cost to each path. The problem-solving
agent selects a cost function that reflects its own performance measure.
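As referenced above, here is a minimal Python sketch of the five components bundled into a
problem object, instantiated for the two-square vacuum world (illustrative, not a library API):

# The five problem-formulation components bundled into one structure.
class Problem:
    def __init__(self, initial_state, actions, transition, goal_test, step_cost):
        self.initial_state = initial_state  # where the agent starts
        self.actions = actions              # actions available in a state
        self.transition = transition        # what each action does
        self.goal_test = goal_test          # is this state a goal?
        self.step_cost = step_cost          # path cost = sum of step costs

# Vacuum world: a state is (location, set of dirty squares).
def vacuum_transition(state, action):
    loc, dirt = state
    if action == 'Suck':
        return (loc, dirt - {loc})
    return ('A' if action == 'Left' else 'B', dirt)

problem = Problem(
    initial_state=('A', {'A', 'B'}),
    actions=lambda s: ['Left', 'Right', 'Suck'],
    transition=vacuum_transition,
    goal_test=lambda s: not s[1],   # goal: no dirt anywhere
    step_cost=lambda s, a: 1,       # every action costs 1
)
print(problem.goal_test(vacuum_transition(('B', {'B'}), 'Suck')))  # True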
Examples for the problem-solving agent:
1. Toy problem:
1. 8 puzzle problem
2. Vacuum problem
3. 4/8 Queen Problem
2. Real world problem
1. Robot navigation:
Robot navigation is a generalization of the route-finding problem described
earlier. Rather than following a discrete set of routes, a robot can move in a
continuous space with (in principle) an infinite set of possible actions and states.
2. traveling salesperson problem (TSP):
The traveling salesperson problem (TSP) is a touring problem in which each
city must be visited exactly once. The aim is to find the shortest tour.
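A minimal brute-force TSP sketch in Python, workable only for toy instances since the
number of tours grows factorially (the four cities and distances are made-up):

from itertools import permutations

# Symmetric distances between four illustrative cities.
edges = {('A', 'B'): 2, ('A', 'C'): 9, ('A', 'D'): 10,
         ('B', 'C'): 6, ('B', 'D'): 4, ('C', 'D'): 3}
dist = {**edges, **{(b, a): d for (a, b), d in edges.items()}}

def tour_length(tour):
    legs = zip(tour, tour[1:] + (tour[0],))  # close the loop to the start
    return sum(dist[leg] for leg in legs)

# Each city visited exactly once; find the shortest closed tour.
best = min(permutations(['A', 'B', 'C', 'D']), key=tour_length)
print(best, tour_length(best))  # ('A', 'B', 'D', 'C') with length 18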
VTU University questions:
1. What is PEAS? Explain different agent types with their PEAS description. (vtu 2019-20)
2. Explain in details the properties of Task Environment. (vtu 2019-20).
3. Define AI and list the task domain of AI. (vtu 21-22).
4. Explain types of environment. (2019).
5. Explain the concept of rationality. (2019)
6. Explain the reflex agent with state. (simple reflex [vacuum world]) (2019).
7. Elaborate artificial intelligence with suitable example along with its application. (2019).
8. Discuss the historical evolution of AI. (2019).
9. State the relationship between agent and environment.(2019)
• Assignment questions:
1. Define Artificial Intelligence with its 4 categories.
2. Explain agent & its environment with vacuum cleaner world and algorithm.
3. Explain the agent program concept with table driven algorithm.
4. Explain the following terms (algorithm & example)
1. Simple reflex agent.
2. Goal based agent.
3. Utility based agent.
4. Model based agent.
5. Explain the problem-solving agent and the steps it performs.
6. With the help of problem solving agent solve the given toy problems.
1. 8 puzzle game.
2. Vacuum world problem.
3. 4 & 8 queen problem.
More Related Content

PPTX
FOUNDATIONS OF ARTIFICIAL INTELIGENCE BASICS
PPTX
AI INTRODUCTION Artificial intelligence.ppt
PDF
Sehran Rubani Artificial intelligence presentation by Dr
PPTX
Introduction to Artificial Intelligence and History of AI
PPTX
Basics of artificial intelligence and machine learning
PPTX
Artificial Intelligence problem solving agents
PPTX
Module 01 IS & MLA.pptx for 22 scheme notes in a ppt form
PDF
ARTIFICIAL INTELLIGENCETterm Paper
FOUNDATIONS OF ARTIFICIAL INTELIGENCE BASICS
AI INTRODUCTION Artificial intelligence.ppt
Sehran Rubani Artificial intelligence presentation by Dr
Introduction to Artificial Intelligence and History of AI
Basics of artificial intelligence and machine learning
Artificial Intelligence problem solving agents
Module 01 IS & MLA.pptx for 22 scheme notes in a ppt form
ARTIFICIAL INTELLIGENCETterm Paper

Similar to Module-I -Final Copy (1).pptx xcvbgnhjmcvb (20)

PPTX
AI UNIT-1(PPT)ccccxffrfydtffyfftdtxgxfxt
DOCX
Artificial intelligence
PDF
PPTX
Artificial Intelligences -CHAPTER 1_1.pptx
PDF
Cognitive Science Unit 2
DOCX
Cosc 208 lecture note-1
PDF
Artificial intelligence - Approach and Method
PPTX
محاضرة في الذكاء الاصطناعي والخدمات المستفادة.pptx
PPTX
Artificial Intelligence Chapter 1 and chapter 2
PDF
Presentation of Intro to AI unit -3.pdf
PPTX
1 Introduction to AI.pptx
PDF
Artificial intelligence
PPTX
Introduction to ArtificiaI Intelligence.pptx
PPTX
AI Chapter 1.pptx
PPTX
Unit 1 AI.pptx
PPT
Lec-01.ppt
PPTX
AI111111111111111111111111111111111.pptx
PPT
cloud computing and distributedcomputing
PPT
AI.ppt
PPTX
Introduction to Artificial Intelligence.
AI UNIT-1(PPT)ccccxffrfydtffyfftdtxgxfxt
Artificial intelligence
Artificial Intelligences -CHAPTER 1_1.pptx
Cognitive Science Unit 2
Cosc 208 lecture note-1
Artificial intelligence - Approach and Method
محاضرة في الذكاء الاصطناعي والخدمات المستفادة.pptx
Artificial Intelligence Chapter 1 and chapter 2
Presentation of Intro to AI unit -3.pdf
1 Introduction to AI.pptx
Artificial intelligence
Introduction to ArtificiaI Intelligence.pptx
AI Chapter 1.pptx
Unit 1 AI.pptx
Lec-01.ppt
AI111111111111111111111111111111111.pptx
cloud computing and distributedcomputing
AI.ppt
Introduction to Artificial Intelligence.
Ad

Recently uploaded (20)

PPTX
oil_refinery_comprehensive_20250804084928 (1).pptx
PDF
Foundation of Data Science unit number two notes
PPTX
1_Introduction to advance data techniques.pptx
PPT
ISS -ESG Data flows What is ESG and HowHow
PPTX
IB Computer Science - Internal Assessment.pptx
PPTX
Computer network topology notes for revision
PPTX
Introduction to Firewall Analytics - Interfirewall and Transfirewall.pptx
PPTX
climate analysis of Dhaka ,Banglades.pptx
PDF
168300704-gasification-ppt.pdfhghhhsjsjhsuxush
PPT
Miokarditis (Inflamasi pada Otot Jantung)
PPTX
01_intro xxxxxxxxxxfffffffffffaaaaaaaaaaafg
PPTX
Introduction-to-Cloud-ComputingFinal.pptx
PPTX
IBA_Chapter_11_Slides_Final_Accessible.pptx
PPTX
Introduction to Basics of Ethical Hacking and Penetration Testing -Unit No. 1...
PDF
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
PDF
Business Analytics and business intelligence.pdf
PPTX
DISORDERS OF THE LIVER, GALLBLADDER AND PANCREASE (1).pptx
PPTX
Business Ppt On Nestle.pptx huunnnhhgfvu
PPTX
Introduction to Knowledge Engineering Part 1
PDF
“Getting Started with Data Analytics Using R – Concepts, Tools & Case Studies”
oil_refinery_comprehensive_20250804084928 (1).pptx
Foundation of Data Science unit number two notes
1_Introduction to advance data techniques.pptx
ISS -ESG Data flows What is ESG and HowHow
IB Computer Science - Internal Assessment.pptx
Computer network topology notes for revision
Introduction to Firewall Analytics - Interfirewall and Transfirewall.pptx
climate analysis of Dhaka ,Banglades.pptx
168300704-gasification-ppt.pdfhghhhsjsjhsuxush
Miokarditis (Inflamasi pada Otot Jantung)
01_intro xxxxxxxxxxfffffffffffaaaaaaaaaaafg
Introduction-to-Cloud-ComputingFinal.pptx
IBA_Chapter_11_Slides_Final_Accessible.pptx
Introduction to Basics of Ethical Hacking and Penetration Testing -Unit No. 1...
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
Business Analytics and business intelligence.pdf
DISORDERS OF THE LIVER, GALLBLADDER AND PANCREASE (1).pptx
Business Ppt On Nestle.pptx huunnnhhgfvu
Introduction to Knowledge Engineering Part 1
“Getting Started with Data Analytics Using R – Concepts, Tools & Case Studies”
Ad

Module-I -Final Copy (1).pptx xcvbgnhjmcvb

  • 1. Module-I:Introduction Chapter-I: Introduction to Artificial Intelligence What is AI ? Foundations of Artificial Intelligence History of Artificial Intelligence Chapter-II: Agent and Environment. The concept of rationality The nature of environment. Structure of agent. Problem solving agents.
  • 2. What is Artificial Intelligent? Homo Sapiens : The name is Latin for "wise man”. Philosophy of AI - “Can a machine think and behave like humans do?” In Simple Words - Artificial Intelligence is a way of making a computer, a computer controlled robot, or a software think intelligently, in the similar manner the intelligent humans think.  Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. AI is accomplished by studying how human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis of developing intelligent software and systems.
  • 3. What is Artificial Intelligent? Views of AI fall into four categories: i. Thinking humanly ii. Thinking rationally iii. Acting humanly iv. Acting rationally
  • 4. In Figure we see eight definitions of AI, laid out along two dimensions. The definitions on top are concerned with thought processes and reasoning, whereas the ones on the bottom address behavior. The definitions on the left measure success in terms of fidelity to human performance,  whereas the ones on the right measure against an ideal performance measure, called rationality. A system is rational if it does the “right thing,” given what it knows. Historically, all four approaches to AI have been followed, each by different people with different methods. A human-centered approach must be in part an empirical science, involving observations and hypotheses about human behavior. A rationalist1 approach involves a combination of mathematics and engineering. The various group have both disparaged and helped each other.
  • 6. I-Thinking humanly: The cognitive modeling approach. If we are going to say that given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual working of human minds. There are 3 ways to do it: i. Through introspection --Trying to catch our own thoughts as they go ii. Through psychological experiments --Observing a person in action. iii. Through brain imaging --Observing the brain in action Comparison of the trace of computer program reasoning steps to traces of human subjects solving the same problem. Cognitive Science brings together computer models from AI and experimental techniques from psychology to try to construct precise and testable theories of the working of the human mind.
  • 7. Now distinct from AI -- AI and Cognitive Science fertilize each other in the areas of vision and natural language. Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program’s input-output behavior matches corresponding human behavior, that is evidence that the program’s mechanisms could also be working in humans.  For example, Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver”.
  • 8. II: Acting humanly: The Turing Test approach: Turing (1950) developed "Computing machinery and intelligence":  "Can machines think?" or "Can machines behave intelligently?" Operational test for intelligent behavior: the Imitation Game . A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a machine.  Suggested major components of AI: knowledge, reasoning, language understanding, learning
  • 9. The computer would need to posses the following capabilities: • Natural Language Processing : To enable it to communicate successfully in English. • Knowledge representation: To store what it knows or hear. •Automated reasoning: To use the stored information to answer questions and to draw new conclusions. • Machine Learning : To adopt to new circumstances and to detect and extrapolate patterns. •To pass the Total Turing Test • Computer vision: To perceive objects. • Robotics: To manipulate objects and move about.
  • 10. III. Thinking rationally: The “laws of thought” approach Greek scientist Aristotle was one of the first to attempt to codify ―right thinking, that is, ‖ unquestionable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises. Eg: Socrates is a man; All men are mortal; Therefore, Socrates is mortal.– logic There are two main obstacles to this approach. 1. It is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain. 2. Second, there is a big difference between solving a problem ―in principle and solving it in practice.
  • 11. IV. Acting rationally: The rational agent approach An agent is just something that acts. Agent=Architect + Program All computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, and adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. In the ―laws of thought approach to AI, the emphasis was on correct inferences. ‖  On the other hand, correct inference is not all of rationality; in some situations, there is no provably correct thing to do, but something must still be done. For example, recoiling from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.
  • 12. • What means “behave rationally” for a person/system: Take the right/ best action to achieve the goals, based on his/its knowledge and belief. Example: Assume I don’t like to get wet in rain (my goal), so I bring an umbrella (my action). Do I behave rationally? . The answer is dependent on my knowledge and belief o If I’ve heard the forecast for rain and I believe it, then bringing the umbrella is rational. oIf I’ve not heard the forecast for rain and I do not believe that it is going to rain, then bringing the umbrella is not rational
  • 13. “Behave rationally” does not always achieve the goals successfully Example:  My goals – (i) do not get wet if rain; (ii) do not looked stupid (such as bring an umbrella when not raining) My knowledge/belief – weather forecast for rain and I believe it. My rational behavior – bring an umbrella. The outcome of my behavior: If rain, then my rational behavior achieves both goals; If no rain, then my rational behavior fails to achieve the 2nd goal. The successfulness of “behave rationally” is limited by my knowledge and belief
  • 14. Foundations of Artificial Intelligence I: Philosophy • Can formal rules (exist and effective) be used to draw valid conclusions? • How does the mind arise from a physical brain? Where does knowledge come from? • How does knowledge lead to action? • Aristotle was the first to formulate a precise set of laws governing the rational part of the mind. He developed an informal system(relationship with formal laws) of syllogisms(Logical Reasoning) for proper reasoning, which in principle allowed one to generate conclusions mechanically, given initial premises. All dogs are animals; All animals have four legs; Therefore, all dogs have four legs
  • 15. • Descartes was a strong advocate of the power of reasoning in understanding the world, philosophy now called as rationalism. II: Mathematics • What are the formal rules to draw valid conclusions? What can be computed? • How do we reason with uncertain information? • Formal representation and proof algorithms: Propositional logic Computation: Turing tried to characterize exactly which functions are computable - capable of being computed. • (un)decidability: Incompleteness theory showed that in any formal theory, there are true statements that are un-decidable i.e. they have no proof within the theory. o “ a line can be extended infinitely in both directions”
  • 16. • (in)tractability: A problem is called intractable ( quality of bring very difficult to solve & manage)if the time required to solve instances of the problem grows exponentially with the size of the instance. • probability: Predicting the future. III: Economics • How should we make decisions so as to maximize payoff? • Economics is the study of how people make choices that lead to preferred outcomes(utility). • Decision theory: It combines probability theory with utility theory, provides a formal and complete framework for decisions made under uncertainty.
  • 17. IV: Neuroscience • How do brains process information? • Neuroscience is the study of the nervous system, particularly brain. • Brain consists of nerve cells or neurons. 10^11 neurons. • Neurons are considered as Computational units. V:Psychology • Behaviorism movement, led by John Watson(1878-1958). Behaviorists insisted on studying only objective measures of the percepts(stimulus) given to an animal and its resulting actions(or response). Behaviorism discovered a lot about rats and pigeons but had less success at understanding human. Cognitive psychology, views the brain as an information processing device. Common view among psychologist that a cognitive theory should be like a computer program.(Anderson 1980) i.e. It should describe a detailed information processing mechanism whereby some cognitive function might be implemented. Behaviors and action
  • 18. VI: Computer engineering • How can we build an efficient computer? • For artificial intelligence to succeed, we need two things: intelligence and an artifact; the computer has been the artifact of choice. • The first operational computer was the electromechanical Heath Robinson, built in 1940 by Alan Turing's team for a single purpose: deciphering German messages. • The first operational programmable computer was the Z-3, invented by Konrad Zuse in Germany in 1941. • The first electronic computer, the ABC, was assembled by John Atanasoff and his student Clifford Berry between 1940 and 1942 at Iowa State University. • The first programmable machine was a loom, devised in 1805 by Joseph Marie Jacquard (1752-1834), that used punched cards to store instructions for the pattern to be woven.
  • 19. VII: Control theory and cybernetics • How can artifacts operate under their own control? • Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water clock with a regulator that maintained a constant flow rate. This invention changed the definition of what an artifact could do. • Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of systems that maximize an objective function over time. This roughly matches our view of AI: designing systems that behave optimally. • Calculus and matrix algebra are the tools of control theory.
  • 20. VIII: Linguistics • How does language relate to thought? • In 1957, B. F. Skinner published Verbal Behavior, a comprehensive, detailed account of the behaviorist approach to language learning, written by the foremost expert in the field. • Modern linguistics and AI were "born" at about the same time and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing. • The problem of understanding language soon turned out to be considerably more complex than it seemed in 1957. • Understanding language requires an understanding of the subject matter and context, not just an understanding of the structure of sentences.
  • 21. History of Artificial Intelligence 1. The gestation of artificial intelligence (1943-1955): This period marked the early theoretical and conceptual groundwork for the field and laid the foundation for AI's subsequent development. 2. The birth of artificial intelligence (1956): The birth of AI in 1956 is commonly associated with the Dartmouth Conference, a seminal event that took place at Dartmouth College in Hanover, New Hampshire. 3. Early enthusiasm, great expectations (1952-1969): This period was characterized by early enthusiasm and great expectations. Researchers were optimistic about the potential of AI and believed that significant progress could be made in creating machines with human-like intelligence.
  • 22. 4. A dose of reality (1966-1973): This period is often referred to as "a dose of reality." Researchers faced challenges and setbacks that led to a re-evaluation of the initial optimism and expectations surrounding AI. 5. Knowledge-based systems: the key to power? (1969-1979): This period is characterized by a focus on knowledge-based systems, with researchers exploring the symbolic representation of knowledge to address challenges in AI. This era saw efforts to build expert systems, designed to emulate human expertise in specific domains. 6. AI becomes an industry (1980-present): This period marks the evolution of AI into an industry, witnessing significant advancements, increased commercialization, and widespread applications across various domains.
  • 23. 7. The return of neural networks (1986-present): This period is characterized by the resurgence of neural networks in AI, marked by significant advances in network architectures and training algorithms and by the widespread adoption of deep learning techniques. 8. AI adopts the scientific method (1987-present): AI research has adopted a more rigorous and empirical approach, applying experimental methodologies, reproducibility, and a greater emphasis on evidence-based practice. 9. The emergence of intelligent agents (1995-present): This period has been marked by the emergence and evolution of intelligent agents: autonomous entities that perceive their environment, make decisions, and take actions to achieve goals.
  • 24. 10. The availability of very large data sets (2001-present): This period has been characterized by the availability and utilization of very large datasets, with unprecedented growth in the volume and diversity of data providing a foundation for training increasingly sophisticated AI models.
  • 25. Chapter II: Intelligent Agents Introduction: •In this chapter we will see that the concept of rationality can be applied to a wide variety of agents operating in any imaginable environment. •Use this concept to develop a small set of design principles for building successful agents— systems that can reasonably be called intelligent. •We begin by examining agents, environments, and the coupling between them. • The observation that some agents behave better than others leads naturally to the idea of a rational agent—one that behaves as well as possible. • How well an agent can behave depends on the nature of the environment; some environments are more difficult than others. •We give a crude categorization of environments and show how properties of an environment influence the design of suitable agents for that environment.
  • 26. Agents and environment An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. This simple idea is illustrated in the figure.
  • 27. • Percept: the agent's perceptual input at a given instant. • Percept sequence: the complete history of everything the agent has perceived to date. • Agent function: a mapping from the percept sequence to an action. • Performance measure of an agent: the criterion that determines how successful an agent is. • Behavior of an agent: the action that the agent performs after any given sequence of percepts.
  • 28. •We can imagine tabulating the agent function that describes any given agent; for most agents, this would be a very large table—infinite, in fact, unless we place a bound on the length of percept sequences we want to consider. • Given an agent to experiment with, we can, in principle, construct this table by trying out all possible percept sequences and recording which actions the agent does in response. • The table is, of course, an external characterization of the agent. Internally, the agent function for an artificial agent will be implemented by an agent program. • It is important to keep these two ideas distinct. • The agent function is an abstract mathematical description. • The agent program is a concrete implementation, running within some physical system.
  • 29. To illustrate these ideas, we use a very simple example: the vacuum-cleaner world. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in the figure below.
  • 30. Fig: A vacuum-cleaner world with just two locations.
  • 32. Fig: Agent function for a reflex agent in the two-state vacuum environment.
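The agent function just described ("if the current square is dirty, suck; otherwise move to the other square") is small enough to write down directly. A minimal sketch in Python, assuming percepts of the form (location, status):

    # The two-square vacuum agent function described above.
    def reflex_vacuum_agent(percept):
        location, status = percept  # e.g. ("A", "Dirty")
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
    print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
    print(reflex_vacuum_agent(("B", "Clean")))  # -> Left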
  • 33. Concept of Rationality A rational agent is one that does the right thing; conceptually speaking, every entry in the table for the agent function is filled out correctly. What is rational at any given time depends on four things: • The performance measure that defines the criterion of success. • The agent's prior knowledge of the environment. • The actions that the agent can perform. • The agent's percept sequence to date. Definition of a rational agent: "For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has."
  • 34. Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the other square if not; this is the agent function tabulated above. Is this a rational agent? That depends! First, we need to say what the performance measure is, what is known about the environment, and what sensors and actuators the agent has. Let us assume the following: • The performance measure awards one point for each clean square at each time step, over a "lifetime" of 1000 time steps. • The "geography" of the environment is known a priori, but the dirt distribution and the initial location of the agent are not. Clean squares stay clean and sucking cleans the current square. The Left and Right actions move the agent left and right except when this would take the agent outside the environment, in which case the agent remains where it is. • The only available actions are Left, Right, and Suck. • The agent correctly perceives its location and whether that location contains dirt. We claim that under these circumstances the agent is indeed rational.
  • 35. The nature of environment I: Specifying the task environment • In our discussion of the rationality of the simple vacuum-cleaner agent, we had to specify the performance measure, the environment, and the agent's actuators and sensors. We group all these under the heading of the task environment. • Task environments are essentially the "problems" to which rational agents are the "solutions." • The specification of the performance measure, the environment, and the agent's actuators and sensors is called the PEAS (Performance, Environment, Actuators, Sensors) description. • In designing an agent, the first step must always be to specify the task environment as fully as possible.
  • 36. The vacuum world was a simple example; let us consider a more complex problem: an automated taxi driver. Figure 2.4 summarizes the PEAS description for the taxi's task environment; a simple way to record such a description in code is sketched below.
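The sketch below is illustrative only; the class name PEAS and the field values (which paraphrase the taxi description discussed on the next slides) are assumptions of this example:

    # Recording a PEAS description (illustrative sketch).
    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance: list[str]
        environment: list[str]
        actuators: list[str]
        sensors: list[str]

    taxi = PEAS(
        performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
        sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
    )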
  • 37. First, what is the performance measure to which we would like our automated driver to aspire? Desirable qualities include: • getting to the correct destination; • minimizing fuel consumption and wear and tear; • minimizing the trip time or cost; • minimizing violations of traffic laws and disturbances to other drivers; • maximizing safety and passenger comfort; • maximizing profits. Obviously, some of these goals conflict, so trade-offs will be required. Next, what is the driving environment that the taxi will face? Any taxi driver must deal with a variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways. The roads contain other traffic, pedestrians, stray animals, road works, police cars, puddles, and potholes.
  • 38. • The actuators for an automated taxi include those available to a human driver: control over the engine through the accelerator, and control over steering and braking. In addition, it will need output to a display screen or voice synthesizer to talk back to the passengers, and perhaps some way to communicate with other vehicles, politely or otherwise. • The basic sensors for the taxi will include one or more controllable video cameras so that it can see the road; it might augment these with infrared or sonar sensors to detect distances to other cars and obstacles. To avoid speeding tickets, the taxi should have a speedometer, and to control the vehicle properly, especially on curves, it should have an accelerometer.
  • 39. II. Properties of task environments • The range of task environments that might arise in AI is obviously vast. • We can, however, identify a fairly small number of dimensions along which task environments can be categorized. • These dimensions determine, to a large extent, the appropriate agent design and the applicability of each of the principal families of techniques for agent implementation. • First we list the dimensions, then we analyze several task environments to illustrate the ideas. The definitions here are informal.
  • 40. Fig: The basic PEAS elements for a number of additional agent types.
  • 41. I. Fully observable vs. partially observable: • If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. • A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure. • Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. • An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data. For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking. • If the agent has no sensors at all, then the environment is unobservable.
  • 42. II. Single-agent vs. multiagent: • An agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment. • There are, however, some subtle issues. First, we have described how an entity may be viewed as an agent, but we have not explained which entities must be viewed as agents. • Does an agent A (the taxi driver, for example) have to treat an object B (another vehicle) as an agent, or can it be treated merely as an object behaving according to the laws of physics, analogous to waves at the beach or leaves blowing in the wind? • The key distinction is whether B's behavior is best described as maximizing a performance measure whose value depends on agent A's behavior. • For example, in chess, the opponent entity B is trying to maximize its performance measure, which, by the rules of chess, minimizes agent A's performance measure.
  • 43. III. Deterministic vs. stochastic: • If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic. • In principle, an agent need not worry about uncertainty in a fully observable, deterministic environment. (In our definition, we ignore uncertainty that arises purely from the actions of other agents in a multiagent environment; thus, a game can be deterministic even though each agent may be unable to predict the actions of the others.) • If the environment is partially observable, however, then it could appear to be stochastic. Most real situations are so complex that it is impossible to keep track of all the unobserved aspects; for practical purposes, they must be treated as stochastic. Taxi driving is clearly stochastic in this sense, because one can never predict the behavior of traffic exactly.
  • 44. IV. Episodic vs. sequential: • In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Episodes do not depend on the actions taken in previous episodes, and they do not influence future episodes. Example: an agent that has to spot defective parts on an assembly line. • In sequential environments, the current decision could affect future decisions; actions can have long-term consequences. Examples: chess, taxi driving. • Episodic environments are much simpler than sequential ones: there is no need to think ahead!
  • 45. V. Static vs. dynamic: • The task environment is dynamic if it can change while the agent is choosing an action, and static otherwise. In a dynamic environment the agent needs to keep looking at the world while deciding on an action. Examples: crossword puzzles are static; taxi driving is dynamic. • The task environment is semidynamic if the environment itself does not change with time, but the agent's performance score does. Example: chess with a clock. • Static environments are easier to deal with than (semi)dynamic ones.
  • 46. VI. Discrete vs. continuous: • The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. • Example: the chess environment has a finite number of distinct states (excluding the clock), and chess also has a discrete set of percepts and actions; crossword puzzles likewise have discrete state, time, percepts, and actions. • Example: taxi driving is a continuous-state and continuous-time problem, with continuous percepts and actions. Note: the simplest kind of environment is fully observable, single-agent, deterministic, episodic, static, and discrete (e.g., the simple vacuum cleaner). Most real-world situations are partially observable, multiagent, stochastic, sequential, dynamic, and continuous (e.g., taxi driving).
  • 48. Properties of the agent's state of knowledge: known vs. unknown • This distinction describes the agent's (or designer's) state of knowledge about the "laws of physics" of the environment. • If the environment is known, then the outcomes (or outcome probabilities, if stochastic) for all actions are given. • If the environment is unknown, then the agent will have to learn how it works in order to make good decisions. • This distinction is orthogonal to the other task-environment properties. Known is not the same as fully observable: • a known environment can be partially observable (e.g., a solitaire card game); • an unknown environment can be fully observable (e.g., a game whose rules I don't know).
  • 49. Structure of agent •So far we have talked about agents by describing behavior—the action that is performed after any given sequence of percepts. Now we must bite the bullet and talk about how the insides work. •The job of AI is to design an agent program that implements the agent function— the mapping from percepts to actions. •We assume this program will run on some sort of computing device with physical sensors and actuators—we call this the architecture. agent = architecture + program
  • 50. i. Agent programs • The job of AI: design an agent program implementing the agent function. • The agent program runs on some computing device with physical sensors and actuators: the agent architecture. • All agents have the same skeleton (sketched below): • Input: the current percept. • Output: an action. • Program: manipulates the input to produce the output. • Note: the agent function takes the entire percept history as input, whereas the agent program takes only the current percept as input. If the actions need to depend on the entire percept sequence, the agent will have to remember the percepts.
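A minimal sketch of this skeleton; the names Agent and make_remembering_program are ours, invented for this example:

    # Skeleton shared by all agent programs: current percept in, action out.
    class Agent:
        def __init__(self, program):
            self.program = program  # function: percept -> action

        def step(self, percept):
            return self.program(percept)

    # If actions must depend on the whole percept sequence, the program
    # has to remember percepts itself:
    def make_remembering_program(decide):
        percepts = []  # internal memory of the percept history
        def program(percept):
            percepts.append(percept)
            return decide(percepts)  # decide from the full sequence
        return program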
  • 51. Example: the table-driven agent • The table represents the agent function explicitly, as a lookup table of actions to be taken for every possible percept sequence. Example: the simple vacuum cleaner. • This only works for a small number of possible percept sequences.
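A sketch of the table-driven idea for the vacuum world, with a deliberately partial table (only the first few percept sequences are tabulated; any untabulated sequence returns None):

    # Table-driven agent: look up an action for the entire percept sequence.
    percepts = []

    table = {  # partial table for the two-square vacuum world
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        (("B", "Dirty"),): "Suck",
        (("A", "Clean"), ("A", "Clean")): "Right",
        (("A", "Clean"), ("A", "Dirty")): "Suck",
    }

    def table_driven_agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # None if the sequence is not tabulated

    print(table_driven_agent(("A", "Dirty")))  # -> Suck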
  • 52. We as designers must construct a table that contains the appropriate action for every possible percept sequence. It is instructive to consider why the table-driven approach to agent construction is doomed to failure. Let P be the set of possible percepts and let T be the lifetime of the agent (the total number of percepts it will receive). The lookup table will contain ∑_{t=1}^{T} |P|^t entries. Agents can instead be grouped into five classes based on their degree of perceived intelligence and capability; all of these agents can improve their performance and generate better actions over time: • Simple reflex agent • Model-based reflex agent • Goal-based agent • Utility-based agent • Learning agent
  • 53. Simple reflex agents: • Simple reflex agents are the simplest agents. They select actions on the basis of the current percept alone, ignoring the rest of the percept history. • These agents succeed only in fully observable environments. • A simple reflex agent works on condition-action rules, mapping the current state directly to an action. Example: a room-cleaner agent that sucks only if there is dirt in the room. • Problems with the simple reflex agent design approach: • very limited intelligence; • no knowledge of non-perceptual parts of the current state; • the rule set is often too big to generate and store; • not adaptive to changes in the environment.
  • 54. • Simple reflex behaviors occur even in more complex environments. • Imagine yourself as the driver of the automated taxi. If the car in front brakes and its brake lights come on, then you should notice this and initiate braking. • In other words, some processing is done on the visual input to establish the condition we call "the car in front is braking." This then triggers some established connection in the agent program to the action "initiate braking." • We call such a connection a condition-action rule, written as: if car-in-front-is-braking then initiate-braking
  • 55. Fig: Schematic of a simple reflex agent. (Rectangles denote the current internal state of the decision process; ovals denote background information used in the process; RULE-MATCH returns a rule from the rule set.)
  • 56. •We use rectangles to denote the current internal state of the agent’s decision process, and ovals to represent the background information used in the process. • The agent program, which is also very simple, is shown in Figure 2.10. •The INTERPRET-INPUT function generates an abstracted description of the current state from the percept, and the RULE-MATCH function returns the first rule in the set of rules that matches the given state description. •Note that the description in terms of “rules” and “matching” is purely conceptual; actual implementations can be as simple as a collection of logic gates implementing a Boolean circuit.
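A sketch of that program structure, with hypothetical driving rules; interpret_input here abstracts a made-up sensor dictionary into one of the conditions:

    # Simple reflex agent: INTERPRET-INPUT abstracts the percept,
    # RULE-MATCH picks the first matching condition-action rule.
    rules = [  # hypothetical (condition, action) pairs
        ("car-in-front-is-braking", "initiate-braking"),
        ("clear-road", "keep-driving"),
    ]

    def interpret_input(percept):
        # percept assumed to be a dict of sensor readings (illustrative)
        return "car-in-front-is-braking" if percept.get("brake_lights") else "clear-road"

    def rule_match(state, rules):
        for condition, action in rules:
            if condition == state:
                return action

    def simple_reflex_agent(percept):
        return rule_match(interpret_input(percept), rules)

    print(simple_reflex_agent({"brake_lights": True}))  # -> initiate-braking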
  • 57. Model-based reflex agents: • A model-based agent can work in a partially observable environment and track the situation. • It has two important components: • Model: knowledge about "how things happen in the world" (hence "model-based"). • Internal state: a representation of the current state based on the percept history. • Based on the model, the agent performs actions. Updating the agent's state requires information about: • how the world evolves; • how the agent's actions affect the world.
  • 59. Fig: Schematic of a model-based reflex agent. (UPDATE-STATE creates the new internal state description; RULE-MATCH returns a rule from the rule set.)
  • 60. •For the braking problem, the internal state is not too extensive— just the previous frame from the camera, allowing the agent to detect when two red lights at the edge of the vehicle go on or off simultaneously. •For other driving tasks such as changing lanes, the agent needs to keep track of where the other cars are if it can’t see them all at once. And for any driving to be possible at all, the agent needs to keep track of where its keys are. •Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program. • First, we need some information about how the world evolves independently of the agent —for example, that an overtaking car generally will be closer behind than it was a moment ago.
  • 61. •Second, we need some information about how the agent’s own actions affect the world— for example, that when the agent turns the steering wheel clockwise, the car turns to the right, or that after driving for five minutes northbound on the freeway, one is usually about five miles north of where one was five minutes ago. •This knowledge about “how the world works”—whether implemented in simple Boolean circuits or in complete scientific theories—is called a model of the world. An agent that uses such a model is called a model-based agent.
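A minimal sketch of a model-based reflex agent: the internal state is updated from the previous state, the last action, and the new percept, via an update_state function that encodes both kinds of knowledge ("how the world evolves" and "what my actions do"). All names here are illustrative:

    # Model-based reflex agent sketch.
    class ModelBasedReflexAgent:
        def __init__(self, update_state, rules):
            self.state = None                 # internal description of the world
            self.action = None                # most recent action
            self.update_state = update_state  # encodes the world model
            self.rules = rules                # dict: state description -> action

        def step(self, percept):
            self.state = self.update_state(self.state, self.action, percept)
            self.action = self.rules.get(self.state, "NoOp")
            return self.action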
  • 62. Goal-based agents • Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do. • The agent needs to know its goal, which describes desirable situations. • Goal-based agents expand the capabilities of the model-based agent by adding "goal" information. • They choose actions so as to achieve the goal. • These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive. • Sometimes goal-based action selection is straightforward: for example, when goal satisfaction results immediately from a single action.
  • 63. • Sometimes it will be trickier: for example, when the agent has to consider long sequences of twists and turns to find a way to achieve the goal. • Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals; a minimal search-based sketch follows below.
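Here, breadth-first search looks for an action sequence that reaches a goal state. The successors function (yielding (action, next_state) pairs) and hashable states are assumptions of this sketch:

    # Find an action sequence reaching a goal state (breadth-first search).
    from collections import deque

    def plan(start, goal_test, successors):
        frontier = deque([(start, [])])  # (state, actions that reach it)
        explored = {start}
        while frontier:
            state, actions = frontier.popleft()
            if goal_test(state):
                return actions
            for action, nxt in successors(state):
                if nxt not in explored:
                    explored.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None  # no plan found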
  • 64. Utility-based agents • These agents are similar to goal-based agents but add a utility measurement, which provides a measure of success at a given state. • A utility-based agent acts based not only on goals but also on the best way to achieve them. • Utility-based agents are useful when there are multiple possible alternatives and the agent has to choose the best action. • The utility function maps each state to a real number, measuring how effectively actions achieve the goals.
  • 65. • Advantages of utility-based agents over goal-based agents: • with conflicting goals, the utility specifies an appropriate trade-off; • with several goals, none of which can be achieved with certainty, the utility selects a proper trade-off between the importance of the goals and the likelihood of success. • Drawbacks: still complicated to implement; require sophisticated perception, reasoning, and learning; may require expensive computation.
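The selection rule itself is compact: among the available actions, pick the one with the highest expected utility. In this sketch, outcomes(action) is an assumed model returning (probability, resulting_state) pairs and utility maps a state to a real number:

    # Utility-based action selection sketch.
    def choose_action(actions, outcomes, utility):
        def expected_utility(action):
            return sum(p * utility(s) for p, s in outcomes(action))
        return max(actions, key=expected_utility)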
  • 67. Learning agents: • Problem: the previous agent programs describe methods for selecting actions, but how are these agent programs themselves programmed? Programming them by hand is inefficient and ineffective! • Solution: build learning machines and then teach them (rather than instruct them). • Advantage: robustness of the agent program in initially unknown environments.
  • 68. • Performance element: selects actions based on percepts; corresponds to the previous agent programs. • Learning element: introduces improvements; uses feedback from the critic on how the agent is doing and determines improvements for the performance element. • Critic: tells how the agent is doing with respect to a performance standard. • Problem generator: suggests actions that will lead to new and informative experiences, forcing exploration of new, stimulating scenarios. A skeleton wiring these components together is sketched below.
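In this sketch, every function is a hypothetical placeholder for the real machinery:

    # Learning agent skeleton (all four components are assumed callables).
    class LearningAgent:
        def __init__(self, performance_element, learning_element,
                     critic, problem_generator):
            self.perform = performance_element  # percept -> action
            self.learn = learning_element       # feedback -> improvements
            self.critic = critic                # percept -> feedback vs. standard
            self.explore = problem_generator    # suggests an informative action, or None

        def step(self, percept):
            feedback = self.critic(percept)
            self.learn(feedback)  # improve the performance element
            # Try an exploratory action first; otherwise act normally.
            return self.explore() or self.perform(percept)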
  • 69. •Example: Taxi Driving  After the taxi makes a quick left turn across three lanes, the critic observes the shocking language used by other drivers.  From this experience, the learning element formulates a rule saying this was a bad action.  The performance element is modified by adding the new rule.  The problem generator might identify certain areas of behavior in need of improvement, and suggest trying out the brakes on different road surfaces under different conditions
  • 70. Problem solving agents PROBLEM-SOLVING APPROACH IN ARTIFICIAL INTELLIGENCE: • Reflex agents are the simplest agents because they directly map states into actions. Unfortunately, these agents fail to operate in environments where the mapping is too large to store and learn. • Goal-based agents, on the other hand, consider future actions and their desired outcomes. • Here we discuss one type of goal-based agent known as a problem-solving agent, which uses atomic representations: states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
  • 71. Problem-solving agent: The problem-solving agent works by precisely defining problems and their solutions. • According to psychology, "problem solving refers to a state where we wish to reach a definite goal from a present state or condition." • According to computer science, "problem solving is a part of artificial intelligence that encompasses a number of techniques, such as algorithms and heuristics, to solve a problem." • Therefore, a problem-solving agent is a goal-driven agent focused on satisfying its goal.
  • 72. PROBLEM DEFINITION: To build a system to solve a particular problem, we need to do four things: (i) Define the problem precisely. This definition must include a specification of the initial situations and also of the final situations that constitute acceptable solutions to the problem. (ii) Analyze the problem: important features can have an immense impact on the appropriateness of various techniques for solving the problem. (iii) Isolate and represent the knowledge needed to solve the problem. (iv) Choose the best problem-solving technique and apply it to the particular problem.
  • 73. Steps performed by a problem-solving agent: • Goal formulation: the first and simplest step in problem solving. Based on the current situation and the agent's performance measure, it organizes the steps required to formulate one goal out of multiple goals, as well as the actions needed to achieve that goal. • Problem formulation: the most important step of problem solving; it decides what actions should be taken to achieve the formulated goal. Five components are involved in problem formulation (see the sketch below): • Initial state: the starting state of the agent on the way to its goal. • Actions: a description of the possible actions available to the agent. • Transition model (successor function): describes what each action does. • Goal test: determines whether a given state is a goal state. • Path cost: assigns a numeric cost to each path toward the goal. The problem-solving agent selects a cost function that reflects its performance measure.
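The five components map naturally onto a problem class, in the style popularized by the AIMA codebase; this is a sketch with abstract methods, not a full implementation:

    # The five components of problem formulation as a class (sketch).
    class Problem:
        def __init__(self, initial_state, goal_state):
            self.initial_state = initial_state
            self.goal_state = goal_state

        def actions(self, state):
            """Possible actions available in `state`."""
            raise NotImplementedError

        def result(self, state, action):
            """Transition model: the state that results from doing `action`."""
            raise NotImplementedError

        def goal_test(self, state):
            return state == self.goal_state

        def path_cost(self, cost_so_far, state, action, next_state):
            """Numeric cost of the path; default: one unit per step."""
            return cost_so_far + 1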
  • 74. Examples for the problem-solving agent: 1. Toy problems: 1. the 8-puzzle problem; 2. the vacuum problem; 3. the 4/8-queens problem. 2. Real-world problems: 1. Robot navigation: a generalization of the route-finding problem. Rather than following a discrete set of routes, a robot can move in a continuous space with (in principle) an infinite set of possible actions and states. 2. Traveling salesperson problem (TSP): a touring problem in which each city must be visited exactly once; the aim is to find the shortest tour.
  • 75. VTU University questions: 1. What is PEAS? Explain different agent types with their PEAS descriptions. (VTU 2019-20) 2. Explain in detail the properties of task environments. (VTU 2019-20) 3. Define AI and list the task domains of AI. (VTU 21-22) 4. Explain types of environment. (2019) 5. Explain the concept of rationality. (2019) 6. Explain the reflex agent with state. (simple reflex [vacuum world]) (2019) 7. Elaborate artificial intelligence with a suitable example along with its applications. (2019) 8. Discuss the historical evolution of AI. (2019) 9. State the relationship between agent and environment. (2019)
  • 76. • Assignment questions: 1. Define Artificial Intelligence with its four categories. 2. Explain an agent and its environment with the vacuum-cleaner world and algorithm. 3. Explain the agent program concept with the table-driven algorithm. 4. Explain the following terms (algorithm & example): 1. Simple reflex agent. 2. Goal-based agent. 3. Utility-based agent. 4. Model-based agent. 5. Explain the problem-solving agent with the steps it performs. 6. With the help of a problem-solving agent, solve the given toy problems: 1. the 8-puzzle game; 2. the vacuum-world problem; 3. the 4- and 8-queens problem.

Editor's Notes

  • #7: GPS was intended to work as a universal problem-solver machine, applying heuristic techniques to a given problem and conducting means-ends analysis. Heuristic technique: a type of search process used in problem solving; it uses previously known information to reduce the amount of searching needed to find an optimal solution.
  • #15: Descartes: French philosopher and mathematician who developed a dualistic theory of mind and matter. Turing machine: an abstract computational model that computes by reading and writing an infinite tape; it yields powerful results about which problems can and cannot be solved.
  • #16: Utility theory: the level of satisfaction a person derives from consuming a good or service. Probability theory: the systematic study of random outcomes.
  • #19: Control theory: the study of how agents interact with their environment to achieve a desired goal, and of designing algorithms that enable agents to make optimal decisions. Stochastic model: a mathematical model of a real-world system whose behavior varies randomly over time.