LECTURE NOTES
ON
ARTIFICIAL
INTELLIGENCE
ARTIFICIAL INTELLIGENCE
What is Artificial Intelligence?
It is a branch of Computer Science that pursues creating computers or machines that are as intelligent as human beings.
It is the science and engineering of making intelligent machines, especially intelligent computer programs.
It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Definition: Artificial Intelligence is the study of how to make computers do things, which, at the
moment, people do better.
According to the father of Artificial Intelligence, John McCarthy, it is “The science and
engineering of making intelligent machines, especially intelligent computer programs”.
Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in a manner similar to the way intelligent humans think.
AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.
It has gained prominence recently due, in part, to big data, or the increase in speed, size and
variety of data businesses are now collecting. AI can perform tasks such as identifying patterns
in the data more efficiently than humans, enabling businesses to gain more insight out of
their data.
From a business perspective, AI is a set of very powerful tools and methodologies for using those tools to solve business problems.
From a programming perspective, AI includes the study of symbolic programming, problem
solving, and search.
AI Vocabulary
Intelligence relates to tasks involving higher mental processes, e.g. creativity, solving problems,
pattern recognition, classification, learning, induction, deduction, building analogies,
optimization, language processing, knowledge and many more. Intelligence is the computational
part of the ability to achieve goals.
Intelligent behaviour is exhibited by perceiving one’s environment, acting in complex
environments, learning and understanding from experience, reasoning to solve problems and
discover hidden knowledge, applying knowledge successfully in new situations, thinking
abstractly, using analogies, communicating with others and more.
Science based goals of AI pertain to developing concepts, mechanisms and understanding
biological intelligent behaviour. The emphasis is on understanding intelligent behaviour.
Engineering based goals of AI relate to developing concepts, theory and practice of building
intelligent machines. The emphasis is on system building.
AI Techniques depict how we represent, manipulate and reason with knowledge in order to
solve problems. Knowledge is a collection of ‘facts’. To manipulate these facts by a program, a
suitable representation is required. A good representation facilitates problem solving.
Learning means that programs can only learn what their representations of facts or behaviour allow them to capture. Learning denotes changes in a system that are adaptive; in other words, it enables the system to do the same task(s) more efficiently next time.
Applications of AI include problem solving, search and control strategies, speech recognition, natural language understanding, computer vision, expert systems, etc.
Problems of AI:
Intelligence does not imply perfect understanding; every intelligent being has limited perception,
memory and computation. Many points on the spectrum of intelligence versus cost are viable,
from insects to humans. AI seeks to understand the computations required for intelligent behaviour and to produce computer systems that exhibit intelligence. Aspects of intelligence studied by AI include perception, communication using human languages, reasoning, planning, learning and memory.
The following questions are to be considered before we can step forward:
1. What are the underlying assumptions about intelligence?
2. What kinds of techniques will be useful for solving AI problems?
3. At what level can human intelligence be modelled?
4. How will we know when an intelligent program has been built?
Branches of AI:
A list of branches of AI is given below. However some branches are surely missing, because no
one has identified them yet. Some of these may be regarded as concepts or topics rather than full
branches.
Logical AI — In general the facts of the specific situation in which it must act, and its goals are
all represented by sentences of some mathematical logical language. The program decides what
to do by inferring that certain actions are appropriate for achieving its goals.
Search — Artificial Intelligence programs often examine large numbers of possibilities – for
example, moves in a chess game and inferences by a theorem proving program. Discoveries are
frequently made about how to do this more efficiently in various domains.
Pattern Recognition — When a program makes observations of some kind, it is often planned
to compare what it sees with a pattern. For example, a vision program may try to match a pattern
of eyes and a nose in a scene in order to find a face. More complex patterns are like a natural
language text, a chess position or in the history of some event. These more complex patterns
require quite different methods than do the simple patterns that have been studied the most.
Representation — Usually languages of mathematical logic are used to represent the facts about
the world.
Inference — From some facts, others can be inferred. Mathematical logical deduction is sufficient for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Normal logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning.
Common sense knowledge and Reasoning — This is the area in which AI is farthest from the
human level, in spite of the fact that it has been an active research area since the 1950s. While
there has been considerable progress in developing systems of non-monotonic reasoning and
theories of action, yet more new ideas are needed.
Learning from experience — There are some rules expressed in logic for learning. Programs
can only learn what facts or behaviour their formalisms can represent, and unfortunately learning
systems are almost all based on very limited abilities to represent information.
Planning — Planning starts with general facts about the world (especially facts about the effects
of actions), facts about the particular situation and a statement of a goal. From these, planning
programs generate a strategy for achieving the goal. In the most common cases, the strategy is
just a sequence of actions.
Epistemology — This is a study of the kinds of knowledge that are required for solving
problems in the world.
Ontology — Ontology is the study of the kinds of things that exist. In AI the programs and
sentences deal with various kinds of objects and we study what these kinds are and what their
basic properties are. Ontology assumed importance from the 1990s.
Heuristics — A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates, which compare two nodes in a search tree to see if one is better than the other (i.e. constitutes an advance toward the goal), may be even more useful.
Genetic programming — Genetic programming is an automated method for creating a working computer program from a high-level statement of a problem. It starts from a high-level statement of ‘what needs to be done’ and automatically creates a computer program to solve the problem.
Applications of AI
AI has applications in all fields of human study, such as finance and economics, environmental
engineering, chemistry, computer science, and so on. Some of the applications of AI are listed
below:
 Perception
■ Machine vision
■ Speech understanding
■ Touch (tactile or haptic) sensation
 Robotics
 Natural Language Processing
■ Natural Language Understanding
■ Speech Understanding
■ Language Generation
■ Machine Translation
 Planning
 Expert Systems
 Machine Learning
 Theorem Proving
 Symbolic Mathematics
 Game Playing
AI Technique:
Artificial Intelligence research during the last three decades has concluded that intelligence requires knowledge. To compensate for its one overpowering asset, indispensability, knowledge possesses some less desirable properties:
A. It is huge.
B. It is difficult to characterize correctly.
C. It is constantly varying.
D. It differs from data by being organized in a way that corresponds to its application.
E. It is complicated.
An AI technique is a method that exploits knowledge that is represented so that:
 The knowledge captures generalizations; situations that share important properties are grouped together rather than being represented separately.
 It can be understood by the people who must provide it. Even though for many programs the bulk of the data comes automatically from readings, in many AI domains most of the knowledge must ultimately be supplied by people in terms they understand.
 It can be easily modified to correct errors and reflect changes in real conditions.
 It can be widely used even if it is incomplete or inaccurate.
 It can be used to help overcome its own sheer bulk by helping to narrow the range of possibilities that must usually be considered.
In order to characterize an AI technique let us consider initially OXO or tic-tac-toe and use a
series of different approaches to play the game.
The programs increase in complexity, in their use of generalizations, in the clarity of their knowledge and in the extensibility of their approach. In this way they move towards being representations of AI techniques.
Example-1: Tic-Tac-Toe
1.1 The first approach (simple)
The Tic-Tac-Toe board is represented by a nine-element vector called BOARD, whose elements correspond to the squares numbered 1 to 9 arranged in three rows.
An element contains the value 0 for blank, 1 for X and 2 for O. A MOVETABLE vector consists of 19,683 (3^9) elements, each of which is itself a nine-element vector. The contents of MOVETABLE are especially chosen to help the algorithm.
The algorithm makes moves by pursuing the following:
1. View the BOARD vector as a ternary number and convert it to a decimal number.
2. Use the decimal number as an index into MOVETABLE and access the vector stored there.
3. Set BOARD to this vector, indicating how the board looks after the move.
This approach is efficient in terms of time, but it has several disadvantages: the move table takes a great deal of space, it requires a stunning amount of effort to calculate and fill in all of its entries, and the method is specific to this game and cannot be generalized to other problems.
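As an illustration, here is a minimal Python sketch of the table-lookup idea above, assuming BOARD is a list of nine digits (0 = blank, 1 = X, 2 = O) and that a hypothetical precomputed MOVETABLE of 3^9 = 19,683 board vectors is available.

def board_to_index(board):
    """Treat the nine-element board as a ternary number and convert it to decimal."""
    index = 0
    for cell in board:              # most significant digit first
        index = index * 3 + cell
    return index

def make_move(board, movetable):
    """Replace the board with the precomputed successor position from the table."""
    return list(movetable[board_to_index(board)])

# An empty board maps to index 0; a board with X only in square 1 maps to 3**8.
assert board_to_index([0] * 9) == 0
assert board_to_index([1, 0, 0, 0, 0, 0, 0, 0, 0]) == 3 ** 8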
1.2 The second approach
In this approach the board positions are numbered with the entries of a 3×3 magic square, so that the sum of every row, column and diagonal is 15. Player 1 uses X and Player 2 uses O, and the player who first claims three squares that form a line (three magic-square values summing to 15) wins the game.
Here, we assign board positions to vector elements.
Algorithm – Tic-Tac-Toe game playing using the magic square:
 The machine first checks whether it has a chance to win: for each pair of squares it already owns, it computes the difference between 15 and the sum of the two squares. If this difference is not positive or is greater than 9, the original two squares were not collinear and the pair can be ignored; otherwise, if the square whose value equals the difference is still free, playing it completes a line and wins.
 Otherwise, the same check is made for the opponent’s squares, and the machine blocks the opponent’s chance of winning.
Note: refer to the exercise done in the class.
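For illustration, a short Python sketch of the magic-square test just described; the particular magic-square layout below is an assumption, since any 3×3 magic square summing to 15 per line works.

MAGIC = [8, 1, 6,
         3, 5, 7,
         4, 9, 2]        # one possible magic-square numbering of the board

def winning_square(owned, free):
    """Return the magic value of a free square that completes a line, or None.

    `owned` holds the magic values of the squares a player already occupies,
    `free` holds the magic values of the squares that are still empty.
    """
    owned = sorted(owned)
    for i in range(len(owned)):
        for j in range(i + 1, len(owned)):
            diff = 15 - (owned[i] + owned[j])
            # Not positive or greater than 9: the two squares are not collinear.
            if diff <= 0 or diff > 9 or diff == owned[i] or diff == owned[j]:
                continue
            if diff in free:
                return diff
    return None

# Holding the squares valued 8 and 1 (the top row of MAGIC), the winning play is 6.
print(winning_square({8, 1}, {2, 3, 4, 5, 6, 7, 9}))   # prints 6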
1.3 The final approach
The structure of the data consists of BOARD, which contains a nine-element vector, a list of board positions that could result from the next move, and a number representing an estimate of how likely the board position is to lead to an ultimate win for the player to move.
This algorithm looks ahead in order to decide on the next move: it determines which move is the most promising or most suitable at any stage and selects it.
Consider all possible moves and replies that the program can make. Continue this process for
as long as time permits until a winner emerges, and then choose the move that leads to the computer
program winning, if possible in the shortest time.
Note: refer to the exercise done in the class
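As an illustration, here is a compact, self-contained Python sketch in the spirit of this look-ahead idea: it scores each legal move by searching ahead through all moves and replies to the end of the game and then picks the most promising one. The board encoding (a list of nine cells holding ' ', 'X' or 'O') and the helper names are assumptions made for the sketch.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has completed a line, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def other(player):
    return 'O' if player == 'X' else 'X'

def score(board, player, to_move):
    """Look ahead to the end of the game: +1 forced win, -1 forced loss, 0 draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0                                       # board full: a draw
    results = []
    for m in moves:
        child = board[:]
        child[m] = to_move
        results.append(score(child, player, other(to_move)))
    return max(results) if to_move == player else min(results)

def best_move(board, player):
    """Return the index of the most promising square for `player` to play."""
    best, best_val = None, -2
    for m in (i for i, cell in enumerate(board) if cell == ' '):
        child = board[:]
        child[m] = player
        val = score(child, player, other(player))
        if val > best_val:
            best, best_val = m, val
    return best

# X can win immediately by playing square 2 (indexing from 0), completing the top row.
print(best_move(['X', 'X', ' ',
                 'O', 'O', ' ',
                 ' ', ' ', ' '], 'X'))                 # prints 2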
PROBLEMS, PROBLEM SPACES AND SEARCH
To solve the problem of building a system you should take the following steps:
1. Define the problem accurately including detailed specifications and what constitutes
a suitable solution.
2. Scrutinize the problem carefully, for some features may have a central effect on the chosen method of solution.
3. Segregate and represent the background knowledge needed in the solution of
the problem.
4. Choose the best problem-solving technique and apply it to solve the problem.
Problem solving is a process of generating solutions from observed data.
• a ‘problem’ is characterized by a set of goals,
• a set of objects, and
• a set of operations.
These could be ill-defined and may evolve during problem solving.
• A ‘problem space’ is an abstract space.
 A problem space encompasses all valid states that can be generated by the
application of any combination of operators on any combination of objects.
 The problem space may contain one or more solutions. A solution is a
combination of operations and objects that achieve the goals.
• A ‘search’ refers to the search for a solution in a problem space.
 Search proceeds with different types of ‘search control strategies’.
 The depth-first search and breadth-first search are the two common search
strategies.
2.1 AI - General Problem Solving
Problem solving has been the key area of concern for Artificial Intelligence.
Problem solving is a process of generating solutions from observed or given data. It is, however, not always possible to use direct methods (i.e. to go directly from data to solution). Instead, problem solving often needs to use indirect or model-based methods.
Problem definitions
A problem is defined by its ‘elements’ and their ‘relations’. To provide a formal description of a
problem, we need to do the following:
a. Define a state space that contains all the possible configurations of the relevant
objects, including some impossible ones.
b. Specify one or more states that describe possible situations, from which the problem-
solving process may start. These states are called initial states.
c. Specify one or more states that would be acceptable as solutions to the problem. These states are called goal states.
d. Specify a set of rules that describe the actions (operators) available.
The problem can then be solved by using the rules, in combination with an appropriate control
strategy, to move through the problem space until a path from an initial state to a goal state is
found. This process is known as ‘search’. Thus:
 Search is fundamental to the problem-solving process.
 Search is a general mechanism that can be used when a more
direct method is not known.
 Search provides the framework into which more direct methods for
solving subparts of a problem can be embedded. A very large number of
AI problems are formulated as search problems.
 Problem space
A problem space is represented by a directed graph, where nodes represent search states and paths represent the operators applied to change the state.
To simplify search algorithms, it is often convenient to logically and programmatically represent a problem space as a tree. A tree usually decreases the complexity of the search, but at a cost: any node that was reachable along several paths in the graph is duplicated in the tree.
A tree is a graph in which any two vertices are connected by exactly one path. Alternatively, any
connected graph with no cycles is a tree.
• Problem solving: The term, Problem Solving relates to analysis in AI. Problem solving may be
characterized as a systematic search through a range of possible actions to reach some
predefined goal or solution. Problem-solving methods are categorized as special purpose and
general purpose.
• A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded.
• A general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is ‘means-ends analysis’: a step-by-step, or incremental, reduction of the difference between the current state and the final goal.
2.3 DEFINING PROBLEM AS A STATE SPACE SEARCH
To solve the problem of playing a game, we require the rules of the game and the targets for winning, as well as a means of representing positions in the game. The opening position can be defined as the initial state and a winning position as a goal state. Legal moves lead from the initial state through other states towards the goal state. However, the number of possible positions and move sequences is far too large in most games — in chess it exceeds the number of particles in the universe — so the rules cannot be supplied as an explicit enumeration of positions and computer programs cannot handle them easily. Storage also presents a problem, although searching can be helped by hashing.
The number of rules that are used must be minimized, and the set can be kept small by expressing each rule in as general a form as possible. Representing games in this way leads to a state space representation, and it is common for well-organized games with some structure. This representation allows for the formal definition of a problem as the movement from a set of initial positions to one of a set of target positions. It means that the solution involves using known techniques and a systematic search. This is quite a common method in Artificial Intelligence.
2.3.1 State Space Search
A state space represents a problem in terms of states and operators that change states.
A state space consists of:
 A representation of the states the system can be in. For example, in
a board game, the board represents the current state of the game.
 A set of operators that can change one state into another state. In a board
game, the operators are the legal moves from any given state. Often the
operators are represented as programs that change a state representation
to represent the new state.
 An initial state.
 A set of final states; some of these may be desirable, others undesirable.
This set is often represented implicitly by a program that detects
terminal states.
2.3.2 The Water Jug Problem
In this problem, we use two jugs called four and three; four holds a maximum of four gallons of
water and three a maximum of three gallons of water. How can we get two gallons of water in
the four jug?
The state space is the set of ordered pairs giving the number of gallons of water in each jug at any time, i.e., (four, three) where four = 0, 1, 2, 3 or 4 and three = 0, 1, 2 or 3.
The start state is (0, 0) and the goal state is (2, n), where n may be any value from 0 to 3, since the three-gallon jug can hold anywhere from zero to three gallons. In each pair, the names four and three identify the jugs and the numbers give the amount of water they hold. The major production rules for solving this problem are shown below:
Rule  Initial condition                  New state           Comment
1.   (four, three) if four < 4           (4, three)          fill four from the tap
2.   (four, three) if three < 3          (four, 3)           fill three from the tap
3.   (four, three) if four > 0           (0, three)          empty four into the drain
4.   (four, three) if three > 0          (four, 0)           empty three into the drain
5.   (four, three) if four + three < 4   (four + three, 0)   empty three into four
6.   (four, three) if four + three < 3   (0, four + three)   empty four into three
7.   (0, three) if three > 0             (three, 0)          empty three into four
8.   (four, 0) if four > 0               (0, four)           empty four into three
9.   (0, 2)                              (2, 0)              empty three into four
10.  (2, 0)                              (0, 2)              empty four into three
11.  (four, three) if four < 4           (4, three - diff)   pour diff = 4 - four gallons into four from three
12.  (four, three) if three < 3          (four - diff, 3)    pour diff = 3 - three gallons into three from four
(Fig. 2.2 Production Rules for the Water Jug Problem)
One solution is given below:
Gallons in the four jug   Gallons in the three jug   Rule applied
0                         0                          (start)
0                         3                          2
3                         0                          7
3                         3                          2
4                         2                          11
0                         2                          3
2                         0                          9
(Fig. 2.3 One Solution to the Water Jug Problem)
The problem is solved by using the production rules in combination with an appropriate control strategy, moving through the problem space until a path from an initial state to a goal state is found. In this problem-solving process, search is the fundamental concept. For simple problems it is easy enough to carry out by hand, but there will be cases where this is far too difficult.
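As an illustration, a small Python sketch that carries out this search mechanically, using a breadth-first traversal of the (four, three) state space with transitions equivalent to the production rules of Fig. 2.2.

from collections import deque

def successors(state):
    """All states reachable from (four, three) in one rule application."""
    four, three = state
    nexts = {
        (4, three),                               # fill four from the tap
        (four, 3),                                # fill three from the tap
        (0, three),                               # empty four into the drain
        (four, 0),                                # empty three into the drain
    }
    pour = min(three, 4 - four)                   # pour water from three into four
    nexts.add((four + pour, three - pour))
    pour = min(four, 3 - three)                   # pour water from four into three
    nexts.add((four - pour, three + pour))
    nexts.discard(state)
    return sorted(nexts)

def solve(start=(0, 0), goal_four=2):
    """Breadth-first search: return a shortest sequence of states that ends
    with two gallons in the four-gallon jug."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == goal_four:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])

print(solve())
# [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)] -- the solution of Fig. 2.3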
Issues in the Design of Search Programs
Each search process can be considered to be a tree traversal. The object of the search is to find a path from the initial state to a goal state in this tree. The number of nodes generated might be huge, and in practice many of them would not be needed. The secret of a good search routine is to generate only those nodes that are likely to be useful, rather than generating the whole tree; the rules are used to represent the tree implicitly, and nodes are created explicitly only if they are actually going to be used.
The following issues arise when searching:
• The tree can be searched forward from the initial node to the goal state or backwards from
the goal state to the initial state.
• To select applicable rules, it is critical to have an efficient procedure for matching rules against
states.
• How to represent each node of the search process? This is the knowledge representation
problem or the frame problem. In games, an array suffices; in other problems, more complex
data structures are needed.
Finally, in terms of data structures, should a typical problem such as the water jug be searched as a graph or as a tree? The breadth-first procedure naturally keeps track of all the nodes it generates, while the depth-first procedure can be modified to do so, as described below.
Checking for duplicate nodes
1. Examine the set of nodes that have already been generated to see whether the new node already exists.
2. If it does not exist, add it to the graph.
3. If it does already exist, then
a. Set the node that is being expanded to point to the already existing node corresponding to its successor rather than to the new one. The new one can be thrown away.
b. If the best or shortest path is being determined, check to see whether this path is better or worse than the old one. If worse, do nothing. If better, save the new path and propagate the change in length through the chain of successor nodes if necessary.
Example: Tic-Tac-Toe
State spaces are good representations for board games such as Tic-Tac-Toe. The position of a
game can be explained by the contents of the board and the player whose turn is next. The board
can be represented as an array of 9 cells, each of which may contain an X or O or be empty.
• State:
 Player to move next: X or O.
 Board configuration:
• Operators: Change an empty cell to X or O.
• Start State: Board empty; X’s turn.
• Terminal States: Three X’s in a row; Three O’s in a row; All cells full.
Search Tree
The sequence of states formed by possible moves is called a search tree. Each level of the tree is
called a ply.
Since the same state may be reachable by different sequences of moves, the state space may in
general be a graph. It may be treated as a tree for simplicity, at the cost of duplicating states.
Solving problems using search
• Given an informal description of the problem, construct a formal description as a state space (a minimal sketch of such a description follows this list):
 Define a data structure to represent the state.
 Make a representation for the initial state from the given data.
 Write programs to represent operators that change a given state representation to a
new state representation.
 Write a program to detect terminal states.
• Choose an appropriate search technique:
 How large is the search space?
 How well structured is the domain?
 What knowledge about the domain can be used to guide the search?
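As an illustration, a small Python sketch of how the formal description above might be packaged as a data structure; the class name and fields are hypothetical.

class SearchProblem:
    """Bundles the pieces of a formal problem description."""
    def __init__(self, initial_state, operators, is_goal):
        self.initial_state = initial_state   # representation built from the given data
        self.operators = operators           # functions: state -> new state, or None if inapplicable
        self.is_goal = is_goal               # program that detects terminal (goal) states

    def successors(self, state):
        """Apply every operator that is applicable in the given state."""
        for op in self.operators:
            new_state = op(state)
            if new_state is not None:
                yield new_state

The water jug sketch given earlier fits this shape: its operators are the production rules and its goal test checks for two gallons in the four-gallon jug.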
HEURISTIC SEARCH TECHNIQUES:
Search Algorithms
Many traditional search algorithms are used in AI applications. For complex problems, the
traditional algorithms are unable to find the solutions within some practical time and space
limits. Consequently, many special techniques are developed, using heuristic functions.
The algorithms that use heuristic functions are called heuristic algorithms.
• Heuristic algorithms are not really intelligent; they appear to be intelligent because they
achieve better performance.
• Heuristic algorithms are more efficient because they take advantage of feedback from the
data to direct the search path.
• Uninformed search algorithms, or brute-force algorithms, search through all possible candidates for the solution in the search space, checking whether each candidate satisfies the problem’s statement.
• Informed search algorithms use heuristic functions that are specific to the problem and apply them to guide the search through the search space, trying to reduce the amount of time spent in searching.
A good heuristic will make an informed search dramatically outperform any uninformed search: consider, for example, the Travelling Salesman Problem (TSP), where the goal is to find a good solution instead of the best solution.
• Uninformed Search: Also called blind, exhaustive or brute-force search, it uses no
information about the problem to guide the search and therefore may not be very
efficient.
• Informed Search: Also called heuristic or intelligent search, this uses information about the problem to guide the search—usually it guesses the distance to a goal state—and is therefore more efficient, but the search may not always be possible.
Breadth-first search
A search strategy in which the highest layer of a decision tree is searched completely before proceeding to the next layer is called breadth-first search (BFS).
• In this strategy, no viable solutions are omitted and therefore it is guaranteed that an
optimal solution is found.
• This strategy is often not feasible when the search space is large.
Algorithm
1. Create a variable called LIST and set it to contain just the starting state.
2. Loop until a goal state is found or LIST is empty:
a. Remove the first element from LIST and call it E. If LIST is empty, quit.
b. For each rule that can match the state E, do:
(i) Apply the rule to generate a new state.
(ii) If the new state is a goal state, quit and return this state.
(iii) Otherwise, add the new state to the end of LIST.
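A minimal Python sketch of the algorithm above; successors(state) and is_goal(state) stand in for the problem-specific rules and goal test, and states are assumed to be hashable.

from collections import deque

def breadth_first_search(start, successors, is_goal):
    if is_goal(start):
        return start
    frontier = deque([start])              # this plays the role of LIST
    seen = {start}
    while frontier:                        # loop until LIST is empty
        state = frontier.popleft()         # remove the first element, E
        for new_state in successors(state):
            if new_state in seen:
                continue                   # already generated: skip duplicates
            if is_goal(new_state):
                return new_state           # goal found
            seen.add(new_state)
            frontier.append(new_state)     # add the new state to the end of LIST
    return None                            # search space exhausted without a goal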
Advantages
1. Guaranteed to find an optimal solution (in terms of the smallest number of steps needed to reach the goal).
2. Can always find a goal node if one exists (complete).
Disadvantages
1. High storage requirement: exponential with tree depth.
Depth-first search
A search strategy that extends the current path as far as possible before backtracking to the last
choice point and trying the next alternative path is called Depth-first search (DFS).
• This strategy does not guarantee that the optimal solution has been found.
• In this strategy, search reaches a satisfactory solution more rapidly than breadth-first search, which is an advantage when the search space is large.
Algorithm
Depth-first search applies operators to each newly generated state, trying to drive directly toward
the goal.
1. If the starting state is a goal state, quit and return success.
2. Otherwise, do the following until success or failure is signalled:
a. Generate a successor E to the starting state. If there are no more successors, then signal failure.
b. Call Depth-first Search with E as the starting state.
c. If success is returned signal success; otherwise, continue in the loop.
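A recursive Python sketch of this procedure; the optional depth bound is an addition to the algorithm as stated, used to avoid descending forever along an infinite path.

def depth_first_search(state, successors, is_goal, depth_bound=None):
    if is_goal(state):
        return [state]                          # success: path ending at the goal
    if depth_bound is not None and depth_bound <= 0:
        return None                             # bound reached: treat as failure
    for new_state in successors(state):         # generate successors one at a time
        remaining = None if depth_bound is None else depth_bound - 1
        result = depth_first_search(new_state, successors, is_goal, remaining)
        if result is not None:
            return [state] + result             # success propagated up the call stack
    return None                                 # no successor led to a goal: failure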
Advantages
1. Low storage requirement: linear with tree depth.
2. Easily programmed: function call stack does most of the work of maintaining state of the
search.
Disadvantages
1. May find a sub-optimal solution (one that is deeper or more costly than the best solution).
2. Incomplete: without a depth bound, may not find a solution even if one exists.
Heuristics
A heuristic is a method that improves the efficiency of the search process. Heuristics are like tour guides: they are good to the extent that they point in generally interesting directions, and bad to the extent that they may miss points of interest to particular individuals. Some heuristics help the search process without sacrificing any claims to completeness that the process might previously have had. Others may occasionally cause an excellent path to be overlooked; by sacrificing completeness they increase efficiency. Heuristics may not find the best solution every time, but they guarantee finding a good solution in a reasonable time. They are particularly useful in solving tough and complex problems whose solutions would otherwise require an impractical amount of time, i.e. far longer than a lifetime, and which cannot be solved in any other way.
Heuristic search
To find a solution in proper time rather than a complete solution in unlimited time we use
heuristics. ‘A heuristic function is a function that maps from problem state descriptions to
measures of desirability, usually represented as numbers’. Heuristic search methods use
knowledge about the problem domain and choose promising operators first. These heuristic
search methods use heuristic functions to evaluate the next state towards the goal state. For
finding a solution, by using the heuristic technique, one should carry out the following steps:
1. Add domain-specific information to select the best path along which to continue searching.
2. Define a heuristic function h(n) that estimates the ‘goodness’ of a node n. Specifically, h(n) = estimated cost (or distance) of the minimal-cost path from n to a goal state.
3. The term heuristic means ‘serving to aid discovery’; the heuristic is an estimate, based on domain-specific information computable from the current state description, of how close we are to a goal.
Finding a route from one city to another city is an example of a search problem in which
different search orders and the use of heuristic knowledge are easily understood.
1. State: The current city in which the traveller is located.
2. Operators: Roads linking the current city to other cities.
3. Cost Metric: The cost of taking a given road between cities.
4. Heuristic information: The search could be guided by the direction of the goal city from the
current city, or we could use airline distance as an estimate of the distance to the goal.
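For illustration, a Python sketch of the second heuristic mentioned above: straight-line (airline) distance from the current city to the goal city, used as h(n). The city names and coordinates are made-up placeholder values.

import math

COORDS = {                 # hypothetical (x, y) map positions, in kilometres
    'CityA': (0.0, 0.0),
    'CityB': (70.0, 30.0),
    'GoalCity': (120.0, 90.0),
}

def h(city, goal='GoalCity'):
    """Estimated distance from `city` to the goal: straight-line (airline) distance."""
    (x1, y1), (x2, y2) = COORDS[city], COORDS[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(round(h('CityB'), 1))   # airline estimate from CityB to the goal (78.1 km)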
Heuristic search techniques
For complex problems, the traditional algorithms, presented above, are unable to find the
solution within some practical time and space limits. Consequently, many special techniques are
developed, using heuristic functions.
• Blind search is not always possible, because it requires too much time or space (memory).
Heuristics are rules of thumb; they do not guarantee a solution to a problem.
• Heuristic Search is a weak technique but can be effective if applied correctly; it requires
domain specific information.
Characteristics of heuristic search
• Heuristics are knowledge about the domain, which helps search and reasoning in that domain.
• Heuristic search incorporates domain knowledge to improve efficiency over blind search.
• A heuristic is a function that, when applied to a state, returns a value estimating the merit of that state with respect to the goal.
 Heuristics might (for various reasons) underestimate or overestimate the merit of a state with respect to the goal.
 Heuristics that underestimate are desirable and are called admissible.
• A heuristic evaluation function estimates the likelihood of a given state leading to the goal state.
• A heuristic search function estimates the cost from the current state to the goal, presuming the function is efficient to compute.
Heuristic search compared with other search
The Heuristic search is compared with Brute force or Blind search techniques below:
Comparison of Algorithms
Brute force / Blind search:
• Can only search what it already has knowledge about through explored nodes.
• Has no knowledge about how far a node is from the goal state.
Heuristic search:
• Estimates the ‘distance’ to the goal state.
• Guides the search process toward the goal node.
• Prefers states (nodes) that lead close to, and not away from, the goal state.
Generate and Test Strategy
Generate-And-Test Algorithm
The generate-and-test search algorithm is a very simple algorithm that is guaranteed to find a solution, if one exists, provided the generation is done systematically.
Algorithm: Generate-And-Test
1. Generate a possible solution.
2. Test to see whether this is the expected solution.
3. If the solution has been found, quit; otherwise go to step 1.
The potential solutions that need to be generated vary depending on the kind of problem. For some problems the possible solutions may be particular points in the problem space; for others, they are paths from the start state.
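A tiny Python sketch of the loop above; generate and test are hypothetical problem-specific callables.

def generate_and_test(generate, test):
    for candidate in generate():       # 1. generate a possible solution
        if test(candidate):            # 2. test whether it is the expected solution
            return candidate           # 3. solution found: quit
    return None                        # generator exhausted without success

# Usage: find a non-trivial divisor of 91 by exhaustive generation.
print(generate_and_test(lambda: range(2, 91), lambda n: 91 % n == 0))   # prints 7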
Figure: Generate And Test
Generate-and-test, like depth-first search, requires that complete solutions be generated for
testing. In its most systematic form, it is only an exhaustive search of the problem space.
Solutions can also be generated randomly, but then finding a solution is not guaranteed. This approach is what is known as the British Museum algorithm: finding an object in the British Museum by wandering around randomly.
Systematic Generate-And-Test
While generating complete solutions and generating random solutions are the two extremes, there exists another approach that lies in between. In this approach the search process proceeds systematically, but some paths that are unlikely to lead to the solution are not considered. This evaluation is performed by a heuristic function.
A depth-first search tree with backtracking can be used to implement the systematic generate-and-test procedure. If some intermediate states are likely to appear often in the tree, it is better to modify the procedure to traverse a graph rather than a tree.
Generate-And-Test And Planning
Exhaustive generate-and-test is very useful for simple problems, but for complex problems even heuristic generate-and-test is not a very effective technique. It may, however, be made effective by combining it with other techniques in such a way that the space in which to search is restricted. The AI program DENDRAL, for example, uses a plan-generate-and-test technique. First, the planning process uses constraint-satisfaction techniques and creates lists of recommended and contraindicated substructures. Then the generate-and-test procedure uses those lists and is required to explore only a limited set of structures. Constrained in this way, generate-and-test proved highly effective. A major weakness of planning is that it often produces somewhat inaccurate solutions, as there is no feedback from the world. But if it is used only to produce pieces of solutions, then the lack of detailed accuracy becomes unimportant.
Hill Climbing
Hill climbing is a heuristic search used for mathematical optimization problems in the field of Artificial Intelligence.
Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good solution to the problem. This solution may not be the global optimum.
 In the above definition, mathematical optimization problems means that hill climbing solves problems where we need to maximize or minimize a given real function by choosing values from the given inputs. An example is the travelling salesman problem, where we need to minimize the distance travelled by the salesman.
 ‘Heuristic search’ means that this search algorithm may not find the optimal solution to the problem; however, it will give a good solution in a reasonable time.
 A heuristic function is a function that ranks all the possible alternatives at any branching step of the search algorithm based on the available information. It helps the algorithm to select the best route out of the possible routes.
Features of Hill Climbing
1. Variant of generate and test algorithm: Hill climbing is a variant of the generate-and-test algorithm, which is as follows:
1. Generate a possible solution.
2. Test to see whether this is the expected solution.
3. If the solution has been found, quit; otherwise go to step 1.
We call hill climbing a variant of generate and test because it takes feedback from the test procedure; this feedback is then used by the generator in deciding the next move in the search space.
2. Uses the greedy approach: At any point in the state space, the search moves only in the direction that optimizes the cost function, with the hope of finding the optimal solution at the end.
Types of Hill Climbing
1. Simple hill climbing: It examines the neighboring nodes one by one and selects the first neighboring node that improves the current cost as the next node.
Algorithm for simple hill climbing:
Step 1: Evaluate the initial state. If it is a goal state, then stop and return success. Otherwise, make the initial state the current state.
Step 2: Loop until a solution state is found or there are no new operators left that can be applied to the current state.
a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b) Evaluate the new state:
i. If the new state is a goal state, then stop and return success.
ii. If it is better than the current state, then make it the current state and proceed further.
iii. If it is not better than the current state, then continue in the loop until a solution is found.
Step 3: Exit.
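A Python sketch of simple hill climbing; neighbours(state) and value(state) are hypothetical problem-specific helpers, and higher values are assumed to be better.

def simple_hill_climbing(start, neighbours, value, is_goal=lambda s: False):
    current = start
    if is_goal(current):
        return current
    while True:
        for candidate in neighbours(current):
            if is_goal(candidate):
                return candidate              # goal reached
            if value(candidate) > value(current):
                current = candidate           # first improving neighbour is taken
                break
        else:
            return current                    # no neighbour improves: local maximum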
2. Steepest-ascent hill climbing: It first examines all the neighboring nodes and then selects the node closest to the solution state as the next node.
Step 1: Evaluate the initial state. If it is a goal state, then exit; otherwise make the initial state the current state.
Step 2: Repeat these steps until a solution is found or the current state does not change:
i. Let ‘target’ be a state such that any successor of the current state will be better than it.
ii. For each operator that applies to the current state:
a. apply the operator and create a new state
b. evaluate the new state
c. if this state is a goal state, then quit; otherwise compare it with ‘target’
d. if this state is better than ‘target’, set this state as ‘target’
e. if ‘target’ is better than the current state, set the current state to ‘target’
Step 3: Exit
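A corresponding Python sketch of steepest-ascent hill climbing, using the same hypothetical neighbours and value helpers as in the previous sketch.

def steepest_ascent_hill_climbing(start, neighbours, value):
    current = start
    while True:
        # Examine all neighbours and keep the best one as the target.
        target = max(neighbours(current), key=value, default=current)
        if value(target) <= value(current):
            return current                    # no successor is better: stop
        current = target                      # move to the best neighbour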
3. Stochastic hill climbing: It does not examine all the neighboring nodes before deciding which node to select. It just selects a neighboring node at random and decides (based on the amount of improvement in that neighbor) whether to move to that neighbor or to examine another.
State Space diagram for Hill Climbing
A state space diagram is a graphical representation of the set of states our search algorithm can reach, plotted against the value of our objective function (the function we wish to maximize).
X-axis: denotes the state space, i.e. the states or configurations our algorithm may reach.
Y-axis: denotes the value of the objective function corresponding to a particular state.
The best solution will be the state where the objective function has its maximum value (the global maximum).
Different regions in the State Space Diagram
1. Local maximum: It is a state that is better than its neighboring states; however, there exists another state that is better than it (the global maximum). This state is better because the value of the objective function here is higher than at its neighbors.
2. Global maximum: It is the best possible state in the state space diagram, because at this state the objective function has its highest value.
3. Plateau / flat local maximum: It is a flat region of the state space where neighboring states have the same value.
4. Ridge: It is a region that is higher than its neighbors but itself has a slope. It is a special kind of local maximum.
5. Current state: The region of the state space diagram where we are currently present during the search.
6. Shoulder: It is a plateau that has an uphill edge.
Problems in different regions in Hill climbing
Hill climbing cannot reach the optimal/best state (global maximum) if it enters any of the following regions:
1. Local maximum: At a local maximum, all neighboring states have values worse than the current state. Since hill climbing uses a greedy approach, it will not move to a worse state, and it terminates. The process ends even though a better solution may exist.
To overcome the local maximum problem: utilize a backtracking technique. Maintain a list of visited states; if the search reaches an undesirable state, it can backtrack to a previous configuration and explore a new path.
2. Plateau: On a plateau, all neighbors have the same value. Hence it is not possible to select the best direction.
To overcome plateaus: make a big jump. Randomly select a state far away from the current state; chances are that we will land in a non-plateau region.
3. Ridge: Any point on a ridge can look like a peak because movement in all possible directions is downward. Hence the algorithm stops when it reaches such a state.
To overcome ridges: use two or more rules before testing; this amounts to moving in several directions at once.
Best First Search (Informed Search)
In BFS and DFS, when we are at a node, we can consider any of the adjacent nodes as the next node. So both BFS and DFS blindly explore paths without considering any cost function. The idea of Best First Search is to use an evaluation function to decide which adjacent node is most promising and then explore it. Best First Search falls under the category of Heuristic Search or Informed Search.
We use a priority queue to store the costs of nodes, so the implementation is a variation of BFS: we just need to change the queue to a priority queue.
Algorithm:
Best-First-Search(Graph g, Node start)
1) Create an empty PriorityQueue pq
2) Insert "start" in pq: pq.insert(start)
3) Until PriorityQueue is empty
      u = PriorityQueue.DeleteMin
      If u is the goal
         Exit
      Else
         Foreach neighbor v of u
            If v is "Unvisited"
               Mark v "Visited"
               pq.insert(v)
         Mark u "Examined"
End procedure
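A runnable Python sketch of this procedure using heapq as the priority queue; the graph and node costs below are illustrative values chosen to be consistent with the ordering in the walk-through that follows, not the exact figure from the notes.

import heapq

def best_first_search(graph, cost, start, goal):
    pq = [(0, start)]                    # entries are (estimated cost, node)
    visited = {start}
    while pq:
        _, u = heapq.heappop(pq)         # u = DeleteMin
        if u == goal:
            return True
        for v in graph.get(u, []):
            if v not in visited:
                visited.add(v)           # mark v "Visited"
                heapq.heappush(pq, (cost[v], v))
    return False

graph = {'S': ['A', 'B', 'C'], 'A': ['D', 'E'], 'B': ['F', 'G'], 'C': ['H'], 'H': ['I']}
cost = {'A': 3, 'B': 6, 'C': 5, 'D': 9, 'E': 8, 'F': 12, 'G': 14, 'H': 7, 'I': 5}
print(best_first_search(graph, cost, 'S', 'I'))   # prints True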
Let us consider the example below. We start from the source "S" and search for the goal "I" using the given costs and Best First Search.
pq initially contains S.
We remove S from pq and add the unvisited neighbors of S to pq.
pq now contains {A, C, B} (C is placed before B because C has a lower cost).
We remove A from pq and add the unvisited neighbors of A to pq.
pq now contains {C, B, E, D}.
We remove C from pq and add the unvisited neighbors of C to pq.
pq now contains {B, H, E, D}.
We remove B from pq and add the unvisited neighbors of B to pq.
pq now contains {H, E, D, F, G}.
We remove H from pq. Since our goal "I" is a neighbor of H, we return.

More Related Content

PDF
AIML-M1 and M2-TIEpdf.pdf
PDF
DOC-20221019-WA0003..pdf
PDF
DOC-20221019-WA0003..pdf
PDF
271_AI Lect Notes.pdf
DOCX
Unit I What is Artificial Intelligence.docx
PDF
ARTIFICIAL INTELLIGENCETterm Paper
PPT
Artificial intel
PDF
AI Mod1@AzDOCUMENTS.in.pdf
AIML-M1 and M2-TIEpdf.pdf
DOC-20221019-WA0003..pdf
DOC-20221019-WA0003..pdf
271_AI Lect Notes.pdf
Unit I What is Artificial Intelligence.docx
ARTIFICIAL INTELLIGENCETterm Paper
Artificial intel
AI Mod1@AzDOCUMENTS.in.pdf

Similar to ARTIFICIAL INTELLIGENCE 271_AI Lect Notes.docx (20)

DOCX
Artificial Intelligence power point presentation document
PPTX
1 Introduction to AI.pptx
DOC
Chapter 1 (final)
PPTX
AI ghfghfghfghfghfghfghfgh fhgfghfgjh Lecture.pptx
PPT
artificial intelligence
PPT
artificial intelligence
PPTX
Introduction to Artificial Intelligence
PPTX
AI CH 1d.pptx
DOCX
Ai complete note
PPTX
Understanding Artificial intelligence
DOCX
Cosc 208 lecture note-1
PPTX
Intro artificial intelligence
PPTX
Artificial Intelligence problem solving agents
PPTX
Basics of artificial intelligence and machine learning
DOCX
Artificial intelligence
PDF
Artificial intelligence uses in productive systems and impacts on the world...
PDF
Lecture1-Artificial Intelligence.pptx.pdf
DOCX
Artificial Intelligence and Problem state, space
Artificial Intelligence power point presentation document
1 Introduction to AI.pptx
Chapter 1 (final)
AI ghfghfghfghfghfghfghfgh fhgfghfgjh Lecture.pptx
artificial intelligence
artificial intelligence
Introduction to Artificial Intelligence
AI CH 1d.pptx
Ai complete note
Understanding Artificial intelligence
Cosc 208 lecture note-1
Intro artificial intelligence
Artificial Intelligence problem solving agents
Basics of artificial intelligence and machine learning
Artificial intelligence
Artificial intelligence uses in productive systems and impacts on the world...
Lecture1-Artificial Intelligence.pptx.pdf
Artificial Intelligence and Problem state, space
Ad

More from RBeze58 (10)

PPTX
Fundamentals of Data Science Probability Distributions
PPTX
Fundamentals of Data Science Modeling Lec
DOCX
IT Laws and Practices Module 3 to Module 5
DOCX
COI/IT LAWS AND PRACTICES Case Study.docx
DOCX
COI/IT LAWS AND PRACTICES Module2_Casestudy.docx
PPTX
COI/ IT LAWS AND PRACTICES Module 3.pptx
PPTX
COI/ IT LAWS AND PRACTICES Module 2.pptx
PPTX
COI/ IT LAWS AND PRACTICES Module 1.pptx
PDF
Marketing Communication & Advertising.pdf
PPTX
Computer Networks 04 Data and Signal Fundamentals.pptx
Fundamentals of Data Science Probability Distributions
Fundamentals of Data Science Modeling Lec
IT Laws and Practices Module 3 to Module 5
COI/IT LAWS AND PRACTICES Case Study.docx
COI/IT LAWS AND PRACTICES Module2_Casestudy.docx
COI/ IT LAWS AND PRACTICES Module 3.pptx
COI/ IT LAWS AND PRACTICES Module 2.pptx
COI/ IT LAWS AND PRACTICES Module 1.pptx
Marketing Communication & Advertising.pdf
Computer Networks 04 Data and Signal Fundamentals.pptx
Ad

Recently uploaded (20)

PDF
PPT on Performance Review to get promotions
PPTX
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
PPTX
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PPT
Project quality management in manufacturing
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPTX
Geodesy 1.pptx...............................................
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PDF
Operating System & Kernel Study Guide-1 - converted.pdf
PPTX
Sustainable Sites - Green Building Construction
PPTX
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
PPTX
OOP with Java - Java Introduction (Basics)
PPTX
Lecture Notes Electrical Wiring System Components
PPTX
Construction Project Organization Group 2.pptx
PDF
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
PPTX
Internet of Things (IOT) - A guide to understanding
PDF
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
PPTX
UNIT 4 Total Quality Management .pptx
PDF
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
PDF
Embodied AI: Ushering in the Next Era of Intelligent Systems
PPT on Performance Review to get promotions
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
Project quality management in manufacturing
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
CYBER-CRIMES AND SECURITY A guide to understanding
Geodesy 1.pptx...............................................
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
Operating System & Kernel Study Guide-1 - converted.pdf
Sustainable Sites - Green Building Construction
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
OOP with Java - Java Introduction (Basics)
Lecture Notes Electrical Wiring System Components
Construction Project Organization Group 2.pptx
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
Internet of Things (IOT) - A guide to understanding
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
UNIT 4 Total Quality Management .pptx
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
Embodied AI: Ushering in the Next Era of Intelligent Systems

ARTIFICIAL INTELLIGENCE 271_AI Lect Notes.docx

  • 2. 2 ARTIFICIAL INTELLIGENCE What is Artificial Intelligence? It is a branch of Computer Science that pursues creating the computers or machines as intelligent as human beings. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable Definition: Artificial Intelligence is the study of how to make computers do things, which, at the moment, people do better. According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of making intelligent machines, especially intelligent computer programs”. Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a software think intelligently, in the similar manner the intelligent humans think. AI is accomplished by studying how human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis of developing intelligent software and systems. It has gained prominence recently due, in part, to big data, or the increase in speed, size and variety of data businesses are now collecting. AI can perform tasks such as identifying patterns in the data more efficiently than humans, enabling businesses to gain more insight out of their data. From a business perspective AI is a set of very powerful tools, and methodologies for using those tools to solve business problems. From a programming perspective, AI includes the study of symbolic programming, problem solving, and search. AI Vocabulary Intelligence relates to tasks involving higher mental processes, e.g. creativity, solving problems, pattern recognition, classification, learning, induction, deduction, building analogies, optimization, language processing, knowledge and many more. Intelligence is the computational part of the ability to achieve goals. Intelligent behaviour is depicted by perceiving one’s environment, acting in complex environments, learning and understanding from experience, reasoning to solve problems and discover hidden knowledge, applying knowledge successfully in new situations, thinking abstractly, using analogies, communicating with others and more.
  • 3. 3 Science based goals of AI pertain to developing concepts, mechanisms and understanding biological intelligent behaviour. The emphasis is on understanding intelligent behaviour. Engineering based goals of AI relate to developing concepts, theory and practice of building intelligent machines. The emphasis is on system building. AI Techniques depict how we represent, manipulate and reason with knowledge in order to solve problems. Knowledge is a collection of ‘facts’. To manipulate these facts by a program, a suitable representation is required. A good representation facilitates problem solving. Learning means that programs learn from what facts or behaviour can represent. Learning denotes changes in the systems that are adaptive in other words, it enables the system to do the same task(s) more efficiently next time. Applications of AI refers to problem solving, search and control strategies, speech recognition, natural language understanding, computer vision, expert systems, etc. Problems of AI: Intelligence does not imply perfect understanding; every intelligent being has limited perception, memory and computation. Many points on the spectrum of intelligence versus cost are viable, from insects to humans. AI seeks to understand the computations required from intelligent behaviour and to produce computer systems that exhibit intelligence. Aspects of intelligence studied by AI include perception, communicational using human languages, reasoning, planning, learning and memory. The following questions are to be considered before we can step forward: 1. What are the underlying assumptions about intelligence? 2. What kinds of techniques will be useful for solving AI problems? 3. At what level human intelligence can be modelled? 4. When will it be realized when an intelligent program has been built? Branches of AI: A list of branches of AI is given below. However some branches are surely missing, because no one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches. Logical AI — In general the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals.
  • 4. 4 Search — Artificial Intelligence programs often examine large numbers of possibilities – for example, moves in a chess game and inferences by a theorem proving program. Discoveries are frequently made about how to do this more efficiently in various domains. Pattern Recognition — When a program makes observations of some kind, it is often planned to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns are like a natural language text, a chess position or in the history of some event. These more complex patterns require quite different methods than do the simple patterns that have been studied the most. Representation — Usually languages of mathematical logic are used to represent the facts about the world. Inference — Others can be inferred from some facts. Mathematical logical deduction is sufficient for some purposes, but new methods of non-monotonic inference have been added to the logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning in which a conclusion is to be inferred by default. But the conclusion can be withdrawn if there is evidence to the divergent. For example, when we hear of a bird, we infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Normal logical reasoning is monotonic, in that the set of conclusions can be drawn from a set of premises, i.e. monotonic increasing function of the premises. Circumscription is another form of non-monotonic reasoning. Common sense knowledge and Reasoning — This is the area in which AI is farthest from the human level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed. Learning from experience — There are some rules expressed in logic for learning. Programs can only learn what facts or behaviour their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information. Planning — Planning starts with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, planning programs generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions. Epistemology — This is a study of the kinds of knowledge that are required for solving problems in the world. Ontology — Ontology is the study of the kinds of things that exist. In AI the programs and sentences deal with various kinds of objects and we study what these kinds are and what their basic properties are. Ontology assumed importance from the 1990s.
  • 5. 5 Heuristics — A heuristic is a way of trying to discover something or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search or to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, and may be more useful. Genetic programming — Genetic programming is an automated method for creating a working computer program from a high-level problem statement of a problem. Genetic programming starts from a high-level statement of ‘what needs to be done’ and automatically creates a computer program to solve the problem. Applications of AI AI has applications in all fields of human study, such as finance and economics, environmental engineering, chemistry, computer science, and so on. Some of the applications of AI are listed below:  Perception ■ Machine vision ■ Speech understanding ■ Touch ( tactile or haptic) sensation  Robotics  Natural Language Processing ■ Natural Language Understanding ■ Speech Understanding ■ Language Generation ■ Machine Translation  Planning  Expert Systems  Machine Learning  Theorem Proving  Symbolic Mathematics  Game Playing AI Technique: Artificial Intelligence research during the last three decades has concluded that Intelligence requires knowledge. To compensate overwhelming quality, knowledge possesses less desirable properties. A. It is huge. B. It is difficult to characterize correctly. C. It is constantly varying. D. It differs from data by being organized in a way that corresponds to its application. E. It is complicated.
An AI technique is a method that exploits knowledge that is represented so that:
• The knowledge captures generalizations: situations that share important properties are grouped together rather than being represented separately.
• It can be understood by the people who must provide it, even though for many programs the bulk of the data can be acquired automatically (for example, from instrument readings). In many AI domains, most of the knowledge a program has must ultimately be supplied by people, in terms they understand.
• It can be easily modified to correct errors and reflect changes in real conditions.
• It can be widely used even if it is incomplete or inaccurate.
• It can be used to help overcome its own sheer bulk by helping to narrow the range of possibilities that must usually be considered.

In order to characterize an AI technique, let us consider initially OXO or tic-tac-toe and use a series of different approaches to play the game. The programs increase in complexity, in their use of generalizations, in the clarity of their knowledge and in the extensibility of their approach. In this way they move towards being representations of AI techniques.

Example-1: Tic-Tac-Toe
1.1 The first approach (simple)
The Tic-Tac-Toe game consists of a nine-element vector called BOARD; it represents the squares numbered 1 to 9 in three rows. An element contains the value 0 for blank, 1 for X and 2 for O. A MOVETABLE vector of 19,683 (3^9) elements is needed, where each element is itself a nine-element vector. The contents of the vector are especially chosen to help the algorithm. The algorithm makes moves as follows:
1. View the BOARD vector as a ternary number and convert it to a decimal number.
2. Use the decimal number as an index into MOVETABLE and access the vector stored there.
3. Set BOARD to this vector, indicating how the board looks after the move.
This approach is efficient in terms of time, but it has several disadvantages: it takes a great deal of space to store the move table, a great deal of work is needed to specify all of its entries, and the method is specific to this game and cannot be generalized.
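The heart of this first approach is the conversion of the BOARD vector, read as a ternary number, into a decimal index into MOVETABLE. A minimal sketch of that conversion is given below; the single MOVETABLE entry shown is an invented placeholder, since in reality the table would have to be filled in by hand for all 19,683 positions.

    def board_to_index(board):
        """Interpret a nine-element board vector (0 = blank, 1 = X, 2 = O)
        as a ternary number and return its decimal value."""
        index = 0
        for cell in board:          # most significant digit first
            index = index * 3 + cell
        return index

    # MOVETABLE would hold 3**9 = 19,683 nine-element vectors, one per position.
    # Here it is only sketched as a dictionary with a single hand-made entry.
    MOVETABLE = {board_to_index([0] * 9): [1, 0, 0, 0, 0, 0, 0, 0, 0]}  # X opens in square 1

    def make_move(board):
        """Look the current position up in MOVETABLE and return the new board."""
        return MOVETABLE[board_to_index(board)]

    if __name__ == "__main__":
        print(board_to_index([0] * 9))     # 0
        print(make_move([0] * 9))          # [1, 0, 0, 0, 0, 0, 0, 0, 0]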
1.2 The second approach: Tic-Tac-Toe using a magic square
Player 1 uses X and Player 2 uses O; the player who first gets three marks in a row wins the game. Here, we assign board positions to vector elements using a magic square, so that the sum of every row, column and diagonal is 15.
Algorithm (Tic-Tac-Toe game playing using a magic square):
• The machine first checks whether it has a chance to win: for each pair of squares it already occupies, it computes the difference between 15 and the sum of the two squares. If this difference is not positive, or if it is greater than 9, the two squares are not collinear and the pair can be ignored; otherwise, if the square with that number is still free, playing it completes a line and wins.
• Otherwise, it applies the same check to the opponent's squares and blocks the opponent's chance of winning.
Note: refer to the exercise done in the class.
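A small sketch of the magic-square test just described: each square is labelled with its magic-square number, and for every pair of squares a player holds we check whether 15 minus their sum names a free, legal square. The board numbering and helper names below are illustrative assumptions, not taken from the class exercise.

    from itertools import combinations

    # Magic-square numbering of the 3x3 board: every row, column and diagonal sums to 15.
    MAGIC = [8, 1, 6,
             3, 5, 7,
             4, 9, 2]

    def winning_square(occupied, taken):
        """occupied: magic numbers of the squares this player already holds.
        taken: magic numbers of all squares used so far by either player.
        Return the magic number of a square that completes a line, or None."""
        for a, b in combinations(occupied, 2):
            diff = 15 - (a + b)
            # If diff is not positive or is greater than 9, the pair is not collinear.
            if diff < 1 or diff > 9:
                continue
            if diff not in taken:
                return diff
        return None

    if __name__ == "__main__":
        # X holds the squares numbered 8 and 6 (the top corners); square 1 completes the top row.
        print(winning_square({8, 6}, {8, 6, 5}))   # -> 1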
1.3 The final approach
The data structure consists of BOARD, which contains a nine-element vector, a list of board positions that could result from the next move, and a number representing an estimate of how likely the board position is to lead to an ultimate win for the player to move. This algorithm looks ahead in order to decide on the next move: it considers all possible moves the program can make and all possible replies, decides at each stage which move is the most promising, and selects it. This process continues for as long as time permits; the program then chooses the move that leads to a win, if possible in the shortest time.
Note: refer to the exercise done in the class.

PROBLEMS, PROBLEM SPACES AND SEARCH
To solve a problem by building a system, you should take the following steps:
1. Define the problem accurately, including detailed specifications of what constitutes a suitable solution.
2. Scrutinize the problem carefully, for some features may have a central effect on the chosen method of solution.
3. Segregate and represent the background knowledge needed in the solution of the problem.
4. Choose the best problem-solving technique(s) and apply them to the particular problem.

Problem solving is a process of generating solutions from observed data.
• A 'problem' is characterized by a set of goals, a set of objects, and a set of operations. These could be ill-defined and may evolve during problem solving.
• A 'problem space' is an abstract space.
■ A problem space encompasses all valid states that can be generated by the application of any combination of operators on any combination of objects.
■ The problem space may contain one or more solutions. A solution is a combination of operations and objects that achieves the goals.
• A 'search' refers to the search for a solution in a problem space.
■ Search proceeds with different types of 'search control strategies'.
■ Depth-first search and breadth-first search are the two most common search strategies.

2.1 AI - General Problem Solving
Problem solving has been a key area of concern for Artificial Intelligence. Problem solving is a process of generating solutions from observed or given data. It is, however, not always possible to use direct methods (i.e. to go directly from data to solution). Instead, problem solving often needs to use indirect or model-based methods.

Problem definitions
A problem is defined by its 'elements' and their 'relations'. To provide a formal description of a problem, we need to do the following:
a. Define a state space that contains all the possible configurations of the relevant objects, including some impossible ones.
b. Specify one or more states that describe possible situations from which the problem-solving process may start. These states are called initial states.
c. Specify one or more states that would be acceptable as solutions to the problem.
These states are called goal states.
d. Specify a set of rules that describe the actions (operators) available.
The problem can then be solved by using the rules, in combination with an appropriate control strategy, to move through the problem space until a path from an initial state to a goal state is found. This process is known as 'search'. Thus:
• Search is fundamental to the problem-solving process.
• Search is a general mechanism that can be used when a more direct method is not known.
• Search provides the framework into which more direct methods for solving subparts of a problem can be embedded.
A very large number of AI problems are formulated as search problems.

Problem space
A problem space is represented by a directed graph, where nodes represent search states and arcs represent the operators applied to change one state into another. To simplify search algorithms, it is often convenient to logically and programmatically represent a problem space as a tree. A tree usually decreases the complexity of the search, but at a cost: nodes that were reachable by several paths in the graph are duplicated in the tree. A tree is a graph in which any two vertices are connected by exactly one path; equivalently, any connected graph with no cycles is a tree.
Problem solving
The term 'problem solving' relates to analysis in AI. Problem solving may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods are categorized as special purpose and general purpose.
• A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded.
• A general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is 'means-end analysis': a step-by-step, or incremental, reduction of the difference between the current state and the final goal.
2.3 DEFINING PROBLEM AS A STATE SPACE SEARCH
To solve the problem of playing a game, we require the rules of the game and the targets for winning, as well as a way of representing positions in the game. The opening position can be defined as the initial state and a winning position as a goal state. Legal moves take us from the initial state through other states towards the goal state. However, if each rule had to describe a specific position and the position resulting from a move, the rules would be far too numerous in most games, especially chess, where the number of possible positions is astronomically large. Such rules could not be supplied accurately and could not be handled easily by computer programs. Storage also presents a problem, although searching can be helped by hashing. The number of rules used must therefore be minimized, which can be done by expressing each rule in as general a form as possible.
The representation of games leads to a state space representation, and it is common for well-organized games with some structure. This representation allows the formal definition of a problem that requires moving from a set of initial positions to one of a set of target positions. It means that the solution involves using known techniques and a systematic search. This is quite a common method in Artificial Intelligence.

2.3.1 State Space Search
A state space represents a problem in terms of states and operators that change states. A state space consists of:
• A representation of the states the system can be in. For example, in a board game, the board represents the current state of the game.
• A set of operators that can change one state into another state. In a board game, the operators are the legal moves from any given state. Often the operators are represented as programs that change a state representation to represent the new state.
• An initial state.
• A set of final states; some of these may be desirable, others undesirable. This set is often represented implicitly by a program that detects terminal states.

2.3.2 The Water Jug Problem
In this problem, we use two jugs called four and three; four holds a maximum of four gallons of water and three a maximum of three gallons of water. How can we get exactly two gallons of water into the four jug? The state space is the set of ordered pairs giving the number of gallons of water in the two jugs at any time, i.e. (four, three) where four = 0, 1, 2, 3 or 4 and three = 0, 1, 2 or 3. The start state is (0, 0) and the goal state is (2, n), where n may be any value from 0 to 3. In each pair, the names four and three identify the jugs and the numbers give the amount of water they hold. The major production rules for solving this problem are shown below:
Initial state        Condition                         Resulting state           Comment
1. (four, three)     if four < 4                       (4, three)                fill four from the tap
2. (four, three)     if three < 3                      (four, 3)                 fill three from the tap
3. (four, three)     if four > 0                       (0, three)                empty four into the drain
4. (four, three)     if three > 0                      (four, 0)                 empty three into the drain
5. (four, three)     if four + three ≤ 4, three > 0    (four + three, 0)         pour all of three into four
6. (four, three)     if four + three ≤ 3, four > 0     (0, four + three)         pour all of four into three
7. (0, three)        if three > 0                      (three, 0)                empty three into four
8. (four, 0)         if four > 0                       (0, four)                 empty four into three
9. (0, 2)                                              (2, 0)                    empty three into four
10. (2, 0)                                             (0, 2)                    empty four into three
11. (four, three)    if four < 4, three > 0            (4, three - (4 - four))   pour water from three into four until four is full
12. (four, three)    if three < 3, four > 0            (four - (3 - three), 3)   pour water from four into three until three is full
(Fig. 2.2 Production Rules for the Water Jug Problem)

One solution is given below:

Gallons in Four Jug    Gallons in Three Jug    Rule Applied
0                      0                       -
0                      3                       2
3                      0                       7
3                      3                       2
4                      2                       11
0                      2                       3
2                      0                       10
(Fig. 2.3 One Solution to the Water Jug Problem)

The problem is solved by using the production rules in combination with an appropriate control strategy, moving through the problem space until a path from an initial state to a goal state is found. In this problem-solving process, search is the fundamental concept. For simple problems it is easy to achieve this by hand, but there are cases where doing so is far too difficult.
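A minimal sketch of solving the water-jug problem by applying the production rules above under a breadth-first control strategy. The rules are expressed compactly as a successor function rather than as the twelve numbered rules, and all names are illustrative.

    from collections import deque

    CAP_FOUR, CAP_THREE = 4, 3

    def successors(state):
        """Generate all states reachable from (four, three) by one production rule."""
        four, three = state
        moves = [
            (CAP_FOUR, three),            # fill four from the tap
            (four, CAP_THREE),            # fill three from the tap
            (0, three),                   # empty four into the drain
            (four, 0),                    # empty three into the drain
        ]
        pour = min(three, CAP_FOUR - four)            # pour three into four
        moves.append((four + pour, three - pour))
        pour = min(four, CAP_THREE - three)           # pour four into three
        moves.append((four - pour, three + pour))
        return [m for m in moves if m != state]

    def solve(start=(0, 0), goal_four=2):
        """Breadth-first search for a state with 2 gallons in the four-gallon jug."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            four, three = path[-1]
            if four == goal_four:
                return path
            for nxt in successors(path[-1]):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None

    if __name__ == "__main__":
        # Prints a shortest sequence of states ending with 2 gallons in the four-gallon jug.
        print(solve())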
Issues in the Design of Search Programs
Each search process can be considered to be a tree traversal. The object of the search is to find a path from the initial state to a goal state using a tree. The number of nodes generated might be huge, and in practice many of the nodes would not be needed. The secret of a good search routine is to generate only those nodes that are likely to be useful, rather than building the complete tree explicitly. The rules are used to represent the tree implicitly, and nodes are created explicitly only if they are actually going to be used.

The following issues arise when searching:
• The tree can be searched forward from the initial node to the goal state, or backwards from the goal state to the initial state.
• To select applicable rules, it is critical to have an efficient procedure for matching rules against states.
• How should each node of the search process be represented? This is the knowledge representation problem, or the frame problem. In games an array suffices; in other problems more complex data structures are needed.
Finally, in terms of data structures, considering the water jug as a typical problem, do we use a graph or a tree? A breadth-first search keeps track of all the nodes it has generated, so duplicates can be detected; a depth-first search can be modified to do the same by checking new nodes against those already generated, as described below.
Checking for duplicate nodes
1. Examine the set of nodes that have been generated so far to see whether the new node already exists.
2. If it does not exist, add it to the graph.
3. If it already exists, then:
a. Set the node that is being expanded to point to the already existing node corresponding to its successor, rather than to the new one. The new node can be thrown away.
b. If the best or shortest path is being determined, check whether the new path is better or worse than the old one. If worse, do nothing. If better, record the new path and propagate the change in length through the chain of successor nodes, if necessary.

Example: Tic-Tac-Toe
State spaces are good representations for board games such as Tic-Tac-Toe. The position of a game can be described by the contents of the board and the player whose turn is next. The board can be represented as an array of 9 cells, each of which may contain an X, an O, or be empty.
• State:
■ Player to move next: X or O.
■ Board configuration.
• Operators: change an empty cell to X or O.
• Start state: board empty; X's turn.
• Terminal states: three X's in a row; three O's in a row; all cells full.
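A minimal sketch of the Tic-Tac-Toe state space just described: a state pairs the board (an array of nine cells) with the player to move, the operators place the current player's mark in an empty cell, and a small test detects terminal states. All names are illustrative.

    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
                 (0, 4, 8), (2, 4, 6)]               # diagonals

    START = (tuple([' '] * 9), 'X')                  # empty board, X to move

    def operators(state):
        """Return all successor states: place the current player's mark in an empty cell."""
        board, player = state
        nxt = 'O' if player == 'X' else 'X'
        succs = []
        for i, cell in enumerate(board):
            if cell == ' ':
                new_board = board[:i] + (player,) + board[i + 1:]
                succs.append((new_board, nxt))
        return succs

    def terminal(state):
        """Return 'X', 'O', 'draw', or None if the game is not over."""
        board, _ = state
        for a, b, c in WIN_LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return 'draw' if ' ' not in board else None

    if __name__ == "__main__":
        print(len(operators(START)))   # 9 possible first moves
        print(terminal(START))         # None: the game is not over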
Search Tree
The sequence of states formed by possible moves is called a search tree. Each level of the tree is called a ply. Since the same state may be reachable by different sequences of moves, the state space may in general be a graph. It may be treated as a tree for simplicity, at the cost of duplicating states.

Solving problems using search
• Given an informal description of the problem, construct a formal description as a state space:
■ Define a data structure to represent the state.
■ Make a representation for the initial state from the given data.
■ Write programs to represent operators that change a given state representation into a new state representation.
■ Write a program to detect terminal states.
• Choose an appropriate search technique:
■ How large is the search space?
■ How well structured is the domain?
■ What knowledge about the domain can be used to guide the search?
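The four items in the first bullet can be captured as a small, generic problem description that any search routine can consume. A hedged sketch follows; the class and the toy problem (reach 10 from 3 using the operators +1 and *2) are illustrative assumptions, not part of the notes.

    class Problem:
        """Formal description of a problem as a state space."""
        def __init__(self, initial, successors, is_goal):
            self.initial = initial          # the initial state
            self.successors = successors    # operators: state -> list of successor states
            self.is_goal = is_goal          # terminal-state detector

    # A toy instance: starting from 3, reach 10 using the operators +1 and *2.
    toy = Problem(
        initial=3,
        successors=lambda n: [n + 1, n * 2],
        is_goal=lambda n: n == 10,
    )

    if __name__ == "__main__":
        # Any search technique can now be driven purely through this interface.
        print(toy.initial, toy.successors(toy.initial), toy.is_goal(10))   # 3 [4, 6] True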
HEURISTIC SEARCH TECHNIQUES:

Search Algorithms
Many traditional search algorithms are used in AI applications. For complex problems, the traditional algorithms are unable to find solutions within practical time and space limits. Consequently, many special techniques have been developed that use heuristic functions. The algorithms that use heuristic functions are called heuristic algorithms.
• Heuristic algorithms are not really intelligent; they appear to be intelligent because they achieve better performance.
• Heuristic algorithms are more efficient because they take advantage of feedback from the data to direct the search path.
• Uninformed (brute-force) search algorithms search through all possible candidates in the search space, checking whether each candidate satisfies the problem statement.
• Informed search algorithms use heuristic functions that are specific to the problem and apply them to guide the search through the search space, trying to reduce the time spent searching. A good heuristic can make an informed search dramatically outperform any uninformed search: for example, in the Traveling Salesman Problem (TSP) the goal is to find a good solution quickly rather than the best solution.
• Uninformed search: also called blind, exhaustive or brute-force search, it uses no information about the problem to guide the search and therefore may not be very efficient.
• Informed search: also called heuristic or intelligent search, it uses information about the problem to guide the search, usually by estimating the distance to a goal state, and is therefore more efficient; however, such an estimate may not always be available.
Breadth-first search
A search strategy in which the highest layer of a decision tree is searched completely before proceeding to the next layer is called breadth-first search (BFS).
• In this strategy, no viable solutions are omitted, and therefore it is guaranteed that an optimal solution is found.
• This strategy is often not feasible when the search space is large.
Algorithm
1. Create a variable called LIST and set it to contain the starting state.
2. Loop until a goal state is found or LIST is empty:
a. Remove the first element from LIST and call it E. If LIST is empty, quit.
b. For every rule that can match the state E:
(i) Apply the rule to generate a new state.
(ii) If the new state is a goal state, quit and return this state.
(iii) Otherwise, add the new state to the end of LIST.
Advantages
1. Guaranteed to find an optimal solution (in terms of the shortest number of steps to reach the goal).
2. Can always find a goal node if one exists (complete).
Disadvantages
1. High storage requirement: exponential with tree depth.
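A minimal sketch of the breadth-first algorithm above, using a first-in-first-out LIST of states (a queue) and a successor function in place of explicit rules; the toy problem and names are illustrative.

    from collections import deque

    def breadth_first_search(start, successors, is_goal):
        """Expand states layer by layer; return the first goal state found, else None."""
        frontier = deque([start])              # LIST, initialised with the starting state
        visited = {start}                      # avoid re-generating duplicate states
        while frontier:
            state = frontier.popleft()         # remove the first element and call it E
            if is_goal(state):
                return state
            for new_state in successors(state):        # apply every applicable rule to E
                if new_state not in visited:
                    visited.add(new_state)
                    frontier.append(new_state)         # add new states to the end of LIST
        return None

    if __name__ == "__main__":
        # Toy example: reach 10 from 3 with the operators +1 and *2.
        print(breadth_first_search(3, lambda n: [n + 1, n * 2], lambda n: n == 10))   # 10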
Depth-first search
A search strategy that extends the current path as far as possible before backtracking to the last choice point and trying the next alternative path is called depth-first search (DFS).
• This strategy does not guarantee that the optimal solution has been found.
• In this strategy, search reaches a satisfactory solution more rapidly than breadth-first search, an advantage when the search space is large.
Algorithm
Depth-first search applies operators to each newly generated state, trying to drive directly toward the goal.
1. If the starting state is a goal state, quit and return success.
2. Otherwise, do the following until success or failure is signalled:
a. Generate a successor E of the starting state. If there are no more successors, signal failure.
b. Call depth-first search with E as the starting state.
c. If success is returned, signal success; otherwise, continue in the loop.
Advantages
1. Low storage requirement: linear with tree depth.
2. Easily programmed: the function call stack does most of the work of maintaining the state of the search.
Disadvantages
1. May find a sub-optimal solution (one that is deeper or more costly than the best solution).
2. Incomplete: without a depth bound, it may not find a solution even if one exists.
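A minimal sketch of the recursive depth-first procedure above, with a depth bound added because, as noted, DFS without one may not terminate; the toy problem and names are illustrative.

    def depth_first_search(state, successors, is_goal, depth_limit=20):
        """Return a path (list of states) from state to a goal, or None on failure."""
        if is_goal(state):                       # step 1: the starting state is a goal state
            return [state]
        if depth_limit == 0:
            return None
        for succ in successors(state):           # step 2a: generate a successor E
            result = depth_first_search(succ, successors, is_goal, depth_limit - 1)
            if result is not None:               # step 2c: success was signalled below
                return [state] + result
        return None                              # no more successors: signal failure

    if __name__ == "__main__":
        # Toy example: reach 10 from 3 with the operators +1 and *2.
        # Prints [3, 4, 5, 6, 7, 8, 9, 10]: longer than the three-step solution 3 -> 4 -> 5 -> 10,
        # illustrating that DFS may find a sub-optimal solution.
        print(depth_first_search(3, lambda n: [n + 1, n * 2], lambda n: n == 10, depth_limit=7))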
Heuristics
A heuristic is a method that improves the efficiency of the search process. Heuristics are like tour guides: they are good to the extent that they point in generally interesting directions, and bad to the extent that they may miss points of interest to particular individuals. Some heuristics help the search process without sacrificing any claims to completeness that the process might previously have had. Others may occasionally cause an excellent path to be overlooked; by sacrificing completeness they increase efficiency. Heuristics may not find the best solution every time, but they guarantee finding a good solution in a reasonable time. They are particularly useful for solving tough and complex problems whose exact solutions would require impractically long, effectively infinite, time.

Heuristic search
To find a solution in reasonable time, rather than a complete solution in unlimited time, we use heuristics. 'A heuristic function is a function that maps problem state descriptions to measures of desirability, usually represented as numbers.' Heuristic search methods use knowledge about the problem domain and choose promising operators first. These methods use heuristic functions to evaluate how close the next state is to the goal state. To find a solution using a heuristic technique, one should carry out the following steps:
1. Add domain-specific information to select the best path along which to continue searching.
2. Define a heuristic function h(n) that estimates the 'goodness' of a node n. Specifically, h(n) = the estimated cost (or distance) of a minimal-cost path from n to a goal state.
3. The term heuristic means 'serving to aid discovery'; a heuristic is an estimate, based on domain-specific information computable from the current state description, of how close we are to a goal.

Finding a route from one city to another is an example of a search problem in which different search orders and the use of heuristic knowledge are easily understood.
1. State: the current city in which the traveller is located.
2. Operators: roads linking the current city to other cities.
3. Cost metric: the cost of taking a given road between cities.
4. Heuristic information: the search could be guided by the direction of the goal city from the current city, or we could use the airline (straight-line) distance as an estimate of the distance to the goal. A small sketch of such a heuristic function is given at the end of this subsection.

Heuristic search techniques
For complex problems, the traditional algorithms presented above are unable to find a solution within practical time and space limits. Consequently, many special techniques have been developed that use heuristic functions.
• Blind search is not always feasible, because it requires too much time or space (memory). Heuristics are rules of thumb; they do not guarantee a solution to a problem.
• Heuristic search is a weak technique but can be effective if applied correctly; it requires domain-specific information.

Characteristics of heuristic search
• Heuristics are knowledge about the domain, which helps search and reasoning in that domain.
• Heuristic search incorporates domain knowledge to improve efficiency over blind search.
• A heuristic is a function that, when applied to a state, returns a value estimating the merit of that state with respect to the goal.
■ Heuristics may (for various reasons) underestimate or overestimate the merit of a state with respect to the goal.
■ Heuristics that underestimate are desirable and are called admissible.
• A heuristic evaluation function estimates the likelihood of a given state leading to the goal state.
• A heuristic search function estimates the cost from the current state to the goal, provided that the function is efficient.
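A small sketch of a heuristic for the route-finding example above: h(n) is the straight-line (airline) distance from city n to the goal city, computed from hypothetical coordinates. The cities and coordinates below are invented for illustration.

    import math

    # Hypothetical (x, y) map coordinates, in kilometres.
    COORDS = {
        'A': (0, 0),
        'B': (4, 3),
        'C': (9, 0),
        'Goal': (12, 5),
    }

    def h(city, goal='Goal'):
        """Heuristic: straight-line distance from `city` to the goal city.
        It never overestimates the true road distance, so it is admissible."""
        (x1, y1), (x2, y2) = COORDS[city], COORDS[goal]
        return math.hypot(x2 - x1, y2 - y1)

    if __name__ == "__main__":
        for city in ('A', 'B', 'C'):
            print(city, round(h(city), 1))   # a lower h(n) means the city looks closer to the goal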
Heuristic search compared with other search
Heuristic search is compared with brute-force (blind) search below:

Comparison of algorithms
Brute force / blind search:
• Can only search what it already has knowledge about, through explored nodes.
• Has no knowledge of how far a node is from the goal state.
Heuristic search:
• Estimates the 'distance' to the goal state through explored nodes.
• Guides the search process toward the goal node.
• Prefers states (nodes) that lead toward, and not away from, the goal state.

Generate and Test Strategy

Generate-And-Test Algorithm
Generate-and-test is a very simple search algorithm that is guaranteed to find a solution if it is applied systematically and a solution exists.
Algorithm: Generate-And-Test
1. Generate a possible solution.
2. Test to see if this is the expected solution.
3. If the solution has been found, quit; else go to step 1.
The potential solutions that need to be generated vary depending on the kind of problem. For some problems the possible solutions may be particular points in the problem space, and for other problems they may be paths from the start state.
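A minimal sketch of the generate-and-test loop just described: a generator proposes candidate solutions one at a time and a tester checks each against the problem statement. The toy problem (find a non-trivial divisor of 91) and all names are illustrative.

    def generate_and_test(generator, test):
        """Repeatedly generate a possible solution and test it; stop when one passes."""
        for candidate in generator:        # 1. generate a possible solution
            if test(candidate):            # 2. test whether it is the expected solution
                return candidate           # 3. solution found: quit
        return None                        # generator exhausted without success

    if __name__ == "__main__":
        # Toy problem: find a non-trivial divisor of 91 by systematic generation.
        candidates = range(2, 91)
        print(generate_and_test(candidates, lambda d: 91 % d == 0))   # 7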
Figure: Generate and Test
Generate-and-test, like depth-first search, requires that complete solutions be generated for testing. In its most systematic form, it is simply an exhaustive search of the problem space. Solutions can also be generated randomly, but then finding a solution is not guaranteed. This random approach is what is known as the British Museum algorithm: finding an object in the British Museum by wandering randomly.

Systematic Generate-And-Test
While generating complete solutions and generating random solutions are the two extremes, there exists another approach that lies in between: the search process proceeds systematically, but some paths that are unlikely to lead to the solution are not considered. This evaluation is performed by a heuristic function. A depth-first search tree with backtracking can be used to implement a systematic generate-and-test procedure. If some intermediate states are likely to appear often in the tree, it is better to modify the procedure to traverse a graph rather than a tree.

Generate-And-Test and Planning
Exhaustive generate-and-test is very useful for simple problems, but for complex problems even heuristic generate-and-test is not a very effective technique. It may, however, be made effective by combining it with other techniques in such a way that the space to be searched is restricted. The AI program DENDRAL, for example, uses a plan-generate-and-test technique. First, the planning process uses constraint-satisfaction techniques to create lists of recommended and contraindicated substructures. Then the generate-and-test procedure uses those lists so that it is required to explore only a limited set of structures. Constrained in this way, generate-and-test proved highly effective. A major weakness of planning is that it often produces somewhat inaccurate solutions, as there is no feedback from the world. But if it is used only to produce pieces of solutions, this lack of detailed accuracy becomes unimportant.
Hill Climbing
Hill climbing is a heuristic search used for mathematical optimization problems in the field of Artificial Intelligence. Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good solution to the problem. This solution may not be the global optimum.
• In the above definition, 'mathematical optimization problems' implies that hill climbing solves problems where we need to maximize or minimize a given real function by choosing values from the given inputs. An example is the Travelling Salesman Problem, where we need to minimize the distance travelled by the salesman.
• 'Heuristic search' means that this search algorithm may not find the optimal solution to the problem, but it will give a good solution in reasonable time.
• A heuristic function is a function that ranks all the possible alternatives at any branching step of a search algorithm based on the available information. It helps the algorithm select the best route out of the possible routes.

Features of Hill Climbing
1. Variant of generate and test: hill climbing is a variant of the generate-and-test algorithm:
1. Generate a possible solution.
2. Test to see if this is the expected solution.
3. If the solution has been found, quit; else go to step 1.
Hill climbing is called a variant of generate and test because it takes the feedback from the test procedure; this feedback is then used by the generator to decide the next move in the search space.
2. Uses the greedy approach: at any point in the state space, the search moves only in the direction that optimizes the cost function, with the hope of finding the optimal solution at the end.

Types of Hill Climbing
1. Simple hill climbing: it examines the neighbouring nodes one by one and selects the first neighbouring node that improves the current cost as the next node.
Algorithm for simple hill climbing:
Step 1: Evaluate the initial state. If it is a goal state, stop and return success. Otherwise, make the initial state the current state.
Step 2: Loop until a solution state is found or there are no new operators left that can be applied to the current state.
a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b) Evaluate the new state:
i. If the new state is a goal state, stop and return success.
ii. If it is better than the current state, make it the current state and proceed further.
iii. If it is not better than the current state, continue in the loop until a solution is found.
Step 3: Exit.
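A minimal sketch of simple hill climbing as just described: neighbours are examined one by one and the first one that improves on the current state is accepted. The objective function and neighbourhood below are illustrative (maximising f(x) = -(x - 7)^2 over the integers).

    def simple_hill_climbing(start, neighbours, value, is_goal=lambda s: False):
        """Move to the first neighbour that is better than the current state;
        stop at a goal state or when no neighbour improves on the current state."""
        current = start
        while not is_goal(current):
            for candidate in neighbours(current):
                if value(candidate) > value(current):   # first improving neighbour wins
                    current = candidate
                    break
            else:                                       # no neighbour was better: stop
                return current
        return current

    if __name__ == "__main__":
        f = lambda x: -(x - 7) ** 2                     # maximum at x = 7
        nbrs = lambda x: [x - 1, x + 1]                 # integer neighbourhood
        print(simple_hill_climbing(0, nbrs, f))         # 7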
2. Steepest-ascent hill climbing: it first examines all the neighbouring nodes and then selects the node closest to the solution state as the next node.
Step 1: Evaluate the initial state. If it is a goal state, exit; otherwise make the initial state the current state.
Step 2: Repeat these steps until a solution is found or the current state does not change.
i. Let 'target' be a state such that any successor of the current state will be better than it.
ii. For each operator that applies to the current state:
a. apply the operator and create a new state;
b. evaluate the new state;
c. if this state is a goal state, quit; otherwise compare it with 'target';
d. if this state is better than 'target', set 'target' to this state;
e. if 'target' is better than the current state, set the current state to 'target'.
Step 3: Exit.
3. Stochastic hill climbing: it does not examine all the neighbouring nodes before deciding which node to select. It simply selects a neighbouring node at random and decides, based on the amount of improvement in that neighbour, whether to move to that neighbour or to examine another.
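A minimal sketch of steepest-ascent hill climbing: all neighbours are evaluated and the best of them ('target') replaces the current state only if it is an improvement. The same illustrative objective is used as in the earlier sketch.

    def steepest_ascent_hill_climbing(start, neighbours, value):
        """Repeatedly move to the best neighbour; stop when no neighbour is better."""
        current = start
        while True:
            # Let 'target' be the best successor of the current state.
            target = max(neighbours(current), key=value, default=current)
            if value(target) <= value(current):     # no improvement: a local maximum is reached
                return current
            current = target                        # the current state becomes 'target'

    if __name__ == "__main__":
        f = lambda x: -(x - 7) ** 2                 # maximum at x = 7
        nbrs = lambda x: [x - 2, x - 1, x + 1, x + 2]
        print(steepest_ascent_hill_climbing(0, nbrs, f))   # 7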
State Space Diagram for Hill Climbing
The state space diagram is a graphical representation of the set of states our search algorithm can reach plotted against the value of the objective function (the function we wish to maximize).
X-axis: denotes the state space, i.e. the states or configurations our algorithm may reach.
Y-axis: denotes the value of the objective function corresponding to a particular state.
The best solution is the state where the objective function has its maximum value (the global maximum).

Different regions in the State Space Diagram
1. Local maximum: a state which is better than its neighbouring states; however, there exists a state which is better still (the global maximum). This state is better because the value of the objective function here is higher than at its neighbours.
2. Global maximum: the best possible state in the state space diagram; at this state the objective function has its highest value.
3. Plateau / flat local maximum: a flat region of the state space where neighbouring states have the same value.
4. Ridge: a region which is higher than its neighbours but which itself has a slope. It is a special kind of local maximum.
5. Current state: the region of the state space diagram where we are currently present during the search.
6. Shoulder: a plateau that has an uphill edge.

Problems in different regions in Hill Climbing
Hill climbing cannot reach the optimal/best state (global maximum) if it enters any of the following regions:
1. Local maximum: at a local maximum all neighbouring states have values worse than the current state. Since hill climbing uses a greedy approach, it will not move to a worse state, and it terminates; the process ends even though a better solution may exist.
To overcome the local maximum problem: utilize backtracking. Maintain a list of visited states; if the search reaches an undesirable state, it can backtrack to a previous configuration and explore a new path.
2. Plateau: on a plateau all neighbours have the same value, so it is not possible to select the best direction.
To overcome plateaus: make a big jump. Randomly select a state far away from the current state; chances are that we will land in a non-plateau region.
3. Ridge: any point on a ridge can look like a peak because movement in all possible directions is downward. Hence the algorithm stops when it reaches such a state.
To overcome ridges: use two or more rules before testing, i.e. move in several directions at once.

Best First Search (Informed Search)
In BFS and DFS, when we are at a node, we can consider any of the adjacent nodes as the next node; both BFS and DFS therefore blindly explore paths without considering any cost function. The idea of best-first search is to use an evaluation function to decide which adjacent node is most promising and then explore it. Best-first search falls under the category of heuristic (informed) search. We use a priority queue to store the costs of nodes, so the implementation is a variation of BFS: we just need to change the queue to a priority queue.
Algorithm: Best-First-Search(Graph g, Node start)
1) Create an empty PriorityQueue pq;
2) Insert "start" in pq: pq.insert(start)
3) Until pq is empty:
u = pq.deleteMin()
If u is the goal
Exit
Else
For each neighbour v of u
If v is "Unvisited"
Mark v "Visited"
pq.insert(v)
Mark u "Examined"
End procedure

Let us consider the example below. We start from source "S" and search for goal "I" using the given costs and best-first search.
pq initially contains S.
We remove S from pq and process the unvisited neighbours of S, adding them to pq. pq now contains {A, C, B} (C is put before B because C has the lesser cost).
We remove A from pq and process the unvisited neighbours of A, adding them to pq. pq now contains {C, B, E, D}.
We remove C from pq and process the unvisited neighbours of C, adding them to pq. pq now contains {B, H, E, D}.
We remove B from pq and process the unvisited neighbours of B, adding them to pq. pq now contains {H, E, D, F, G}.
We remove H from pq. Since our goal "I" is a neighbour of H, we stop and return.
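A runnable sketch of this walkthrough using heapq as the priority queue. The edge costs below are assumptions chosen only so that the removal order matches the trace above; they are not taken from the notes, whose graph figure is not reproduced here.

    import heapq

    # Illustrative edge costs, consistent with the ordering in the walkthrough above.
    GRAPH = {
        'S': [('A', 3), ('B', 6), ('C', 5)],
        'A': [('D', 9), ('E', 8)],
        'B': [('F', 12), ('G', 14)],
        'C': [('H', 7)],
        'D': [], 'E': [], 'F': [], 'G': [],
        'H': [('I', 5)],
        'I': [],
    }

    def best_first_search(graph, start, goal):
        """Expand the node with the smallest cost first; return the order of expansion."""
        pq = [(0, start)]                 # priority queue ordered by cost
        visited = {start}
        expanded = []
        while pq:
            cost, u = heapq.heappop(pq)   # u = pq.deleteMin()
            expanded.append(u)
            if u == goal:
                return expanded
            for v, edge_cost in graph[u]:
                if v not in visited:
                    visited.add(v)
                    heapq.heappush(pq, (edge_cost, v))
        return expanded

    if __name__ == "__main__":
        print(best_first_search(GRAPH, 'S', 'I'))   # ['S', 'A', 'C', 'B', 'H', 'I']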