EXHAUSTIVE SEARCH
For discrete problems in which no efficient solution method is known, it might
be necessary to test each possibility sequentially in order to determine if it is
the solution.
Such exhaustive examination of all possibilities is known as exhaustive
search, complete search or direct search.
Exhaustive search is simply a brute-force approach to combinatorial problems
(minimization or maximization in optimization problems, and constraint
satisfaction problems).
Reasons to choose the brute-force / exhaustive-search approach as an important
algorithm design strategy:
1. First, unlike some of the other strategies, brute force is applicable to a
very wide variety of problems. In fact, it seems to be the only general
approach for which it is more difficult to point out problems it cannot
tackle.
2. Second, for some important problems (e.g., sorting, searching, matrix
multiplication, string matching), the brute-force approach yields reasonable
algorithms of at least some practical value, with no limitation on instance
size.
3. Third, the expense of designing a more efficient algorithm may be
unjustifiable if only a few instances of a problem need to be solved and a
brute-force algorithm can solve those instances with acceptable speed.
4. Fourth, even if too inefficient in general, a brute-force algorithm can still
be useful for solving small-size instances of a problem.
Exhaustive search is applied to important problems such as the
Traveling Salesman Problem
Knapsack Problem
Assignment Problem.
TRAVELING SALESMAN PROBLEM
The traveling salesman problem (TSP) is one of the best-known combinatorial problems.
It asks for the shortest tour through a given set of n cities that
visits each city exactly once before returning to the city where it started.
The problem can be conveniently modeled by a weighted graph, with the
graph’s vertices representing the cities and the edge weights specifying the
distances. Then the problem can be stated as the problem of finding the
shortest Hamiltonian circuit of the graph.
(A Hamiltonian circuit is defined as a cycle that passes through all the vertices
of the graph exactly once).
A Hamiltonian circuit can also be defined as a sequence of n + 1 adjacent
vertices v_i0, v_i1, . . . , v_i(n-1), v_i0, where the first vertex of the sequence is the
same as the last one and all the other n − 1 vertices are distinct. Without loss
of generality, all circuits can be assumed to start and end at one particular vertex.
Time efficiency
We can get all the tours by generating all the permutations of the n − 1
intermediate cities from a particular starting city, i.e., (n − 1)! permutations.
Consider two intermediate vertices, say, b and c, and take only permutations
in which b precedes c. (This trick implicitly defines a tour’s direction.)
An inspection of Figure 2.4 reveals three pairs of tours that differ only by
their direction. Hence, we could cut the number of vertex permutations by half,
because the total length of a cycle is the same in both directions.
The total number of permutations needed is still (n − 1)!/2, which makes the
exhaustive-search approach impractical for all but very small values of n.
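As an illustration (not from the slides), here is a minimal Python sketch of this exhaustive search, assuming the distances are given as a matrix dist with dist[i][j] the distance between cities i and j (the direction-halving trick above is omitted for clarity):

from itertools import permutations

def tsp_exhaustive(dist):
    # Fix city 0 as the start and try all (n - 1)! orderings of the other cities.
    n = len(dist)
    best_tour, best_len = None, float('inf')
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[k]][tour[k + 1]] for k in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len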
KNAPSACK PROBLEM
Given n items of known weights w1, w2, . . . , wn and values v1, v2, . . . , vn
and a knapsack of capacity W, find the most valuable subset of the items that fit
into the knapsack.
Real-life examples:
A thief who wants to steal the most valuable loot that fits into his knapsack,
A transport plane that has to deliver the most valuable set of items to a
remote location without exceeding the plane’s capacity.
The exhaustive-search approach to this problem leads to generating all the
subsets of the set of n items given, computing the total weight of each subset in
order to identify feasible subsets (i.e., the ones with the total weight not
exceeding the knapsack capacity), and finding a subset of the
largest value among them.
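A minimal Python sketch of this exhaustive search (an illustration, not from the slides), assuming weights and values are parallel lists and capacity is the knapsack capacity W:

from itertools import combinations

def knapsack_exhaustive(weights, values, capacity):
    # Generate every subset of the n items; among the feasible subsets
    # (total weight <= capacity), keep one of the largest total value.
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(weights[i] for i in subset) <= capacity:
                value = sum(values[i] for i in subset)
                if value > best_value:
                    best_value, best_subset = value, subset
    return best_value, best_subset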
Time efficiency:
As given in the example, the solution to the instance of Figure 2.5 is
given in Figure 2.6. Since the number of subsets of an n-element set is
2^n, the exhaustive search leads to an Ω(2^n) algorithm, no matter how
efficiently individual subsets are generated.
Divide and Conquer
Divide-and-conquer is probably the best-known general algorithm design
technique. Though its fame may have something to do with its catchy name, it
is well deserved: quite a few very efficient algorithms are specific
implementations of this general strategy.
Divide-and-conquer algorithms work according to the following general plan:
1. A problem is divided into several sub-problems of the same type, ideally of
about equal size.
2. The sub-problems are solved (typically recursively, though sometimes a
different algorithm is employed, especially when sub-problems become small
enough).
3. If necessary, the solutions to the sub-problems are combined to get a
solution to the original problem.
Merge Sort
Mergesort is a perfect example of a successful application of the divide-and-conquer
technique. It sorts a given array A[0..n − 1] by dividing it into two
halves A[0..n/2 − 1] and A[n/2..n − 1], sorting each of them recursively, and
then merging the two smaller sorted arrays into a single sorted one.
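A minimal Python sketch of this plan (an illustration, not from the slides); it returns a new sorted list rather than sorting in place:

def mergesort(a):
    # Divide: split a into two halves; conquer: sort each half recursively;
    # combine: merge the two sorted halves into one sorted list.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]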
Decrease and Conquer
The decrease-and-conquer technique is based on exploiting
the relationship between a solution to a given instance of a
problem and a solution to its smaller instance.
Once such a relationship is established, it can be exploited
either top down or bottom up. The former leads naturally to a
recursive implementation, although, as one can see from several
examples in this chapter, an ultimate implementation may well
be nonrecursive.
The bottom-up variation is usually implemented iteratively,
starting with a solution to the smallest instance of the problem;
it is sometimes called the incremental approach.
There are three major variations of decrease-and-conquer:
decrease by a constant
decrease by a constant factor
variable-size decrease
In the decrease-by-a-constant variation, the size of an instance is
reduced by the same constant on each iteration of the algorithm.
Typically, this constant is equal to one (Figure 4.1), although other
constant size reductions do happen occasionally.
Consider, as an example, the exponentiation problem of
computing a^n where a ≠ 0 and n is a nonnegative integer.
The relationship between a solution to an instance of size n and
an instance of size n − 1 is obtained by the obvious formula
a^n = a^(n−1) · a.
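A minimal Python sketch of this decrease-by-one computation (an illustration, not from the slides):

def power(a, n):
    # a^n = a^(n-1) * a, with a^0 = 1 (decrease by one).
    return 1 if n == 0 else power(a, n - 1) * a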
The decrease-by-a-constant-factor technique suggests reducing
a problem instance by the same constant factor on each iteration
of the algorithm. In most applications, this constant factor is
equal to two.
For an example, let us revisit the exponentiation problem. If the
instance of size n is to compute a^n, the instance of half its size is
to compute a^(n/2), with the obvious relationship between the two:
a^n = (a^(n/2))^2 for even n. (If n is odd, we compute a^(n−1) by this
formula and multiply the result by a.)
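A minimal Python sketch of this decrease-by-a-constant-factor computation, handling the odd case as just described (an illustration, not from the slides):

def power(a, n):
    # Decrease by a factor of two: a^n = (a^(n//2))^2, times a if n is odd.
    if n == 0:
        return 1
    half = power(a, n // 2)
    return half * half if n % 2 == 0 else half * half * a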
Insertion Sort
It is an application of the decrease-by-one technique to sort an array
A[0..n−1]. Each element is inserted into its proper position among the
previously sorted elements, so after every insertion the processed part of
the array remains sorted.
ALGORITHM InsertionSort(A[0..n − 1])
//Sorts a given array by insertion sort
//Input: An array A[0..n − 1] of n orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 1 to n − 1 do
{
    v ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > v do
    {
        A[j + 1] ← A[j]
        j ← j − 1
    }
    A[j + 1] ← v
}
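For comparison, a runnable Python version of the same algorithm (an illustrative sketch, not part of the original slides):

def insertion_sort(a):
    # On iteration i, insert a[i] into its place among the sorted a[0..i-1].
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]   # shift larger elements one position to the right
            j -= 1
        a[j + 1] = v
    return a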
Topological Sort
For topological sorting to be possible, the
digraph in question must be a DAG (a directed acyclic graph); i.e., if a digraph has no
directed cycles, the topological sorting problem for it has a
solution.
There are two efficient algorithms that both verify whether a
digraph is a dag and, if it is, produce an ordering of vertices
that solves the topological sorting problem.
The first one is based on depth-first search; the second is based
on a direct application of the decrease-by-one technique.
Topological Sorting based on DFS Method
1. Perform a DFS traversal and note the order in which vertices
become dead ends (are popped off the traversal stack).
2. Reversing this order yields a solution to the topological sorting
problem, provided, of course, no back edge has been encountered
during the traversal. If a back edge has been encountered, the
digraph is not a DAG, and topological sorting of its vertices is
impossible.
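A minimal Python sketch of this DFS-based method (an illustration, not from the slides), assuming the digraph is given as a dictionary mapping each vertex to a list of its out-neighbors:

def topological_sort_dfs(graph):
    # Append a vertex when it becomes a dead end; the reversed order of
    # these appends is a topological order.  A back edge means the digraph is not a DAG.
    order, state = [], {}                 # state: 'gray' = on the stack, 'black' = done
    def dfs(v):
        state[v] = 'gray'
        for w in graph.get(v, []):
            if state.get(w) == 'gray':
                raise ValueError('back edge found: the digraph is not a DAG')
            if w not in state:
                dfs(w)
        state[v] = 'black'
        order.append(v)                   # v became a dead end
    for v in graph:
        if v not in state:
            dfs(v)
    return list(reversed(order))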
Illustration
a) Digraph for which the topological sorting problem needs to be
solved.
b) DFS traversal stack with the subscript numbers indicating the
popping-off order.
c) Solution to the problem. Here we have drawn the edges of the
digraph, and they all point from left to right as the problem’s
statement requires. It is a convenient way to check visually the
correctness of a solution to an instance of the topological sorting
problem.
Topological Sorting using the decrease-and-conquer technique
Method:
The algorithm is based on a direct implementation of the
decrease-(by one)-and-conquer technique:
1. Repeatedly identify in the remaining digraph a source, which is
a vertex with no incoming edges, and delete it along with all the
edges outgoing from it. (If there are several sources, break the tie
arbitrarily. If there are none, stop because the problem cannot be
solved.)
2. The order in which the vertices are deleted yields a solution to
the topological sorting problem.
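A minimal Python sketch of this source-removal method (an illustration, not from the slides), assuming the digraph is given as a dictionary that maps every vertex, even one with no outgoing edges, to a list of its out-neighbors:

def topological_sort_source_removal(graph):
    # Repeatedly delete a source (in-degree 0) and its outgoing edges;
    # the deletion order is a topological order.
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    sources = [v for v in graph if indegree[v] == 0]
    order = []
    while sources:
        v = sources.pop()                 # break ties arbitrarily
        order.append(v)
        for w in graph[v]:
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)
    if len(order) != len(graph):
        raise ValueError('no source left: the digraph has a cycle')
    return order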
Binary Tree Traversal
We see how the divide-and-conquer technique can be applied to
binary trees. A binary tree T is defined as a finite set of nodes
that is either empty or consists of a root and two disjoint binary
trees TL and TR called, respectively, the left and right subtree of
the root.
We usually think of a binary tree as a special case of an ordered
tree.
Since the definition itself divides a binary tree into two smaller
structures of the same type, the left subtree and the right subtree,
many problems about binary trees can be solved by applying the
divide-and-conquer technique.
As an example, let us consider a recursive algorithm for
computing the height of a binary tree.
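A minimal Python sketch of this recursive height computation (an illustration, not from the slides), using the convention that the height of the empty tree is −1 and assuming a simple Node class with left and right fields:

class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def height(tree):
    # Height(T) = max(Height(T_left), Height(T_right)) + 1; the empty tree has height -1.
    if tree is None:
        return -1
    return 1 + max(height(tree.left), height(tree.right))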
MULTIPLICATION OF LARGE INTEGERS
Some applications like modern cryptography require
manipulation of integers that are over 100 decimal digits long.
Since such integers are too long to fit in a single word of a
modern computer, they require special treatment.
In the conventional pen-and-pencil algorithm for multiplying
two n-digit integers, each of the n digits of the first number is
multiplied by each of the n digits of the second number for the
total of n^2 digit multiplications.
The divide-and-conquer multiplication of two n-digit numbers requires three
multiplications of n/2-digit numbers, so the recurrence for the number of
digit multiplications M(n) is
M(n) = 3M(n/2) for n > 1, M(1) = 1.
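Solving this recurrence for n = 2^k gives M(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585, an asymptotic improvement over n^2. A minimal Python sketch of this three-multiplication (Karatsuba-style) scheme, splitting the decimal representations in half (an illustration, not from the slides):

def multiply(x, y):
    # Multiply nonnegative integers with three recursive multiplications
    # of roughly half-size numbers.
    if x < 10 or y < 10:                      # one-digit base case
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    x1, x0 = divmod(x, p)                     # x = x1*10^half + x0
    y1, y0 = divmod(y, p)
    c2 = multiply(x1, y1)                     # high halves
    c0 = multiply(x0, y0)                     # low halves
    c1 = multiply(x1 + x0, y1 + y0) - c2 - c0 # cross terms
    return c2 * p * p + c1 * p + c0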
STRASSEN’S MATRIX MULTIPLICATION
Strassen’s matrix multiplication algorithm finds the product C of two
2 × 2 matrices A and B with just seven multiplications, as
opposed to the eight required by the brute-force algorithm.
Thus, to multiply two 2 × 2 matrices, Strassen’s algorithm makes
7 multiplications and 18 additions/subtractions, whereas the
brute-force algorithm requires 8 multiplications and 4 additions.
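For reference, a minimal Python sketch of Strassen's seven products for 2 × 2 matrices, using index [row][column] (an illustration, not from the slides); it performs exactly 7 multiplications and 18 additions/subtractions:

def strassen_2x2(a, b):
    m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1])
    m2 = (a[1][0] + a[1][1]) * b[0][0]
    m3 = a[0][0] * (b[0][1] - b[1][1])
    m4 = a[1][1] * (b[1][0] - b[0][0])
    m5 = (a[0][0] + a[0][1]) * b[1][1]
    m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1])
    m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1])
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 + m3 - m2 + m6]]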
These numbers should not lead us to multiplying 2 × 2 matrices
by Strassen’s algorithm. Its importance stems from its asymptotic
superiority as matrix order n goes to infinity.
Let A and B be two n × n matrices where n is a power of 2. (If n is
not a power of 2, matrices can be padded with rows and columns
of zeros.) We can divide A, B, and their product C into four n/2 ×
n/2 submatrices each as follows:
The asymptotic efficiency of Strassen’s matrix
multiplication algorithm
If M(n) is the number of multiplications made by Strassen’s
algorithm in multiplying two n × n matrices, where n is a power
of 2, the recurrence relation is M(n) = 7M(n/2) for n > 1,
M(1) = 1.
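Solving this recurrence by backward substitution for n = 2^k gives
M(n) = 7^k = 7^(log2 n) = n^(log2 7) ≈ n^2.807,
which is asymptotically smaller than the n^3 multiplications required by the brute-force algorithm.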