1. Decrease and Conquer (Chapter-4) and Divide and Conquer (Chapter-5)
Team – I Presentation (Solvi Magnusson, Ishan Anand, Rishi Tej Talluri)
Reference: Introduction to the Design and Analysis of Algorithms, 3rd Edition by Anany Levitin
Note: All slides follow the prescribed syllabus; no topics outside it are covered for either chapter.
2. Decrease and Conquer: Introduction
• The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to its smaller instance.
• Once such a relationship is established, it can be exploited either top down or bottom up.
• The bottom-up variation is usually implemented iteratively, starting with a solution to the smallest instance of the problem; it is called the incremental approach.
3. Major variations of decrease and conquer
• There are three major variations of decrease and conquer:
1. Decrease by a constant
In this variation, the size of an instance is reduced by the same constant on each iteration of the algorithm. Typically, this constant is equal to one, although other constant-size reductions do happen occasionally.
2. Decrease by a constant factor
In this variation, a problem instance is reduced by the same constant factor on each iteration of the algorithm. In most applications, the constant factor is equal to two.
3. Variable size decrease
In this variation, the size-reduction pattern varies from one iteration of the algorithm to another. Euclid's algorithm for computing the greatest common divisor provides a good example of such a situation.
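As a sketch of variable-size decrease (the slides use pseudocode, not Python, so this is an illustrative rendering, not the text's own code), Euclid's algorithm can be written as:

```python
def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n).

    The instance size (the second argument) shrinks by a different,
    input-dependent amount on each iteration -- a variable-size decrease.
    """
    while n != 0:
        m, n = n, m % n
    return m
```

Note how the decrease per step depends on the inputs: gcd(60, 24) reaches the answer in two steps, while other pairs of similar size may take more.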
4. Insertion Sort
• Insertion sort is a simple sorting algorithm that builds the
final sorted array (or list) one item at a time.
• It is much less efficient on large lists than more advanced algorithms
such as quicksort, heapsort, or merge sort.
• Insertion sort iterates, consuming one input element on each repetition and growing a sorted output list.
• At each iteration, insertion sort removes one element from the input
data, finds the location it belongs within the sorted list, and inserts it
there.
• It repeats until no input elements remain.
5. Insertion sort Algorithm
ALGORITHM InsertionSort(A[0..n − 1])
//Sorts a given array by insertion sort
//Input: An array A[0..n − 1] of n orderable elements
//Output: Array A[0..n − 1] sorted in non-decreasing order
for i ← 1 to n − 1 do
    v ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← v
(The number of key comparisons in this algorithm obviously depends on the nature of the input.)
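A direct Python transcription of the pseudocode above (illustrative only; the text itself works in pseudocode) might look like this:

```python
def insertion_sort(a):
    """Sort list a in place in non-decreasing order, following the
    InsertionSort pseudocode: insert a[i] into the sorted prefix a[0..i-1]."""
    for i in range(1, len(a)):
        v = a[i]             # element to insert into the sorted prefix
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]  # shift larger elements one position right
            j -= 1
        a[j + 1] = v
    return a
```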
6. Insertion sort Algorithm – Worst Case
• In the worst case, A[j] > v is executed the largest number of times, i.e., for every j = i − 1, . . . , 0. Since v = A[i], it happens if and only if A[j] > A[i] for j = i − 1, . . . , 0.
• Thus, for the worst-case input, we get A[0] > A[1] (for i = 1), A[1] >
A[2] (for i = 2), . . . , A[n − 2] > A[n − 1] (for i = n − 1).
• The number of key comparisons for such an input is
Cworst(n) = (n − 1) + (n − 2) + . . . + 1 = n(n − 1)/2 ∈ Θ(n²)
7. Insertion sort Algorithm – Best Case
• In the best case, the comparison A[j] > v is executed only once on every iteration of the outer loop. It happens if and only if A[i − 1] ≤ A[i] for every i = 1, . . . , n − 1, i.e., if the input array is already sorted in non-decreasing order.
• Thus, for sorted arrays, the number of key comparisons is
Cbest(n) = n − 1 ∈ Θ(n)
8. Insertion sort Algorithm – Average Case
• A rigorous analysis of the algorithm’s average-case efficiency is based
on investigating the number of element pairs that are out of order.
• It shows that on randomly ordered arrays, insertion sort makes on
average half as many comparisons as on decreasing arrays, which is as
follows –
Cavg(n) ≈ n²/4 ∈ Θ(n²)
9. Topological Sorting (Directed Graph Prerequisites)
• A directed graph, or digraph for short, is a graph with directions
specified for all its edges.
• The adjacency matrix and adjacency lists are still two principal means
of representing a digraph.
• There are only two notable differences between undirected and
directed graphs in representing them:
1. The adjacency matrix of a directed graph does not have to be symmetric.
2. An edge in a directed graph has just one (not two) corresponding node in
the digraph’s adjacency lists.
10. Topological Sorting- Introduction
• A topological sort or topological ordering of a directed graph is a linear
ordering of its vertices such that for every directed edge ab from vertex
a to vertex b, a comes before b in the ordering.
• Depth-first search and breadth-first search are principal traversal
algorithms for traversing digraphs as well, but the structure of
corresponding forests can be more complex than for undirected graphs.
• A directed cycle in a digraph is a sequence of three or more of its
vertices that starts and ends with the same vertex and in which every
vertex is connected to its immediate predecessor by an edge directed
from the predecessor to the successor.
11. Topological Sort – Contd.
• The topological sorting problem can be posed for an arbitrary digraph, but it is easy to see that the problem cannot have a solution if the digraph has a directed cycle.
• Thus, for topological sorting to be possible, the digraph in question must be a dag (directed acyclic graph).
• It turns out that being a dag is not only necessary but also sufficient
for topological sorting to be possible; i.e., if a digraph has no directed
cycles, the topological sorting problem for it has a solution.
12. Topological Sort – Algorithms Types
• There are two efficient algorithms that both verify whether a digraph
is a dag and, if it is, produce an ordering of vertices that solves the
topological sorting problem.
• ALGORITHM -1
The first algorithm is a simple application of depth-first search
• ALGORITHM -2
The second algorithm is based on a direct implementation of the decrease-and-
conquer technique
13. Topological Sort – Algorithm 1
• It performs a DFS traversal and notes the order in which vertices become dead ends.
• Reversing this order yields a solution to the topological sorting
problem if no back edge is encountered during traversal.
• If a back edge has been encountered, then topological sorting of its
vertices is impossible.
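The DFS-based algorithm can be sketched in Python as follows (an illustrative implementation, with the adjacency-list representation `{vertex: [neighbors]}` chosen here for convenience):

```python
def topo_sort_dfs(graph):
    """DFS-based topological sort of a digraph given as {vertex: [neighbors]}.

    A vertex is appended to `order` when it becomes a dead end; reversing
    that order yields a topological ordering.  A back edge (an edge to a
    vertex still on the recursion stack) means the digraph has a directed
    cycle, so no topological ordering exists.
    """
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {v: WHITE for v in graph}
    order = []

    def dfs(u):
        color[u] = GRAY
        for w in graph.get(u, []):
            if color.get(w, WHITE) == GRAY:   # back edge: not a dag
                raise ValueError("digraph has a directed cycle")
            if color.get(w, WHITE) == WHITE:
                dfs(w)
        color[u] = BLACK
        order.append(u)                        # u is now a dead end

    for v in graph:
        if color[v] == WHITE:
            dfs(v)
    return order[::-1]
```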
14. Topological Sort – Algorithm 2
• It is based on a direct implementation of the decrease-and-conquer
technique.
• That means the algorithm repeatedly identifies in the remaining digraph a source, which is a vertex with no incoming edges, and deletes it along with all the edges outgoing from it.
• The order in which the vertices are deleted yields a solution to the
topological sorting problem.
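A possible Python sketch of the source-removal algorithm (illustrative; it tracks in-degrees instead of physically deleting edges, which has the same effect):

```python
def topo_sort_source_removal(graph):
    """Decrease-and-conquer topological sort: repeatedly delete a source.

    graph: digraph as {vertex: [neighbors]}.  Returns the deletion order,
    or raises ValueError if no source remains while vertices are left
    (which means the digraph has a directed cycle).
    """
    indegree = {v: 0 for v in graph}
    for u in graph:
        for w in graph[u]:
            indegree[w] = indegree.get(w, 0) + 1

    sources = [v for v, d in indegree.items() if d == 0]
    order = []
    while sources:
        u = sources.pop()          # delete a source ...
        order.append(u)
        for w in graph.get(u, []):  # ... along with its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)   # w has become a source

    if len(order) != len(indegree):
        raise ValueError("digraph has a directed cycle")
    return order
```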
15. Solutions from Topological Sort Algorithms
• The solution obtained by the DFS-based algorithm (Algorithm-1) may differ from the one obtained by the source-removal algorithm (Algorithm-2).
• Both of them are correct as the topological sorting problem may have
several alternative solutions.
16. Searching and Insertion in Binary Search tree
• Consider a binary tree whose nodes contain elements of a set of
orderable items, one element per node, so that for every node all
elements in the left subtree are smaller and all the elements in the right
subtree are greater than the element in the subtree’s root.
• When we need to search for an element of a given value v in such a tree,
we do it recursively in the following manner.
• If the tree is empty, the search ends in failure. If the tree is not empty, we
compare v with the tree’s root K(r). If they match, a desired element is
found and the search can be stopped; if they do not match, we continue
with the search in the left subtree of the root if v < K(r) and in the right
subtree if v > K(r).
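The recursive search just described can be sketched in Python (an illustrative rendering; the `Node` class is an assumption introduced here to represent the tree):

```python
class Node:
    """A binary search tree node: all keys in the left subtree are smaller,
    all keys in the right subtree are greater than this node's key."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def bst_search(root, v):
    """Search a BST for key v; return the matching node or None."""
    if root is None:
        return None                       # empty tree: search fails
    if v == root.key:
        return root                       # match at the subtree's root
    if v < root.key:
        return bst_search(root.left, v)   # smaller keys live on the left
    return bst_search(root.right, v)      # larger keys live on the right
```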
17. Searching and Insertion in Binary Search tree
Contd.
• Thus, on each iteration of the algorithm, the problem of searching in a
binary search tree is reduced to searching in a smaller binary search
tree.
• The most sensible measure of the size of a search tree is its height;
obviously, the decrease in a tree’s height normally changes from one
iteration to another of the binary tree search—thus giving us an
excellent example of a variable-size-decrease algorithm.
• In the worst case of the binary tree search, the tree is severely
skewed. This happens, in particular, if a tree is constructed by
successive insertions of an increasing or decreasing sequence of keys.
18. Divide-and-Conquer
• Divide-and-conquer is probably the best-known general algorithm
design technique.
• A problem is divided into several subproblems of the same type,
ideally of about equal size.
• The subproblems are solved (typically recursively, though sometimes
a different algorithm is employed, especially when subproblems
become small enough).
• If necessary, the solutions to the subproblems are combined to get a
solution to the original problem.
19. Divide-and-Conquer Contd.
• The typical diagram of the technique depicts the case of dividing a problem into two smaller subproblems, by far the most widely occurring case (at least for divide-and-conquer algorithms designed to be executed on a single-processor computer).
21. Merge Sort
• Mergesort is a perfect example of a successful application of the divide-and-conquer technique.
• It sorts a given array A[0..n − 1] by dividing it into two halves A[0..n/2
− 1] and A[n/2..n − 1], sorting each of them recursively, and
then merging the two smaller sorted arrays into a single sorted one.
22. Merge Sort Algorithm
ALGORITHM Mergesort(A[0..n − 1])
//Sorts array A[0..n − 1] by recursive mergesort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in non-decreasing order
if n > 1
copy A[0..n/2 − 1] to B[0..n/2 − 1]
copy A[n/2..n − 1] to C[0..n/2 − 1]
Mergesort(B[0..n/2 − 1])
Mergesort(C[0..n/2 − 1])
Merge(B, C, A)
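A Python sketch of the pseudocode above (illustrative; it returns a new list rather than sorting in place, and supplies the `Merge` procedure the pseudocode refers to):

```python
def mergesort(a):
    """Recursive mergesort: split, sort each half, then merge."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    b = mergesort(a[:mid])   # sort the first half A[0..n/2 - 1]
    c = mergesort(a[mid:])   # sort the second half A[n/2..n - 1]
    return merge(b, c)

def merge(b, c):
    """Merge two sorted lists into one sorted list, one comparison per step."""
    result, i, j = [], 0, 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            result.append(b[i]); i += 1
        else:
            result.append(c[j]); j += 1
    return result + b[i:] + c[j:]    # append whichever half remains
```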
23. Merge Sort Efficiency
• Assuming for simplicity that n is a power of 2, the recurrence
relation for the number of key comparisons C(n) is
C(n) = 2C(n/2) + Cmerge(n) for n > 1, C(1) = 0.
• In the merging stage, exactly one comparison is made at each step, after which the total number of elements in the two arrays still needing to be processed is reduced by 1; hence Cmerge(n) = n − 1 in the worst case, and solving the recurrence gives Cworst(n) = n log₂ n − n + 1 ∈ Θ(n log n).
24. Quicksort
• Quicksort is the other important sorting algorithm that is based on the divide-and-conquer approach.
• Unlike merge sort, which divides its input elements according to their
position in the array, quicksort divides them according to their value.
25. Quicksort Algorithm
ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
s ←Partition(A[l..r]) //s is a split position
Quicksort(A[l..s − 1])
Quicksort(A[s + 1..r])
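An illustrative Python sketch of the pseudocode above. Note one deliberate substitution: Levitin's text uses Hoare's two-scan partition, while this sketch uses the simpler Lomuto single-scan partition with A[l] as the pivot; the overall quicksort structure is the same.

```python
def quicksort(a, l=0, r=None):
    """In-place quicksort of a[l..r]: partition, then sort the two sides."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = partition(a, l, r)   # s is the split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)
    return a

def partition(a, l, r):
    """Lomuto partition of a[l..r] around pivot a[l] (Hoare's scheme is
    used in the text); returns the pivot's final index."""
    pivot = a[l]
    s = l
    for i in range(l + 1, r + 1):
        if a[i] < pivot:
            s += 1
            a[s], a[i] = a[i], a[s]   # move smaller element to the left part
    a[l], a[s] = a[s], a[l]           # put the pivot in its final place
    return s
```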
26. Quicksort Efficiency
• The number of key comparisons made before a partition is achieved is n + 1 if the scanning indices cross over and n if they coincide.
• If all the splits happen in the middle of the corresponding subarrays, we have the best case.
• The number of key comparisons in the best case satisfies the recurrence
Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.
27. Binary Tree Traversals
• We usually think of a binary tree as a special case of an ordered tree.
• A binary tree T is defined as a finite set of nodes that is either empty
or consists of a root and two disjoint binary trees TL and TR called,
respectively, the left and right subtree of the root.
28. Binary Tree Traversals Algorithm
ALGORITHM Height(T)
//Computes recursively the height of a binary tree
//Input: A binary tree T
//Output: The height of T
if T = ∅ return −1
else return max{Height(Tleft), Height(Tright)} + 1
29. Binary Tree Traversals
• We measure the problem’s instance size by the number of nodes n(T )
in a given binary tree T .
• The number of comparisons made to compute the maximum of two
numbers and the number of additions A(n(T )) made by the algorithm
are the same.
We have the following recurrence relation for A(n(T )):
A(n(T )) = A(n(Tleft)) + A(n(Tright)) + 1 for n(T ) > 0, A(0) = 0.
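The Height pseudocode can be sketched in Python as follows (illustrative; the nested-pair representation of a tree, where a node is a pair (left, right) and the empty tree is None, is an assumption introduced here):

```python
def height(t):
    """Height of a binary tree; a node is a pair (left, right),
    the empty tree is None.

    Mirrors the recursive pseudocode: Height(empty) = -1,
    Height(T) = max(Height(Tleft), Height(Tright)) + 1.
    Each non-empty node contributes exactly one addition, matching the
    recurrence A(n(T)) = A(n(Tleft)) + A(n(Tright)) + 1.
    """
    if t is None:
        return -1
    left, right = t
    return max(height(left), height(right)) + 1
```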