Algorithm vs. Program

Algorithm                      Program
Written at design time         Written at implementation time
Needs domain knowledge         Written by a programmer
Expressed in any language      Written in a computer language
Judged by analysis             Judged by testing
Algorithm
• A complete, detailed and precise step-by-step method
for solving a problem, independent of the software or
hardware of the computer.
• A well-defined computational procedure that takes
some value, or a set of values, as input and produces
some value, or a set of values, as output.
• Sequence of computational steps that transform the
input into the output.
An algorithm can be expressed in two ways:
1. In a natural language such as English, or in structured English-like statements called pseudocode
2. In the form of diagrammatic symbols, called a flowchart
Characteristics of an Algorithm
Input: must have 0 or more inputs
Output: must produce at least one output
Finiteness: must terminate correctly after a finite number of steps (in finite time)
Definiteness: each and every statement must have a clear, unambiguous meaning
Effectiveness: every statement written should serve some purpose/objective
Analysis of an Algorithm?
In computer science there can be multiple algorithms for solving the same
problem; e.g., the sorting problem has many algorithms, such as insertion sort,
selection sort, quick sort and many more. Algorithm analysis helps us determine
which of them is efficient in terms of the time and space consumed.
The goal of analysis is to compare algorithms (or solutions), mainly in terms
of running time but also in terms of other factors (e.g., memory,
developer's effort, etc.).
How to analyze an Algorithm?
swap(a, b)
{
temp = a;    // 1
a = b;       // 1
b = temp;    // 1
}
Time function: f(n) = 3, so this is a constant-time algorithm.
Space function: a (1 word) + b (1 word) + temp (1 word) gives S(n) = 3,
so the algorithm is also constant in space.
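The constant-cost claim can be checked by instrumenting the pseudocode. Below is a small Python sketch (Python is used for an executable illustration, since the slides' pseudocode is not tied to a language) that counts the statements swap executes:

```python
def swap_with_count(a, b):
    """Swap two values, counting each elementary statement executed."""
    count = 0
    temp = a; count += 1   # temp = a
    a = b;    count += 1   # a = b
    b = temp; count += 1   # b = temp
    return a, b, count

# The count is 3 no matter what the inputs are: f(n) = 3, constant time.
```

Running `swap_with_count` with any pair of values always reports 3 statements, which is exactly the slide's f(n) = 3.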
How to analyze an Algorithm?
sum(A, n)
{
s = 0;                    // 1
for (i = 0; i < n; i++)   // the condition is tested n+1 times
{
s = s + A[i];             // n
}
return s;                 // 1
}
Time function: f(n) = 2n + 3, a linear-time algorithm.
(For n = 5 the condition i < n is tested for i = 0, 1, 2, 3, 4 and once
more, failing, at i = 5: n+1 tests in all.)
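The slide's tally can be reproduced in code. A Python sketch (an illustration with the same counting convention, not part of the slides) that counts condition tests and body executions for sum:

```python
def sum_with_count(A, n):
    """Sum the first n elements of A, tallying statements as the slide does."""
    count = 0
    s = 0
    count += 1                 # s = 0               : 1
    for i in range(n):
        count += 1             # test i < n succeeds : n times
        s = s + A[i]
        count += 1             # loop body           : n times
    count += 1                 # final failing test  : the (n+1)th test
    count += 1                 # return s            : 1
    return s, count

# For n = 5: count = 2*5 + 3 = 13, matching f(n) = 2n + 3.
```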
How to analyze an Algorithm?
sum(A, B, n)
{
for (i = 0; i < n; i++)           // n+1
{
for (j = 0; j < n; j++)           // n(n+1)
{
C[i][j] = A[i][j] + B[i][j];      // n * n
}
}
}
Time function: f(n) = 2n^2 + 2n + 1, a quadratic-time algorithm.
Space function: A, B and C take n*n words each, while n, i and j take
1 word each, so S(n) = 3n^2 + 3, i.e. quadratic space.
Exercise: what is the time function for multiplication of two matrices?
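As a worked answer to the exercise, here is a Python sketch (an illustration, not part of the slides) that counts the innermost assignments for n×n matrix addition versus matrix multiplication:

```python
def matrix_op_counts(n):
    """Count innermost assignments for n x n matrix addition and multiplication."""
    add_count = 0
    for i in range(n):
        for j in range(n):
            add_count += 1      # C[i][j] = A[i][j] + B[i][j]: runs n*n times

    mul_count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                mul_count += 1  # C[i][j] += A[i][k] * B[k][j]: runs n*n*n times
    return add_count, mul_count

# For n = 4: addition does 16 = n^2 assignments, multiplication 64 = n^3,
# so addition is quadratic while the standard multiplication is cubic.
```

The full statement count for multiplication in the slides' convention would include the loop-condition tests as lower-order terms, but the dominant n^3 term already answers the exercise: O(n^3).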
Algorithm Design Techniques?
1. Recursive
2. Divide and Conquer
3. Greedy Approach
4. Dynamic Programming
5. Branch and Bound
6. Backtracking
7. Randomized
Efficiency of an Algorithm (Termination and correctness)
Efficiency of an algorithm means how fast it can produce the
correct result for the given problem.
It depends upon the algorithm's complexity.
There are two important factors for judging the complexity of an
algorithm:
1. Space Complexity
It refers to the amount of memory required by the algorithm for
its execution and the generation of the final result.
2. Time Complexity (running time of a program)
It refers to the amount of computer time required by an
algorithm for its execution and the generation of the final output. This
time includes both compile time and run time. In other words, it is the
number of machine instructions which a program executes, and this
number depends on the input and the algorithm used.
Two requirements for good programming are an appropriate data structure
and an appropriate algorithm.
Time complexity or space complexity is basically a function f(n), where n is the
input size.
Time complexity is usually considered more critical than space complexity.
Given two algorithms for the same problem: if we need one that
requires less memory space, then we choose the first algorithm at the
cost of more execution time. On the other hand, if we need one
that requires less time for execution, then we choose the
second algorithm at the cost of more memory space.
The time and space complexity can be expressed using a function f(n),
where n is the input size. Equivalently,
the number of statements executed in the program for n elements of
data is a function of the number of elements.
Expressing the complexity is required
• to predict the rate of growth of complexity as input size increases
• to find which algorithm is most efficient.
Time –Space trade-off
The best algorithm to solve a given problem is one that
requires less memory space and less time to run to
completion. But in practice, it is not always possible to
obtain both of these objectives.
One algorithm may require less memory space but may
take more time to complete its execution. On the other
hand, the other algorithm may require more memory
space but may take less time to run to completion. Thus,
we have to sacrifice one at the cost of the other. In other
words, there is a space-time trade-off between algorithms.
The rate at which the running time increases as a function
of the input is called the Rate of Growth. The common growth rates are:
Constant time Algorithm: Time complexity is O(1)
An algorithm of efficiency O(1) takes the same amount of time regardless
of the input size. E.g. adding an element at the start of a linked list.
• Linear time Algorithm: Time complexity is O(n)
An algorithm of efficiency O(n) requires only one pass over the entire
input. E.g. a linear search algorithm.
• Logarithmic time Algorithm: Time complexity is O(log n)
The binary search algorithm is an example of O(log n).
• Linear Logarithmic time Algorithm: Time complexity is O(n log n)
An example of an algorithm with this efficiency is merge sort.
• Quadratic Time Algorithm: Time complexity is O(n^2)
e.g. shortest path between two nodes in a graph
• Cubic Time Algorithm: Time complexity is O(n^3)
e.g. matrix multiplication
• Exponential Time Algorithm: Time complexity is O(2^n)
e.g. the Towers of Hanoi problem
Polynomial time Algorithm: Time complexity is O(n^m), m > 1
Selection sort, with O(n^2), is an example of this efficiency.
Slowest to fastest growth rate:
1 < log n < n < n log n < n^2 < n^3 < 2^n < n!
Number of operations for different functions of n

n    O(1)  O(n)  O(log n)  O(n log n)  O(n^2)  O(n^3)
1    1     1     0         0           1       1
2    1     2     1         2           4       8
4    1     4     2         8           16      64
8    1     8     3         24          64      512
16   1     16    4         64          256     4096
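The table can be regenerated with a few lines of Python (assuming, as the table does, base-2 logarithms and powers of two for n):

```python
import math

def ops(n):
    """Operation counts for one row of the table: O(1), O(n), O(log n), ..."""
    lg = int(math.log2(n))          # log2 n, exact for powers of two
    return (1, n, lg, n * lg, n * n, n ** 3)

for n in (1, 2, 4, 8, 16):
    print(n, *ops(n))               # reproduces the table rows
```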
Listed from slowest to fastest growth:
• 1
• log n
• n
• n log n
• n^2
• n^3
• 2^n
• n!
1. for (i = 0; i < n; i++)
   { statement(s) }
   Condition tests: n+1; body: n
   f(n) = 2n + 1
   Linear time algorithm: O(n)

2. for (i = 0; i < n; i++)
   { for (j = 0; j < n; j++)
     { statement(s) } }
   Outer condition: n+1; inner condition: n(n+1); body: n * n
   f(n) = 2n^2 + 2n + 1
   Quadratic time algorithm: O(n^2)

3. for (i = 0; i < n; i++)
   { for (j = 0; j < i; j++)
     { statement(s) } }
   The inner loop body runs 0 times for i = 0, once for i = 1, twice for
   i = 2, and so on.
   f(n) = 1 + 2 + 3 + …… + n (counting the failing condition tests)
   f(n) = n(n+1)/2
   Quadratic time: O(n^2)
Types of loops

1. for (i = 0; i < n; i++)
   { statement(s) }
   Linear loop: f(n) = n

2. for (i = 1; i < n; i *= 2)
   { statement(s) }
   Logarithmic loop: f(n) = log n
   (The counter must start at 1, not 0, or i *= 2 would never advance.)

3. for (i = 0; i < n; i++)
   { for (j = 0; j < n; j++)
     { statement(s) } }
   Quadratic loop: f(n) = n^2

4. for (i = 0; i < n; i++)
   { for (j = 0; j < i; j++)
     { statement(s) } }
   Dependent quadratic loop: f(n) = n^2

5. for (i = 0; i < n; i++)
   { for (j = 1; j < n; j *= 2)
     { statement(s) } }
   Linear logarithmic loop: f(n) = n log n
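Each loop shape can be verified empirically by counting body executions. A Python sketch (an illustration; the logarithmic loop starts at i = 1, since doubling from 0 would loop forever):

```python
def count_linear(n):
    c = 0
    for i in range(n):            # i = 0 .. n-1
        c += 1
    return c                      # n executions

def count_log(n):
    c, i = 0, 1
    while i < n:                  # i = 1, 2, 4, 8, ...
        c += 1
        i *= 2
    return c                      # about log2(n) executions

def count_quadratic(n):
    c = 0
    for i in range(n):
        for j in range(n):
            c += 1
    return c                      # n*n executions

def count_dependent(n):
    c = 0
    for i in range(n):
        for j in range(i):        # inner bound depends on i
            c += 1
    return c                      # 0+1+...+(n-1) = n(n-1)/2 executions

def count_nlogn(n):
    c = 0
    for i in range(n):
        j = 1
        while j < n:              # inner loop runs about log2(n) times
            c += 1
            j *= 2
    return c                      # about n*log2(n) executions
```

For n = 16 the counts come out as 16, 4, 256, 120 and 64, tracking n, log n, n^2, n^2/2 and n log n respectively.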
Categories of running time complexity are:
• Worst Case: Defines the input for which the algorithm
takes the longest time (an upper bound on the running time for
any input). It gives us an assurance that the algorithm will
not go beyond this limit.
• Best Case: Defines the input for which the algorithm
takes the lowest time (a lower bound on the running time for
any input).
• Average Case: Provides a prediction about the running
time of the algorithm and assumes that the input is
random (an estimate of the running time for an average
input).
Asymptotic Notation? Mathematical notations used to describe the
running time of an algorithm.
Having the expressions for the best case, average case and worst case,
for all three cases we need to identify the upper bound, lower
bound and tight bound.
In order to represent these bounds we need some notation.
Let us assume that the running time of the given algorithm is represented
by a function f(n).
Following are the asymptotic notations:
• Big O notation
• Big Omega notation
• Big Theta notation
Big-Oh notation
• Big-O notation, written O(n), expresses the order of
magnitude of an algorithm.
• Big-O notation is a way of ranking algorithms by how much time
they take to execute: how many operations will be done when the
program is executed?
• Big-O notation is concerned with what happens for a large
number of elements, i.e. the asymptotic order.
• Big-O notation provides an upper bound for f(n).
• This means f(n) can do better but not worse than the
specified value. Here f(n) is the number of statements
executed in the program for n data elements.
Big-Oh notation
By definition,
given two functions f(n) and g(n) of a positive integer n,
f(n) = O(g(n)) iff there exist positive constants c and n0
such that f(n) <= c·g(n) for all integers n > n0.
Hence g provides an upper bound. The constant c
depends on the following factors:
• programming language used
• quality of compiler or interpreter
• CPU speed
• size of main memory and
• the algorithm itself.
Big-Omega notation
By definition,
given two functions f(n) and g(n) of a positive integer n,
f(n) = Ω(g(n)) iff there exist positive constants c and n0
such that f(n) >= c·g(n) for all integers n > n0.
Hence g provides a lower bound. The constant c depends on the
following factors:
• programming language used
• quality of compiler or interpreter
• CPU speed
• size of main memory and
• the algorithm itself.
Big-Theta notation
By definition,
given two functions f(n) and g(n) of a positive integer n,
f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0
such that c1·g(n) <= f(n) <= c2·g(n) for all n > n0.
Hence g provides a tight bound. The constants depend
on the following factors:
• programming language used
• quality of compiler or interpreter
• CPU speed
• size of main memory and
• the algorithm itself.
How to find the upper bound, lower bound and tight bound of
a f(n)?
Example: Find the upper bound, tight bound and lower bound of
f(n) = 2n + 3.
Solution: for the upper bound, by definition f(n) <= c·g(n).
Here assume c·g(n) = 3n
(Note: always increase the coefficient of the first term by 1 for the upper bound.)
Now, by putting in the values of f(n) and c·g(n), we get
2n + 3 <= 3n, so c = 3 and g(n) = n
For n = 1, 5 <= 3 False
For n = 2, 7 <= 6 False
For n = 3, 9 <= 9 True
Hence, f(n) = O(n) for n >= 3 and c = 3.
Recall the growth-rate ordering: 1 < log n < n < n log n < n^2 < n^3 < 2^n < n! < n^n
Any faster-growing g(n) also works as an upper bound, just less tightly:
with g(n) = n^2 we have 2n + 3 <= 3n^2, so f(n) = O(n^2) as well.
For the lower bound, by definition f(n) >= c·g(n).
Here assume c·g(n) = 2n
(Note: keep the coefficient of the first term the same for the lower bound.)
Now, by putting in the values of f(n) and c·g(n), we get
2n + 3 >= 2n, so c = 2 and g(n) = n
For n = 1, 5 >= 2 True
Hence, f(n) = Ω(n) for n >= 1 and c = 2.
For the tight bound, by definition c1·g(n) <= f(n) <= c2·g(n).
Here assume c1·g(n) = 2n and c2·g(n) = 3n
(Note: use the lower-bound coefficient for c1 and the upper-bound coefficient for c2.)
Now, by putting in the values of f(n), c1·g(n) and c2·g(n), we get
2n <= 2n + 3 <= 3n, so c1 = 2, c2 = 3 and g(n) = n
For n = 1, 2 <= 5 <= 3 False
For n = 2, 4 <= 7 <= 6 False
For n = 3, 6 <= 9 <= 9 True
Hence, f(n) = Θ(n) for n >= 3 with c1 = 2 and c2 = 3.
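These bound checks for f(n) = 2n + 3 are easy to confirm numerically; a Python sketch:

```python
def f(n):
    return 2 * n + 3

# Upper bound: 2n+3 <= 3n holds for every n >= 3, so f(n) = O(n) with c = 3, n0 = 3.
assert all(f(n) <= 3 * n for n in range(3, 1000))
assert not f(2) <= 3 * 2                      # the bound fails below n0

# Lower bound: 2n+3 >= 2n holds for every n >= 1, so f(n) = Omega(n) with c = 2.
assert all(f(n) >= 2 * n for n in range(1, 1000))

# Tight bound: 2n <= 2n+3 <= 3n holds for every n >= 3, so f(n) = Theta(n).
assert all(2 * n <= f(n) <= 3 * n for n in range(3, 1000))
```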
Asymptotic Properties
Show that the running time f(n) = n^3 + 20n + 1 is O(n^3).
Method 1: By definition, f(n) = O(g(n)) if f(n) <= c·g(n) for some
positive constants c and n0, for all n > n0.
Here f(n) = n^3 + 20n + 1 and g(n) = n^3.
So n^3 + 20n + 1 <= c·n^3
or (n^3 + 20n + 1)/n^3 <= c
or c >= 1 + 20/n^2 + 1/n^3
or c >= 22 for n >= n0 = 1.
Therefore the Big-Oh condition holds for n >= n0 = 1 and c >= 22.
* A larger value of n0 permits a smaller value of c.
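A quick numeric check of this derivation, as a Python sketch:

```python
def f(n):
    return n ** 3 + 20 * n + 1

# With c = 22 and n0 = 1: f(n) <= 22 n^3 for all n >= 1.
assert all(f(n) <= 22 * n ** 3 for n in range(1, 1000))

# The smallest workable c shrinks as n0 grows: c >= 1 + 20/n0^2 + 1/n0^3.
def c_needed(n0):
    return 1 + 20 / n0 ** 2 + 1 / n0 ** 3

assert c_needed(1) == 22.0     # at n0 = 1 we need c >= 22
assert c_needed(10) < 1.3      # a larger n0 permits a much smaller c
```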
Examples
• 4n^2 = O(n^3)
• 400n^3 + 20n^2 = O(n^3)
• 2n^2 + 2n + 1 = O(n^2)
• 3n + 7 = O(n)
• (n+1)^3 = O(n^3)
• 10n^3 + 20n ≠ O(n^2)
• n^2 - 2n + 1 ≠ O(n)
• n^3 - 3n^2 + 3n - 1 ≠ O(n^2)
• n^3 + 20n + 1 is O(n^3)
• n^3 + 20n + 1 is not O(n^2)
Recurrence relation
Many algorithms are recursive in nature. When we analyze them,
we get a recurrence relation for time complexity.
A recurrence is an equation or inequality that describes a
function in terms of its values on smaller inputs.
To solve a recurrence relation means to obtain a function, defined
on the natural numbers, that satisfies the recurrence.
For example in Merge Sort, to sort a given array, we divide it in
two halves and recursively repeat the process for the two halves.
Finally we merge the results. Time complexity of Merge Sort can
be written as T(n) = 2T(n/2) + cn. There are many other algorithms
like Binary Search, Tower of Hanoi, etc.
There are mainly three ways for solving recurrences.
• Master Method
• Iteration Method
• Recursion Tree Method
void tail(int n)          // T(n)
{
  if (n > 0) {
    printf("%d", n);      // 1
    tail(n - 1);          // T(n-1)
  }
}
T(n) = 1             n = 0
T(n) = T(n-1) + 1    n > 0
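The recurrence can be confirmed by instrumenting the function. A Python sketch that counts the calls tail makes, each of which does a constant amount of work:

```python
def tail_calls(n):
    """Number of calls tail(n) makes, itself included: T(n) = T(n-1) + 1, T(0) = 1."""
    if n > 0:
        return 1 + tail_calls(n - 1)   # T(n) = T(n-1) + 1
    return 1                           # T(0) = 1
```

Unrolling the recurrence gives T(n) = n + 1, which the counter reproduces, so tail runs in linear time.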
Master Method
• The problem is divided into a sub-problems, each of size n/b, and f(n)
is the cost of dividing the problem and combining the solutions.
• We can apply this method if the recurrence is of the form
• T(n) = aT(n/b) + f(n), where a >= 1, b > 1 and f(n) >= 0.
• There are three cases:
• Case 1: If f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
• Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
• Case 3: If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, then T(n) = Θ(f(n)),
provided the regularity condition a·f(n/b) <= c·f(n) holds for some c < 1.
• T(n)= 4T(n/2)+n
• T(n)= T(n/2)+1
• T(n)= 2T(n/2)+n^4
• T(n)= 3T(n/2)+n^2
• T(n)= 4T(n/2)+n^2
• T(n)= 3T(n/2)+n^3
• T(n)= 8T(n/4)+n
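As a study aid for the practice recurrences above, here is a hedged Python sketch that mechanizes the three cases for the special form f(n) = n^d (the helper and its n^d restriction are illustrative assumptions, not part of the slides; regularity always holds for polynomial f):

```python
import math

def master(a, b, d):
    """Classify T(n) = a T(n/b) + n^d by the master theorem's three cases."""
    crit = math.log(a, b)                 # critical exponent log_b a
    if d < crit:
        return f"Theta(n^{crit:g})"       # Case 1: leaves dominate
    if d == crit:
        return f"Theta(n^{d:g} log n)"    # Case 2: all levels contribute equally
    return f"Theta(n^{d:g})"              # Case 3: root dominates

print(master(4, 2, 1))   # T(n) = 4T(n/2) + n    -> Theta(n^2)
print(master(2, 2, 4))   # T(n) = 2T(n/2) + n^4  -> Theta(n^4)
print(master(1, 2, 0))   # T(n) = T(n/2) + 1     -> Theta(n^0 log n), i.e. Theta(log n)
```

Note the comparison `d == crit` relies on exact float equality, which is fine for small integer cases like these but would need a tolerance for exponents such as log_4 8.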
Extension of Master Theorem
If f(n) = Θ(n^(log_b a) · log^k n):
• Case 1: If k > -1, then T(n) = Θ(n^(log_b a) · log^(k+1) n)
• Case 2: If k = -1, then T(n) = Θ(n^(log_b a) · log log n)
• Case 3: If k < -1, then T(n) = Θ(n^(log_b a))
Practice:
• T(n) = 2T(n/2) + n log n
• T(n) = 4T(n/2) + n^2 log n
• T(n) = 2T(n/2) + n/log n
Iteration Method
• T(n)= T(n-1)+1 n>0 , T(n)=1 n=0
• T(n)=T(n-1)+n n>0, T(n)=1 n=0
• T(n)=T(n-1)+log n n>0, T(n)=1 n=0
• T(n)=2T(n/2)+n n>1, T(n)=1 n=1
• T(n)=2T(n-1) n>1, T(n)=1, n=1
Example 2: Consider the recurrence
• T(n) = T(n-1) + 1 and T(1) = 1.
Solution:
Consider T(n) = T(n-1) + 1 ……. (1)
Putting n-1 for n in equation (1), we get
T(n-1) = T(n-2) + 1 …….. (2)
Similarly, we get
T(n-2) = T(n-3) + 1 ……… (3)
Now putting the value of T(n-1) from (2) into equation (1), we get
T(n) = T(n-2) + 1 + 1 = T(n-2) + 2 ……….(4)
Now putting the value of T(n-2) from (3) into equation (4), we get
T(n) = T(n-3) + 1 + 1 + 1 = T(n-3) + 3
Repeating the procedure i times, we get
T(n) = T(n-i) + i ………….. (5)
Assume that n-i = 1; then T(1) = 1 and i = n-1.
Putting the values of n-i and i into equation (5), we get
T(n) = 1 + (n-1) = n = Θ(n).
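The closed form can be cross-checked by evaluating the recurrence directly, as in this short Python sketch:

```python
def T(n):
    if n == 1:
        return 1             # T(1) = 1
    return T(n - 1) + 1      # T(n) = T(n-1) + 1

# The closed form T(n) = n holds for every n tried: Theta(n).
assert all(T(n) == n for n in range(1, 200))
```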
Iteration Method
Example 1: Consider the recurrence
• T(n) = 1 if n = 1
• T(n) = 2T(n-1) if n > 1
Solution:
Consider T(n) = 2T(n-1) ….. (1)
Putting n-1 for n in equation (1), we get
T(n-1) = 2T(n-2) ……. (2)
Similarly we get T(n-2) = 2T(n-3), and so on.
Putting the value of T(n-1) from (2) into equation (1), we get
T(n) = 2[2T(n-2)] = 2^2 T(n-2) …. (3)
Now putting the value of T(n-2) into equation (3), we get
T(n) = 2^2[2T(n-3)] = 2^3 T(n-3)
Repeating the procedure i times, we get
T(n) = 2^i T(n-i) …. (4)
Assume that n-i = 1, i.e. i = n-1, in equation (4) and get
T(n) = 2^(n-1) T(1)
     = 2^(n-1) · 1 {T(1) = 1, given}
     = 2^(n-1)
     = Θ(2^n)
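Again, the closed form can be cross-checked directly with a small Python sketch:

```python
def T(n):
    if n == 1:
        return 1             # T(1) = 1
    return 2 * T(n - 1)      # T(n) = 2T(n-1)

# The closed form T(n) = 2^(n-1) holds for every n tried: Theta(2^n).
assert all(T(n) == 2 ** (n - 1) for n in range(1, 30))
```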
Recursion Tree Method
1. A pictorial representation of an iteration method which is in
the form of a tree where at each level nodes are expanded.
2. In general, we consider the second term in recurrence as root.
3. It is useful when the divide & Conquer algorithm is used.
4. In Recursion tree, each root and child represents the cost of a
single sub-problem.
5. We sum the costs within each level of the tree to obtain
a set of per-level costs, and then sum all per-level costs to
determine the total cost of all levels of the recursion.
6. A Recursion Tree is best used to generate a good guess, which
can be verified by the Substitution Method.
• T(n)=2T(n/2)+n
• T(n)=T(n/3)+T(2n/3)+n
• T(n)=2T(n/2)+n^2
• T(n)=T(n/4)+T(n/2)+n^2
Example 1
We have to obtain the asymptotic bound of T(n) = 2T(n/2) + n using the
recursion tree method.
The root costs n. Level 1 has two nodes costing n/2 each, level 2 has four
nodes costing n/4 each, level 3 has eight nodes costing n/8 each, and in
general level k has 2^k nodes costing n/2^k each. Every level therefore
contributes a total cost of n.
The leaves are reached when n/2^k = 1, i.e. 2^k = n, so the tree has
k = log2 n levels.
Total time = kn = n log2 n
T(n) = Θ(n log2 n)
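The guess can be sanity-checked by evaluating the recurrence for powers of two, where the unrolled sum is exactly T(n) = n log2 n + n (taking T(1) = 1 as the assumed base case); a Python sketch:

```python
import math

def T(n):
    if n == 1:
        return 1                 # assumed base case T(1) = 1
    return 2 * T(n // 2) + n     # T(n) = 2T(n/2) + n

# For n = 2^k, unrolling the tree gives T(n) = n log2 n + n exactly.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * int(math.log2(n)) + n
```

The lower-order + n term is absorbed by the Θ, confirming T(n) = Θ(n log2 n).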
Example 2
We have to obtain the asymptotic bound of T(n) = T(n/3) + T(2n/3) + n
using the recursion tree method.
The root costs n. Its children cost n/3 and 2n/3; their children cost
n/9, 2n/9, 2n/9 and 4n/9; and so on. Every complete level again sums to n.
The shortest root-to-leaf path repeatedly takes the n/3 branch and ends
when n/3^k = 1, i.e. 3^k = n, so k = log3 n and
Min. total time = kn = n log3 n.
The longest path repeatedly takes the 2n/3 branch and ends when
n/(3/2)^k = 1, i.e. (3/2)^k = n, so k = log3/2 n and
Max. total time = kn = n log3/2 n.
The two bounds differ only by a constant factor, so T(n) = Θ(n log n).
Master Theorem
T(n) = aT(n/b) + f(n), with a >= 1 and b > 1 constant and f(n) a function,
can be interpreted as: the problem is split into a sub-problems of size n/b,
and f(n) is the cost of dividing the problem and combining the solutions.
Algorithm for the DAA agscsnak javausmagagah

More Related Content

PPTX
Unit ii algorithm
PPTX
BCSE202Lkkljkljkbbbnbnghghjghghghghghghghgh
PDF
BCS401 ADA First IA Test Question Bank.pdf
PDF
12200223054_SrijanGho;sh_DAA_19.pdfkmkmm
PPTX
Algorithms & Complexity Calculation
PPTX
Data Structures and Agorithm: DS 22 Analysis of Algorithm.pptx
PPTX
Algorithm.pptx
PPTX
Algorithm.pptx
Unit ii algorithm
BCSE202Lkkljkljkbbbnbnghghjghghghghghghghgh
BCS401 ADA First IA Test Question Bank.pdf
12200223054_SrijanGho;sh_DAA_19.pdfkmkmm
Algorithms & Complexity Calculation
Data Structures and Agorithm: DS 22 Analysis of Algorithm.pptx
Algorithm.pptx
Algorithm.pptx

Similar to Algorithm for the DAA agscsnak javausmagagah (20)

PDF
Ch1. Analysis of Algorithms.pdf
PPTX
design analysis of algorithmaa unit 1.pptx
PPTX
2. Introduction to Algorithm.pptx
PPTX
Design Analysis of Alogorithm 1 ppt 2024.pptx
PPTX
Analysis of Algorithm full version 2024.pptx
PPTX
Design and analysis of algorithms unit1.pptx
PPTX
Unit 1, ADA.pptx
PPTX
Unit 1.pptx
PDF
Data Structure & Algorithms - Mathematical
PPTX
complexity big oh notation notation.pptx
PPT
Data Structures- Part2 analysis tools
PPTX
DAA-Unit1.pptx
PPT
Introduction to design and analysis of algorithm
PPTX
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
PPTX
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
PPTX
Searching Algorithms
PDF
Algorithm Analysis.pdf
PPTX
algorithmanalysis and effciency.pptx
PPTX
02 Introduction to Data Structures & Algorithms.pptx
PPTX
Unit i basic concepts of algorithms
Ch1. Analysis of Algorithms.pdf
design analysis of algorithmaa unit 1.pptx
2. Introduction to Algorithm.pptx
Design Analysis of Alogorithm 1 ppt 2024.pptx
Analysis of Algorithm full version 2024.pptx
Design and analysis of algorithms unit1.pptx
Unit 1, ADA.pptx
Unit 1.pptx
Data Structure & Algorithms - Mathematical
complexity big oh notation notation.pptx
Data Structures- Part2 analysis tools
DAA-Unit1.pptx
Introduction to design and analysis of algorithm
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
FALLSEM2022-23_BCSE202L_TH_VL2022230103292_Reference_Material_I_25-07-2022_Fu...
Searching Algorithms
Algorithm Analysis.pdf
algorithmanalysis and effciency.pptx
02 Introduction to Data Structures & Algorithms.pptx
Unit i basic concepts of algorithms
Ad

Recently uploaded (20)

PDF
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
PPT
Project quality management in manufacturing
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PPTX
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
DOCX
573137875-Attendance-Management-System-original
PDF
Arduino robotics embedded978-1-4302-3184-4.pdf
PDF
Digital Logic Computer Design lecture notes
PPTX
UNIT 4 Total Quality Management .pptx
PDF
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PDF
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
PDF
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
PPTX
Construction Project Organization Group 2.pptx
PPTX
Sustainable Sites - Green Building Construction
PPTX
Lecture Notes Electrical Wiring System Components
PDF
PPT on Performance Review to get promotions
PPTX
bas. eng. economics group 4 presentation 1.pptx
PPTX
OOP with Java - Java Introduction (Basics)
PPTX
CH1 Production IntroductoryConcepts.pptx
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
Project quality management in manufacturing
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
573137875-Attendance-Management-System-original
Arduino robotics embedded978-1-4302-3184-4.pdf
Digital Logic Computer Design lecture notes
UNIT 4 Total Quality Management .pptx
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
CYBER-CRIMES AND SECURITY A guide to understanding
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
Construction Project Organization Group 2.pptx
Sustainable Sites - Green Building Construction
Lecture Notes Electrical Wiring System Components
PPT on Performance Review to get promotions
bas. eng. economics group 4 presentation 1.pptx
OOP with Java - Java Introduction (Basics)
CH1 Production IntroductoryConcepts.pptx
Ad

Algorithm for the DAA agscsnak javausmagagah

  • 1. Algorithm Program At Design time At Implementation time Domain knowledge Programmer In any language Computer language Analysis Testing
  • 2. Algorithm • A complete, detailed and precise step by step method for solving a problem independently of software or hardware of the computer. • A well-defined computational procedure that takes some value, or a set of values, as input and produces some value, or a set of values, as output. • Sequence of computational steps that transform the input into the output. An algorithm can be expressed in two ways 1. In any natural language such as English called pseudo code 2. In the form of diagrammatic symbols called Flow chart
  • 3. Characteristics of an Algorithm Input : must have 0 or more input data Output: must have at least one output Finiteness: must be terminated correctly in finite time or finite steps Definiteness: each and every statement must have clear/unambiguous meaning Effectiveness: Every statement written should have some purpose/objective
  • 4. Analysis of an Algorithm? Algorithm analysis helps us determining which of them is efficient in terms of time and space consumed. in computer science there can be multiple algorithms exist for solving the same problem e.g. sorting problem has lot of algorithms like insertion sort, selection sort, quick sort and many more. The goal of is to compare algorithms (or solutions) mainly in terms of running time but also in terms of other factors (e.g., memory, developer's effort etc.) Goal of analysis of Algorithm?
  • 5. swap(a , b) { temp a; a=b; b=temp; } How to analyze an Algorithm? Time function 1 1 1 f(n) =3 Constant algorithm space function a= 1 b= 1 temp=1 S(n) =3 Constant algorithm
  • 6. sum(A , n) { s=0; for(i=0;i<n; i++) { s=s+a[i]; } return s; } How to analyze an Algorithm? Time function 1 n+1 n 1 f(n) =2n+3 Linear time algorithm i=0 i=1 i=2 i=3 i=4 i=5 x
  • 7. sum(A ,B, n) { for(i=0;i<n; i++) { for(j=0;j<n; j++) { C[i][j]=A[i][j]+ B[i][j]; } } How to analyze an Algorithm? Time function n+1 n(n+1) n * n f(n) =2n^2+2n+2 quadratic time algorithm Space function A n* n B n * n C n * n n 1 i 1 j 1 f(n) =3n^2+3 Quadratic time algorithm Time function for multiplication of two matrices?
  • 8. Algorithm Design Techniques? 1. Recursive 2. Divide and Conquer 3. Greedy Approach 4. Dynamic Programming 5. Branch and Bound 6. Backtracking 7. Randomized
  • 9. Efficiency of an Algorithm (Termination and correctness) Efficiency of an algorithm means how fast it can produce the correct result for the given problem. It depends upon its complexity. There are two important factors for judging the complexity of an algorithm are 1. Space Complexity It refers to the amount of memory required by the algorithm for its execution and generation of final result.
  • 10. 2. Time complexity ( running time of a program) It refers to the amount of computer time required by an algorithm for its execution and generation of final output. This time include both compile time and run time. In other words, no of machine instructions which a program executes and this number depends on input and algorithm used. Two points for good programming are appropriate data structure and appropriate algorithm Time complexity or space complexity is basically a f(n) where n is the input size. Time complexity is more critical than space complexity
  • 11. If we need an algorithm that requires less memory space, then we choose the first algorithm at the cost of more execution time. On the other hand if we need an algorithm that requires less time for execution, then we choose the second algorithm at the cost of more memory space. The time and space complexity can be expressed using a function f(n) where n is the input size. Or The number of statements executed in the program for n elements of the data is a function of the number of elements. Expressing the complexity is required • to predict the rate of growth of complexity as input size increases • to find which algorithm is most efficient.
  • 12. Time –Space trade-off The best algorithm to solve a given problem is one that requires less memory space and less time to run to completion. But in practice, it is not always possible to obtain both of these objectives. One algorithm may require less memory space but may take more time to complete its execution. On the other hand, the other algorithm may require more memory space but may take less time to run to completion. Thus, we have to sacrifice one at the cost of other. In other words, there is Space-Time trade-off between algorithms.
  • 13. The rate at which the running time increases as a function of input is called Rate of Growth. They are: Constant time Algorithm: Time complexity is O(1) An algorithm of efficiency O(1) is that the algorithm returns always return the same value regardless of input . E.g. Adding an element to the start of Linked list. • Linear time Algorithm: Time complexity is O(n) An algorithm of efficiency O(n) is that the algorithm require only one pass over an entire input. E.g. a linear search algorithm . • Logarithmic time Algorithm: Time complexity is O(log n) The binary search algorithm is another example of a O(log n). • Linear Logarithmic time Algorithm: Time complexity is O(n log n). An example of an algorithm with this efficiency is merge sort
  • 14. • Quadratic Time Algorithm: Time complexity is O(n2 ) e.g. Shortest path between two nodes in a graph • Cubic Time Algorithm : Time complexity is O(n3 ) e.g. Matrix Multiplication • Exponential Time Algorithm: Time complexity is O(2n ) e.g. The Towers of Hanoi problem Polynomial time Algorithm: Time complexity is O(n ͫ) m>1 selection sort is the example of this efficiency. Slowest to fastest growth rate: 1< log n< n< n log n< n2 < n3 < 2n < n!
  • 15. Number of operations for different functions of n n O(1) O(n) O(log n) O(n log n) O(n^2) O(n^3) 1 1 1 0 0 1 1 2 1 2 1 2 4 8 4 1 4 2 8 16 64 8 1 8 3 24 64 512 16 1 16 4 64 256 4096
  • 16. Listed from slowest to fastest growth: • 1 • log n • n • n log n • n2 • n3 • 2n • n!
  • 17. 1. for(i=0;i<n; i++) { statement(s)} n+1 n f(n)=2n+1 Linear Time algorithm : O(n) 2. for(i=0;i<n; i++) { for(j=0;j<n; i++) { statement(s)} } n+1 n*(n+1) n* n f(n)=2n2 +2n+2 Quadratic Time algorithm : O(n2 ) 3. for(i=0;i<n; i++) { for(j=0;j<i; j++) { statement(s)} } i j no. of times 0 0 x 0 1 0 1 1 x 2 0 2 1 2 x n 0 n 1 2 . . n f(n)=1+2+3+……..+n F(n)=n(n+1)/2 Quadratic Time O(n2 )
  • 18. 1. for(i=0;i<n; i++) { statement(s)} Linear loop: f(n)= n 3. for(i=0;i<n; i++) { for(j=0;j<n; j++) { statement(s)} } 4. for(i=0;i<n; i++) { for(j=0;j<i; j++) { statement(s)} } Types of loops 2. for(i=0;i<n; i*=2) { statement(s)} logrithmic loop: f(n)=log n Quadratic loop: f(n)= n2 Dependent Quadratic loop: f(n)= n2 5. for(i=0;i<n; i++) { for(j=1;j<n; j*=2) { statement(s)} } Linear Logrithmic loop: f(n)= nlogn
  • 19. Categories of running time complexity are: • Worst Case: Defines the input for which the algorithm takes huge time (An upper bound of the running time for any input). It gives us an assurance that the algorithm will not go beyond this limit. • Best Case: Defines the input for which the algorithm takes lowest time (An lower bound of the running time for any input). Average case: Provides a prediction about the running time of the algorithm and assumes that the input is random(An estimate of the running time for an average input).
  • 20. Asymptotic Notation? Mathematical notations used to describe the running time of an algorithm. Having the expressions for best case, average case and worst case, for all the three cases we need to identify the upper bound, lower bounds and tight bound. In order to represent these upper bound and lower bounds we need some syntax. Let us assume that the given algorithm is represented in the form of function . Following are the asymptotic notations • Big O notation • Big Omega notation • Big theta notation
  • 21. Big-Oh notation • Big-O notation is expressed as O(n) is the order of the magnitude of algorithm. • Big-O notation is a way of ranking about how much time it takes for an algorithm to execute • How many operations will be done when the program is executed? • Big-O notation is concerned with what happens for large number of elements - asymptotic order. • Big-O notation provides a strict upper bound for f(n) . • This means the f(n) can do better but not worse than the specified value. Here f(n) is the number of statements executed in the program for n data elements.
  • 22. Big-Oh notation By Definition, if there are two functions f(n) and g(n) for positive integer n then f(n) = O(g(n) ) iff positive constants c and n0 exists such that f(n) <=cg(n) whenever c>0 for all integers n>n0. Hence g provides upper bound. C depends on the following factors: • programming language used • quality of compiler or interpreter • CPU speed • size of main memory and • algorithm itself.
  • 23. Big-Omega notation By Definition, if there are two functions f(n) and g(n) for positive integer n then f(n) = omega(g(n) ) iff positive constants c and n0 exists such that f(n) >=cg(n) whenever c>0 for all integers n>n0. Hence g provides lower bound. c depends on the following factors: • programming language used • quality of compiler or interpreter • CPU speed • size of main memory and • algorithm itself.
  • 24. Big-Theta notation By Definition, if there are two functions f(n) and g(n) for positive integer n then f(n) = Θ(g(n) ) iff positive constants c and k exists such that c1 g(n) < f(n) < c2 g(n) whenever c1 , c2 >0 for all n>n0. Hence g provides upper bound. c depends on the following factors: • programming language used • quality of compiler or interpreter • CPU speed • size of main memory and • algorithm itself.
  • 25. How to find the upper bound, lower bound and average bound of a f(n)? Example: Find the upper bound, tight bound and lower bound of the f(n) = 2n+3 Solution: for upper bound, by definition f(n)<cg(n) Here assume cg(n)=3n (Note: always increase the coefficient of first term by 1 for upper bound) Now, by putting the values of f(n) and cg(n), we get 2n+3<=3n So, c=3 and g(n) = n For n=1, 5<=3 False For n=2, 7<=6 False For n=3, 9<=9 True Hence, f(n)=O(n) for n>=3 and c = 3 1< log n< n< n log n< n2 < n3 < 2n < n!
  • 26. f(n) = 2n+3 g(n) = n2 1< log n< n< n log n< n2 < n3 < 2n < n!<n^n 2n+3<= 3n2
  • 27. for lower bound, by definition f(n)> cg(n) Here assume cg(n)=2n (Note: Keep the coefficient of first term same for lower bound) Now, by putting the values of f(n) and cg(n), we get 2n+3>2n So, c=2 and g(n) = n For n=1, 5>2 True Hence, f(n)=Ω(n) for n>=1 and c = 2 for tight bound, by definition, c1 g(n) < f(n) < c2 g(n) Here assume c1 g(n) =2n and c2 g(n) =3n (Note: Keep the coefficient of first term same for lower bound) Now, by putting the values of f(n) and c1 g(n) and c2 g(n) , we get 2n< 2n+3 < 3n So, c1 =2, c1 =3 and g(n) = n For n=1, 2< 5 < 3 False For n=2, 4< 5 < 6 True Hence, f(n)= Θ(n) for n>=2 and c1 = 2 and c2 = 3
• 29. Show that the running time f(n) = n^3 + 20n + 1 is O(n^3).
Method 1: By definition f(n) = O(g(n)) if f(n) <= c·g(n) for some positive constants c and n0, for all n > n0.
Here f(n) = n^3 + 20n + 1 and g(n) = n^3.
So n^3 + 20n + 1 <= c·n^3, or (n^3 + 20n + 1)/n^3 <= c, or c >= 1 + 20/n^2 + 1/n^3.
At n = 1 this gives c >= 22, so the Big-Oh condition holds for n >= n0 = 1 and c >= 22.
* A larger value of n0 permits a smaller value of c.
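A quick numeric sketch of both claims: c = 22 works from n0 = 1, and (illustrating the remark above) a larger n0 such as 5 permits a much smaller c such as 2 (these particular values are my own illustration, not from the slide):

```python
def f(n):
    return n**3 + 20 * n + 1

# c = 22 from n0 = 1, since 1 + 20/n**2 + 1/n**3 <= 22 for all n >= 1
assert all(f(n) <= 22 * n**3 for n in range(1, 1000))
# Larger n0 permits smaller c: c = 2 works from n0 = 5 (but fails at n = 4)
assert all(f(n) <= 2 * n**3 for n in range(5, 1000))
assert not f(4) <= 2 * 4**3
```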
• 32. Examples
• 4n^2 = O(n^3)
• 400n^3 + 20n^2 = O(n^3)
• 2n^2 + 2n + 1 = O(n^2)
• 3n + 7 = O(n)
• (n+1)^3 = O(n^3)
• 10n^3 + 20n ≠ O(n^2)
• n^2 - 2n + 1 ≠ O(n)
• n^3 - 3n^2 + 3n - 1 ≠ O(n^2)
• n^3 + 20n + 1 is O(n^3)
• n^3 + 20n + 1 is not O(n^2)
• 33. Recurrence relation
Many algorithms are recursive in nature; when we analyze them, we get a recurrence relation for the time complexity. A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function, defined on the natural numbers, that satisfies the recurrence. For example, in Merge Sort, to sort a given array we divide it into two halves, recursively repeat the process for each half, and finally merge the results. The time complexity of Merge Sort can therefore be written as T(n) = 2T(n/2) + cn. Many other algorithms, such as Binary Search and Tower of Hanoi, are analyzed the same way.
• 34. There are mainly three ways of solving recurrences:
• Master Method
• Iteration Method
• Recursion Tree Method
void tail(int n)        // cost: T(n)
{
  if (n > 0)
  {
    printf("%d", n);    // 1
    tail(n - 1);        // T(n-1)
  }
}
T(n) = 1           if n = 0
T(n) = T(n-1) + 1  if n > 0
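The recurrence above can be confirmed empirically by counting calls; a minimal sketch mirroring the slide's tail(n) (the counter argument is my addition, not in the original code):

```python
def tail(n, counter):
    """Mirror of the slide's tail(n): one unit of work per invocation."""
    counter[0] += 1          # count this call (the n = 0 base case included)
    if n > 0:
        tail(n - 1, counter)

# T(n) = T(n-1) + 1 with T(0) = 1 solves to T(n) = n + 1 total calls
for n in (0, 1, 5, 50):
    counter = [0]
    tail(n, counter)
    assert counter[0] == n + 1
```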
• 35. Master Method
• The problem is divided into a sub-problems, each of size n/b, and f(n) is the cost to divide the problem and combine the solutions.
• We can apply this method if the recurrence is of the form T(n) = aT(n/b) + f(n), where a >= 1, b > 1 and f(n) >= 0.
• There are three cases (compare f(n) with n^(log_b a)):
• Case 1: If f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
• Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
• Case 3: If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, then T(n) = Θ(f(n)), provided the regularity condition a·f(n/b) <= c·f(n) holds for some c < 1.
• 36. Exercises
• T(n) = 4T(n/2) + n
• T(n) = T(n/2) + 1
• T(n) = 2T(n/2) + n^4
• T(n) = 3T(n/2) + n^2
• T(n) = 4T(n/2) + n^2
• T(n) = 3T(n/2) + n^3
• T(n) = 8T(n/4) + n
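For recurrences like the exercises above, where f(n) = n^d, the three cases reduce to comparing d with log_b a. A sketch, with master_case as a hypothetical helper (not from the slides):

```python
import math

def master_case(a, b, d, eps=1e-9):
    """Classify T(n) = a*T(n/b) + n**d by comparing d with log_b(a)."""
    e = math.log(a, b)        # critical exponent log_b a
    if d < e - eps:
        return "case 1"       # leaves dominate: T(n) = Theta(n^(log_b a))
    if d > e + eps:
        return "case 3"       # root dominates: T(n) = Theta(n^d)
    return "case 2"           # balanced: T(n) = Theta(n^d log n)

assert master_case(4, 2, 1) == "case 1"   # T(n)=4T(n/2)+n   -> Theta(n^2)
assert master_case(2, 2, 4) == "case 3"   # T(n)=2T(n/2)+n^4 -> Theta(n^4)
assert master_case(4, 2, 2) == "case 2"   # T(n)=4T(n/2)+n^2 -> Theta(n^2 log n)
assert master_case(8, 4, 1) == "case 1"   # T(n)=8T(n/4)+n   -> Theta(n^1.5)
```

Note that for f(n) = n^d the regularity condition of case 3 holds automatically, so it is not checked here.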
• 37. Extension of Master Theorem
If f(n) = Θ(n^(log_b a) · log^k n):
• Case 1: If k > -1, then T(n) = Θ(n^(log_b a) · log^(k+1) n)
• Case 2: If k = -1, then T(n) = Θ(n^(log_b a) · log log n)
• Case 3: If k < -1, then T(n) = Θ(n^(log_b a))
Exercises:
• T(n) = 2T(n/2) + n log n
• T(n) = 4T(n/2) + n^2 log n
• T(n) = 2T(n/2) + n/log n
• 38. Iteration Method
• T(n) = T(n-1) + 1 for n > 0, T(0) = 1
• T(n) = T(n-1) + n for n > 0, T(0) = 1
• T(n) = T(n-1) + log n for n > 0, T(0) = 1
• T(n) = 2T(n/2) + n for n > 1, T(1) = 1
• T(n) = 2T(n-1) for n > 1, T(1) = 1
• 39. Example 2: Consider the recurrence T(n) = T(n-1) + 1 with T(1) = 1.
Solution: T(n) = T(n-1) + 1 ....... (1)
Putting n-1 for n in equation (1): T(n-1) = T(n-2) + 1 ....... (2)
Similarly: T(n-2) = T(n-3) + 1 ....... (3)
Substituting (2) into (1): T(n) = T(n-2) + 1 + 1 = T(n-2) + 2 ....... (4)
Substituting (3) into (4): T(n) = T(n-3) + 1 + 1 + 1 = T(n-3) + 3
Repeating the procedure i times: T(n) = T(n-i) + i ....... (5)
Assume n-i = 1, so T(1) = 1 and i = n-1.
Putting these into equation (5): T(n) = 1 + (n-1) = n = Θ(n).
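The closed form T(n) = n can be checked against a direct evaluation of the recurrence; a minimal sketch:

```python
def T(n):
    """Recurrence from the slide: T(n) = T(n-1) + 1, T(1) = 1."""
    return 1 if n == 1 else T(n - 1) + 1

# Closed form derived above: T(n) = n
assert all(T(n) == n for n in range(1, 200))
```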
• 40. Iteration Method Example 1: Consider the recurrence T(n) = 1 if n = 1, T(n) = 2T(n-1) if n > 1.
Solution: T(n) = 2T(n-1) ....... (1)
Putting n-1 for n in equation (1): T(n-1) = 2T(n-2) ....... (2)
Similarly T(n-2) = 2T(n-3), and so on.
Substituting (2) into (1): T(n) = 2[2T(n-2)] = 2^2 T(n-2) ....... (3)
Substituting T(n-2) into (3): T(n) = 2^2[2T(n-3)] = 2^3 T(n-3)
Repeating the procedure i times: T(n) = 2^i T(n-i) ....... (4)
Assume n-i = 1, i.e. i = n-1, in equation (4):
T(n) = 2^(n-1) T(1) = 2^(n-1) · 1 {T(1) = 1, given} = 2^(n-1) = Θ(2^n).
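Again the closed form 2^(n-1) can be checked against the recurrence directly; a minimal sketch:

```python
def T(n):
    """Recurrence from the slide: T(n) = 2*T(n-1), T(1) = 1."""
    return 1 if n == 1 else 2 * T(n - 1)

# Closed form derived above: T(n) = 2**(n-1)
assert all(T(n) == 2 ** (n - 1) for n in range(1, 40))
```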
• 41. Recursion Tree Method
1. A pictorial representation of the iteration method in the form of a tree, where the nodes at each level are expanded.
2. In general, we take the second (non-recursive) term of the recurrence as the root.
3. It is useful when a divide & conquer algorithm is being analyzed.
4. In a recursion tree, each node represents the cost of a single sub-problem.
5. We sum the costs within each level of the tree to obtain a set of per-level costs, and then sum all the per-level costs to determine the total cost over all levels of the recursion.
6. A recursion tree is best used to generate a good guess, which can then be verified by the substitution method.
• 42. Exercises
• T(n) = 2T(n/2) + n
• T(n) = T(n/3) + T(2n/3) + n
• T(n) = 2T(n/2) + n^2
• T(n) = T(n/4) + T(n/2) + n^2
• 43. Example 1: Obtain an asymptotic bound for T(n) = 2T(n/2) + n using the recursion tree method.
The root costs n; it has two children each costing n/2; those have four children each costing n/4, and so on. Every level therefore sums to n:
Level 0: n
Level 1: n/2 + n/2 = n
Level 2: n/4 + n/4 + n/4 + n/4 = n
...
Level k: 2^k nodes of cost n/2^k, summing to n.
The tree stops when n/2^k = 1, i.e. 2^k = n, so k = log2 n.
Total time = k·n = n log2 n, hence T(n) = Θ(n log2 n).
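The level-by-level sum can be verified by evaluating the recurrence for powers of two; a sketch, assuming the base case T(1) = 0 (the slide leaves it unspecified; with this choice the total is exactly n·log2 n):

```python
def T(n):
    """T(n) = 2*T(n/2) + n with T(1) = 0, for n a power of two."""
    return 0 if n == 1 else 2 * T(n // 2) + n

# With this base case the recursion-tree sum is exactly n * log2(n)
for k in range(1, 12):
    n = 2 ** k
    assert T(n) == n * k
```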
• 44. Example 2: Obtain an asymptotic bound for T(n) = T(n/3) + T(2n/3) + n using the recursion tree method.
The root costs n; its children cost n/3 and 2n/3 (summing to n); their children cost n/9, 2n/9, 2n/9 and 4n/9 (again summing to n), and so on: every full level sums to n.
The shortest root-to-leaf path follows the n/3 branch: n/3^k = 1 gives k = log3 n, so Min. total time = n log3 n.
The longest path follows the 2n/3 branch: n/(3/2)^k = 1 gives k = log3/2 n, so Max. total time = n log3/2 n.
Since log3 n and log3/2 n differ only by a constant factor, both bounds give T(n) = Θ(n log n).
• 45. The Master Theorem applies to recurrences of the form T(n) = aT(n/b) + f(n), with constants a >= 1 and b > 1 and f(n) a function: a is the number of sub-problems, n/b is the size of each sub-problem, and f(n) is the cost of dividing the problem and combining the results.