ALGORITHM ANALYSIS
NURJAHAN NIPA
LECTURER, DEPT OF ICT, BDU
Lecture Outlines
■ Complexity of Algorithms
■ Time Complexity
■ Space Complexity
Algorithm Analysis
Analysis of the efficiency of an algorithm can be performed at two different stages: before
implementation and after implementation.
■ A priori analysis − This is the theoretical analysis of an algorithm. Efficiency is
measured by assuming that all other factors, e.g. processor speed, are constant and have
no effect on the implementation.
■ A posteriori analysis − This is the empirical analysis of an algorithm. The chosen
algorithm is implemented in a programming language and executed on a target machine.
In this analysis, actual statistics such as running time and space required are collected.
Algorithm analysis deals with the execution or running time of the various operations
involved. The running time of an operation can be defined as the number of computer
instructions executed per operation.
The complexity of an algorithm is a function describing the efficiency of the algorithm in
terms of the amount of data the algorithm must process. There are two main complexity
measures of the efficiency of an algorithm:
■ Time complexity is a function describing the amount of time an algorithm takes in
terms of the amount of input to the algorithm.
■ Space complexity is a function describing the amount of memory (space) an algorithm
takes in terms of the amount of input to the algorithm.
Time Complexity
There are three types of time complexities, which can be found in the analysis of an
algorithm:
 Best case time complexity
 Average case time complexity
 Worst case time complexity
Best-case time complexity
The best-case time complexity of an algorithm is a measure of the minimum time that the algorithm
will require. For example, the best case for a simple linear search on a list occurs when the desired
element is the first element of the list.
Worst-case time complexity
The worst-case time complexity of an algorithm is a measure of the maximum time that the
algorithm will require. A worst-case estimate is normally computed because it provides an upper
bound for all inputs, including the slowest case. For example, the worst case for a simple linear
search on a list occurs when the desired element is at the last position of the list or not in the
list at all.
Average-case time complexity
The average-case time complexity of an algorithm is a measure of the average time taken over all
possible inputs. Average-case analysis does not provide an upper bound and is sometimes difficult
to compute.
Average-case and worst-case time complexity are the most commonly used in algorithm
analysis. Best-case time complexity is rarely used because it provides no useful guarantee.
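The linear search mentioned above makes the three cases concrete. The following is a minimal Python sketch (the slides do not give code, so the function and data here are illustrative):

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, value in enumerate(items):
        if value == target:   # one comparison per iteration
            return i
    return -1

data = [7, 3, 9, 1, 5]
# Best case: target is the first element -> 1 comparison.
assert linear_search(data, 7) == 0
# Worst case: target is last, or absent -> n comparisons.
assert linear_search(data, 5) == 4
assert linear_search(data, 8) == -1
```

The average case, assuming the target is equally likely to be at any position, is about n/2 comparisons.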
What is Asymptotic Notation?
■ Whenever we analyze an algorithm, we need to calculate its complexity. However, the
calculated complexity does not give the exact amount of resources required. So instead of
stating the exact amount, we represent the complexity in a general form (notation) that
captures the basic growth behavior of the algorithm, and we use that notation in the
analysis process.
■ The asymptotic notation of an algorithm is a mathematical representation of its complexity.
We mainly use three types of asymptotic notations:
■ Big - Oh (O)
■ Big - Omega (Ω)
■ Big - Theta (Θ)
Big - Oh Notation (O)
■ Big - Oh notation is used to define the upper bound of
an algorithm in terms of Time Complexity.
■ That means Big - Oh notation indicates the maximum time
required by an algorithm over all input values, so it
describes the worst case of an algorithm's time complexity.
Big - Oh Notation can be defined as follows:
■ Consider f(n) as the time complexity of an algorithm and
g(n) as its most significant term. If f(n) ≤ C·g(n) for all
n ≥ n0, for some constants C > 0 and n0 ≥ 1, then we can
represent f(n) as O(g(n)).
f(n) = O(g(n))
Big - Omega Notation (Ω)
■ Big - Omega notation is used to define the lower bound
of an algorithm in terms of Time Complexity.
■ That means Big-Omega notation always indicates the
minimum time required by an algorithm for all input
values. That means Big-Omega notation describes the best
case of an algorithm time complexity.
Big - Omega Notation can be defined as follows:
■ Consider f(n) as the time complexity of an algorithm and
g(n) as its most significant term. If f(n) ≥ C·g(n) for all
n ≥ n0, for some constants C > 0 and n0 ≥ 1, then we can
represent f(n) as Ω(g(n)).
f(n) = Ω(g(n))
Big - Theta Notation (Θ)
■ Big - Theta notation is used to define a tight bound on
an algorithm in terms of Time Complexity: it bounds the
running time from above and below by the same function g(n),
up to constant factors.
■ That means Big - Theta notation indicates that the running
time grows at the same rate as g(n) for large inputs. It is
sometimes loosely described as the average case, but strictly
it is a tight bound, not an average.
■ Consider f(n) as the time complexity of an algorithm and
g(n) as its most significant term. If C1·g(n) ≤ f(n) ≤ C2·g(n)
for all n ≥ n0, for some constants C1 > 0, C2 > 0 and
n0 ≥ 1, then we can represent f(n) as Θ(g(n)).
f(n) = Θ(g(n))
What affects the run time of an algorithm?
The complexity of an algorithm is a measure of the amount of time and/or space
required by an algorithm for an input of a given size (n). The run time is affected by:
 The computer and hardware platform used
 Representation of abstract data types (ADTs)
 Efficiency of the compiler
 Competence of the implementer (programming skills)
 Complexity of the underlying algorithm
 Size of the input
Linear Loops
To calculate the efficiency of an algorithm that has a single loop, we first need to determine the
number of times the statements in the loop will be executed, because the number of iterations is
directly proportional to the loop factor: the greater the loop factor, the greater the number of
iterations. For example, consider a loop that executes 100 times.
Here, 100 is the loop factor. Since efficiency is directly proportional to the number of
iterations, the general formula in the case of linear loops may be given as
f(n) = n
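The slide's code image is not reproduced in this text, but a linear loop of this form was presumably intended (a Python sketch):

```python
count = 0
for i in range(100):   # loop factor is 100
    count += 1         # loop body executes once per iteration
assert count == 100    # iterations grow linearly with the loop factor
```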
However, calculating efficiency is not always this simple. Consider instead a loop in which the
loop-controlling variable is incremented by 2 in each iteration. Here, the number of iterations is
half the loop factor, so the efficiency can be given as
f(n) = n/2
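A sketch of such a loop (again, the original slide code is not reproduced, so this is an assumed form):

```python
count = 0
for i in range(0, 100, 2):  # i increases by 2 each iteration
    count += 1
assert count == 50          # half the loop factor: f(n) = n/2
```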
Logarithmic Loops
We have seen that in linear loops, the loop update statement either adds to or subtracts from the
loop-controlling variable. In logarithmic loops, by contrast, the loop-controlling variable is multiplied
or divided during each iteration. For example, consider two loops that run while i is below 1000:
■ In the first loop, the loop-controlling variable i is multiplied by 2. The loop executes
only 10 times, not 1000 times, because in each iteration the value of i doubles.
Now consider the second loop, in which the loop-controlling variable i is divided by 2.
Now, consider the second loop in which the loop-controlling variable i is divided by 2.
■ In this case also, the loop executes 10 times. Thus, the number of iterations is a function of
the number by which the loop-controlling variable is multiplied or divided; in the examples
discussed, it is 2. That is, when n = 1000, the number of iterations is log2 1000, which
is approximately 10.
■ Therefore, putting this analysis in general terms, we can conclude that the efficiency of loops in
which iterations divide or multiply the loop-controlling variables can be given as
f(n) = log n
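The two loops described above might look like this in Python (a sketch, since the slide code is not reproduced here):

```python
# Multiplicative loop: i takes the values 1, 2, 4, ..., 512
count_mul = 0
i = 1
while i < 1000:
    count_mul += 1
    i *= 2

# Divisive loop: i takes the values 1000, 500, 250, ..., 1
count_div = 0
i = 1000
while i >= 1:
    count_div += 1
    i //= 2

# Both run about log2(1000) ~ 10 times, not 1000 times.
assert count_mul == 10
assert count_div == 10
```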
Linear logarithmic loop
■ Consider a nested loop in which the loop-controlling variable of the inner loop is
multiplied after each iteration. The number of iterations in the inner loop is log 10, and
this inner loop is controlled by an outer loop that iterates 10 times. Therefore, according to
the formula, the number of iterations for this code can be given as 10 log 10.
■ In more general terms, the efficiency of such loops can be given as f(n) = n log n.
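A Python sketch of such a linear logarithmic loop (an assumed form; note that with integer steps the inner loop runs ceil(log2 10) = 4 times, so the exact count is 10 × 4):

```python
count = 0
for i in range(10):   # outer loop: 10 iterations
    j = 1
    while j < 10:     # inner loop: j = 1, 2, 4, 8 -> ~log2(10) iterations
        count += 1
        j *= 2
assert count == 40    # n * ceil(log2 n) for n = 10; asymptotically n log n
```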
Quadratic loop
In a quadratic loop, the number of iterations in the inner loop is equal to the number of
iterations in the outer loop. Consider the following code in which the outer loop executes 10
times and for each iteration of the outer loop, the inner loop also executes 10 times. Therefore,
the efficiency here is 100.
The generalized formula for a quadratic loop can be given as f(n) = n²
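Such a quadratic loop might be sketched as follows (the slide's code image is not reproduced here):

```python
count = 0
for i in range(10):       # outer loop: 10 iterations
    for j in range(10):   # inner loop: 10 iterations per outer iteration
        count += 1
assert count == 100       # n * n = n^2 total iterations
```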
Dependent quadratic loop
In a dependent quadratic loop, the number of iterations in the inner loop is dependent on
the outer loop. Consider the code given below:
In this code, the inner loop executes once in the first iteration of the outer loop, twice in the
second, three times in the third, and so on. In this way, the total number of iterations can be
calculated as
1 + 2 + 3 + ... + 10 = 55
If we calculate the average number of inner-loop iterations (55/10 = 5.5), we observe that it
equals the number of iterations in the outer loop (10) plus 1, divided by 2. In general terms,
the inner loop iterates (n + 1)/2 times on average, so the efficiency of such code can be given as
f(n) = n(n + 1)/2
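A sketch of the dependent quadratic loop described above (the original slide code is not reproduced in this text):

```python
count = 0
n = 10
for i in range(1, n + 1):
    for j in range(i):    # inner loop runs i times: 1, then 2, ..., then n
        count += 1
assert count == n * (n + 1) // 2   # 1 + 2 + ... + 10 = 55
```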
Space complexity
The space complexity of an algorithm represents the amount of memory space needed by the
algorithm over its life cycle. The space needed by an algorithm is equal to the sum of the
following two components:
■ A fixed part: the space required to store certain data and variables (i.e. simple
variables and constants, program size, etc.) that do not depend on the size of
the problem.
■ A variable part: the space required by variables whose size depends entirely on
the size of the problem, for example recursion stack space and dynamically
allocated memory.
Example of Space Complexity
The space complexity S(p) of an algorithm p is S(p) = A + Sp(I), where A is the fixed
part and Sp(I) is the variable part, which depends on the instance
characteristic I. The following simple example illustrates the concept.
Algorithm
■ SUM(P, Q)
■ Step 1 - START
■ Step 2 - R ← P + Q + 10
■ Step 3 - Stop
Here we have three variables P, Q and R, and one constant. Hence S(p) = 3 + 1. The actual
space depends on the data types of the given variables and constants and will be multiplied
accordingly.
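The fixed-part versus variable-part distinction can be sketched in Python; the functions and inputs below are illustrative, not from the slides:

```python
def sum_fixed(p, q):
    # Fixed part only: a handful of scalar variables, regardless of input size.
    r = p + q + 10
    return r

def sum_squares(values):
    # Variable part: the squares list grows with the input size n,
    # so this uses O(n) extra space.
    squares = [v * v for v in values]
    return sum(squares)

assert sum_fixed(1, 2) == 13
assert sum_squares([1, 2, 3]) == 14   # 1 + 4 + 9
```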
THANK YOU!
