Analysis of Algorithms COMP171 Fall 2006
Introduction
- What is an algorithm? A clearly specified set of simple instructions to be followed to solve a problem.
- It takes a set of values as input and produces a value, or a set of values, as output.
- An algorithm may be specified in English, as a computer program, or as pseudo-code.
- Data structures are methods of organizing data.
- Program = algorithms + data structures.
Introduction: Why do we need algorithm analysis?
- Writing a working program is not good enough: the program may be inefficient.
- If the program is run on a large data set, the running time becomes an issue.
Example: Selection Problem
Given a list of N numbers, determine the k-th largest, where k ≤ N.
Algorithm 1:
(1) Read the N numbers into an array.
(2) Sort the array in decreasing order by some simple algorithm.
(3) Return the element in position k.
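A minimal C++ sketch of Algorithm 1, assuming the numbers are already available in a vector (the slides give no code, so the function name and signature here are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Algorithm 1: sort the whole array in decreasing order, then return position k.
int kthLargestBySorting(std::vector<int> a, int k) {
    std::sort(a.begin(), a.end(), std::greater<int>());  // step (2): decreasing order
    return a[k - 1];                                      // step (3): position k (1-based)
}
```

With the simple quadratic sort the slide has in mind, step (2) costs O(N^2); with std::sort as used here it is O(N log N), independent of k.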
Example: Selection Problem (continued)
Algorithm 2:
(1) Read the first k elements into an array and sort them in decreasing order.
(2) Read each remaining element one by one. If it is smaller than the k-th element, ignore it; otherwise, place it in its correct spot in the array, bumping one element out of the array.
(3) Return the element in the k-th position as the answer.
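Algorithm 2 keeps only the k largest elements seen so far. A hedged C++ sketch of that idea (again, names are illustrative):

```cpp
#include <algorithm>
#include <vector>

int kthLargestByPartialSort(const std::vector<int>& input, int k) {
    // (1) Read the first k elements and sort them in decreasing order.
    std::vector<int> top(input.begin(), input.begin() + k);
    std::sort(top.begin(), top.end(), std::greater<int>());

    // (2) Process each remaining element.
    for (std::size_t i = k; i < input.size(); ++i) {
        int x = input[i];
        if (x <= top[k - 1]) continue;           // smaller than the k-th element: ignore
        int pos = k - 1;
        while (pos > 0 && top[pos - 1] < x) {    // bump smaller entries down one slot
            top[pos] = top[pos - 1];
            --pos;
        }
        top[pos] = x;                            // place x in its correct spot
    }
    // (3) The element in the k-th position is the answer.
    return top[k - 1];
}
```

Each new element may shift up to k entries, so the cost is roughly O(N·k) in the worst case, which is why the comparison on the next slide depends on both N and k.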
Example: Selection Problem (continued)
Which algorithm is better when N = 100 and k = 100? When N = 100 and k = 1? What happens when N = 1,000,000 and k = 500,000? Better algorithms than both of these exist.
Algorithm Analysis
- We only analyze correct algorithms. An algorithm is correct if, for every input instance, it halts with the correct output.
- Incorrect algorithms might not halt at all on some input instances, or might halt with something other than the desired answer.
- Analyzing an algorithm means predicting the resources it requires. Resources include memory, communication bandwidth, and computational time (usually the most important).
Algorithm Analysis (continued)
- Factors affecting the running time: the computer, the compiler, the algorithm used, and the input to the algorithm.
- The content of the input affects the running time, but typically the input size (the number of items in the input) is the main consideration. E.g. for the sorting problem, the number of items to be sorted; for multiplying two matrices, the total number of elements in the two matrices.
- Machine model assumed: instructions are executed one after another, with no concurrent operations (i.e. not parallel computers).
Example: counting the cost of a loop
Count the cost of each line of a simple summation loop (a reconstruction is sketched below):
- Lines 1 and 4 count for one unit each.
- Line 3 is executed N times, four units each time, for 4N in total.
- Line 2 costs 1 for the initialization, N + 1 for all the tests, and N for all the increments, 2N + 2 in total.
- Total cost: 6N + 4 = O(N).
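The slide's code is not reproduced above; the C++ sketch below is an assumption consistent with the per-line costs (the classic sum-of-cubes example), with the referenced line numbers kept as comments:

```cpp
// A likely reconstruction of the slide's example; the function name is illustrative.
int sumOfCubes(int n) {
    int partialSum = 0;                    // line 1: 1 unit (one assignment)
    for (int i = 1; i <= n; ++i)           // line 2: 1 + (N+1) + N = 2N + 2 units
        partialSum += i * i * i;           // line 3: 4 units, executed N times
    return partialSum;                     // line 4: 1 unit
}
```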
Worst-, average-, and best-case
- Worst-case running time of an algorithm: the longest running time for any input of size n. It is an upper bound on the running time for any input, so it guarantees that the algorithm will never take longer. Example: sorting a set of numbers into increasing order when the data arrives in decreasing order. The worst case can occur fairly often, e.g. when searching a database for a particular piece of information.
- Best-case running time: e.g. sorting a set of numbers into increasing order when the data is already in increasing order.
- Average-case running time: it may be difficult to define what "average" means.
Running time of algorithms
- Bounds are for algorithms, rather than programs. Programs are just implementations of an algorithm, and almost always the details of the program do not affect the bounds.
- Bounds are for algorithms, rather than problems. A problem can be solved by several algorithms, some more efficient than others.
Growth Rate
The idea is to establish a relative order among functions for large n: if there exist c, n0 > 0 such that f(N) ≤ c·g(N) when N ≥ n0, then f(N) grows no faster than g(N) for "large" N.
Asymptotic notation: Big-Oh
f(N) = O(g(N)): there are positive constants c and n0 such that f(N) ≤ c·g(N) when N ≥ n0.
The growth rate of f(N) is less than or equal to the growth rate of g(N); g(N) is an upper bound on f(N).
Big-Oh: example
Let f(N) = 2N^2. Then
- f(N) = O(N^4)
- f(N) = O(N^3)
- f(N) = O(N^2)  (the best answer, asymptotically tight)
O(N^2) reads "order N-squared" or "Big-Oh N-squared".
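As a worked check against the definition (the constants below are chosen for illustration; they are not on the slide):

```latex
2N^2 \le 2 \cdot N^2 \quad \text{for all } N \ge 1,
\qquad \text{so } f(N) = 2N^2 = O(N^2) \text{ with } c = 2,\ n_0 = 1.
```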
Big-Oh: more examples
- N^2/2 - 3N = O(N^2)
- 1 + 4N = O(N)
- 7N^2 + 10N + 3 = O(N^2) = O(N^3)
- log_10 N = log_2 N / log_2 10 = O(log_2 N) = O(log N)
- sin N = O(1); 10 = O(1); 10^10 = O(1)
- log N + N = O(N)
- log^k N = O(N) for any constant k
- N = O(2^N), but 2^N is not O(N)
- 2^(10N) is not O(2^N)
Math Review: logarithmic functions
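The body of this review slide is not included in this extract; the identities such a review normally covers are listed below (an assumption, not taken from the slide):

```latex
\log_b(xy) = \log_b x + \log_b y, \qquad
\log_b\!\left(\frac{x}{y}\right) = \log_b x - \log_b y, \qquad
\log_b(x^a) = a\log_b x, \qquad
\log_a x = \frac{\log_b x}{\log_b a}, \qquad
x = b^{\log_b x}.
```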
Some rules
When considering the growth rate of a function using Big-Oh:
- Ignore the lower-order terms and the coefficient of the highest-order term.
- There is no need to specify the base of the logarithm: changing the base from one constant to another changes the value of the logarithm by only a constant factor.
- If T1(N) = O(f(N)) and T2(N) = O(g(N)), then T1(N) + T2(N) = max(O(f(N)), O(g(N))) and T1(N) * T2(N) = O(f(N) * g(N)).
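For instance, with an illustrative pair of bounds (not from the slide):

```latex
T_1(N) = O(N),\ T_2(N) = O(N^2)
\;\Rightarrow\;
T_1(N) + T_2(N) = O(N^2),
\qquad
T_1(N)\cdot T_2(N) = O(N^3).
```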
Big-Omega
There exist c, n0 > 0 such that f(N) ≥ c·g(N) when N ≥ n0: f(N) grows no slower than g(N) for "large" N.
Big-Omega
f(N) = Ω(g(N)): there are positive constants c and n0 such that f(N) ≥ c·g(N) when N ≥ n0.
The growth rate of f(N) is greater than or equal to the growth rate of g(N).
Big-Omega: examples
Let f(N) = 2N^2. Then
- f(N) = Ω(N)
- f(N) = Ω(N^2)  (the best answer)
f(N) = Θ(g(N)): the growth rate of f(N) is the same as the growth rate of g(N).
Big-Theta
f(N) = Θ(g(N)) iff f(N) = O(g(N)) and f(N) = Ω(g(N)).
The growth rate of f(N) equals the growth rate of g(N).
Example: let f(N) = N^2 and g(N) = 2N^2. Since f(N) = O(g(N)) and f(N) = Ω(g(N)), it follows that f(N) = Θ(g(N)).
Big-Theta means the bound is the tightest possible.
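Checking the example against the definitions (constants chosen here for illustration): with f(N) = N^2 and g(N) = 2N^2,

```latex
N^2 \le 1\cdot 2N^2 \ (c = 1,\ n_0 = 1) \;\Rightarrow\; f(N) = O(g(N)),
\qquad
N^2 \ge \tfrac{1}{2}\cdot 2N^2 \ (c = \tfrac{1}{2},\ n_0 = 1) \;\Rightarrow\; f(N) = \Omega(g(N)),
```

and therefore f(N) = Θ(g(N)).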
Some rules
- If T(N) is a polynomial of degree k, then T(N) = Θ(N^k).
- For logarithmic functions, T(log_m N) = Θ(log N).
Typical Growth Rates
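The growth-rate table for this slide is not reproduced here. The functions such a table typically lists, from slowest-growing to fastest-growing, are (an assumption based on standard coverage, not taken from the slide): c (constant), log N, log^2 N, N (linear), N log N, N^2 (quadratic), N^3 (cubic), and 2^N (exponential).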
Growth rates: doubling the input size
- f(N) = c      →  f(2N) = f(N) = c
- f(N) = log N  →  f(2N) = f(N) + log 2
- f(N) = N      →  f(2N) = 2·f(N)
- f(N) = N^2    →  f(2N) = 4·f(N)
- f(N) = N^3    →  f(2N) = 8·f(N)
- f(N) = 2^N    →  f(2N) = (f(N))^2
Advantages of algorithm analysis: it eliminates bad algorithms early and pinpoints the bottlenecks, which are worth coding carefully.
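For example, the quadratic and exponential rows follow directly (a worked check, not on the original slide):

```latex
f(N) = N^2:\quad f(2N) = (2N)^2 = 4N^2 = 4\,f(N);
\qquad
f(N) = 2^N:\quad f(2N) = 2^{2N} = \left(2^N\right)^2 = \big(f(N)\big)^2.
```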
Using L'Hopital's rule
L'Hopital's rule: if lim_{N→∞} f(N) = ∞ and lim_{N→∞} g(N) = ∞, then lim_{N→∞} f(N)/g(N) = lim_{N→∞} f'(N)/g'(N).
Determine the relative growth rates (using L'Hopital's rule if necessary) by computing lim_{N→∞} f(N)/g(N):
- if the limit is 0: f(N) = O(g(N)) and f(N) is not Θ(g(N))
- if the limit is a constant ≠ 0: f(N) = Θ(g(N))
- if the limit is ∞: f(N) = Ω(g(N)) and f(N) is not Θ(g(N))
- if the limit oscillates: there is no relation
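As a worked example (chosen for illustration, not from the slide), comparing log N with N:

```latex
\lim_{N\to\infty}\frac{\ln N}{N}
= \lim_{N\to\infty}\frac{1/N}{1}
= 0
\quad\Longrightarrow\quad
\log N = O(N) \ \text{and}\ \log N \ \text{is not}\ \Theta(N).
```

The base of the logarithm only changes the ratio by a constant factor, so the conclusion holds for any base.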
General Rules
- For loops: at most the running time of the statements inside the for loop (including the tests) times the number of iterations.
- Nested for loops: the running time of the statement multiplied by the product of the sizes of all the for loops. The example on the slide is O(N^2); a sketch is given below.
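The slide's code fragment is not reproduced; a minimal C++ sketch of the nested-loop case (names illustrative) is:

```cpp
// Two nested loops, each of size n: the inner statement runs n * n times,
// so the fragment is O(N^2).
long countPairs(int n) {
    long k = 0;
    for (int i = 0; i < n; ++i)        // outer loop: n iterations
        for (int j = 0; j < n; ++j)    // inner loop: n iterations per outer pass
            ++k;                       // constant-time body
    return k;
}
```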
General rules (continued)
- Consecutive statements: the costs just add, e.g. O(N) + O(N^2) = O(N^2).
- If S1 / else S2: never more than the running time of the test plus the larger of the running times of S1 and S2.
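A small fragment illustrating both rules (hypothetical code, not from the slide):

```cpp
#include <vector>

// Consecutive statements: an O(N) loop followed by an O(N^2) loop is O(N^2) overall.
// If/else: the bound is the test plus the larger of the two branches.
long demoRules(const std::vector<int>& a) {
    long sum = 0;
    for (int x : a) sum += x;                      // O(N)
    for (std::size_t i = 0; i < a.size(); ++i)     // O(N^2)
        for (std::size_t j = 0; j < a.size(); ++j)
            sum += a[i] * a[j];
    if (sum % 2 == 0)                              // test: O(1)
        sum += a.empty() ? 0 : a[0];               // then-branch: O(1)
    else
        for (int x : a) sum -= x;                  // else-branch: O(N)
    return sum;                                    // total: O(N^2)
}
```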
Another Example: Maximum Subsequence Sum Problem
Given (possibly negative) integers A_1, A_2, ..., A_n, find the maximum value of the sum A_i + A_{i+1} + ... + A_j over all 1 ≤ i ≤ j ≤ n. For convenience, the maximum subsequence sum is 0 if all the integers are negative.
E.g. for the input -2, 11, -4, 13, -5, -2 the answer is 20 (A_2 through A_4).
Algorithm 1: Simple
Exhaustively try all possibilities (brute force): O(N^3).
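The slide's code is not shown; the C++ sketch below follows the classic three-loop brute-force version, which is where the O(N^3) bound comes from (names are illustrative):

```cpp
#include <algorithm>
#include <vector>

// Try every pair (i, j) and recompute the sum A[i] + ... + A[j] from scratch:
// three nested loops over at most N values each, hence O(N^3).
int maxSubSumBruteForce(const std::vector<int>& a) {
    int maxSum = 0;                                   // 0 if all entries are negative
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = i; j < a.size(); ++j) {
            int thisSum = 0;
            for (std::size_t k = i; k <= j; ++k)
                thisSum += a[k];
            maxSum = std::max(maxSum, thisSum);
        }
    return maxSum;
}
```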
Algorithm 2: Divide-and-conquer
- Divide-and-conquer: split the problem into two roughly equal subproblems, which are then solved recursively, and patch together the two solutions of the subproblems to arrive at a solution for the whole problem.
- The maximum subsequence sum can lie entirely in the left half of the input, entirely in the right half, or it can cross the middle and lie in both halves.
Algorithm 2 (continued)
- The first two cases can be solved recursively.
- For the last case: find the largest sum in the first half that includes the last element of the first half, find the largest sum in the second half that includes the first element of the second half, and add these two sums together.
Algorithm 2 (code)
The slide annotates the recursive code with its costs: the base case is O(1), each of the two recursive calls costs T(m/2), and the combining step (the two boundary scans) costs O(m). A sketch of the code follows.
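The code itself is not reproduced in this extract; the C++ sketch below implements the description above (function names are illustrative), with the slide's cost annotations kept as comments:

```cpp
#include <algorithm>
#include <vector>

// Divide-and-conquer maximum subsequence sum: T(N) = 2 T(N/2) + O(N).
int maxSubSumRec(const std::vector<int>& a, int left, int right) {
    if (left == right)                                   // base case: O(1)
        return std::max(a[left], 0);

    int center = (left + right) / 2;
    int maxLeft  = maxSubSumRec(a, left, center);        // T(m/2)
    int maxRight = maxSubSumRec(a, center + 1, right);   // T(m/2)

    // Largest sum in the first half that ends at the last element of that half.
    int leftBorderSum = 0, maxLeftBorderSum = 0;
    for (int i = center; i >= left; --i) {               // O(m)
        leftBorderSum += a[i];
        maxLeftBorderSum = std::max(maxLeftBorderSum, leftBorderSum);
    }
    // Largest sum in the second half that starts at its first element.
    int rightBorderSum = 0, maxRightBorderSum = 0;
    for (int i = center + 1; i <= right; ++i) {          // O(m)
        rightBorderSum += a[i];
        maxRightBorderSum = std::max(maxRightBorderSum, rightBorderSum);
    }

    return std::max({maxLeft, maxRight,
                     maxLeftBorderSum + maxRightBorderSum});
}

int maxSubSum(const std::vector<int>& a) {
    return a.empty() ? 0 : maxSubSumRec(a, 0, static_cast<int>(a.size()) - 1);
}
```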
Algorithm 2 (continued)
Recurrence equation: T(1) = 1 and T(N) = 2·T(N/2) + N, where
- 2·T(N/2) accounts for the two subproblems, each of size N/2, and
- N is the cost of "patching" the two solutions together into a solution for the whole problem.
Algorithm 2 (continued)
Solving the recurrence: with k = log N (i.e. 2^k = N), we get T(N) = N log N + N (the expansion is written out below).
Thus the running time is O(N log N), faster than Algorithm 1 for large data sets.
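The intermediate steps, omitted in the slide as reproduced here, follow the standard repeated expansion:

```latex
\begin{align*}
T(N) &= 2\,T(N/2) + N \\
     &= 4\,T(N/4) + 2N \\
     &= 8\,T(N/8) + 3N \\
     &\;\;\vdots \\
     &= 2^{k}\,T(N/2^{k}) + kN \\
     &= N\,T(1) + N\log N \qquad (2^{k} = N,\ k = \log N) \\
     &= N\log N + N \;=\; O(N\log N).
\end{align*}
```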
