TIME EXECUTION
OF
DIFFERENT SORTING ALGORITHMS
Have you ever wondered why a program compiles and runs in
milliseconds rather than minutes?
Why do some application programs on your gadgets run fast
while others run slowly?
All these questions can be answered by "Time Complexity".
 An algorithm is a finite set of precise instructions for solving a
problem.
 The typical meaning of algorithm is 'a formally defined procedure for
performing some calculation'.
 If a procedure is formally defined, it must be implemented
using some formal language, known as a 'programming language'.
 The first step is "start" and
 the last step is "end".
 The efficiency of an algorithm is expressed in terms of the number of
elements that have to be processed. So, if n is the number of
elements, then the efficiency can be stated as:
 f(n) = efficiency
 Linear code (without any loops or recursion): the efficiency, or
running time, of such an algorithm can be given as the number of
instructions it contains.
 Code with loops or recursive functions: the efficiency of such an
algorithm may vary depending on the number of loops and the
running time of each loop in the algorithm.
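To make the two categories concrete, here is a small Python sketch (the functions are illustrative, not from the slides): the first has no loops, so it executes a fixed number of instructions; the second contains a loop, so its instruction count grows with the input size.

```python
def constant_work(a, b):
    # no loops or recursion: the same fixed number of instructions every call
    return a + b

def loop_work(items):
    # one loop: the number of executed instructions grows with len(items)
    total = 0
    for x in items:
        total += x
    return total
```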
 To analyze an algorithm means determining the amount of resources needed to execute it.
 Time complexity basically means the running time of a program, as a function of input size.
 The number of machine instructions which a program executes during its execution is its time
complexity.
 Algorithms are generally designed to work with an arbitrary number of inputs, so the
efficiency or complexity of an algorithm is stated in terms of time complexity.
• The running time of an algorithm is primarily dependent on the size of the program's
input and the algorithm used.
• The time and space it uses are the 2 major measures of the efficiency of an algorithm.
• The complexity of an algorithm is a function which gives the running time and/or space in
terms of the input size.
• Although we may not always be able to use the most efficient algorithm, since the choice
of data structure depends on many things, including the type of data and the frequency with
which various data operations are applied. Sometimes the choice of data structure involves a
time-space trade-off.
The best algorithm to solve a problem at hand is, no doubt, the one that
requires less memory space and takes less time to complete its execution.
 But practically, designing such an ideal algorithm is not a trivial task.
There can be more than one algorithm to solve a particular problem. One
may require less memory space, while the other may require less CPU time
to execute.
 Thus, it is not uncommon to sacrifice one thing for the other. Hence, there
exists a time-space trade-off among algorithms.
 So, if space is the big constraint, one might choose a program that takes
less space at the cost of more CPU time. On the contrary, if time is the
major constraint, one might choose a program that takes minimum time to
execute.
 The time and space complexity can be expressed using a function f(n), where
 n is the input size for a given instance of the problem being solved.
Expressing the complexity is required when:
 We want to predict the rate of growth of complexity as the size of the
problem increases.
 There are multiple algorithms to find a solution to a given problem and we
need to find the algorithm that is most efficient.
 Suppose M is an algorithm and
n size of input &
Now, the time
and space needed by algorithm M
are two measures for efficiency of M
Where (i). time measured by counting no. of key
operations(i.e no. of comparisons)
(ii) Space measured by counting maximum of memory
Needed by algorithm
 The complexity of algorithm M is a function which gives
running time pr storage space required of size ‘n’ of input
data.
Example 1: sum of two numbers
 Sum(a, b)
{ return a + b;
}
If the time taken is T_sum,
here T_sum = 2 units.
Therefore we can say that the time
taken by the program is always
constant.
Example 2: sum of all elements in a list
SumOfList(A, n)
1. { total = 0;
2.   for i = 0 to n-1
3.     { total = total + A[i]; }
4.   return total;
}

Statement                  | Cost | No. of times
1. total = 0               | 1    | 1
2. for i = 0 to n-1        | 2    | n+1
3. total = total + A[i]    | 2    | n
4. return total            | 1    | 1

 T_sumoflist = 1 + 2(n+1) + 2n + 1
             = 4n + 4
In general, T(n) = Cn + C'.
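The cost table above can be mimicked in Python with a hypothetical unit counter (illustrative only; the per-statement costs follow the slide's assumed cost model):

```python
def sum_of_list(A):
    """Sum a list while tallying abstract cost 'units' from the cost table."""
    units = 1                # line 1: total = 0 -> cost 1, executed once
    total = 0
    for x in A:
        units += 2           # line 2: loop test/increment -> cost 2, n times
        total += x
        units += 2           # line 3: total = total + A[i] -> cost 2, n times
    units += 2               # line 2: final failing loop test -> cost 2, once
    units += 1               # line 4: return -> cost 1, executed once
    return total, units

total, units = sum_of_list([3, 1, 4, 2])   # n = 4
print(total, units)                        # → 10 20, matching 4n + 4 = 20
```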
 As we can see from the above example, the time taken to solve a
problem depends on the problem's dimension and size.
But such a formula may only be valid for "large" problems,
so we focus on the "growth rate" of the computational workload as a
function of the problem size.
 The asymptotic notations for the complexity of an
algorithm consist of:
 Big Omega (Ω), which denotes "more than or the same as".
 Big Theta (Θ), which denotes "the same as".
 Big Oh (O), which denotes "fewer than or the same as".
Out of all three notations, big-oh complexity is the one most
commonly used for algorithms.
 Suppose we have two algorithms:
A1: T(n) = 5n² + 7
A2: T(n) = 17n² + 6n + 8
These functions correspond to a model machine, but we want some
function or representation which is true irrespective of the machine
and still gives us an idea about the rate of growth.
So we use asymptotic notations, which help us classify functions
into their order with respect to the input.
Note
We have seen that the number of statements executed in a function for n elements of
data is a function of the number of elements, expressed as f(n).
 Even if the equation derived for a function is complex, a dominant
factor in the equation is sufficient to determine the order of magnitude of
the result
and hence
the efficiency of the algorithm.
 This factor is the Big-Oh, as in "on the order of", and is expressed as, e.g., O(n²).
 The Big-Oh notation, where the O is read as "order of", is concerned with
what happens for very large values of n.
Here the closest upper bound of the function is considered.
 If we have a non-negative function g(n) that takes n as positive
input, then
O(g(n)) = { f(n): there exist constants c and n0 such that
f(n) ≤ c·g(n) for all n ≥ n0 }.
Suppose there are two functions:
f(n) = 5n² + 2n + 1
g(n) = n²
Taking c = 5 + 2 + 1 = 8 gives f(n) ≤ 8n²,
with n0 = 1.
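The bound can be checked numerically; a quick Python sanity check of f(n) ≤ c·g(n) with the constants chosen above:

```python
def f(n): return 5 * n**2 + 2 * n + 1
def g(n): return n**2

c, n0 = 8, 1
# f(n) <= c*g(n) must hold for every n >= n0
assert all(f(n) <= c * g(n) for n in range(n0, 1000))
print("f(n) = O(n^2) verified for n in [1, 1000)")
```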
Here f(n) running time of algorithm
f(n)≤c g(n)
c>0 and no ≥ 1
n≥no
Therefore
f(n)=O (g(n))
This graph tells us that
After n=1 (i.e )
C(g(n))>f(n)
This assures that f(n) never grows at rate faster than c (g(n)).
Example
If a sorting algorithm performs n2 operations to sort just n
elements, then that algorithm would be described as an O(n2 )
algorithm.
Input cannot be
negative
Big Omega notation (Ω)
The omega notation is used when the function g(n) defines a lower
bound for the function f(n).
Definition:
For a positive function g(n) with positive
input n,
Ω(g(n)) = { f(n): there exist constants c and n0 such that
c·g(n) ≤ f(n) for all n ≥ n0 }.
Here only the closest lower bound is considered.
Note
 If we have
f(n) = 5n² + 2n + 1 = Ω(n²)
g(n) = n²
then choosing c = 5 gives 5n² ≤ f(n),
with n0 = 0,
so f(n) is Ω(n²).
The graph tells us that c·g(n) will
never exceed f(n) for all n ≥ n0.
Big Theta notation (Θ)
The theta notation is used when the function f(n) is bounded both from above and
below by the function g(n), i.e. f(n) is a tightly bound function.
If we have a positive function g(n),
Θ(g(n)) = { f(n): there exist constants
c1, c2 and n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n > n0 }.
Example
f(n) = 5n² + 2n + 1
g(n) = n²
We can choose
c1 = 5, c2 = 8, n0 = 1,
so f(n) always lies between these two functions.
Here both the lower bound and the upper bound of the function are
considered.
 Θ notation best describes the growth of a function
f(n) because it gives us a tight bound, unlike big O and big Ω,
which give us only an upper bound and a lower bound respectively.
 So Θ notation tells us that
g(n) is as close to f(n) as possible, i.e.
the growth rate of g(n) is as close to the growth rate of f(n) as possible.
But in a lot of cases we still use Big Oh notation, which gives us an
idea about the runtime of an algorithm in the worst case.
As we proceed further, whenever there is a case of
finding running time,
the question that strikes our mind is
which situation involves the maximum time to complete.
Correspondingly, in computer science there are 3 cases
for which the time complexity differs:
1. Worst case
2. Average case
3. Best case
Average case
• It is an estimate of the running
time for an "average" input.
• It specifies the expected
behavior of the algorithm when
the input is randomly
drawn from a given
distribution.
• It assumes that inputs of a
given size are equally likely.
Worst case
• This denotes the
behavior of the
algorithm with
respect to the worst
possible case for
any input.
• It assumes that the
algorithm will
never go beyond
this limit.
Best case
• It is used to
analyze an
algorithm under
optimal
conditions.
 Whenever we want to search, insert, or delete an element at the
last position, it requires the maximum time to reach there / to
perform the operation.
 Suppose X is an element (that we want to search for)
and A is an array.
It might happen that the element X
may not be present, or might be present at any position;
X is equally likely to be at any position.
 Cases are:
• Element X is not present in A → worst case, C(n) = n.
• Element X does appear in A:
  – if the element is at the first position → best case;
  – if the element is at the middle position of the array → average case.
 Now we will consider the two main topics for which complexities
can be defined:
1. Searching
2. Sorting
Searching
Linear search
 It searches for an element or value in an array in
sequential order until the desired element or
value is found.
 It compares the element with all the
other elements given in the list; if the
element is matched it returns the
index, else it returns -1.
 Linear search is applied on an unsorted
list.
 The average complexity of the linear search algorithm is
given by C(n) = n/2.
Example array: 8 2 6 3 5
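A minimal Python sketch of the linear search described above:

```python
def linear_search(A, x):
    """Scan A left to right; return the index of x, or -1 if not found."""
    for i in range(len(A)):
        if A[i] == x:
            return i
    return -1

print(linear_search([8, 2, 6, 3, 5], 3))   # → 3 (index of the match)
print(linear_search([8, 2, 6, 3, 5], 7))   # → -1 (not present)
```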
Binary search
 It is applied on a sorted array or list.
 In binary search, we first compare
the value with the element in the
middle position of the array.
 It is useful when there is a large
number of elements in an array.
 The complexity of the binary search
algorithm is given by C(n) = log n.
Example array: 2 7 9 13 15
 Binary search technique: the search value
• is matched with the middle element → then we return the value;
• is less than the middle element → then it must lie in the
lower half of the array;
• is greater than the middle element → then it must lie in the
upper half.
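The three cases above translate directly into code; a minimal Python sketch of binary search on a sorted array:

```python
def binary_search(A, x):
    """Return the index of x in the sorted list A, or -1 if absent."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid] == x:
            return mid               # matched the middle element
        elif x < A[mid]:
            hi = mid - 1             # must lie in the lower half
        else:
            lo = mid + 1             # must lie in the upper half
    return -1

print(binary_search([2, 7, 9, 13, 15], 13))   # → 3
```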
 Sorting is an operation that segregates items into groups according to a
specified criterion.
 For example: array A[] = {10, 7, 15, 2, 20, 4} before sorting;
array A[] = {2, 4, 7, 10, 15, 20} after sorting.
Sorting is of two kinds:
 Internal sorting deals with sorting the data
stored in the computer's
memory.
 External sorting deals with sorting the data stored
in files. It is applied to
voluminous data.
 There are different sorting techniques by which
the same data can be sorted in different
ways:
1. Bubble sort
2. Insertion sort
3. Selection sort
4. Shell sort
5. Merge sort
6. Heap sort
7. Radix sort
8. Quick sort
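As an illustration of the first technique in the list, a minimal Python sketch of bubble sort:

```python
def bubble_sort(A):
    """Repeatedly swap adjacent out-of-order pairs until the list is sorted."""
    n = len(A)
    for i in range(n - 1):
        for j in range(n - 1 - i):    # the last i elements are already in place
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
    return A

print(bubble_sort([10, 7, 15, 2, 20, 4]))   # → [2, 4, 7, 10, 15, 20]
```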
 Review of Complexity
 Most primary sorting algorithms differ in the space and time they use. Time
complexity is defined as the running time of a program, as a function of
input size.
 Complexity, in general, measures an algorithm's efficiency in terms of internal
factors such as the time needed to run the algorithm.
 Time complexity isn't useful for fetching usernames from a database,
concatenating strings, or encrypting passwords. It is instead used for
1. sorting functions,
2. recursive calculations, and
3. things which take more computing time.
 This is not because we don't care about the function's execution time, but
because the difference is negligible. We don't care if it takes 10 ms instead of
3 ms to fetch a username. However, if we have a recursive sorting algorithm
which takes 400 ms and we can reduce that to 5 ms, that would be an
interesting thing to do.
 The table below depicts the time complexity of different
sorting algorithms.

Algorithm      | Worst case          | Average case
Bubble sort    | n(n-1)/2 = O(n²)    | n(n-1)/2 = O(n²)
Insertion sort | O(n²)               | O(n²)
Selection sort | O(n²)               | O(n²)
Shell sort     | O(n²)               | depends on gap sequence
Merge sort     | O(n log n)          | O(n log n)
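The O(n log n) rows in the table come from divide-and-conquer; a minimal Python sketch of merge sort:

```python
def merge_sort(A):
    """Sort by recursively splitting the list and merging the sorted halves."""
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    left, right = merge_sort(A[:mid]), merge_sort(A[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge: take the smaller head
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([10, 7, 15, 2, 20, 4]))   # → [2, 4, 7, 10, 15, 20]
```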
 The growth of a function is usually described using the Big-O notation.
 The Big O notation gives the time complexity of an algorithm: it is a
mathematical representation of the upper bound of the scaling
factor of the algorithm.
 "Popular" functions g(n), listed from slowest to fastest growth:
 1
 log n
 n
 n log n
 n²
 n³
 2ⁿ
 n!
Growth rate increases as we go down the list.
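Evaluating these functions at a single point shows the ordering; a quick Python sketch at n = 16:

```python
import math

n = 16
# values of the popular g(n) functions at n = 16, slowest to fastest growth
popular = [("1", 1),
           ("log n", math.log2(n)),
           ("n", n),
           ("n log n", n * math.log2(n)),
           ("n^2", n ** 2),
           ("n^3", n ** 3),
           ("2^n", 2 ** n),
           ("n!", math.factorial(n))]

for name, value in popular:
    print(f"{name:>8}: {value:,.0f}")

# the values strictly increase down the list, confirming the growth order
values = [v for _, v in popular]
assert values == sorted(values)
```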