UNIT V SORTING AND
HASHING TECHNIQUES
Sorting — Merge sort — Quick sort —
Insertion sort — Shell sort — Radix sort;
Hashing — Hash Functions — Collision
resolution techniques — Linear Probing —
Separate Chaining — Open Addressing —
Rehashing — Extendible Hashing.
Linear Search
 It sequentially checks each element of the list for the target
value until a match is found or the whole list has been searched.
 The time complexity is commonly estimated by counting the
number of elementary steps the algorithm performs to finish
execution.
 Time complexity of linear search is O(n)
Routine
#include <stdio.h>
#include <stdlib.h>

void linear(int x, int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
    {
        if (a[i] == x)
        {
            printf("element is found at position %d\n", i);
            exit(0);
        }
    }
    printf("element not found\n");
}
Binary search
• At each step the algorithm compares the search key with the value at the
middle element of the array
• Binary search requires the list to be sorted.
• If X is the middle element (X == searchkey), the element is found
• Otherwise, if X is smaller than the middle element, apply the same strategy
to the sorted sub-array to the left of the middle element
• If X is greater than the middle element, apply the same strategy to the
right half
SORTING INTRODUCTION
 Sorting is a technique for arranging data in a particular order
 The sorting order can be ascending or descending.
Internal Sorting
 Data resides on main memory of computer.
 It is applicable when the number of elements in the list is small.
 Eg. Bubble Sort, Insertion Sort, Shell Sort, Quick Sort, Selection
Sort, Radix Sort
External Sorting
 Used for huge amounts of data that reside on a secondary storage device
 Eg. Merge Sort
Bubble sort
 Consecutive adjacent pairs of elements in the array are
compared with each other.
 If the element at the lower index is greater than the element at
the higher index,
 the two elements are interchanged so that the smaller element is
placed before the bigger one.
 This process continues until the list of unsorted elements is
exhausted
Routine
void bubble_sort(int A[], int n)
{
    int temp;
    for (int k = 0; k < n - 1; k++)
    {
        /* (n-k-1) skips elements already placed in earlier passes */
        for (int i = 0; i < n - k - 1; i++)
        {
            if (A[i] > A[i + 1])
            {
                temp = A[i];
                A[i] = A[i + 1];
                A[i + 1] = temp;
            }
        }
    }
}
Merge Sort
 Merge sort is a sorting technique based on divide and conquer technique.
 Merge sort first divides the array into equal halves and then combines them in a
sorted manner.
 Algorithm
 Merge sort keeps on dividing the list into equal halves until it can be
divided no further. By definition, a list of one element is sorted. Then, merge sort
combines the smaller sorted lists, keeping the new list sorted too.
 Step 1 − if there is only one element in the list, it is already sorted; return.
 Step 2 − divide the list recursively into two halves until it can be divided no further.
 Step 3 − merge the smaller lists into a new list in sorted order.
Divide and Conquer
 Divide: This involves dividing the problem into smaller sub-problems.
 Conquer: Solve sub-problems by calling recursively until solved.
 Combine: Combine the sub-problems to get the final solution of
the whole problem.
Quick Sort
 Quick sort is a highly efficient sorting algorithm and is based on
partitioning of array of data into smaller arrays.
 A large array is partitioned into two arrays: one holds values
smaller than the specified value, say pivot, based on which the
partition is made, and the other array holds values greater than the
pivot value.
 Divide: In Divide, first pick a pivot element. After that, partition or
rearrange the array into two sub-arrays such that each element in the
left sub-array is less than or equal to the pivot element and each
element in the right sub-array is larger than the pivot element.
 Conquer: Recursively, sort two subarrays with Quicksort.
 Combine: Combine the already sorted array.
 Quicksort picks an element as pivot, and then it partitions the given array
around the picked pivot element.
 In quick sort, a large array is divided into two arrays in which one holds
values that are smaller than the specified value (Pivot), and another array
holds the values that are greater than the pivot.
 After that, the left and right sub-arrays are also partitioned using the same
approach. This continues until a single element remains in each sub-array.
Choosing the pivot
 Picking a good pivot is necessary for a fast implementation of quicksort.
However, it is difficult to determine a good pivot. Some of the ways of
choosing a pivot are as follows -
 Pivot can be random, i.e. select the random pivot from the given array.
 Pivot can be either the rightmost element or the leftmost element of the
given array.
 Select median as the pivot element.
In the given array, we consider the leftmost element as the pivot. So, in this case, a[left] = 24,
a[right] = 27 and a[pivot] = 24.
Since the pivot is at the left, the algorithm starts from the right and moves towards the left.
Now a[pivot] < a[right], so the algorithm moves one position towards the left, i.e. -
Now a[left] = 24, a[right] = 19, and a[pivot] = 24.
Because a[pivot] > a[right],
the algorithm swaps a[pivot] with a[right], and the pivot
moves to the right
Now a[left] = 19, a[right] = 24, and a[pivot] = 24.
Since the pivot is at the right, the algorithm starts from the left and
moves to the right.
As a[pivot] > a[left], the algorithm moves one position to the
right
Now a[left] = 9, a[right] = 24, and a[pivot] = 24. As
a[pivot] > a[left], the algorithm moves one position to the
right
Now a[left] = 29, a[right] = 24, and a[pivot] = 24.
As a[pivot] < a[left],
swap a[pivot] and a[left]; now the pivot is at the left,
i.e. -
Since the pivot is at the left, the algorithm starts from the right
and moves to the left. Now a[left] = 24, a[right] = 29,
and a[pivot] = 24. As a[pivot] < a[right], the
algorithm moves one position to the left, as -
Now a[pivot] = 24, a[left] = 24, and a[right] = 14. As
a[pivot] > a[right], swap a[pivot] and a[right];
now the pivot is at the right.
Now a[pivot] = 24, a[left] = 14, and a[right] = 24.
The pivot is at the right,
so the algorithm starts from the left and moves to the right.
Now a[pivot] = 24, a[left] = 24, and a[right] = 24. So pivot, left and right
all point to the same element. This marks the termination of the procedure.
Element 24, the pivot element, is placed at its exact position.
Elements on the right side of element 24 are greater than it, and the
elements on the left side of element 24 are smaller than it.
Now, in a similar manner, the quick sort algorithm is
separately applied to the left and right sub-
arrays. After sorting is done, the array will be -
Insertion Sort (Straight insertion sort)
 The array of values to be sorted is divided into two sets.
 One that stores sorted values and another that contains unsorted
values.
 Initially, the element with index 0 (assuming LB = 0) is in the sorted
set.
 Rest of the elements are in the unsorted set.
 The first element of the unsorted partition has array index 1 (if LB =
0).
Insertion Sort
 During each iteration of the algorithm, the first element in the
unsorted set is picked up and inserted into the correct position in the
sorted set.
 The sorting algorithm proceeds until there are no elements left in
the unsorted set.
Insertion Sort
Routine
for (i = 1; i < n; i++)
{
    key = a[i];
    j = i - 1;
    while (j >= 0 && a[j] > key)
    {
        a[j + 1] = a[j];
        j = j - 1;
    }
    a[j + 1] = key;
}
Selection Sort
 The algorithm maintains two subarrays in a given array.
 The subarray which is already sorted.
 Remaining subarray which is unsorted.
 The selection sort algorithm sorts an array by repeatedly finding the
minimum element from unsorted part and putting it at the beginning.
 In every iteration of selection sort, the minimum element from the
unsorted subarray is picked and moved to the sorted subarray
Selection Sort
Routine
void selection_sort(int A[], int n)
{
    /* temporary variables: position of the minimum element, and a swap buffer */
    int minimum, temp;
    for (int i = 0; i < n - 1; i++)
    {
        /* find the minimum element in A[i..n-1] */
        minimum = i;
        for (int j = i + 1; j < n; j++)
        {
            if (A[j] < A[minimum])
            {
                minimum = j;
            }
        }
        /* put the minimum element in its proper position */
        temp = A[minimum];
        A[minimum] = A[i];
        A[i] = temp;
    }
}
Shell Sort
 Improved version of straight insertion sort
 In shell sort, elements at a specific interval are sorted.
 The interval between the elements is gradually decreased based on the
sequence used.
 Sequence
–Shell's original sequence: N/2 , N/4 , …, 1
 The performance of the shell sort depends on the type of sequence used for a
given input array
 Gap=floor(N/2)
 Gap1=floor(Gap/2)
 Gap2=floor(Gap1/2)
Example 1 Shell Sort with Increments of Three
Shell Sort
int shellSort(int arr[], int n)
{
    for (int gap = n / 2; gap > 0; gap /= 2)
    {
        for (int i = gap; i < n; i += 1)
        {
            int temp = arr[i];
            int j;
            for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
                arr[j] = arr[j - gap];
            arr[j] = temp;
        }
    }
    return 0;
}
Radix Sort
 First, sort elements based on the value of the units place.
 Then, sort elements based on the value of the tens place.
 This process goes on until the most significant place
Example [121, 432, 564, 23, 1, 45, 788]
Radix Sort
 Find the largest element in the array, i.e. max.
 Let X be the number of digits in max.
–X is calculated because we have to go through all the significant
places of all elements.
 In this array [121, 432, 564, 23, 1, 45, 788],
–we have the largest number 788.
–It has 3 digits.
–Therefore, the loop should go up to hundreds place (3 times)
Radix Sort
•Step 1 - Define 10 queues each representing a bucket for each digit
from 0 to 9.
•Step 2 - Consider the least significant digit of each number in the list
which is to be sorted.
•Step 3 - Insert each number into their respective queue based on the
least significant digit.
•Step 4 - Group all the numbers from queue 0 to queue 9 in the order
they have inserted into their respective queues.
•Step 5 - Repeat from step 3 based on the next least significant digit.
•Step 6 - Repeat from step 2 until all the numbers are grouped based
on the most significant digit.
Adaptive and Non-Adaptive Sorting Algorithm
•Adaptive sorting algorithm
–It takes advantage of already 'sorted' elements in the list that is to be sorted.
–While sorting, if the source list has some elements already sorted,
•adaptive algorithms will take this into account and will try not to re-order them.
–Eg: Insertion sort, bubble sort
•Non-Adaptive sorting algorithm
–Does not take into account the elements which are already sorted.
–Forces every single element to be re-ordered to confirm their sortedness
–Eg: Selection sort, shell sort, radix sort
Application of sorting
•Searching
–Binary search enables you to test whether an item is in a dictionary in
O(log n) time, once the keys are all sorted.
•Closest pair
–Given a set of n numbers
–Find the pair of numbers that have the smallest difference between them
–After the numbers are sorted, the closest pair of numbers will lie next to each
other somewhere in sorted order.
•Element uniqueness
–Are there any duplicates in a given a set of n items?
–The most efficient algorithm is to sort them and then do a linear scan through them
checking all adjacent pairs.
Application of sorting
•Selection
–What is the kth largest item in the set?
–If the keys are placed in sorted order in an array, the kth largest can be found in
constant time by simply looking at the kth position of the array
•Government organizations, financial institutions, and commercial
enterprises
–Organize much of this information by sorting it
•Load-balancing problem
–M identical processors and N jobs to complete
–Goal is to schedule all of the jobs on the processors so that the time when the last
job completes is as early as possible.
Generation of Hash address
• A hash address is generated by applying a hash function (hash
algorithm) to the key.
• Its methods are:
• Truncation method
• Folding method
• Mid square method
• Division method
Truncation Method
• Part of the key (e.g. its leading digits) is taken as the hash
address; the remaining digits are truncated.
• Example:
Hash table size 10000
Keys 987456 125978 611790
Hash Address 987 125 611
Folding method
• The given keys are broken down into groups of digits (maybe
3), and the groups are added up to form the hash address.
• Example:
Hash table size 1000
Keys 612111979
Grouping 612 + 111 + 979 = 1702
Note: if the result exceeds the table size, apply the truncation method
to the folding method result.
Mid square method
• Square the key and take the middle digits of the result as
the hash address.
• Example:
Hash table size: 1000
Keys:456 978 294
Squared value: 207936 956484 86436
Mid Squared Value: 79 64 43
Division Method
• Modular division is applied to the key, and the remainder is taken as the
hash address.
H(key) = key % tablesize
• Example:
Hash table size: 11
Keys:43 10 36
H(43) = 43%11 = 10, H(10) = 10%11 = 10,
H(36) = 36%11 = 3
Collision Resolution Techniques
 Collision (crash)
Collision occurs when two or more elements are hashed to the
same value
 When two items hash to the same slot, there is a systematic
method for placing the second item in the hash table. This
process is called collision resolution.
Separate chaining
• When a collision occurs, elements with the
same hash key will be chained together.
• A chain is simply a linked list of all the elements
with the same hash key.
 Advantage:
 More elements can be inserted, as it uses an
array of linked lists
 Disadvantage
 It requires pointers, which occupy more memory
space
 It takes more effort to perform a search
Open addressing
 Open addressing, also called closed hashing, is an
alternative to resolving collisions with linked lists.
 In this system, if a collision occurs, alternative cells are tried
until an empty cell is found.
 Three common collision resolution strategies are
 Linear probing
 Quadratic probing
 Double hashing
Linear probing
 The simplest approach to resolve a collision is linear probing.
 In this technique, if a value is already stored at the location generated by h(k),
a collision has occurred; we then do a sequential search to find an empty
location.
 Here the idea is to place the value in the next available position.
 Because searches in this approach are performed sequentially, it is known
as linear probing.
 The array or hash table is considered circular: when the last slot is
reached and no empty location has been found, the search proceeds to the first
location of the array.
 If the location is empty, the value is stored there; otherwise the next location is tried.
The following hash function is used to resolve collisions:
h(k, i) = [h(k) + i] mod m
 Where
 m = size of the hash table,
 i = the probe number, varying from 0 to m–1.
 Insert 42,39,69,21,71,55,33
REHASHING
 As the name suggests, rehashing means hashing again.
 Basically, when the load factor increases beyond its pre-defined
value, the cost of operations increases.
 To overcome this, the size of the array is increased (doubled) and all
the values are hashed again and stored in the new double-sized array, to
maintain a low load factor and low complexity.
 Builds the new table with twice the size
Extendible hashing
 A dynamic hashing method wherein directories and buckets are used to hash
data.
 Directories: These containers store pointers to buckets
 Buckets: They store the hashed keys.
 Global Depth -It is associated with the Directories. Global Depth = Number of
bits in directory id.
 Local Depth-Local Depth is associated with the buckets and not the directories.
 Bucket Splitting: When the number of elements in a bucket exceeds a particular
size, then the bucket is split into two parts.
 Directory Expansion: Directory Expansion Takes place when a bucket
overflows. Directory Expansion is performed when the local depth of the
overflowing bucket is equal to the global depth
 Extendible hashing is designed so that a find can be performed in two disk accesses
 The data consists of six-bit integers.
 The root of the tree contains four pointers, determined by the leading two bits of the
data. (D)
 Each leaf holds up to M = 4 elements
 D represents the number of bits used by the root
 The number of entries in the directory is 2^D
 Suppose we want to insert the key 100100.
 This would go into the third leaf.
 But the third leaf is already full; there is no room.
 So we split this leaf into two leaves.
 Now D is increased to 3, so the directory has 2^3 = 8 entries.
 Advantage
Provide quick access for insertion and searching
 Disadvantage
The algorithm does not work if there are more than
M duplicates.
More Related Content

PPTX
Unit 2 - Quick Sort.pptx
PPTX
Searching and Sorting Algorithms in Data Structures
PPTX
Dsa – data structure and algorithms sorting
PPT
search_sort.ppt
PDF
Sorting Algorithms and their implementations
PDF
Heap, quick and merge sort
PDF
PPTX
data structures and algorithms Unit 3
Unit 2 - Quick Sort.pptx
Searching and Sorting Algorithms in Data Structures
Dsa – data structure and algorithms sorting
search_sort.ppt
Sorting Algorithms and their implementations
Heap, quick and merge sort
data structures and algorithms Unit 3

Similar to UNIT V Searching Sorting Hashing Techniques [Autosaved].pptx (20)

PDF
Data structures arrays
PPTX
Chapter 2 Sorting and Searching .pptx.soft
PDF
Chapter 1 - Introduction to Searching and Sorting Algorithms - Student.pdf
DOCX
MODULE 5-Searching and-sorting
PPTX
Lect-2.pptx
PPTX
my docoment
PDF
An Experiment to Determine and Compare Practical Efficiency of Insertion Sort...
PPT
Lecture_4 (Sorting Algorithms) before mids - Copy.ppt
PPTX
Sortingalgorithminjavamergesortandr.pptx
PDF
Searching
PPTX
Unit vii sorting
PPTX
Lecture_Oct26.pptx
PPTX
searching in data structure.pptx
PPT
search_sort search_sortsearch_sort search_sortsearch_sortsearch_sortsearch_sort
PPTX
Searching and Sorting algorithms and working
PPT
search_sort Search sortSearch sortSearch sortSearch sort
PPT
search_sort_v1.pptgghghhhggggjjjjjjllllllllvbbbbbcfdsdfffg
PPT
21-algorithms (1).ppt
PPT
Data structures arrays
Chapter 2 Sorting and Searching .pptx.soft
Chapter 1 - Introduction to Searching and Sorting Algorithms - Student.pdf
MODULE 5-Searching and-sorting
Lect-2.pptx
my docoment
An Experiment to Determine and Compare Practical Efficiency of Insertion Sort...
Lecture_4 (Sorting Algorithms) before mids - Copy.ppt
Sortingalgorithminjavamergesortandr.pptx
Searching
Unit vii sorting
Lecture_Oct26.pptx
searching in data structure.pptx
search_sort search_sortsearch_sort search_sortsearch_sortsearch_sortsearch_sort
Searching and Sorting algorithms and working
search_sort Search sortSearch sortSearch sortSearch sort
search_sort_v1.pptgghghhhggggjjjjjjllllllllvbbbbbcfdsdfffg
21-algorithms (1).ppt
Ad

Recently uploaded (20)

PDF
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PDF
Human-AI Collaboration: Balancing Agentic AI and Autonomy in Hybrid Systems
DOCX
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
PDF
Unit I ESSENTIAL OF DIGITAL MARKETING.pdf
PPTX
Fundamentals of safety and accident prevention -final (1).pptx
PDF
PPT on Performance Review to get promotions
PPTX
Construction Project Organization Group 2.pptx
PPTX
Geodesy 1.pptx...............................................
PDF
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
PPTX
additive manufacturing of ss316l using mig welding
PPTX
web development for engineering and engineering
PPTX
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
PPTX
bas. eng. economics group 4 presentation 1.pptx
PPTX
Internet of Things (IOT) - A guide to understanding
PPTX
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
PPTX
UNIT 4 Total Quality Management .pptx
PPT
Project quality management in manufacturing
PPTX
Safety Seminar civil to be ensured for safe working.
PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
Human-AI Collaboration: Balancing Agentic AI and Autonomy in Hybrid Systems
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
Unit I ESSENTIAL OF DIGITAL MARKETING.pdf
Fundamentals of safety and accident prevention -final (1).pptx
PPT on Performance Review to get promotions
Construction Project Organization Group 2.pptx
Geodesy 1.pptx...............................................
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
additive manufacturing of ss316l using mig welding
web development for engineering and engineering
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
bas. eng. economics group 4 presentation 1.pptx
Internet of Things (IOT) - A guide to understanding
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
UNIT 4 Total Quality Management .pptx
Project quality management in manufacturing
Safety Seminar civil to be ensured for safe working.
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
Ad

UNIT V Searching Sorting Hashing Techniques [Autosaved].pptx

  • 1. UNIT V SORTING AND HASHING TECHNIQUES
  • 2. Sorting — Merge sort — Quick sort — Insertion sort — Shell sort — Radix sort; Hashing — Hash Functions — Collision resolution techniques — Linear Probing — Separate Chaining — Open Addressing — Rehashing — Extendible Hashing.
  • 3. Linear Search  It sequentially checks each element of the list for the target value until a match is found  The time complexity is commonly estimated by counting the number of elementary steps performed by any algorithm to finish execution.  Time complexity of linear search is O(n)
  • 5. Routine void linear(int x, int a[], int n) { int i; for(i=0;i<n;i++) { if(a[i]==x) { printf("element is found at position %dn",i); exit(0); } } printf("element not found"); }
  • 6. Binary search • Each step the algorithm compares the input key with the value at the middle element of array • In binary search, the list is scanned from left to right. • It X is the middle element(X==searchkey), the element is found • Otherwise, if X is smaller than the middle element, apply the strategy to the sorted sub array to the left of the middle element • If X is greater than the middle element apply the same strategy to the right half
  • 9. SORTING INTRODUCTION  Sorting is a technique for arranging data in a particular order  The sorting order can be ascending or descending. Internal Sorting  Data resides on main memory of computer.  It is applicable when the number of elements in the list is small.  Eg. Bubble Sort, Insertion Sort, Shell Sort, Quick Sort., Selection sort, Radix sort External Sorting  huge amount of data and it resides on secondary device  Eg. Merge Sort
  • 10. Bubble sort  Consecutive adjacent pairs of elements in the array are compared with each other.  If the element at the lower index is greater than the element at the higher index  the two elements are interchanged so that the element is placed before the bigger one.  This process will continue till the list of unsorted elements exhausts
  • 12. Routine void bubble_sort( int A[ ], int n ) { int temp; for(int k = 0; k< n-1; k++) { // (n-k-1) is for ignoring comparisons of elements which have already been compared in earlier iterations for(int i = 0; i < n-k-1; i++) { if(A[ i ] > A[ i+1] ) { temp = A[ i ]; A[ i ] = A[ i+1 ]; A[ i + 1] = temp; } } } }
  • 13. Merge Sort  Merge sort is a sorting technique based on divide and conquer technique.  Merge sort first divides the array into equal halves and then combines them in a sorted manner.  Algorithm  Merge sort keeps on dividing the list into equal halves until it can no more be divided. By definition, if it is only one element in the list, it is sorted. Then, merge sort combines the smaller sorted lists keeping the new list sorted too.  Step 1 − if it is only one element in the list it is already sorted, return.  Step 2 − divide the list recursively into two halves until it can no more be divided.  Step 3 − merge the smaller lists into new list in sorted order.
  • 14. Divide and Conquer  Divide: This involves dividing the problem into smaller sub-problems.  Conquer: Solve sub-problems by calling recursively until solved.  Combine: Combine the sub-problems to get the final solution of the whole problem.
  • 16. Quick Sort  Quick sort is a highly efficient sorting algorithm and is based on partitioning of array of data into smaller arrays.  A large array is partitioned into two arrays one of which holds values smaller than the specified value, say pivot, based on which the partition is made and another array holds values greater than the pivot value.  Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into two sub-arrays such that each element in the left sub-array is less than or equal to the pivot element and each element in the right sub-array is larger than the pivot element.  Conquer: Recursively, sort two subarrays with Quicksort.  Combine: Combine the already sorted array.
  • 17.  Quicksort picks an element as pivot, and then it partitions the given array around the picked pivot element.  In quick sort, a large array is divided into two arrays in which one holds values that are smaller than the specified value (Pivot), and another array holds the values that are greater than the pivot.  After that, left and right sub-arrays are also partitioned using the same approach. It will continue until the single element remains in the sub-array.
  • 18. Choosing the pivot  Picking a good pivot is necessary for the fast implementation of quicksort. However, it is typical to determine a good pivot. Some of the ways of choosing a pivot are as follows -  Pivot can be random, i.e. select the random pivot from the given array.  Pivot can either be the rightmost element of the leftmost element of the given array.  Select median as the pivot element.
  • 19. In the given array, we consider the leftmost element as pivot. So, in this case, a[left] = 24, a[right] = 27 and a[pivot] = 24. Since, pivot is at left, so algorithm starts from right and move towards left. Now, a[pivot] < a[right], so algorithm moves forward one position towards left, i.e. -
  • 20. Now, a[left] = 24, a[right] = 19, and a[pivot] = 24. Because, a[pivot] > a[right], so, algorithm will swap a[pivot] with a[right], and pivot moves to right Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since, pivot is at right, so algorithm starts from left and moves to right. As a[pivot] > a[left], so algorithm moves one position to right
  • 21. Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], so algorithm moves one position to right Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], so, swap a[pivot] and a[left], now pivot is at left, i.e. -
  • 22. Since, pivot is at left, so algorithm starts from right, and move to left. Now, a[left] = 24, a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], so algorithm moves one position to left, as - Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], so, swap a[pivot] and a[right], now pivot is at right,
  • 23. Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. Pivot is at right, so the algorithm starts from left and move to right. Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So, pivot, left and right are pointing the same element. It represents the termination of procedure. Element 24, which is the pivot element is placed at its exact position. Elements that are right side of element 24 are greater than it, and the elements that are left side of element 24 are smaller than it.
  • 24. Now, in a similar manner, quick sort algorithm is separately applied to the left and right sub- arrays. After sorting gets done, the array will be -
  • 25. Insertion Sort (Straight insertion sort)  The array of values to be sorted is divided into two sets.  One that stores sorted values and another that contains unsorted values.  Initially, the element with index 0 (assuming LB = 0) is in the sorted set.  Rest of the elements are in the unsorted set.  The first element of the unsorted partition has array index 1 (if LB = 0).
  • 26. Insertion Sort  During each iteration of the algorithm, the first element in the unsorted set is picked up and inserted into the correct position in the sorted set.  The sorting algorithm will proceed until there are elements in the unsorted set.
  • 28. Routine for(i=1; i<n; i++) { key = a[ i ]; j=i-1; while(j>=0 && a[ j ]>key) { a[ j+1 ]=a[ j ]; j=j-1; } a[ j+1 ]=key; }
  • 29. Selection Sort  The algorithm maintains two subarrays in a given array.  The subarray which is already sorted.  Remaining subarray which is unsorted.  The selection sort algorithm sorts an array by repeatedly finding the minimum element from unsorted part and putting it at the beginning.  In every iteration of selection sort, the minimum element from the unsorted subarray is picked and moved to the sorted subarray
  • 31. Routine void selection_sort (int A[ ], int n) //n=8 { // temporary variable to store the position of minimum element int minimum; for(int i = 0; i < n-1 ; i++) { //finds the minimum element minimum = i ; for(int j = i+1; j < n ; j++ ) { if(A[ j ] < A[ minimum ]) { minimum = j ; } } // putting minimum element on its proper position. swap ( A[ minimum ], A[ i ]) ; } }
  • 32. Shell Sort  Improved version of straight insertion sort  In shell sort, elements at a specific interval are sorted.  The interval between the elements is gradually decreased based on the sequence used.  Sequence –Shell's original sequence: N/2 , N/4 , …, 1  The performance of the shell sort depends on the type of sequence used for a given input array  Gap=floor(N/2)  Gap1=floor(Gap/2)  Gap2=floor(Gap1/2)
  • 33. Example 1 Shell Sort with Increments of Three
  • 35. int shellSort(int arr[], int n) { for (int gap = n/2; gap > 0; gap /= 2) { for (int i = gap; i < n; i += 1) { int temp = arr[i]; int j; for (j = i; j >= gap && arr[j - gap] > temp; j -= gap) arr[j] = arr[j - gap]; arr[j] = temp; } } return 0; }
  • 36. Radix Sort  First, sort elements based on the value of the unit place.  Then, sort elements based on the value of the tenth place.  This process goes on until the last significant place
  • 37. Example [121, 432, 564, 23, 1, 45, 788]
  • 38. Radix Sort  Find the largest element in the array, i.e. max.  Let X be the number of digits in max. –X is calculated because we have to go through all the significant places of all elements.  In this array [121, 432, 564, 23, 1, 45, 788], –we have the largest number 788. –It has 3 digits. –Therefore, the loop should go up to hundreds place (3 times)
  • 39. Radix Sort •Step 1 - Define 10 queues each representing a bucket for each digit from 0 to 9. •Step 2 - Consider the least significant digit of each number in the list which is to be sorted. •Step 3 - Insert each number into their respective queue based on the least significant digit. •Step 4 - Group all the numbers from queue 0 to queue 9 in the order they have inserted into their respective queues. •Step 5 - Repeat from step 3 based on the next least significant digit. •Step 6 - Repeat from step 2 until all the numbers are grouped based on the most significant digit.
• 43. Adaptive and Non-Adaptive Sorting Algorithms
• Adaptive sorting algorithm
 – Takes advantage of already 'sorted' elements in the list that is to be sorted.
 – While sorting, if the source list has some elements already sorted, adaptive algorithms take this into account and try not to re-order them.
 – Eg: Insertion sort, bubble sort
• Non-adaptive sorting algorithm
 – Does not take into account the elements which are already sorted.
 – Forces every single element to be re-ordered to confirm the sorted order.
 – Eg: Selection sort, shell sort, radix sort
• 44. Applications of sorting
• Searching
 – Binary search enables you to test whether an item is in a dictionary in O(log n) time, once the keys are all sorted.
• Closest pair
 – Given a set of n numbers,
 – find the pair of numbers that have the smallest difference between them.
 – After the numbers are sorted, the closest pair of numbers will lie next to each other somewhere in sorted order.
• Element uniqueness
 – Are there any duplicates in a given set of n items?
 – The most efficient algorithm is to sort them and then do a linear scan through them checking all adjacent pairs.
• 45. Applications of sorting
• Selection
 – What is the kth largest item in the set?
 – If the keys are placed in sorted order in an array, the kth largest can be found in constant time by simply looking at the kth position of the array.
• Government organizations, financial institutions, and commercial enterprises
 – organize much of their information by sorting it.
• Load-balancing problem
 – M identical processors and N jobs to complete.
 – The goal is to schedule all of the jobs on the processors so that the last job completes as early as possible.
• 52. Generation of Hash Address
• A hash function (hash algorithm) generates the hash address from the key.
• Its methods are:
• Truncation method
• Folding method
• Mid-square method
• Division method
• 53. Truncation Method
• Part of the key (here, the first three digits) is taken directly as the hash address.
• Example: Hash table size 1000
  Keys: 987456  125978  611790
  Hash Address: 987  125  611
• 54. Folding Method
• The given key is broken down into groups of digits (say 3) and the groups are added up to form the hash address.
• Example: Hash table size 1000
  Key: 612111979
  Grouping: 612 + 111 + 979 = 1702
• Note: if the result exceeds the table size, apply the truncation method to the folding result.
• 55. Mid-Square Method
• The key is squared and the middle digits of the result are taken as the hash address.
• Example: Hash table size: 1000
  Keys: 456  978  294
  Squared values: 207936  956484  86436
  Mid-squared values: 79  64  43
• 56. Division Method
• Modular division is applied and the result is taken as the hash address.
  H(key) = key % tablesize
• Example: Hash table size: 11
  Keys: 43  10  36
  H(43) = 43 % 11 = 10,  H(10) = 10 % 11 = 10,  H(36) = 36 % 11 = 3
• 58. Collision Resolution Techniques
 Collision (crash)
 A collision occurs when two or more elements are hashed to the same value.
 When two items hash to the same slot, there is a systematic method for placing the second item in the hash table. This process is called collision resolution.
  • 61. Separate chaining • When a collision occurs, elements with the same hash key will be chained together. • A chain is simply a linked list of all the elements with the same hash key.
• 63.
 Advantage:
 More elements can be inserted, since the table uses an array of linked lists.
 Disadvantage:
 It requires pointers, which occupy more memory space.
 It takes more effort to perform a search.
• 64. Open Addressing
 Open addressing, also called closed hashing, is an alternative to resolving collisions with linked lists.
 In this system, if a collision occurs, alternative cells are tried until an empty cell is found.
 Three common collision resolution strategies are:
 Linear probing
 Quadratic probing
 Double hashing
• 65. Linear Probing
 The simplest approach to resolving a collision is linear probing.
 In this technique, if a value is already stored at the location generated by h(k), a collision has occurred, and we do a sequential search to find an empty location.
 The idea is to place the value in the next available position.
 Because searches are performed sequentially, this approach is known as linear probing.
 The array (hash table) is considered circular: when the last slot is reached and no empty location has been found, the search proceeds to the first location of the array.
• 66.
 If the location is empty, store the value; otherwise probe the next location. The following hash function is used to resolve collisions:
h(k, i) = [h(k) + i] mod m
 Where
 m = size of the hash table,
 i = the probe number, which varies from 0 to m–1.
• 68. REHASHING
 As the name suggests, rehashing means hashing again.
 When the load factor increases beyond its pre-defined value, collisions become frequent and operations slow down.
 To overcome this, the size of the array is increased (doubled) and all the values are hashed again and stored in the new double-sized array, to maintain a low load factor and low complexity.
 Rehashing builds the new table with twice the size.
• 70. Extendible Hashing
 A dynamic hashing method wherein directories and buckets are used to hash data.
 Directories: these containers store pointers to buckets.
 Buckets: they store the hashed keys.
 Global depth: associated with the directories. Global depth = number of bits in the directory id.
 Local depth: associated with the buckets, not the directories.
 Bucket splitting: when the number of elements in a bucket exceeds a particular size, the bucket is split into two parts.
 Directory expansion: takes place when a bucket overflows. Directory expansion is performed when the local depth of the overflowing bucket is equal to the global depth.
• 71.
 Extendible hashing allows a find to be performed in two disk accesses.
 Data consists of six-bit integers.
 The root of the tree contains four pointers, determined by the leading two bits of the data (D).
 Each leaf holds up to M = 4 elements.
 D represents the number of bits used by the root.
 The number of entries in the directory is 2^D.
• 73.
 Suppose we want to insert the key 100100.
 This would go into the third leaf, but the third leaf is already full; there is no room.
 So we split this leaf into two leaves.
 Now the directory size (global depth) is increased to 3.
• 75.
 Advantage: provides quick access for insertion and searching.
 Disadvantage: the algorithm does not work if there are more than M duplicates.