1
Tool & Techniques of Data
Science
Unsupervised Learning
2
Unsupervised Machine Learning
• Unsupervised machine learning uses machine learning algorithms to
analyze and cluster unlabeled datasets.
• These algorithms discover hidden patterns or data groupings without
the need for human intervention.
3
Cluster Analysis
4
What is Cluster Analysis?
• A grouping of data objects such that the objects within a group are
similar (or related) to one another and different from (or unrelated to)
the objects in other groups.
5
What is not Cluster Analysis?
• Supervised classification
• Have class label information.
• Simple segmentation
• Dividing students into different registration groups alphabetically, by last name.
• Result of a query
• Groupings are a result of external specifications.
• Graph partitioning
• Some mutual relevance and synergy, but areas are not identical.
6
Applications of Clustering
• Image Processing
• Cluster images based on their visual content
• Web
• Cluster groups of users based on their access patterns on webpages
• Cluster webpages based on their content
• Bioinformatics
• Cluster similar proteins together (similarity w.r.t. chemical structure and/or
functionality, etc.)
• Many more…
7
Types of Clustering
• Partitional Clustering
• A division of data objects into non-overlapping subsets (clusters) such that each
data object is in exactly one subset.
• Hierarchical Clustering
• A set of nested clusters organized as a hierarchical tree
8
Partitional Clustering
9
Hierarchical Clustering
10
Other Distinctions between sets of Clusters
• Exclusive versus non-exclusive
• In non-exclusive clustering, points may belong to multiple clusters.
• Can represent multiple classes or ‘border’ points
• Fuzzy versus non-fuzzy
• In fuzzy clustering, a point belongs to every cluster with some weight between 0
and 1
• Weights must sum to 1
• Probabilistic clustering has similar characteristics
• Partial versus complete
• In some cases, we only want to cluster some of the data
• Heterogeneous versus homogeneous
• Clusters of widely different sizes, shapes, and densities
11
Types of Clusters
• Well-separated clusters
• Center-based clusters
• Contiguous clusters
• Density-based clusters
• Property or Conceptual
• Described by an Objective Function
12
Well Separated Clusters
• A cluster is a set of points such that any point in a cluster is closer (or
more similar) to every other point in the cluster than to any point not
in the cluster.
3 well-separated Clusters
13
Center Based Clusters
• A cluster is a set of objects such that an object in a cluster is closer
(more similar) to the “center” of its cluster than to the center of any
other cluster
• The center of a cluster is often a centroid, the average of all the
points in the cluster, or a medoid, the most “representative” point of
a cluster
14
Contiguous Clusters (Nearest neighbor or Transitive)
• A cluster is a set of points such that a point in a cluster is closer (or
more similar) to one or more other points in the cluster than to any
point not in the cluster.
15
Density based Clusters
• A cluster is a dense region of points that is separated from other
regions of high density by regions of low density.
• Used when the clusters are irregular or intertwined, and when noise
and outliers are present.
16
Shared Property or Conceptual Clusters
• Finds clusters that share some common property or represent a
particular concept.
2 Overlapping circles
17
Clusters defined by an Objective Function
• Finds clusters that minimize or maximize an objective function.
• Enumerate all possible ways of dividing the points into clusters and evaluate the
‘goodness’ of each potential set of clusters using the given objective function.
(NP-hard)
• Can have global or local objectives.
• Hierarchical clustering algorithms typically have local objectives
• Partitional algorithms typically have global objectives
• A variation of the global objective function approach is to fit the data to a
parameterized model.
• Parameters for the model are determined from the data.
• Mixture models assume that the data is a ‘mixture' of a number of
statistical distributions.
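As a concrete, purely illustrative sketch of fitting data to a parameterized mixture model, the snippet below fits a Gaussian mixture with scikit-learn; the synthetic data and the choice of three components are assumptions, not part of the slides.

# Minimal sketch: fitting a Gaussian mixture model to toy 2-D data (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Three synthetic blobs as a stand-in for real data
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.7, size=(100, 2)),
    rng.normal(loc=(0, 5), scale=0.6, size=(100, 2)),
])

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)        # hard cluster assignments
probs = gmm.predict_proba(X)   # soft (probabilistic) memberships
print(gmm.means_)              # estimated component means (the fitted model parameters)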
18
Clustering Algorithms
• K-Means Clustering
• Hierarchical Clustering
• Density-based Clustering
19
K-Means Clustering
20
K-Means Clustering
• Partitional clustering approach
• Each cluster is associated with a centroid (center point)
• Each point is assigned to the cluster with the closest centroid
• Number of clusters, K, must be specified
• The basic algorithm is very simple
21
K-Means Clustering Algorithm
Input:
• K (number of clusters)
• Training set {x^(1), x^(2), …, x^(m)}
Notation:
• c^(i) = index of the cluster (1, 2, …, K) to which example x^(i) is currently assigned
• μ_k = centroid of cluster k (μ_k ∈ ℝ^n)
• μ_c^(i) = centroid of the cluster to which example x^(i) has been assigned
22
K-Means Clustering Algorithm
Randomly initialize K cluster centroids μ_1, μ_2, …, μ_K
Repeat {
  for i = 1 to m:
    c^(i) := index (from 1 to K) of the cluster centroid closest to x^(i)
  for k = 1 to K:
    μ_k := average (mean) of the points assigned to cluster k
}
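As a minimal sketch (not code from the slides), the assignment and update steps above can be written directly in NumPy; the function name, toy interface, and iteration count are mine.

# Minimal NumPy sketch of the K-means loop above (illustrative, not optimized).
import numpy as np

def kmeans(X, K, iters=10, seed=0):
    # Randomly initialize K cluster centroids by picking K distinct data points
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(iters):
        # Assignment step: c(i) := index (from 1 to K) of the centroid closest to x(i)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        c = dists.argmin(axis=1)
        # Update step: mu_k := average (mean) of the points assigned to cluster k
        # (a centroid that loses all its points is simply kept where it is)
        centroids = np.array([X[c == k].mean(axis=0) if np.any(c == k) else centroids[k]
                              for k in range(K)])
    return c, centroids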
23–31
K-Means Clustering Algorithm
(Figures only: these slides step through the algorithm graphically, alternating the assignment and centroid-update steps over successive iterations.)
32
K-Means Clustering Example
• Apply the K-Means algorithm (K = 2) to the data (185, 72), (170, 56), (168, 60),
(179, 68), (182, 72), (188, 77) for two iterations and show the clusters.
Initially choose the first two objects as the initial centroids.
• Given: number of clusters to be created K = 2, say c1 and c2
• Number of iterations = 2
• First two objects as initial centroids:
• Centroid of the first cluster c1 = (185, 72)
• Centroid of the second cluster c2 = (170, 56)
33
K-Means Clustering Example
• Iteration 1: Calculating Euclidean distance
34
K-Means Clustering Example
• Representing the information in tabular form
• The resultant clusters formed are:
35
K-Means Clustering Example
• Iteration 2:
• Now calculating new centroids
• Again calculate the distance from these
new centroids
36
K-Means Clustering Example
• Representing the information in tabular form
• The resultant clusters formed are:
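The distance tables and cluster assignments on these slides appear only as figures, so the sketch below reproduces the two iterations of the example in NumPy, starting from the initial centroids (185, 72) and (170, 56); the variable names are mine.

# Sketch reproducing the two iterations of the K = 2 example above.
import numpy as np

points = np.array([(185, 72), (170, 56), (168, 60), (179, 68), (182, 72), (188, 77)], dtype=float)
centroids = np.array([(185, 72), (170, 56)], dtype=float)   # first two objects as initial centroids

for it in (1, 2):
    # Euclidean distance of every point to each centroid, then assign to the nearest one
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    assignment = dists.argmin(axis=1)                        # 0 -> cluster c1, 1 -> cluster c2
    # New centroids are the means of the points assigned to each cluster
    centroids = np.array([points[assignment == k].mean(axis=0) for k in (0, 1)])
    print("Iteration", it, "assignments:", assignment, "new centroids:", centroids)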
37
What is the right value of K?
• Elbow Method:
• A graphical method for finding a suitable value of K in K-means
clustering: run K-means for a range of K values, plot the within-cluster
SSE against K, and choose the K at the ‘elbow’ of the curve, where further
increases in K give only small reductions in SSE.
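A minimal sketch of the elbow method using scikit-learn; the synthetic data, the range of K values, and the KMeans settings are illustrative assumptions (KMeans.inertia_ is the within-cluster SSE defined on the next slide).

# Sketch of the elbow method: plot SSE (inertia) against K and look for the "elbow".
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)   # stand-in for real data

ks = range(1, 11)
sse = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

plt.plot(list(ks), sse, marker="o")
plt.xlabel("Number of clusters K")
plt.ylabel("SSE (inertia)")
plt.title("Elbow method")
plt.show()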
38
Evaluating K-Means Clusters
• The most common measure is the Sum of Squared Errors (SSE)
• For each point, the error is the distance to its nearest cluster centroid
• To get the SSE, we square these errors and sum them: SSE = Σ_{i=1..K} Σ_{x ∈ C_i} dist(m_i, x)²
• x is a data point in cluster C_i and m_i is the representative point for cluster C_i
• It can be shown that m_i corresponds to the center (mean) of the cluster
• Given two clusterings, we can choose the one with the smaller SSE
• One easy way to reduce SSE is to increase K, the number of clusters
• However, a good clustering with a smaller K can have a lower SSE than a poor clustering with a higher K
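The SSE definition above maps directly onto a few lines of NumPy; this sketch assumes points is the data array, labels holds each point's cluster index, and centroids holds the cluster means (all names are mine).

# SSE = sum over all points of the squared distance to their cluster centroid.
import numpy as np

def sse(points, labels, centroids):
    # Error of each point = distance to the centroid of the cluster it is assigned to
    errors = np.linalg.norm(points - centroids[labels], axis=1)
    return np.sum(errors ** 2)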
39
Updating Centers
• In the basic K-means algorithm, centroids are updated after all points
are assigned to a centroid.
• An alternative is to update the centroids after each assignment
(incremental approach)
• Each assignment updates zero or two centroids
• More expensive
• Introduces an order dependency
• Never get an empty cluster
• Can use “weights” to change the impact
40
Limitations of K-Means Clustering
41
Limitations
• K-means has problems when clusters are of differing
• Sizes
• Densities
• Non-globular shapes
• K-means has problems when the data contains outliers
42
Different Sizes
Original Points vs. K-Means (3 clusters)
43
Different Densities
Original Points vs. K-Means (3 clusters)
44
Non-globular Shapes
Original Points vs. K-Means (2 clusters)
45
Overcoming Limitations of K-Means Clustering
46
Different Sizes
Original Points vs. K-Means (increased number of clusters)
47
Different Densities
Original Points vs. K-Means (increased number of clusters)
48
Non-globular Shapes
Original Points vs. K-Means (increased number of clusters)
49
Hierarchical Clustering
50
What is Hierarchical Clustering?
• Produces a set of nested clusters organized as a hierarchical tree.
• Can be visualized as a dendrogram
• A tree-like diagram that records the sequence of merges or splits
51
Strengths of Hierarchical Clustering
• Do not have to assume any
particular number of clusters
• Any desired number of clusters
can be obtained by ‘cutting’ the
dendrogram at the proper level
• They may correspond to
meaningful taxonomies
• Example in biological sciences
(e.g., animal kingdom, phylogeny
reconstruction, …)
52
Hierarchical Clustering
• Two main types of hierarchical
clustering
• Agglomerative
• Divisive
• Traditional hierarchical
algorithms use a similarity or
distance matrix
• Merge or split one cluster at a
time
53
Agglomerative Clustering
54
Agglomerative Clustering
• More popular hierarchical clustering
technique:
• Start with the points as individual
clusters
• At each step, merge the closest pair of
clusters until only one cluster (or k
clusters) remains
• Key operation is the computation of
the proximity of two clusters
• Different approaches to defining the
distance between clusters distinguish
the different algorithms
55
Agglomerative Clustering Algorithm
• Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.   Merge the two closest clusters
5.   Update the proximity matrix
6. Until only a single cluster remains
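A minimal sketch of the basic algorithm above using MIN (single-link) proximity over a precomputed distance matrix; the function and variable names are mine, and no attempt is made at efficiency.

# Naive agglomerative clustering over a precomputed distance matrix (single-link / MIN).
import numpy as np

def agglomerative_min(D, target_clusters=1):
    # D is a symmetric (n x n) distance matrix; start with each point as its own cluster
    n = len(D)
    clusters = [[i] for i in range(n)]
    merges = []
    while len(clusters) > target_clusters:
        best = (np.inf, None, None)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # MIN proximity: smallest pairwise distance between the two clusters
                d = min(D[i][j] for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((list(clusters[a]), list(clusters[b]), d))   # record the merge
        clusters[a] = clusters[a] + clusters[b]                    # merge the two closest clusters
        del clusters[b]                                            # proximities are recomputed lazily
    return clusters, merges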
56
Starting Situation
• Start with clusters of individual points and a proximity matrix
57
Intermediate Situation
• After some merging steps, we have some clusters
58
Intermediate Situation
• We want to merge the two closest clusters (C2 and C5) and update
the proximity matrix.
59
After Merging
• The question is “How do we update the proximity matrix?”
60–64
How to Define Inter-Cluster Similarity
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an
objective function
(Figures only: one diagram per slide highlighting each of these proximity definitions.)
65
Cluster Similarity: MIN or Single Link
• Similarity of two clusters is based on the two most similar (closest)
points in the different clusters
• Determined by one pair of points, i.e., by one link in the proximity graph.
66
Hierarchical Clustering: MIN
67
Strength of MIN
• Can handle non-elliptical shapes
68
Limitations of MIN
• Sensitive to noise and outliers
69
Cluster Similarity: MAX or Complete Linkage
• Similarity of two clusters is based on the two least similar (most
distant) points in the different clusters
• Determined by all pairs of points in the two clusters
70
Hierarchical Clustering: MAX
71
Strength of MAX
• Less susceptible to noise and outliers
72
Limitations of MAX
• Tends to break large clusters
• Biased towards globular clusters
73
Cluster Similarity: Group Average
• Proximity of two clusters is the average of pairwise proximity between
points in the two clusters.
• Need to use average connectivity for scalability since total proximity
favors large clusters
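MIN, MAX, and group average differ only in how the pairwise distances between two clusters are aggregated, as the short sketch below illustrates; the two toy clusters are assumptions, not data from the slides.

# MIN, MAX and group-average proximity between two clusters of points (toy example).
import numpy as np

cluster1 = np.array([(1.0, 1.0), (1.5, 1.2)])                  # illustrative points
cluster2 = np.array([(4.0, 4.0), (4.2, 3.8), (5.0, 4.5)])

# All pairwise Euclidean distances between points of the two clusters
pairwise = np.linalg.norm(cluster1[:, None, :] - cluster2[None, :, :], axis=2)

print("MIN  (single link):  ", pairwise.min())    # distance of the closest pair
print("MAX  (complete link):", pairwise.max())    # distance of the farthest pair
print("Group average:       ", pairwise.mean())   # average of all pairwise distances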
74
Hierarchical Clustering: Group Average
75
Hierarchical Clustering: Group Average
• Compromise between Single and Complete Link
• Strengths
• Less susceptible to noise and outliers
• Limitations
• Biased towards globular clusters
76
Hierarchical Clustering: Comparison
77
Hierarchical Clustering: Problems and Limitations
• Once a decision is made to combine two clusters, it cannot be undone
• No objective function is directly minimized
• Different schemes have problems with one or more of the following:
• Sensitivity to noise and outliers
• Difficulty handling different sized clusters and convex shapes
• Breaking large clusters
78
Hierarchical Clustering(MIN): Example
• The distance between two clusters
has been defined as the minimum
distance between objects of the two
clusters.
• Iteration 1:
• We are given a 6×6 distance matrix,
computed from the features.
• In this case, the closest pair of
clusters is F and D, with the
shortest distance of 0.5
79
Hierarchical Clustering(MIN): Example
• We update the distance matrix
• Iteration 2:
• Calculate the distances from the
newly created cluster (D, F) to the
remaining clusters as follows:
• Similarly calculate all the other
distances to get an updated matrix.
• The min distance is between A and
B.
80
Hierarchical Clustering(MIN): Example
• We update the distance matrix
• Iteration 3:
• Calculate the distances from the
newly created cluster (A, B) to the
remaining clusters as follows:
• Similarly calculate all the other
distances to get an updated matrix.
• The min distance is between E and
(D, F).
81
Hierarchical Clustering(MIN): Example
• We update the distance matrix
• Iteration 4:
• Calculate the distances from the
newly created cluster (E, (D, F)) to
the remaining clusters as follows:
• Similarly calculate all the other
distances to get an updated matrix.
• The min distance is between (E,
(D, F)) and C.
82
Hierarchical Clustering(MIN): Example
• The resulting dendrogram is as follows:
• The hierarchy is given as follows:
83
Hierarchical Clustering: Practice
• Using the same distance table as in the example above, apply MAX (complete-linkage)
and group-average hierarchical clustering and compare the results.
• Hint: instead of the min function, use the max and average functions.
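For the practice exercise, SciPy's hierarchical clustering routines make the comparison easy. Since the actual distance table is shown only as a figure, the matrix below is a hypothetical one whose merge order matches the MIN example above (only the 0.5 between D and F is stated in the text); substitute the real values.

# Comparing single (MIN), complete (MAX) and average (group-average) linkage with SciPy.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

labels = ["A", "B", "C", "D", "E", "F"]
D_square = np.array([                      # hypothetical symmetric distance matrix
    [0.0, 0.7, 2.0, 3.0, 2.5, 3.2],
    [0.7, 0.0, 1.8, 2.8, 2.3, 3.0],
    [2.0, 1.8, 0.0, 1.5, 1.0, 1.7],
    [3.0, 2.8, 1.5, 0.0, 0.9, 0.5],
    [2.5, 2.3, 1.0, 0.9, 0.0, 0.8],
    [3.2, 3.0, 1.7, 0.5, 0.8, 0.0],
])

condensed = squareform(D_square)           # SciPy expects a condensed distance vector
for method in ("single", "complete", "average"):   # MIN, MAX, group average
    Z = linkage(condensed, method=method)
    plt.figure()
    plt.title(method + " linkage")
    dendrogram(Z, labels=labels)
plt.show()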
84
DBSCAN
85
DBSCAN
• DBSCAN is a density-based algorithm.
• Density = number of points within a specified radius (Eps)
• A point is a core point if it has at least a specified number of points
(MinPts) within Eps
• These are points that are in the interior of a cluster
• A border point has fewer than MinPts within Eps, but is in the neighborhood
of a core point
• A noise point is any point that is neither a core point nor a border point.
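A minimal usage sketch with scikit-learn's DBSCAN on synthetic data (an assumption, not the slides' data); note that scikit-learn's min_samples counts the point itself toward the neighborhood size.

# Minimal DBSCAN usage sketch with scikit-learn (toy non-globular data).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.06, random_state=0)

db = DBSCAN(eps=0.2, min_samples=5).fit(X)
labels = db.labels_                               # cluster id per point; -1 marks noise points
core_mask = np.zeros(len(X), dtype=bool)
core_mask[db.core_sample_indices_] = True         # True for core points, False otherwise

print("clusters found:", set(labels) - {-1})
print("noise points:", int(np.sum(labels == -1)), " core points:", int(core_mask.sum()))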
86
DBSCAN: Core, Border, and Noise Points
87
DBSCAN: How to Form Clusters?
88
DBSCAN: Core, Border and Noise Points
89
When Does DBSCAN Work Well?
• Resistant to Noise
• Can handle clusters of different shapes and sizes
90
When Does DBSCAN NOT Work Well?
• Varying densities
• High-dimensional data
91
DBSCAN: Determining EPS and MinPts
• The idea is that, for points in a cluster, their k-th nearest neighbors are at
roughly the same distance
• Noise points have their k-th nearest neighbor at a farther distance
• So, plot the sorted distance of every point to its k-th nearest neighbor
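A sketch of the resulting k-distance plot, using scikit-learn's NearestNeighbors on synthetic data (an assumption); k is typically set to MinPts, and the "knee" of the curve suggests a value for Eps.

# k-distance plot for choosing Eps (k is typically set to MinPts).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # stand-in for real data
k = 4                                                         # typically k = MinPts

nn = NearestNeighbors(n_neighbors=k + 1).fit(X)               # +1: each point is its own nearest neighbor
distances, _ = nn.kneighbors(X)
kth_dist = np.sort(distances[:, -1])                          # every point's distance to its k-th neighbor, sorted

plt.plot(kth_dist)
plt.xlabel("Points sorted by k-th nearest-neighbor distance")
plt.ylabel("Distance to k-th nearest neighbor")
plt.show()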
92
DBSCAN- Example
• Given the points A(3, 7), B(4, 6), C(5, 5), D(6, 4), E(7, 3), F(6, 2), G(7, 2)
and H(8, 4), find the core points and outliers using DBSCAN. Take Eps
= 2.5 and MinPts = 3.
• Let’s represent the given data points in tabular form:
93
DBSCAN- Example
• To find the core points, outliers and clusters using DBSCAN, we first
need to calculate the distances between all pairs of the given data points.
• Find the Euclidean distance between the points.
Distances ≤ Eps (i.e., 2.5) are marked in red.
94
DBSCAN- Example
• Now, find all the data points that lie in the Eps-neighborhood of each data
point. That is, put into the neighborhood set N(·) of each point every other
point whose distance from it is ≤ 2.5.
N(A) = {B} (only B is within 2.5 of A)
N(B) = {A, C}
N(C) = {B, D}
N(D) = {C, E, F, G, H}
N(E) = {D, F, G, H}
N(F) = {D, E, G}
N(G) = {D, E, F, H}
N(H) = {D, E, G}
95
DBSCAN- Example
• Here, data points A, B and C have fewer than MinPts (i.e., 3) neighbors,
so they cannot be core points.
• Data points D, E, F, G and H have at least MinPts (i.e., 3) neighbors and
hence are the core data points.
• C lies within Eps of the core point D, so C is a border point.
• A and B have no core point within Eps, so A and B are noise points (outliers).
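A small sketch that reproduces the neighborhood sets and the core / border / noise classification of this example, following the convention used above (at least MinPts neighbors, excluding the point itself, within Eps); the helper names are mine.

# Reproducing the DBSCAN example: Eps = 2.5, MinPts = 3.
import numpy as np

pts = {"A": (3, 7), "B": (4, 6), "C": (5, 5), "D": (6, 4),
       "E": (7, 3), "F": (6, 2), "G": (7, 2), "H": (8, 4)}
eps, min_pts = 2.5, 3

def neighbors(p):
    # All other points within Eps of p (Euclidean distance)
    return {q for q in pts if q != p and np.linalg.norm(np.subtract(pts[p], pts[q])) <= eps}

N = {p: neighbors(p) for p in pts}
core = {p for p in pts if len(N[p]) >= min_pts}                 # at least MinPts neighbors
border = {p for p in pts if p not in core and N[p] & core}      # non-core but near a core point
noise = set(pts) - core - border

print("core:  ", sorted(core))      # D, E, F, G, H
print("border:", sorted(border))    # C
print("noise: ", sorted(noise))     # A, B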
Happy Learning!