2. 2
Unsupervised Machine Learning
• Unsupervised machine learning uses machine learning algorithms to
analyze and cluster unlabeled datasets.
• These algorithms discover hidden patterns or data groupings without
the need for human intervention.
4. 4
What is Cluster Analysis?
• A grouping of data objects such that the objects within a group are
similar (or related) to one another and different from (or unrelated to)
the objects in other groups.
5. 5
What is not Cluster Analysis?
• Supervised classification
• Uses class label information.
• Simple segmentation
• Dividing students into different registration groups alphabetically, by last name.
• Result of a query
• Groupings are the result of an external specification.
• Graph partitioning
• Some mutual relevance and synergy, but the areas are not identical.
6. 6
Applications of Clustering
• Image Processing
• Cluster images based on their visual content
• Web
• Cluster groups of users based on their access patterns on webpages
• Cluster webpages based on their content
• Bioinformatics
• Cluster similar proteins together (similarity w.r.t. chemical structure and/or
functionality, etc.)
• Many more…
7. 7
Types of Clustering
• Partitional Clustering
• A division of data objects into non-overlapping subsets (clusters) such that each
data object is in exactly one subset.
• Hierarchical Clustering
• A set of nested clusters organized as a hierarchical tree
10. 10
Other Distinctions between sets of Clusters
• Exclusive versus non-exclusive
• In non-exclusive clustering, points may belong to multiple clusters.
• Can represent multiple classes or ‘border’ points
• Fuzzy versus non-fuzzy
• In fuzzy clustering, a point belongs to every cluster with some weight between 0
and 1
• Weights must sum to 1
• Probabilistic clustering has similar characteristics
• Partial versus complete
• In some cases, we only want to cluster some of the data
• Heterogeneous versus homogeneous
• Clusters of widely different sizes, shapes, and densities
11. 11
Types of Clusters
• Well-separated clusters
• Center-based clusters
• Contiguous clusters
• Density-based clusters
• Property or Conceptual
• Described by an Objective Function
12. 12
Well Separated Clusters
• A cluster is a set of points such that any point in a cluster is closer (or
more similar) to every other point in the cluster than to any point not
in the cluster.
(Figure: three well-separated clusters)
13. 13
Center Based Clusters
• A cluster is a set of objects such that an object in a cluster is closer
(more similar) to the “center” of its own cluster than to the center of any
other cluster
• The center of a cluster is often a centroid, the average of all the
points in the cluster, or a medoid, the most “representative” point of
a cluster
14. 14
Contiguous Clusters (Nearest neighbor or Transitive)
• A cluster is a set of points such that a point in a cluster is closer (or
more similar) to one or more other points in the cluster than to any
point not in the cluster.
15. 15
Density based Clusters
• A cluster is a dense region of points that is separated from other
regions of high density by low-density regions.
• Used when the clusters are irregular or intertwined, and when noise
and outliers are present.
16. 16
Shared Property or Conceptual Clusters
• Finds clusters that share some common property or represent a
particular concept.
(Figure: two overlapping circles)
17. 17
Clusters defined by an Objective Function
• Finds clusters that minimize or maximize an objective function.
• Enumerate all possible ways of dividing the points into clusters and evaluate the
‘goodness’ of each potential set of clusters by using the given objective function
(NP-hard).
• Can have global or local objectives.
• Hierarchical clustering algorithms typically have local objectives
• Partitional algorithms typically have global objectives
• A variation of the global objective function approach is to fit the data to a
parameterized model.
• Parameters for the model are determined from the data.
• Mixture models assume that the data is a ‘mixture’ of a number of
statistical distributions.
20. 20
K-Means Clustering
• Partitional clustering approach
• Each cluster is associated with a centroid (center point)
• Each point is assigned to the cluster with the closest centroid
• Number of clusters, K, must be specified
• The basic algorithm is very simple
21. 21
K-Means Clustering Algorithm
Input:
• K (number of clusters)
• Training set {x^(1), x^(2), …, x^(m)}
Notation:
• c^(i) = index of the cluster (1, 2, …, K) to which example x^(i) is currently
assigned
• μ_k = centroid of cluster k (μ_k ∈ R^n)
• μ_c^(i) = centroid of the cluster to which example x^(i) has been
assigned
22. 22
K-Means Clustering Algorithm
K-means algorithm:
Randomly initialize K cluster centroids μ_1, μ_2, …, μ_K
Repeat {
  for i = 1 to m
    c^(i) := index (from 1 to K) of the cluster centroid closest to x^(i)
  for k = 1 to K
    μ_k := average (mean) of the points assigned to cluster k
}
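A minimal NumPy sketch of this algorithm (illustrative names, not code from the slides): X is an (m, n) array of examples and K the number of clusters.

```python
import numpy as np

def kmeans(X, K, n_iters=10, seed=0):
    """Basic K-means: assign each point to the closest centroid, then recompute centroids."""
    rng = np.random.default_rng(seed)
    # Randomly initialize the K centroids by picking K distinct data points.
    centroids = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    for _ in range(n_iters):
        # Assignment step: c[i] = index of the centroid closest to X[i].
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        c = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of the points assigned to it.
        for k in range(K):
            if np.any(c == k):          # keep the old centroid if the cluster is empty
                centroids[k] = X[c == k].mean(axis=0)
    return c, centroids
```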
32. 32
K-Means Clustering Example
• Apply the K(=2)-Means algorithm to the data (185, 72), (170, 56), (168, 60),
(179, 68), (182, 72), (188, 77) for two iterations and show the clusters.
Initially choose the first two objects as the initial centroids.
• Given: number of clusters to be created (K) = 2, say c1 and c2
• Number of iterations = 2
• First two objects as the initial centroids:
Centroid for the first cluster c1 = (185, 72)
Centroid for the second cluster c2 = (170, 56)
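The two requested iterations can be reproduced with a short script (a sketch using the given data and the fixed initial centroids, so no random initialization):

```python
import numpy as np

X = np.array([[185, 72], [170, 56], [168, 60],
              [179, 68], [182, 72], [188, 77]], dtype=float)
centroids = X[:2].copy()                  # c1 = (185, 72), c2 = (170, 56)

for it in range(2):                       # two iterations, as asked in the example
    # Assign every point to its nearest centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Recompute each centroid as the mean of its assigned points.
    for k in range(2):
        centroids[k] = X[labels == k].mean(axis=0)
    print(f"iteration {it + 1}: labels = {labels}, centroids =\n{centroids}")
```

For this data, the first iteration assigns (185, 72), (179, 68), (182, 72) and (188, 77) to c1 and (170, 56), (168, 60) to c2, moving the centroids to (183.5, 72.25) and (169, 58); the second iteration leaves the assignments unchanged.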
37. 37
What is the right value of K?
• Elbow Method:
• A graphical method for finding the optimal K value in a K-means
clustering algorithm.
• Plot the SSE for a range of K values; the value of K where the curve
bends sharply (the “elbow”) is a good choice (see the sketch below).
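A sketch of the elbow plot using scikit-learn (assumed dependency): run KMeans for several K and plot the SSE it reports as inertia_, here on the small data set from the earlier example.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

X = np.array([[185, 72], [170, 56], [168, 60],
              [179, 68], [182, 72], [188, 77]], dtype=float)

ks = range(1, 6)
sse = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

plt.plot(list(ks), sse, marker="o")       # look for the "elbow" in this curve
plt.xlabel("K (number of clusters)")
plt.ylabel("SSE (inertia)")
plt.title("Elbow method")
plt.show()
```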
38. 38
Evaluating K-Means Clusters
• The most common measure is the Sum of Squared Error (SSE)
• For each point, the error is the distance to the nearest cluster centroid
• To get the SSE, we square these errors and sum them:
SSE = Σ_{i=1}^{K} Σ_{x ∈ C_i} dist(m_i, x)^2
• x is a data point in cluster C_i and m_i is the representative point for cluster C_i
• It can be shown that m_i corresponds to the center (mean) of the cluster
• Given two clusterings, we can choose the one with the smallest error
• One easy way to reduce the SSE is to increase K, the number of clusters
• A good clustering with a smaller K can have a lower SSE than a poor clustering with a higher K
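A small helper matching the definition above (a sketch with illustrative names; labels is an array of cluster indices, centroids the array of cluster means):

```python
import numpy as np

def sse(X, labels, centroids):
    """Sum of squared distances of each point to the centroid of its own cluster."""
    total = 0.0
    for k, m_k in enumerate(centroids):
        points = X[labels == k]              # points in cluster C_k
        total += np.sum((points - m_k) ** 2) # squared Euclidean distances, summed
    return total
```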
39. 39
Updating Centers
• In the basic K-means algorithm, centroids are updated after all points
are assigned to a centroid.
• An alternative is to update the centroids after each assignment
(incremental approach)
• Each assignment updates zero or two centroids
• More expensive
• Introduces an order dependency
• Never get an empty cluster
• Can use “weights” to change the impact
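A sketch of the incremental variant described above (assumed helper names): when a point moves from one cluster to another, only those two centroids change, updated as running means.

```python
import numpy as np

def move_point(x, old_k, new_k, centroids, counts):
    """Reassign point x from cluster old_k to new_k, updating only the two affected centroids."""
    if old_k == new_k:
        return                                        # zero centroids change
    # Remove x from its old cluster's running mean.
    counts[old_k] -= 1
    if counts[old_k] > 0:
        centroids[old_k] += (centroids[old_k] - x) / counts[old_k]
    # Add x to the new cluster's running mean.
    counts[new_k] += 1
    centroids[new_k] += (x - centroids[new_k]) / counts[new_k]
```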
41. 41
Limitations
• K-means has problems when clusters are of differing
• Sizes
• Densities
• Non-globular shapes
• K-means has problems when the data contains outliers
50. 50
What is Hierarchical Clustering?
• Produces a set of nested clusters organized as a hierarchical tree.
• Can be visualized as a dendrogram
• A tree-like diagram that records the sequences of merges or splits
51. 51
Strengths of Hierarchical Clustering
• Do not have to assume any
particular number of clusters
• Any desired number of clusters
can be obtained by ‘cutting’ the
dendrogram at the proper level (see the sketch below)
• They may correspond to
meaningful taxonomies
• Example in biological sciences
(e.g., animal kingdom, phylogeny
reconstruction, …)
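A sketch of “cutting” a dendrogram with SciPy (assumed dependency; the data and linkage method are illustrative only):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

X = np.array([[185, 72], [170, 56], [168, 60],
              [179, 68], [182, 72], [188, 77]], dtype=float)

Z = linkage(X, method="average")                 # build the hierarchy of nested clusters
labels = fcluster(Z, t=2, criterion="maxclust")  # "cut" the tree to obtain 2 clusters
print(labels)

dendrogram(Z)                                    # visualize the sequence of merges
plt.show()
```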
52. 52
Hierarchical Clustering
• Two main types of hierarchical
clustering
• Agglomerative
• Divisive
• Traditional hierarchical
algorithms use a similarity or
distance matrix
• Merge or split one cluster at a
time
54. 54
Agglomerative Clustering
• The more popular hierarchical clustering
technique:
• Start with the points as individual
clusters
• At each step, merge the closest pair of
clusters until only one cluster (or k
clusters) left
• Key operation is the computation of
the proximity of two clusters
• Different approaches to defining the
distance between clusters distinguish
the different algorithms
55. 55
Agglomerative Clustering Algorithm
• Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.   Merge the two closest clusters
5.   Update the proximity matrix
6. Until only a single cluster remains
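A from-scratch sketch of these six steps using single-link (MIN) proximity; it is written for clarity rather than efficiency, and all names are illustrative.

```python
import numpy as np

def agglomerative_min(X):
    """Agglomerative clustering with single-link (MIN) proximity; returns the merge history."""
    # Step 1: compute the proximity (distance) matrix.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Step 2: let each data point be its own cluster.
    clusters = [[i] for i in range(len(X))]
    merges = []
    # Steps 3-6: repeat until only a single cluster remains.
    while len(clusters) > 1:
        # Find the two closest clusters under MIN linkage.
        best = (None, None, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, d = best
        merges.append((clusters[a], clusters[b], d))
        # Merge the two closest clusters; MIN proximities are recomputed from the
        # original matrix on each pass, which plays the role of "updating" it.
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges
```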
60. 60
How to Define Inter-Cluster Similarity
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an
objective function
65. 65
Cluster Similarity: MIN or Single Link
• Similarity of two clusters is based on the two most similar (closest)
points in the different clusters
• Determined by one pair of points, i.e., by one link in the proximity graph.
69. 69
Cluster Similarity: MAX or Complete Linkage
• Similarity of two clusters is based on the two least similar (most
distant) points in the different clusters
• Determined by all pairs of points in the two clusters
73. 73
Cluster Similarity: Group Average
• Proximity of two clusters is the average of pairwise proximity between
points in the two clusters.
• Need to use average connectivity for scalability since total proximity
favors large clusters
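The three cluster-proximity definitions above written as small functions (a sketch; ci and cj are lists of point indices into a precomputed distance matrix D):

```python
def min_link(D, ci, cj):
    # MIN / single link: distance of the closest pair across the two clusters.
    return min(D[i, j] for i in ci for j in cj)

def max_link(D, ci, cj):
    # MAX / complete link: distance of the farthest pair across the two clusters.
    return max(D[i, j] for i in ci for j in cj)

def group_average(D, ci, cj):
    # Group average: mean of all pairwise distances between the two clusters.
    return sum(D[i, j] for i in ci for j in cj) / (len(ci) * len(cj))
```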
75. 75
Hierarchical Clustering: Group Average
• Compromise between Single and Complete Link
• Strengths
• Less susceptible to noise and outliers
• Limitations
• Biased towards globular clusters
77. 77
Hierarchical Clustering: Problems and Limitations
• Once a decision is made to combine two clusters, it cannot be undone
• No objective function is directly minimized
• Different schemes have problems with one or more of the following:
• Sensitivity to noise and outliers
• Difficulty handling different sized clusters and convex shapes
• Breaking large clusters
78. 78
Hierarchical Clustering (MIN): Example
• The distance between two clusters is defined as the minimum
distance between objects of the two clusters.
• Iteration 1:
• We are given a distance matrix of size 6x6. The distances are
calculated from the features.
• The closest pair of clusters is D and F, with the shortest distance of 0.5.
79. 79
Hierarchical Clustering (MIN): Example
• We update the distance matrix.
• Iteration 2:
• Calculate the distance from the newly created cluster (D, F) to every
other cluster X as dist((D, F), X) = min(dist(D, X), dist(F, X)).
• Similarly calculate all the other distances to get an updated matrix.
• The minimum distance is now between A and B.
80. 80
Hierarchical Clustering (MIN): Example
• We update the distance matrix.
• Iteration 3:
• Calculate the distance from the newly created cluster (A, B) in the
same way: dist((A, B), X) = min(dist(A, X), dist(B, X)).
• Similarly calculate all the other distances to get an updated matrix.
• The minimum distance is now between E and (D, F).
81. 81
Hierarchical Clustering (MIN): Example
• We update the distance matrix.
• Iteration 4:
• Calculate the distance from the newly created cluster (E, (D, F)) in
the same way.
• Similarly calculate all the other distances to get an updated matrix.
• The minimum distance is now between (E, (D, F)) and C.
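These iterations can also be reproduced with SciPy's single-linkage routine. The actual 6x6 distance matrix appears only as an image in the slides, so the matrix below is a made-up placeholder chosen only so that the merge order matches the narration (D-F at 0.5, then A-B, then E with (D, F), then C); replace it with the real values.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage

labels = ["A", "B", "C", "D", "E", "F"]
# Placeholder distance matrix (illustrative values only, not the ones from the slide).
D = np.array([
    [0.0, 0.7, 3.0, 3.5, 3.2, 3.6],
    [0.7, 0.0, 2.8, 3.3, 3.1, 3.4],
    [3.0, 2.8, 0.0, 1.2, 1.5, 1.4],
    [3.5, 3.3, 1.2, 0.0, 1.0, 0.5],
    [3.2, 3.1, 1.5, 1.0, 0.0, 1.1],
    [3.6, 3.4, 1.4, 0.5, 1.1, 0.0],
])

Z = linkage(squareform(D), method="single")  # single link = MIN
for row in Z:                                # each row: the two merged clusters and their distance
    print(row)
```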
83. 83
Hierarchical Clustering: Practice
• Using the same distance matrix as in the above example, apply MAX and group-
average hierarchical clustering and compare the results.
• Hint: instead of the min function, use the max and average functions.
85. 85
DBSCAN
• DBSCAN is a density-based algorithm.
• Density = number of points within a specified radius (Eps)
• A point is a core point if it has more than a specified number of points
(MinPts) within Eps
• These are points in the interior of a cluster
• A border point has fewer than MinPts within Eps, but is in the neighborhood
of a core point
• A noise point is any point that is not a core point or a border point.
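A minimal sketch that labels points as core, border, or noise following these definitions (names are illustrative; Eps-neighborhoods here exclude the point itself, and the core test uses "at least MinPts neighbors", the convention followed by the worked example later in the deck).

```python
import numpy as np

def dbscan_point_types(X, eps, min_pts):
    """Return 'core', 'border', or 'noise' for every point in X."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Eps-neighborhood of each point, excluding the point itself.
    neighbors = [{int(j) for j in np.where((D[i] <= eps) & (np.arange(len(X)) != i))[0]}
                 for i in range(len(X))]
    core = {i for i, nb in enumerate(neighbors) if len(nb) >= min_pts}
    types = []
    for i in range(len(X)):
        if i in core:
            types.append("core")
        elif neighbors[i] & core:      # non-core, but within Eps of some core point
            types.append("border")
        else:
            types.append("noise")
    return types
```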
89. 89
When DBSCAN Works Well?
• Resistant to Noise
• Can handle clusters of different shapes and sizes
90. 90
When DBSCAN Does NOT Work Well?
• Varying densities
• High-dimensional data
91. 91
DBSCAN: Determining EPS and MinPts
• The idea is that, for points in a cluster, their kth nearest neighbors are at
roughly the same distance
• Noise points have their kth nearest neighbor at a farther distance
• So, plot the sorted distance of every point to its kth nearest neighbor (see the sketch below)
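A sketch of this k-distance plot using scikit-learn (assumed dependency; X is the data array and k is typically set to MinPts): sort every point's distance to its k-th nearest neighbor and look for the knee, which suggests a value for Eps.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors

def k_distance_plot(X, k):
    # k + 1 neighbors because the nearest neighbor of a point is the point itself.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)
    kth = np.sort(dists[:, -1])        # each point's distance to its k-th nearest neighbor
    plt.plot(kth)
    plt.xlabel("Points sorted by k-th nearest neighbor distance")
    plt.ylabel(f"{k}-th nearest neighbor distance")
    plt.show()
```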
92. 92
DBSCAN: Example
• Given the points A(3, 7), B(4, 6), C(5, 5), D(6, 4), E(7, 3), F(6, 2), G(7, 2)
and H(8, 4), find the core points and outliers using DBSCAN. Take Eps
= 2.5 and MinPts = 3.
• The given data points in tabular form:
Point  x  y
A      3  7
B      4  6
C      5  5
D      6  4
E      7  3
F      6  2
G      7  2
H      8  4
93. 93
DBSCAN: Example
• To find the core points, outliers and clusters using DBSCAN, we first
need to calculate the distance between every pair of the given data points.
• Find the Euclidean distance between the points. In the resulting distance
matrix, distances ≤ Eps (i.e. 2.5) are marked in red.
94. 94
DBSCAN: Example
• Now, find all the data points that lie in the Eps-neighborhood of each data
point, i.e. put into each point's neighborhood set every point whose distance
to it is ≤ 2.5.
N(A) = {B}              (only B is within 2.5 of A)
N(B) = {A, C}           (A and C are within 2.5 of B)
N(C) = {B, D}           (B and D are within 2.5 of C)
N(D) = {C, E, F, G, H}  (C, E, F, G and H are within 2.5 of D)
N(E) = {D, F, G, H}     (D, F, G and H are within 2.5 of E)
N(F) = {D, E, G}        (D, E and G are within 2.5 of F)
N(G) = {D, E, F, H}     (D, E, F and H are within 2.5 of G)
N(H) = {D, E, G}        (D, E and G are within 2.5 of H)
95. 95
DBSCAN: Example
• Data points A, B and C have fewer than MinPts (i.e. 3) neighbors within
Eps, so they cannot be core points.
• Data points D, E, F, G and H have at least MinPts (i.e. 3) neighbors within
Eps and hence are the core data points.
• C lies in the Eps-neighborhood of the core point D, so it is a border point.
A and B are not in the Eps-neighborhood of any core point, so by the
definitions above they are noise points (outliers). The sketch below checks this.
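A self-contained sketch that reproduces the example with the given coordinates, Eps = 2.5 and MinPts = 3 (neighborhoods exclude the point itself, matching the neighborhood sets listed above):

```python
import numpy as np

points = {"A": (3, 7), "B": (4, 6), "C": (5, 5), "D": (6, 4),
          "E": (7, 3), "F": (6, 2), "G": (7, 2), "H": (8, 4)}
eps, min_pts = 2.5, 3

names = list(points)
X = np.array([points[n] for n in names], dtype=float)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)

# Eps-neighborhoods, excluding the point itself.
nbrs = {n: {m for j, m in enumerate(names) if j != i and D[i, j] <= eps}
        for i, n in enumerate(names)}
core = {n for n in names if len(nbrs[n]) >= min_pts}
border = {n for n in names if n not in core and nbrs[n] & core}
noise = set(names) - core - border

print("core:", sorted(core))      # D, E, F, G, H
print("border:", sorted(border))  # C
print("noise:", sorted(noise))    # A, B
```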