Hierarchical Clustering
Class Algorithmic Methods of Data Mining
Program M. Sc. Data Science
University Sapienza University of Rome
Semester Fall 2015
Lecturer Carlos Castillo http://chato.cl/
Sources:
● Mohammed J. Zaki, Wagner Meira, Jr., Data Mining and Analysis:
Fundamental Concepts and Algorithms, Cambridge University
Press, May 2014. Chapter 14. [download]
● Evimaria Terzi: Data Mining course at Boston University
http://www.cs.bu.edu/~evimaria/cs565-13.html
[Figure: phylogenetic tree. Source: http://www.talkorigins.org/faqs/comdesc/phylo.html]
[Figure: phylogenetic tree of artiodactyls. Source: http://www.chegg.com/homework-help/questions-and-answers/part-phylogenetic-tree-shown-figure-b-sines-10-12-13-first-insert-genomes-artiodactyls-phy-q4932026]
Hierarchical Clustering
• Produces a set of nested clusters organized as
a hierarchical tree
• Can be visualized as a dendrogram
– A tree-like diagram that records the sequences of
merges or splits
Strengths of Hierarchical Clustering
• No assumptions on the number of clusters
– Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
• Hierarchical clusterings may correspond to meaningful taxonomies
– Examples: the biological sciences (e.g., phylogeny reconstruction) and the web (e.g., product catalogs)
Hierarchical Clustering
Algorithms
• Two main types of hierarchical clustering
– Agglomerative:
• Start with the points as individual clusters
• At each step, merge the closest pair of clusters, until only one cluster (or k clusters) remains
– Divisive:
• Start with one, all-inclusive cluster
• At each step, split a cluster, until each cluster contains a single point (or there are k clusters)
• Traditional hierarchical algorithms use a similarity or
distance matrix
– Merge or split one cluster at a time
Complexity of hierarchical clustering
• Distance matrix is used for deciding which
clusters to merge/split
• At least quadratic in the number of data points
• Not usable for large datasets
Agglomerative clustering algorithm
• Most popular hierarchical clustering technique
• Basic algorithm:
Compute the distance matrix between the input data points
Let each data point be a cluster
Repeat
Merge the two closest clusters
Update the distance matrix
Until only a single cluster remains
The key operation is the computation of the distance between two clusters.
Different definitions of this distance lead to different algorithms.
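As a concrete illustration, here is a minimal Python sketch of the basic algorithm above, using single link (minimum pairwise distance) as the cluster distance; the function and variable names are ours, not from any particular library:

import numpy as np

def agglomerative(points, num_clusters=1):
    # Naive O(n^3) agglomerative clustering with single-link distance.
    n = len(points)
    # Compute the distance matrix between the input data points.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Let each data point be a cluster.
    clusters = [{i} for i in range(n)]
    while len(clusters) > num_clusters:
        # Find the two closest clusters (single link: minimum pairwise distance).
        best = (0, 1, float("inf"))
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        clusters[a] |= clusters[b]   # merge the two closest clusters
        del clusters[b]
    return clusters

pts = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
print(agglomerative(pts, num_clusters=2))   # [{0, 1}, {2, 3}]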
Input / Initial setting
• Start with clusters of individual points
and a distance/proximity matrix
[Figure: points p1–p5 and the initial distance/proximity matrix, with one row and column per point]
Intermediate State
• After some merging steps, we have some clusters
[Figure: clusters C1–C5 after some merges, and the distance/proximity matrix with one row and column per cluster]
Intermediate State
• Merge the two closest clusters (C2 and C5) and update the
distance matrix.
[Figure: clusters C1–C5 and the distance/proximity matrix, with the closest pair C2 and C5 highlighted]
After Merging
• “How do we update the distance matrix?”
[Figure: after merging C2 and C5 into C2 U C5, the matrix row and column for the new cluster (marked ‘?’) must be recomputed]
Distance between two clusters
• Each cluster is a set of points
• How do we define the distance between two sets of points?
– Lots of alternatives
– Not an easy task
Distance between two clusters
• Single-link distance between clusters Ci and
Cj is the minimum distance between any
object in Ci and any object in Cj
• The distance is defined by the two most
similar objects
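In symbols:
d_SL(Ci, Cj) = min { d(x, y) : x ∈ Ci, y ∈ Cj }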
Single-link clustering: example
• Determined by one pair of points, i.e., by one
link in the proximity graph.
[Figure: five example points and the order in which single link merges them]
Single-link clustering: example
[Figure: nested single-link clusters over six points (left) and the corresponding dendrogram (right)]
Exercise: 1-dimensional clustering
5 11 13 16 25 36 38 39 42 60 62 64 67
Exercise:
Create a hierarchical agglomerative clustering of this data.
To make the process deterministic, if there are ties, pick the left-most link.
Verify that the clustering with 4 clusters has 25 as a singleton.
http://chato.cl/2015/data-analysis/exercise-answers/hierarchical-clustering_exercise_01_answer.txt
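One way to check an answer is with SciPy (a sketch, assuming single link; note that SciPy applies its own tie-breaking, which may differ from the left-most rule above):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

data = np.array([5, 11, 13, 16, 25, 36, 38, 39, 42, 60, 62, 64, 67], dtype=float)
Z = linkage(data.reshape(-1, 1), method='single')  # agglomerative clustering of 1-d points
labels = fcluster(Z, t=4, criterion='maxclust')    # cut the dendrogram into 4 clusters
print(labels)  # 25 ends up alone in its cluster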
Strengths of single-link clustering
[Figure: original points (left) and the two clusters found (right)]
• Can handle non-elliptical shapes
Limitations of single-link clustering
[Figure: original points (left) and the two clusters found (right)]
• Sensitive to noise and outliers
• Produces long, elongated clusters (the ‘chaining’ effect)
Distance between two clusters
• Complete-link distance between clusters Ci
and Cj is the maximum distance between any
object in Ci and any object in Cj
• The distance is defined by the two most
dissimilar objects
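In symbols:
d_CL(Ci, Cj) = max { d(x, y) : x ∈ Ci, y ∈ Cj }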
Complete-link clustering: example
• Distance between clusters is determined by the
two most distant points in the different clusters
[Figure: five example points and the order in which complete link merges them]
Complete-link clustering: example
[Figure: nested complete-link clusters over six points (left) and the corresponding dendrogram (right)]
Strengths of complete-link clustering
[Figure: original points (left) and the two clusters found (right)]
• More balanced clusters (with roughly equal diameters)
• Less susceptible to noise
Limitations of complete-link clustering
[Figure: original points (left) and the two clusters found (right)]
• Tends to break large clusters
• All clusters tend to have the same diameter – small
clusters are merged with larger ones
Distance between two clusters
• Group average distance between clusters Ci and Cj is the average distance over all pairs with one object in Ci and one object in Cj
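In symbols:
d_GA(Ci, Cj) = (1 / (|Ci| · |Cj|)) · Σ_{x ∈ Ci} Σ_{y ∈ Cj} d(x, y)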
Average-link clustering: example
• Proximity of two clusters is the average of
pairwise proximity between points in the two
clusters.
[Figure: five example points and the order in which average link merges them]
Average-link clustering: example
[Figure: nested average-link clusters over six points (left) and the corresponding dendrogram (right)]
Average-link clustering: discussion
• Compromise between Single and
Complete Link
• Strengths
– Less susceptible to noise and outliers
• Limitations
– Biased towards globular clusters
Distance between two clusters
• Centroid distance between clusters Ci and Cj is
the distance between the centroid ri of Ci and the
centroid rj of Cj
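In symbols:
d_centroid(Ci, Cj) = d(ri, rj), where ri = (1 / |Ci|) Σ_{x ∈ Ci} x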
Distance between two clusters
• Ward’s distance between clusters Ci and Cj is the
difference between the total within cluster sum of
squares for the two clusters separately, and the
within cluster sum of squares resulting from
merging the two clusters in cluster Cij
• ri: centroid of Ci
• rj: centroid of Cj
• rij: centroid of Cij
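In symbols, with SSE(C) = Σ_{x ∈ C} ||x − r||² for a cluster C with centroid r (the closed form on the right is a standard identity):
d_W(Ci, Cj) = SSE(Cij) − SSE(Ci) − SSE(Cj) = (|Ci| · |Cj| / (|Ci| + |Cj|)) · ||ri − rj||²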
Ward’s distance for clusters
• Similar to group average and centroid distance
• Less susceptible to noise and outliers
• Biased towards globular clusters
• Hierarchical analogue of k-means
– Can be used to initialize k-means
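For example, a sketch with scikit-learn (not part of these slides; the class and parameter names are scikit-learn's, the rest are our assumptions): run Ward clustering first, then seed k-means with the Ward clusters' centroids.

import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

X = np.random.rand(200, 2)
k = 3
ward = AgglomerativeClustering(n_clusters=k, linkage='ward').fit(X)
# Use the centroids of the Ward clusters as the initial centers for k-means.
centers = np.array([X[ward.labels_ == c].mean(axis=0) for c in range(k)])
km = KMeans(n_clusters=k, init=centers, n_init=1).fit(X)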
Hierarchical Clustering: Comparison
[Figure: the same six points clustered by MIN (single link), MAX (complete link), Group Average, and Ward’s Method, showing the nested clusters each produces]
Hierarchical Clustering: Time and Space Requirements
• For a dataset X consisting of n points
• O(n²) space: the distance matrix must be stored
• O(n³) time in most cases
– There are O(n) merge steps, and at each step the n × n distance matrix must be updated and searched
– Complexity can be reduced to O(n² log n) time for some approaches by using appropriate data structures
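For single link specifically, one such approach keeps all pairwise distances in a heap and uses union-find to skip stale pairs (essentially Kruskal's minimum-spanning-tree algorithm), giving O(n² log n) time; the sketch below illustrates the idea and is not the only possible data structure:

import heapq
import numpy as np

def single_link_merges(points):
    # Return the merges (distance, i, j) in the order single link performs them.
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    heap = [(dist[i, j], i, j) for i in range(n) for j in range(i + 1, n)]
    heapq.heapify(heap)              # O(n^2) entries; each pop costs O(log n)
    parent = list(range(n))          # union-find over cluster representatives

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    merges = []
    while heap and len(merges) < n - 1:
        d, i, j = heapq.heappop(heap)
        ri, rj = find(i), find(j)
        if ri != rj:                 # pairs already in one cluster are stale: discard
            parent[rj] = ri
            merges.append((d, i, j))
    return merges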