Large Scale Data Clustering
Algorithms
Vahid Mirjalili
Data Scientist
Feb 11th 2016
Outline
1. Overview of clustering algorithms and validation
2. Fast and accurate k-means clustering for large datasets
3. Clustering based on landmark points
4. Spectral relaxation for k-means clustering
5. Proposed methods for microbial community detection
2
Part 1:
Overview of
Data Clustering Algorithms
Jain, Anil K. "Data clustering: 50 years beyond K-means." Pattern Recognition Letters 31.8 (2010): 651-666.
3
Data clustering
Goal: discover natural groupings among given data points
Unsupervised learning (unlabeled data)
Exploratory analysis (without any pre-specified model/hypothesis)
Uses:
Gain insight into the underlying structure of the data (salient features, anomaly detection, etc.)
Identify the degree of similarity between points (e.g., inferring phylogenetic relationships)
Data compression (summarizing data by cluster prototypes, removing redundant patterns)
4
Applications
Wide range of applications: computer vision, document clustering, gene clustering,
customer/product segmentation
An example application in computer vision: image
segmentation and background separation
5
Different clustering algorithms
The literature contains over 1,000 clustering algorithms
Different criteria divide clustering algorithms:
• Soft vs. hard clustering
• Prototype-based vs. density-based
• Partitional vs. hierarchical clustering
6
Partitional vs. Hierarchical
1. Partitional algorithms (e.g., k-means)
Partition the data space
Find all clusters simultaneously
2. Hierarchical algorithms
Generate nested cluster hierarchy
Agglomerative (bottom-up)
Divisive (top-down)
Distance between clusters: single-linkage, complete-linkage, average-linkage
7
K-means clustering
Objective function: $J(C) = \sum_{i=1}^{K} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2$, where $\mu_i$ is the centroid of cluster $C_i$
1. Select K initial cluster centroids
2. Assign each data point to the nearest cluster centroid
3. Update the cluster centroids; repeat steps 2-3 until convergence
Figure courtesy: Data clustering: 50 years beyond k-means, Anil Jain
8
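A minimal NumPy sketch of the three steps above (Lloyd's iterations); the function name and the convergence test are illustrative additions, not from the slides:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid updates."""
    rng = np.random.default_rng(seed)
    # Step 1: select K initial centroids (random data points)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign each point to the nearest centroid
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Step 3: update centroids as cluster means (keep empty clusters fixed)
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):   # converged
            break
        centroids = new
    return labels, centroids
```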
K-means pros and cons
+ Simple and easy to implement
+ Linear complexity: O(N × k × iterations)
- Results highly dependent on initialization
- Prone to local minima
- Sensitive to outliers and to differing cluster sizes
- Finds only globular (convex) clusters
- Requires multiple passes over the data
- Not applicable to categorical data
[Figure: k-means failure modes: local minima, non-globular clusters, outliers]
9
K-means extensions
K-means++: improves the initialization process
X-means: finds the optimal number of clusters without prior knowledge
Kernel K-means: forms arbitrary / non-globular cluster shapes
Fuzzy c-means: multiple cluster assignments (membership degrees)
K-medians: more robust to outliers (median of each feature)
K-medoids: more robust to outliers; supports different distance metrics and categorical data
Bisecting K-means, and many more ...
10
Kernel K-means vs. K-means
Pyclust: an open-source data clustering package
11
Bisecting K-means
12
Pyclust: an open-source data clustering package
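Bisecting k-means is easy to express on top of an ordinary 2-means routine. A hedged sketch using scikit-learn's KMeans, splitting the cluster with the largest within-cluster sum of squares (one common criterion; Pyclust's exact implementation may differ):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def bisecting_kmeans(X, k, seed=0):
    """Repeatedly split the cluster with the largest within-cluster SSE."""
    labels = np.zeros(len(X), dtype=int)
    while labels.max() + 1 < k:
        # pick the cluster with the largest sum of squared errors
        sse = [((X[labels == c] - X[labels == c].mean(0)) ** 2).sum()
               for c in range(labels.max() + 1)]
        target = int(np.argmax(sse))
        idx = np.where(labels == target)[0]
        halves = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X[idx])
        labels[idx[halves == 1]] = labels.max() + 1   # new id for one half
    return labels

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)
print(np.bincount(bisecting_kmeans(X, k=4)))          # cluster sizes
```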
Other approaches in data clustering
Prototype-based methods
• Clusters are formed based on similarity to a prototype
• K-means, k-medians, k-medoids, …
Density-based methods (clusters are high-density regions
separated by low-density regions)
• Jarvis-Patrick algorithm: similarity between patterns defined
as the number of common neighbors
• DBSCAN (MinPts, ε-neighborhood)
Identifies point types: core, border, and noise points
13
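The two DBSCAN hyper-parameters above map directly onto scikit-learn's API (eps is the ε-neighborhood radius, min_samples plays the role of MinPts); the two-moons data is just a toy example:

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)   # -1 marks noise
print("clusters:", len(set(labels) - {-1}), "| noise points:", (labels == -1).sum())
```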
DBSCAN pros and cons
+ No need to know the number of clusters a priori
+ Identifies arbitrarily shaped clusters
+ Robust to noise and outliers
- Sensitive to its parameters
- Struggles with high-dimensional data
(subspace clustering)
14
Clustering Validation
1. Internal Validity Indexes
• Assessing clustering quality based on how the data itself fits the clustering structure
• Silhouette
• Stability and admissibility analyses
- Test the sensitivity of an algorithm to changes in the data while keeping the structure intact
- Convex admissibility, cluster proportion, omission, and monotone admissibility
15
$$S(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}}$$
$a(i)$: average dissimilarity of point $i$ to the other points in its own cluster
$b(i)$: smallest average dissimilarity of point $i$ to the points of any other cluster
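In practice the silhouette is available off the shelf; an illustrative scikit-learn example on synthetic blobs:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, silhouette_samples

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("mean S(i):", silhouette_score(X, labels))           # average silhouette
print("worst S(i):", silhouette_samples(X, labels).min())  # most ambiguous point
```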
Clustering Validation
2. Relative Indexes
Assessing how similar two clustering solutions are
3. External Indexes
Comparing a clustering solution with ground-truth labels/clusters
Purity, Rand Index (RI), Normalized Mutual Information, Fβ-score, MCC, …
16
$$\mathrm{Pr} = \frac{TP}{TP + FP} \qquad \mathrm{Re} = \frac{TP}{TP + FN}$$
$$F_\beta = \frac{(\beta^2 + 1)\, \mathrm{Pr} \cdot \mathrm{Re}}{\beta^2\, \mathrm{Pr} + \mathrm{Re}}$$
$$\mathrm{Purity}(\Omega, C) = \frac{1}{N} \sum_{k} \max_{j} \lvert \omega_k \cap c_j \rvert$$
Purity is not a reliable
measure by itself
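A direct transcription of the purity formula above (a hypothetical helper; assumes integer-coded labels):

```python
import numpy as np

def purity(labels_pred, labels_true):
    """Purity: fraction of points covered by each cluster's majority label."""
    labels_pred = np.asarray(labels_pred)
    labels_true = np.asarray(labels_true)
    total = 0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]
        total += np.bincount(members).max()   # count of the majority label
    return total / len(labels_true)

print(purity([0, 0, 1, 1], [1, 1, 0, 1]))     # -> 0.75
```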
Part 2:
Fast and Accurate K-means
for Large Datasets
Shindler, Michael, Alex Wong, and Adam W. Meyerson. "Fast and accurate k-means for large datasets." Advances in Neural Information Processing Systems, 2011.
17
Motivation
Goal: Clustering Large Datasets
Data cannot be stored in main memory
Streaming model (sequential access)
Facility Location problem:
desired facility cost is given
without prior knowledge of k
• The original k-means requires multiple passes through the data
→ not suitable for big data / streaming settings
18
Well Clusterable Data
An instance of k-means is called σ-separable if reducing the number of clusters
increases the cost of the optimal k-means clustering by a factor of at least 1/σ²
Example with K = 3 versus K = 2:
$$\frac{J_{k=3}(C)}{J_{k=2}(C)} \le \sigma^2$$
20
K-means++ Seeding Procedure (non-streaming)
Let S be the set of already selected seeds
1. Initially, choose a point uniformly at random
2. Select a new point $p$ at random with probability proportional to $d^2(p, S) = \min_{s \in S} d(p, s)^2$
3. Repeat until $|S| = k$
• An improved version allows more than k centers
Advantage: avoids local-minima traps
k-means++: The advantages of careful seeding, Arthur & Vassilvitskii; the streaming variant with more than k centers is due to Ailon et al.
22
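A compact sketch of the D² seeding rule above (names are illustrative):

```python
import numpy as np

def kmeanspp_seeds(X, k, seed=0):
    """k-means++ seeding: each new seed is sampled with probability
    proportional to its squared distance to the nearest chosen seed."""
    rng = np.random.default_rng(seed)
    seeds = [X[rng.integers(len(X))]]             # step 1: uniform at random
    d2 = ((X - seeds[0]) ** 2).sum(1)             # d^2(p, S) for every point
    while len(seeds) < k:                         # step 3: repeat until |S| = k
        p = d2 / d2.sum()                         # step 2: sample prop. to d^2
        nxt = X[rng.choice(len(X), p=p)]
        seeds.append(nxt)
        d2 = np.minimum(d2, ((X - nxt) ** 2).sum(1))
    return np.array(seeds)
```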
Proposed Algorithm
Initialize: guess a small facility cost f
For each arriving point, find the closest facility point
Decide whether to open a new facility at the point, or assign it to the closest
cluster facility (adding its weight = contribution to the facility cost)
When the number of facility points overflows: increase f
and merge the (weighted) facility points
23
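A simplified sketch of the streaming facility-location idea above (not the paper's exact algorithm: the coin-flip opening rule, the doubling of f, and the merge pass are common textbook simplifications; points are hashable tuples):

```python
import random

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def absorb(facilities, x, w, f, rng):
    """Open a facility at x with prob. min(w*d^2/f, 1); otherwise
    add weight w to the closest existing facility."""
    if not facilities:
        facilities[x] = w
        return
    c = min(facilities, key=lambda y: dist2(x, y))
    if rng.random() < min(w * dist2(x, c) / f, 1.0):
        facilities[x] = w            # open a new facility
    else:
        facilities[c] += w           # assign to the closest facility

def stream_cluster(stream, f=1.0, max_facilities=50, seed=0):
    rng = random.Random(seed)
    facilities = {}
    for x in stream:
        absorb(facilities, x, 1, f, rng)
        if len(facilities) > max_facilities:   # overflow: raise f and merge
            f *= 2
            old, facilities = facilities, {}
            for y, w in old.items():
                absorb(facilities, y, w, f, rng)
    return facilities, f

pts = [(random.random(), random.random()) for _ in range(1000)]
centers, f = stream_cluster(pts, f=0.01, max_facilities=20)
print(len(centers), "weighted facility points, final f =", f)
```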
Approximate Nearest Neighbor
Finding the nearest facility point is the most time-consuming step
1. Construct a random vector $w$
2. Store the facility points $y_i$ in order of $w \cdot y_i$
3. For a new point $x$, find the two facilities $y_i$ and $y_{i+1}$ such that $w \cdot y_i \le w \cdot x \le w \cdot y_{i+1}$ (an $O(\log n)$ search)
4. Compute the distances to $y_i$ and $y_{i+1}$
This approximation can increase the approximation ratio by a constant factor
24
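A hedged sketch of steps 1-4: keep the facilities sorted by their projection w · y and binary-search a new point's projection (class and method names are illustrative):

```python
import bisect, random

def project(w, y):
    return sum(wi * yi for wi, yi in zip(w, y))

class ProjectedIndex:
    def __init__(self, dim, rng=random.Random(0)):
        self.w = [rng.gauss(0, 1) for _ in range(dim)]  # random direction
        self.keys = []    # sorted projections w . y_i
        self.points = []  # facilities in the same order
    def add(self, y):
        k = project(self.w, y)
        i = bisect.bisect_left(self.keys, k)
        self.keys.insert(i, k)
        self.points.insert(i, y)
    def approx_nearest(self, x):
        k = project(self.w, x)
        i = bisect.bisect_left(self.keys, k)
        # candidates: facilities just below and above x's projection
        cands = [self.points[j] for j in (i - 1, i) if 0 <= j < len(self.points)]
        return min(cands, key=lambda y: sum((a - b) ** 2 for a, b in zip(x, y)))

idx = ProjectedIndex(dim=2)
for y in [(0.0, 0.0), (1.0, 1.0), (4.0, 4.0)]:
    idx.add(y)
print(idx.approx_nearest((0.9, 1.2)))   # likely (1.0, 1.0)
```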
Algorithm Analysis
Determines a better facility cost
Better approximation ratio (17), much lower than previous work by Braverman et al.
Running time: $O(nk \log n)$
Running time with approximate nearest neighbor: $O(n \log(k \log n))$
25
Part 3:
Active Clustering of
Biological Sequences
Voevodski, Konstantin, et al. "Active clustering of biological sequences." The Journal of Machine Learning
Research 13.1 (2012): 203-225.
26
Motivation
BLAST Sequence Query
• Create a hash table of all the words
• Pairwise alignment of subsequences in the same bucket
No need to calculate the distances from a new query sequence
to all the database sequences
Previous clustering algorithms for gene sequences require computing all pairwise distances
Goal: develop a clustering algorithm that does not compute all pairwise distances
[Figure: a query sequence is looked up in a hash table]
27
Landmark Clustering Algorithm
Input:
• Dataset S
• Desired number of clusters k
• Probability of performance guarantee 1 − δ
• Objective function $\Phi(C) = \sum_{i=1}^{k} \sum_{x \in C_i} d(x, c_i)$, under the (1+α, ε)-stability property
Main procedures:
1. Landmark selection: $L = \{l_1, l_2, \ldots\}$
2. Expanding landmarks
3. Cluster assignment
Running time: $O(|L|\, n \log n)$
28
Landmark Selection
Select the set of landmarks L
Distance calculations: $O(|S| \cdot |L|)$
29
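For illustration, a hedged sketch of one simple selection rule, farthest-first traversal, which matches the O(|S| · |L|) distance-computation cost noted above (the paper's active selection rule differs in its details):

```python
import numpy as np

def select_landmarks(X, n_landmarks, seed=0):
    """Farthest-first traversal: each new landmark is the point farthest
    from all landmarks chosen so far. Costs O(|S| * |L|) distances."""
    rng = np.random.default_rng(seed)
    landmarks = [int(rng.integers(len(X)))]        # first landmark at random
    d = ((X - X[landmarks[0]]) ** 2).sum(1)        # sq. dist. to nearest landmark
    for _ in range(n_landmarks - 1):
        nxt = int(d.argmax())                      # farthest point so far
        landmarks.append(nxt)
        d = np.minimum(d, ((X - X[nxt]) ** 2).sum(1))
    return landmarks
```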
Expand-Landmarks
Ball of radius r around landmark l: $B_l^r = \{\, s \in S \mid d(s, l) \le r \,\}$
Landmark l is working if $|B_l^r| \ge s_{\min}$
30
Cluster Assignment
Construct a graph $G_B$ from the working landmarks:
• Nodes represent (working) landmarks
• Edges connect landmarks whose balls overlap
Find the connected components of the graph: $\mathrm{Components}(G_B) = \{\mathrm{Comp}_1, \ldots, \mathrm{Comp}_m\}$
The clustered points are the set of points contained in these balls
The number of clusters is $m = |\mathrm{Components}(G_B)|$
31
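The assignment step is just connected components over working landmarks; a small illustrative sketch with union-find, assuming for simplicity that two radius-r balls overlap when their centers are within 2r of each other:

```python
import numpy as np

def cluster_components(landmarks, r):
    """Union-find over working landmarks; an edge joins two landmarks
    whose radius-r balls overlap (here: centers closer than 2r)."""
    n = len(landmarks)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(landmarks[i] - landmarks[j]) <= 2 * r:
                parent[find(i)] = find(j)   # merge components

    comp = {}
    return np.array([comp.setdefault(find(i), len(comp)) for i in range(n)])

landmarks = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0]])
print(cluster_components(landmarks, r=0.4))   # -> [0 0 1]: two clusters
```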
Performance Analysis
Number of required landmarks:
• Active landmark selection: $O(k \ln(1/\delta))$
• Uniform selection: $O(k \ln(k/\delta))$ (degraded performance)
Good points: $w(x) \le d_{crit}$ and $w_2(x) - w(x) \ge 17\, d_{crit}$, where $w(x)$ is the distance from $x$ to its own cluster center and $w_2(x)$ the distance to the second-closest center
Landmark spread property: any set of good points must have a landmark
closer than $d_{crit}$
With high probability (1 − δ), the landmark spread property is satisfied
Based on the landmark spread property, Expand-Landmarks correctly
clusters most of the points in each cluster core
- Assumes large clusters; does not capture smaller clusters
32
Part 4:
Spectral Relaxation of
K-means Clustering
Zha, Hongyuan, et al. "Spectral relaxation for k-means clustering." Advances in Neural Information Processing Systems, 2001.
33
Motivation
K-means is prone to local minima
Different approaches tackle this issue:
• Improving the initialization process (k-means++)
• Relaxing constraints in the objective function (the spectral relaxation method)
Here the k-means objective function is reformulated as a trace maximization problem
34
Derivation
Data matrix $D = [d_1, \ldots, d_n]$; partition $\Pi$ into $k$ clusters of sizes $s_1, \ldots, s_k$

Cost of a partitioning:
$$ss(\Pi) = \sum_{i=1}^{k} \sum_{s=1}^{s_i} \lVert d_s^{(i)} - m_i \rVert^2$$

Cost for a single cluster, with $e = (1, \ldots, 1)^T$ and centroid $m_i = D_i e / s_i$:
$$ss_i = \sum_{s=1}^{s_i} \lVert d_s^{(i)} - m_i \rVert^2 = \lVert D_i(I - e e^T / s_i) \rVert_F^2 = \operatorname{trace}(D_i^T D_i) - \frac{e^T D_i^T D_i\, e}{s_i}$$

Summing over clusters:
$$ss(\Pi) = \operatorname{trace}(D^T D) - \operatorname{trace}(X^T D^T D X)$$
where $X$ is an $n \times k$ orthonormal indicator matrix

Minimizing $ss(\Pi)$ is equivalent to maximizing $\operatorname{trace}(X^T D^T D X)$
35
Theorem
For a symmetric matrix $H \in \mathbb{R}^{n \times n}$ with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$:
$$\max_{Y^T Y = I_k} \operatorname{trace}(Y^T H Y) = \lambda_1 + \lambda_2 + \cdots + \lambda_k$$
36
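The theorem is easy to sanity-check numerically; an illustrative NumPy snippet (variable names are mine, not from the slides):

```python
import numpy as np

# Ky Fan-type bound: the top-k eigenvectors maximize trace(Y^T H Y)
# over all orthonormal n x k matrices Y.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                      # symmetric test matrix
vals, vecs = np.linalg.eigh(H)         # eigenvalues in ascending order
k = 2
Yk = vecs[:, -k:]                      # top-k eigenvectors
assert np.isclose(np.trace(Yk.T @ H @ Yk), vals[-k:].sum())

# Any other orthonormal Y (e.g., a random one) achieves no more:
Q, _ = np.linalg.qr(rng.normal(size=(6, k)))
assert np.trace(Q.T @ H @ Q) <= vals[-k:].sum() + 1e-9
```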
Cluster Assignment
Global Gram matrix $D^T D$, with eigen-decomposition $D^T D\, y_j = \lambda_j y_j$ and $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$

With points ordered by cluster, the Gram matrix is nearly block diagonal:
$$D^T D = \begin{bmatrix} D_1^T D_1 & & \\ & \ddots & \\ & & D_k^T D_k \end{bmatrix} + Err$$

Gram matrix for cluster $i$: $D_i^T D_i$
Eigenvalue decomposition $D_i^T D_i\, \hat{y}_i = \lambda_i \hat{y}_i$, where $\lambda_i$ is the largest eigenvalue
37
Cluster Assignment
Method 1: apply k-means to the matrix of the k largest eigenvectors of the global Gram
matrix (pivoted k-means)
Method 2: pivoted QR decomposition of $\hat{Y}_k^T$
Davis-Kahan $\sin(\theta)$ theorem: $\hat{Y}_k = Y_k V + O(Err)$, where $V$ is a $k \times k$ orthogonal matrix
38
Ideal block structure of $\hat{Y}_k^T$ with points ordered by cluster:
$$\hat{Y}_k^T = \big[\, \underbrace{\hat{y}_{11} v_1, \ldots, \hat{y}_{1 s_1} v_1}_{\text{cluster } 1}\ \cdots\ \underbrace{\hat{y}_{k1} v_k, \ldots, \hat{y}_{k s_k} v_k}_{\text{cluster } k} \,\big]$$
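A hedged NumPy sketch of Method 1: embed the points with the k largest eigenvectors of the Gram matrix $D^T D$, then run plain k-means on the embedding (the function name and the Lloyd loop are illustrative; this is not Zha et al.'s exact pivoted procedure):

```python
import numpy as np

def spectral_relaxed_kmeans(D, k, n_iter=100, seed=0):
    """D: d x n data matrix (columns are points). Returns cluster labels."""
    rng = np.random.default_rng(seed)
    G = D.T @ D                             # n x n Gram matrix
    _, eigvecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    Y = eigvecs[:, -k:]                     # n x k matrix of top-k eigenvectors
    centers = Y[rng.choice(len(Y), size=k, replace=False)]
    for _ in range(n_iter):                 # plain Lloyd iterations on rows of Y
        d2 = ((Y[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = Y[labels == j].mean(0)
    return labels

# Toy usage: two well-separated Gaussian blobs as columns of D
rng = np.random.default_rng(1)
D = np.hstack([rng.normal(0, 0.3, (2, 20)), rng.normal(3, 0.3, (2, 20))])
print(spectral_relaxed_kmeans(D, k=2))
```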
Part 5:
Application for
Microbial Community Detection
39
Large genetic sequence datasets
Goal: cluster in a streaming model / with limited passes
Landmark selection algorithm (one pass)
Expand landmarks and assign sequences to the nearest landmark
Finding the nearest landmark: a hashing scheme (next slide)
Requires a choice of hyper-parameters
Assumes σ-separability and large clusters
40
Hashing Scheme for nearest neighbor search
Shindler's approximate nearest neighbor relies on a numeric random vector w and the sort key $w \cdot x$; genetic sequences need a different scheme
Create m random sequences $r_1, r_2, \ldots, r_m$
Hash function: $h_i = \mathrm{Edit}(x, r_i)$, the Levenshtein (edit) distance between sequence x and random sequence $r_i$
[Figure: a new sequence x is hashed into a table to find the closest landmark]
41
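A hedged sketch of this hashing idea: key each sequence by its edit distances to m fixed random reference sequences, so a query can be matched to a landmark without scanning all landmarks (the helper names, the key comparison, and the toy sequences are illustrative assumptions):

```python
import random

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[-1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def make_refs(m, length, alphabet="ACGT", seed=0):
    rng = random.Random(seed)
    return ["".join(rng.choice(alphabet) for _ in range(length)) for _ in range(m)]

def key(x, refs):
    return tuple(edit_distance(x, r) for r in refs)

# Usage: bucket landmarks by their key vector; match a query to the
# landmark with the closest key, without scanning all landmarks.
refs = make_refs(m=4, length=8)
landmarks = ["ACGTACGT", "TTTTCCCC", "GGGGAAAA"]
table = {key(l, refs): l for l in landmarks}
qk = key("ACGTACGA", refs)
nearest = min(table.items(), key=lambda kv: sum((a - b) ** 2 for a, b in zip(kv[0], qk)))[1]
print(nearest)   # expected: "ACGTACGT"
```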
Acknowledgements
Department of Computer Science and Engineering,
Michigan State University
My friend Sebastian Raschka,
Data Scientist and author of “Python Machine Learning”
Please visit http://vahidmirjalili.com
42