Unsupervised Joint Alignment,
Clustering, and Feature Learning
Marwan A. Mattar
PhD Thesis Defense
December 12, 2013
Research Goal
Develop an unsupervised data set-agnostic processing module
that includes alignment, clustering, and feature learning
Joint Alignment
Transform instances to make data set more coherent
Joint Alignment: Domains
• Computer vision: remove spatial variability in images	

• Speech: remove temporal variability in speech signals	

• Psychology: recover stimulus onset in ERP signals	

• Radiology: remove bias in MRI scans	

• ...
Joint Alignment ≠ Pairwise Alignment
• Pairwise alignment involves two instances	

• Joint alignment can be achieved through (N − 1) pairwise alignments

• However, this can be ineffective: each pairwise alignment takes only a local view of the data set and of the optimization
Joint Alignment: Unsupervised
• Supervision: fiducial points or examples of transforms	

• Can be expensive to obtain	

• We focus on unsupervised joint alignment
Joint Alignment
• Input:
  • Data set, $\{x_1, x_2, \dots, x_N\}$
  • Transformation function, $y_i = \tau(x_i, \rho_i)$
• Output:
  • Transformed data set, $\{y_1, y_2, \dots, y_N\}$, s.t. $S(y)$ is maximized
  • Transformation parameters, $\{\rho_1, \rho_2, \dots, \rho_N\}$
Joint Alignment: Previous Work
• Procrustes analysis (1939 & 1970’s)	

• Congealing framework (2000)	

• Domain-specific algorithms (e.g. curves or images)	

• Data-set specific algorithms (e.g. ERP)
Previous Work: Congealing
(Miller et al. 2000)
• General framework, widely applicable	

• Utilizes entropy-based objective function	

• Iterative conditional maximization optimization	

• Expectation regularization step: $E[\rho] = 0$
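A minimal sketch of the congealing loop just described, assuming hypothetical callables apply_transform(x, rho) and total_entropy(...) (neither name is from the slides): each instance's parameters are updated by greedy coordinate search, and the parameter means are re-centered to enforce E[ρ] = 0.

```python
import numpy as np

def congeal(X, apply_transform, total_entropy, n_params, n_iters=50, step=0.1):
    """Iterative conditional maximization: greedily adjust each instance's
    transformation parameters to lower the joint entropy of the data set."""
    N = len(X)
    rho = np.zeros((N, n_params))                 # one parameter vector per instance
    for _ in range(n_iters):
        for i in range(N):
            for d in range(n_params):
                best = total_entropy([apply_transform(x, r) for x, r in zip(X, rho)])
                for delta in (-step, step):       # try a small move in each direction
                    rho[i, d] += delta
                    cand = total_entropy([apply_transform(x, r) for x, r in zip(X, rho)])
                    if cand < best:
                        best = cand               # keep the improving move
                    else:
                        rho[i, d] -= delta        # revert
        rho -= rho.mean(axis=0)                   # expectation regularization: E[rho] = 0
    return rho
```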
Research Goals
• Develop a framework for joint alignment, clustering, and feature learning

• Handle complexities due to multi-modalities and representation

• Release open-source implementations of all models

Develop an unsupervised data set-agnostic processing module
that includes alignment, clustering, and feature learning
Today ....
alignment, clustering, feature learning

Future ....
alignment, clustering, feature learning
Curve Alignment
Curve Alignment: Objectives
• Adapt congealing framework to 1-dimensional curves	

• Nonparametric: entropy-based objective function	

• Efficient: global transformation parametrization	

• Experiment with effective parameterizations	

• Evaluate effect of alignment on classification
Curve Alignment: Objective Function
• Input: $N$ curves of length $T$

• Sum of location-wise entropy (or variance): treating the values at each location $t$ as samples $\{y_n(t)\}_{n=1}^{N} \sim Y^t$,

$S(y_1, y_2, \dots, y_N) = -\sum_{t=0}^{T} H(Y^t)$

• Marginal entropy computed using the Vasicek estimator:

$\hat{H}(x_1, x_2, \dots, x_N) \propto \sum_i \log\left(x_{(mi+m+1)} - x_{(mi+1)}\right)$
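The Vasicek estimator works directly from order statistics; the helper below is a hypothetical rendering (the function names and the √N window heuristic are mine, not from the slides).

```python
import numpy as np

def vasicek_entropy(samples, m=None):
    """m-spacing (Vasicek) estimate of differential entropy from N scalar samples."""
    x = np.sort(np.asarray(samples, dtype=float))
    N = len(x)
    if m is None:
        m = max(1, int(np.sqrt(N)))               # a common heuristic window size
    lo = np.clip(np.arange(N) - m, 0, N - 1)      # clamp spacings at the boundaries
    hi = np.clip(np.arange(N) + m, 0, N - 1)
    spacings = np.maximum(x[hi] - x[lo], 1e-12)   # guard against zero spacings
    return np.mean(np.log(N * spacings / (2 * m)))

def alignment_objective(Y):
    """Location-wise objective for an N x T array of aligned curves."""
    return sum(vasicek_entropy(Y[:, t]) for t in range(Y.shape[1]))
```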
Curve Alignment: Optimization
• Conditional maximization with expectation regularization step	

• Step sizes are sampled: $p \sim U(0, p_{\max})$

• Dampen parameters
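Relative to the congealing sketch above, this slide adds two tweaks; a hedged reading of both (the exact dampening schedule is an assumption, not shown on the slide):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_steps(p_max, n_params):
    """Redraw per-parameter step sizes p ~ U(0, p_max) each sweep, mixing
    coarse and fine moves instead of using a fixed step schedule."""
    return rng.uniform(0.0, p_max, size=n_params)

def dampen(rho, factor=0.99):
    """Shrink parameters toward the identity transform after each sweep
    (one plausible reading of the 'dampen parameters' step)."""
    return factor * rho
```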
Curve Alignment: Transformation

$y(t) = \alpha\, g(t')\, x(t') + \beta, \qquad t' = h(t)$

• $\alpha$: linear amplitude scaling
• $h(t)$: non-linear time warping
• $g$: non-linear amplitude scaling
• $\beta$: amplitude translation
Curve Alignment: Transformation

$y(t) = \alpha\, g(t')\, x(t') + \beta, \qquad t' = h(t)$

• $h(t)$ is a smooth monotone function

• Utilize the monotonicity operator of Ramsay (1998): an unconstrained function is passed through the monotonicity operator to obtain $h(t)$
Curve Alignment: Transformation

$y(t) = \alpha\, g(t')\, x(t') + \beta, \qquad t' = h(t)$

• $w(t)$ and $g(t)$ are unconstrained and estimated using a Fourier basis

• 2 and 7 frequencies, respectively, were sufficient

• Total # of parameters: 20
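A sketch of this transformation under stated assumptions: w(t) and g(t) are low-order Fourier expansions (my bookkeeping includes a constant term, so it may not match the thesis's exact 20-parameter count), and h(t) is built by numerically integrating the Ramsay operator shown in the backup slides.

```python
import numpy as np

def fourier_series(coeffs, t):
    """Evaluate a low-order Fourier expansion [c0, a1, b1, a2, b2, ...] on t in [0, 1]."""
    out = np.full_like(t, coeffs[0], dtype=float)
    for k, (a, b) in enumerate(zip(coeffs[1::2], coeffs[2::2]), start=1):
        out += a * np.sin(2 * np.pi * k * t) + b * np.cos(2 * np.pi * k * t)
    return out

def monotone_warp(w_coeffs, T):
    """Ramsay-style monotone warp: integrate exp(cumulative integral of w),
    then normalize so h maps [0, 1] onto [0, 1]."""
    t = np.linspace(0.0, 1.0, T)
    inner = np.cumsum(fourier_series(w_coeffs, t)) / T   # ~ int_0^t w(s) ds
    h = np.cumsum(np.exp(inner)) / T                     # ~ int_0^t exp(...) dr
    return (h - h[0]) / (h[-1] - h[0])

def transform_curve(x, alpha, beta, w_coeffs, g_coeffs):
    """y(t) = alpha * g(t') * x(t') + beta with t' = h(t)."""
    T = len(x)
    t_prime = monotone_warp(w_coeffs, T)
    g = fourier_series(g_coeffs, t_prime)
    x_warped = np.interp(t_prime, np.linspace(0.0, 1.0, T), x)  # x evaluated at warped times
    return alpha * g * x_warped + beta
```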
Curve Alignment: Experiments
• Alignment of synthetic data sets	

• Alignment of real data sets	

• Improving classification with unsupervised joint alignment
Curve Alignment: Synthetic
• 5 curves x 5 transformations x 3 levels of difficulty = 75 data sets	

• ~90% of the data sets converged to correct alignments
Curve Alignment: Synthetic
Non-linear time scaling
Curve Alignment: Synthetic
All transformations
Curve Alignment: Synthetic
Failure case
Curve Alignment: Real
2-class data set, color-coded
Curve Alignment: Comparison to CPM
TIC data set
Original w/ variance
Curve Alignment: Comparison to CPM
TIC data set
Original w/ entropy
Curve Alignment: Classification
(Diagram: the original data is aligned, producing an aligned data set and transformation parameters.)
Curve Alignment: Classification
• Perform unsupervised alignment	

• Represent each instance by its original + aligned + transformation	

• Improve the performance of an SVM
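A minimal sketch of this pipeline with scikit-learn; the thesis used an SVM, but the kernel choice and all names here are assumptions, and the MKL variant is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

def build_features(X_orig, X_aligned, rho):
    """Represent each instance by original curve + aligned curve +
    transformation parameters, concatenated into one vector."""
    return np.hstack([X_orig, X_aligned, rho])

# Hypothetical usage, given arrays from an earlier unsupervised alignment run:
# clf = SVC(kernel="rbf").fit(build_features(Xtr, Xtr_al, rho_tr), y_tr)
# accuracy = clf.score(build_features(Xte, Xte_al, rho_te), y_te)
```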
Curve Alignment: Classification
13 Data Sets from UCR Time Series Repository

Method                                             Accuracy
Original Curves (no alignment)                     78.83%
Dynamic Time Warping (pairwise alignment)          82.35%
Joint Alignment (entropy + non-linear time)        82.97%
Joint Alignment (entropy + all transformations)    84.12%
Joint Alignment (variance + all transformations)   85.46%
Joint Alignment (variance + all trans. + MKL)      85.79%
Curve Alignment: Classification
13 Data Sets from UCR Time Series Repository
Curve Alignment: Conclusions
• Effective on 1-dimensional curves	

• Small # of parameters needed	

• Unsupervised alignment improves classification
Curve Alignment: Drawbacks
• Lack of a feature representation	

• Independence assumption in entropy computation	

• Results in co-alignment and difficulty aligning complex data sets	

• Transformation regularization is ad hoc
Alignment with CRBMs
CRBM Alignment: Objectives
• Alignment of complex data sets requires an appropriate representation	

• Automate feature selection	

• Use statistics of data set to learn an appropriate feature representation	

• Experimented with convolutional restricted Boltzmann machines (CRBMs)

• Weights between the visible units and a hidden unit are utilized as a feature detector

• Hidden units are binary and activate if the feature is detected

• Higher layers represent more complex features
CRBM Alignment: Feature Learning
(Diagram: an RBM with visible units v1–v7 fully connected to hidden units h1–h4, versus a 1-group CRBM whose hidden units h¹₁–h¹₄ share convolutional weights and feed pooling units p¹₁ and p¹₂.)
CRBM Alignment: Overview
• Train a CRBM using unaligned data	

• Use pooling unit activation probability as feature representation	

• Use (stochastic) congealing algorithm	

• Applied successfully to both curves and face images
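For intuition, pooling-unit activation probabilities in a 1-D CRBM with probabilistic max-pooling (in the style of Lee et al. 2009) can be sketched as below; the function and argument names are mine, and the thesis's exact formulation may differ.

```python
import numpy as np

def pooling_activations(v, filters, biases, pool=5):
    """Pooling-unit 'on' probabilities: a pooling unit fires iff any hidden
    unit in its block does, giving P = sum(exp(I)) / (1 + sum(exp(I)))."""
    feats = []
    for W, b in zip(filters, biases):
        pre = np.correlate(v, W, mode="valid") + b   # hidden pre-activations
        n = (len(pre) // pool) * pool                # drop any ragged tail
        s = np.exp(pre[:n]).reshape(-1, pool).sum(axis=1)
        feats.append(s / (1.0 + s))
    return np.concatenate(feats)                     # e.g. 32 filters -> 576 dims
```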
CRBM Alignment: Curve Features
• Trained a CRBM on training data from 13 UCR data sets	

• Learned 32 filters of length 10, with pooling area of 5	

• 576 pooling unit activations
CRBM Alignment: Curve Features
(Figure slides: visualizations of the learned curve filters.)
CRBM Alignment: Classification
13 Data Sets from UCR Time Series Repository

Alignment    Classification    Accuracy
None         Raw               76.89%
Raw          Raw               84.63%
None         CRBM              84.96%
CRBM         CRBM              87.40%
CRBM Alignment: Classification
13 Data Sets from UCR Time Series Repository
CRBM Alignment: Face Verification
• Use alignment as pre-processing for face verification task	

• Use alternative / consistent feature representation for verification step
CRBM Alignment: Filters
CRBM: 32 filters, 10x10 size; pooling: 5x5
KMeans clusters
CRBM Alignment: Classification
Method Accuracy
Original (no alignment) 74.2%
Alignment with SIFT features 75.8%
Alignment with CRBM features 82.0%
Supervised alignment 80.5%
View 1 from LFW
CRBM Alignment: Summary
• Effective for alignment of both curves & faces	

• Effective for curve classification
Joint Alignment & Clustering
JAC: Objectives
• Develop nonparametric/unsupervised alignment & clustering framework	

• Applicable to any domain	

• Allows use of continuous transformations	

• Discovers # clusters automatically	

• Regularize transformations in a principled manner

• Evaluate model on curve and image data sets
JAC: Objectives
• Simultaneous > sequential

• Example: KMeans clustering accuracy on the unaligned data = 54% (almost random guessing)
JAC: Overview
Bayesian alignment (assumes a single mode) → infinite Bayesian alignment & clustering
JAC: Bayesian Alignment
$x_i = \tau(y_i, \rho_i)$
JAC: Bayesian Alignment
• Explicit regularization of transformations	

• Direct maximization that treats transformation function as black box	

• Low memory footprint with sufficient stats
JAC: Bayesian Alignment
Average pairwise distance: 8.7 (before), 5.2 (with congealing), 5.1 (with BA)
JAC: Bayesian Alignment
75 synthetic curve data sets
JAC: Bayesian Alignment
Sample result
JAC: Model
• Bayesian alignment: a single set of parameters
• Alignment & clustering: a potentially infinite number of parameters
JAC: Model
Blocked Gibbs sampler: direct generalization of BA

$(z_i, \rho_i) \sim \underbrace{p(z_i \mid z_{-i}; \gamma)}_{\text{cluster term (CRP)}}\; \underbrace{p(y_i \mid y_{z_i}; \lambda)}_{\text{data term}}\; \underbrace{p(\rho_i \mid \rho_{z_i}; \alpha)}_{\text{transformation term}}$
JAC: Model
Second sampler integrates out transformations:

$z_i \sim p(z_i \mid z_{-i}; \gamma)\, p(x_i \mid x_{-i}, z; \lambda, \alpha)$

Transformation parameters are updated with EM.
JAC: Digits ‘4’ & ‘9’
JAC: Digits ‘4’ & ‘9’

Method                   Clustering    Alignment
Original                 -             4.54
KMeans                   54.0%         -
Affinity Propagation     82.0% (3)     -
Infinite mixture model   86.0% (4)     -
Congealing (+KMeans)     90.0%         1.80
Unsupervised JAC         94.0% (2)     1.20

(number of discovered clusters shown in parentheses)
JAC: Digits ‘4’ & ‘9’
JAC: All 10 Digit Classes

Method                   Clustering       Alignment
Original                 -                5.22
KMeans                   62.5%            -
Affinity Propagation     67.5% (14)       -
Infinite mixture model   69.0% (13)       -
Congealing (+KMeans)     64.0%            3.43
Liu et al.               56.5% / 73.7%    3.80
Unsupervised JAC         87.0% (11)       1.58
JAC: All 10 Digit Classes
JAC: ECG Heart Data

Method                             Clustering    Alignment
Original                           -             93.6
KMeans (2 clusters)                58.70%        -
KMeans (5 clusters)                82.61%        -
Affinity Propagation               65.22% (4)    -
Infinite mixture model             78.26% (2)    -
Congealing (+KMeans, 5 clusters)   76.09%        53.85
Unsupervised JAC                   86.96% (5)    5.93
JAC: Summary
• Effective, large improvements over prior work	

• Allows finite, semi-supervised, online & distributed variations	

• Modular implementation
Thesis Summary
Curve Alignment
Alignment &
Feature Learning
Alignment &
Clustering
Future Directions
Future: Closing the Loop
• Simultaneous unsupervised alignment, clustering, and feature learning	

• Update learned features through alignment	

• Different filters / weights for each cluster	

• Multi-resolution framework using different network depths
Research Goal
Develop an unsupervised data set-agnostic processing module
that includes alignment, clustering, and feature learning
Thank you.
Photo credit: Gary B. Huang
Previous Work: Procrustes Analysis
(Mosier 1939, Hurley & Cattell 1962, Kristof & Wingersky 1971)
• Framework for computing an optimal matching under transformations

• Residual sum-of-squares distance called the Procrustes statistic

• Applied to joint alignment and several other applications

• Drawbacks:

  • The mean may not be an appropriate reference

  • Assumes a unimodal data set
Previous Work: Congealing
(Binary images of handwritten digits: Miller et al. 2000)
before after
Previous Work: Congealing
(Bias removal in MRI scans: Learned-Miller et al. 2004 & 2005)
before after
Previous Work: Congealing
• Complex images of cars and faces	

• 3D MRI scans	

• Images of Drosophila discs	

• Facial contour labeling
Previous Work: Congealing
(Grayscale images of faces: Huang et al. 2007 & 2012)
Previous Work: Congealing
(3D MRI scans: Zollei et al. 2005)
Previous Work: Curves
• HMM-based models: Continuous Profile Model & Segmental HMM	

• Many parameters	

• Assumes unimodal data set	

• Regression-based models: Transformed mixture of regression

• Requires # of clusters	

• Only linear transformations
Previous Work: Images
• Congealing variants:	

• Specific to image domains	

• Improved performance on digit and simple face data sets	

• Clustering-based models
Curve Alignment: Transformation

$y(t) = \alpha\, g(t')\, x(t') + \beta, \qquad t' = h(t)$

• $h(t)$ is a smooth monotone function satisfying $\dfrac{d^2 h}{dt^2} = w\, \dfrac{dh}{dt}$

• Utilize the monotonicity operator of Ramsay (1998), which maps an unconstrained function $w(t)$ to a monotone $h(t)$:

$h(t) = \frac{1}{Z} \int_0^t \exp\!\left( \int_0^r w(s)\, ds \right) dr$
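A one-line check (not on the slides) of why this construction is monotone: differentiating the integral gives

$h'(t) = \frac{1}{Z} \exp\!\left( \int_0^t w(s)\, ds \right) > 0, \qquad h''(t) = w(t)\, h'(t),$

so $h$ is strictly increasing for any unconstrained $w$, and the differential equation above is recovered.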
Curve Congealing: Drawbacks
Co-alignment
(Panels: original, mean, supervised, unsupervised)
Curve Congealing: Drawbacks
Complex multi-modal data sets
(Plots: “Original Data Set” vs. “with Congealing”.)
CRBM Alignment: Overview
• Train a convolutional deep belief network using unaligned images

• Use pooling activations as feature representation

• Use (stochastic) congealing algorithm

(Diagram: CRBM (1-layer) vs. CDBN (2-layer))
JAC: Toy Example
(Panels: (1) original, (2) congealing, (3) clustering, (4) alignment and clustering)
JAC: Bayesian Alignment
Collapsed (Rao-Blackwellized) Gibbs sampler:

$\forall_{i=1:N} \quad \rho_i \sim \underbrace{p(y_i \mid y_{-i}; \lambda)}_{\text{data term}}\; \underbrace{p(\rho_i \mid \rho_{-i}; \alpha)}_{\text{transformation term}}$

Hard-assignment variant:

$\rho_i = \arg\max_{\rho_i}\; p(y_i \mid y_{-i}; \lambda)\, p(\rho_i \mid \rho_{-i}; \alpha)$
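A sketch of the hard-assignment update, with all scoring callables as hypothetical stand-ins for the corresponding densities; only the structure of the update is taken from the slide.

```python
import numpy as np

def update_rho_i(i, rho, candidates, data_logp, prior_logp):
    """ICM-style update: score each candidate rho_i by the data term
    (instance i against the others) plus the transformation prior,
    and keep the argmax."""
    others = np.delete(rho, i, axis=0)           # rho_{-i}
    scores = [data_logp(i, r) + prior_logp(r, others) for r in candidates]
    return candidates[int(np.argmax(scores))]
```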
JAC: Alignment & Clustering
Second sampler integrates out transformations:

$z_i \sim p(z_i \mid z_{-i}; \gamma)\, p(x_i \mid x_{-i}, z; \lambda, \alpha)$

$p(x_i \mid x_{-i}, z; \lambda, \alpha) \approx \frac{\sum_l w_l\, p(x_i \mid \rho_l, \theta_{z_i}; \lambda)}{\sum_l w_l} \quad \text{s.t.} \quad \forall_{l=1:L}\;\; \rho_l \sim q(\rho) \;\;\&\;\; w_l = \frac{p(\rho_l \mid \varphi_{z_i}; \alpha)}{q(\rho_l)}$
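The importance-sampling approximation above can be sketched in log space; every callable here is a hypothetical stand-in for the corresponding density.

```python
import numpy as np

def marginal_loglik(x_i, theta_k, phi_k, sample_q, logq, loglik, logprior, L=100):
    """Estimate log p(x_i | x_-i, z) by integrating out the transformation:
    draw rho_l ~ q, weight by w_l = p(rho_l | phi_k) / q(rho_l)."""
    rhos = [sample_q() for _ in range(L)]
    logw = np.array([logprior(r, phi_k) - logq(r) for r in rhos])
    logp = np.array([loglik(x_i, r, theta_k) for r in rhos])
    # log( sum_l w_l p_l / sum_l w_l ), computed stably in log space
    return np.logaddexp.reduce(logw + logp) - np.logaddexp.reduce(logw)
```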
JAC: Implementation
• visDataSet: represents data set (e.g. images, curves)	

• visProcessor: handles feature representation (e.g. HOG)	

• visTransform: represents the transformation function (e.g. affine transformations)

• visSufficientStats: represents exponential family members and priors	

• visAlignment: base class for alignment algorithms	

• visMixtureModelBase: base class for DP-based clustering/alignment
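A Python rendering of how these pieces might fit together; the released code's language and exact interfaces are not shown on the slides, so this skeleton is an assumption.

```python
from abc import ABC, abstractmethod

class visDataSet(ABC):
    """Holds the instances (images, curves, ...) behind a uniform interface."""
    @abstractmethod
    def instances(self): ...

class visTransform(ABC):
    """Black-box transformation: instance + parameters -> transformed instance."""
    @abstractmethod
    def apply(self, x, rho): ...

class visProcessor(ABC):
    """Maps raw instances to a feature representation (e.g. HOG, CDBN)."""
    @abstractmethod
    def features(self, x): ...

class visAlignment(ABC):
    """Base class for alignment algorithms (congealing, Bayesian alignment,
    joint alignment & clustering)."""
    def __init__(self, data, transform, processor):
        self.data, self.transform, self.processor = data, transform, processor
    @abstractmethod
    def run(self): ...
```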
JAC: Implementation
(Class diagram: visAlignment and visMixtureModel build on visDataSet, visProcessor, visTransforms, and visSufficientStats.)
• Algorithms derived from the base classes: visCongealing, visDPClustering, visBayesianAlignment, visAlignmentClustering
• Data sets: visVectorDataSet, visImageDataSet
• Transforms: visImageTransforms, visCurveTransforms, visRotatePointTransforms
• Processors: visDiscretize, visHOG, visCDBN
• Sufficient stats: visMultinomial, visGaussian
