Jong Youl Choi Computer Science Department (jychoi@cs.indiana.edu)
Social Bookmarking Socialized Tags Bookmarks
Machine Learning and Statistical Analysis
Principles of Machine Learning Bayes’ theorem and maximum likelihood Machine Learning Algorithms Clustering analysis Dimension reduction Classification Parallel Computing General parallel computing architecture Parallel algorithms
Definition: algorithms or techniques that enable a computer (machine) to “learn” from data. Related to many areas such as data mining, statistics, and information theory. Algorithm types: unsupervised learning, supervised learning, reinforcement learning. Topics – Models: Artificial Neural Network (ANN), Support Vector Machine (SVM); Optimization: Expectation-Maximization (EM), Deterministic Annealing (DA)
Posterior probability of θ_i, given X. θ_i ∈ Θ : parameter, X : observations, P(θ_i) : prior (or marginal) probability, P(X | θ_i) : likelihood. Bayes’ theorem gives P(θ_i | X) = P(X | θ_i) P(θ_i) / P(X). Maximum Likelihood (ML) – used to find the most plausible θ_i ∈ Θ, given X. Computing the maximum likelihood (ML) or log-likelihood is an optimization problem.
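Below is a minimal sketch, not from the slides, of treating ML estimation as an optimization problem: it scores candidate parameters of a 1-D Gaussian by their log-likelihood and keeps the best one (the data, the candidate grid, and σ = 1 are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=1.0, size=200)   # observations
candidates = np.linspace(-5, 5, 1001)          # candidate parameters (theta)

def log_likelihood(mu, x, sigma=1.0):
    # log P(X | theta) for i.i.d. Gaussian observations
    return np.sum(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

ll = np.array([log_likelihood(mu, X) for mu in candidates])
theta_ml = candidates[np.argmax(ll)]           # maximum-likelihood estimate
print(theta_ml)
```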
Problem – estimate the hidden parameters (θ = {μ, σ}) from data drawn from k Gaussian distributions. Gaussian distribution: P(x | μ, σ) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)). Maximum likelihood with a Gaussian (P = N): solve either by brute force or by a numerical method (Mitchell, 1997).
Problems in ML estimation – the observation X is often not complete, a latent (hidden) variable Z exists, and it is hard to explore the whole parameter space. Expectation-Maximization algorithm – objective: find the ML estimate over the latent distribution P(Z | X, θ). Steps: 0. Init – choose a random θ_old; 1. E-step – compute the expectation under P(Z | X, θ_old); 2. M-step – find the θ_new which maximizes the likelihood; 3. Go to step 1 after updating θ_old ← θ_new.
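A hedged sketch of these steps for a 1-D Gaussian mixture (equal mixing weights and a fixed iteration count are simplifying assumptions, not part of the slides):

```python
import numpy as np

def em_gmm(x, k=2, iters=50):
    rng = np.random.default_rng(0)
    mu = rng.choice(x, size=k)            # 0. init: random theta_old
    sigma = np.full(k, x.std())
    for _ in range(iters):
        # 1. E-step: responsibilities P(Z | X, theta_old)
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # 2. M-step: theta_new maximizing the expected log-likelihood
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        # 3. repeat with theta_old <- theta_new
    return mu, sigma

x = np.concatenate([np.random.normal(-2, 1, 300), np.random.normal(3, 0.5, 300)])
print(em_gmm(x))
```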
Definition – grouping unlabeled data into clusters, for the purpose of inferring hidden structures or information. Dissimilarity measurements – Distance: Euclidean (L2), Manhattan (L1), …; Angle: inner product, …; Non-metric: rank, intensity, … Types of clustering – Hierarchical: agglomerative or divisive; Partitioning: K-means, VQ, MDS, … (Matlab help page)
Find K partitions with the total intra-cluster variance minimized. Iterative method – Initialization: randomized y_i; Assignment of x (y_i fixed); Update of y_i (x fixed). Problem? The iteration can get trapped in local minima (MacKay, 2003).
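A minimal K-means sketch of that assignment/update loop (random initialization from the data, a fixed iteration count, and no handling of empty clusters are assumptions for brevity):

```python
import numpy as np

def kmeans(x, k=3, iters=100):
    rng = np.random.default_rng(0)
    y = x[rng.choice(len(x), k, replace=False)]   # init: randomized y_i
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
        assign = d.argmin(axis=1)                 # assignment of x (y_i fixed)
        # update y_i (x fixed); assumes no cluster goes empty
        y = np.array([x[assign == j].mean(axis=0) for j in range(k)])
    return y, assign
```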
Deterministically avoid local minima – no stochastic process (random walk); trace the global solution by changing the level of randomness. Statistical mechanics – Gibbs distribution; Helmholtz free energy F = D − TS; average energy D = ⟨E_x⟩; entropy S = −Σ_x P(E_x) ln P(E_x); F = −T ln Z. In DA, we minimize F. (Maxima and Minima, Wikipedia)
Analogy to the physical annealing process – control the energy (randomness) by temperature (high → low). Starting with a high temperature (T = ∞): soft (or fuzzy) association probabilities, a smooth cost function with one global minimum. Lowering the temperature (T → 0): hard association; the full complexity is revealed and clusters emerge. Minimize F iteratively, using E(x, y_j) = ||x − y_j||².
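A hedged sketch of that idea: Gibbs-distribution soft assignments at temperature T followed by centroid updates, with T lowered geometrically (the cooling schedule and constants are illustrative assumptions, not the slides’ exact algorithm):

```python
import numpy as np

def da_step(x, y, T):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=2)   # E(x, y_j) = ||x - y_j||^2
    d2 -= d2.min(axis=1, keepdims=True)                       # stabilize the exponentials
    p = np.exp(-d2 / T)
    p /= p.sum(axis=1, keepdims=True)                         # soft association P(y_j | x)
    return (p[:, :, None] * x[:, None, :]).sum(axis=0) / p.sum(axis=0)[:, None]

def da_cluster(x, k=3, T0=100.0, alpha=0.9, sweeps=200):
    # start near the data mean with tiny perturbations so clusters can split as T drops
    y = np.tile(x.mean(axis=0), (k, 1)) + 1e-3 * np.random.randn(k, x.shape[1])
    T = T0
    for _ in range(sweeps):              # lower T gradually: high -> low
        y = da_step(x, y, T)
        T = max(alpha * T, 1e-3)
    return y
```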
Definition – the process of transforming high-dimensional data into low-dimensional data, to improve accuracy or understanding or to remove noise. Curse of dimensionality: complexity grows exponentially in volume as extra dimensions are added. Types – Feature selection: choose representatives (e.g., filters, …); Feature extraction: map to a lower dimension (e.g., PCA, MDS, …) (Koppen, 2000)
Finding a map of the principal components (PCs) of the data into an orthogonal space, such that y = W x, where W ∈ R^(d×h) (h ≪ d). PCs – the variables with the largest variances; orthogonality; linearity – optimal least mean-square error. Limitations? Strict linearity, a specific distribution, and the large-variance assumption. (Figure: data in the x1–x2 plane with principal components PC1 and PC2.)
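A minimal PCA sketch via SVD of the centered data (an assumed but standard route; the rows of W are the top-h principal directions):

```python
import numpy as np

def pca(X, h=2):
    Xc = X - X.mean(axis=0)                  # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:h]                               # PCs = directions of largest variance
    return Xc @ W.T, W                       # projected data y = W x, and the map W
```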
Like PCA, reduces the dimension by y = R x, where R is a random matrix with i.i.d. columns and R ∈ R^(d×p) (p ≪ d). Johnson–Lindenstrauss lemma: when projecting onto a randomly selected subspace, the distances are approximately preserved. Generating R – an orthogonalized R is hard to obtain; Gaussian R; a simple approach is to choose r_ij = {+3^(1/2), 0, −3^(1/2)} with probabilities 1/6, 4/6, 1/6 respectively.
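A sketch of that sparse random projection (the matrix shape convention and the 1/√p scaling that keeps pairwise distances roughly unchanged are assumptions):

```python
import numpy as np

def random_projection(X, p=50):
    d = X.shape[1]
    # entries +sqrt(3), 0, -sqrt(3) with probabilities 1/6, 4/6, 1/6
    R = np.sqrt(3) * np.random.choice([1.0, 0.0, -1.0], size=(d, p), p=[1/6, 4/6, 1/6])
    return X @ R / np.sqrt(p)     # project to p dimensions, distances roughly preserved
```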
Dimension reduction preserving the distance proximities observed in the original data set. Loss functions – inner product, distance, squared distance. Classical MDS: minimizing STRAIN, given Δ. From Δ, find the inner product matrix B (double centering); from B, recover the coordinates X' (i.e., B = X'X'^T).
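A classical MDS sketch of exactly those two steps, assuming D holds pairwise distances (double centering, then coordinates from the top eigenpairs of B):

```python
import numpy as np

def classical_mds(D, dim=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                # double centering gives B = X' X'^T
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]            # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))   # recovered coordinates X'
```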
SMACOF: minimizing STRESS. Majorization – for a complex f(x), find an auxiliary simple g(x, y) such that g majorizes f. Majorization for STRESS: minimize tr(X^T B(Y) Y), known as the Guttman transform (Cox, 2001).
A competitive and unsupervised learning process for clustering and visualization. Result: similar data move closer together in the model space. Input, model, learning – choose the model vector m_j most similar to x_i; update the winner and its neighbors by m_k = m_k + α(t) h(t) (x_i − m_k), where α(t) is the learning rate and h(t) the neighborhood size.
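A minimal SOM sketch on a 1-D chain of model vectors (the decay schedules for α(t) and the neighborhood width are illustrative assumptions):

```python
import numpy as np

def som(X, m=10, epochs=20):
    rng = np.random.default_rng(0)
    M = X[rng.choice(len(X), m)].astype(float)          # model vectors m_k
    for t in range(epochs):
        alpha = 0.5 * np.exp(-t / epochs)               # learning rate alpha(t)
        width = max(m / 2 * np.exp(-t / epochs), 1.0)   # neighborhood size h(t)
        for x in X[rng.permutation(len(X))]:
            j = np.argmin(np.linalg.norm(M - x, axis=1))     # best-matching unit m_j
            h = np.exp(-((np.arange(m) - j) ** 2) / (2 * width ** 2))
            M += (alpha * h)[:, None] * (x - M)         # update winner and neighbors
    return M
```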
Definition – a procedure dividing data into a given set of categories based on the training set, in a supervised way. Generalization vs. specialization: hard to achieve both. Avoid overfitting (overtraining): early stopping, holdout validation, K-fold cross-validation, leave-one-out cross-validation. (Figure: training error vs. validation error, with underfitting and overfitting regions; Overfitting, Wikipedia.)
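A sketch of k-fold cross-validation as one of those overfitting checks; `train` and `error` are hypothetical stand-ins for whatever model-fitting and loss functions are in use:

```python
import numpy as np

def k_fold_cv(X, y, train, error, k=5):
    idx = np.random.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        val = folds[i]
        trn = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train(X[trn], y[trn])
        scores.append(error(model, X[val], y[val]))   # held-out (validation) error
    return float(np.mean(scores))
```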
Perceptron: a computational unit with a binary threshold. Abilities – a linearly separable decision surface; represents Boolean functions (AND, OR, NOT). A network (multilayer) of perceptrons gives various network architectures and capabilities. (Figure: weighted sum feeding an activation function; Jain, 1996.)
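A single perceptron unit as described: a weighted sum passed through a binary threshold; the weights realizing AND are an illustrative assumption:

```python
import numpy as np

def perceptron(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0     # binary threshold activation

w, b = np.array([1.0, 1.0]), -1.5               # assumed weights realizing AND
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), w, b))     # prints 0, 0, 0, 1
```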
Learning the weights – random initialization and iterative updating. Error-correction training rules: based on the difference between the training data and the output, E(t, o). Gradient descent (batch learning): with E = Σ_i E_i. Stochastic approach (on-line learning): update with the gradient for each example. Various error functions – adding a weight-regularization term (∝ Σ w_i²) to avoid overfitting; adding a momentum term (∝ Δw_i(n−1)) to expedite convergence.
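A hedged sketch of one stochastic (on-line) update with both extra terms, weight decay and momentum; `grad_Ei` is a hypothetical per-example gradient function and the constants are illustrative:

```python
import numpy as np

def sgd_step(w, grad_Ei, x_i, t_i, lr=0.01, decay=1e-4, momentum=0.9, velocity=None):
    if velocity is None:
        velocity = np.zeros_like(w)
    g = grad_Ei(w, x_i, t_i) + decay * w          # per-example gradient + regularization term
    velocity = momentum * velocity - lr * g       # momentum to expedite convergence
    return w + velocity, velocity
```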
Q: How do we draw the optimal linear separating hyperplane? A: By maximizing the margin. Margin maximization – the distance between H_{+1} and H_{−1} is 2 / ||w||, so ||w|| should be minimized.
Constrained optimization problem – given the training set {x_i, y_i} (y_i ∈ {+1, −1}), minimize ½||w||² subject to y_i (w · x_i + b) ≥ 1. Lagrangian equation with saddle points: minimized w.r.t. the primal variables w and b; maximized w.r.t. the dual variables α_i (all α_i ≥ 0). An x_i with α_i > 0 (not α_i = 0) is called a support vector (SV).
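Not the dual QP of the slides: a hedged primal sketch that minimizes ½||w||² plus a hinge-loss penalty by sub-gradient descent, which finds the same kind of maximum-margin separator for linearly separable data (the learning rate and epoch count are assumptions):

```python
import numpy as np

def linear_svm(X, y, C=1.0, lr=1e-3, epochs=200):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):                      # y_i in {+1, -1}
            if y_i * (np.dot(w, x_i) + b) < 1:          # margin violated: hinge term active
                w += lr * (C * y_i * x_i - w)
                b += lr * C * y_i
            else:
                w -= lr * w                             # only the regularizer pulls on w
    return w, b
```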
Soft margin (non-separable case) – introduce slack variables ξ_i; optimization with the additional constraint that the dual variables are bounded by C (0 ≤ α_i ≤ C). Non-linear SVM – map the non-linear input to a feature space; kernel function k(x, y) = ⟨Φ(x), Φ(y)⟩; kernel classifier built on the support vectors s_i. (Figure: mapping from input space to feature space.)
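A sketch of that kernel-classifier form, f(x) = sign(Σ_i α_i y_i k(s_i, x) + b), over the support vectors s_i; the RBF kernel choice and the coefficients passed in are assumptions:

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_predict(x, support_vectors, alphas, labels, b=0.0):
    # decision value built only from the support vectors s_i
    score = sum(a * y * rbf_kernel(s, x)
                for s, a, y in zip(support_vectors, alphas, labels)) + b
    return 1 if score > 0 else -1
```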
Memory architecture and decomposition strategy. Decomposition: Task (e.g., Word, IE, …), Data (e.g., scientific problems), Pipelining (task + data). Shared memory – Symmetric Multiprocessor (SMP); OpenMP, POSIX threads (pthread), MPI; easy to manage but expensive. Distributed memory – commodity, off-the-shelf processors; MPI; cost-effective but hard to maintain (Barney, 2007).
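A hedged data-decomposition sketch with mpi4py (an assumed choice of MPI binding, not named in the slides): each rank works on one chunk of the data and the partial results are reduced at rank 0:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

data = np.arange(1_000_000, dtype=float) if rank == 0 else None
chunk = comm.scatter(np.array_split(data, size) if rank == 0 else None, root=0)

partial = chunk.sum()                                # local work on this rank's chunk
total = comm.reduce(partial, op=MPI.SUM, root=0)     # combine partial results
if rank == 0:
    print(total)
```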
Shrinking – recall that only the support vectors (α_i > 0) are used in the SVM optimization; predict whether each datum is an SV or a non-SV and remove the non-SVs from the problem space. Parallel SVM – partition the problem; each unit finds its support vectors; merge the data hierarchically; loop until convergence (Graf, 2005).
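A hedged sketch of one pass of that cascade idea, with scikit-learn's SVC as an assumed stand-in for the per-unit solver (the full algorithm would feed the merged result back and loop until the support vectors stabilize):

```python
from sklearn.svm import SVC
import numpy as np

def cascade_svm_pass(X, y, n_parts=4):
    parts = np.array_split(np.random.permutation(len(X)), n_parts)
    sv_idx = []
    for p in parts:                                   # each unit finds its support vectors
        svc = SVC(kernel="linear").fit(X[p], y[p])
        sv_idx.extend(p[svc.support_])                # map local SV indices back to global
    sv_idx = np.array(sv_idx)
    return SVC(kernel="linear").fit(X[sv_idx], y[sv_idx])   # merge SVs and retrain
```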
Editor's Notes

  • #6: Inductive Learning – extract rules, patterns, or information out of massive data (e.g., decision tree, clustering, …) Deductive Learning – require no additional input, but improve performance gradually (e.g., advice taker, …)