Jong Youl Choi, Computer Science Department (jychoi@cs.indiana.edu)
Machine Learning and Statistical Analysis
Principles of Machine Learning: Bayes’ theorem and maximum likelihood. Machine Learning Algorithms: clustering analysis, dimension reduction, classification. Parallel Computing: general parallel computing architecture, parallel algorithms.
Definition: algorithms or techniques that enable a computer (machine) to “learn” from data; related to many areas such as data mining, statistics, and information theory. Algorithm types: unsupervised learning, supervised learning, reinforcement learning. Topics: Models – Artificial Neural Network (ANN), Support Vector Machine (SVM); Optimization – Expectation-Maximization (EM), Deterministic Annealing (DA).
Posterior probability of θ_i given X (θ_i ∈ Θ: parameter, X: observations): P(θ_i | X) = P(X | θ_i) P(θ_i) / P(X), where P(θ_i) is the prior (or marginal) probability and P(X | θ_i) is the likelihood. Maximum Likelihood (ML): used to find the most plausible θ_i ∈ Θ given X. Computing the maximum likelihood (or log-likelihood) is an optimization problem.
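The relationship between prior, likelihood, and posterior can be illustrated with a small numerical sketch. The example below (not from the slides; the candidate parameters and data are made up) evaluates the posterior over a discrete set of candidate means and picks the maximum-likelihood one.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical observations X and candidate parameters theta_i (means of a unit-variance Gaussian).
X = np.array([1.8, 2.1, 2.4, 1.9, 2.2])
thetas = np.array([0.0, 1.0, 2.0, 3.0])
prior = np.full(len(thetas), 1.0 / len(thetas))      # P(theta_i): uniform prior

# Likelihood P(X | theta_i) = product of per-point densities; log-likelihood is more stable.
log_lik = np.array([norm.logpdf(X, loc=t, scale=1.0).sum() for t in thetas])

# Posterior P(theta_i | X) is proportional to P(X | theta_i) P(theta_i).
unnorm = np.exp(log_lik) * prior
posterior = unnorm / unnorm.sum()

print("log-likelihoods:", log_lik)
print("posterior      :", posterior)
print("ML estimate    :", thetas[np.argmax(log_lik)])   # maximizing likelihood = optimization
```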
Problem: estimate the hidden parameters (θ = {μ, σ}) from data drawn from k Gaussian distributions. Gaussian distribution: P(x | μ, σ²) = (1 / √(2πσ²)) exp(−(x − μ)² / (2σ²)). Maximum Likelihood: with a Gaussian model (P = N(μ, σ²)), solve either by brute force or by a numerical method (Mitchell, 1997).
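As a sketch of the two routes mentioned above (numerical optimization vs. closed form), the snippet below fits a single Gaussian by maximizing the log-likelihood with a generic optimizer and compares it with the analytic ML estimates; the data are synthetic and the optimizer choice is an assumption.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=500)          # synthetic data

def neg_log_likelihood(params):
    mu, log_sigma = params                            # optimize log(sigma) to keep sigma > 0
    return -norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)).sum()

res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

print("numeric ML :", mu_hat, sigma_hat)
print("analytic ML:", x.mean(), x.std())              # closed-form ML estimates
```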
Problems in ML estimation: the observation X is often not complete; a latent (hidden) variable Z exists; it is hard to explore the whole parameter space. Expectation-Maximization algorithm. Objective: find the ML estimate over the latent distribution P(Z | X, θ). Steps: 0. Init – choose a random θ_old. 1. E-step – compute the expectation under P(Z | X, θ_old). 2. M-step – find the θ_new which maximizes the (expected) likelihood. 3. Update θ_old ← θ_new and go to step 1.
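A minimal EM loop for a one-dimensional mixture of k Gaussians, following the E/M steps listed above; the initialization, fixed iteration count, and synthetic data are simplifying assumptions.

```python
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, k=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)                    # 0. init: random theta_old
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # 1. E-step: responsibilities P(Z | X, theta_old)
        dens = np.array([p * norm.pdf(x, m, s) for p, m, s in zip(pi, mu, sigma)])
        resp = dens / dens.sum(axis=0)
        # 2. M-step: theta_new maximizing the expected log-likelihood
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        pi = nk / len(x)                     # 3. theta_old <- theta_new, repeat
    return pi, mu, sigma

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 200)])
print(em_gmm_1d(data, k=2))
```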
Definition: grouping unlabeled data into clusters, for the purpose of inferring hidden structures or information. Dissimilarity measurement – Distance: Euclidean (L2), Manhattan (L1), …; Angle: inner product, …; Non-metric: rank, intensity, … Types of clustering – Hierarchical: agglomerative or divisive; Partitioning: K-means, VQ, MDS, … (Matlab help page)
Find K partitions with the total intra-cluster variance minimized. Iterative method – Initialization: randomized y_i; Assignment of x (y_i fixed); Update of y_i (x fixed). Problem? The iteration can get trapped in local minima (MacKay, 2003).
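A bare-bones K-means sketch of the assignment/update iteration described above (random data, fixed iteration count, no restarts, so it can still get trapped in local minima).

```python
import numpy as np

def kmeans(x, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    y = x[rng.choice(len(x), k, replace=False)]            # initialization: randomized y_i
    for _ in range(n_iter):
        # assignment of x (y_i fixed): nearest center by Euclidean distance
        labels = np.argmin(((x[:, None, :] - y[None, :, :]) ** 2).sum(-1), axis=1)
        # update of y_i (x fixed): mean of the assigned points (keep old center if empty)
        y = np.array([x[labels == j].mean(axis=0) if np.any(labels == j) else y[j]
                      for j in range(k)])
    return y, labels

rng = np.random.default_rng(2)
x = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ((0, 0), (3, 3), (0, 3))])
centers, labels = kmeans(x, k=3)
print(centers)
```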
Deterministically avoid local minima: no stochastic process (random walk); trace the global solution by changing the level of randomness. Statistical mechanics: Gibbs distribution; Helmholtz free energy F = D − TS, with average energy D = <E_x>, entropy S = −Σ P(E_x) ln P(E_x), and F = −T ln Z. In DA, we minimize F. (Maxima and Minima, Wikipedia)
Analogy to the physical annealing process: control the energy (randomness) by temperature (high → low). Starting with high temperature (T = ∞): soft (or fuzzy) association probabilities; a smooth cost function with one global minimum. Lowering the temperature (T → 0): hard association; the full complexity is revealed and clusters emerge. Iteratively minimize F, using E(x, y_j) = ||x − y_j||².
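A rough sketch of deterministic-annealing clustering with E(x, y_j) = ||x − y_j||²: soft Gibbs associations at temperature T, center updates weighted by those associations, and a geometric cooling schedule. The schedule and constants below are assumptions, not values from the lecture.

```python
import numpy as np

def da_cluster(x, k, t_start=10.0, t_end=0.01, cool=0.9, seed=0):
    rng = np.random.default_rng(seed)
    y = x.mean(axis=0) + 0.01 * rng.standard_normal((k, x.shape[1]))  # start near the centroid
    T = t_start
    while T > t_end:
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)    # E(x, y_j) = ||x - y_j||^2
        p = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)  # Gibbs (soft) association
        p /= p.sum(axis=1, keepdims=True)
        y = (p.T @ x) / p.sum(axis=0)[:, None]                 # minimize F at this temperature
        T *= cool                                              # lower the temperature
    return y

rng = np.random.default_rng(3)
x = np.vstack([rng.normal(c, 0.4, size=(150, 2)) for c in ((0, 0), (4, 1), (2, 5))])
print(da_cluster(x, k=3))
```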
Definition: the process of transforming high-dimensional data into a low-dimensional representation, to improve accuracy and understanding or to remove noise. Curse of dimensionality: complexity grows exponentially in volume as extra dimensions are added. Types – Feature selection: choose representatives (e.g., filters, …); Feature extraction: map to a lower dimension (e.g., PCA, MDS, …) (Koppen, 2000)
Finding a map of the principal components (PCs) of the data into an orthogonal space, such that y = Wx, where W ∈ R^(d×h) (h ≪ d). PCs: the variables with the largest variances. Properties: orthogonality; linearity – optimal least mean-square error. Limitations? Strict linearity, assumption of a specific distribution, and the large-variance assumption. (Figure: data in the x1–x2 plane with principal axes PC1 and PC2.)
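A compact PCA sketch via the eigendecomposition of the sample covariance matrix, keeping the h components with the largest variances; the synthetic data and the choice of h are only for illustration.

```python
import numpy as np

def pca(x, h):
    xc = x - x.mean(axis=0)                      # center the data
    cov = np.cov(xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)         # eigh: covariance matrix is symmetric
    order = np.argsort(eigval)[::-1][:h]         # PCs = directions of largest variance
    W = eigvec[:, order]                         # d x h projection matrix
    return xc @ W, W                             # projected coordinates and W

rng = np.random.default_rng(4)
x = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 5))   # correlated features
y, W = pca(x, h=2)
print(y.shape, W.shape)                          # (200, 2) (5, 2)
```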
Like PCA, dimension reduction by y = Rx, where R is a random matrix with i.i.d. columns and R ∈ R^(d×p) (p ≪ d). Johnson–Lindenstrauss lemma: when projecting to a randomly selected subspace, distances are approximately preserved. Generating R: it is hard to obtain an orthogonalized R; Gaussian R; a simple approach chooses r_ij ∈ {+√3, 0, −√3} with probabilities 1/6, 4/6, and 1/6, respectively.
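The sparse ±√3 construction mentioned above (entries with probabilities 1/6, 4/6, 1/6) can be sketched as follows; the target dimension and the distance check are illustrative assumptions.

```python
import numpy as np

def sparse_random_projection(x, p, seed=0):
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    vals = np.array([+np.sqrt(3), 0.0, -np.sqrt(3)])
    R = rng.choice(vals, size=(d, p), p=[1/6, 4/6, 1/6])     # r_ij in {+sqrt(3), 0, -sqrt(3)}
    return x @ R / np.sqrt(p)                                # scale to roughly preserve norms

rng = np.random.default_rng(5)
x = rng.standard_normal((100, 1000))
y = sparse_random_projection(x, p=50)

# Johnson-Lindenstrauss flavour: pairwise distances are approximately preserved.
i, j = 0, 1
print(np.linalg.norm(x[i] - x[j]), np.linalg.norm(y[i] - y[j]))
```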
Dimension reduction preserving the distance proximities observed in the original data set. Loss functions: inner product, distance, squared distance. Classical MDS: minimizing STRAIN, given the dissimilarity matrix Δ – from Δ, find the inner product matrix B (double centering); from B, recover the coordinates X′ (i.e., B = X′X′ᵀ).
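A sketch of classical MDS as described: double-center the squared-distance matrix to obtain B, then recover coordinates from its top eigenpairs; the input here is a synthetic distance matrix.

```python
import numpy as np

def classical_mds(D, h=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                    # double centering: B = X'X'^T
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1][:h]
    L = np.sqrt(np.clip(eigval[order], 0, None))   # clip small negative eigenvalues
    return eigvec[:, order] * L                    # recovered coordinates X'

rng = np.random.default_rng(6)
X = rng.standard_normal((30, 5))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # Delta: observed distances
X_hat = classical_mds(D, h=2)
print(X_hat.shape)                                 # (30, 2)
```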
SMACOF: minimizing STRESS. Majorization – for a complex f(x), find an auxiliary simple g(x, y) such that g(x, y) ≥ f(x) for all x, with equality at x = y. Majorization for STRESS: minimize tr(Xᵀ B(Y) Y), known as the Guttman transform (Cox, 2001).
Competitive and unsupervised learning process for clustering and visualization. Result: similar data get closer in the model space. Input, model, learning: choose the model vector m_j most similar to x_i; update the winner and its neighbors by m_k ← m_k + α(t) h(t)(x_i − m_k), where α(t) is the learning rate and h(t) is the neighborhood size (function).
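A toy SOM sketch with a one-dimensional array of model vectors, a Gaussian neighborhood h(t), and a decaying learning rate α(t); the grid size, schedules, and data are all illustrative assumptions.

```python
import numpy as np

def som_1d(x, n_nodes=10, n_epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    m = x[rng.choice(len(x), n_nodes)]                       # model vectors m_k
    grid = np.arange(n_nodes)
    for t in range(n_epochs):
        alpha = 0.5 * (1 - t / n_epochs)                     # alpha(t): learning rate
        radius = max(n_nodes / 2 * (1 - t / n_epochs), 1.0)  # h(t): neighborhood size
        for xi in x[rng.permutation(len(x))]:
            j = np.argmin(((m - xi) ** 2).sum(axis=1))       # best-matching model vector m_j
            h = np.exp(-((grid - j) ** 2) / (2 * radius ** 2))
            m += alpha * h[:, None] * (xi - m)               # update the winner and neighbors
    return m

rng = np.random.default_rng(7)
data = rng.uniform(0, 1, size=(500, 2))
print(som_1d(data).round(2))
```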
Definition: a procedure for dividing data into a given set of categories, based on a training set, in a supervised way. Generalization vs. specialization: hard to achieve both. Avoiding overfitting (overtraining): early stopping, holdout validation, K-fold cross-validation, leave-one-out cross-validation. (Overfitting, Wikipedia) (Figure: training vs. validation error curves, marking the underfitting and overfitting regimes.)
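K-fold cross-validation as listed above can be sketched without any library support; the "model" here is a trivial nearest-mean classifier and is only a stand-in.

```python
import numpy as np

def k_fold_indices(n, k, seed=0):
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)                      # k disjoint validation folds

def nearest_mean_fit(x, y):
    return {c: x[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(model, x):
    classes = list(model)
    means = np.stack([model[c] for c in classes])
    return np.array(classes)[np.argmin(((x[:, None] - means) ** 2).sum(-1), axis=1)]

rng = np.random.default_rng(8)
x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

scores = []
for fold in k_fold_indices(len(x), k=5):
    train = np.setdiff1d(np.arange(len(x)), fold)      # hold one fold out for validation
    model = nearest_mean_fit(x[train], y[train])
    scores.append((nearest_mean_predict(model, x[fold]) == y[fold]).mean())
print("5-fold accuracy:", np.mean(scores))
```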
Perceptron: a computational unit with a binary threshold (weighted sum followed by an activation function). Abilities: linearly separable decision surfaces; can represent Boolean functions (AND, OR, NOT). A network (multilayer) of perceptrons yields various network architectures and capabilities. (Jain, 1996)
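A minimal perceptron (weighted sum plus binary threshold) trained with the classic error-correction update on a linearly separable toy problem; the learning rate and epoch count are arbitrary.

```python
import numpy as np

def perceptron_train(x, y, lr=0.1, epochs=50):
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(x, y):                       # y in {+1, -1}
            out = 1 if w @ xi + b > 0 else -1          # weighted sum + binary threshold
            if out != yi:                              # error-correction update
                w += lr * yi * xi
                b += lr * yi
    return w, b

# AND function as a linearly separable example
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = perceptron_train(x, y)
print(w, b, [1 if w @ xi + b > 0 else -1 for xi in x])
```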
Learning weights: random initialization and iterative updating. Error-correction training rules: based on the difference between the training data and the output, E(t, o). Gradient descent (batch learning): with E = Σ E_i, update the weights along the negative gradient. Stochastic approach (on-line learning): update with the gradient for each example. Various error functions: adding a weight-regularization term (λ Σ w_i²) to avoid overfitting; adding a momentum term (α Δw_i(n−1)) to expedite convergence.
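A sketch of batch gradient descent for a single linear unit with squared error, including the two refinements mentioned above: an L2 weight-regularization term (λ Σ w_i²) and a momentum term on the previous update. The values of λ, the learning rate, and the momentum coefficient are made up.

```python
import numpy as np

def train_linear_unit(x, t, lr=0.05, lam=0.01, momentum=0.9, epochs=200):
    w = np.zeros(x.shape[1])
    dw_prev = np.zeros_like(w)
    for _ in range(epochs):
        o = x @ w                                   # unit output
        grad = x.T @ (o - t) / len(x) + lam * w     # dE/dw with L2 regularization
        dw = -lr * grad + momentum * dw_prev        # momentum expedites convergence
        w += dw
        dw_prev = dw
    return w

rng = np.random.default_rng(9)
x = rng.standard_normal((200, 3))
t = x @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
print(train_linear_unit(x, t))
```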
Q: How do we draw the optimal linear separating hyperplane? A: By maximizing the margin. Margin maximization: the distance between H_{+1} and H_{−1} is 2 / ||w||; thus, ||w|| should be minimized.
Constrained optimization problem. Given a training set {x_i, y_i} (y_i ∈ {+1, −1}): minimize (1/2)||w||² subject to y_i(w · x_i + b) ≥ 1. Lagrangian formulation with saddle points: minimized w.r.t. the primal variables w and b; maximized w.r.t. the dual variables α_i (all α_i ≥ 0). An x_i with α_i > 0 (not α_i = 0) is called a support vector (SV).
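The hard-margin dual above can be solved directly with a general-purpose constrained optimizer on a tiny separable data set; this only illustrates the saddle-point structure (maximize over α_i ≥ 0 with Σ α_i y_i = 0) and is not an efficient SVM solver.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
x = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

K = (x @ x.T) * np.outer(y, y)                        # y_i y_j <x_i, x_j>

def neg_dual(a):                                      # maximize the dual = minimize its negative
    return 0.5 * a @ K @ a - a.sum()

res = minimize(neg_dual, x0=np.zeros(len(y)),
               bounds=[(0, None)] * len(y),                              # alpha_i >= 0
               constraints=[{"type": "eq", "fun": lambda a: a @ y}],     # sum alpha_i y_i = 0
               method="SLSQP")
alpha = res.x
sv = alpha > 1e-5                                     # support vectors: alpha_i > 0
w = ((alpha * y)[:, None] * x).sum(axis=0)
b = np.mean(y[sv] - x[sv] @ w)                        # from y_i (w.x_i + b) = 1 on the SVs
print("support vectors:", sv.sum(), "w:", w, "b:", b)
```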
Soft margin (non-separable case): introduce slack variables ξ_i, which bounds the dual variables by C (0 ≤ α_i ≤ C) in the optimization. Non-linear SVM: map the non-linear input into a feature space; kernel function k(x, y) = ⟨φ(x), φ(y)⟩; kernel classifier built from the support vectors s_i. (Figure: mapping from input space to feature space.)
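The kernel classifier f(x) = Σ_i α_i y_i k(s_i, x) + b can be sketched with an RBF kernel; here the support vectors s_i, coefficients α_i, and bias b are assumed to come from some solver (for example, the dual sketch above with the linear kernel swapped for k), so the values below are placeholders.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))       # k(x, y) = <phi(x), phi(y)>

def kernel_classifier(x_new, support_vectors, alphas, labels, b, gamma=0.5):
    s = sum(a * yi * rbf_kernel(si, x_new, gamma)
            for a, yi, si in zip(alphas, labels, support_vectors))
    return np.sign(s + b)

# Placeholder support vectors and coefficients, purely for illustration.
support_vectors = np.array([[-1.0, -1.0], [1.0, 1.0]])
alphas = np.array([0.8, 0.8])
labels = np.array([-1.0, 1.0])
b = 0.0

print(kernel_classifier(np.array([0.9, 1.2]), support_vectors, alphas, labels, b))   # -> 1.0
print(kernel_classifier(np.array([-0.8, -1.1]), support_vectors, alphas, labels, b)) # -> -1.0
```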
Memory architecture and decomposition strategy. Decomposition: task (e.g., word, IE, …); data (scientific problems); pipelining (task + data). Shared memory – Symmetric Multiprocessor (SMP): OpenMP, POSIX threads (pthreads), MPI; easy to manage but expensive. Distributed memory – commodity, off-the-shelf processors: MPI; cost-effective but hard to maintain. (Barney, 2007)
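As a minimal illustration of the data-decomposition strategy (independent of the SMP/MPI details above), the snippet below splits a data set across worker processes and merges the partial results; it uses Python's multiprocessing purely as a stand-in for OpenMP/MPI.

```python
import numpy as np
from multiprocessing import Pool

def partial_stats(chunk):
    # Each worker computes statistics on its own partition of the data.
    return chunk.sum(), (chunk ** 2).sum(), len(chunk)

if __name__ == "__main__":
    data = np.random.default_rng(11).standard_normal(1_000_000)
    chunks = np.array_split(data, 4)                 # data decomposition into 4 partitions
    with Pool(processes=4) as pool:
        parts = pool.map(partial_stats, chunks)
    s, ss, n = map(sum, zip(*parts))                 # merge the partial results
    mean = s / n
    print("mean:", mean, "var:", ss / n - mean ** 2)
```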
Shrinking – recall that only the support vectors (α_i > 0) are used in the SVM optimization: predict whether a data point is an SV or a non-SV, and remove non-SVs from the problem space. Parallel SVM: partition the problem; each unit finds its support vectors; merge the data hierarchically; loop until convergence (Graf, 2005).
Editor's Notes

  • #6: Inductive learning – extract rules, patterns, or information out of massive data (e.g., decision trees, clustering, …). Deductive learning – requires no additional input, but improves performance gradually (e.g., advice taker, …).