zekeLabs
Support Vector Machines
Learning made Simpler !
www.zekeLabs.com
Agenda
● Introduction to SVM
● Advantages & Disadvantages of SVM
● Classification Algo using SVM
● Linearly Separable Data
● Non-linearly Separable Data
● Understanding Kernels
● Gamma & C for RBF kernel
● Unbalanced Data
● Regression using SVM
● Kernels in Regression
● Novelty Detection
● Additional
● Applications
Introduction to Support Vector Machine
● First developed in the mid-1960s by Vladimir Vapnik and Alexey Chervonenkis.
Support Vector Machine
Classification Regression Outlier Detection
Advantages of SVM
● Effective in high-dimensional spaces.
● Still effective when the number of dimensions is greater than the number of samples.
● Uses a subset of training points in the decision function (called support
vectors), so it is also memory efficient.
● Versatile: different Kernel functions can be specified for the decision
function. Common kernels are provided, but it is also possible to specify
custom kernels.
Disadvantages of SVM
● Very sensitive to hyper-parameters.
● Different kernels need different parameters.
● SVMs do not directly provide probability estimates; these are calculated
using an expensive five-fold cross-validation.
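A hedged sketch of the last point, assuming scikit-learn and an illustrative toy dataset: `SVC` only exposes `predict_proba` when fitted with `probability=True`, which triggers the internal cross-validation mentioned above and is noticeably slower.

```python
# Sketch only: illustrative data. probability=True enables probability
# estimates via internal cross-validation (Platt scaling), at extra cost.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel="linear", probability=True).fit(X, y)
print(clf.predict_proba(X[:2]))  # per-class probabilities; rows sum to 1
```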
Classification using SVM
Algo
● This algorithm looks for a linearly separable hyperplane, or a decision
boundary separating members of one class from the other.
● If such a hyperplane does not exist, SVM uses a nonlinear mapping to
transform the training data into a higher dimension. Then it searches for
the linear optimal separating hyperplane.
● With an appropriate nonlinear mapping to a sufficiently high dimension,
data from two classes can always be separated by a hyperplane.
● The SVM algorithm finds this hyperplane using support vectors and
margins.
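The steps above can be sketched with scikit-learn's `SVC`; the dataset and parameter values here are illustrative assumptions, not the slide's.

```python
# Fit a linear SVC on separable toy data; only a subset of the training
# points end up as support vectors defining the separating hyperplane.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(X, y)
print(len(clf.support_vectors_), "support vectors out of", len(X), "points")
```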
Linearly Separable Data
● Among the infinite straight lines possible to
separate the red from blue balls, find the
optimal one.
● Intuitively it is clear that if a line passes too
close to any of the points, that line will be
more sensitive to small changes in one or
more points.
● Hyperplane Generalization
Maximum Margin Classifier
● A natural choice of separating hyperplane
is the optimal margin hyperplane (also known
as the optimal separating hyperplane), which
is farthest from the observations.
● Find the hyperplane that gives the
largest minimum distance to the training
examples, i.e. find the maximum
margin.
Soft-margin Classifier
● A maximum margin classifier may not exist in practice.
● Extending the maximum margin to a soft margin, a small amount of data is
allowed to cross the margins or even the separating hyperplane.
● The Support Vector Machine maximizes the soft margin.
● A larger C imposes a larger penalty for errors and thus a narrower soft
margin.
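A hedged sketch of the C trade-off, on illustrative overlapping data: a smaller C tolerates more margin violations, so more points typically become support vectors.

```python
# Vary C on overlapping blobs and count support vectors.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, cluster_std=2.0, random_state=0)
counts = {}
for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    counts[C] = int(clf.n_support_.sum())
    print(f"C={C}: {counts[C]} support vectors")
```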
Data non-linearly Separable
● Can’t really draw a line to
separate yellow from purple
Data non-linearly Separable
● If such a hyperplane does not
exist, SVM uses a nonlinear
mapping to transform the training
data into a higher dimension.
● Transformation: Z = X^2 + Y^2
● Now, we see a plane exists
separating them both
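The slide's idea can be reproduced as a sketch: points on two concentric rings are not linearly separable in (x, y), but appending z = x^2 + y^2 makes a separating plane appear. The data generation here is an illustrative assumption.

```python
# Two concentric rings: a linear SVM fails in 2-D but succeeds once the
# extra feature z = x^2 + y^2 is appended.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
r = np.where(np.arange(200) < 100, 1.0, 3.0)   # inner ring vs outer ring
X = np.c_[r * np.cos(theta), r * np.sin(theta)]
y = (r > 2).astype(int)

X3 = np.c_[X, (X ** 2).sum(axis=1)]            # lift to 3-D with z = x^2 + y^2
flat = SVC(kernel="linear").fit(X, y).score(X, y)
lifted = SVC(kernel="linear").fit(X3, y).score(X3, y)
print("2-D accuracy:", flat, " 3-D accuracy:", lifted)
```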
Kernels
● We don’t have to do the transformation manually.
● This is done via the kernel trick
Kernel Tricks
Linear Poly RBF Custom
Comparing Kernels - Classification
RBF
Intuitively, the gamma parameter defines how far the
influence of a single training example reaches, with
low values meaning ‘far’ and high values meaning
‘close’.
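A hedged sketch of that intuition, on illustrative data: with a large gamma each training example's influence is "close", so the model fits the training set very tightly; a small gamma yields a smoother boundary.

```python
# Vary gamma for an RBF-kernel SVC and watch training accuracy climb
# as the boundary wraps ever more tightly around the training points.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.3, random_state=0)
acc = {}
for gamma in (0.01, 1.0, 100.0):
    clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)
    acc[gamma] = clf.score(X, y)
    print(f"gamma={gamma}: training accuracy {acc[gamma]:.2f}")
```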
RBF & Gamma
RBF & C
● The C parameter trades off misclassification of training examples against
simplicity of the decision surface.
● A low C makes the decision surface smooth, while a high C aims at
classifying all training examples correctly by giving the model freedom to
select more samples as support vectors.
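Since gamma and C interact, they are commonly tuned together; a hedged sketch using scikit-learn's `GridSearchCV` with illustrative parameter ranges:

```python
# Joint grid search over C and gamma with 5-fold cross-validation.
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.3, random_state=0)
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```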
RBF & C
XOR Data
Decision boundaries for C = 1, C = 10, and C = 10000 (plots)
RBF - Gamma & C
Unbalanced Data
● Find the optimal separating hyperplane
using an SVC for classes that are
unbalanced.
● Parameter - class_weight
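A hedged sketch of `class_weight` on illustrative imbalanced data: `"balanced"` scales C inversely to class frequencies, which typically raises recall on the minority class.

```python
# Compare an unweighted SVC with class_weight="balanced" on a 950:50 split.
from sklearn.datasets import make_blobs
from sklearn.metrics import recall_score
from sklearn.svm import SVC

X, y = make_blobs(n_samples=[950, 50], centers=[[0.0, 0.0], [2.0, 2.0]],
                  cluster_std=1.5, random_state=0)
plain = SVC(kernel="linear").fit(X, y)
weighted = SVC(kernel="linear", class_weight="balanced").fit(X, y)
r_plain = recall_score(y, plain.predict(X))        # minority-class recall
r_weighted = recall_score(y, weighted.predict(X))
print("minority recall without weights:", round(r_plain, 2))
print("minority recall with balanced weights:", round(r_weighted, 2))
```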
Regression using SVM
Foundations
● y = f(x) + noise
● This is achieved by training
the SVM model on a sample set,
i.e. the training set, a process that
involves sequential optimization
of an error function.
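A minimal SVR sketch of y = f(x) + noise; the sine target and parameter values are illustrative assumptions.

```python
# Fit an RBF-kernel SVR to noisy samples of a sine curve.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)   # f(x) + noise

svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
print("R^2 on training data:", round(svr.score(X, y), 3))
```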
Comparing Kernels - Regression
Novelty Detection using SVM
One Class SVM
● The training data is not polluted by outliers,
and we are interested in detecting
anomalies in new observations.
● One-class SVM is an unsupervised
algorithm that learns a decision function
for novelty detection: classifying new data
as similar or different to the training set.
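A hedged sketch of novelty detection with `OneClassSVM`: train on clean data, then classify new observations. The data and the `nu`/`gamma` values are illustrative assumptions.

```python
# Train a one-class SVM on clean Gaussian data, then score two new points:
# one near the training distribution, one far outside it.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(200, 2))       # clean training data
X_new = np.array([[0.1, -0.2], [6.0, 6.0]])     # one similar, one novel

oc = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.1).fit(X_train)
pred = oc.predict(X_new)
print(pred)   # +1 means similar to the training set, -1 means novelty
```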
Additional : Custom Kernel
● You can also use your own kernels by passing a function to the kernel
keyword in the constructor.
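A sketch of that mechanism: any function taking two data matrices and returning the Gram matrix can be passed as `kernel`. The hand-written linear kernel and toy data here are illustrative.

```python
# A custom kernel is just a callable returning K(A, B) as a matrix.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

def my_kernel(A, B):
    # Hand-written linear kernel: K(a, b) = a . b
    return A @ B.T

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel=my_kernel).fit(X, y)
print("training accuracy:", clf.score(X, y))
```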
Applications
Image Classification
Text Classification
Thank You !!!
Visit : www.zekeLabs.com for more details
Let us know how can we help your organization to Upskill the
employees to stay updated in the ever-evolving IT Industry.
Get in touch:
www.zekeLabs.com | +91-8095465880 | info@zekeLabs.com