Gradient Descent in Linear
Regression
A linear regression model attempts to explain the relationship between a dependent
variable (the output) and one or more independent variables (the predictors)
using a straight line.
In the simple, one-predictor case, this straight line is represented by the following formula:

y = mx + c

Where,
y: Dependent variable
x: Independent variable
m: Slope of the line (for a unit increase in x, y increases by m × 1 = m units)
c: y-intercept (the value of y when x is 0)
The first step in finding a linear regression equation is to determine whether there is a
relationship between the two variables. We can check this using the correlation
coefficient and a scatter plot. When the correlation coefficient suggests the data is
likely to be able to predict future outcomes, and a scatter plot of the data appears
to form a straight line, we can use simple linear regression to find a predictive
function. Let us consider an example.
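The correlation check described above can be sketched in plain Python. The marketing-spend and sales figures here are made up for illustration; the article's actual dataset is not shown.

```python
import math

# Hypothetical marketing-spend vs. sales data (illustrative numbers only).
marketing = [10, 15, 20, 25, 30, 35, 40]
sales = [25, 32, 41, 49, 60, 65, 75]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# A value close to 1 indicates a strong positive linear relationship,
# so simple linear regression is a reasonable model for this data.
print(pearson_r(marketing, sales))
```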
From the scatter plot we can see there is a linear relationship between sales
and marketing spend. The next step is to find a straight line between Sales and
Marketing that explains the relationship between them. But many different
lines can pass through these points.
So how do we know which of these lines is the best-fit line? That is the
problem we will solve in this article. For this we will first look at the
cost function.
Cost Function
The cost is the error in our predicted values. We will use the Mean Squared
Error (MSE) function to calculate the cost:

E = (1/n) × Σ (yᵢ − (m·xᵢ + c))²

where n is the number of data points, yᵢ is the actual value, and m·xᵢ + c is
the predicted value.
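The MSE cost can be written as a short Python function; the data points below are hypothetical, chosen to lie exactly on the line y = 2x + 1 so the behavior at a perfect fit is easy to see.

```python
def mse(xs, ys, m, c):
    """Mean Squared Error of the line y = m*x + c on the data (xs, ys)."""
    n = len(xs)
    return sum((y - (m * x + c)) ** 2 for x, y in zip(xs, ys)) / n

# Toy data (hypothetical): points that lie exactly on y = 2x + 1.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]

print(mse(xs, ys, 2, 1))  # perfect fit: prints 0.0
print(mse(xs, ys, 0, 0))  # poor fit (the flat line y = 0): prints 41.0
```

A lower MSE means the line is closer to the data, which is why minimizing this quantity yields the best-fit line.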
Our goal is to minimize the cost as much as possible in order to
find the best-fit line. We are not going to try every combination
of m and c to find the best-fit line; that would be inefficient.
Instead, we will use the Gradient Descent algorithm.
Gradient Descent Algorithm
Gradient Descent is an algorithm that finds the best-fit line for a given training dataset in a
relatively small number of iterations.
If we plot the MSE against m and c, the surface forms a bowl shape (as shown in the diagram below).
For some combination of m and c we get the least error (MSE). That
combination of m and c gives us our best-fit line.
The algorithm starts with some values of m and c (usually m = 0, c = 0).
We calculate the MSE (cost) at the point m = 0, c = 0. Say the MSE at m = 0,
c = 0 is 100. We then adjust m and c by a small amount (the learning step)
and observe the MSE decrease. We continue doing this until
our loss function reaches a very small value, ideally 0 (which would mean
zero error).
Step-by-Step Algorithm
1. Let m = 0 and c = 0, and let L be our learning rate. A small
value like 0.01 works well in practice.
The learning rate controls the size of each update step during gradient
descent. Setting it too high makes the path unstable; setting it too low makes
convergence slow. Setting it to zero means the model does not learn anything from the
gradients.
2. Calculate the partial derivative of the cost function with respect to m. Let the partial
derivative with respect to m be Dm (it measures how much the cost function changes for a
small change in m):

Dm = (−2/n) × Σ xᵢ(yᵢ − ŷᵢ)

Similarly, find the partial derivative with respect to c. Let the partial derivative of the
cost function with respect to c be Dc (how much the cost function changes for a small change
in c):

Dc = (−2/n) × Σ (yᵢ − ŷᵢ)

where ŷᵢ = m·xᵢ + c is the current prediction.
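Step 2 can be sketched directly in Python. The function below follows the partial-derivative formulas for Dm and Dc, using the same toy data (points on y = 2x + 1) introduced earlier as an assumption.

```python
def gradients(xs, ys, m, c):
    """Partial derivatives Dm, Dc of the MSE cost with respect to m and c."""
    n = len(xs)
    dm = (-2 / n) * sum(x * (y - (m * x + c)) for x, y in zip(xs, ys))
    dc = (-2 / n) * sum(y - (m * x + c) for x, y in zip(xs, ys))
    return dm, dc

# Toy data (hypothetical): points that lie exactly on y = 2x + 1.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]

print(gradients(xs, ys, 0, 0))  # far from the minimum: large gradients
print(gradients(xs, ys, 2, 1))  # at the perfect fit: both gradients vanish
```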
3. Now update the current values of m and c using the following equations:

m = m − L × Dm
c = c − L × Dc

4. Repeat steps 2 and 3 until the cost function becomes very small (ideally 0).
The Gradient Descent algorithm gives the optimum values of m and c. With these values
of m and c we get the equation of the best-fit line and are ready to make
predictions.
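The whole procedure above can be put together in a minimal sketch, assuming the same hypothetical marketing-spend vs. sales numbers used earlier (the article's real dataset is not shown). The learning rate and epoch count are illustrative choices, not tuned values.

```python
# Hypothetical marketing-spend vs. sales data (illustrative numbers only).
xs = [10, 15, 20, 25, 30, 35, 40]
ys = [25, 32, 41, 49, 60, 65, 75]

def gradient_descent(xs, ys, lr=0.001, epochs=20000):
    """Fit y = m*x + c by repeatedly stepping m and c against the MSE gradient."""
    m, c = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        preds = [m * x + c for x in xs]
        # Partial derivatives Dm and Dc of the MSE cost (step 2).
        dm = (-2 / n) * sum(x * (y - p) for x, y, p in zip(xs, ys, preds))
        dc = (-2 / n) * sum(y - p for y, p in zip(ys, preds))
        # Update rule (step 3): move m and c against the gradient.
        m -= lr * dm
        c -= lr * dc
    return m, c

m, c = gradient_descent(xs, ys)
print(f"best-fit line: y = {m:.3f}x + {c:.3f}")
```

With this data the loop converges close to the ordinary least-squares solution; new sales values can then be predicted with `m * x + c`.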
