Linear Regression
Dr. Varun Kumar
Outline
1 Linear Regression:
2 Basic Rule for Linear Regression:
3 Parametric Estimation in Linear Regression
4 Linear Regression of Higher Order
5 Introduction to Quadratic or Polynomial Regression
6 References
Linear Regression:
Linear regression is an approach to model the relationship between a scalar
response (or dependent variable) and one or more explanatory variables (or
independent variables). The case of one explanatory variable is called
simple linear regression.
y = mx + c

Here, y → dependent variable, x → independent variable, m → gradient (slope), and c → intercept.

or, more generally,

y = a_1 x_1 + a_2 x_2 + \cdots + a_k x_k + b

Here, y → dependent variable; x_1, x_2, \ldots, x_k → independent variables; b → intercept.
Basic Rule for Linear Regression:
Table for Doing Bi-variate Linear Regression
y        x        x̂ − x         ŷ − y
y1       x1       x̂ − x1        ŷ − y1
y2       x2       x̂ − x2        ŷ − y2
...      ...      ...           ...
yk−1     xk−1     x̂ − xk−1      ŷ − yk−1
yk       xk       x̂ − xk        ŷ − yk

where ŷ = mean(y) and x̂ = mean(x).

m = \frac{\sum_{i=1}^{k} (\hat{x} - x_i)(\hat{y} - y_i)}{\sum_{i=1}^{k} (\hat{x} - x_i)^2}
Based on these observations, the estimated straight-line equation is
y − ŷ = m(x − x̂), i.e., y = mx + c, where c = ŷ − m x̂.
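As a concrete illustration, a minimal NumPy sketch of this tabular computation; the x and y values below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical sample data: x (independent), y (dependent).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

x_hat, y_hat = x.mean(), y.mean()   # x̂ = mean(x), ŷ = mean(y)

# Slope from the tabulated deviations: m = Σ(x̂ − xi)(ŷ − yi) / Σ(x̂ − xi)²
m = np.sum((x_hat - x) * (y_hat - y)) / np.sum((x_hat - x) ** 2)
c = y_hat - m * x_hat               # intercept: c = ŷ − m·x̂

print(f"y = {m:.3f} x + {c:.3f}")
```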
Continued–
Note:
1 There may exist an infinite number of lines, one for each choice of gradient and
intercept value.
2 The best-fit line in linear regression is the straight line that
gives the minimum least-squares error.
3 In linear regression, the goodness of fit of the straight line can also be quantified by the
R² value; the closer R² is to 1, the better the fit.
4 The above straight line is one possible solution, but it cannot be said to be the true
estimate that gives the minimum mean square error.
Calculation of R²

R^2 = \frac{\sum_{i=1}^{k} (y_{p,i} - \hat{y})^2}{\sum_{i=1}^{k} (y_i - \hat{y})^2}

where y_{p,i} → predicted value from the straight line for the i-th sample,
and y_i → value of the i-th sample of the y vector.
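A short sketch of this R² computation on the same kind of hypothetical data; np.polyfit is used here only to obtain the least-squares line:

```python
import numpy as np

# Hypothetical data, as in the previous sketch.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

m, c = np.polyfit(x, y, deg=1)   # least-squares straight line
y_p = m * x + c                  # predicted values from the line
y_hat = y.mean()

# R² as defined above: explained variation over total variation.
r2 = np.sum((y_p - y_hat) ** 2) / np.sum((y - y_hat) ** 2)
print(f"R^2 = {r2:.4f}")         # close to 1 → good fit
```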
Parametric Estimation in Linear Regression
Mean Square Error:
MSE = \frac{1}{N} \sum_{i=1}^{N} (y_{p,i} - y_i)^2
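For example, a one-line NumPy version of this definition, with hypothetical predicted and observed values:

```python
import numpy as np

y_i = np.array([2.1, 3.9, 6.2, 7.8, 10.1])   # observed samples (hypothetical)
y_p = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # predictions from a fitted line
mse = np.mean((y_p - y_i) ** 2)              # MSE = (1/N)·Σ(y_p − y_i)²
print(f"MSE = {mse:.4f}")
```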
Parametric Estimation in Linear Regression
Let the linear regression model be as follows:

r = f(x) + ε

f(x) is an unknown function, which must be estimated properly.
G(x|θ) → linear estimator that estimates the unknown function f(x).

Assuming ε ∼ N(0, σ²), we have p(r|x) ∼ N(G(x|θ), σ²).
Maximum likelihood is used to learn the parameters θ.
Continued–
The pairs (x^t, r^t) in the training set are drawn from an unknown joint
probability density p(x, r), which we can write as

p(x, r) = p(r|x) p(x)

Let χ = \{x^t, r^t\}_{t=1}^{N} → training set.
Log-likelihood Function:

L(θ|χ) = \log p(r, x) = \log p(r|x) + \log p(x)
       = \log\Big(\prod_{t=1}^{N} p(r^t|x^t)\Big) + \log\Big(\prod_{t=1}^{N} p(x^t)\Big)
Note: We can ignore the second term since it does not depend on our
estimator.
L(θ|χ) = \log \prod_{t=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(r^t - G(x^t|θ))^2}{2\sigma^2}\right)
       = -N \log(\sqrt{2\pi}\,\sigma) - \frac{1}{2\sigma^2} \sum_{t=1}^{N} \left(r^t - G(x^t|θ)\right)^2
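A numerical illustration of this expression, under the stated Gaussian noise assumption; the data, the true line, and σ below are all hypothetical. The slope that maximizes the log-likelihood is the one that minimizes the sum of squared errors, since the first term does not depend on θ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: r = f(x) + ε with f(x) = 2x + 1 and ε ~ N(0, σ²).
sigma = 0.5
x = np.linspace(0.0, 1.0, 50)
r = 2.0 * x + 1.0 + rng.normal(0.0, sigma, size=x.shape)

def log_likelihood(w1, w0):
    # L(θ|χ) = −N·log(√(2π)·σ) − (1/2σ²)·Σ(rᵗ − G(xᵗ|θ))²
    residuals = r - (w1 * x + w0)
    n = len(x)
    return (-n * np.log(np.sqrt(2 * np.pi) * sigma)
            - np.sum(residuals ** 2) / (2 * sigma ** 2))

# Maximizing L over a grid of slopes picks the same slope as least squares.
slopes = np.linspace(1.0, 3.0, 201)
best = slopes[np.argmax([log_likelihood(w1, 1.0) for w1 in slopes])]
print(f"ML slope (w0 fixed at 1.0): {best:.3f}")   # ≈ 2
```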
Continued–
Note:
In the log-likelihood function L(θ|χ), the first term does not depend on the
parameter θ.
Maximizing this is equivalent to minimizing the error function

E(θ|χ) = \frac{1}{2\sigma^2} \sum_{t=1}^{N} \left(r^t - G(x^t|θ)\right)^2
In linear regression, we have a linear model

G(x^t | w_1, w_0) = w_1 x^t + w_0
There are two unknowns, w_1 and w_0; hence two equations are required.
\sum_{t=1}^{N} r^t = N w_0 + w_1 \sum_{t=1}^{N} x^t \qquad (1)
Continued–
\sum_{t=1}^{N} r^t x^t = w_0 \sum_{t=1}^{N} x^t + w_1 \sum_{t=1}^{N} (x^t)^2 \qquad (2)
which can be written in vector–matrix form as Aw = y, where

A = \begin{bmatrix} N & \sum_{t=1}^{N} x^t \\ \sum_{t=1}^{N} x^t & \sum_{t=1}^{N} (x^t)^2 \end{bmatrix}, \qquad
w = \begin{bmatrix} w_0 \\ w_1 \end{bmatrix}, \qquad
y = \begin{bmatrix} \sum_{t=1}^{N} r^t \\ \sum_{t=1}^{N} r^t x^t \end{bmatrix}

Hence, we can estimate the parameters w_0 and w_1 of the estimator G(x|θ)
from the training set χ = \{x^t, r^t\}_{t=1}^{N} as

w = A^{-1} y
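A sketch of this two-parameter estimation, building A and y from Eqs. (1)–(2) over hypothetical synthetic data. np.linalg.solve is used rather than forming A⁻¹ explicitly, which is numerically preferable but mathematically the same w = A⁻¹y:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=100)            # hypothetical inputs xᵗ
r = 1.5 * x + 4.0 + rng.normal(0.0, 1.0, 100)   # hypothetical targets rᵗ
N = len(x)

# Build A and y from Eqs. (1) and (2), then solve A·w = y.
A = np.array([[N,       x.sum()],
              [x.sum(), (x ** 2).sum()]])
y = np.array([r.sum(), (r * x).sum()])

w0, w1 = np.linalg.solve(A, y)
print(f"w0 ≈ {w0:.3f}, w1 ≈ {w1:.3f}")          # ≈ (4, 1.5)
```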
Linear Regression of Higher Order:
Example:
1 Let a mathematical input–output relation be such that

r = f(x) + ε

where the unknown function f(x) is estimated by the linear estimator

G(x|θ) = a_1 x_1 + a_2 x_2 + a_3 x_3 + b

where x = \{x_1, x_2, x_3\}, θ = \{a_1, a_2, a_3, b\}, and the training data set is
χ = \{r^t, x_1^t, x_2^t, x_3^t\}_{t=1}^{N}.

Ans. There are four unknowns (a_1, a_2, a_3, b); hence four equations are
required, obtained from the N training tuples:
\sum_{t=1}^{N} r^t = b N + a_1 \sum_{t=1}^{N} x_1^t + a_2 \sum_{t=1}^{N} x_2^t + a_3 \sum_{t=1}^{N} x_3^t \qquad (3)
Continued–
\sum_{t=1}^{N} r^t x_1^t = b \sum_{t=1}^{N} x_1^t + a_1 \sum_{t=1}^{N} (x_1^t)^2 + a_2 \sum_{t=1}^{N} x_1^t x_2^t + a_3 \sum_{t=1}^{N} x_1^t x_3^t \qquad (4)

\sum_{t=1}^{N} r^t x_2^t = b \sum_{t=1}^{N} x_2^t + a_1 \sum_{t=1}^{N} x_2^t x_1^t + a_2 \sum_{t=1}^{N} (x_2^t)^2 + a_3 \sum_{t=1}^{N} x_2^t x_3^t \qquad (5)

\sum_{t=1}^{N} r^t x_3^t = b \sum_{t=1}^{N} x_3^t + a_1 \sum_{t=1}^{N} x_3^t x_1^t + a_2 \sum_{t=1}^{N} x_3^t x_2^t + a_3 \sum_{t=1}^{N} (x_3^t)^2 \qquad (6)
As in the previous example, this can be expressed as Aw = y, where

y = \Big[\sum_{t=1}^{N} r^t, \; \sum_{t=1}^{N} r^t x_1^t, \; \sum_{t=1}^{N} r^t x_2^t, \; \sum_{t=1}^{N} r^t x_3^t\Big]^T
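A sketch of the four-parameter case, assuming synthetic data with made-up true coefficients. Stacking a column of ones onto the inputs reproduces exactly the system (3)–(6) in the compact form XᵀXw = Xᵀr:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
X = rng.normal(size=(N, 3))                 # hypothetical x₁ᵗ, x₂ᵗ, x₃ᵗ
r = X @ np.array([1.0, -2.0, 0.5]) + 3.0 + rng.normal(0.0, 0.1, N)

# Augment with a column of ones so the bias b is estimated jointly;
# the normal equations (3)–(6) are exactly Xᵃᵀ·Xᵃ·w = Xᵃᵀ·r.
Xa = np.column_stack([np.ones(N), X])       # columns: 1, x1, x2, x3
w = np.linalg.solve(Xa.T @ Xa, Xa.T @ r)    # w = (b, a1, a2, a3)
print(np.round(w, 3))                       # ≈ [3, 1, -2, 0.5]
```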
Quadratic or Polynomial Regression:
For quadratic regression, our estimator can be modeled as

G(x|θ) = a_2 x^2 + a_1 x + a_0

Here, θ = \{a_2, a_1, a_0\}.
For polynomial regression, our estimator can be modeled as

G(x|θ) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0

Here, θ = \{a_n, a_{n-1}, \ldots, a_0\}.
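A sketch of quadratic regression on hypothetical noisy data. Since the model is linear in the coefficients, this is still a linear least-squares problem on the features (x², x, 1); np.polyfit solves exactly that system and generalizes to any degree n:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-2.0, 2.0, 100)
r = 1.0 - 2.0 * x + 0.5 * x ** 2 + rng.normal(0.0, 0.1, x.shape)

# np.polyfit returns the least-squares coefficients θ = (a2, a1, a0),
# highest degree first.
a2, a1, a0 = np.polyfit(x, r, deg=2)
print(f"a2 ≈ {a2:.3f}, a1 ≈ {a1:.3f}, a0 ≈ {a0:.3f}")   # ≈ (0.5, -2, 1)
```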
References
E. Alpaydin, Introduction to Machine Learning. MIT Press, 2020.
T. M. Mitchell, The Discipline of Machine Learning. Carnegie Mellon University,
School of Computer Science, Machine Learning Department, 2006, vol. 9.
J. Grus, Data Science from Scratch: First Principles with Python. O'Reilly Media,
2019.