Linear Regression
Wajahat Hussain
Acknowledgement
● These slides are mainly inspired by the online course offered by Prof. Andrew Ng (Stanford University) on Coursera.
● The slides and videos are available online at:
Coursera: https://www.coursera.org/learn/machine-learning
Youtube: https://www.youtube.com/watch?v=qeHZOdmJvFU&list=PLZ9qNFMHZ-A4rycgrgOYma6zxF4BZGGPW
Regression? Which curve better represents the data pattern?
[Three scatter plots of House Price vs House Size, fitted with:]
● θ0 + θ1x
● θ0 + θ1x + θ2x²
● θ0 + θ1x + θ2x² + θ3x³
● x = house size
● Which curve better predicts the price of a house?
Examples: Regression
● House price prediction
[Scatter plot: House Price (10–30) vs House Size (5–20), with "Your House Size" marked on the x-axis]
Examples: Regression
● GPA prediction
[Scatter plot: GPA (1–4) vs FSc Marks (600–900)]
● Regression: predict a continuous-valued output.
● Supervised learning: we are given the right answer for each example.
Examples: Regression
● Current prediction
● Reinventing Ohm's Law: V = I × R
[Scatter plot: Current (10–30) vs Voltage Applied (5–20)]
Examples: Regression
● Predicting the score of a rain-affected match, e.g., Duckworth-Lewis
[Scatter plot: Runs scored in the remaining overs (10–30) vs Overs Remaining (5–20)]
Regression? Why not just fit a curve?
[The same three fits of House Price vs House Size: θ0 + θ1x; θ0 + θ1x + θ2x²; θ0 + θ1x + θ2x² + θ3x³]
● x = house size
● Which curve better predicts the price of a house?
Linear Regression: How to choose the line?
[Scatter plot: House Price vs House Size with three candidate lines, each of the form θ0 + θ1x]
● How do we automatically choose the best line from the infinitely many possible lines?
Regression Notation
Training set of housing prices:

Size in feet² (x)    House Price in $1000s (y)
2104                 460
1416                 232
1534                 315
...                  ...

● m = number of training examples
● x = input variable / feature
● y = output or target variable
● (x, y) = one training example
● (x(i), y(i)) = the i-th training example
● x(1) = 2104, x(2) = 1416, y(1) = 460
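This notation maps directly onto arrays. A minimal sketch in NumPy, using the values from the table (note that Python indexing is 0-based, so x(1) is x[0]):

```python
import numpy as np

# Training set from the table above
x = np.array([2104, 1416, 1534])   # size in feet^2
y = np.array([460, 232, 315])      # price in $1000s

m = len(x)      # m = number of training examples (3 shown here)
print(x[0])     # x(1) = 2104  (Python is 0-indexed)
print(x[1])     # x(2) = 1416
print(y[0])     # y(1) = 460
```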
Regression
[Flow diagram: Training Set → Learning Algorithm → hypothesis h; then House Size (x) → h → Estimated Price h(x)]
hθ(x) = θ0 + θ1x
● Linear regression with one variable: there is a single input variable x.
● Also called univariate linear regression.
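As a sketch, this hypothesis is a one-line function; the parameter values below are illustrative placeholders, not learned values:

```python
def h(x, theta0, theta1):
    """Univariate linear hypothesis: h_theta(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

# Illustrative placeholder parameters (not learned values)
print(h(2104, 50.0, 0.1))   # estimated price (in $1000s) for a 2104 ft^2 house
```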
How to choose θi's automatically?
● Hypothesis: hθ(x) = θ0 + θ1x
● θ0, θ1: parameters
● Let's choose θ0 and θ1 so that hθ(x) is close to y for our training examples (x, y).
[Scatter plot: House Price vs House Size with candidate lines of the form θ0 + θ1x]

Cost function (squared error):

J(θ0, θ1) = (1/2m) Σ_{i=1..m} (hθ(x(i)) − y(i))² = (1/2m) Σ_{i=1..m} (θ0 + θ1x(i) − y(i))²

Goal: minimize J(θ0, θ1) over θ0, θ1, i.e., minimize the squared-error cost function.
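A direct NumPy translation of this cost, as a sketch (the data is the housing table from the notation slide):

```python
import numpy as np

def cost(theta0, theta1, x, y):
    """Squared-error cost: J = (1/2m) * sum over i of (h(x(i)) - y(i))^2."""
    m = len(x)
    predictions = theta0 + theta1 * x            # h_theta(x(i)) for every example
    return np.sum((predictions - y) ** 2) / (2 * m)

x = np.array([2104.0, 1416.0, 1534.0])
y = np.array([460.0, 232.0, 315.0])
print(cost(0.0, 0.2, x, y))   # cost of the line h(x) = 0.2x on this training set
```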
How to choose θi's automatically?
● Hypothesis: hθ(x) = θ0 + θ1x, with parameters θ0, θ1.
● Let's set θ0 = 0. Simplified hypothesis: hθ(x) = θ1x
● Cost function: J(θ1) = (1/2m) Σ_{i=1..m} (θ1x(i) − y(i))²
● Goal: minimize J(θ1) over θ1.
[Left plot: training points (1,1), (2,2), (3,3), which lie on the line hθ(x) = x, with the candidate line θ1 = 0.5; right plot: J(θ1) vs θ1]
For θ1 = 0.5: J(0.5) = ((1−0.5)² + (2−1)² + (3−1.5)²)/(2×3) ≈ 0.58
For θ1 = 1.5, on the same training points: J(1.5) = ((1−1.5)² + (2−3)² + (3−4.5)²)/(2×3) ≈ 0.58
For θ1 = 1: J(1) = ((1−1)² + (2−2)² + (3−3)²)/(2×3) = 0. The line with θ1 = 1 fits the training data exactly, so the cost is zero.
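The three worked values are easy to check in code. A sketch with the training points (1, 1), (2, 2), (3, 3) read off the plots:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # training inputs
y = np.array([1.0, 2.0, 3.0])   # targets: the data lies exactly on y = x
m = len(x)

def J(theta1):
    """Simplified cost with theta0 = 0: J(theta1) = (1/2m) * sum((theta1*x - y)^2)."""
    return np.sum((theta1 * x - y) ** 2) / (2 * m)

print(J(0.5))   # ~0.58
print(J(1.5))   # ~0.58
print(J(1.0))   # 0.0 -- theta1 = 1 fits the data exactly
```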
How to choose θi's automatically? Outline:
● Have some function J(θ1); goal: minimize J(θ1) over θ1.
● Start with some θ1, e.g., θ1 = 0.5.
● Keep changing θ1 to reduce J(θ1) until we reach the minimum.
[Plots: hθ(x) = θ1x on the training data; J(θ1) vs θ1]
● What counts as the minimum? J = 0, or J below some small eps.
[Plot: J(θ1) vs θ1 with tangent slopes drawn in blue, red, magenta, and yellow]
● Which of the following is true?
● The blue slope (gradient) is negative.
● The red slope (gradient) is positive.
● The magenta slope is less negative than the blue slope.
● The yellow slope is close to zero.
● If the slope is negative, you want to increase θ1; if the slope is positive, you want to decrease θ1.
Gradient Descent Algorithm
The update rule moves θ1 against the sign of the slope, so both cases above are handled automatically:

θ1 := θ1 − α · (d/dθ1) J(θ1)    (learning rate α; here α = 1)
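For the simplified one-parameter cost, the derivative works out to (d/dθ1) J(θ1) = (1/m) Σ (θ1x(i) − y(i)) x(i), so the update loop is only a few lines. A sketch on the toy data; the starting point, step count, and α = 0.1 are illustrative choices (on this particular data the slide's α = 1 would actually overshoot):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
m = len(x)

theta1 = 0.5   # start with some theta1, e.g., 0.5
alpha = 0.1    # learning rate (illustrative choice)

for step in range(50):
    grad = np.sum((theta1 * x - y) * x) / m   # dJ/dtheta1
    theta1 = theta1 - alpha * grad            # gradient descent update

print(theta1)   # converges toward 1.0, the minimizer of J
```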
Learning rate α: Large vs Small
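As a hedged sketch of the contrast on the same toy data (the values 0.01 and 0.5 are illustrative): a small α converges slowly, while a too-large α overshoots the minimum and diverges.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
m = len(x)

def run_gradient_descent(alpha, steps=20):
    theta1 = 0.5
    for _ in range(steps):
        grad = np.sum((theta1 * x - y) * x) / m   # dJ/dtheta1
        theta1 -= alpha * grad
    return theta1

print(run_gradient_descent(0.01))  # small alpha: creeps slowly toward 1.0
print(run_gradient_descent(0.5))   # large alpha: overshoots and diverges on this data
```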
What does the cost function J(θ0, θ1) look like?
[3-D surface / contour plots of J(θ0, θ1), with a start point marked]
● Does it matter where we start from?
● Is the solution unique?
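One way to see the answer is to evaluate J(θ0, θ1) on a grid: for linear regression with the squared-error cost, the surface is a convex bowl, so there is a single global minimum regardless of the start point. A sketch (the grid ranges are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
m = len(x)

theta0_grid = np.linspace(-2, 2, 81)
theta1_grid = np.linspace(-1, 3, 81)

# Evaluate J(theta0, theta1) at every grid point
J = np.array([[np.sum((t0 + t1 * x - y) ** 2) / (2 * m) for t1 in theta1_grid]
              for t0 in theta0_grid])

i, j = np.unravel_index(np.argmin(J), J.shape)
print(theta0_grid[i], theta1_grid[j])   # single bowl bottom, near (0, 1)
```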
● Multivariate linear regression: multiple features (x1, x2, …, xn).
● Previously we had univariate linear regression, with a single feature x.
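A sketch of the multivariate hypothesis hθ(x) = θ0 + θ1x1 + … + θnxn as a vectorized dot product, with x0 = 1 prepended for the intercept; the feature list and θ values here are illustrative, not from the slides:

```python
import numpy as np

def h(theta, x):
    """Multivariate hypothesis: h_theta(x) = theta . x, with x[0] = 1."""
    return np.dot(theta, x)

# Illustrative features: [size in ft^2, bedrooms, age in years]
features = np.array([2104.0, 3.0, 45.0])
x = np.concatenate(([1.0], features))        # x = [1, x1, x2, x3]
theta = np.array([80.0, 0.1, 5.0, -0.5])     # illustrative parameters
print(h(theta, x))                           # estimated price in $1000s
```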
Regression? Why not just fit a curve? (revisited)
[The same three fits of House Price vs House Size: θ0 + θ1x; θ0 + θ1x + θ2x²; θ0 + θ1x + θ2x² + θ3x³]
● x = house size
● Which curve better predicts the price of a house?
● How do we craft new features?
● Hand-crafted features, e.g., polynomial terms such as x² and x³ built from the raw house size x.
● Is it possible to auto-create new features? Yes.
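Hand-crafting polynomial features turns curve fitting back into linear regression: add x² and x³ as extra columns, then solve for θ linearly. A sketch using NumPy's least-squares solver; the data is synthetic, for illustration only:

```python
import numpy as np

# Synthetic house sizes and prices, for illustration only
sizes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
prices = np.array([2.1, 4.3, 9.2, 17.0, 28.1])

# Hand-crafted features: one row [1, x, x^2, x^3] per example
X = np.column_stack([np.ones_like(sizes), sizes, sizes**2, sizes**3])

# Least-squares fit of theta for the cubic hypothesis
theta, *_ = np.linalg.lstsq(X, prices, rcond=None)
print(theta)        # [theta0, theta1, theta2, theta3]
print(X @ theta)    # fitted prices from the cubic curve
```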