Regression analysis
Developed by
Dr. Ammara Khakwani
Regression:
The word means “to move backward” or
“to return to an earlier time or stage”.
Technically it means
● “The relationship between the mean
value of a random variable and the
corresponding values of one or more
independent variables.”
Formulation of an equation
“The analysis or measure of the
association between one variable
(the dependent variable) and one or more
variables (the independent variables),
usually formulated in an equation in
which the independent variables have
parametric coefficients, which may
enable future values of the dependent
variable to be predicted.”
Regression analysis is concerned
• Regression analysis is largely concerned
with estimating and/or predicting the
(population) mean value of the dependent
variable on the basis of the known or fixed
values of the explanatory variables. The
technique of linear regression is an
extremely flexible method for describing
data, and it is powerful enough to define
much of econometrics. The word
“regression” has stuck with us, even though
what we are really doing is estimating a
predictive line.
Suppose we want to find out whether some variable of
interest Y is driven by some other variable
X. Then we call Y the dependent variable and
X the independent variable. In addition,
suppose that the relationship between Y and
X is basically linear but inexact: besides its
determination by X, Y has a random
component µ, which we call the ‘disturbance’
or ‘error’. The simple linear model is
Yi = β1 + β2Xi + µi
where β1 and β2 are parameters: the Y-intercept
and the slope of the relationship.
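As a minimal sketch of how β1 and β2 can be estimated by ordinary least squares, here is the textbook formula applied to made-up data (the numbers and the helper name `ols_simple` are illustrative, not from the slides):

```python
# Minimal sketch: estimating beta1 (intercept) and beta2 (slope) of the
# simple linear model Y = beta1 + beta2*X + u by ordinary least squares.
# The data below are invented for illustration.

def ols_simple(x, y):
    """Return (intercept, slope) minimising the sum of squared errors."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # slope = Sxy / Sxx (sums of cross-deviations over squared deviations)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
b1, b2 = ols_simple(x, y)   # fitted intercept and slope
```

For these data the fitted line is roughly Y = 0.15 + 1.95X; the same two formulas reappear later in the deck under the OLS discussion.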
Regression analysis
• Helpful for:
• ● A manager making a hiring decision.
• ● An executive arriving at sales forecasts for a company.
• ● Describing the relationship between two or more variables.
• ● Finding out what the future holds before a decision can be made.
• ● Predicting revenues before a budget can be prepared.
• ● Relating a change in the price of a product to consumer demand for the
product.
• ● An economist may want to find out
• ◘ The dependence of personal consumption expenditure on after-tax
income. It will help him in estimating the marginal propensity to
consume, i.e. the change in consumption for a dollar’s worth of change in real income.
• ● A monopolist who can fix the price or output but not both may
want to find out the response of the demand for a product to
changes in price.
• A labor economist may want to study the rate of change
of money wages in relation to the unemployment rate.
• ● From monetary economics, it is known that, other
things remaining the same, the higher the rate of inflation
(п), the lower the proportion (k) of their income that
people would want to hold in the form of money. A
quantitative analysis of this relationship will enable the
monetary economist to predict the amount of money, as
a proportion of their income, that people would want to
hold at various rates of inflation.
Why study it:
• Tools of regression analysis and correlation analysis have been developed to study and measure the statistical
relationship that exists between two or more variables.
• ● In regression analysis, an estimating, or predicting, equation is developed to describe the pattern or functional
nature of the relationship that exists between the variables.
• ● Analyst prepares an estimating (or regression) equation to make estimates of values of one variable from given
values of the others.
• ● Our concern is with predicting
• ◘ The key-idea behind regression analysis is the statistical dependency of one variable, the dependent variable (y)
on one or more other variables the independent variables (x).
• ◘ The objective of such analysis is to estimate and predict the (mean) average value of the dependent variable on
the basis of the known or fixed values of the explanatory variables
• ◘ The success of regression analysis depends on the availability of the appropriate data.
• ◘ In any research, the researcher should clearly state the source of the data used in the analysis, their definitions, their
methods of collection, and any gaps or omissions in the data.
• ◘ It is often necessary to prepare a forecast – an explanation of what the future holds – before a decision can be
made. For example, it may be necessary to predict revenues before a budget can be prepared. These predictions
become easier if we develop the relationship between the variable to be predicted and some other variables related
to it.
• ◘ Computing regression (estimating equation) and then using it to provide an estimate of the value of the
dependent variable (response) y when given one or more values of the independent or explanatory variables(s) x.
• ◘ Computing measures that show the possible errors that may be involved in using the estimating equation as a
basis for forecasting.
• ◘ Preparing measures that show the closeness or correlation that exists between variables.
What does it provide?
• This method provides an equation
(model) for estimating or predicting the
average value of the dependent variable
(Y) from the known values of the
independent variable (X).
• Y is assumed to be random and the X
values are fixed.
Types of Regression Relationship
• The RELATION between the expected value
of the dependent variable and the
independent variable is called a regression
relation. When one dependent variable is
studied against a single independent variable,
it is called a simple or two-variable
regression. If the dependency of one variable
is studied on two or more independent
variables, it is called multiple regression. When
the dependency is represented by a straight-line
equation, the regression is said to be linear
regression; otherwise it is curvilinear.
Example of a model
• Consider a situation where a small ball is
being tossed up in the air and we measure its
heights of ascent hi at various moments in time ti.
Physics tells us that, ignoring drag, the
relationship can be modeled as
hi = β1ti + β2ti² + µi
• where β1 determines the initial velocity of the
ball, β2 is proportional to the standard gravity,
and µi is due to measurement errors. Linear
regression can be used to estimate the values
of β1 and β2 from the measured data. This model
is non-linear in the time variable, but it is linear in
the parameters β1 and β2; if we take
regressors xi = (xi1, xi2) = (ti, ti²), the model takes
on the standard form.
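The estimation step can be sketched as follows. The time points and the “true” values v0 = 10 and g/2 = 4.9 are invented for the example; with two parameters and regressors t and t², the least-squares problem reduces to a pair of normal equations solvable by Cramer’s rule:

```python
# Hedged sketch of the ball-toss example: heights are generated exactly from
# h = v0*t - (g/2)*t^2 (illustrative numbers), then the two parameters are
# recovered by least squares on the regressors t and t^2. The model is
# nonlinear in time t but linear in the parameters, so OLS applies.

def fit_quadratic_through_origin(t, h):
    """Solve the 2x2 normal equations for h ~ b1*t + b2*t^2 (no intercept)."""
    s_t2 = sum(ti ** 2 for ti in t)
    s_t3 = sum(ti ** 3 for ti in t)
    s_t4 = sum(ti ** 4 for ti in t)
    s_th = sum(ti * hi for ti, hi in zip(t, h))
    s_t2h = sum(ti ** 2 * hi for ti, hi in zip(t, h))
    det = s_t2 * s_t4 - s_t3 ** 2
    b1 = (s_th * s_t4 - s_t2h * s_t3) / det   # Cramer's rule
    b2 = (s_t2 * s_t2h - s_t3 * s_th) / det
    return b1, b2

t = [0.2, 0.4, 0.6, 0.8, 1.0]
h = [10 * ti - 4.9 * ti ** 2 for ti in t]      # v0 = 10, g/2 = 4.9
b1, b2 = fit_quadratic_through_origin(t, h)    # recovers 10 and -4.9
```

Because the heights here contain no noise, the fitted coefficients reproduce the generating values up to floating-point error.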
Applications of linear regression
• Linear regression is widely used in
biological, behavioral and social
sciences to describe possible
relationships between variables. It
ranks as one of the most important
tools used in these disciplines.
Correlation:
• In statistics, dependence refers to any
statistical relationship between two random
variables or two sets
of data. Correlation refers to any of a broad
class of statistical relationships involving
dependence. Formally, dependence refers to
any situation in which random variables do
not satisfy a mathematical condition
of probabilistic independence.
Pearson correlation coefficient
• In loose usage, correlation can refer to any
departure of two or more random variables
from independence, but technically it refers
to any of several more specialized types of
relationship between mean values. There
are several correlation coefficients, often
denoted ρ or r, measuring the degree of
correlation. The most common of these is
the Pearson correlation coefficient
Correlation Coefficient
• The population correlation coefficient
ρX,Y between two random
variables X and Y with expected
values μX and μY and standard
deviations σX and σY is defined as:
ρX,Y = corr(X, Y) = cov(X, Y) / (σX σY) = E[(X − μX)(Y − μY)] / (σX σY)
• where E is the expected
value operator, cov means covariance,
and corr is a widely used alternative
notation for Pearson’s correlation.
The sample correlation coefficient is
r = Σ(xi − x̄)(yi − ȳ) / [(n − 1) sx sy]
where x̄ and ȳ are the sample means of X and Y,
and sx and sy are the sample standard deviations of X and Y.
If x and y are results of measurements that contain measurement error,
the realistic limits on the correlation coefficient are not −1 to +1 but a
smaller range.
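A small sketch of computing r directly from this definition (the data points are invented for the example):

```python
# Sample Pearson correlation coefficient r, computed from its definition
# via sums of deviations from the means. Data invented for illustration.

def pearson_r(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    # Equivalent to sxy / ((n-1) * sx * sy) with sample standard deviations
    return sxy / (sxx * syy) ** 0.5

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]           # perfectly linear, so r = 1
r = pearson_r(x, y)
```

Because y is an exact linear function of x with positive slope, r comes out as exactly +1, the upper limit of the coefficient’s range.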
Regression Versus Correlation:
• They are closely related but conceptually
different. In correlation analysis we measure
the strength or degree of linear association
between two variables, but in regression
analysis we try to estimate or predict the
average value of one variable on the basis of
the fixed values of other variables. For example:
with correlation we can measure the relation
between smoking and lung cancer; with
regression we can predict the average score
on a statistics exam from a student’s
score on a mathematics exam.
Regression Versus Correlation:
• They have some fundamental differences.
Correlation theory is based on the assumption of
randomness of the variables, whereas regression theory
assumes that the dependent variable is stochastic
and the explanatory variables are fixed or non-stochastic.
In correlation there is no distinction
between dependent and explanatory variables.
• In correlation analysis, the purpose is to measure
the strength or closeness of the relationship
between the variables.
• ♦ What is the pattern of the existing relationship?
Q, and solution
• The statistical relationship between
the error terms and the regressor
plays an important role in
determining whether an estimation
procedure has desirable sampling
properties such as being unbiased
and consistent.
IMPORTANT TO NOTE
Trend line
• A trend line represents a trend, the long-term
movement in time series data after other
components have been accounted for. It tells
whether a particular data set (say GDP, oil prices
or stock prices) has increased or decreased over
a period of time. A trend line could simply be
drawn by eye through a set of data points, but
more properly its position and slope are
calculated using statistical techniques like linear
regression. Trend lines typically are straight lines,
although some variations use higher-degree
polynomials depending on the degree of curvature
desired in the line.
Trend line uses
• Trend lines are sometimes used in business
analytics to show changes in data over time. Trend
lines are often used to argue that a particular
action or event (such as training, or an advertising
campaign) caused observed changes at a point in
time. This is a simple technique, and does not
require a control group, experimental design, or a
sophisticated analysis technique. However, it
suffers from a lack of scientific validity in cases
where other potential changes can affect the data.
Scatter Diagram:
• To find out whether or not a relationship
between two variables exists, we plot the given
data, using the x-axis for the independent
variable and the y-axis for the dependent
variable. Such a diagram is called a Scatter
Diagram or a Scatter Plot. If the points show a
tendency toward a straight line, it is called a
regression line; if they follow a curve, it is
called a regression curve. The diagram also
shows the direction of the relationship.
Graph: Scatter Diagram
Data Graphs:
• Let us assume that a logical
relationship may exist between two
variables.
• To support further analysis we use a
graph to plot the available data. This is
called Scatter Diagram.
• X= Independent or explanatory
• Y= Dependent or response.
Purpose of Diagram:
• To see if there is a useful relationship
between the two variables.
• Determine the type of equation to use
to describe the relationship.
Example of Model Construction:
• Keynes’s consumption function.
• Consumption=C
• Income= X
• C= F(X)
• Consumption and income cannot be connected by any simple
deterministic relationship.
• Linear Model
• C= α + β X
• It is hopeless to attempt to capture every influence in the
relationship, so to incorporate the inherent randomness of its
real-world counterpart we write
• C= f(X, ε)
• ε =Stochastic element
• C= α + β X + ε
• Empirical counterpart to Keynes’s theoretical Model.
Example II: Earnings and Education
relationship
• Higher level of education is associated with higher income.
• Simple regression Model is
• Earnings = β1+ β2 education + ε
• [Old people may have higher incomes regardless of education.]
• If age and education are positively correlated, the regression model
will associate all the observed increases in earnings with education
rather than with age effects, so we add age:
• Earnings = β1 + β2 education + β3 age + ε
• We also observe that income tends to rise less rapidly in the later
years than in the early ones, so to accommodate this possibility:
• Earnings = β1 + β2 education + β3 age + β4 age² + ε
• β3 expected positive
• β4 expected negative
Different ways of linearity
Uses: unique effect of xj on y
• A fitted linear regression model can be
used to identify the relationship
between a single predictor variable
xj and the response variable y when all
the other predictor variables in the
model are “held fixed”. Specifically, the
interpretation of βj is the
expected change in y for a one-unit
change in xj when the other covariates
are held fixed. This is sometimes called
the unique effect of x j on y.
Simple and multiple linear
regression
• The very simplest case of a
single scalar predictor variable x and a
single scalar response variable y is
known as simple linear regression.
• The extension to multiple
and/or vector-valued predictor
variables (denoted with a capital X) is
known as multiple linear regression.
General linear models
• The general linear model considers the
situation when the response
variable Y is not a scalar but a vector.
Conditional linearity of E(y|x) = Bx is
still assumed, with a
matrix B replacing the vector β of the
classical linear regression model.
Multivariate analogues of OLS and
GLS have been developed.
Heteroscedasticity models
• Various models have been created that
allow for heteroscedasticity, i.e. the
errors for different response variables
may have different variances. For
example, weighted least squares is a
method for estimating linear regression
models when the response variables may
have different error variances, possibly
with correlated errors.
Generalized linear models
• Generalized linear models (GLMs) are a
framework for modeling a response
variable y that is bounded or discrete.
This is used, for example:
• when modeling positive quantities
• when modeling categorical data
• when modeling ordinal data.
Some common examples of
GLMs are:
• Poisson regression for count data.
• Logistic regression and Probit
regression for binary data.
• Multinomial logistic
regression and multinomial
probit regression for categorical data.
• Ordered probit regression for ordinal
data.
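As an illustration of one entry in this list, here is a hedged sketch of logistic regression for binary data, fitted by plain gradient ascent on the log-likelihood. The data, learning rate, and step count are arbitrary choices for the example, not a reference implementation:

```python
import math

# Sketch of logistic regression (a GLM for binary data), fitted by gradient
# ascent on the log-likelihood. Data are invented: the outcome mostly
# switches from 0 to 1 as x grows.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(x, y, lr=0.1, steps=5000):
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        # Gradient of the mean log-likelihood: residuals (y - p) times regressors
        g0 = sum(yi - sigmoid(b0 + b1 * xi) for xi, yi in zip(x, y)) / n
        g1 = sum((yi - sigmoid(b0 + b1 * xi)) * xi for xi, yi in zip(x, y)) / n
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

x = [0, 1, 2, 3, 4, 5]
y = [0, 0, 1, 0, 1, 1]
b0, b1 = fit_logistic(x, y)
p_mid = sigmoid(b0 + b1 * 2.5)   # predicted probability mid-range
```

Unlike OLS, the fitted values here are probabilities between 0 and 1, which is exactly what a GLM for binary data is designed to guarantee.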
Single index models
• These models allow some degree of nonlinearity in
the relationship between x and y, while
preserving the central role of the linear
predictor β′x as in the classical linear
regression model. Under certain
conditions, simply applying OLS to
data from a single-index model will
consistently estimate β up to a
proportionality constant.
Hierarchical linear models
• Hierarchical linear models (or multilevel
regression) organizes the data into a
hierarchy of regressions, for example
where A is regressed on B, and B is
regressed on C. It is often used where the
data have a natural hierarchical structure
such as in educational statistics, where
students are nested in classrooms,
classrooms are nested in schools, and
schools are nested in some administrative
grouping such as a school district.
Errors-in-variables
Errors-in-variables models extend the traditional
linear regression model to allow the predictor
variables X to be observed with error.
This error causes standard estimators
of β to become biased. Generally, the
form of bias is an attenuation, meaning
that the effects are biased toward zero.
Procedures developed
for parameter estimation
• A large number of procedures have been
developed for parameter estimation and
inference in linear regression. These
methods differ in computational simplicity
of algorithms, presence of a closed-form
solution, robustness with respect to heavy-
tailed distributions, and theoretical
assumptions needed to validate desirable
statistical properties such
as consistency and asymptotic efficiency.
Some of the more common estimation techniques for
linear regression
• Least-squares estimation and related
techniques
Ordinary least squares (OLS)
Generalized least squares (GLS)
Percentage least squares
Iteratively reweighted least squares (IRLS)
Instrumental variables regression (IV)
Total least squares (TLS)
Maximum-likelihood estimation and
related techniques
Ridge regression
Least absolute deviation (LAD)
regression
Adaptive estimation
Epidemiology
• Early evidence relating tobacco smoking to
mortality and morbidity came from observational
studies employing regression analysis. For
example, suppose we have a regression model in
which cigarette smoking is the independent
variable of interest, and the dependent variable is
life span measured in years. Researchers might
include socio-economic status as an additional
independent variable, to ensure that any observed
effect of smoking on life span is not due to some
effect of education or income. However, it is never
possible to include all possible confounding
variables in an empirical analysis.
Example
• For example, a hypothetical gene might increase
mortality and also cause people to smoke more.
For this reason, randomized controlled trials are
often able to generate more compelling evidence
of causal relationships than can be obtained
using regression analyses of observational data.
When controlled experiments are not feasible,
variants of regression analysis such
as instrumental variables regression may be used
to attempt to estimate causal relationships from
observational data.
Finance
• The capital asset pricing model uses
linear regression as well as the concept
of Beta for analyzing and quantifying
the systematic risk of an investment.
This comes directly from the Beta
coefficient of the linear regression
model that relates the return on the
investment to the return on all risky
assets.
Economics
• Linear regression is the predominant
empirical tool in economics. For
example, it is used to predict
consumption spending, fixed
investment spending, inventory
investment, purchases of a country's
exports, spending on imports, the
demand to hold liquid assets, labor
demand, and labor supply.
Environmental science
• Linear regression finds application
across a wide range of environmental
science problems. In Canada, the
Environmental Effects Monitoring
Program uses statistical analyses of
fish and benthic surveys to measure the
effects of pulp mill or metal mine
effluent on the aquatic ecosystem.
Simple Linear Model:
• The correlation coefficient may indicate that two variables are
associated with one another, but it does not give any idea of the
kind of relationship involved.
• We hypothesize that one variable (the dependent variable) is determined
by other variables, known as explanatory variables, independent
variables or regressors. The hypothesized mathematical
relationship linking them is known as the regression model. If
there is one regressor, it is described as a simple regression
model. If there are two or more regressors, it is described as
a multiple regression model.
• We would not expect to find an exact relationship between two
economic variables, so we acknowledge that the relationship is inexact
by explicitly including a random factor known as the disturbance term.
Simple Regression Model:
• Yi = β1 + β2Xi + εi
• It has two components: a deterministic part,
β1 + β2Xi, where β1 and β2 are fixed quantities
known as parameters and Xi is the value of the
explanatory variable, and a random part,
the disturbance εi.
Components of model
• Dependent variable (predictand,
regressand, response)
• is the variable to be estimated. It is plotted
on the vertical or y-axis of a chart and is
therefore identified by the symbol Y.
• Independent variable (regressor,
predictor, explanatory variable)
• is the one that presumably exerts an influence
on, or explains variations in, the dependent
variable. It is plotted on the x-axis, which is
why it is denoted by X.
We Must Know:
Two Things:
• The y-intercept, a: the value of Y when X
equals zero, which we can read on the y-axis.
• The slope, b: measure the change in Y on the
y-axis corresponding to a change of one unit
of the X-variable, then divide the change in Y
by the change in X.
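The two things above amount to a rise-over-run computation; a tiny sketch with invented points lying on a line:

```python
# Slope and intercept of a straight line from two points on it.
# The points are made up for illustration.
x1, y1 = 2.0, 7.0
x2, y2 = 5.0, 13.0
slope = (y2 - y1) / (x2 - x1)   # change in Y divided by change in X -> 2.0
intercept = y1 - slope * x1      # value of Y where X = 0 -> 3.0
```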
Graph
Deterministic and Probabilistic:
• Let us consider a set of n pairs of observations (Xi, Yi). If the relation
between the variables is exactly linear, then the mathematical equation
describing the linear relation is
• Yi = a + bXi
• where a = value of Y when X = 0 (the intercept)
• b = change in Y for a one-unit change in X (the slope of the
line)
• It is a deterministic model, e.g.
• C= f(X)
• Consumption function
• Y= a + b X
• (Area = п r²)
• But in some situations the relation is not exact, and we get what is
called a non-deterministic or probabilistic model:
• Yi = a + bXi + εi, where the εi are unknown random errors.
Simple Linear regression Model:
• We assume a linear relationship holds between X and Y:
• Yi = α + βXi + εi
• Xi = fixed, predetermined values
• Yi = observations drawn from the population
• εi = error components
• α, β = parameters
• α = intercept
• β = slope (regression coefficient)
• β is positive or negative based upon the direction of the relationship between X and
Y.
• Furthermore we assume:
• E(εi) = 0
• => E(Y|X) is a straight line in X
• Var(εi) = σ²
• εi ~ N(0, σ²)
• E(εi εj) = 0, i.e. cov = 0
• Xi and εi are independent of each other.
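A quick simulation sketch of these assumptions (the true values α = 2, β = 3 and σ = 1 are invented): if we generate data from the model and fit by OLS, the residuals average to zero by construction whenever an intercept is included, mirroring the assumption E(ε) = 0.

```python
import random

# Simulate Y = 2 + 3X + eps with eps ~ N(0, 1), fit by OLS, and check
# that the residuals average to (essentially) zero. Parameters invented.
random.seed(0)
x = [i / 10 for i in range(100)]
y = [2 + 3 * xi + random.gauss(0, 1) for xi in x]

mx, my = sum(x) / len(x), sum(y) / len(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
mean_resid = sum(residuals) / len(residuals)   # ~0 up to float error
```

The fitted slope lands close to the true β = 3, and with an intercept in the model the residual mean is zero to machine precision, which is the sample counterpart of E(εi) = 0.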
Multiple Linear Regression
Model:
• It is used to study the relationship between a dependent variable and two or
more independent variables.
• The form of the model is
• Y = f(X1, X2, X3, …, Xk) + ε
• Y = β1X1 + β2X2 + β3X3 + … + βkXk + ε
• Y = dependent or explained variable
• X1, …, Xk = independent or explanatory variables
• f(X1, X2, X3, …, Xk) = population regression equation of Y on X1, …, Xk
• Y = sum of a deterministic part and a random part
• Y = regressand
• X1, …, Xk = regressors, covariates
• For example, we take a demand equation
• Quantity = β1 + price × β2 + income × β3 + ε
• and the inverse demand equation
• Price = γ1 + γ2 × quantity + γ3 × income + u
• ε, u = disturbances, because they disturb the model:
we cannot hope to capture every influence.
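A hedged sketch of fitting such a multiple regression: the normal equations (XᵀX)b = Xᵀy are built and solved by Gaussian elimination. The demand data below are generated exactly from quantity = 100 − 2·price + 0.5·income, so the invented coefficients are recovered:

```python
# Multiple regression by solving the normal equations (X'X) b = X'y.
# Illustrative model: quantity = b1 + b2*price + b3*income + error.
# All numbers are made up; data are noise-free so the fit is exact.

def ols_multi(rows, y):
    """rows: regressor lists (each with a leading 1 for the intercept)."""
    k = len(rows[0])
    # Normal equations A b = c with A = X'X and c = X'y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for j in range(col, k):
                A[r][j] -= f * A[col][j]
            c[r] -= f * c[col]
    # Back substitution
    b = [0.0] * k
    for i in range(k - 1, -1, -1):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

data = [(10, 40), (12, 50), (8, 30), (15, 60), (9, 45)]  # (price, income)
rows = [[1.0, p, m] for p, m in data]
y = [100 - 2 * p + 0.5 * m for p, m in data]
b1, b2, b3 = ols_multi(rows, y)   # recovers 100, -2, 0.5
```

In practice one would use a linear-algebra library rather than hand-rolled elimination; the point here is only that multiple regression is the same least-squares idea with more regressors.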
Table (shown on slide)
What the table is showing:
• Output for a time period in dozens of units (Y).
• Aptitude test results for eight employees (X).
• ♦ It is a small sample of 8 employees.
• Q#1: Does the test do what it is supposed to do?
• Q#2: Will employees with higher scores be among the higher
producers?
• ♦ Every point on the diagram represents one employee’s (X, Y) pair of
observations.
• ♦ The points trace a path close to a straight line.
• ♦ So there is a linear relationship.
• ♦ It is a positive (+ve), i.e. direct, relationship.
Ordinary Least Squares (OLS)
Estimator:
• OLS is one of the simplest methods of linear
regression. The goal of OLS is to
closely fit a function to the data. It
does so by minimizing the sum of
squared errors from the data: we are
not trying to minimize the sum of
absolute errors, but rather the sum of
squared errors.
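A sketch of this criterion with invented data: among candidate lines, the least-squares fit has the smallest sum of squared residuals, so shifting it in any direction can only make the sum larger.

```python
# The OLS criterion: the fitted line minimises the sum of *squared*
# residuals. Data and the perturbation below are illustrative.

def sse(x, y, intercept, slope):
    """Sum of squared errors of the line y = intercept + slope*x."""
    return sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))

x = [1, 2, 3, 4]
y = [3, 5, 6, 10]
# OLS estimates from the usual formulas:
mx, my = sum(x) / 4, sum(y) / 4
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
best = sse(x, y, intercept, slope)
worse = sse(x, y, intercept + 0.5, slope)  # any other line does worse
```

Here the fitted line is Y = 0.5 + 2.2X; its sum of squared errors (1.8) is strictly smaller than that of the shifted line, which is exactly what “least squares” means.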
Linear regression model and
assumptions:
1. Linearity (model specification).
2. Homoscedasticity and non-autocorrelation
of the errors.
3. No exact linear relationship among any of the
independent variables in the model
(the identification condition).
4. Exogeneity of the independent variables.
LS OR OLS:
• The principle of least squares consists of determining
the values for the unknown parameters that will
minimize the sum of squares of the errors (or residuals),
where the errors are defined as the differences between
observed values and the corresponding values
predicted or estimated by the fitted model equation.
• The parameter values thus determined give the
least sum of squared errors and are known as the
least squares estimates.
Method of ordinary least square
(OLS):
• It is one of the econometric methods
that can be used to derive estimates of
the parameters of economic
relationships from statistical
observations.
Advantages of OLS
• 1) It is fairly simple compared with other
econometric techniques.
• 2) This method is used in a wide range of economic
relationships.
• 3) It is still one of the most commonly employed
methods for estimating relationships in econometric
models.
• 4) The mechanics of least squares are simple to
understand.
• 5) OLS is an essential component of most other
econometric techniques.
• 6) It is mathematically appealing compared to other
methods.
• 7) It is one of the most powerful and
popular methods of regression
analysis.
• 8) They can be easily computed.
• 9) They are point estimators: each
estimator provides only a single
value (point estimate).
• 10) Once the OLS estimates are
obtained from the sample data the
sample regression line can be easily
obtained.
Model Specification
• Economic theory does not specify whether
supply should be studied with a single-equation
model or with a simultaneous-equation model.
• We choose to start our investigation with a single-
equation model.
• Economic theory is also not clear about the
mathematical form (linear or non-linear).
• We start by assuming that the variables are related
in the simplest possible mathematical form: that
the relationship between quantity and price is
linear, of the form
• Y= a+ b X
Example:
• Quantity supplied of a commodity and its price.
• When the price rises, the quantity of the commodity supplied
increases.
• Step I:
• Specification of the supply model,
i.e.
• Dependent variable [regressand] = quantity supplied
• Explanatory variable [regressor] = price
• Y= f(X)
• Y = β1 + β2 X + ε
• [Variation in Y] = [Explained variation] + [Unexplained variation]
• β1, β2 = parameters of the supply function; our aim is to obtain
estimates of them.
• ε = unexplained variation, due in part to the methods of collecting
and processing statistical information.
Assumptions
• Weak exogeneity.
• Linearity.
• Constant variance (aka
homoscedasticity; its violation is
heteroscedasticity).
• Independence of errors.
• Lack of perfect multicollinearity.
More Related Content

PPTX
Regression Analysis
PPT
Heizer 07
ODP
Exploratory factor analysis
PPTX
Regression analysis.
PPTX
Research Report Writing
PDF
Social network analysis intro part I
PPT
Ppt econ 9e_one_click_ch02
DOCX
International economics notes
Regression Analysis
Heizer 07
Exploratory factor analysis
Regression analysis.
Research Report Writing
Social network analysis intro part I
Ppt econ 9e_one_click_ch02
International economics notes

What's hot (20)

PPT
Linear regression
PPTX
Regression Analysis
PDF
Regression analysis
PDF
Correlation and Regression
PDF
Introduction to correlation and regression analysis
PPT
Regression analysis
PPTX
Correlation and regression
PPTX
Correlation
PPTX
Regression vs correlation and causation
PPT
Regression analysis
PPT
Simple linear regression
PPTX
Rank correlation
PPTX
Correlation and regression
PDF
Econometrics notes (Introduction, Simple Linear regression, Multiple linear r...
PPTX
Spearman rank correlation coefficient
PPTX
Karl pearson's coefficient of correlation (1)
PPT
Correlation IN STATISTICS
PPTX
Basics of Regression analysis
PPT
Correlation and regression
PPT
Probability concept and Probability distribution
Linear regression
Regression Analysis
Regression analysis
Correlation and Regression
Introduction to correlation and regression analysis
Regression analysis
Correlation and regression
Correlation
Regression vs correlation and causation
Regression analysis
Simple linear regression
Rank correlation
Correlation and regression
Econometrics notes (Introduction, Simple Linear regression, Multiple linear r...
Spearman rank correlation coefficient
Karl pearson's coefficient of correlation (1)
Correlation IN STATISTICS
Basics of Regression analysis
Correlation and regression
Probability concept and Probability distribution
Ad

Viewers also liked (10)

PDF
Linear Regression Ordinary Least Squares Distributed Calculation Example
PPTX
Econometrics chapter 8
PPT
Linear regression
PPT
Hypothesis
PDF
Regression Analysis
PPT
Regression
PPT
Simple linear regression (final)
ODP
Multiple linear regression
PPTX
Presentation On Regression
PPT
Regression analysis ppt
Linear Regression Ordinary Least Squares Distributed Calculation Example
Econometrics chapter 8
Linear regression
Hypothesis
Regression Analysis
Regression
Simple linear regression (final)
Multiple linear regression
Presentation On Regression
Regression analysis ppt
Ad

Similar to Regression analysis (20)

PPTX
STATISTICAL REGRESSION MODELS
PPTX
Regression analysis refers to assessing the relationship between the outcome ...
PPTX
how to select the appropriate method for our study of interest
PPTX
ML4 Regression.pptx
PPT
correlation in Marketing research uses..
PPTX
12 rhl gta
PPTX
Regression
PPT
A presentation for Multiple linear regression.ppt
PPTX
Linear regression aims to find the "best-fit" linear line
PPTX
Dependence Techniques
PDF
Regression Analysis-Machine Learning -Different Types
PPTX
An Introduction to Regression Models: Linear and Logistic approaches
PDF
Regression
PPT
Data analysis test for association BY Prof Sachin Udepurkar
PPT
BRM-lecture-11.ppt
PPT
ders 8 Quantile-Regression.ppt
PDF
Antim-Prahar-Business-Statistics-And-Analysis-2025.pdf
PPTX
Regression
PPTX
DA//////////////////////////////////////// Unit 2.pptx
PPT
Economic statistics ii -unit 2 & 5-(theory)
STATISTICAL REGRESSION MODELS
Regression analysis refers to assessing the relationship between the outcome ...
how to select the appropriate method for our study of interest
ML4 Regression.pptx
correlation in Marketing research uses..
12 rhl gta
Regression
A presentation for Multiple linear regression.ppt
Linear regression aims to find the "best-fit" linear line
Dependence Techniques
Regression Analysis-Machine Learning -Different Types
An Introduction to Regression Models: Linear and Logistic approaches
Regression
Data analysis test for association BY Prof Sachin Udepurkar
BRM-lecture-11.ppt
ders 8 Quantile-Regression.ppt
Regression analysis

  • 2. Regression: means “to move backward”, “to return to an earlier time or stage”. Technically it means ● “The relationship between the mean value of a random variable and the corresponding values of one or more independent variables.”
  • 3. Formulation of an equation “The analysis or measure of the association between one variable (the dependent variable) and one or more other variables (the independent variables), usually formulated in an equation in which the independent variables have parametric coefficients, which may enable future values of the dependent variable to be predicted.”
  • 4. What regression analysis is concerned with • Regression analysis is largely concerned with estimating and/or predicting the (population) mean value of the dependent variable on the basis of the known or fixed values of the explanatory variables. Linear regression is an extremely flexible method for describing data, powerful enough to define much of econometrics. The word “regression” has stuck with us, although what the technique now does is estimate a predictive line.
  • 5. Suppose we believe that some variable of interest, Y, is driven by some other variable X. Then we call Y the dependent variable and X the independent variable. In addition, suppose that the relationship between Y and X is basically linear but inexact: besides its determination by X, Y has a random component µi, which we call the ‘disturbance’ or ‘error’. The simple linear model is Yi = β1 + β2Xi + µi, where β1 and β2 are parameters: the y-intercept and the slope of the relationship.
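The simple linear model above can be estimated by ordinary least squares. A minimal sketch follows; the data points are hypothetical, chosen only to illustrate the mechanics.

```python
# Minimal sketch: estimating the intercept (beta1) and slope (beta2)
# of Y = beta1 + beta2*X + u by ordinary least squares.
# The data are hypothetical, for illustration only.

def fit_simple_ols(x, y):
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # slope = sum of cross-deviations / sum of squared x-deviations
    sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    sxx = sum((xi - x_bar) ** 2 for xi in x)
    beta2 = sxy / sxx
    beta1 = y_bar - beta2 * x_bar
    return beta1, beta2

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x plus noise
b1, b2 = fit_simple_ols(x, y)
print(round(b1, 3), round(b2, 3))
```

With this toy data the fitted slope comes out close to 2, matching the pattern the observations were built from.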
  • 9. • Helpful for: • ● A manager making a hiring decision. • ● An executive arriving at sales forecasts for a company. • ● Describing the relationship between two or more variables. • ● Finding out what the future holds before a decision can be made. • ● Predicting revenues before a budget can be prepared. • ● Relating a change in the price of a product to consumer demand for the product. • ● An economist may want to find out • ◘ the dependence of personal consumption expenditure on after-tax income; it will help in estimating the marginal propensity to consume: the change in consumption for a dollar’s worth of change in real income. • ● A monopolist who can fix the price or the output, but not both, may want to find out the response of the demand for a product to changes in price.
  • 10. • A labor economist may want to study the rate of change of money wages in relation to the unemployment rate. • ● From monetary economics, it is known that, other things remaining the same, the higher the rate of inflation (п), the lower the proportion (k) of their income that people would want to hold in the form of money. A quantitative analysis of this relationship will enable the monetary economist to predict the amount of money, as a proportion of their income, that people would want to hold at various rates of inflation.
  • 11. Why study it: • The tools of regression analysis and correlation analysis have been developed to study and measure the statistical relationship that exists between two or more variables. • ● In regression analysis, an estimating, or predicting, equation is developed to describe the pattern or functional nature of the relationship that exists between the variables. • ● The analyst prepares an estimating (or regression) equation to make estimates of values of one variable from given values of the others. • ● Our concern is with predicting. • ◘ The key idea behind regression analysis is the statistical dependence of one variable, the dependent variable (y), on one or more other variables, the independent variables (x). • ◘ The objective of such analysis is to estimate and predict the (mean) average value of the dependent variable on the basis of the known or fixed values of the explanatory variables. • ◘ The success of regression analysis depends on the availability of appropriate data. • ◘ In any research, the researcher should clearly state the sources of the data used in the analysis, their definitions, their methods of collection, and any gaps or omissions in the data. • ◘ It is often necessary to prepare a forecast, an explanation of what the future holds, before a decision can be made. For example, it may be necessary to predict revenues before a budget can be prepared. These predictions become easier if we develop the relationship between the variable to be predicted and some other variables related to it. • ◘ Computing a regression (estimating) equation and then using it to provide an estimate of the value of the dependent (response) variable y given one or more values of the independent or explanatory variable(s) x. • ◘ Computing measures that show the possible errors that may be involved in using the estimating equation as a basis for forecasting. • ◘ Preparing measures that show the closeness or correlation that exists between variables.
  • 12. What does it provide? • This method provides an equation (model) for estimating or predicting the average value of the dependent variable (y) from the known values of the independent variable(s). • y is assumed to be random and the x values are fixed.
  • 13. Types of regression relationship • The relation between the expected value of the dependent variable and the independent variable is called a regression relation. When the dependence of one variable is studied on a single independent variable, it is called a simple or two-variable regression. If the dependence is studied on two or more independent variables, it is called multiple regression. When the dependency is represented by a straight-line equation, the regression is said to be linear regression; otherwise it is curvilinear.
  • 14. Example of a model • Consider a situation where a small ball is being tossed up in the air and we measure its heights of ascent hi at various moments in time ti. Physics tells us that, ignoring the drag, the relationship can be modeled as hi = β1ti + β2ti² + µi, where β1 determines the initial velocity of the ball, β2 is proportional to the standard gravity, and µi is due to measurement errors. Linear regression can be used to estimate the values of β1 and β2 from the measured data. This model is non-linear in the time variable, but it is linear in the parameters β1 and β2; if we take the regressor xi = (xi1, xi2) = (ti, ti²), the model takes on the standard form.
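The ball-toss model can be fitted exactly as the slide describes: treat (t, t²) as two regressors and apply least squares. The sketch below uses simulated, noise-free heights with an assumed initial velocity of 20 and g = 9.8 (so β2 ≈ −4.9); these numbers are illustrative, not measurements.

```python
# The model h = b1*t + b2*t^2 is nonlinear in time t but linear in the
# parameters b1, b2, so it can be fitted by least squares on the
# regressors (t, t^2). Data simulated with v0 = 20 and g = 9.8.

def fit_two_regressors(x1, x2, y):
    # Solve the 2x2 normal equations for the no-intercept model
    # y = b1*x1 + b2*x2 by Cramer's rule.
    s11 = sum(a * a for a in x1)
    s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * b for a, b in zip(x1, y))
    s2y = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12
    b1 = (s1y * s22 - s2y * s12) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    return b1, b2

t = [0.5, 1.0, 1.5, 2.0, 2.5]
h = [20 * ti - 4.9 * ti ** 2 for ti in t]     # noise-free for simplicity
b1, b2 = fit_two_regressors(t, [ti ** 2 for ti in t], h)
print(round(b1, 2), round(b2, 2))
```

Because the simulated data are noise-free, the fit recovers the assumed coefficients 20 and −4.9 up to floating-point error.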
  • 15. Applications of linear regression • Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.
  • 16. Correlation: • In statistics, dependence refers to any statistical relationship between two random variables or two sets of data. Correlation refers to any of a broad class of statistical relationships involving dependence. Formally, dependence refers to any situation in which random variables do not satisfy a mathematical condition of probabilistic independence
  • 17. Pearson correlation coefficient • In loose usage, correlation can refer to any departure of two or more random variables from independence, but technically it refers to any of several more specialized types of relationship between mean values. There are several correlation coefficients, often denoted ρ or r, measuring the degree of correlation. The most common of these is the Pearson correlation coefficient
  • 18. Correlation coefficient • The population correlation coefficient ρX,Y between two random variables X and Y with expected values μX and μY and standard deviations σX and σY is defined as: • ρX,Y = corr(X, Y) = cov(X, Y) / (σX σY) = E[(X − μX)(Y − μY)] / (σX σY) • where E is the expected value operator, cov means covariance, and corr is a widely used alternative notation for Pearson's correlation.
  • 20. The sample correlation coefficient is r = Σ(xi − x̄)(yi − ȳ) / ((n − 1) sx sy), where x̄ and ȳ are the sample means of X and Y, and sx and sy are the sample standard deviations of X and Y. If x and y are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not −1 to +1 but a smaller range.
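The sample Pearson coefficient can be computed directly from its deviation form (dividing the cross-deviation sum by the root of the two squared-deviation sums is algebraically the same as the (n − 1)·sx·sy formula). A short sketch on made-up data:

```python
# Sample Pearson correlation coefficient, in the deviation form
# r = sum((x - x_bar)(y - y_bar)) / sqrt(sum((x - x_bar)^2) * sum((y - y_bar)^2)),
# which equals the (n - 1)*s_x*s_y version. Data are hypothetical.
import math

def pearson_r(x, y):
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    sxy = sum((a - x_bar) * (b - y_bar) for a, b in zip(x, y))
    sxx = sum((a - x_bar) ** 2 for a in x)
    syy = sum((b - y_bar) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 4))   # exactly linear: 1.0
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 4))   # exactly inverse: -1.0
```

The two extreme cases show the ±1 bounds that, as the slide notes, real measured data rarely attain.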
  • 21. Regression versus correlation: • They are closely related but conceptually different. In correlation analysis we measure the strength or degree of linear association between two variables, whereas in regression analysis we try to estimate or predict the average value of one variable on the basis of the fixed values of other variables. For example, with correlation we can measure the relation between smoking and lung cancer; with regression we can predict the average score on a statistics exam from a student's score on a mathematics exam.
  • 22. Regression versus correlation: • They have some fundamental differences. Correlation theory is based on the assumption of randomness of both variables, whereas regression theory assumes that the dependent variable is stochastic and the explanatory variables are fixed or non-stochastic. In correlation there is no distinction between dependent and explanatory variables. • In correlation analysis, the purpose is to measure the strength or closeness of the relationship between the variables. • ♦ What is the pattern of the existing relationship?
  • 24. IMPORTANT TO NOTE • The statistical relationship between the error terms and the regressors plays an important role in determining whether an estimation procedure has desirable sampling properties such as being unbiased and consistent.
  • 25. Trend line • A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher-degree polynomials depending on the degree of curvature desired in the line.
  • 26. Trend line uses • Trend lines are sometimes used in business analytics to show changes in data over time. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.
  • 27. Scatter diagram: • To find out whether or not a relationship between two variables exists, we plot the given data, using the x-axis for the independent (regression) variable and the y-axis for the dependent variable. Such a diagram is called a scatter diagram or scatter plot. If the points show a tendency toward a straight line, that line is called the regression line; if they follow a curve, it is called the regression curve. The diagram also shows the nature of the relationship.
  • 29. Data graphs: • Let us assume that a logical relationship may exist between two variables. • To support further analysis we use a graph to plot the available data. This is called a scatter diagram. • X = independent or explanatory variable. • Y = dependent or response variable.
  • 30. Purpose of Diagram: • To see if there is a useful relationship between the two variables. • Determine the type of equation to use to describe the relationship.
  • 31. Example of model construction: • Keynes’s consumption function. • Consumption = C • Income = X • C = f(X) • Consumption and income cannot be connected by any simple deterministic relationship. • Linear model: • C = α + βX • It is hopeless to attempt to capture every influence in the relationship, so to incorporate the inherent randomness of its real-world counterpart we write • C = f(X, ε) • ε = stochastic element • C = α + βX + ε • This is the empirical counterpart to Keynes’s theoretical model.
  • 32. Example II: Earnings and education relationship • A higher level of education is associated with higher income. • The simple regression model is • Earnings = β1 + β2 education + ε • [But older people tend to have higher incomes regardless of education.] • If age and education are positively correlated, then the regression model will attribute to education some of the observed increases in earnings that are actually age effects, so we add age: • Earnings = β1 + β2 education + β3 age + ε • We also observe that income tends to rise less rapidly in the later years than in the early ones, so to accommodate this possibility we write • Earnings = β1 + β2 education + β3 age + β4 age² + ε • β3 = positive • β4 = negative
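The final earnings specification can be estimated as an ordinary multiple regression. The sketch below solves the OLS normal equations (X′X)b = X′y with Gaussian elimination; the observations and the coefficients they are generated from are fabricated purely to show the mechanics, not real earnings data.

```python
# Hedged sketch of fitting earnings = b1 + b2*education + b3*age + b4*age^2 + e
# by solving the OLS normal equations (X'X)b = X'y. Data are fabricated.

def ols(X, y):
    k = len(X[0])
    # Build the normal equations (X'X) b = X'y.
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

# Fabricated (education, age) pairs; earnings generated from known
# coefficients so the fit can be checked. Note b4 < 0: growth slows late.
data = [(12, 25), (16, 30), (12, 40), (18, 45), (14, 55), (16, 60), (20, 35), (10, 50)]
true = [5.0, 2.0, 1.5, -0.01]
X = [[1.0, ed, age, age ** 2] for ed, age in data]
y = [true[0] + true[1] * ed + true[2] * age + true[3] * age ** 2 for ed, age in data]
print([round(c, 3) for c in ols(X, y)])
```

Since the fabricated data contain no noise, the solver recovers the generating coefficients, confirming that the age² term is handled just like any other regressor.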
  • 33. Different ways of linearity
  • 34. Uses: the unique effect on y • A fitted linear regression model can be used to identify the relationship between a single predictor variable xj and the response variable y when all the other predictor variables in the model are “held fixed”. Specifically, the interpretation of βj is the expected change in y for a one-unit change in xj when the other covariates are held fixed. This is sometimes called the unique effect of xj on y.
  • 35. Simple and multiple linear regression • The very simplest case of a single scalar predictor variable x and a single scalar response variable y is known as simple linear regression. • The extension to multiple and/or vector-valued predictor variables (denoted with a capital X) is known as multiple linear regression.
  • 36. General linear models • The general linear model considers the situation when the response variable Y is not a scalar but a vector. Conditional linearity of E(y|x) = Bx is still assumed, with a matrix B replacing the vector β of the classical linear regression model. Multivariate analogues of OLS and GLS have been developed.
  • 37. Heteroscedasticity models • Various models have been created that allow for heteroscedasticity, i.e. the errors for different response variables may have different variances. For example, weighted least squares is a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors.
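Weighted least squares, mentioned above, can be sketched for the simple model y = a + bx + e by weighting each observation by the reciprocal of its error variance. The formulas below are the weighted analogues of the ordinary ones; the data and weights are hypothetical.

```python
# Minimal weighted-least-squares sketch for heteroscedastic simple
# regression y = a + b*x + e, with weight w_i = 1/var_i per observation.
# Data and weights are hypothetical.

def fit_wls(x, y, w):
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted mean of x
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw   # weighted mean of y
    b = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    a = yb - b * xb
    return a, b

x = [1, 2, 3, 4]
y = [1.1, 2.0, 2.9, 8.0]          # last observation is very noisy...
w = [1.0, 1.0, 1.0, 0.04]         # ...so it gets a small weight
a, b = fit_wls(x, y, w)
print(round(a, 3), round(b, 3))
```

With equal weights the formulas reduce to ordinary least squares, which is one way to sanity-check the implementation.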
  • 38. Generalized linear models • Generalized linear models (GLMs) are a framework for modeling a response variable y that is bounded or discrete. They are used, for example: • when modeling positive quantities • when modeling categorical data • when modeling ordinal data.
  • 39. Some common examples of GLMs are: • Poisson regression for count data. • Logistic regression and probit regression for binary data. • Multinomial logistic regression and multinomial probit regression for categorical data. • Ordered probit regression for ordinal data.
  • 40. Single index models • Single index models allow some degree of nonlinearity in the relationship between x and y, while preserving the central role of the linear predictor β′x as in the classical linear regression model. Under certain conditions, simply applying OLS to data from a single-index model will consistently estimate β up to a proportionality constant.
  • 41. Hierarchical linear models • Hierarchical linear models (or multilevel regression) organizes the data into a hierarchy of regressions, for example where A is regressed on B, and B is regressed on C. It is often used where the data have a natural hierarchical structure such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping such as a school district.
  • 42. Errors-in-variables • Errors-in-variables models extend the traditional linear regression model to allow the predictor variables X to be observed with error. This error causes standard estimators of β to become biased. Generally, the form of the bias is an attenuation, meaning that the effects are biased toward zero.
  • 43. Procedures developed for parameter estimation • A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in the computational simplicity of their algorithms, the presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and the theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency.
  • 44. Some of the more common estimation techniques for linear regression • Least-squares estimation and related techniques: ordinary least squares (OLS), generalized least squares (GLS), percentage least squares, iteratively reweighted least squares (IRLS), instrumental variables regression (IV), total least squares (TLS).
  • 45. Maximum-likelihood estimation and related techniques: ridge regression, least absolute deviation (LAD) regression, adaptive estimation.
  • 46. Epidemiology • Early evidence relating tobacco smoking to mortality and morbidity came from observational studies employing regression analysis. For example, suppose we have a regression model in which cigarette smoking is the independent variable of interest, and the dependent variable is life span measured in years. Researchers might include socio-economic status as an additional independent variable, to ensure that any observed effect of smoking on life span is not due to some effect of education or income. However, it is never possible to include all possible confounding variables in an empirical analysis.
  • 47. Example • For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such as instrumental variables regression may be used to attempt to estimate causal relationships from observational data.
  • 48. Finance • The capital asset pricing model uses linear regression as well as the concept of Beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the Beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.
  • 49. Economics • Linear regression is the predominant empirical tool in economics. For example, it is used to predict consumption spending, fixed investment spending, inventory investment, purchases of a country's exports, spending on imports, the demand to hold liquid assets, labor demand, and labor supply.
  • 50. Environmental science • Linear regression finds application in a wide range of environmental science applications. In Canada, the Environmental Effects Monitoring Program uses statistical analyses of fish and benthic surveys to measure the effects of pulp mill or metal mine effluent on the aquatic ecosystem.
  • 51. Simple linear model: • The correlation coefficient may indicate that two variables are associated with one another, but it does not give any idea of the kind of relationship involved. • We hypothesize that one variable (the dependent variable) is determined by other variables known as explanatory variables, independent variables, or regressors. The hypothesized mathematical relationship linking them is known as the regression model. If there is one regressor, it is described as a simple regression model; if there are two or more regressors, it is described as a multiple regression model. We would not expect to find an exact relationship between two economic variables, so we make the relationship inexact by explicitly including a random factor known as the disturbance term.
  • 52. Simple regression model: • Yi = β1 + β2Xi + εi • The right-hand side has two components: the nonrandom part β1 + β2Xi, where β1 and β2 are fixed quantities known as parameters and Xi is the value of the explanatory variable, and the random part, the disturbance εi.
  • 53. Components of the model • Dependent variable • is the variable to be estimated. It is plotted on the vertical or y-axis of a chart and is therefore identified by the symbol y. It is also called the predictand, regressand, or response variable. • Independent variable • is the one that presumably exerts an influence on, or explains variations in, the dependent variable. It is plotted on the x-axis and is therefore denoted by X. It is also called the regressor, predictor, regression variable, or explanatory variable.
  • 54. We must know two things: • The y-intercept, a: the value of Y when X is equal to zero, which we can read on the y-axis. • The slope, b, found by: measuring a change of one unit in the X variable, measuring the corresponding change in Y on the y-axis, and dividing the change in Y by the change in X.
  • 55. Graph
  • 56. Deterministic and probabilistic: • Let us consider a set of n pairs of observations (Xi, Yi). If the relation between the variables is exactly linear, then the mathematical equation describing the linear relation is • Yi = a + bXi • where a = value of Y when X = 0 (the intercept) and b = change in Y for a one-unit change in X (the slope of the line). • This is a deterministic model, like the consumption function C = f(X), Y = a + bX, or the area of a circle (Area = πr²). • But in some situations the relation is not exact, and we get what is called a non-deterministic or probabilistic model: • Yi = a + bXi + εi, where the εi are unknown random errors.
  • 57. Simple linear regression model: • We assume a linear relationship holds between Y and X: • Yi = α + βXi + εi • Xi = fixed, predetermined values • Yi = observations drawn from the population • εi = error components • α, β = parameters • α = intercept • β = slope (regression coefficient) • β is positive or negative depending on the direction of the relationship between X and Y. • Furthermore we assume: • E(εi) = 0 • => E(Y) = α + βX, a straight line in X • Var(εi) = σ² • εi ~ N(0, σ²) • E(εi εj) = 0, i.e. cov = 0 • X and ε are independent of each other.
  • 58. Multiple linear regression model: • It is used to study the relationship between a dependent variable and two or more independent variables. • The form of the model is • Y = f(X1, X2, X3, …, Xk) + ε • = β1X1 + β2X2 + β3X3 + … + βkXk + ε • Y = dependent or explained variable • X1, …, Xk = independent or explanatory variables • f(X1, X2, X3, …, Xk) = population regression equation of Y on X1, …, Xk • Y = sum (deterministic part + random part) • Y = regressand • X1, …, Xk = regressors, covariates • For example, take a demand equation • Quantity = β1 + β2 × price + β3 × income + ε • and the inverse demand equation • Price = γ1 + γ2 × quantity + γ3 × income + u • ε, u = disturbances, so called because they disturb the model: we cannot hope to capture every influence.
  • 59. table
  • 60. What the table shows: • Output for a time period in dozens of units (Y). • Aptitude test results for eight employees (X). • ♦ It is a small sample of 8 employees. • Q#1: Does the test do what it is supposed to do? • Q#2: Are employees with higher scores among the higher producers? • ♦ Every point on the diagram represents one employee as a pair of observations (X, Y). • ♦ The points trace a path close to a straight line. • ♦ So there is a linear relationship. • ♦ It is a (+ve), direct relationship.
  • 61. Ordinary least squares (OLS) estimator: • This is one of the simplest methods of linear regression. The goal of OLS is to fit a function closely to the data. It does so by minimizing the sum of squared errors from the data. We are not trying to minimize the sum of absolute errors, but rather the sum of squared errors.
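The minimization property can be checked numerically: the OLS coefficients give a sum of squared errors (SSE) no larger than any nearby candidate line. A small sketch on hypothetical data:

```python
# Sketch illustrating what OLS does: among candidate lines it picks the
# one minimizing the sum of SQUARED errors (not absolute errors).
# Data are hypothetical.

def sse(x, y, a, b):
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

x = [1, 2, 3, 4, 5]
y = [2.0, 4.1, 5.9, 8.2, 9.8]

# OLS estimates via the usual closed-form expressions.
xb, yb = sum(x) / len(x), sum(y) / len(y)
b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / sum((xi - xb) ** 2 for xi in x)
a = yb - b * xb

# Any perturbation of the fitted coefficients increases the SSE.
best = sse(x, y, a, b)
assert best <= sse(x, y, a + 0.1, b)
assert best <= sse(x, y, a, b + 0.1)
assert best <= sse(x, y, a - 0.1, b - 0.1)
print(round(a, 3), round(b, 3), round(best, 4))
```

Swapping `sse` for a sum of absolute errors would generally yield a different fitted line, which is the distinction the slide is drawing.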
  • 62. Linear regression model and assumptions: • 1) Linearity. • 2) Exogeneity of the independent variables. • 3) Homoscedasticity and non-autocorrelation. • 4) There is no exact linear relationship among any of the independent variables in the model (the identification condition).
  • 63. From monetary economics • it is known that, other things remaining the same, the higher the rate of inflation (п), the lower the proportion (k) of their income that people would want to hold in the form of money. A quantitative analysis of this relationship will enable the monetary economist to predict the amount of money, as a proportion of their income, that people would want to hold at various rates of inflation.
  • 64. LS or OLS: • The principle of least squares consists of determining the values for the unknown parameters that minimize the sum of squares of the errors (or residuals), where the errors are defined as the differences between the observed values and the corresponding values predicted or estimated by the fitted model equation. • The parameter values thus determined give the least sum of squared errors and are known as the least squares estimates.
  • 65. Method of ordinary least square (OLS): • It is one of the econometric methods that can be used to derive estimates of the parameters of economic relationships from statistical observations.
  • 66. Advantages of OLS • 1) It is fairly simple as compared with other econometric techniques. • 2) This method is used in a wide range of economic relationships. • 3) It is still one of the most commonly employed methods for estimating relationships in econometric models. • 4) The mechanics of least squares are simple to understand. • 5) OLS is an essential component of most other econometric techniques. • 6) It is mathematically appealing as compared to other methods.
  • 67. • 7) It is one of the most powerful and popular methods of regression analysis. • 8) The estimates can be easily computed. • 9) They are point estimators: each estimator provides only a single (point) estimate. • 10) Once the OLS estimates are obtained from the sample data, the sample regression line can be easily obtained.
  • 68. Model specification • Economic theory does not specify whether supply should be studied with a single-equation model or with a simultaneous-equation model. • We choose to start our investigation with a single-equation model. • Economic theory is also not clear about the mathematical form (linear or non-linear), • so we start by assuming that the variables are related in the simplest possible mathematical form: that the relationship between quantity and price is linear, of the form • Y = a + bX.
  • 69. Example: • Quantity supplied of a commodity and its price. • When the price rises, the quantity of the commodity supplied increases. • Step I: • Specification of the supply model, i.e. • Dependent variable [regressand] = quantity supplied • Explanatory variable [regressor] = price • Y = f(X) • Y = β1 + β2X + ε • [Variation in Y] = [explained variation] + [unexplained variation] • β1, β2 = parameters of the supply function; our aim is to obtain estimates of them. • ε = errors due, among other things, to the methods of collecting and processing statistical information.
  • 70. Assumptions • Weak exogeneity. • Linearity. • Constant variance (a.k.a. homoscedasticity; its violation is heteroscedasticity). • Independence of errors. • Lack of multicollinearity.