HYPOTHESIS TESTING
& PARAMETRIC
ANALYSIS
MODULE 3
What is a hypothesis?
 According to Prof. Morris Hamburg “A hypothesis in
statistics is simply a quantitative statement about a
population”
 A statistical hypothesis is an assumption about a population
parameter.
 It is a supposition made as a basis for reasoning.
 This assumption may or may not be true.
 Hypothesis testing refers to the formal procedures used by
statisticians to accept or reject statistical hypotheses.
 To determine whether a statistical hypothesis is true by
examining a random sample from the population.
 If sample data are not consistent with the statistical
hypothesis, the hypothesis is rejected.
Types of Hypotheses
 Null hypothesis. The null hypothesis, denoted by Ho, is
usually the hypothesis that sample observations result
purely from chance.
 Alternative hypothesis. The alternative hypothesis,
denoted by H1 or Ha, is the hypothesis that sample
observations are influenced by some non-random cause.
 For example, suppose we wanted to determine whether a
coin was fair and balanced. A null hypothesis might be that
half the flips would result in Heads and half, in Tails. The
alternative hypothesis might be that the number of Heads
and Tails would be very different. Symbolically, these
hypotheses would be expressed as
Ho: P = 0.5
Ha: P ≠ 0.5
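The coin example above can be checked numerically. The sketch below, using only the standard library, computes the exact two-sided binomial p-value for H0: P = 0.5; the observed count of 60 heads in 100 flips is an invented example value.

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    # Probability of exactly k heads in n flips.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(heads, n, p=0.5):
    # Sum the probabilities of all outcomes at least as unlikely
    # as the observed one (exact two-sided binomial test).
    observed = binom_pmf(heads, n, p)
    return sum(binom_pmf(k, n, p) for k in range(n + 1)
               if binom_pmf(k, n, p) <= observed + 1e-12)

p_value = two_sided_p(60, 100)   # about 0.057: do not reject H0 at alpha = 0.05
```

At 60 heads the p-value sits just above 0.05, so the fair-coin hypothesis survives at the conventional 5% level.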
Hypothesis Tests
1. State the hypotheses
2. Set a suitable level of significance
3. Set up the test criterion
4. Perform the computations
5. Interpret the results / make the decision
Decision Errors
When a statistical hypothesis is tested there
are four possibilities:
 The hypothesis is true but our test rejects it
(Type I error)
 The hypothesis is false but our test accepts
it (Type II error)
 The hypothesis is true and our test accepts
it (Correct decision)
 The hypothesis is false and our test rejects
it
Errors in hypothesis testing
Type I error.
 A Type I error occurs when the researcher rejects a
null hypothesis when it is true.
 The probability of committing a Type I error is called
the significance level.
 This probability is also called alpha, and is often
denoted by α.
Type II error.
 A Type II error occurs when the researcher fails to
reject a null hypothesis that is false.
 The probability of committing a Type II error is
called Beta, and is often denoted by β.
 The probability of not committing a Type II error is
called the Power of the test.
Decision Rules
P-value.
 The P-value measures the strength of the evidence against the
null hypothesis: the smaller the P-value, the stronger the
evidence.
 Suppose the test statistic is equal to S.
 The P-value is the probability of observing a test statistic at
least as extreme as S, assuming the null hypothesis is true.
 If the P-value is less than the significance level, we
reject the null hypothesis.
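A minimal sketch of this decision rule, assuming the test statistic follows a standard normal distribution under the null hypothesis; the observed value 2.31 for S is invented for illustration.

```python
from statistics import NormalDist

def p_value_two_sided(S):
    # Probability of a value at least as extreme as S in either tail
    # of the standard normal distribution.
    return 2 * (1 - NormalDist().cdf(abs(S)))

alpha = 0.05
S = 2.31                      # hypothetical observed test statistic
p = p_value_two_sided(S)      # about 0.021
reject_h0 = p < alpha         # True: reject H0 at the 5% level
```

For other test statistics the same rule applies with the appropriate reference distribution (t, F, chi-square, and so on).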
Decision Rules
Region of acceptance.
 The region of acceptance is a range of values. If the
test statistic falls within the region of acceptance, the
null hypothesis is not rejected.
 The region of acceptance is defined so that the
chance of making a Type I error is equal to the
significance level.
 The set of values outside the region of acceptance is
called the region of rejection.
 If the test statistic falls within the region of rejection,
the null hypothesis is rejected.
 In such cases, we say that the hypothesis has been
rejected at the α level of significance
One-Tailed Test
 A test of a statistical hypothesis, where the region of
rejection is on only one side of the sampling
distribution, is called a one-tailed test.
 For example, suppose the null hypothesis states that
the mean is less than or equal to 10.
 The alternative hypothesis would be that the mean is
greater than 10.
 The region of rejection would consist of a range of
numbers located on the right side of sampling
distribution; that is, a set of numbers greater than 10.
Two-Tailed Test
 A test of a statistical hypothesis, where the region of
rejection is on both sides of the sampling distribution,
is called a two-tailed test.
 For example, suppose the null hypothesis states that
the mean is equal to 10.
 The alternative hypothesis would be that the mean is
less than 10 or greater than 10.
 The region of rejection would consist of a range of
numbers located on both sides of sampling distribution;
that is, the region of rejection would consist partly of
numbers that were less than 10 and partly of numbers
that were greater than 10.
Standard Error
 The standard deviation of the sampling distribution is
called the standard error.
 The standard error is a statistical term that measures
the accuracy with which a sample
distribution represents a population by using standard
deviation.
 In statistics, a sample mean deviates from the actual
mean of a population; this deviation is the standard
error of the mean.
 The standard error is inversely proportional to the square
root of the sample size: the larger the sample size, the smaller
the standard error, because the sample statistic approaches the
true population value.
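A short sketch of the standard error of the mean, SE = s / sqrt(n), using invented sample values to show that the SE shrinks as the sample grows.

```python
from math import sqrt
from statistics import stdev

def standard_error(sample):
    # SE of the mean: sample standard deviation over sqrt(n).
    return stdev(sample) / sqrt(len(sample))

small = [4.1, 5.0, 4.7, 5.3, 4.9]   # n = 5, SE = 0.2
large = small * 20                  # same spread, n = 100: smaller SE
```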
Parametric and Non-Parametric Tests
 Parameters: Statistical measurements such as Mean,
Variance etc. of the population are called parameters.
 Parametric tests are those that make assumptions
about the parameters of the population distribution
from which the sample is drawn.
 This is often the assumption that the population data
are normally distributed.
 Non-parametric tests are “distribution-free” and, as
such, can be used for non-Normal variables.
 They can thus be applied even if parametric conditions
of validity are not met.
t-Test
 A t-test is a type of inferential statistic used to determine if
there is a significant difference between the means of two
groups, which may be related in certain features.
 It is mostly used when the data sets, like the data set
recorded as the outcome from flipping a coin 100 times,
would follow a normal distribution and may have unknown
variances.
 A t-test is used as a hypothesis testing tool, which allows
testing of an assumption applicable to a population.
 A t-test looks at the t-statistic, the t-distribution values,
and the degrees of freedom to determine the statistical
significance of the difference between the group means.
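A hand computation of the pooled two-sample t statistic, on two invented samples. The 2.306 cutoff is the two-sided 5% critical value of the t distribution with 8 degrees of freedom.

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    # Pooled-variance two-sample t statistic and degrees of freedom.
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

group_a = [5.1, 4.9, 5.6, 5.2, 5.0]
group_b = [4.4, 4.7, 4.1, 4.5, 4.6]
t, df = pooled_t(group_a, group_b)   # df = 8; |t| > 2.306 means significant at 5%
```

Looking the statistic up in a t table (or t CDF) with n1 + n2 - 2 degrees of freedom yields the p-value.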
Z-Test
 A z-test is a statistical test to determine whether two
population means are different when the variances are
known and the sample size is large.
 It can be used to test hypotheses in which the test
statistic follows a normal distribution.
 A z-statistic, or z-score, is a number representing the
result from the z-test.
 Z-tests are closely related to t-tests, but t-tests are
best performed when an experiment has a small
sample size.
 Also, t-tests assume the standard deviation is
unknown, while z-tests assume it is known.
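A one-sample z-test sketch with a known population standard deviation; the sample values and the hypothesized mean of 50 are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist, mean

def z_test(sample, mu0, sigma):
    # z statistic for H0: mu = mu0, with known population sigma,
    # plus the two-sided p-value from the standard normal CDF.
    z = (mean(sample) - mu0) / (sigma / sqrt(len(sample)))
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

sample = [52.0] * 25                       # sample mean 52, n = 25
z, p = z_test(sample, mu0=50, sigma=5)     # z = 2.0, p about 0.0455
```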
U-Test
 The Mann-Whitney U test is the nonparametric
equivalent of the two-sample t-test.
 While the t-test makes an assumption about the
distribution of the population (i.e. that the samples come
from normally distributed populations), the Mann-Whitney U
test makes no such assumption.
 The test compares two populations. The null
hypothesis for the test is that the probability is 50%
that a randomly drawn member of the first population
will exceed a member of the second population.
 An alternative null hypothesis is that the two samples
come from the same population (i.e. that they both have the
same distribution).
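The U statistic itself can be computed by ranking the pooled samples; the sketch below uses two invented samples and average ranks for ties.

```python
def mann_whitney_u(a, b):
    # Rank the pooled observations (average rank for ties),
    # then derive U from the rank sum of the first sample.
    pooled = sorted(a + b)

    def rank(v):
        first = pooled.index(v) + 1
        count = pooled.count(v)
        return first + (count - 1) / 2

    r1 = sum(rank(v) for v in a)
    u1 = r1 - len(a) * (len(a) + 1) / 2
    u2 = len(a) * len(b) - u1
    return min(u1, u2)

u = mann_whitney_u([3, 4, 2, 6], [9, 7, 5, 10])   # U = 1
```

For small samples U is compared against tabulated critical values; for larger samples a normal approximation is used.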
Kruskal-Wallis H Test
 Non parametric alternative to the One Way ANOVA.
 The H test is used when the assumptions for ANOVA aren’t
met.
 Sometimes called the one-way ANOVA on ranks, as the
ranks of the data values are used in the test rather than the
actual data points.
 The test determines whether the medians of two or more
groups are different.
 Calculate a test statistic and compare it to a distribution cut-
off point. The test statistic used is H statistic.
 The hypotheses for the test are:
H0: the population medians are equal.
H1: the population medians are not equal.
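A sketch of the H statistic for three invented groups with no ties. H is compared against a chi-square cutoff with k - 1 degrees of freedom; for 2 df at the 5% level that cutoff is 5.991.

```python
def kruskal_h(*groups):
    # H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), where R_i is the
    # rank sum of group i over the pooled ranking (assumes no ties).
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    n = len(pooled)
    h = 12 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    return h

h = kruskal_h([1, 2, 3], [4, 5, 6], [7, 8, 9])   # H = 7.2 > 5.991: reject H0
```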
Bivariate Analysis
 Bivariate analysis is the analysis of any concurrent
relation between two variables or attributes.
 It is one of the simplest forms of statistical analysis,
used to find out if there is a relationship between two
sets of values. It usually involves the variables X and
Y.
 This study explores the relationship of two variables
as well as the depth of this relationship to figure out if
there are any discrepancies between two variables
and any causes of this difference.
 Some of the examples are percentage table, scatter
plot, etc.
 Univariate analysis is the analysis of one (“uni”)
variable.
 Bivariate analysis is the analysis of exactly two
variables.
Types of Bivariate Analysis
 Numerical and Numerical – In this type, both
variables of the bivariate data, independent and
dependent, have numerical values.
 Categorical and Categorical – When both
the variables are categorical.
 Numerical and Categorical – When one
variable is numerical and one is
categorical.
Multivariate Analysis
 Multivariate means involving multiple dependent
variables resulting in one outcome.
 This explains why the majority of problems in
the real world are multivariate.
 For example, we cannot predict the weather of
any year based on the season. There are multiple
factors like pollution, humidity, precipitation, etc.
Advantages and Disadvantages of
Multivariate Analysis
Advantages
 The main advantage of multivariate analysis is that
since it considers more than one factor of independent
variables that influence the variability of dependent
variables, the conclusion drawn is more accurate.
 The conclusions are more realistic and nearer to the
real-life situation.
Disadvantages
 The main disadvantage of MVA is that it requires
rather complex computations to arrive at a satisfactory
conclusion.
 Many observations for a large number of variables
need to be collected and tabulated; it is a rather time-
consuming process.
Classification Chart of Multivariate
Techniques
ANOVA
 Analysis of variance (ANOVA) is an analysis tool used in
statistics that splits an observed aggregate variability
found inside a data set into two parts: systematic factors
and random factors.
 The systematic factors have a statistical influence on the
given data set, while the random factors do not.
 Analysts use the ANOVA test to determine the influence
that independent variables have on the dependent
variable.
One-Way vs. Two-Way ANOVA
 One-way or two-way refers to the number
of independent variables (IVs) in your Analysis of
Variance test.
 One-way has one independent variable (with at
least two levels). For example: brand of cereal.
 Two-way has two independent variables (it can
have multiple levels). For example: brand of
cereal, calories
The Formula for ANOVA is:
F = MST / MSE
where:
F = ANOVA coefficient (the F statistic)
MST = mean sum of squares due to treatment
MSE = mean sum of squares due to error
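The formula can be computed by hand; the sketch below derives MST, MSE, and F for three invented groups of equal size.

```python
from statistics import mean

def anova_f(*groups):
    # One-way ANOVA: between-group (treatment) and within-group
    # (error) sums of squares, then F = MST / MSE.
    grand = mean(v for g in groups for v in g)
    n = sum(len(g) for g in groups)
    k = len(groups)
    sst = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)  # treatment
    sse = sum((v - mean(g)) ** 2 for g in groups for v in g)    # error
    mst = sst / (k - 1)
    mse = sse / (n - k)
    return mst / mse

f = anova_f([4, 5, 6], [7, 8, 9], [10, 11, 12])   # F = 27.0
```

The resulting F is compared against the F distribution with (k - 1, n - k) degrees of freedom.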
One-Way ANOVA
 One-way or two-way refers to the number of
independent variables in your analysis of variance test.
 A one-way ANOVA evaluates the impact of a sole
factor on a sole response variable.
 It tests whether the group means are all equal.
 The one-way ANOVA is used to determine whether
there are any statistically significant differences
between the means of three or more independent
(unrelated) groups.
Two-Way ANOVA
 A two-way ANOVA is an extension of the one-way
ANOVA. With a one-way, you have one independent
variable affecting a dependent variable.
 With a two-way ANOVA, there are two independent
variables. For example, a two-way ANOVA allows a company to
compare worker productivity based on two independent
variables, such as salary and skill set.
 It is utilized to observe the interaction between the two
factors, testing the effects of the two factors at the same
time.