Research Methods
Second Stage:
Operationalization
1. Formulation of Theory
2. Operationalization of Theory
3. Selection of Appropriate Research Techniques
4. Observation of Behavior (Data Collection)
5. Analysis of Data
6. Interpretation of Results
Hypotheses Generation
Hypothesis:
• An explicit statement that indicates how a researcher thinks the phenomena of interest are related.
• It represents the proposed explanation for some phenomenon.
• Indicates how an independent variable is thought to affect, influence, or alter a dependent variable.
• A proposed relationship that may be true or false.
Good Hypotheses
• Hypotheses should be empirical statements: proposed explanations for relationships that exist in the real world.
• Hypotheses should be general: a hypothesis should explain a general phenomenon rather than a particular occurrence.
• Hypotheses should be plausible: there should be some logical reason for thinking the hypothesis may be confirmed.
• Hypotheses should be specific: a hypothesis should specify the expected relationship between two variables.
• Hypotheses should relate directly to the data collected.
Directional Hypotheses
Hypotheses should be specific. In other words, they should state exactly how the independent variable relates to the dependent variable.
1. Positive relationship: the concepts are predicted to increase or decrease in size together.
2. Negative relationship: one concept increases in size or amount while the other decreases in size or amount.
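A minimal numeric illustration of the two directions (entirely hypothetical data; the sign of Pearson's r distinguishes them):

```python
import numpy as np

# Hypothetical paired observations of an independent (x) and dependent (y) variable.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_positive = np.array([2.0, 4.0, 5.0, 7.0, 9.0])   # rises with x
y_negative = np.array([9.0, 7.0, 5.0, 4.0, 2.0])   # falls as x rises

# Pearson's r is positive for the first pairing, negative for the second.
print(np.corrcoef(x, y_positive)[0, 1])   # ~ +0.99
print(np.corrcoef(x, y_negative)[0, 1])   # ~ -0.99
```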
Unit of Analysis
One of the most important aspects of research design is determining the unit of analysis.
This is where we specify the types or levels of political actor to which the hypothesis is thought to apply.
There are numerous kinds of units we can collect data on:
• Individuals
• Groups
• States
• Agencies
• Organizations
Unit of Analysis (continued)
Cross-level analysis: sometimes we collect data on one unit of analysis to answer questions about another unit of analysis.
The purpose of cross-level analysis is to make an ecological inference: the use of aggregate data to study the behavior of individuals.
• Example: data on voting districts → individual voting behavior
CAVEAT: avoid the ecological fallacy, where a relationship found at the aggregate level is not operative at the individual level.
• Example: state voting data used to make inferences about relationships in district-level voting data.
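A minimal simulation of the ecological fallacy (all numbers hypothetical): district averages can be positively related even when the individual-level relationship inside every district is negative.

```python
import numpy as np

# Three hypothetical districts. Within each district, y falls as x rises,
# but districts with a higher average x also have a higher average y.
districts = {
    "A": ([-1.0, 2.0, 5.0], [5.0, 2.0, -1.0]),
    "B": ([0.0, 3.0, 6.0],  [6.0, 3.0, 0.0]),
    "C": ([1.0, 4.0, 7.0],  [7.0, 4.0, 1.0]),
}

# Aggregate (district-mean) correlation is perfectly positive...
mean_x = [np.mean(x) for x, _ in districts.values()]
mean_y = [np.mean(y) for _, y in districts.values()]
print("district-level r:  ", np.corrcoef(mean_x, mean_y)[0, 1])   # +1.0

# ...while the pooled individual-level correlation is negative.
all_x = np.concatenate([x for x, _ in districts.values()])
all_y = np.concatenate([y for _, y in districts.values()])
print("individual-level r:", np.corrcoef(all_x, all_y)[0, 1])     # < 0
```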
Measurement
Measurement: systematic observation and
representation by scores or numerals of the variables
we have decided to investigate.
Operational definition: deciding what kinds of
empirical observations should be made to measure
the occurrence of an attribute or behavior.
Measuring Variables
The level of measurement refers to the relationship among the values that are assigned to the attributes for a variable.
It is important to distinguish between the values of a variable and the level of measurement.
Levels of Measurement
There are typically four levels of measurement that are defined:
• Nominal
• Ordinal
• Interval
• Ratio
Levels of Measurement
Knowing the level of measurement helps you decide how to interpret the data from that variable.
Knowing the level of measurement helps you decide what statistical analysis is appropriate for the values that were assigned.
It's important to recognize that there is a hierarchy implied in the level of measurement idea.
Nominal & Ordinal
In nominal measurement the numerical values just "name" the attribute uniquely.
• No ordering of the cases is implied. For example, jersey numbers in basketball are measured at the nominal level. A player with number 30 is not more of anything than a player with number 15, and is certainly not twice whatever number 15 is.
In ordinal measurement the attributes can be rank-ordered.
• Here, distances between attributes do not have any meaning. For example, on a survey you might code Educational Attainment as 0=less than H.S.; 1=some H.S.; 2=H.S. degree; 3=some college; 4=college degree; 5=post-college. In this measure, higher numbers mean more education. But is the distance from 0 to 1 the same as from 3 to 4? Of course not. The interval between values is not interpretable in an ordinal measure.
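A small sketch (hypothetical responses) of why rank-based statistics suit ordinal data while the mean does not:

```python
import statistics

# Ordinal coding of Educational Attainment from the survey example above.
codes = {0: "less than H.S.", 1: "some H.S.", 2: "H.S. degree",
         3: "some college", 4: "college degree", 5: "post-college"}

responses = [0, 2, 2, 3, 5, 5, 5]   # hypothetical survey responses

# The median relies only on rank order, so it maps back to a real category.
print("median:", codes[statistics.median(responses)])   # "some college"

# The mean treats every gap between adjacent codes as equal-sized, which
# ordinal coding does not guarantee, so a value like 3.14 has no clear meaning.
print("mean:", statistics.mean(responses))
```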
Interval & Ratio
In interval measurement the distance between attributes does have meaning.
• For example, when we measure temperature (in Fahrenheit), the distance from 30 to 40 is the same as the distance from 70 to 80. The interval between values is interpretable. Because of this, it makes sense to compute an average of an interval variable, whereas it doesn't make sense to do so for ordinal scales. But note that in interval measurement ratios don't make any sense: 80 degrees is not twice as hot as 40 degrees.
In ratio measurement there is always a meaningful absolute zero.
• This means that you can construct a meaningful fraction (or ratio) with a ratio variable. Weight is a ratio variable. In applied social research most "count" variables are ratio; for example, the number of clients in the past six months. Why? Because you can have zero clients, and it is meaningful to say "we had twice as many clients in the past six months as we did in the previous six months."
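A quick arithmetic check of the "ratios don't make sense" point: converting to Kelvin, which does have an absolute zero, shows that 80 degrees Fahrenheit is nowhere near twice as hot as 40 degrees.

```python
def fahrenheit_to_kelvin(f: float) -> float:
    # Standard conversion; Kelvin's zero is absolute, Fahrenheit's is arbitrary.
    return (f - 32.0) * 5.0 / 9.0 + 273.15

# On the Fahrenheit scale 80 / 40 looks like 2.0, but that ratio is an artifact
# of the arbitrary zero point. On the ratio-level Kelvin scale:
print(fahrenheit_to_kelvin(80.0) / fahrenheit_to_kelvin(40.0))   # ~1.08, not 2.0
```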
Nominal, Ordinal, Interval, and Ratio Scales Provide Different Information
Characteristics of Different Levels of Scale Measurement

| Type of Scale | Data Characteristics | Numerical Operation | Descriptive Statistics | Examples |
|---|---|---|---|---|
| Nominal | Classification but no order, distance, or origin | Counting | Frequency in each category; percent in each category; mode | Gender (1=Male, 2=Female) |
| Ordinal | Classification and order but no distance or unique origin | Rank ordering | Median; range; percentile ranking | Academic status (1=Freshman, 2=Sophomore, 3=Junior, 4=Senior) |
| Interval | Classification, order, and distance but no unique origin | Arithmetic operations that preserve order and magnitude | Mean; standard deviation; variance | Temperature in degrees; satisfaction on a semantic differential scale |
| Ratio | Classification, order, distance, and unique origin | Arithmetic operations on actual quantities | Geometric mean; coefficient of variation | Age in years; income in Saudi riyals |
Levels & Research Design
At lower levels of measurement, assumptions tend to be less restrictive and data analyses tend to be less sensitive. At each level up the hierarchy, the current level includes all of the qualities of the one below it and adds something new.
In general, it is desirable to have a higher level of measurement (e.g., interval or ratio) rather than a lower one (nominal or ordinal).
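A minimal sketch of how this hierarchy can be made concrete in code (the table above is the source; the function and names are my own, illustrative only):

```python
# Illustrative encoding of the measurement hierarchy: each level inherits
# every statistic that is permissible at the levels below it.
PERMISSIBLE_STATS = {
    "nominal":  {"mode", "frequency", "percent in category"},
    "ordinal":  {"median", "range", "percentile ranking"},
    "interval": {"mean", "standard deviation", "variance"},
    "ratio":    {"geometric mean", "coefficient of variation"},
}
ORDER = ["nominal", "ordinal", "interval", "ratio"]

def allowed_statistics(level: str) -> set[str]:
    """All statistics permitted at `level`, including inherited ones."""
    allowed: set[str] = set()
    for name in ORDER[: ORDER.index(level) + 1]:
        allowed |= PERMISSIBLE_STATS[name]
    return allowed

# An ordinal variable supports medians and percentiles plus everything a
# nominal variable supports -- but not means or variances.
print(sorted(allowed_statistics("ordinal")))
```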
True Score Theory
True score theory is a theory about measurement. Like all theories, you need to recognize that it is not proven; it is postulated as a model of how the world operates. Like many very powerful models, true score theory is a very simple one.
• Essentially, true score theory maintains that every measurement is an additive composite of two components: the true ability (or the true level) of the respondent on that measure, and random error.
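In symbols (standard classical test theory notation; the variance identity is the usual consequence of the model, not something stated on the slides), where X is the observed score, T the true score, and e_X the measurement error:

```latex
X = T + e_X
% If e_X has mean zero and is uncorrelated with T, the observed-score
% variance decomposes into true-score variance plus error variance:
\operatorname{var}(X) = \operatorname{var}(T) + \operatorname{var}(e_X)
```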
True Score Theory
We observe the measurement: the score on the test, the total for a self-esteem instrument, the scale value for a person's weight. We don't observe what's on the right side of the equation (only God knows what those values are!); we assume that there are two components to the right side:
1. The 'true' value
2. The error in our measurement of that value
Error
• True score theory is a good, simple model for measurement, but it may not always be an accurate reflection of reality.
• In particular, it assumes that any observation is composed of the true value plus some random error value. But is that reasonable? What if all error is not random?
• Isn't it possible that some errors are systematic, that they hold across most or all of the members of a group?
• One way to deal with this notion is to revise the simple true score model by dividing the error component into two subcomponents, random error and systematic error. Here, we'll look at the differences between these two types of errors and try to diagnose their effects on our research.
Random Error
Random error is caused by any factors that randomly affect measurement of the variable across the sample.
• For instance, each person's mood can inflate or deflate their performance on any occasion. In a particular testing, some children may be feeling in a good mood and others may be depressed.
• If mood affects their performance on the measure, it may artificially inflate the observed scores for some children and artificially deflate them for others.
Random error is often referred to as 'noise.'
Random error does not affect averages.
Random Error
The important thing about random error is that it does not have any consistent effects across the entire sample. Instead, it pushes observed scores up or down randomly.
This means that if we could see all of the random errors in a distribution, they would have to sum to 0 -- there would be as many negative errors as positive ones.
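A minimal simulation of these two claims (purely hypothetical numbers): zero-mean random error scatters individual scores, but the errors roughly cancel and the average survives.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

true_scores = np.full(10_000, 75.0)             # everyone's true score is 75
random_error = rng.normal(loc=0.0, scale=5.0, size=true_scores.size)
observed = true_scores + random_error

# Individual scores are pushed up or down, but the errors nearly sum to zero,
# so the sample mean stays very close to the true mean.
print("mean error:   ", random_error.mean())    # ~0.0
print("mean observed:", observed.mean())        # ~75.0
```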
Systematic Error
Systematic error is caused by any factors that systematically affect measurement of the variable across the sample.
• For instance, if there is loud traffic going by just outside of a classroom where students are taking a test, this noise is liable to affect all of the children's scores -- in this case, systematically lowering them.
Unlike random error, systematic errors tend to be consistently either positive or negative; because of this, systematic error is sometimes considered to be bias in measurement.
Systematic Error
Systematic error, or bias, is a real threat to your research. Because it affects the average results, it may cause you to report a relationship that doesn't exist or miss a relationship that does exist.
Avoiding bias is therefore essential to producing good research.
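Extending the random-error sketch with the traffic-noise example (again hypothetical numbers): a constant negative error shifts the average itself, which is exactly why bias is dangerous.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

true_scores = np.full(10_000, 75.0)
random_error = rng.normal(0.0, 5.0, size=true_scores.size)
systematic_error = -4.0                # e.g., traffic noise lowers every score

observed = true_scores + random_error + systematic_error

# The random component still cancels on average; the systematic component does
# not, so the sample mean is biased downward by about 4 points.
print("mean observed:", observed.mean())        # ~71.0, not 75.0
```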
Reducing & Eliminating Errors
So, how can we reduce measurement errors, random or systematic?
• One thing you can do is pilot test your instruments, getting feedback from your respondents regarding how easy or hard the measure was and information about how the testing environment affected their performance.
• Second, if you are gathering measures using people to collect the data (as interviewers or observers), you should make sure you train them thoroughly so that they aren't inadvertently introducing error.
Reducing & Eliminating Errors (continued)
• Third, when you collect the data for your study, you should double-check the data thoroughly. All data entry for computer analysis should be "double-punched" and verified: you enter the data twice, the second time having your data entry machine check that you are typing the exact same data you did the first time (a minimal sketch of this check follows the list).
• Fourth, you can use statistical procedures to adjust for measurement error. These range from rather simple formulas you can apply directly to your data to very complex procedures for modeling the error and its effects.
• Finally, one of the best things you can do to deal with measurement errors, especially systematic errors, is to use multiple measures of the same construct. Especially if the different measures don't share the same systematic errors, you will be able to triangulate across the multiple measures and get a more accurate sense of what's going on.
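A minimal sketch of the double-entry ("double-punch") check from the third point, assuming two hypothetical entry passes stored as parallel lists:

```python
# Two hypothetical passes over the same source documents.
first_pass  = ["23", "41", "17", "88", "35"]
second_pass = ["23", "41", "71", "88", "35"]    # record 2 was mistyped

# Flag every record where the passes disagree so it can be re-checked
# against the original source documents.
for i, (a, b) in enumerate(zip(first_pass, second_pass)):
    if a != b:
        print(f"record {i}: first pass {a!r} != second pass {b!r}")
```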
How do we measure Unemployment?
• Concepts
• Definitions
• How do we collect data on it?
• What should that data tell us?
• Why do we want to know about unemployment to begin with?
Unemployment: Federal Definition
The definition of unemployment used in this report is the standard Federal definition of the percent of individuals in the labor force who were not employed.
The labor force is defined as individuals who were employed, were on lay-off, or had sought work within the preceding four weeks. Although this is the most commonly used measure of unemployment, other measures are used.
Unemployment: How Is It Measured?
Because unemployment insurance records relate only to persons who have applied for such benefits, and since it is impractical to actually count every unemployed person each month, the Government conducts a monthly sample survey called the Current Population Survey (CPS) to measure the extent of unemployment in the country. The CPS has been conducted in the United States every month since 1940, when it began as a Work Projects Administration project.
Unemployment: Defining the Concepts
The basic concepts involved in identifying the employed and unemployed are quite simple:
• People with jobs are employed.
• People who are jobless, looking for jobs, and available for work are unemployed.
• People who are neither employed nor unemployed are not in the labor force.
Operational Definition of Unemployment
The survey is designed so that each person age 16 and over who is not in an institution (such as a prison or mental hospital) or on active duty in the Armed Forces is counted and classified in only one group.
The sum of the employed and the unemployed constitutes the civilian labor force.
Persons not in the labor force combined with those in the civilian labor force constitute the civilian noninstitutional population 16 years of age and over.
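Putting the operational definition to work, a minimal sketch (hypothetical counts) of the rate implied by the Federal definition above:

```python
# Hypothetical monthly survey counts; per the operational definition,
# each person age 16+ falls into exactly one group.
employed = 155_000
unemployed = 6_500
not_in_labor_force = 98_000

# Civilian labor force = employed + unemployed; the unemployment rate is
# the percent of the labor force that is not employed.
labor_force = employed + unemployed
rate = 100.0 * unemployed / labor_force
print(f"unemployment rate: {rate:.1f}%")        # ~4.0%

# Civilian noninstitutional population 16+ = labor force + not in labor force.
population_16_plus = labor_force + not_in_labor_force
print(f"population 16+: {population_16_plus:,}")
```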
Reliability & Validity
In research, the term "reliable" can mean dependable in a general sense, but that's not a precise enough definition. What does it mean to have a dependable measure or observation in a research context?
In research, the term reliability means "repeatability" or "consistency."
A measure is considered reliable if it would give us the same result over and over again (assuming that what we are measuring isn't changing!).
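A minimal sketch (hypothetical scores) of the test-retest notion of reliability: give the same measure twice and check how consistent the two sets of scores are.

```python
import numpy as np

# Hypothetical scores for ten respondents measured on two occasions.
time_1 = np.array([12, 18, 25, 31, 9, 22, 27, 15, 20, 29], dtype=float)
time_2 = np.array([13, 17, 26, 30, 10, 23, 26, 16, 19, 28], dtype=float)

# One common consistency check is the correlation between occasions: values
# near 1.0 suggest the measure is repeatable, given that the underlying
# trait isn't changing between administrations.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest correlation: {r:.3f}")      # close to 1.0
```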