Measurement: Scaling,
Reliability, Validity
Chapter 9
CRITERIA FOR GOOD MEASUREMENT
• Now that we have seen how to operationally define variables, it is important to make sure that the instrument we develop to measure a particular concept is indeed measuring that variable accurately, and that we are in fact measuring the concept we set out to measure.
• This ensures that, in operationally defining perceptual and attitudinal variables, we have neither overlooked important dimensions and elements nor included irrelevant ones.
• The scales developed are often imperfect, and errors are prone to occur in the measurement of attitudinal variables.
• The use of better instruments ensures more accurate results, which in turn enhances the scientific quality of the research.
Reliability
• The reliability of a measure indicates the extent to
which it is without bias (error free) and hence
ensures consistent measurement across time and
across the various items in the instrument.
• Reliability reflects accuracy in measurement.
• It addresses two things:
1. Stability
2. Consistency
• Reliability is a contributor to validity: a measure that is not reliable cannot be valid.
1. Stability of Measures
• A measure is said to be stable if it yields consistent results over repeated measurements of the same person with the same instrument.
• That is, it gives the same reading for a particular person when the measurement is repeated one or more times.
• The logic is to repeat the measurement and compare the results.
• Because time passes between the repetitions, situational factors may intervene and affect the results.
a) Test-retest Reliability
b) Parallel-Form Reliability
a) Test-retest Reliability
b) Parallel-Form Reliability
a. Test-retest Reliability
• The test-retest method of determining reliability involves administering the same scale to the same respondents at two separate times to test for stability.
• If the measure is stable over time, the test, administered under the same conditions each time, should obtain similar results.
• For example, suppose a researcher measures job satisfaction and finds that 64 per cent of the population is satisfied with their jobs. If the study is repeated a few weeks later under similar conditions and the researcher again finds that 64 per cent of the population is satisfied with their jobs, the measure has repeatability.
• A high stability correlation, i.e. consistency between the measures at time 1 and time 2, indicates a high degree of reliability (see the sketch below).
• One caveat: the first measure may sensitize the respondents to their participation in a research project and subsequently influence the results of the second measure.
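As a minimal computational sketch of the idea on this slide: test-retest reliability is usually reported as the correlation between the two administrations. The scores below are hypothetical.

```python
import numpy as np

# Hypothetical job-satisfaction scores (1-7 scale) for the same ten
# respondents, measured a few weeks apart under similar conditions.
time1 = np.array([5, 6, 4, 7, 5, 3, 6, 5, 4, 6])
time2 = np.array([5, 6, 5, 7, 4, 3, 6, 5, 4, 7])

# Test-retest reliability is the correlation between the two
# administrations; a value near 1.0 indicates a stable measure.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability: r = {r:.2f}")
```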
b. Parallel-Form Reliability
• Parallel-form reliability is established when responses on two comparable sets of measures tapping the same construct are highly correlated.
• It is also called equivalent-form reliability.
• Both forms have similar items and the same response format; the only changes are the wording and the order or sequence of the questions.
• What we try to establish here is the error variability resulting from the wording and ordering of the questions; see the sketch below.
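The computation mirrors the test-retest case, only with two forms in place of two occasions. A minimal sketch with hypothetical totals:

```python
import numpy as np

# Hypothetical total scores of the same eight respondents on two
# equivalent forms of a scale (same items, reworded and reordered).
form_a = np.array([22, 30, 18, 27, 25, 20, 29, 24])
form_b = np.array([23, 29, 19, 26, 24, 21, 30, 23])

# Parallel-form reliability is the correlation between the two forms;
# a high value suggests that wording and question order contribute
# little error variance.
r = np.corrcoef(form_a, form_b)[0, 1]
print(f"parallel-form reliability: r = {r:.2f}")
```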
2. Internal Consistency of Measures
• Internal consistency of measures is indicative
of the homogeneity of the items in the
measure that tap the construct.
• The items should hang together as a set and each be capable of independently measuring the same concept.
a. Inter-item Consistency Reliability
• This is a test of consistency of respondents’
answers to all the items in a measure.
• To the degree that items are independent measures of the same concept, they will be correlated with one another; the sketch below shows the usual statistic.
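The slide does not name a statistic, but the usual index of inter-item consistency is Cronbach's alpha, computed from the item variances and the variance of the scale total. A self-contained sketch with hypothetical Likert data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                         # number of items
    item_var = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Hypothetical answers of six respondents to four Likert items that
# are all intended to tap the same construct.
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency.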
Validity
• When we ask a set of questions in the hope of tapping a concept, how can we be reasonably certain that we are indeed measuring the concept we set out to measure and not something else?
• Validity is the ability of an instrument (for example, one measuring an attitude) to measure what it is supposed to measure.
• There are three broad types:
1. Content validity
2. Criterion-related validity
3. Construct validity
1. Content Validity
• Content validity ensures that the measure includes an adequate and representative set of items that tap the concept.
• It reflects how well the dimensions and elements of a concept have been delineated.
• A panel of judges can attest to the content validity of the instrument by assessing how well it meets the standard (one way to quantify this appears below).
• Face validity is considered a basic and very minimal index of content validity.
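Panel judgment can also be quantified. One common index, not named on the slide, is Lawshe's content validity ratio, CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item "essential" and N is the panel size:

```python
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel of ten judges, nine of whom rate an item
# "essential"; values near +1 indicate strong panel agreement that
# the item belongs in the measure.
print(content_validity_ratio(9, 10))  # 0.8
```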
2. Criterion-Related Validity
• Criterion-related validity uses some standard or criterion to indicate a construct accurately.
• The validity of an indicator is verified by comparing it with another measure of the same construct in which the researcher has confidence.
• Concurrent validity:
• An indicator is concurrently valid if it is highly associated with a preexisting indicator, measured at about the same time, that is judged to be valid.
• For example, for a new IQ test to be concurrently valid, it should be highly associated with existing IQ tests: most people who score high on the old measure should also score high on the new one, and vice versa.
• Predictive validity:
• Criterion validity whereby an indicator predicts future events that are logically related to the construct is called predictive validity.
• It cannot be used for all measures.
• Admission tests, for example, are supposed to measure the scholastic aptitude of candidates: the ability to perform in the institution as well as in the subject.
• If such a test has high predictive validity, candidates who get high test scores will subsequently do well in their subjects.
• If students with high scores perform the same as students with average or low scores, the test has low predictive validity (see the sketch below).
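A minimal sketch of how predictive validity is assessed, using hypothetical admission-test scores and grades earned later. For concurrent validity, the second array would instead hold scores on an existing validated measure taken at about the same time.

```python
import numpy as np

# Hypothetical data: scholastic aptitude scores at admission and the
# grade-point averages the same candidates earn a year later.
test_scores = np.array([520, 640, 580, 700, 450, 610, 490, 660])
later_gpa   = np.array([2.8, 3.5, 3.1, 3.8, 2.4, 3.3, 2.6, 3.6])

# Predictive validity: the indicator should correlate with the future
# criterion. A high r means high scorers tend to do well later; an r
# near zero means low predictive validity.
r = np.corrcoef(test_scores, later_gpa)[0, 1]
print(f"predictive validity: r = {r:.2f}")
```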
3. Construct Validity
• Construct validity is for measures with multiple indicators.
• It addresses the question: if the measure is valid, do the various indicators operate in a consistent manner?
• Convergent Validity:
• This kind of validity applies when multiple indicators converge or are
associated with one another.
• Convergent validity means that multiple measures of the same construct
hang together or operate in similar ways.
• It is established when the scores obtained from two different scales measuring the same construct are highly correlated.
• Discriminant Validity:
• Also called divergent validity.
• It is established when, based on theory, two variables are predicted to be uncorrelated, and the scores obtained by measuring them are indeed found to be so (both checks are illustrated below).
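Both kinds of construct validity reduce to inspecting a correlation matrix: measures of the same construct should correlate highly (convergent), while measures of theoretically unrelated constructs should not (discriminant). A sketch with hypothetical scales:

```python
import numpy as np

# Two different scales intended to measure the same construct, plus
# one scale measuring a theoretically unrelated trait (hypothetical).
satisfaction_a = np.array([4.1, 2.5, 3.8, 4.6, 2.9, 3.3, 4.4, 2.2])
satisfaction_b = np.array([4.0, 2.7, 3.6, 4.8, 3.0, 3.1, 4.5, 2.4])
unrelated      = np.array([3.5, 3.4, 2.6, 4.2, 2.9, 4.0, 3.1, 3.3])

# Rows of the input are treated as variables, so corr is a 3x3 matrix.
corr = np.corrcoef([satisfaction_a, satisfaction_b, unrelated])
print(np.round(corr, 2))
# Convergent validity: corr[0, 1] should be high.
# Discriminant validity: corr[0, 2] and corr[1, 2] should be
# markedly lower in absolute value.
```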
