CRITERIA FOR GOOD MEASUREMENT
• Now that we have seen how to operationally define variables, it is
important to make sure that the instrument we develop to measure a
particular concept is indeed measuring the variable accurately, and
that we are in fact measuring the concept we set out to measure.
• This ensures that in operationally defining perceptual and
attitudinal variables, we have not overlooked some important
dimensions and elements or included some irrelevant ones.
• The scales developed are often imperfect, and errors are prone to
occur in the measurement of attitudinal variables.
• The use of better instruments will ensure more accuracy in
results, which, in turn, will enhance the scientific quality of the
research.
Reliability
• The reliability of a measure indicates the extent to
which it is without bias (error free) and hence
ensures consistent measurement across time and
across the various items in the instrument.
• Reliability reflects accuracy in measurement.
• It addresses two aspects of a measure:
1. Stability
2. Consistency
• Reliability is a contributor to validity.
1. Stability of Measures
• A measure is said to be stable if it yields consistent results
with repeated measurements of the same person using the same
instrument.
• That is, it gives the same reading for a particular person when
the measurement is repeated one or more times.
• The basic procedure is to repeat the measurement and compare the results.
• Because time elapses between administrations, situational factors may intervene.
• Stability is assessed in two ways:
a) Test-retest Reliability
b) Parallel-Form Reliability
a. Test-retest Reliability
• Test-retest method of determining reliability involves administering the same
scale to the same respondents at two separate times to test for stability.
• If the measure is stable over time, the test, administered under the same
conditions each time, should obtain similar results.
• For example, suppose a researcher measures job satisfaction and finds that
64 percent of the population is satisfied with their jobs. If the study is
repeated a few weeks later under similar conditions and the researcher again
finds that 64 percent of the population is satisfied with their jobs, the
measure has repeatability.
• A high stability correlation, or consistency, between the scores obtained
at time 1 and at time 2 indicates a high degree of reliability (see the
sketch below).
• One caveat: the first measurement may sensitize the respondents to their
participation in a research project and subsequently influence the results
of the second measurement.
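• In practice, test-retest reliability is typically quantified as the
correlation between the two administrations. Below is a minimal Python
sketch of that computation; the scores, sample size, and variable names
are hypothetical, invented purely for illustration.

    import numpy as np

    # Hypothetical job-satisfaction scores for the same ten respondents,
    # measured a few weeks apart under similar conditions (illustrative data)
    time1 = np.array([64, 70, 55, 80, 62, 75, 58, 68, 72, 60])
    time2 = np.array([66, 69, 57, 78, 60, 74, 59, 70, 71, 61])

    # Test-retest reliability: Pearson correlation between the two administrations
    r = np.corrcoef(time1, time2)[0, 1]
    print(f"Test-retest reliability: r = {r:.2f}")  # a value near 1 suggests a stable measure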
b. Parallel-Form Reliability
• Parallel-form reliability is established when responses on two
comparable sets of measures tapping the same construct are highly correlated.
• It is also called equivalent-form reliability.
• Both forms have similar items and the same response
format, the only changes being the wording and
the order or sequence of the questions.
• What we try to establish here is the error
variability resulting from the wording and ordering of
the questions; the check is again a correlation, as sketched below.
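• As with test-retest reliability, the parallel-form check reduces to
correlating two sets of scores. A minimal Python sketch, using SciPy and
made-up scores on two hypothetical equivalent forms:

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical scores of the same eight respondents on two equivalent forms
    # (same items and response format; only wording and question order differ)
    form_a = np.array([18, 25, 22, 30, 15, 27, 20, 24])
    form_b = np.array([19, 24, 21, 29, 16, 26, 22, 23])

    # Parallel-form reliability: correlation between the two forms
    r, p = pearsonr(form_a, form_b)
    print(f"Parallel-form reliability: r = {r:.2f}")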
2. Internal Consistency of Measures
• Internal consistency of measures is indicative
of the homogeneity of the items in the
measure that tap the construct.
• The items should hang together as a set, and each should be
capable of independently measuring the same concept.
a. Inter-item Consistency Reliability
• This is a test of consistency of respondents’
answers to all the items in a measure.
• To the degree that the items are independent
measures of the same concept, they will be
correlated with one another; this is commonly summarized with
Cronbach's coefficient alpha, as sketched below.
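• The standard statistic here is Cronbach's alpha:
alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score),
for k items. A minimal Python sketch with NumPy; the Likert-style responses
below are invented for illustration.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-point responses: six respondents x four items tapping one concept
    scores = np.array([
        [4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 5, 5, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
        [3, 2, 3, 3],
    ])
    print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")  # closer to 1 = more consistent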
Validity
• When we ask a set of questions in the hope of tapping a
concept, how can we be reasonably certain that we are indeed
measuring the concept we set out to measure and not something else?
• Validity is the ability of an instrument (for example, one
measuring an attitude) to measure what it is supposed to measure.
• Three types of validity are distinguished:
1. content validity
2. criterion-related validity
3. construct validity
1. Content Validity
• Content validity ensures that the measure includes an adequate and
representative set of items that tap the concept.
• How well the dimensions and elements of a concept
have been delineated.
• A panel of persons can attest to the content validity of the
instrument by judging how well it meets the standard.
• Face validity is considered a basic and minimum index of
content validity.
2. Criterion-Related Validity
• Criterion validity uses some standard or criterion to indicate a construct accurately.
• The validity of an indicator is verified by comparing it with another
measure of the same construct in which the researcher has confidence.
• Concurrent validity:
• An indicator must be associated with a preexisting indicator that is judged to be valid.
• For example, for a newly developed intelligence test to be concurrently valid, it should
be highly associated with existing IQ tests.
• This means that most people who score high on the old measure should also score
high on the new one, and vice versa.
• Predictive validity:
• Criterion validity whereby an indicator predicts future events that are logically related to a
construct is called predictive validity.
• It cannot be used for all measures.
• Consider, for example, admission tests: these are supposed to measure the scholastic
aptitude of candidates, that is, the ability to perform in the institution as well as in the subject.
• If the test has high predictive validity, then candidates who get high test scores will
subsequently do well in their subjects.
• If students with high scores perform the same as students with average or low scores, then
the test has low predictive validity (see the sketch below).
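• Both forms of criterion-related validity come down to correlating the
indicator with its criterion. A minimal Python sketch for the predictive
case; the test scores and first-year grades are hypothetical, invented
for illustration.

    import numpy as np

    # Hypothetical admission-test scores and later first-year grade averages
    test_scores = np.array([85, 62, 90, 70, 78, 55, 88, 65, 72, 95])
    first_year_gpa = np.array([3.6, 2.8, 3.8, 3.0, 3.2, 2.5, 3.7, 2.9, 3.1, 3.9])

    # Predictive validity: correlation of the test with the future criterion
    r = np.corrcoef(test_scores, first_year_gpa)[0, 1]
    print(f"Predictive validity: r = {r:.2f}")  # high r: high scorers later do well

• A concurrent-validity check would look the same, except that the criterion
(for example, an established IQ test) is measured at the same time rather than later.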
3. Construct Validity
• Construct validity is for measures with multiple indicators.
• It addresses the question: if the measure is valid, do the various indicators
operate in a consistent manner?
• Convergent Validity:
• This kind of validity applies when multiple indicators converge or are
associated with one another.
• Convergent validity means that multiple measures of the same construct
hang together or operate in similar ways.
• It is established when the scores obtained from two different scales
measuring the same concept are highly correlated.
• Discriminant Validity:
• Also called divergent validity.
• It is established when, based on theory, two variables are predicted to
be uncorrelated, and the scores obtained by measuring them are indeed
found to be so (see the sketch below).
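• Both checks again reduce to correlations: high for convergent validity,
near zero for discriminant validity. A minimal Python sketch using simulated
data; the construct, scale names, and noise levels are assumptions made
purely for illustration.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200

    # Simulate a latent construct (say, job satisfaction) and two scales tapping it
    latent = rng.normal(size=n)
    scale_a = latent + rng.normal(scale=0.5, size=n)  # scale A of the construct
    scale_b = latent + rng.normal(scale=0.5, size=n)  # scale B of the same construct
    unrelated = rng.normal(size=n)                    # theoretically unrelated construct

    r_conv = np.corrcoef(scale_a, scale_b)[0, 1]      # expect high: convergent validity
    r_disc = np.corrcoef(scale_a, unrelated)[0, 1]    # expect near zero: discriminant validity
    print(f"Convergent r = {r_conv:.2f}, discriminant r = {r_disc:.2f}")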