Types of research design - experiments. Chapter 8 in Babbie & Mouton (2001). Introduction to all research designs: all research designs have specific objectives they strive for, have different strengths and limitations, and raise validity considerations.
Validity considerations When we say that a knowledge claim (or proposition) is valid, we make a JUDGEMENT about the extent to which relevant evidence supports that claim to be true Is the interpretation of the evidence given the only possible one, or are there other plausible ones? "Plausible rival hypotheses" = potential alternative explanations/claims e.g. New York City's "zero tolerance" crime fighting strategy in the 1980s and 1990s - the reverse of the "broken windows" effect
The logic of causal social research in the controlled experiment Explanatory rather than descriptive Different from correlational research - one variable is manipulated (IV) and the effect of that manipulation observed on a second variable (DV) If … then …. E.g. "Animals respond aggressively to crowding" (causal) "People with premarital sexual experience have more stable marriages" (noncausal)
Three pairs of components: Independent and dependent variables Pre-testing and post-testing Experimental and control groups
Components. Variables: dependent (DV) and independent (IV). Pre-testing and post-testing: O X O. Experimental and control groups: to detect, and to offset, the effects of the experiment itself.
The generic experimental design:
R  O1  X  O2
R  O3      O4
The IV is an active variable; it is manipulated. The participants who receive one level of the IV are equivalent in all ways to those who receive other levels of the IV.
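The logic of this notation (R = random assignment, O = observation, X = treatment) can be sketched as a small simulation. All numbers below - the shared maturation/history change and the treatment effect - are assumed purely for illustration:

```python
import random

random.seed(7)

# Assumed quantities, for illustration only
common_change = 5       # change shared by both groups (history, maturation)
treatment_effect = 3    # effect of the manipulation X

def pre_scores(n):
    """Pretest scores (O1, O3) for a randomly assigned group."""
    return [random.gauss(50, 5) for _ in range(n)]

e_pre, c_pre = pre_scores(200), pre_scores(200)

# Only the experimental group receives X; both experience the common change
e_post = [s + common_change + treatment_effect + random.gauss(0, 1) for s in e_pre]
c_post = [s + common_change + random.gauss(0, 1) for s in c_pre]

def mean(xs):
    return sum(xs) / len(xs)

# (O2 - O1) - (O4 - O3): the control group's change subtracts out
# history and maturation, isolating the effect of X
estimated_effect = (mean(e_post) - mean(e_pre)) - (mean(c_post) - mean(c_pre))
```

The control group's pre-post change absorbs everything the two groups share, which is why the difference of differences recovers (approximately) only the treatment effect.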
Sampling 1. Selecting subjects to participate in the research Careful sampling to ensure that results can be generalized from sample to population The relationship found might only exist in the sample; need to ensure that it exists in the population Probability sampling techniques
Sampling 2. How the sample is divided into two or more groups is important: the groups must be similar when they start off. Randomization - every participant has an equal chance of assignment to each group. Matching - similar to quota sampling procedures; match the groups in terms of the most relevant variables, e.g. age, sex, and race.
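A minimal sketch of randomization, using hypothetical participant IDs: shuffling and then splitting gives every participant an equal chance of landing in either group.

```python
import random

def randomize(participants, seed=None):
    """Randomly split participants into experimental and control groups,
    giving each person an equal chance of ending up in either group."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participant IDs 0..19
experimental, control = randomize(range(20), seed=42)
```

Matching would instead pair participants on variables such as age, sex, and race before assignment; randomization is preferred where feasible because it equates the groups on unmeasured variables as well.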
Variations on the standard experimental design One-shot case study X O No real comparison
A famous one-group posttest-only design Milgram's study on obedience Obedience to authority The willingness of subjects to follow E's orders to give painful electrical shocks to another subject A real, important issue here: how could "ordinary" citizens, like many Germans during the Nazi period, do these incredibly cruel and brutal things? If a person is under allegiance to a legitimate authority, under what conditions will the person defy the authority if s/he is asked to carry out actions clearly incompatible with basic moral standards?
One-group pre-test post-test design: O1 X O2
Example. We want to find out whether a family literacy programme enhances the cognitive development of preschool-age children. Find 20 families with a 4-year-old child and enrol each family in a high-quality family literacy programme. Administer a pretest to the 20 children - they score a mean of, say, 50 on the cognitive test. The families participate in the programme for twelve months. Administer a post-test to the 20 children; now they score 75 on the test - a gain of 25.
Two claims/conclusions: 1 The children gained 25 points on average  in terms of their cognitive performance 2 the family literacy programme caused the gain in scores VALIDITY - rival explanations
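One rival explanation - maturation - can be made concrete with a hypothetical simulation. Here the programme contributes nothing at all, yet the one-group pre-test post-test design still observes a gain of about 25 points (all numbers are assumed for illustration):

```python
import random

random.seed(1)

# Assumed model: 4-year-olds' cognitive scores grow naturally over a year,
# with or without the programme. Numbers chosen for illustration only.
maturation_gain = 25     # natural cognitive growth over twelve months
programme_effect = 0     # suppose the programme adds nothing

pretest = [random.gauss(50, 5) for _ in range(20)]
posttest = [s + maturation_gain + programme_effect + random.gauss(0, 2)
            for s in pretest]

mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)
# mean_gain is roughly 25, even though the programme did nothing:
# without a control group, maturation is a plausible rival explanation.
```

Claim 1 (the children gained about 25 points) is supported by the data; claim 2 (the programme caused the gain) is not, because this design cannot separate the programme's effect from natural maturation.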
Static-group comparison:
X  O
    O
Evaluating research (experiments) We know the structure of research We understand designs We know the requirements of "good" research Then we can evaluate a study Is it good? Can we believe its conclusions? Back to plausible rival hypotheses
Validity in designs. If the design is not valid, then the conclusions drawn are not supported; it is like not doing research at all. The validity of designs comes in two parts: internal validity - can the design sustain the conclusions? External validity - can the conclusions be generalized to the population?
Internal validity. Each design is only capable of supporting certain types of conclusions, e.g. only experiments can support conclusions about causality. Says nothing about whether the results can be applied to the real world (generalization). Generally, the more controlled the situation, the higher the internal validity. The conclusions drawn from experimental results may not accurately reflect what has gone on in the experiment itself.
Sources of internal invalidity. These sources are often discussed as part of experiments, but can be applied to all designs (e.g. see reactivity). History: historical events may occur that will be confounded with the IV, especially in field research (compare the control in a laboratory, e.g. nonsense syllables in memory studies).
Maturation Changes over time can be caused by a natural learning process People naturally grow older, tired, bored, over time
Testing (reactivity) People realize they are being studied, and respond the way they think is appropriate The very act of studying something may change it In qualitative research, the "on stage" effects
The Hawthorne studies Improved performance because of the researcher's presence - people became aware that they were in an experiment, or that they were given special treatment Especially for people who lack social contacts, e.g. residents of nursing homes, chronic mental patients
Placebo effect. When a person expects a treatment or experience to change her/him, the person changes, even when the "treatment" is known to be inert or ineffective. Medical research. "The bedside manner", or the power of suggestion.
Experimenter expectancy Pygmalion effect - self-fulfilling prophecies of e.g. teachers' expectancies about student achievement Experimenters may prejudge their results - experimenter bias Double blind experiments: Both the researcher and the research participant are "blind" to the purpose of the study. They don't know what treatment the participant is getting
Instrumentation. Instruments with low reliability lead to inaccurate findings or missed phenomena. The measuring instrument itself may also change between pretest and posttest, e.g. human observers become more skilled over time, so scores shift because the measurement changed, not because the participants did.
Statistical regression to the mean Studying extreme scores can lead to inflated differences, which would not occur in moderate scorers
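A quick simulation makes this concrete. The score model is assumed (a stable true score plus transient noise): selecting the top decile on one test and simply re-testing them shows their mean falling back toward the population mean, with no treatment involved at all.

```python
import random

random.seed(0)

# Assumed model: observed score = stable true score + transient noise
true_scores = [random.gauss(100, 10) for _ in range(10000)]

def observe(trues):
    """One test administration: true score plus measurement noise."""
    return [t + random.gauss(0, 10) for t in trues]

test1 = observe(true_scores)
test2 = observe(true_scores)   # re-test, nothing changed in between

# Select the "extreme" top decile on the first test
cutoff = sorted(test1)[-1000]
extreme = [i for i in range(10000) if test1[i] >= cutoff]

mean1 = sum(test1[i] for i in extreme) / len(extreme)
mean2 = sum(test2[i] for i in extreme) / len(extreme)
# mean2 falls back toward 100: the apparent "decline" among extreme
# scorers is pure regression to the mean, not any real change.
```

A study that selects participants precisely because they scored extremely high or low will observe this artefactual shift even if its intervention does nothing.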
Selection biases. Selecting subjects for the study, and assigning them to the E-group and C-group. Look out for studies using volunteers.
Attrition. Sometimes called experimental (or subject) mortality. If subjects drop out, it creates a bias relative to those who did not. E.g. comparing the effectiveness of family therapy with discussion groups for the treatment of drug addiction: addicts with the worst prognosis are more likely to drop out of the discussion group, which will make it look like family therapy does less well than discussion groups, because the "worst cases" were still in the family therapy group.
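A hypothetical sketch of this attrition bias: both "treatments" below are generated identically, yet the discussion group looks better at posttest simply because its worst-prognosis members dropped out before being measured (all numbers assumed for illustration):

```python
import random

random.seed(3)

# Assumed outcome: months drug-free; worse prognosis -> lower score.
# Both groups are drawn from the SAME distribution: no real difference.
def make_group(n):
    return [random.uniform(0, 12) for _ in range(n)]

family_therapy = make_group(100)
discussion = make_group(100)

# Differential attrition: in the discussion group, the worst-prognosis
# addicts (lowest outcomes) drop out before the posttest.
discussion_completers = [x for x in discussion if x > 4]

mean_ft = sum(family_therapy) / len(family_therapy)
mean_dg = sum(discussion_completers) / len(discussion_completers)
# mean_dg > mean_ft even though the treatments are identical here:
# the comparison is biased because only completers were measured.
```

This is why evaluations report dropout rates per condition; comparing only completers can manufacture a difference where none exists, or hide one that does.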
Diffusion or imitation of treatments. When subjects can communicate with each other, they pass on information about the treatment (IV).
Compensation. In real life, people may feel sorry for the C-group, which does not get "the treatment", and try to give them something extra. E.g. compare usual day care for street children with an enhanced day treatment condition: service providers may very well complain about inequity, and provide some enhanced service to the children receiving usual care.
Compensatory rivalry C-group may "work harder" to compete better with the E-group
Demoralization Opposite to compensatory rivalry May feel deprived, and give up e.g. giving unemployed high school dropouts a second chance at completing matric via a special education programme if we assign some of them to a control group, who receive "no treatment", they may very well become profoundly demoralized
External validity Can the findings of the study be generalized? Do they speak only of our sample, or of a wider group? To what populations, settings, treatment variables (IV's), and measurement variables can the finding be generalized?
External validity   Mainly questions about three aspects: Research participants Independent variables, or manipulations Dependent variables, or outcomes Says nothing about the truth of the result that we are generalizing External validity only has meaning once the internal validity of a study has been established Internal validity is the basic minimum without which an experiment is uninterpretable
External validity   Our interest in answering research questions is rarely restricted to the specific situation studied - our interest is in the variables, not the specific details of a piece of research But studies differ in many ways, even if they study the same variables: operational definitions of the variables subject population studied procedural details observers settings Generally bigger samples with valid measures lead to better external validity
Sources of external invalidity. Subject selection: selecting a sample which does not represent the population well will prevent generalization. Interaction between the testing situation and the experimental stimulus: when people have been sensitized to the issues by the pre-test, they respond differently to the questionnaires the second time (post-test). Operationalization.
Operationalization   We take a variable with wide scope and operationalize it in a narrow fashion Will we find the same results with a different operationalization of the same variable?
Field experiments "natural" - e.g. disaster research Static-group comparison type  Non-equivalent experimental and control groups
Strengths and weaknesses. Strengths: control; manipulating the IV; sorting out extraneous variables. Weaknesses: artificiality - a generalization problem; expense; limited range of questions.
IN CONCLUSION Donald Campbell often cited Neurath's metaphor: "in science we are like sailors who must repair a rotting ship while it is afloat at sea. We depend on the relative soundness of all other planks while we replace a particularly weak one. Each of the planks we now depend on we will in turn have to replace. No one of them is a foundation, nor point of certainty, no one of them is incorrigible"
