Chapter Summary

Chapter Objectives

6.1: Distinguish between a causal and spurious relationship.
6.2: Understand the three necessary components in a causal relationship.
6.3: Illustrate how the classical randomized experiment demonstrates that a research design can be used to verify a causal relationship.
6.4: Understand internal and external validity. 
6.5: Understand the distinction between qualitative and quantitative research design.
6.6: Understand the difference between effect-of-causes and causes-of-effects approaches to investigating causal relationships. 

  • A research design is a plan of action for executing a research project that specifies the theory to be tested, the unit of analysis (such as individual, organization, or country), the necessary observable data and how it will be collected, and the procedures that will be used to examine the data.
    • All the parts of a research design should work to the same end: drawing sound conclusions supported by observable evidence.
  • Spurious relationships arise when two things are both affected by a third factor and thus appear to be related.
  • When the additional factor has been identified and controlled for, the original relationship weakens or disappears altogether.
  • Distinguishing real, causal relations from spurious ones is an important part of scientific research.
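To make the idea concrete, here is a minimal Python sketch (not from the chapter; the variables and numbers are hypothetical) in which `x` and `y` share a common cause `z`. Their raw correlation is sizable, but once `z` is held roughly constant, the relationship largely disappears:

```python
import random

random.seed(42)

# Hypothetical illustration: z is a common cause of x and y.
# x and y have no direct causal link, yet they covary because both depend on z.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print("raw correlation:", round(corr(x, y), 2))  # sizable, roughly 0.5 here

# "Control" for z by looking only at cases where z is nearly constant:
subset = [(xi, yi) for xi, yi, zi in zip(x, y, z) if abs(zi) < 0.2]
xs, ys = zip(*subset)
print("correlation with z held near constant:", round(corr(xs, ys), 2))  # near 0
```

Holding the third factor constant is the simulation analogue of "controlling for" it; the original relationship weakens because it was spurious to begin with.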
  • Causal relationships have three characteristics:
    • Covariation: the alleged cause varies with the supposed effect.
    • Time order: the cause precedes the effect in time.
    • Elimination of alternative explanations: the research design should eliminate as many alternative explanations for the observed effect as possible to isolate causation to one factor.
  • The classical randomized experiment has several basic characteristics:
    • It begins with a sample.
    • The most important design component is the stimulus or test factor.
      • A stimulus, or test factor, is a condition applied to participants in an experiment in a way in which the researcher can measure some sort of effect.
    • There is at least one experimental group that will have exposure to the treatment and one control group that will not.
    • It randomly assigns individuals to each group, avoiding self-selection and guaranteeing that, on average, the groups will not differ in any respect.
    • It controls the administration of the treatment, including the circumstances under which the experimental group is exposed.
    • It establishes and measures a dependent variable before and after the treatment with a pretest and posttest. Because treatment groups are constituted through randomization, any difference between the pre- and posttests can be attributed to the experimental effect of exposure to the treatment.
    • It controls the environment of the experiment (time, location, and other physical aspects).
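The design above can be sketched in a few lines of simulation. This is a hedged illustration, not the chapter's own example: the effect size and score scale are made up. Random assignment equalizes the groups on average, so the difference in pre-to-post change between groups recovers the assumed treatment effect:

```python
import random

random.seed(0)

# Hypothetical sketch of the classical randomized experiment outlined above.
# The effect size and score scale are assumptions, not from the chapter.
TRUE_EFFECT = 2.0                                        # assumed treatment effect
subjects = [random.gauss(50, 10) for _ in range(2_000)]  # baseline scores

# Random assignment avoids self-selection: on average the groups match.
random.shuffle(subjects)
experimental, control = subjects[:1_000], subjects[1_000:]

# Pretest: measure the dependent variable before any treatment.
pre_exp, pre_ctl = list(experimental), list(control)

# Administer the stimulus to the experimental group only, then posttest both;
# each posttest measurement carries a little noise.
post_exp = [s + TRUE_EFFECT + random.gauss(0, 1) for s in experimental]
post_ctl = [s + random.gauss(0, 1) for s in control]

def mean(xs):
    return sum(xs) / len(xs)

# Because groups were formed by randomization, the difference in
# pre-to-post change between them estimates the experimental effect.
effect = (mean(post_exp) - mean(pre_exp)) - (mean(post_ctl) - mean(pre_ctl))
print(f"estimated effect: {effect:.2f}")                 # close to TRUE_EFFECT
```

The control group's pre-to-post change captures everything other than the stimulus (history, maturation, testing effects), which is why subtracting it isolates the treatment effect.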
  • Two important factors in judging the quality of a research design with respect to causal relationships are internal and external validity.
    • Internal validity refers to confidence that an observed relationship is genuinely causal and not the product of a spurious relationship.
      • Internal validity can be affected by several things:
        • Events other than the experimental stimulus that occur between the pretest and posttest measurements of the dependent variable.
        • Maturation: a change in subjects over time that might produce differences between experimental and control groups.
        • Test-subject interaction: the process of measuring the dependent variable prior to the experimental stimulus may itself affect the posttreatment scores of subjects.
        • Selection bias: bias due to the assignment of subjects to experimental and control groups according to some criterion and not randomly.
        • Experimental mortality: a differential loss of subjects from experimental and control groups that affects the equivalency of groups.
        • Demand characteristics: aspects of the research situation that cause participants to guess at the investigator’s goals and adjust their behavior or opinions accordingly.
    • External validity refers to the extent to which the results of an experiment can be generalized across populations, time, and settings.
  • There are two different ways to investigate causal questions:
    • The causes-of-effects approach starts with an outcome (e.g., war, election result, passage of a major piece of legislation) and works backward to the causes.
    • The effects-of-causes approach starts with a potential cause and works forward to measure its impact on the outcome.
  • One distinction between qualitative and quantitative studies is the number of cases, or units of analysis, included in the analysis: whether they are small N or large N studies.
    • Qualitative studies are small N, whereas quantitative studies can be small N or large N in scope.
  • The classical randomized experiment is an excellent starting point for understanding how a research project can be designed and executed to establish causation through covariation, time order, and the elimination of alternative explanations.