SAGE Journal Articles


Journal Article 1: Purswell, K. E., & Ray, D. C. (2014). Research with small samples: Considerations for single case and randomized small group experimental designs. Counseling Outcome Research and Evaluation, 5, 116–126.

Abstract: Single case designs (SCDs) and randomized small group (RSG) designs are two options for researchers who have limited resources and who would like to demonstrate the experimental effect of an intervention. The authors address considerations for internal and external validity in each design and provide an overview of the strengths and limitations of the various statistical analyses in each design. Effective researchers are well-informed regarding research design and match small-n participant designs to appropriate research questions. Examples of research questions and research design are discussed.

Journal Article 2: Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2012). Single-case intervention research design standards. Remedial and Special Education, 34, 26–38.

Abstract: In an effort to responsibly incorporate evidence based on single-case designs (SCDs) into the What Works Clearinghouse (WWC) evidence base, the WWC assembled a panel of individuals with expertise in quantitative methods and SCD methodology to draft SCD standards. In this article, the panel provides an overview of the SCD standards recommended by the panel (henceforth referred to as the Standards) and adopted in Version 1.0 of the WWC’s official pilot standards. The Standards are sequentially applied to research studies that incorporate SCDs. The design standards focus on the methodological soundness of SCDs, whereby reviewers assign the categories of Meets Standards, Meets Standards With Reservations, and Does Not Meet Standards to each study. Evidence criteria focus on the credibility of the reported evidence, whereby the outcome measures that meet the design standards (with or without reservations) are examined by reviewers trained in visual analysis and categorized as demonstrating Strong Evidence, Moderate Evidence, or No Evidence. An illustration of an actual research application of the Standards is provided. Issues that the panel did not address are presented as priorities for future consideration. Implications for research and the evidence-based practice movement in psychology and education are discussed. The WWC’s Version 1.0 SCD standards are currently being piloted in systematic reviews conducted by the WWC. This document reflects the initial standards recommended by the authors as well as the underlying rationale for those standards. It should be noted that the WWC may revise the Version 1.0 standards based on the results of the pilot; future versions of the WWC standards can be found at http://www.whatworks.ed.gov.

Journal Article 3: St. Clair, T., Hallberg, K., & Cook, T. D. (2016). The validity and precision of the comparative interrupted time-series design: Three within-study comparisons. Journal of Educational and Behavioral Statistics, 41, 269–299.

Abstract: We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to those from a randomized controlled trial (RCT) that shares the same treatment group. The degree of correspondence between RCT and CITS estimates depends on the observed pretest time trend differences and how they are modeled. Where the trend differences are clear and can be easily modeled, no bias results; where the trend differences are more volatile and cannot be easily modeled, the degree of correspondence is more mixed, and the best results come from matching comparison units on both pretest and demographic covariates.