SAGE Journal Articles
A potential misinterpretation regarding measures of central tendency was identified in several health sciences textbooks presenting basic statistical procedures. The misinterpretation involves measures of central tendency derived from skewed unimodal sample distributions. The reviewed textbooks state or imply that in asymmetrical distributions the median is always located between the mode and mean. An example is presented illustrating the fallacy of this assumption. The mean and median will always be to the right of the mode in a positively skewed unimodal distribution and to the left of the mode in a negatively skewed distribution, but the order of the mean and median is impossible to predict or generalize. The assumption that the median always falls between the mode and mean in the calculation of coefficients of skewness has implications for the interpretation of health sciences research.
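The fallacy described above can be checked numerically. The data set below is a hypothetical construction (not taken from the article): it is unimodal and positively skewed, yet its median lies to the right of both the mode and the mean, so the median is not between the mode and the mean.

```python
import statistics

# Hypothetical illustrative data (our own construction, not the article's example):
# mode = 1 (10 occurrences), one high outlier creates positive skew.
data = [1] * 10 + [2] * 8 + [2.2] * 6 + [10]

mode = statistics.mode(data)      # 1  (most frequent value)
median = statistics.median(data)  # 2  (13th of 25 sorted values)
mean = statistics.mean(data)      # 1.968

# Fisher-Pearson moment coefficient of skewness: positive => right skew
n = len(data)
m2 = sum((x - mean) ** 2 for x in data) / n
m3 = sum((x - mean) ** 3 for x in data) / n
skewness = m3 / m2 ** 1.5

assert skewness > 0    # positively skewed...
assert mean < median   # ...yet the mean falls BELOW the median,
# so the median (2) is NOT between the mode (1) and the mean (1.968).
print(mode, median, round(mean, 3), round(skewness, 2))
```

The single large value dominates the cubed deviations (making skewness positive) while barely shifting the mean, which is how the textbook ordering mode < median < mean breaks down.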
Sinacore, J. M., Chang, R. W., & Falconer, J. (1992). Seeing the forest despite the trees: The benefit of exploratory data analysis to program evaluation research. Evaluation & the Health Professions, 15, 131-146.
In the present article, it is argued that there is a benefit to applying techniques of exploratory data analysis (EDA) to program evaluation. To exemplify this, an evaluation of a rehabilitation program for people with rheumatoid arthritis is presented. The perceived health status of patients receiving intensive rehabilitation services from a major rehabilitation institute was compared with that of patients receiving customary office-based care over an 18-month period. The data were analyzed in a conventional way (analysis of variance) and then by way of EDA techniques (graphic display of medians and boxplots). The conventional analysis suggested that all patients improved over time and that intensive rehabilitation services provided no particular benefit or harm. The exploratory analysis showed that the distribution of the outcome variable was patently nonnormal, thus casting doubt on the validity of the conventional analysis. The EDA further showed that the rehabilitation group lagged behind the comparison group for a year, with a precipitous improvement at the 18-month period. This suggests that a selection factor was operating (i.e., those in the rehabilitation group could have been sicker) or that the patients in the rehabilitation group were made more aware of their condition by the intensive health services they received. The EDA provided an important insight.
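The EDA approach the abstract describes rests on median-based displays rather than means. A minimal sketch, using invented outcome scores (the article's actual data are not reproduced here), shows how the five-number summary underlying a boxplot exposes the kind of non-normality that a mean-based analysis conceals:

```python
import statistics

# Hypothetical outcome scores at one follow-up point (invented for illustration).
# The five-number summary is what a boxplot draws.
scores = [12, 14, 15, 15, 16, 17, 18, 19, 21, 24, 30, 45, 60]

q1, q2, q3 = statistics.quantiles(scores, n=4)  # quartiles (exclusive method)
summary = {
    "min": min(scores),
    "Q1": q1,
    "median": q2,
    "Q3": q3,
    "max": max(scores),
}

# A long upper whisker (max far above Q3) flags right skew: the mean (~23.5)
# sits well above the median (18), so an ANOVA on these scores could mislead.
print(summary)
```

Computing this summary per group and per time point, as the authors did graphically, is what revealed the rehabilitation group's year-long lag and late improvement.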
Hambrick, J. P., Rodebaugh, T. L., Balsis, S., Woods, C. M., Mendez, J. L., & Heimberg, R. G. (2010). Cross-ethnic measurement equivalence of measures of depression, social anxiety, and worry. Assessment, 17, 155-171.
Although study of clinical phenomena in individuals from different ethnic backgrounds has improved over the years, African American and Asian American individuals continue to be underrepresented in research samples. Without adequate psychometric data about how questionnaires perform in individuals from different ethnic samples, findings from both within and across groups are arguably uninterpretable. Analyses based on item response theory (IRT) allow us to make fine-grained comparisons of the ways individuals from different ethnic groups respond to clinical measures. This study compared response patterns of African American and Asian American undergraduates to White undergraduates on measures of depression, social anxiety, and worry. On the Beck Depression Inventory—II, response patterns for African American participants were roughly equivalent to the response patterns of White participants. On measures of worry and social anxiety, there were substantial differences, suggesting that the use of these measures in African American and Asian American populations may lead to biased conclusions.
Syntheses of research on educational programs have taken on increasing policy importance. Procedures for performing such syntheses must therefore produce reliable, unbiased, and meaningful information on the strength of evidence behind each program. Because evaluations of any given program are few in number, syntheses of program evaluations must focus on minimizing bias in reviews of each study. This article discusses key issues in the conduct of program evaluation syntheses: requirements for research design, sample size, adjustments for pretest differences, duration, and use of unbiased outcome measures. It also discusses the need to balance factors such as research designs, effect sizes, and numbers of studies in rating the overall strength of evidence supporting each program.