SAGE Journal Articles

Click on the following links. Please note these will open in a new window.

Journal Article 1: Loosveldt, G., & Beullens, K. (2014). A procedure to assess interviewer effects on nonresponse bias. SAGE Open, 4(1).

Abstract: It is generally accepted that interviewers have a considerable effect on survey response. The difference between response success and failure not only affects the response rate, but can also influence the composition of the realized sample or respondent set, and consequently introduce nonresponse bias. To measure these two different aspects of the obtained sample, response propensities are used. They have an aggregate mean and variance that can both be used to construct quality indicators for the obtained sample of respondents. Because these propensities can also be measured at the interviewer level, they allow evaluation of the interviewer group and of the extent to which individual interviewers contribute to a biased respondent set. In this article, a procedure based on a multilevel model with random intercepts and random slopes is elaborated and illustrated. The results show that the procedure is informative for detecting influential interviewers with an impact on nonresponse bias.
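The key ingredients of the procedure are each sample unit's estimated response propensity and the mean and variance of those propensities, computed overall and per interviewer. The sketch below is only a rough illustration of that idea, not the authors' code: it substitutes a plain logistic regression for the multilevel (random-intercept, random-slope) model used in the paper, and the file name and column names are hypothetical.

```python
# Illustrative sketch: response propensities summarized per interviewer.
# The paper fits a multilevel model with random intercepts and slopes; a
# plain logistic regression stands in here, and all names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical fieldwork file: one row per sampled unit.
df = pd.read_csv("fieldwork.csv")  # columns: responded, age, urban, interviewer_id

X = df[["age", "urban"]]  # auxiliary variables known for respondents and nonrespondents
y = df["responded"]       # 1 = interview obtained, 0 = nonresponse

# Estimate each unit's response propensity.
df["propensity"] = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

# Aggregate mean and variance of the propensities, overall and per interviewer:
# a low mean or high variance for an interviewer flags a potentially biased respondent set.
print(df["propensity"].agg(["mean", "var"]))
print(df.groupby("interviewer_id")["propensity"].agg(["mean", "var"]))
```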

Journal Article 2: Alwin, D. F., & Beattie, B. A. (2016). The KISS principle in survey design: Question length and data quality. Sociological Methodology, 46(1), 121–152.

Abstract: Writings on the optimal length for survey questions are characterized by a variety of perspectives and very little empirical evidence. Where evidence exists, support seems to favor lengthy questions in some cases and shorter ones in others. However, on the basis of theories of the survey response process, the use of an excessive number of words may get in the way of the respondent’s comprehension of the information requested, and because of the cognitive burden of longer questions, there may be increased measurement errors. Results are reported from a study of reliability estimates for 426 (exactly replicated) survey questions in face-to-face interviews in six large-scale panel surveys conducted by the University of Michigan’s Survey Research Center. The findings suggest that, at least with respect to some types of survey questions, there are declining levels of reliability for questions with greater numbers of words and provide further support for the advice given to survey researchers that questions should be as short as possible, within constraints defined by survey objectives. Findings reinforce conclusions of previous studies that verbiage in survey questions--either in the question text or in the introduction to the question--has negative consequences for the quality of measurement, thus supporting the KISS principle (“keep it simple, stupid”) concerning simplicity and parsimony.
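The empirical relationship at the heart of the paper is between a question's length (number of words) and its estimated reliability. A minimal sketch of how one might look at that association, using entirely hypothetical question texts and reliability values rather than the authors' data:

```python
# Illustrative sketch: relate question word count to reliability estimates.
# The data frame, question texts, and reliability values are hypothetical.
import pandas as pd

questions = pd.DataFrame({
    "question_text": [
        "How old are you?",
        "In general, how satisfied are you with your life as a whole these days?",
        "Thinking back over the last twelve months, and taking everything into account, how would you rate your overall health compared with other people your age?",
    ],
    "reliability": [0.92, 0.71, 0.58],  # hypothetical reliability estimates
})

# Count words in each question text.
questions["n_words"] = questions["question_text"].str.split().str.len()

# A negative correlation would be consistent with the paper's finding that
# longer questions tend to be measured less reliably.
print(questions[["n_words", "reliability"]].corr())
```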

Journal Article 3: Bordacconi, M. J., & Larsen, M. L. (2014). Regression to causality: Regression-style presentation influences causal attribution. Research & Politics, 1(2).

Abstract: Humans are fundamentally primed for making causal attributions based on correlations. This implies that researchers must be careful to present their results in a manner that inhibits unwarranted causal attribution. In this paper, we present the results of an experiment that suggests regression models--one of the primary vehicles for analyzing statistical results in political science--encourage causal interpretation. Specifically, we demonstrate that presenting observational results in a regression model, rather than as a simple comparison of means, makes causal interpretation of the results more likely. Our experiment drew on a sample of 235 university students from three different social science degree programs (political science, sociology and economics), all of whom had received substantial training in statistics. The subjects were asked to compare and evaluate the validity of equivalent results presented as either regression models or as a t-test of two sample means. Our experiment shows that the subjects who were presented with results as estimates from a regression model were more inclined to interpret these results causally. Our experiment implies that scholars using regression models should note carefully both their models’ identifying assumptions and which causal attributions can safely be concluded from their analysis.
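The experimental manipulation turns on the fact that the same two-group comparison can be presented either as a t-test of two sample means or as a coefficient in a bivariate regression; the two are statistically equivalent, only the framing differs. A hedged sketch of that equivalence using simulated data (not the study's materials):

```python
# Illustrative sketch: one two-group difference, two presentations.
# Data are simulated; they are not from the experiment described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat([0, 1], 100),
    "outcome": np.concatenate([rng.normal(5.0, 1.0, 100), rng.normal(5.5, 1.0, 100)]),
})

# Presentation 1: a comparison of two sample means (t-test).
t, p = stats.ttest_ind(df.loc[df.group == 1, "outcome"],
                       df.loc[df.group == 0, "outcome"])
print(f"t-test: t = {t:.2f}, p = {p:.3f}")

# Presentation 2: the same difference reported as a regression coefficient.
# The coefficient on 'group' equals the difference in means, and its
# t-statistic matches the equal-variance t-test above.
res = smf.ols("outcome ~ group", data=df).fit()
print(f"regression: b = {res.params['group']:.2f}, t = {res.tvalues['group']:.2f}")
```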