SAGE Journal Articles

Click on the following links. Please note these will open in a new window.

Journal Article 1: Hope, T. (2004). Pretend it works: Evidence and governance in the evaluation of the Reducing Burglary Initiative. Criminal Justice, 4(3), 287-308.

Abstract: This article is about the use of evidence from evaluation research undertaken on, and as part of, the Home Office Reducing Burglary Initiative. More generally, it is a case study of the uses and status of “scientific” evidence in politics. The article reports methods and findings regarding burglary reduction projects evaluated by the “Midlands Consortium” of academic researchers. These are compared with interpretations derived from re-analysis of the data presented in reports published by the Home Office. Specifically, it illustrates what might happen when responsibility for validating policy--that is, for establishing “what works”--is placed in the hands of (social) science, but the evidence produced is not, apparently, congenial to the particular “network of governance” responsible for the policy. The outcome for evidence-based policy making in these circumstances is that scientific discourse and method themselves fall victim to policy pressures and values. The concerns of this article are placed in the context of Ulrich Beck’s (1992) discussion of “reflexive scientization” in the governance of risk society.

Journal Article 2: Sorsby, A., Shapland, J., & Robinson, G. (2016). Using compliance with probation supervision as an interim outcome measure in evaluating a probation initiative. Criminology & Criminal Justice, 17(1), 40-61.

Abstract: This article addresses the issues involved in using compliance with probation supervision as an interim outcome measure in evaluation research. We address the complex nature of compliance and what it implies. As in much research on probation, and criminal justice more generally, it was not possible to use random assignment to treatment and comparison groups in the case study we address, which evaluated the SEED training programme. We therefore compare two data analysis methods for adjusting for prior underlying differences between groups: regression adjustment for covariates that are related to the outcome measure in the sample data, and regression adjustment using propensity scores derived from a wide range of baseline variables. The propensity score method allows control of a wider range of baseline variables, including those that do not differ significantly between the two groups.
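To make the contrast between the two adjustment strategies concrete, here is a minimal, hypothetical Python sketch on simulated data. It is not the authors' analysis: the covariates, coefficients, and the simulated binary "compliance" outcome are all illustrative assumptions.

```python
# Sketch (assumed, not from the paper): (1) regression adjustment for
# covariates related to the outcome vs. (2) regression adjustment using
# an estimated propensity score, on simulated non-randomised data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Hypothetical baseline covariates (e.g. age, prior record, risk score).
X = rng.normal(size=(n, 3))

# Non-random assignment: treatment probability depends on the baselines,
# mimicking the absence of random assignment noted in the abstract.
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
treat = rng.binomial(1, p_treat)

# Binary "compliance" outcome with a true treatment effect of 0.4 (logit).
logit_y = -0.2 + 0.4 * treat + 0.6 * X[:, 0] + 0.3 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_y)))

# (1) Regression adjustment: outcome model with treatment plus covariates.
design1 = sm.add_constant(np.column_stack([treat, X]))
fit1 = sm.Logit(y, design1).fit(disp=0)
print("covariate adjustment, treatment coef:", round(fit1.params[1], 3))

# (2) Propensity-score adjustment: model treatment from the baselines,
# then adjust the outcome model for the estimated score.
ps_fit = sm.Logit(treat, sm.add_constant(X)).fit(disp=0)
pscore = ps_fit.predict(sm.add_constant(X))
design2 = sm.add_constant(np.column_stack([treat, pscore]))
fit2 = sm.Logit(y, design2).fit(disp=0)
print("propensity-score adjustment, treatment coef:", round(fit2.params[1], 3))
```

The design point the abstract makes shows up in step (2): however many baseline variables there are, they are summarised in a single estimated score, which is why the propensity score method can accommodate a wider range of baseline variables than direct covariate adjustment.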

Journal Article 3: Hollin, C. R. (2008). Evaluating offending behaviour programmes: Does only randomization glister? Criminology & Criminal Justice, 8(1), 89-106.

Abstract: Despite considerable investment, there has been a marked reluctance by the Home Office to publish the evaluations of the various Pathfinder Programmes. Arguably, this reluctance stems from the “official” view that the commissioned researchers conducted the wrong type of research, specifically in not using randomized control trials (RCTs). The utility of RCTs is considered here with particular reference to the evaluation of the Offending Behaviour Pathfinder Programmes. It is argued that the “Scientific Methods Scale” favoured by the Home Office, which privileges RCTs, is seriously flawed and is used to present a misleading view of the extant research. An overview of the wider literature shows that RCTs are not uniformly agreed to be the single design of choice in evaluating complex interventions such as offending behaviour programmes. The trend in disciplines such as the clinical sciences, with a history steeped in RCTs, is to utilize a range of research designs, both quantitative and qualitative, to evaluate complex interventions.