Discovering Statistics Using IBM SPSS Statistics
by Andy Field
Cramming Sam's top tips from chapter 6
Skewness and kurtosis
- To check that the distribution of scores is approximately normal, look at the values of skewness and kurtosis in the output.
- Positive values of skewness indicate too many low scores in the distribution, whereas negative values indicate a build-up of high scores.
- Positive values of kurtosis indicate a heavy-tailed distribution, whereas negative values indicate a light-tailed distribution.
- The further the value is from zero, the more likely it is that the data are not normally distributed.
- You can convert these values to z-scores by dividing each by its standard error. If the resulting score (ignoring the minus sign) is greater than 1.96 then it is significant (p < 0.05); a sketch of this conversion follows the list.
- Significance tests of skew and kurtosis should not be used in large samples (because they are likely to be significant even when skew and kurtosis are not too different from normal).
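A minimal sketch of this conversion in Python, using the approximate standard-error formulas SE(skew) ≈ √(6/N) and SE(kurtosis) ≈ √(24/N). SPSS computes slightly more exact small-sample standard errors, so its output will differ a little; the data here are made up for illustration.

```python
import numpy as np
from scipy import stats

# Illustrative data: a positively skewed sample (not from the book).
rng = np.random.default_rng(42)
scores = rng.exponential(scale=2.0, size=100)

n = len(scores)
skew = stats.skew(scores)        # sample skewness
kurt = stats.kurtosis(scores)    # excess kurtosis (0 = normal)

# Approximate standard errors (SPSS uses exact small-sample formulas).
se_skew = np.sqrt(6 / n)
se_kurt = np.sqrt(24 / n)

z_skew = skew / se_skew
z_kurt = kurt / se_kurt

# |z| > 1.96 is significant at p < .05 (two-tailed) -- but remember the
# warning above about large samples.
for label, z in [("skewness", z_skew), ("kurtosis", z_kurt)]:
    flag = "significant" if abs(z) > 1.96 else "not significant"
    print(f"{label}: z = {z:.2f} ({flag} at p < .05)")
```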
Normality tests
- The K-S test can be used (but shouldn’t be) to see if a distribution of scores significantly differs from a normal distribution.
- If the K-S test is significant (Sig. in the SPSS table is less than 0.05) then the scores are significantly different from a normal distribution.
- If the test is non-significant (Sig. greater than 0.05) then the distribution is not significantly different from a normal distribution (i.e., it is probably approximately normal).
- The Shapiro–Wilk test does much the same thing, but it has more power to detect differences from normality (so this test might be significant when the K-S test is not).
- Warning: In large samples these tests can be significant even when the scores are only slightly different from a normal distribution. Therefore, I don’t particularly recommend them and they should always be interpreted in conjunction with histograms, P-P or Q-Q plots, and the values of skew and kurtosis. (Both tests are sketched in code after this list.)
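A minimal sketch of running both tests with SciPy, reusing the illustrative `scores` array from the previous sketch. Note that SPSS’s K-S test (in Explore) applies the Lilliefors correction for estimated parameters, whereas the plain `kstest` below compares against a normal distribution with the sample mean and standard deviation plugged in, so the p-values will not match SPSS exactly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores = rng.exponential(scale=2.0, size=100)   # illustrative data, as above

# Kolmogorov-Smirnov test against a normal with the sample's mean and SD.
# (SPSS applies the Lilliefors correction, so its Sig. value will differ.)
ks_stat, ks_p = stats.kstest(scores, 'norm',
                             args=(scores.mean(), scores.std(ddof=1)))

# Shapiro-Wilk test (generally more powerful).
sw_stat, sw_p = stats.shapiro(scores)

print(f"K-S:          D = {ks_stat:.3f}, p = {ks_p:.4f}")
print(f"Shapiro-Wilk: W = {sw_stat:.3f}, p = {sw_p:.4f}")
# p < 0.05 => the distribution differs significantly from normal, but interpret
# alongside histograms, P-P/Q-Q plots, skew and kurtosis (see warning above).
```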
Homogeneity of variance
- Homogeneity of variance/homoscedasticity is the assumption that the spread of outcome scores is roughly equal at different points on the predictor variable.
- The assumption can be evaluated by looking at a plot of the standardized predicted values from your model against the standardized residuals (zpred vs. zresid).
- When comparing groups, this assumption can be tested with Levene’s test and the variance ratio (Hartley’s Fmax), both of which are sketched after this list.
- If Levene’s test is significant (Sig. in the SPSS table is less than 0.05) then the variances are significantly different in different groups.
- Otherwise, homogeneity of variance can be assumed.
- The variance ratio is the largest group variance divided by the smallest. This value needs to be smaller than the critical values in the additional material.
- Warning: There are good reasons not to use Levene’s test or the variance ratio. In large samples they can be significant when group variances are similar, and in small samples they can be non-significant when group variances are very different.
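A minimal sketch of Levene’s test and the variance ratio for three illustrative groups. SPSS’s Levene statistic centres the scores on the group means, so `center='mean'` is passed here (SciPy’s default, `center='median'`, is the Brown–Forsythe variant); the groups are made up for illustration.

```python
import numpy as np
from scipy import stats

# Illustrative groups with increasingly different spreads (not from the book).
rng = np.random.default_rng(42)
group_a = rng.normal(loc=10, scale=2, size=30)
group_b = rng.normal(loc=10, scale=3, size=30)
group_c = rng.normal(loc=10, scale=5, size=30)
groups = [group_a, group_b, group_c]

# Levene's test; center='mean' mirrors the classic Levene statistic SPSS reports.
lev_stat, lev_p = stats.levene(*groups, center='mean')
print(f"Levene: F = {lev_stat:.3f}, p = {lev_p:.4f}")
# p < 0.05 => group variances differ significantly (but see the warning above).

# Variance ratio (Hartley's Fmax): largest group variance / smallest.
variances = [np.var(g, ddof=1) for g in groups]
f_max = max(variances) / min(variances)
print(f"Hartley's Fmax = {f_max:.2f}")
# Compare Fmax to the critical values tabulated in the book's additional material.
```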