## Scheffé and Tukey Tests

When the decision from the One-Way Analysis of Variance is to reject the null hypothesis, it means that at least one of the means differs from the others. What we need is a way to figure out where the differences lie, not just that a difference exists.

This is where the Scheffé and Tukey tests come into play. They let us compare pairs of means to see whether they differ -- much like the test for the difference of two means covered earlier.

We will not be using either of these tests in the introduction to applied statistics course, but I wanted you to know that they are available. Some statistical packages will indicate where the differences lie for you.

### Hypotheses

Both tests are set up to test whether pairs of means are different: for each pair, the null hypothesis is that mean i and mean j are equal, and the alternative is that they differ. The values of i and j vary, and the total number of tests is the number of combinations of k objects taken 2 at a time, C(k,2), where k is the number of samples.
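For example, the number of pairwise comparisons grows quickly with the number of samples. A quick check (the value of k here is just an illustration):

```python
from math import comb

k = 4                     # hypothetical number of samples
num_tests = comb(k, 2)    # C(k, 2) pairwise comparisons
print(num_tests)          # 6 pairs to test when k = 4
```

So with four samples there are six pairs of means to compare, and with five samples there are ten.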

### Scheffé Test

The Scheffé test is customarily used with unequal sample sizes, although it could be used with equal sample sizes.

The critical value for the Scheffé test is the degrees of freedom for the between group variance times the critical value for the one-way ANOVA. This simplifies to:

```
CV = (k - 1) * F(k-1, N-k, alpha)
```

The test statistic is a little bit harder to compute. Pure mathematicians will argue that this shouldn't be called F because it doesn't have an F distribution (it's the degrees of freedom times an F), but we'll live with it.

Reject H0 if the test statistic is greater than the critical value. Note that this is a right-tail test. If there is no difference between the means, the numerator will be close to zero, so performing a left-tail test wouldn't show anything.
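Putting the pieces together, here is a minimal sketch of the Scheffé procedure. All the numbers (sample means, sample sizes, within-group variance, and the F critical value looked up from a table) are made up for illustration; the test statistic used is the standard Scheffé form, the squared difference of two means divided by the within-group variance times the sum of the reciprocals of the sample sizes.

```python
# Hypothetical summary values from a completed one-way ANOVA
means = [4.2, 5.6, 8.1]   # sample means (assumed)
ns = [5, 7, 6]            # unequal sample sizes (assumed)
s_w2 = 2.5                # within-group variance (MS within) from the ANOVA table
k = len(means)

# F(k-1, N-k, alpha) looked up from an F table; here F(2, 15, 0.05) is about 3.68
f_crit = 3.68
cv = (k - 1) * f_crit     # Scheffé critical value

def scheffe_stat(xbar_i, xbar_j, n_i, n_j, s_w2):
    """Scheffé test statistic for the pair (i, j)."""
    return (xbar_i - xbar_j) ** 2 / (s_w2 * (1 / n_i + 1 / n_j))

for i in range(k):
    for j in range(i + 1, k):
        fs = scheffe_stat(means[i], means[j], ns[i], ns[j], s_w2)
        print(f"pair ({i+1},{j+1}): Fs = {fs:.2f}, reject = {fs > cv}")
```

With these invented numbers, the pairs involving the third mean exceed the critical value, so those are the pairs where the differences lie.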

### Tukey Test

The Tukey test is only usable when the sample sizes are the same.

The critical value is looked up in a table. It is Table N in the Bluman text. There are actually several different tables, one for each level of significance. The number of samples, k, is used as an index along the top, and the degrees of freedom for the within group variance, v = N-k, are used as an index along the left side.

The test statistic is found by dividing the difference between the means by the square root of the ratio of the within group variation and the sample size.

Reject the null hypothesis if the absolute value of the test statistic is greater than the critical value (just like the linear correlation coefficient critical values).
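The Tukey procedure can be sketched the same way. Again, every number here is invented for illustration: the common sample size, within-group variance, and the critical value q(3, 15, 0.05), which would come from Table N, are all assumptions.

```python
from math import sqrt

# Hypothetical summary values; the Tukey test requires equal sample sizes
means = [4.2, 5.6, 8.1]   # sample means (assumed)
n = 6                     # common sample size (assumed)
s_w2 = 2.5                # within-group variance (MS within) from the ANOVA table
k = len(means)

# Critical value q(k, v, alpha) from the studentized range table
# (Table N in Bluman); here q(3, 15, 0.05) is about 3.67
q_crit = 3.67

def tukey_stat(xbar_i, xbar_j, s_w2, n):
    """Tukey test statistic: difference of means over sqrt(s_w^2 / n)."""
    return (xbar_i - xbar_j) / sqrt(s_w2 / n)

for i in range(k):
    for j in range(i + 1, k):
        q = tukey_stat(means[i], means[j], s_w2, n)
        print(f"pair ({i+1},{j+1}): q = {q:.2f}, reject = {abs(q) > q_crit}")
```

Note that the comparison uses the absolute value of q, since the difference of two means can be negative depending on the order of subtraction.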

## TI-82

The ANOVA program for the TI-82 will do all of the pairwise comparisons for you after it has given the ANOVA summary table. You will need to know how to find the critical values and make the comparisons.