Does this look familiar? It should. It is nothing more than a pretest-posttest control group design set on top of a posttest-only control group design. (Remember what I said earlier about experimental design and "building block" approaches?)

Before we get started, let me refresh your memory about violations of statistical independence. I will be discussing several interesting tests that can be made on Solomon Four-group data, but keep in mind that running multiple tests on the same dataset must be handled in a special way. You can't simply keep retesting the same data: doing so violates the statistical independence of the results and produces t values (or F values, or chi-square values) that are incorrect and misleading.

Notice that I have added numeric subscripts to the Os ("observations," or measurements).

Although no single known statistical test can take advantage of all the data in the Solomon Four-group design, a simple analysis of variance (ANOVA) on the four posttests helps to answer several interesting questions. (The ANOVA is discussed quite thoroughly in the Hays text I referenced earlier. If you don't have a copy of Hays, you should get one as soon as possible. He's the best there is.)

In the language of ANOVA, you hear the terms "main effects" and "interaction effects". In this example, the main effects are the testing effect (row means) and the group effect (column means, i.e., experimental or X versus control or no X). The interaction effect in this example shows up in the pattern of the "cell" means (another common ANOVA term), the mean of the observations in each little square (O2, O4, O5, and O6).
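To make the row, column, and cell means concrete, here is a minimal sketch of a two-way ANOVA on the four posttests, written in plain Python. The scores are invented for illustration, and the cell labels (pretested vs. unpretested, X vs. no X) are my own naming, not Hays's notation.

```python
# Two-way ANOVA on the four Solomon posttests (balanced 2x2 design).
# All scores below are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

# Rows: pretested vs. unpretested; columns: X (treatment) vs. no X.
cells = {
    ("pretested",   "X"):    [14, 15, 13, 16],  # O2
    ("pretested",   "no X"): [10, 11,  9, 10],  # O4
    ("unpretested", "X"):    [13, 14, 12, 13],  # O5
    ("unpretested", "no X"): [10,  9, 11, 10],  # O6
}

n = 4                      # scores per cell
rows = ["pretested", "unpretested"]
cols = ["X", "no X"]

grand = mean([x for xs in cells.values() for x in xs])
row_means = {r: mean([x for c in cols for x in cells[(r, c)]]) for r in rows}
col_means = {c: mean([x for r in rows for x in cells[(r, c)]]) for c in cols}
cell_means = {k: mean(v) for k, v in cells.items()}

# Sums of squares for a balanced 2x2 design.
ss_rows = n * len(cols) * sum((row_means[r] - grand) ** 2 for r in rows)
ss_cols = n * len(rows) * sum((col_means[c] - grand) ** 2 for c in cols)
ss_inter = n * sum(
    (cell_means[(r, c)] - row_means[r] - col_means[c] + grand) ** 2
    for r in rows for c in cols
)
ss_within = sum(
    (x - cell_means[(r, c)]) ** 2
    for r in rows for c in cols for x in cells[(r, c)]
)

df_within = len(rows) * len(cols) * (n - 1)   # = 12
ms_within = ss_within / df_within

# Each effect has 1 degree of freedom in a 2x2 design, so MS = SS.
f_rows = ss_rows / ms_within    # testing (pretest) main effect
f_cols = ss_cols / ms_within    # treatment main effect
f_inter = ss_inter / ms_within  # pretest-by-treatment interaction

print(f"F(rows) = {f_rows:.2f}, F(cols) = {f_cols:.2f}, F(inter) = {f_inter:.2f}")
```

With these made-up scores the treatment effect (column means) dominates, while the testing effect and the interaction are small. Each F ratio would then be compared against the critical value of F with (1, 12) degrees of freedom.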

If the row means are significantly different, simply testing your participants before starting the experiment caused a significant change by itself. If the column means are significantly different, your experiment produced an effect relative to the control group condition (translated: you were successful!). If the interaction is significant (the cell means depart from what the row and column means alone would predict), pretesting had an impact, but that impact was different for the experimental group than for the control group.
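The three conclusions above can be sketched as a small decision helper. This is only an illustration with names of my own invention; the critical value used (about 4.75 for F with 1 and 12 degrees of freedom at the .05 level) assumes the 2x2 design with four scores per cell discussed here.

```python
# Translate the three F tests into plain-language conclusions.
# F_CRIT is the approximate .05 critical value of F(1, 12).
F_CRIT = 4.75

def interpret(f_rows, f_cols, f_inter):
    """Report which effects in the 2x2 posttest ANOVA are significant."""
    conclusions = []
    if f_rows > F_CRIT:
        conclusions.append("pretesting alone changed scores (testing effect)")
    if f_cols > F_CRIT:
        conclusions.append("the treatment produced an effect relative to the control")
    if f_inter > F_CRIT:
        conclusions.append("the pretest's impact differed between the experimental and control groups")
    return conclusions or ["no significant effects"]

# Example: a large treatment effect, no testing effect, no interaction.
print(interpret(f_rows=2.45, f_cols=61.36, f_inter=2.45))
```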

That's enough for now. You can see that experiments can be designed to be extremely powerful and that there are statistical treatments appropriate for any given design. Let's head back to the main experimental design page and take a look at the summary.
