We shall turn to a discussion of how to analyze the results (statistics) in a moment, but I first want to harp a bit more on extraneous factors. Just a bit, I promise, but please read it.
Let's say, for example, that you actually are using this design, the Posttest-only Control Group Design, to examine the effectiveness of a new approach to educating adult-onset diabetics about the importance of controlling their blood sugar. You use this new approach with the experimental group, and you continue to do what you have always done with the control group.
Is your experiment "clean"? Maybe, but maybe not. Did you (or your colleagues) treat the control group differently than usual simply because you knew they were controls and that an experiment was in progress?
This is just one example of subtle biases that can creep in if you are not careful.
I promised - that's all. Let's turn to statistics.
Analyzing data from the Posttest-only Control Group Design is a piece of cake, at least as long as your data meet the assumptions of the statistical test you use. You have one summary measurement per group (e.g., the average (mean) in-office blood sugar level over a six-month period for the experimental and control groups), and you simply compare the measurement from the experimental group with the measurement from the control group. Of course, there will be some difference, so you want to know whether the difference is large enough that it is unlikely to have arisen by chance alone. This is where statistical tests come in.
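To make the comparison concrete, here is a minimal sketch of a two-sample comparison in Python. The blood sugar values are made up for illustration, and the `welch_t` function is written out by hand just to show what the t statistic measures; in practice you would use a statistics library (for example, `scipy.stats.ttest_ind`), which also reports a p-value.

```python
# A minimal sketch of comparing two groups, using hypothetical
# blood sugar values (mg/dL) invented for illustration only.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = variance(a), variance(b)  # sample variances
    na, nb = len(a), len(b)
    # Difference in group means, scaled by its standard error.
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# Hypothetical six-month mean blood sugar level per patient.
experimental = [118, 122, 115, 121, 119, 117]
control      = [131, 128, 135, 126, 133, 130]

t = welch_t(experimental, control)
print(round(t, 2))
```

A large-magnitude t (far from zero, in either direction) suggests the difference between group means is unlikely to be due to chance; a t near zero suggests the groups are statistically indistinguishable.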
I have prepared a graphic to help you understand the t-test intuitively before we get into a more technical discussion. Click here to see this graphic and read the discussion.