Question 2. What effect do you think this might have on the P value?
They have ignored the pairing of the data, so they have ignored some important information. If they were to take the pairing into account, using all the information, they might have a more powerful study. This could decrease the P value.
On the other hand, they have inflated the sample size by doubling it. Reducing the sample size to the true 105 might increase the P value.
The correct analysis for comparing two proportions in paired samples is McNemar's test, which we have not covered in this course. It is a version of the sign test for dichotomous data.
To do McNemar's test we usually redraw the table in a paired structure, like this:
| Ambulatory | Clinic: Responder | Clinic: Non-responder | Total |
|---|---|---|---|
| Responder | 43 | 12 | 55 |
| Non-responder | 0 | 50 | 50 |
| Total | 43 | 62 | 105 |
There were 12 changes from non-response to response and none from response to non-response. McNemar's test gives P = 0.0015.
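As a rough check, the quoted P value can be reproduced from the discordant pairs alone. The sketch below (Python, not part of the original page) applies McNemar's chi-squared statistic with a continuity correction, one common form of the test, to the 12-versus-0 discordant counts:

```python
from math import erfc, sqrt

# Discordant pairs from the paired table above:
b = 12  # changed from non-response to response
c = 0   # changed from response to non-response

# McNemar's chi-squared statistic with continuity correction, 1 degree of freedom
chi2 = (abs(b - c) - 1) ** 2 / (b + c)

# Upper tail probability of chi-squared(1), via the complementary error function
p = erfc(sqrt(chi2 / 2))

print(round(chi2, 2), round(p, 4))  # 10.08 0.0015
```

The continuity-corrected statistic, (|12 - 0| - 1)^2 / 12 = 10.08, reproduces the P value of 0.0015 quoted above.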
You can also do this test using the table at the end of the Week 5 classroom exercise, Transformations. We have 12 subjects who changed, i.e. 12 non-zero differences; the number of changes in one direction is 12 and in the other it is zero. Zero is less than the critical value 2 in the table row for 12 non-zero differences, so the difference is significant, P < 0.05.
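The same sign-test logic can be worked through exactly rather than via the table. A minimal sketch, assuming that under the null hypothesis each change is equally likely to be in either direction:

```python
from math import comb

n = 12  # non-zero differences (subjects who changed)
k = 0   # number of changes in the less frequent direction

# One-tail binomial probability of k or fewer changes in that direction,
# under the null hypothesis p = 0.5, doubled for a two-sided test
p_one_tail = sum(comb(n, i) * 0.5 ** n for i in range(k + 1))
p_two_sided = 2 * p_one_tail

print(p_two_sided)  # 0.00048828125
```

With all 12 changes in one direction, the exact two-sided sign-test P value is 2 x 0.5^12, about 0.0005, comfortably below 0.05 and consistent with the table lookup.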
Fisher's exact test requires the observations to be independent. By using a test whose assumptions are not met by the data, the authors missed a highly significant effect.
This page maintained by Martin Bland.
Last updated: 10 August, 2006.