This website is for students following the M.Sc. in Evidence Based Practice at the University of York.
183 students were observed twice, each time by a different student observer. The observers measured height (mm), arm circumference (mm), head circumference, and pulse (beats/min), and recorded sex and eye colour (black, brown, blue, grey, hazel, green, other). They entered these data into a computer file, with eye colour and sex entered as numerical codes.
The following table shows eye colour recorded by the two observers:
[Table: eye colour recorded by the first observer cross-tabulated against eye colour recorded by the second observer, with totals]
The Stata output for the kappa statistic for this table is:
```
. kap eye1 eye2

             Expected
Agreement    Agreement     Kappa   Std. Err.         Z      Prob>Z
-----------------------------------------------------------------
  79.23%      26.16%     0.7188     0.0385      18.69     0.0000
```
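The quantities in this output come from the standard kappa formula: observed agreement is the proportion of subjects on the diagonal of the table, expected agreement comes from the marginal totals, and kappa is the agreement beyond chance as a fraction of the maximum possible. A minimal sketch of the calculation (the 2×2 table below is hypothetical, not the eye colour data):

```python
def cohen_kappa(table):
    """Cohen's kappa from a square agreement table (list of lists of counts).

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    (diagonal proportion) and p_e is the chance-expected agreement
    computed from the marginal totals.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    return p_o, p_e, (p_o - p_e) / (1 - p_e)

# Hypothetical table: rows = first observer, columns = second observer
table = [[40, 5],
         [10, 45]]
p_o, p_e, kappa = cohen_kappa(table)  # p_o = 0.85, p_e = 0.5, kappa = 0.7
```

Running the same calculation on the eye colour table above would reproduce Stata's 79.23%, 26.16%, and 0.7188.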
How would you describe the level of agreement in this table?
Check suggested answer 1.
The expected agreement is much lower than for sex, where it was 54.52%. Why is this?
Check suggested answer 2.
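Whatever the suggested answer says in full, the arithmetic behind it is simple: expected agreement is the sum over categories of the product of the two observers' marginal proportions, so spreading subjects across seven eye colour categories gives much smaller products than splitting them between two sexes. A sketch with illustrative marginal proportions (not taken from the data):

```python
def expected_agreement(p1, p2):
    """Chance-expected agreement: sum over categories of the product
    of the two observers' marginal proportions."""
    return sum(a * b for a, b in zip(p1, p2))

# Two categories (e.g. sex) with roughly even marginals:
expected_agreement([0.55, 0.45], [0.55, 0.45])  # 0.55**2 + 0.45**2 = 0.505
# Seven categories with equal marginals:
expected_agreement([1 / 7] * 7, [1 / 7] * 7)    # 7 * (1/7)**2 = 1/7, about 0.143
```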
How could we improve the kappa statistic?
Check suggested answer 3.
What pairs of categories might be regarded as minor disagreements?
Check suggested answer 4.
What might be plausible weights for the pairs of eye colour categories?
Check suggested answer 5.
We can use the following disagreement weights:

[Table of disagreement weights for pairs of eye colour categories]
We could use agreement weights instead, as some programs, such as Stata, require. (SPSS 16 does not do weighted kappa.)
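For reference, weighted kappa replaces the simple agree/disagree classification with a weighted sum over every cell of the table, for both observed and expected agreement. A minimal sketch using agreement weights (1 on the diagonal, smaller values for partial disagreement); the function and the example table are my own illustration, not output from Stata:

```python
def weighted_kappa(table, w):
    """Weighted kappa from a square agreement table and a matrix of
    agreement weights w (w[i][j] = 1 for full agreement, down to 0
    for complete disagreement)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    # Weighted observed and expected agreement
    p_o = sum(w[i][j] * table[i][j]
              for i in range(k) for j in range(k)) / n
    p_e = sum(w[i][j] * row_tot[i] * col_tot[j]
              for i in range(k) for j in range(k)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# With identity weights (1 on the diagonal, 0 elsewhere) this reduces
# to ordinary, unweighted kappa.
table = [[40, 5],
         [10, 45]]
identity = [[1, 0],
            [0, 1]]
weighted_kappa(table, identity)  # 0.7, same as unweighted kappa
```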
What weights for agreement would correspond to these disagreement weights?
Check suggested answer 6.
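The usual conversion is agreement weight = 1 − (disagreement weight / maximum disagreement weight), so that full agreement scores 1 and the worst disagreement scores 0. A sketch, using hypothetical three-category disagreement weights rather than the ones in the suggested answer:

```python
def agreement_weights(disagreement):
    """Convert a matrix of disagreement weights (0 on the diagonal,
    largest for complete disagreement) into agreement weights
    (1 on the diagonal, 0 for complete disagreement)."""
    max_w = max(max(row) for row in disagreement)
    return [[1 - d / max_w for d in row] for row in disagreement]

# Hypothetical disagreement weights for three ordered categories
d = [[0, 1, 2],
     [1, 0, 1],
     [2, 1, 0]]
agreement_weights(d)  # [[1.0, 0.5, 0.0], [0.5, 1.0, 0.5], [0.0, 0.5, 1.0]]
```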
This page maintained by Martin Bland.
Last updated: 21 July, 2008.