Exercise: Radiological audit paper

This website is for students following the M.Sc. in Evidence Based Practice at the University of York.

The following is the abstract of a paper:

AIMS: To assess the quality of the imaging procedure requests and radiologists' reports using an auditing tool, and to assess the agreement between different observers of the quality parameters.

MATERIALS AND METHODS: In an audit using a standardized scoring system, three observers reviewed request forms for 296 consecutive radiological examinations, and two observers reviewed a random sample of 150 of the corresponding radiologists' reports. We present descriptive statistics from the audit and pairwise inter-observer agreement, using the proportion agreement and kappa statistics.

RESULTS: The proportion of acceptable item scores (0 or +1) was above 70% for all items except the requesting physician's bleep or extension number, legibility of the physician's name, and details about previous investigations. For pairs of observers, the inter-observer agreement was generally high; however, the corresponding kappa values were consistently low, with only 14 of 90 ratings >0.60 and 6 >0.80 on the requests/reports. For the quality of the clinical information, the appropriateness of the request, and the requested priority/timing of the investigation items, the mean percentage agreement ranged from 67 to 76%, and the corresponding kappa values ranged from 0.08 to 0.24.

CONCLUSION: The inter-observer reliability of scores on the different items showed a high degree of agreement, although the kappa values were low, which is a well-known paradox. Current routines for requesting radiology examinations appeared satisfactory, although several problem areas were identified.

(Source: Stavem K, Foss T, Botnmark O, Andersen OK, Erikssen J. Inter-observer agreement in audit of quality of radiology requests and reports. Clinical Radiology 2004; 59: 1018-1024.)

Questions about this abstract:

  1. What is the problem with using the percentage agreement as a measure of agreement between observers? Do percentage agreements between 67% and 76% represent a high degree of agreement, as concluded?
    Check suggested answer.
  2. The percentage agreement was between 67% and 76%, but the corresponding kappa values were between 0.08 and 0.24. Why were the kappa values lower than the proportion showing agreement? What does the size of the difference suggest about the data? (An illustrative calculation is sketched after these questions.)
    Check suggested answer.
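
The following is a small illustrative calculation, in Python, of how the proportion agreement and Cohen's kappa are obtained for two observers rating one binary item. The counts are hypothetical, chosen only to mimic the skewed pattern of ratings suggested by the abstract; they are not the paper's data.

    # Hypothetical 2x2 table for two observers rating 150 reports on one
    # binary item ("acceptable" vs "not acceptable"). These counts are
    # invented for illustration; they are not taken from the paper.
    #
    #                            Observer B
    #                         acceptable   not
    # Observer A  acceptable     105        20
    #             not             15        10

    a, b, c, d = 105, 20, 15, 10      # cell counts, reading across the rows
    n = a + b + c + d                 # total number of reports rated by both

    # Observed proportion agreement: the two cells where the observers agree.
    p_o = (a + d) / n

    # Agreement expected by chance, from each observer's marginal proportions.
    pA_yes = (a + b) / n
    pB_yes = (a + c) / n
    p_e = pA_yes * pB_yes + (1 - pA_yes) * (1 - pB_yes)

    # Cohen's kappa: agreement beyond chance, scaled by the maximum possible.
    kappa = (p_o - p_e) / (1 - p_e)

    print(f"observed agreement = {p_o:.2f}")    # 0.77, i.e. 77%
    print(f"chance agreement   = {p_e:.2f}")    # 0.70
    print(f"kappa              = {kappa:.2f}")  # 0.22

Because most reports are rated "acceptable" by both observers, they would agree about 70% of the time by chance alone with these marginal frequencies, so an observed agreement of 77% is only slightly better than chance and the kappa is correspondingly small. This is the kind of pattern the questions above ask you to think about.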



This page maintained by Martin Bland.
Last updated: 1 March, 2007.
