Question 3: For the detection of dementia, a cut-off score of 4 or less had sensitivity = 91.9% and specificity = 81.0%, and a cut-off score of 3 or less had sensitivity = 96.1% and specificity = 72.6%. Why did lowering the cut-off increase sensitivity and reduce specificity?
When we have a test based on a cut-off, changing the cut-off will change the sensitivity and specificity of the test.
For example, consider the HADS (Hospital Anxiety and Depression Scale) score for detecting anxiety and/or depression diagnosed at a clinical interview in a sample of arthritis patients:
The graph shows a scatter diagram of HADS score against clinical diagnosis. As we move the cut-off point up the HADS scale, the number of people above the cut-off gets smaller and the number below the cut-off gets larger. In the 'No anxiety or depression' group, the proportion below the cut-off is the specificity. In the 'Yes anxiety or depression' group, the proportion above the cut-off is the sensitivity. Hence raising the cut-off increases specificity but decreases sensitivity, and lowering it does the reverse: the two always move in opposite directions.
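The trade-off can be seen by sweeping a cut-off over some scores. This is a minimal sketch with invented HADS-style scores (not the study data), taking 'test positive' to mean scoring at or above the cut-off, as in the graph description above:

```python
# Hypothetical HADS-style scores (higher = more symptoms); the labels
# are the clinical-interview diagnosis.  These numbers are invented
# purely to illustrate the mechanism.
cases    = [6, 8, 9, 10, 11, 12, 14, 15]   # anxiety/depression diagnosed
controls = [2, 3, 4, 5, 6, 7, 8, 10]       # not diagnosed

def sens_spec(cutoff):
    # Test positive = score at or above the cut-off.
    sens = sum(s >= cutoff for s in cases) / len(cases)       # true positives
    spec = sum(s < cutoff for s in controls) / len(controls)  # true negatives
    return sens, spec

# As the cut-off moves up the scale, sensitivity can only fall and
# specificity can only rise, because each step removes people from the
# 'positive' side of the line.
for c in range(2, 13):
    sens, spec = sens_spec(c)
    print(f"cut-off {c:2d}: sensitivity {sens:.3f}, specificity {spec:.3f}")
```

Printing the sweep shows sensitivity decreasing and specificity increasing monotonically, which is exactly the pattern traced by moving the cut-off line up the scatter diagram.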
In the dementia study, lowering the cut-off from 4 to 3 meant that more people would be classified as 'dementia'. For people who had clinically diagnosed dementia, this increased the proportion correctly detected by the test, and so increased the sensitivity. For people who did not have diagnosed dementia, it increased the proportion falsely labelled 'dementia' by the test, and so decreased the specificity.
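The bookkeeping in that argument can be checked directly. The sketch below uses invented screening scores, not the study data, and assumes a scoring direction in which lowering the cut-off makes more people test positive (the pattern the question describes): everyone newly classified positive either truly has dementia, raising sensitivity, or does not, lowering specificity.

```python
# Invented screening scores (higher = more impairment); test positive =
# score at or above the cut-off.  The scoring direction is an assumption
# chosen so that lowering the cut-off widens the 'positive' group, as in
# the study described in the question.
dementia    = [3, 3, 4, 5, 6, 6, 7, 8]  # clinically diagnosed dementia
no_dementia = [0, 1, 1, 2, 2, 3, 4, 5]  # no diagnosed dementia

def sens_spec(cutoff):
    sens = sum(s >= cutoff for s in dementia) / len(dementia)
    spec = sum(s < cutoff for s in no_dementia) / len(no_dementia)
    return sens, spec

strict = sens_spec(4)  # higher cut-off: fewer test positive
loose  = sens_spec(3)  # lower cut-off: everyone scoring 3 is now positive too
print("cut-off 4:", strict)
print("cut-off 3:", loose)
# Because the looser cut-off never removes anyone from the positive
# group, sensitivity cannot fall and specificity cannot rise.
```

With these invented numbers the lower cut-off raises sensitivity (0.75 to 1.0) and lowers specificity (0.75 to 0.625), mirroring the direction of change reported in the study.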
This page maintained by Martin Bland.
Last updated: 2 March, 2007.