# Clinical Biostatistics: The correlation coefficient

## The correlation coefficient

Correlation coefficients are used to measure the strength of the relationship or association between two quantitative variables. For example, Table 1 shows height, muscle strength and age in 41 alcoholic men.

Table 1. Height, quadriceps muscle strength, and age in 41 male alcoholics (data of Hickish et al., 1989)

| Height (cm) | Muscle strength (N) | Age (years) | Height (cm) | Muscle strength (N) | Age (years) |
|---|---|---|---|---|---|
| 155 | 196 | 55 | 172 | 147 | 32 |
| 159 | 196 | 62 | 173 | 441 | 39 |
| 159 | 216 | 53 | 173 | 343 | 28 |
| 160 | 392 | 32 | 173 | 441 | 40 |
| 160 | 98 | 58 | 173 | 294 | 53 |
| 161 | 387 | 39 | 175 | 304 | 27 |
| 162 | 270 | 47 | 175 | 404 | 28 |
| 162 | 216 | 61 | 175 | 402 | 34 |
| 166 | 466 | 24 | 175 | 392 | 53 |
| 167 | 294 | 50 | 175 | 196 | 37 |
| 167 | 491 | 35 | 176 | 368 | 51 |
| 168 | 137 | 65 | 177 | 441 | 49 |
| 168 | 343 | 41 | 177 | 368 | 48 |
| 168 | 74 | 65 | 177 | 412 | 32 |
| 170 | 304 | 55 | 178 | 392 | 49 |
| 171 | 294 | 47 | 178 | 540 | 41 |
| 172 | 294 | 31 | 178 | 417 | 42 |
| 172 | 343 | 38 | 178 | 324 | 55 |
| 172 | 147 | 31 | 179 | 270 | 32 |
| 172 | 319 | 39 | 180 | 368 | 34 |
| 172 | 466 | 53 | | | |

We will begin with the relationship between height and strength. Figure 1 shows a plot of strength against height.

This is a scatter diagram. Each point represents one subject. Looking at Figure 1, it is fairly easy to see that taller men tend to be stronger than shorter men, or, looking at it the other way round, that stronger men tend to be taller than weaker men. It is only a tendency: the tallest man is not the strongest, nor is the shortest man the weakest. Correlation enables us to measure how close this association is.

The correlation coefficient is based on the products of differences from the mean of the two variables. That is, for each observation we subtract the mean, just as when calculating a standard deviation. We then multiply the deviations from the mean for the two variables for a subject together, and add them. We call this the sum of products about the mean. It is very like the sum of squares about the mean used for measuring variability.
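The sum of products about the mean can be sketched in a few lines of Python. The five (height, strength) pairs below are an illustrative subset of Table 1, used only to keep the example short:

```python
# A minimal sketch of the sum of products about the mean,
# using a few illustrative (height, strength) pairs from Table 1.
heights = [155, 159, 160, 172, 178]    # cm
strengths = [196, 196, 392, 343, 540]  # N

mean_h = sum(heights) / len(heights)
mean_s = sum(strengths) / len(strengths)

# For each subject, multiply the two deviations from the mean, then add up.
sum_of_products = sum((h - mean_h) * (s - mean_s)
                      for h, s in zip(heights, strengths))
```

Compare this with the sum of squares about the mean: there, each deviation is multiplied by itself rather than by the deviation of the other variable.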

To see how correlation works, we can draw two lines on the scatter diagram, a horizontal line through the mean strength and a vertical line through the mean height, as shown in Figure 2.

Because large heights tend to go with large strength and small heights with small strength, there are more observations in the top right quadrant and the bottom left quadrant than there are in the top left and bottom right quadrants.

In the top right quadrant, the deviations from the mean will be positive for both variables, because each is larger than its mean. If we multiply these together, the products will be positive. In the bottom left quadrant, the deviations from the mean will be negative for both variables, because each is smaller than its mean. If we multiply these two negative numbers together, the products will also be positive.

In the top left quadrant, the deviations from the mean will be negative for height, because the heights are all less than the mean, and positive for strength, because strength is greater than its mean. The product of a negative and a positive number will be negative, so all these products will be negative. In the bottom right quadrant, the deviations from the mean will be positive for height, because the heights are all greater than the mean, and negative for strength, because the strengths are less than the mean. The product of a positive and a negative number will be negative, so all these products will be negative also.

When we add the products for all subjects, the sum will be positive, because there are more positive products than negative ones. Further, subjects with very large values for both height and strength, or very small values for both, will have large positive products. So the stronger the relationship is, the bigger the sum of products will be. If the sum of products is positive, we say that there is a positive correlation between the variables.

Figure 3 shows the relationship between strength and age in Table 1.

Strength tends to be less for older men than for younger men. Figure 4 shows lines through the means, as in Figure 2.

Now there are more observations in the top left and bottom right quadrants, where products are negative, than in the top right and bottom left quadrants, where products are positive. The sum of products will be negative. When large values of one variable are associated with small values of the other, we say we have negative correlation.

The sum of products will depend on the number of observations and the units in which they are measured. We can show that the maximum possible value it can have is the square root of the sum of squares for height multiplied by the square root of the sum of squares for strength. Hence we divide the sum of products by the square roots of the two sums of squares. This gives the correlation coefficient, usually denoted by r.
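The whole calculation of r can be sketched as follows. The data are again an illustrative subset of Table 1, so the value of r obtained here differs from the r = 0.42 quoted below for the full data:

```python
import math

# A sketch of the correlation coefficient r: the sum of products about
# the mean divided by the square roots of the two sums of squares.
heights = [155, 159, 160, 172, 178]    # illustrative subset of Table 1
strengths = [196, 196, 392, 343, 540]

n = len(heights)
mean_h = sum(heights) / n
mean_s = sum(strengths) / n

sum_products = sum((h - mean_h) * (s - mean_s)
                   for h, s in zip(heights, strengths))
ss_h = sum((h - mean_h) ** 2 for h in heights)    # sum of squares, height
ss_s = sum((s - mean_s) ** 2 for s in strengths)  # sum of squares, strength

r = sum_products / math.sqrt(ss_h * ss_s)
```

Dividing by the square roots of the two sums of squares removes the effect of both the units of measurement and the number of observations.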

Using the abbreviation ‘r’ looks very odd. Why ‘r’ and not ‘c’ for correlation? This is for historical reasons and it is so ingrained in statistical practice that we are stuck with it. If you see an unexplained ‘r =’ in a paper, it means the correlation coefficient. Originally, ‘r’ stood for ‘regression’.

Because of the way r is calculated, its maximum value = 1.00 and its minimum value = –1.00. We shall look at what these mean later.

The correlation coefficient is also known as Pearson’s correlation coefficient and the product moment correlation coefficient. There are other correlation coefficients as well, such as Spearman’s and Kendall’s, but if it is described simply as ‘the correlation coefficient’ or just ‘the correlation’, the one based on the sum of products about the mean is the one intended.

For the example of muscle strength and height in 41 alcoholic men, r = 0.42. This is a positive correlation of fairly low strength. For strength and age, r = –0.42. This is a negative correlation of fairly low strength.

Figures 5 to 13 show the correlations between several simulated variables. Each pair of variables was generated to have the correlation shown above it. Figure 5 shows a perfect correlation.

The points lie exactly on a straight line and we could calculate Y exactly from X. In fact, Y = X; they could not be more closely related. r = +1.00 when large values of one variable are associated with large values of the other and the points lie exactly on a straight line. Figure 6 shows a strong, but not perfect, positive relationship.

Figure 7 also shows a positive relationship, but less strong.

The size of the correlation coefficient clearly reflects the degree of closeness on the scatter diagram. The correlation coefficient is positive when large values of one variable are associated with large values of the other.

Figure 8 shows what happens when there is no relationship at all, r = 0.00.

This is not the only way r can be equal to zero, however. Figure 9 shows data where there is a relationship, because large values of Y are associated with small values of X and with large values of X, whereas small values of Y are associated with values of X in the middle of the range.

The products about the mean will be positive in the upper right quadrant and negative in the upper left quadrant, and these will cancel out, giving a sum which is zero. It is possible for r to be equal to 0.00 when there is a relationship which is not linear. A correlation r = 0.00 means that there is no linear relationship, i.e. that there is no relationship where large values of one variable are consistently associated either with large or with small values of the other, but not both. Figure 10 shows another perfect relationship, but not a straight line.
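This zero-correlation-with-a-relationship case is easy to demonstrate. The sketch below uses a symmetric quadratic relationship (an assumption for illustration, not the data behind Figure 9):

```python
import math

# A sketch showing that r can be zero for a perfect but non-linear
# relationship: y = x**2, with x values symmetric about zero.
xs = [-3, -2, -1, 0, 1, 2, 3]
ys = [x ** 2 for x in xs]  # large y at both ends, small y in the middle

mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)

sum_products = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
ss_x = sum((x - mean_x) ** 2 for x in xs)
ss_y = sum((y - mean_y) ** 2 for y in ys)

# Positive and negative products cancel exactly by symmetry.
r = sum_products / math.sqrt(ss_x * ss_y)
```

Here y is determined exactly by x, yet r = 0, because the positive products on one side of the mean of x are cancelled by the negative products on the other.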

The correlation coefficient is less than 1.00. r will not equal –1.00 or +1.00 when there is a perfect relationship unless the points lie on a straight line. Correlation measures closeness to a linear relationship, not to any perfect relationship.

The correlation coefficient is negative when large values of one variable are associated with small values of the other. Figure 11 shows a rather weak negative relationship.

Figure 12 shows a strong negative relationship.

Figure 13 shows a perfect negative relationship.

r = –1.00 when large values of one variable are associated with small values of the other and the points lie on a straight line.

## Test of significance and confidence interval for r

We can test the null hypothesis that the correlation coefficient in the population is zero. This is done by a simple t test. The distribution of r if the null hypothesis is true, i.e. in the absence of any relationship in the population, depends only on the number of observations. This is often described in terms of the degrees of freedom for the t test, which is the number of observations minus 2. Because of this, it is possible to tabulate the critical value for the test for different sample sizes. Bland (2000) gives a table.
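The test statistic can be sketched using the standard formula t = r√((n − 2)/(1 − r²)), which is the usual form of this t test (the formula itself is not spelled out in the text above):

```python
import math

# A sketch of the t test for the null hypothesis that the population
# correlation is zero, using r = 0.42 and n = 41 from the
# strength/height example.
r = 0.42
n = 41

t = r * math.sqrt((n - 2) / (1 - r ** 2))  # about 2.89
df = n - 2                                  # 39 degrees of freedom
# Comparing t with a t distribution on 39 degrees of freedom
# gives the P value.
```

Comparing t against a t table on n − 2 degrees of freedom gives the P value; for these data it is the P = 0.006 quoted below.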

For the test of significance to be valid, we must assume that:

• at least one of the variables is from a Normal distribution,
• the observations are independent.

Large deviations from the assumptions make the P value for this test very unreliable.

For the muscle strength and height data of Figure 1, r = 0.42, P = 0.006. Computer programmes almost always print this when they calculate a correlation coefficient. As a result you will rarely see a correlation coefficient reported without it, even when the null hypothesis that the correlation in the population is equal to zero is absurd.

We can find a confidence interval for the correlation coefficient in the population, too. The distribution of the sample correlation coefficient when the null hypothesis is not true, i.e. when there is a relationship, is very awkward. It does not become approximately Normal until the sample size is in the thousands. We use a very clever but rather intimidating mathematical function called Fisher’s z transformation. This produces a very close approximation to a Normal distribution with a fairly simple expression for its mean and variance (see Bland 2000 if you really want to know). This can be used to calculate a 95% confidence interval on the transformed scale, which can then be transformed back to the correlation coefficient scale. For the strength and height data, r = 0.42, and the approximate 95% confidence interval is 0.13 to 0.64. As is usual for confidence intervals which are back transformed, this is not symmetrical about the point estimate, r.
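Fisher's z transformation is z = ½ log((1 + r)/(1 − r)), the inverse hyperbolic tangent of r, and its standard error is approximately 1/√(n − 3) (these standard formulae are not spelled out in the text above). A minimal sketch reproducing the interval quoted for the strength and height data:

```python
import math

# A sketch of Fisher's z transformation for a confidence interval on r,
# using r = 0.42 and n = 41 from the strength/height example.
r = 0.42
n = 41

z = math.atanh(r)            # Fisher's z transformation
se = 1 / math.sqrt(n - 3)    # approximate standard error of z

z_lo = z - 1.96 * se
z_hi = z + 1.96 * se

# Back-transform to the correlation scale: r = tanh(z).
ci = (math.tanh(z_lo), math.tanh(z_hi))  # roughly (0.13, 0.64)
```

Because the interval is calculated on the transformed scale and then back-transformed, it is not symmetrical about r itself.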

For Fisher’s z transformation to be valid, we must make a much stronger assumption about the distributions than for the test of significance. We must assume that both of the variables are from Normal distributions. Large deviations from this assumption can make the confidence interval very unreliable.

The use of Fisher’s z is tricky without a computer, approximate, and requires a strong assumption. Computer programmes rarely print this confidence interval and so you rarely see it, which is a pity.

## References

Bland M. (2000) An Introduction to Medical Statistics. Oxford University Press.

Hickish T, Colston K, Bland JM, Maxwell JD. (1989) Vitamin D deficiency and muscle strength in male alcoholics. Clinical Science 77, 171-176.