Minimization

This is a section from Martin Bland’s textbook An Introduction to Medical Statistics, Fourth Edition. I hope that the topic will be useful in its own right, as well as giving a flavour of the book.


2.14 Minimization

Stratification (Section 2.3) ensures that we have randomized groups which are similar for specified variables, but it works only for large samples. Minimization is an alternative approach suggested by Taves (1974). This sets out to allocate non-randomly so as to balance the groups on selected variables.

I shall explain via an example (Bland 2014). A television company, Outline Productions, wanted to do some randomized controlled trials of novel remedies, for a series called Health Freaks, first broadcast in the UK on Channel 4 on 21st October 2013. One remedy for psoriasis was using oatmeal in a bath. The plan was to allocate about 20 volunteers either to oatmeal or to a control treatment using a regular bath. There was the usual concern that random allocation would result in imbalance on some important variable. I promised to avoid this by minimization. They wanted the groups to be balanced for age, gender, and severity of psoriasis. I decided to classify age as <30 years or 30+. Severity was graded as mild, moderate, or severe.

I received a list of participants and their characteristics as shown in Table 2.11:

Table 2.11 Characteristics of volunteers for the oatmeal bath trial, with subsequent allocated group
Participant   Age group   Gender   Psoriasis severity   Allocated group
13 Younger Male Moderate Control
6 Older Female Mild Control
1 Older Female Moderate Oatmeal
3 Older Female Mild Oatmeal
5 Younger Male Severe Oatmeal
7 Younger Female Severe Control
11 Younger Male Moderate Oatmeal
10 Younger Female Severe Control
2 Older Female Severe Oatmeal
14 Younger Male Severe Control
16 Older Female Moderate Control
15 Older Female Moderate Control
8 Younger Female Severe Oatmeal
9 Younger Female Moderate Control
12 Older Male Moderate Oatmeal
4 Older Male Severe Control

The minimization algorithm which I used allocated the first participant, #13, at random, to Control. We then consider participant #6. For each group, and for each of the variables, we count the number already allocated with the same characteristic as the new participant, and sum these counts over all variables. For Oatmeal, which has no-one allocated, this is 0 + 0 + 0 = 0. For Control it is also 0 + 0 + 0 = 0, because participant #6 is in a different age category, and has different gender and different severity, from the first. The totals are the same, so the imbalance will be the same whichever group we put #6 into, and #6 was allocated randomly, to Control. We then consider #1. The sum for Oatmeal is 0 + 0 + 0 = 0. For Control it is 1 + 1 + 1 = 3, because #6 is in the same age category and the same gender category and #13 is in the same severity category. The total for Oatmeal is less than for Control, so the imbalance will be greater if we put #1 into Control than if we put #1 into Oatmeal, and we allocate to Oatmeal. We then consider #3. The sum for Oatmeal is 1 + 1 + 0 = 2, because #1 matches on age group and gender but not severity. For Control it is 1 + 1 + 1 = 3, because #6 matches on all three variables. We allocate to Oatmeal. We continue like this until all have been allocated.
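The same bookkeeping is easy to express in code. Here is a minimal sketch in Python of the rule just described (sum the matching characteristics in each group and allocate to the group with the smaller sum, breaking ties at random); the function and variable names are illustrative, not taken from the trial.

import random


def minimize(participants, groups=("Oatmeal", "Control"), seed=None):
    """Allocate participants in order by Taves-style minimization.

    Each participant is a dict of characteristics, for example
    {"age": "Older", "gender": "Female", "severity": "Mild"}.
    For each candidate group we sum, over all variables, the number of
    already-allocated members who share the new participant's category,
    and allocate to the group with the smaller sum; ties are broken at
    random (so the first participant is always allocated at random).
    """
    rng = random.Random(seed)
    members = {g: [] for g in groups}
    allocation = []
    for person in participants:
        # Total number of matching characteristics in each group
        scores = {
            g: sum(other[var] == person[var]
                   for other in members[g]
                   for var in person)
            for g in groups
        }
        smallest = min(scores.values())
        choice = rng.choice([g for g in groups if scores[g] == smallest])
        members[choice].append(person)
        allocation.append(choice)
    return allocation


# The first four volunteers from Table 2.11, in order of consideration
volunteers = [
    {"age": "Younger", "gender": "Male",   "severity": "Moderate"},  # 13
    {"age": "Older",   "gender": "Female", "severity": "Mild"},      # 6
    {"age": "Older",   "gender": "Female", "severity": "Moderate"},  # 1
    {"age": "Older",   "gender": "Female", "severity": "Mild"},      # 3
]
print(minimize(volunteers, seed=1))

With a different random seed the tied choices, including the very first participant, can of course go the other way.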

Age group and severity were quite well balanced: the younger:older ratio was 5:4 for Oatmeal and 3:4 for Control, and mild:moderate:severe was 1:4:4 for Oatmeal and 1:3:3 for Control. Gender was less well balanced, the F:M ratio being 6:3 for Oatmeal and 4:3 for Control. Exact balance is not possible in a sample of 16 people. Some of the participants withdrew before the trial, and a further list of participants arrived, which I allocated in the same way after removing the dropouts.

An obvious objection to this procedure is that it is quite deterministic and predictable. For example, the second person allocated is very likely to get the opposite allocation to the first. I randomized the order of the list of participants to make the allocation less predictable, but this can be done only if we have a list at the start. In most trials participants are recruited one by one and this is not possible. Some minimization algorithms therefore introduce a random element: they find the group preferred to reduce imbalance and then allocate the participant with weighted probabilities, e.g. 3:1 in favour of the indicated group and 1:3 in favour of the other group.
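A sketch of that weighted variant, building on the minimize function above and assuming just two groups and a 3:1 weighting (the p_indicated parameter and the names are my own, for illustration only):

import random


def minimize_weighted(participants, groups=("Oatmeal", "Control"),
                      p_indicated=0.75, seed=None):
    """Minimization with a random element.

    The group that would give the smaller imbalance is chosen with
    probability p_indicated (0.75 corresponds to 3:1 in its favour);
    otherwise the other group is chosen.  Exact ties are decided 50:50.
    Assumes exactly two groups.
    """
    rng = random.Random(seed)
    members = {g: [] for g in groups}
    allocation = []
    for person in participants:
        # Same imbalance score as in the basic sketch above
        scores = {
            g: sum(other[var] == person[var]
                   for other in members[g]
                   for var in person)
            for g in groups
        }
        indicated, other_group = sorted(groups, key=lambda g: scores[g])
        if scores[indicated] == scores[other_group]:
            choice = rng.choice(list(groups))   # genuine tie: 50:50
        elif rng.random() < p_indicated:
            choice = indicated                  # usually follow the minimization
        else:
            choice = other_group                # occasionally go the other way
        members[choice].append(person)
        allocation.append(choice)
    return allocation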

Minimization is often used in cluster randomized trials (Section 2.12), because the number of clusters is usually quite small. For example, in the Collaborative Care trial (Section 2.12), 51 primary care practices were allocated separately by minimization in four different geographic locations, using practice list size, number of doctors at the practice, and index of multiple deprivation for the practice area as minimization variables. We did not start with a list of practices but recruited them in ones and twos. The Collaborative Care arm had 24 practices, average list size 6 615 patients, average number of doctors (whole time equivalents) 3.8, and average index of multiple deprivation rank 9 210. For the 27 practices in the Treatment as Usual arm, the figures were 7 152, 4.0, and 8 449 respectively, so the balance was quite good.

Because of this predictability, some drug regulators do not accept minimized trials. I understand their concern, but minimization is really a method for small trials, not regulatory trials, and for small trials it can be useful, particularly for anxious researchers.

References

Bland, M. (2014). Health freaks on trial: duct tape, bull semen and the call of television. Significance, 11(2), 32–35.

Taves, D.R. (1974). Minimization: a new method of assigning patients to treatment and control groups. Clinical Pharmacology and Therapeutics, 15, 443–53.


Adapted from pages 21–23 of An Introduction to Medical Statistics by Martin Bland, 2015, reproduced by permission of Oxford University Press.

