In Fiddler on the Roof, Tevye sings of tradition: “You may ask, ‘how did this tradition get started?’ I’ll tell you … [sotto voce:] I don’t know. But it’s a tradition!” He could have been describing medical research.
But I can tell you how this tradition got started. Clinical practice was once dictated by the clinician’s experience, which was biased and hampered learning. The advent of the randomized clinical trial, or RCT, in the 1940s changed everything, both because randomization eliminated bias and because the process involved collecting data prospectively. Medical research moved away from case study and anecdote and became a real science. The RCT became, and remains, the gold standard of medical research. It is as close a substitute for truth as seems possible.
The RCT has changed little in 70 years. Such resilience is testament to the esteem given to it by the research community. On the other hand, such stasis invites complacency that is unusual in modern science and technology, which are all about change and innovation. Randomization established an important plateau in medical research, but trial designers have been resistant to change and largely oblivious to the biological volcanoes erupting on that plateau.
The focus of traditional clinical trials is science: learning which therapies will benefit future patients. This focus is codified in the 1979 Belmont Report—the bible of clinical research ethics—which emphasizes that clinical research is distinct from clinical practice. In the same vein, regulatory agencies are mandated to focus on safety and efficacy when evaluating medical products. The standards for determining safety and efficacy are the same for all diseases and conditions, regardless of how common they are.
Traditional trials are tests of hypotheses. The trial is the unit of inference. A fundamental principle in traditional design is understanding and controlling the trial’s false positive rate: a trial should not have a good chance of showing efficacy for an ineffective product. Satisfying this principle requires large trials. Phase 3 trials, those used for regulatory approval, have sample sizes in the hundreds and often in the thousands.
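To make that scale concrete, here is a back-of-the-envelope calculation using the standard normal-approximation formula for comparing two response rates. The 20% and 30% rates are illustrative assumptions of mine, not figures from any particular trial; the critical values correspond to a 5% two-sided false positive rate and 80% power.

```python
import math

def sample_size_per_arm(p_control, p_treatment, z_alpha=1.96, z_beta=0.84):
    """Approximate patients needed per arm to compare two response rates.

    z_alpha = 1.96 corresponds to a 5% two-sided false positive rate;
    z_beta = 0.84 corresponds to 80% power (normal approximation).
    """
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = (p_treatment - p_control) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Detecting a 20% vs. 30% response rate takes hundreds of patients per arm.
n = sample_size_per_arm(0.20, 0.30)
print(n, "patients per arm,", 2 * n, "total")  # 291 per arm, 582 total
```

Shrink the effect you want to detect, or the number of patients who exist, and this arithmetic quickly becomes impossible to satisfy.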
Traditional RCTs do not accommodate the amazing changes occurring in biology. In no therapeutic area is this more evident than in cancer. Biologists are slicing and dicing diseases into ever smaller subsets. Soon every cancer patient will have an ultra-rare disease. Developing therapies for rare diseases has never fit well in the traditional mold. Trials cannot have sample sizes in the thousands when there are fewer than 100 patients in the world. And even if we could run a large trial, its results would be irrelevant by the time they were finally announced.
Controlling the false positive rate is irrelevant in rare diseases. What should matter is treating effectively those patients who have the disease or will have it after the trial, recognizing that therapies being evaluated in today’s trials are constantly being improved or replaced.
A radically different paradigm of medical research is to recognize the patient and not the trial as the unit of inference. Such an approach can preserve the scientific advantages of randomization while focusing on effectively treating patients who have the disease, those in the trial and those coming later. One such approach I’ll call adaptive randomization.
The focus is on patients who have the disease. The goal is to treat them as effectively and safely as possible in the context of the disease’s prevalence. The trial’s design adapts to the information accruing during the trial, and that information is used in assigning therapy to subsequent patients. Such adaptations are anathema in the traditional approach because they affect the trial’s false positive rate. Therapies that are performing better are used with greater probability, and therapies that are doing poorly are used with lower probability and eventually may be dropped. There may be a formal “trial” or simply data collection and analysis during the course of treating patients who have the disease. In either case, clinical research and practice become one.
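One common way to implement this kind of adaptive assignment is Thompson sampling with Beta posteriors: for each new patient, draw one sample from each arm’s posterior and assign the arm with the highest draw, so better-performing therapies are used with greater probability. The sketch below is my own simplified illustration, not the algorithm of any specific trial; the true response rates and the assumption of immediately observed outcomes are hypothetical (real designs such as I-SPY 2 are far more elaborate).

```python
import random

def adaptive_trial(true_rates, n_patients, seed=42):
    """Simulate an adaptively randomized trial via Thompson sampling.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior on its
    response rate. For every patient we draw one sample per arm, assign
    the arm with the highest draw, then observe an outcome immediately
    (a simplifying assumption). Returns patient counts per arm.
    """
    random.seed(seed)
    successes = [0] * len(true_rates)
    failures = [0] * len(true_rates)
    assignments = [0] * len(true_rates)
    for _ in range(n_patients):
        draws = [random.betavariate(s + 1, f + 1)
                 for s, f in zip(successes, failures)]
        arm = draws.index(max(draws))
        assignments[arm] += 1
        if random.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return assignments

# With hypothetical response rates of 20% and 50%, most of the 500
# simulated patients end up assigned to the better arm.
print(adaptive_trial([0.20, 0.50], 500))
```

Compared with fixed 50/50 randomization, the adaptive rule treats more of the trial’s own patients with the better therapy; the trade-off is that the trial’s false positive rate is no longer controlled in the traditional way.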
Patients in the trial receive better treatment overall for the obvious reason that it is better to use information than not. A complaint I’ve heard about such an approach is that patients later in the trial receive better treatment than do earlier patients, because later patients benefit from information gleaned from earlier ones. I regard this as a virtue of the approach. Tomorrow’s patients are better off than today’s, and it’s always better to delay getting a disease.
Adaptive randomization is being used at my home institution, M.D. Anderson Cancer Center, and it is starting to be used in some major national and international trials in cancer and other diseases. For example, we have adaptively randomized more than 800 breast cancer patients to nine different experimental therapies (and counting) in a nationwide trial called I-SPY 2. Breast cancer is not rare, but the adaptive randomization occurs within subtypes of the disease. The design allows for dropping a therapy in one subtype even though it is performing extremely well in another. Such an approach enables more precise matching of therapies to patient subtypes, but it also treats patients in the trial more effectively, and without sacrificing science.
We need a new tradition, including a revision of the Belmont Report. We need an approach to clinical trial design in which effectively treating patients in a clinical trial counts every bit as much as learning about therapies so as to effectively treat those patients who are treated after the trial. We need to merge clinical research with clinical practice.
Donald Berry is founder of Berry Consultants, a company that designs adaptive clinical trials for pharmaceutical and medical device companies. He is also a professor in the biostatistics department at the Univ. of Texas M.D. Anderson Cancer Center.