Ethics and Options in Clinical Trial Design

[Image: Newspaper advertisements seeking patients and ... (via Wikipedia)]

You can’t help but be moved by the plight of the patients in Saturday’s NY Times article “New Drugs Stir Debate on Rules of Clinical Trials.”  The article caused me to reflect on my role, as a medical device executive, in the design of clinical trials.  It presents the ethical challenges of a randomized controlled clinical trial (RCT) of a new drug through the experiences of two patients in the trial.  The same challenges apply to medical device trials.  What are some of the issues, and what are some of the options?

A double-blind randomized controlled trial is typically considered the gold-standard pivotal trial design, as it provides the maximum protection against bias and the clearest differentiation between the control arm and the experimental arm of the trial.  In cases where blinding is not possible (e.g. the REMATCH trial, which compared the HeartMate LVAD with optimal medical therapy), randomization is still critical.  Prior to an RCT, a state of clinical equipoise must exist – there must be genuine uncertainty, based on available evidence, as to whether the new treatment (experimental arm) is better or worse than the current standard of care (control arm).  Typically, an RCT is only conducted after one or more pilot studies have shown that the new treatment has promise.

Indeed, the trial in the NY Times article was an RCT, performed after a promising Phase I pilot study in which a small number of patients saw a temporary benefit with the experimental treatment.  The pharmaceutical companies Plexxikon and Roche designed the trial together with leading physician-scientist investigators, and the trial was reviewed and approved by the FDA and Institutional Review Boards at each investigative site.  All patients were told of the risks, and all patients consented to participate.  So, what’s the issue?

First, when one of the arms turns out to be superior, hindsight shows that the patients in the other arm received an inferior treatment.  The larger the size of the trial, the more patients receive the inferior treatment.  Second, larger numbers of trial patients improve the validity of the results, but a longer trial may delay the ultimate availability of a new superior treatment.  So even though the RCT is the gold standard design, the question is whether the same scientific conclusion can be reached with a faster, smaller trial design.

What are some of the options?

First, consider a smaller trial for a narrower patient population.  Fewer patients receive the inferior treatment, and the trial is completed more quickly.  While this strategy can reach a clinical endpoint faster, the potential market may be much smaller than desired, and allowable marketing claims may be restrictive.  Further, clinical equipoise remains for the larger patient population, so many patients who might benefit from the new treatment may not receive it.  Of course, any off-label use of the product could draw FDA scrutiny, and may prevent a second study in a larger population (patients might prefer to take the off-label drug rather than participate in a trial).  To help mitigate these problems, this smaller pre-market trial could be supplemented with a post-market registry to capture data on off-label use, in support of further FDA approval or clearance.  This strategy is best discussed with the FDA ahead of time.

Second, consider the use of interim analysis points in the trial design, allowing you to end the study early if definitive results are found.  In a standard RCT, statistical analysis is performed only once, at the end of the trial.  Each time the data are analyzed, there is a possibility of an incorrect conclusion due to random chance, so while interim analyses may enable early termination, they also increase the chance of a spurious result.  Therefore, studies with interim analysis points require a larger number of patients or a stricter p-value threshold to compensate for the increased possibility of error.  If the study is not terminated early, the trial could end up both longer and with more patients enrolled in the inferior arm.
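
To make the “more looks, more chances for a spurious result” point concrete, here is a rough Python simulation of my own (not from the article): it repeatedly runs a trial in which the two arms are truly identical and counts how often some analysis point crosses p < 0.05.  The look schedule, endpoint, and simple Bonferroni-style correction are illustrative assumptions; real group-sequential designs use boundaries such as O'Brien-Fleming or Pocock.

```python
# Toy simulation: how often do we "find" a difference when none exists?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 2000              # simulated trials under the null (no true difference)
looks = [50, 100, 150, 200]  # cumulative patients per arm at each analysis point
alpha = 0.05

def false_positive_rate(thresholds):
    """Fraction of null trials declared 'significant' at any analysis point."""
    hits = 0
    for _ in range(n_trials):
        control = rng.normal(0.0, 1.0, looks[-1])
        treatment = rng.normal(0.0, 1.0, looks[-1])  # same distribution: no real effect
        for n, threshold in zip(looks, thresholds):
            _, p = stats.ttest_ind(control[:n], treatment[:n])
            if p < threshold:
                hits += 1
                break
    return hits / n_trials

print("Single final analysis      :", false_positive_rate([0.0, 0.0, 0.0, alpha]))
print("Four unadjusted looks      :", false_positive_rate([alpha] * 4))
print("Four Bonferroni-style looks:", false_positive_rate([alpha / 4] * 4))
```

In this toy setup, four unadjusted looks roughly double the false-positive rate relative to a single final analysis, which is why the per-look threshold (or the sample size) has to be tightened.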

Third, consider an adaptive clinical trial design.  In their 2010 draft guidance for drugs and biologics, FDA describes an adaptive clinical trial as one with:

adaptive features (i.e., changes in design or analyses guided by examination of the accumulated data at an interim point in the trial) that may make the studies more efficient (e.g., shorter duration, fewer patients), more likely to demonstrate an effect of the drug if one exists, or more informative (e.g., by providing broader dose-response information).

Several years ago, Mitchell Krucoff, M.D., of Duke University introduced me to the relatively new concept of adaptive clinical trials.  Pfizer’s 2005 ASTIN study was an early example of this type of trial design.  While I am not aware of any medical device study that has yet used this technique, in a 2006 editorial the FDA suggested that adaptive designs could be appropriate for new trials of mechanical circulatory support devices and implanted cardiac valves.  (If you are aware of such a device trial, please leave me a note in the comments, and I’ll update the post.)
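
As a simplified illustration of one adaptive feature – re-weighting randomization toward the apparently better arm as outcomes accumulate – here is a small Python sketch of my own.  It is not the ASTIN algorithm (ASTIN adapted dose allocation within a Bayesian design); the response rates, Beta priors, and per-patient update rule below are assumptions chosen only to keep the example short.

```python
# Toy response-adaptive randomization: allocation drifts toward the arm
# that appears to be performing better as data accumulate.
import numpy as np

rng = np.random.default_rng(1)
true_response = {"control": 0.30, "treatment": 0.45}   # hypothetical response rates
successes = {arm: 0 for arm in true_response}
failures = {arm: 0 for arm in true_response}
assigned = {arm: 0 for arm in true_response}

for patient in range(300):
    # Thompson-sampling style allocation: draw from each arm's Beta posterior
    # and assign the next patient to the arm with the higher draw.
    draws = {arm: rng.beta(1 + successes[arm], 1 + failures[arm])
             for arm in true_response}
    arm = max(draws, key=draws.get)
    assigned[arm] += 1
    if rng.random() < true_response[arm]:   # simulate the patient's outcome
        successes[arm] += 1
    else:
        failures[arm] += 1

print("Patients assigned:", assigned)
print("Observed response rates:",
      {arm: round(successes[arm] / assigned[arm], 2) if assigned[arm] else None
       for arm in true_response})
```

In this toy run, most patients end up on the better-performing arm, which is exactly the ethical appeal – and the statistical complication – of adaptive designs.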

Your clinical affairs team can provide even more options, such as a cross-over design, which allows non-responders in one arm to cross over to the other arm of the study.  So, as executives, what questions should we ask when thinking about a pivotal trial design?

  1. What data supports the assertion that there is clinical equipoise between the investigational treatment and the current best medical care?
  2. Why is the primary outcome measure the right measure?  Why is the study hypothesis the right hypothesis?  Will the study outcome show that my device provides a clinically meaningful benefit, not just a statistically significant effect?
  3. What data demonstrates that the study is appropriately powered (i.e., enrolls the right number of patients) to show the difference between the control and treatment arms?  (A back-of-the-envelope sample-size sketch follows this list.)
  4. Have I considered alternate study designs? Why is this study design the best?  Would I be comfortable publishing this rationale in a peer-reviewed journal?
  5. Could this design create the appearance that I am putting the needs of the business ahead of the needs of the patient?
  6. If I were a knowledgeable patient with the condition under investigation, why would I agree that the clinical trial is appropriately designed?
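
On question 3, even a back-of-the-envelope calculation helps frame the discussion with the biostatistics team.  The Python sketch below uses the standard normal-approximation formula for comparing two proportions; the 30% and 45% response rates, alpha, and power are hypothetical placeholders, not figures from any trial in the article, and a real powering exercise would account for dropout, the actual endpoint, and any interim analyses.

```python
# Rough sample-size estimate for detecting a difference between two event
# rates with a two-sided test (normal approximation).
from scipy.stats import norm

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate patients per arm to compare two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # quantile for the desired power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = abs(p_treatment - p_control)
    return int(round((z_alpha + z_beta) ** 2 * variance / effect ** 2))

# e.g. hoping to improve a 30% response rate to 45%:
print(n_per_arm(0.30, 0.45))   # roughly 160 patients per arm in this toy example
```

Note how sensitive the answer is to the assumed effect size: halving the expected difference roughly quadruples the required enrollment, which is exactly the tension between trial size and time-to-answer discussed above.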

The design of clinical experiments on human subjects is one of the most important responsibilities of medical device executives.  Regulatory authorities and institutional review boards don’t design our trials; they only respond to our proposals.  It is our responsibility to design them well.

Please leave me a comment and let me know what I’ve missed.  What’s your clinical trial acid-test?


One thought on “Ethics and Options in Clinical Trial Design”

  1. Well written and timely article.

    In the PLX4032 study, from other press reports, it appears that because of the strong Phase I results, the sponsor was conducting Phase II and Phase III studies in parallel, which is most unusual. It is unclear whether this parallel track is why they pursued an RCT even though the primary endpoint was overall survival and the new treatment had apparently fallen out of equipoise.

    With the pace of technological innovation, designing clinical studies that balance patient safety, regulatory requirements and ethical concerns is increasingly a challenge. For example, in a disease state where many patients don’t respond to available medications and are therefore candidates for a study of a potentially life-saving implantable device that involves risky surgery, is it ethical to enroll patients in a control arm with a sham surgery? Especially if the sham surgery poses more risk than the standard of care?

    What about when the surgical risks for a new treatment are in equipoise with the current standard of care? It is not easy to dismiss the value of a sham procedure to rule out a placebo effect. Just recently, a medical device company studying a neurostimulation device to control obesity had disappointing study results: weight loss with the device was no better than with a dummy device (placebo). Did the device actually fail, or were patients who participated in the study motivated to lose weight for other reasons? A 2000 Journal of Clinical Epidemiology article noted strong placebo effects (perhaps due to heightened expectations) in medical device trials but also cautioned that the evidence was hardly conclusive.

    The RCT may not always be the best trial design for new life-saving devices, especially where the current standard of care is for the most part ineffective or risky.
