Why We Need Both Randomized Controlled Trials and Real World Evidence

Vijay Pande

1/ RCTs vs RWE has become a heated debate. Some believe that RWE would be cheaper and better, since there’s more data. Others believe that RCTs are the only statistically validated way to test a drug. Who’s right?

2/ I discuss the nuances of this beyond black and white in this week’s episode of 16 Minutes, and offer some frameworks for thinking about it as well here.

3/ First, there’s no one-size-fits-all. Drugs play a spectrum of roles. For the terminally ill, waiting too long to prove efficacy costs lives. But if the drug is for something mild (eg a headache), or something widely taken prophylactically, we don’t want nasty adverse effects.

4/ For the most part, experts agree on the extremes. The question is the messy middle. How can we judge efficacy: “common sense” or “statistics”? Statistics is common sense: it’s the most principled way we have to make any sort of decision.

5/ But there are arbitrary aspects, such as how high a bar of statistical significance we demand (as evidenced by the ongoing replication crisis and more). Again, this has to go hand in hand with the role of the drug. Hence the importance of RCTs.

6/ People criticize randomized controlled trials as time consuming and expensive. But not all RCTs need be onerous! If the effect size is large, then small N is fine. In many cases, trials can be small and inexpensive.

7/ It’s only when the effect size is small (eg a 2nd generation drug) that trials become very large, and therefore very expensive.
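To make the effect-size point concrete, here is a sketch of a standard two-sample power calculation using the usual normal-approximation formula; the effect sizes and the α = 0.05 / 80%-power choices are illustrative assumptions, not from the article:

```python
import math
from statistics import NormalDist

def per_arm_n(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate patients needed per arm in a two-arm trial comparing means,
    via the normal-approximation formula n = 2 * (z_{1-a/2} + z_{power})^2 / d^2,
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(per_arm_n(0.8))  # large effect (d = 0.8): 25 patients per arm
print(per_arm_n(0.2))  # small effect (d = 0.2): 393 patients per arm
```

A large effect can be detected with a few dozen patients per arm, while a small effect (e.g. a marginal improvement over an existing drug) requires hundreds, which is what drives trial cost up.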

8/ RWE is appealing, since it could come at minimal cost and with much higher precision, as data science analysis could examine millions of people instead of hundreds.
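The precision claim can be made concrete: the sampling error of an estimated mean shrinks like 1/√N. A minimal sketch, assuming a hypothetical outcome standard deviation of 1.0:

```python
import math

def se_of_mean(sigma: float, n: int) -> float:
    """Standard error of a sample mean: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

print(se_of_mean(1.0, 400))        # RCT-scale arm:     0.05
print(se_of_mean(1.0, 1_000_000))  # RWE-scale cohort:  0.001
```

Note this is sampling precision only; a tight estimate can still be badly biased, which is the tradeoff the thread turns to next.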

9/ The 21st Century Cures Act and the FDA have both emphasized the importance of incorporating such real-world evidence, based on real-world data.

10/ In fact, we need both: some bar for RCTs to demonstrate efficacy. It doesn’t have to be a ridiculously high bar (and frankly can’t be, given the financial and time limits of RCTs). And that’s where real-world evidence comes in.

11/ The “RCTs vs RWE” framing mirrors a classic problem in statistics and machine learning — the tradeoff between precision and bias.

12/ Given that our goal is to generalize beyond the “training set,” i.e. beyond the specific people who got the drug in the trial, people typically push to minimize bias, which points to well-designed RCTs, where sources of bias can be designed around.

13/ If the RWE data set is big enough, one can “design” a trial of sorts from a subset of the data, chosen to minimize bias.
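A toy sketch of why this matters: simulate observational data where sicker patients are more likely to receive the drug (a confounder), then compare the naive treated-vs-untreated difference against a difference computed within severity strata — a crude stand-in for the matching and adjustment that real trial-emulation methods use. All numbers here are invented for illustration:

```python
import random
from statistics import mean

random.seed(0)
TRUE_EFFECT = 2.0  # the causal benefit we build into the simulation

rows = []
for _ in range(20_000):
    severity = random.choice([0, 1])            # confounder: how sick the patient is
    p_treat = 0.8 if severity else 0.2          # sicker patients get treated more often
    treated = random.random() < p_treat
    # sicker patients have worse outcomes regardless of treatment
    outcome = 10 - 5 * severity + TRUE_EFFECT * treated + random.gauss(0, 1)
    rows.append((severity, treated, outcome))

def diff(data):
    """Mean outcome of treated minus untreated in a subset of the data."""
    t = [y for s, tr, y in data if tr]
    c = [y for s, tr, y in data if not tr]
    return mean(t) - mean(c)

naive = diff(rows)  # confounded: treated group is enriched with sick patients
adjusted = mean(diff([r for r in rows if r[0] == s]) for s in (0, 1))

print(round(naive, 2))     # far from the true effect of 2.0 (biased downward)
print(round(adjusted, 2))  # close to 2.0 once we compare within severity strata
```

The naive comparison badly underestimates the benefit because the drug went disproportionately to sicker patients; stratifying on the confounder recovers something close to the true effect, which is the intuition behind emulating a trial within a large RWE data set.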

14/ Moreover, new statistical methods have been developed to infer causation directly. “Correlation doesn’t mean causation” doesn’t have to apply here: if certain requirements are met in the data set, we can go beyond correlation to genuine causal inference.

15/ In the end, we are *already* doing RCTs + RWE. We perform RCTs to provide evidence to the FDA. But ultimately, payers (i.e. health insurance companies) do their own RWE analyses to determine whether a drug is worth paying for. Reimbursement is the real arbiter.

16/ This is a statistical learning problem combined with a policy problem. When lives are literally at stake, this isn’t just an abstract mathematical problem.

17/ Ultimately, the patient has to be at the center — and this discussion has to connect the perspectives of clinicians, statisticians, bioethicists, policymakers and patients.

18/ More details on the @a16z podcast here.
