The Bio + Health Newsletter

Bio Monthly Newsletter: July 2019

a16z editorial

Posted August 1, 2019

To receive this monthly update from the a16z Bio Team, sign up for our bio newsletter here. 

What’s Next in Dx/Rx/Px

Andy Tran

Predicting clinical trial success
Fixing the very expensive, very slow, and very risky process of drug development is a holy grail of the pharma industry. Taking a drug from R&D to approval costs more than $2.5B on average and takes roughly ten years—not to mention the risk, with an industry-wide approval rate of less than 10%. And it’s getting worse: per Eroom’s Law, the cost to develop a new drug roughly doubles each decade. This new study, led by Andrew Lo and colleagues at MIT, demonstrates how machine learning applied to healthcare data can potentially combat this crisis—in this case, by helping to predict clinical trial success. Knowing which drug candidates are most likely to work would let drug developers focus precious time, capital, and talent where they are most likely to pay off.

Lo’s study—which leveraged data from more than ten thousand clinical trials, several thousand unique drugs, and a few hundred disease indications—showed that ML and statistical techniques can predict the probability that a drug candidate advances from phase 2 or phase 3 to approval, with AUCs (Area Under the receiver operating characteristic Curve) of 0.75 and 0.81, respectively. Interestingly, some of the most predictive features of trial outcomes include trial status, trial accrual rates, duration, prior approval for another indication, and clinical trial sponsor quality.
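For readers less familiar with the metric: AUC measures how well a model ranks eventual successes above failures, where 0.5 is a coin flip and 1.0 is a perfect ranking. Below is a minimal, purely illustrative sketch of this kind of evaluation in Python, using synthetic data and invented features that loosely mirror those named in the study; it is not the authors’ model or dataset.

```python
# Illustrative sketch only: a toy classifier over hypothetical trial features,
# evaluated with ROC AUC, the metric reported in the study. All data and
# feature definitions here are invented for demonstration purposes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials = 1000

# Hypothetical features loosely mirroring those named in the study:
# accrual rate, trial duration (months), prior approval for another
# indication (0/1), and a sponsor "track record" score.
X = np.column_stack([
    rng.normal(50, 15, n_trials),   # accrual rate
    rng.normal(24, 6, n_trials),    # duration in months
    rng.integers(0, 2, n_trials),   # prior approval for another indication
    rng.uniform(0, 1, n_trials),    # sponsor quality score
])
# Synthetic label: 1 = advanced to approval, 0 = did not.
y = (0.02 * X[:, 0] + 0.5 * X[:, 2] + X[:, 3]
     + rng.normal(0, 1, n_trials)) > 2.0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# AUC of 0.5 = coin flip; 1.0 = perfect ranking of successes over failures.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"ROC AUC on held-out trials: {auc:.2f}")
```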

We still have work to do before we can reliably predict, at the earliest stages, which drugs are most likely to succeed. But if this ML challenge is cracked, it could transform the current guessing game of drug development into a streamlined system—one that efficiently translates drugs to patients with engineering predictability. And in the longer term, it could even help halt Eroom’s Law, rapidly accelerating the time to life-saving drugs while also drastically reducing the cost.


 

Andy Tran is a Principal at Matrix Capital Management Company.

Care Delivery

Julie Yoo

How software is eating care delivery

We’re in a time of massive change in the care delivery space. And while the whole space is ripe with opportunity, there are specific zones of the healthcare market that we believe hold the most promise for new tech-enabled companies. The intersection of several key tailwinds is now enabling market uptake of certain business models that, when powered by technology from the ground up, are poised to scale and translate across geographic markets with much more favorable economics than we’ve seen in the past. In this video, I identify some of the company characteristics and business strategies that we believe make for a higher likelihood of success—the things we look for when meeting with entrepreneurs and evaluating investment opportunities.

(P.S.: You can read an article version of that video here, or get exclusive access to the deck itself by signing up for our newsletter!)


 

Julie Yoo is a general partner on the Bio + Health team at Andreessen Horowitz, focused on transforming how we access, pay for, and experience healthcare.

Bio as a Platform

How to think about BCIs (brain-computer interfaces)
The magic of an “I was there when…” event is undeniable. And the chance to see Elon Musk and his team talk about Neuralink and the field of BCIs at the California Academy of Sciences drew a crowd eager to say just that. (We analyze the news in this podcast.) What’s really interesting about Neuralink’s approach is that it combines expertise and innovation across neuroscience, materials, electronics, algorithms, and fabrication all at once, rather than in the siloed fields of academia—which makes sense, given that building a BCI is simultaneously a neuroscience, materials, electronics, fabrication, and data-processing problem.

The field is very young and it may be too early to tell, but it’s useful to think about how these approaches break down along two broad axes: input vs. output, and invasive vs. non-invasive. Technologies that interface with the brain are fundamentally solving an input/output problem (speed of information transfer, privacy of information transfer, multiplexity of information output, etc.). These technologies either interpret “output” from the brain, for example to control a prosthetic limb or computer; or they provide “input” to the brain, aiming to do things like help deaf or blind patients, or treat neurological disorders like epilepsy or Parkinson’s. The second defining axis is more obvious: invasive vs. external to the body. Installing a device directly into the brain is far from risk-free. But thanks to the laws of physics, the skull is a pesky impediment to many non-invasive approaches for interacting with neurons in the brain. The more invasive neuroprosthetic devices therefore have the potential to input and output more complex signals with higher accuracy.
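For concreteness, here is a tiny, hypothetical sketch of that two-axis framing as a data structure (our own illustration, not Neuralink’s), with a few example applications placed into the quadrants:

```python
# A minimal sketch of the 2x2 taxonomy described above: direction of
# information flow x degree of invasiveness. The example placements below
# are illustrative, not claims about any particular product.
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    OUTPUT = "reads signals out of the brain"
    INPUT = "writes signals into the brain"

class Invasiveness(Enum):
    INVASIVE = "implanted through the skull"
    NON_INVASIVE = "external to the body"

@dataclass
class BCIApplication:
    name: str
    direction: Direction
    invasiveness: Invasiveness

examples = [
    BCIApplication("implanted prosthetic-limb control", Direction.OUTPUT, Invasiveness.INVASIVE),
    BCIApplication("EEG-based computer cursor control", Direction.OUTPUT, Invasiveness.NON_INVASIVE),
    BCIApplication("deep brain stimulation for Parkinson's", Direction.INPUT, Invasiveness.INVASIVE),
    BCIApplication("transcranial stimulation", Direction.INPUT, Invasiveness.NON_INVASIVE),
]

# Print each example with its quadrant.
for app in examples:
    print(f"{app.name}: {app.direction.name} / {app.invasiveness.name}")
```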

Thinking about these categories can help us understand the opportunities and the challenges of this new technology. The scientific challenges, as well as the difficulty of attracting investment in a nascent field and the (appropriate!) gauntlet of FDA approval, have so far prevented a vibrant ecosystem of BCI startups from emerging. Each of the four quadrants of invasive/external × output/input contains a number of opportunities, but each will require different design choices that cut across materials, electronics, algorithms, and fabrication. By showing the power of assembling a deeply interdisciplinary team, Neuralink has provided a blueprint for new companies in the space, regardless of the quadrant they ultimately choose.


 

Market Moves

Kouki Harasaki

The Quest for True EHR Interoperability
There’s been a lot of hype—and disappointment!—around the promise of EHRs (electronic health records) for years. The ultimate vision of seamless interoperability and user-centric design has yet to come to full fruition. But the market is still huge—recently sized at $31.5B for 2018 in this study published by market research firm Kalorama Information.

The discrepancy here can be traced back to the HITECH Act, passed in 2009, which created $19B+ in government reimbursements for providers that install EHR systems and demonstrate meaningful use. The HITECH Act achieved its intended effect of incentivizing US providers to move patient records from paper to digital, with EHR adoption among office-based physicians nearly doubling between 2009 and 2017. Of course, we know today that it also had the unintended consequence of creating an EHR ecosystem marred by widespread technology and data fragmentation (700+ EHR vendors), a lack of interoperability, poor functionality, and a lack of common API standards. The net result is a US healthcare system in which 86% of providers use some type of EHR system, yet fewer than one in three hospitals can find, send, receive, and integrate data from another provider. To make matters worse, the poor usability of EHRs is recognized as a major cause of the epidemic of physician burnout: a recent Health Affairs study found that system-generated messages make up over 50% of physicians’ EHR inboxes.
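To make the “common API standards” piece concrete: the emerging standard for exchanging clinical data is HL7 FHIR, a REST API that serves records as standardized JSON resources. Below is a minimal, illustrative sketch of what a standards-based query looks like, pointed at a public FHIR test sandbox (assumed available for demo purposes) rather than a real provider endpoint.

```python
# Illustrative sketch only: a patient search against a public HL7 FHIR test
# server. The base URL is an assumed public sandbox, not a provider system,
# and the query returns synthetic test data.
import requests

FHIR_BASE = "http://hapi.fhir.org/baseR4"  # public test sandbox (assumption)

# Search for patients by family name; FHIR returns a standardized JSON "Bundle".
resp = requests.get(
    f"{FHIR_BASE}/Patient",
    params={"family": "Smith", "_count": 5},
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
bundle = resp.json()

# Every conformant EHR exposes the same resource shapes, which is what makes
# "find, send, receive, and integrate" across vendors tractable.
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    name = patient.get("name", [{}])[0]
    print(patient.get("id"), name.get("family"), name.get("given"))
```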

But the value of solving these problems remains clear. The US Government, under the 21st Century Cures Act, is currently proposing a set of carrots and sticks to force interoperability through a top-down policy approach. Apple is driving interoperability through a bottom-up consumer approach, and recently announced that Apple Health Records is now open to all US providers via a self-service registration site. And meanwhile, incumbent EHR vendors continue to acquire new functionality via M&A—such as EHR vendor Allscripts’ recent acquisition of specialty drug prescription startup ZappRx, which will expand Allscripts’ capabilities in the space.
