Editor’s Note: Jorge Conde is Andreessen Horowitz’s newest general partner on our bio fund, which was formally established two years ago with general partner Vijay Pande to lead the firm’s investments at the intersection of biology, computer science, and healthcare. Before joining a16z, Conde was at Syros Pharmaceuticals — where he most recently served as Chief Strategy Officer (and before that, as CFO and Chief Product Officer) — advancing a new wave of medicines to treat cancer and other diseases through a novel gene control platform. He also co-founded, with renowned Harvard and MIT genetics professor George Church, genomics interpretation company Knome (acquired by Tute Genomics in 2015). With degrees in both business and biology, Conde has also worked in marketing, operations, and biotech investment banking. Currently based in Boston, he will soon move to California.
Jorge Conde joined Vijay Pande and the a16z editorial team last week to discuss all things bio — or as he calls it, “The Century of Biology”…
a16z: When we first formally established the a16z bio fund, there was a lot of heated internal discussion around NOT calling it a ‘biotech fund’ because it’s such a loaded term — at least for those who come from the world of software and experienced past waves of biotech in Silicon Valley. It conjures up this impression of expensive, time-consuming, heavily regulated work compared to what happens with software, which has all these mechanisms (like Moore’s Law, network effects, and more) for advancing innovation; mechanisms that are now arriving in biology and healthcare.
Vijay: Instead of framing it as “biotech” vs. “bio”, another way of framing this is to simply ask whether what we’re looking at still has a lot of real science that has to be done — or whether it’s something that’s moving into the world of engineering. Science is more empirical and involves discovery, so it can’t be done on a typical roadmap. Whereas with engineering, we can plan things out, make incremental innovations, and progress in a very systematic way. And so if you think about how software companies are built, something like Moore’s Law — whether for engineering computers or for engineering genome sequencing — can come in to accelerate things. There may still be challenges to overcome in whatever you’re building, but the fundamental science was worked out years ago, and so now it’s more about engineering to push it forward. Also, with an engineering approach, you can test out the market; as you’re building a company, you get a lot of signal along the way.
Jorge: When engineering, biology, and computer science come together it elevates bio to a “read/write” paradigm. That is, you don’t just read the code of biology but you can also write, or design, with it. Take genomics to begin with; “read” is already happening at an unprecedented speed and scale there. When I started in the space about 10 years ago, next-generation sequencing was starting to come online. The first generation was based on a technology known as “Sanger sequencing”, where you would isolate a specific region of a genome or gene, and then painstakingly read out the letters of the biological code — A, C, G, T, the four nucleotide bases that make up that genome. It was very, very low throughput: we’re talking on the order of thousands of bases, when there are 3 billion nucleotide bases in the human genome! What the next-generation sequencing platforms did (with Illumina and the like in the late 2000s) was let us do this better and cheaper. But more importantly, we can now generate that data in a day or so. That timescale is definitely more engineering than science.
a16z: So the Moore’s Law of genomics is even faster than the one for computing. We’ve talked about that phenomenon before; what about it specifically is interesting to you there, right now?
Jorge: To me what’s most fascinating for entrepreneurs and startups is that we’ve gone from a “single lens” view of biology where the focus was on genomics (the A, C, G, T code of DNA) to where we can now look at biology via multiple lenses… That is, various biological signals — DNA, RNA expression levels, proteomics — in a more multi-dimensional and high-throughput way. We can integrate all these different lenses together to get a much clearer picture of what’s happening from a disease biology standpoint.
That’s where I find epigenetics/epigenomics to be a fascinating, promising area. If you think of DNA as the genetic “source code”, and “cell programs” as the set of genes being expressed within different cell types, then epigenomics focuses on how levels of specific genes are controlled within those different cell types — basically, the specific portions of the source code that a particular cell is relying on. Understanding this can help us unravel a lot around normal cell development as well as how disease develops. That code isn’t necessarily corrupted; it could just be that the cell program has been modified for any number of reasons.
The other thing to point out is that we’re starting to do all this at high resolution — at the level of a single cell. And that’s important because there are a lot of applications where genomics and various other -omics matter at very low signals, because you’re essentially hunting for a needle in a haystack. With non-invasive prenatal testing for example, there are so few fetal cells relative to the mother’s own cells. Or with a tumor, which is actually composed of a very heterogeneous population of cancer cells, querying each one of those cells individually allows you to see things more clearly. Before, you weren’t actually sequencing a genome; you were really sequencing the average of millions of genomes in cells from patient samples.
a16z: So multiple lenses, single cells. Where does machine learning come in then?
Vijay: That’s the other interesting angle to those “high-res” trends Jorge describes — machine learning now translates beautifully to genomics. All those computers analyzing 2-D images? Well, a genome is like a picture, only it’s a 1-D grid of pixels. And just as convolutional neural nets don’t care where the dog is in a picture (“translational invariance”), they don’t care where a pattern sits along the DNA strand. Machine learning exploits that locality to find things.
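A minimal sketch of what Vijay is describing (pure illustration — the motif and sequences below are made up, and a real model would learn many filters rather than use one hand-written one): a single convolutional filter sliding along a one-hot-encoded DNA sequence scores a motif equally well wherever it occurs, which is translational invariance in one dimension.

```python
# Illustrative 1-D "convolution" over DNA: one filter, hand-set to a motif.
BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a list of 4-element one-hot vectors."""
    return [[1.0 if b == base else 0.0 for base in BASES] for b in seq]

def conv_scores(seq, motif):
    """Slide the motif filter along the sequence; dot product at each offset."""
    x, w = one_hot(seq), one_hot(motif)
    k = len(w)
    return [
        sum(x[i + j][c] * w[j][c] for j in range(k) for c in range(4))
        for i in range(len(x) - k + 1)
    ]

# The filter fires with the same peak score wherever the motif sits:
early = conv_scores("TATAAAGGCCGCGC", "TATAAA")  # motif at the start
late = conv_scores("GGCCGCGCTATAAA", "TATAAA")   # motif at the end
print(max(early), max(late))  # 6.0 6.0 -- a perfect 6-base match either way
```

That position-independence is exactly why the image-recognition machinery carries over: the network only has to learn what a motif looks like, not everywhere it might appear.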
Jorge: Yes, if you take these new technologies that provide new lenses into disease biology, you can actually “deconvolute” what’s going on in a way we couldn’t do before. Previously we just didn’t have the analytical capabilities to derive meaning from the various data streams; it was overwhelming, and we couldn’t really “see” the biology.
The practical implications of this are staggering: For instance, at Syros, we found that when you looked at a cell type that was normal, and you looked at its related diseased counterpart, you could see changes in the differential set of genes that were being expressed in one cell type vs. another (the cell program). Sometimes there’s not a mutated gene that’s causing the disease — it’s not just a bug or “bad gene” in the code (genomics) — it’s the cell running the wrong program, which is where epigenomics comes in again. The gene doesn’t just turn on and off like a light switch to cause disease; it can be more like a dimmer, going higher or lower (too much or too little of a gene). What causes a healthy cell to shift into a diseased state, whether it’s cells multiplying uncontrollably in cancer or dormant immune cells suddenly getting activated in autoimmune disease? The ability to understand how the genome is being deployed or regulated within a cell — is it the wrong dosage or level for a particular set of genes? — becomes a very interesting new avenue to help us get to the right drug, for the right patient, at the right dose, and at the right time.
a16z: Clearly, precision, personalized medicine is the holy grail there. But where are we right now in the overall trend of the “code” of biology and the code of computers coming together?
Jorge: We’ve essentially hit the “read/write” stage of biology — because we can now read biology more comprehensively with more lenses and higher resolution, we can also increasingly write to biology. We can increasingly program biological systems, whether it’s gene editing with CRISPR or with various genetic engineering tools that have been refined over time. But as Vijay framed it earlier, when you make something an engineering problem vs. a science problem, then you can industrialize a lot of these processes to do them at higher throughput, higher resolution, lower costs, and higher quality… basically at scale with high precision.
This leads us to the second derivative of read/write, which is insight/design. We’re not just generating data on the read side; we’re also fundamentally understanding it deeply in ways that we couldn’t before, especially with multiple data streams and machine learning to help make sense of it all. And on the write side, we’re not just editing; we’re also designing with biology. In the future, biology can become its own creative medium of sorts.
a16z: That’s what happens when you have both read/write for biology?
Jorge: I believe this will be the Century of Biology. Just as the Information/Computer Age yielded technology that allows us to assemble and move data around in amazing ways, there is no known force in this universe that’s more effective at moving around and assembling matter than biology.
And so our ability to read it, to write it, to analyze it, to design with it is going to touch not just health but every industry — just as the computing industry did before it. Software first started disrupting the industries where the primary product was information, right? And then it eventually moved into the physical world, revolutionizing existing industries whether through Amazon or eBay or Airbnb or Lyft. Similarly, our ability to read/write biology will disrupt a wide range of industries. In addition to its obvious impact across health, we’re increasingly using biology for manufacturing. Eventually it’s going to impact areas people don’t typically think about as “biological” — like textiles, architecture, and many more areas — in ways we can’t even conceive of yet.
a16z: If machine learning is a vector to much of this, how has your thesis evolved there Vijay? When we started the bio fund a few years ago, you said that machine learning would have a huge impact where biology and computing meet.
Vijay: Just think about machine learning now versus two years ago; that alone is pretty dramatic. There was this expectation back then that the “wave” was about to crest, and now it’s indeed finally here — computers are not just getting close to human performance, computers are exceeding human performance. Sometimes, when you’re in the middle of it, you can’t quite tell where you are on the wave, whether it’s the trough or the crest, which is why I love the surfing analogy; sometimes it comes, sometimes it peters out. In this case it came through, big time.
But now what we’re realizing is that machine learning is just the means to an end, the opening salvo of something much bigger. Extending the wave analogy, “the bigger ocean” out there is that machine learning can be used towards engineering biology; we think of DNA as the fundamental map of our existence, but it’s also just a tool that can be used to put DNA barcodes on things or even store digital content and so much more.
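As a toy illustration of that last point (this is the naive textbook mapping, not any particular company’s encoding scheme): storing digital content in DNA can be as simple as packing two bits into each of the four bases.

```python
# Naive two-bits-per-base mapping, for illustration only.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Pack each byte into four nucleotides (2 bits per base)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    """Reverse the mapping: every four bases back to one byte."""
    bits = "".join(TO_BITS[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"bio")
print(strand)  # CGAGCGGCCGTT
assert decode(strand) == b"bio"
```

Real DNA data storage schemes are more involved — they avoid long runs of the same base and add error correction for synthesis and sequencing errors — but this two-bits-per-base idea is the core of why DNA is such a dense storage medium.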
a16z: When you talk about “engineering”, it’s not just that you’re able to build it to a roadmap, it’s that it’s scalable. You can do it, whatever “it” is, at an industrial, production-ready scale. What does this mean for building companies in the bio space?
Jorge: I like to think of bio companies as operating in systems at three levels. Often you have a small startup (1) trying to understand and develop technologies for a complex biological system; (2) doing so for a much larger customer base, such as the biopharmaceutical industry… which is another complex system with its own specific needs, processes, and challenges; and (3) deploying innovations into another complex, often heavily regulated system, which is of course the healthcare delivery system. Even if your actual end product doesn’t address those levels directly, you will still ultimately be operating across all of them in some form. You can’t operate in isolation.
Often startups in this space will have the killer technology, but it’s not obvious yet how it can be applied to the next two levels. Instead of product-market fit, it’s more a product-to-science-and-engineering fit. And then, as you move downstream towards healthcare delivery, you have to think more about how you innovate to ensure that both your business model and product fit within this complex system.
a16z: That “systems” view for bio startups sounds like such a neat, linear progression: first you start with this killer tech, then figure out product-market fit, build in the business model. But surely with this kind of hard science and tech, it’s far messier in practice?!
Jorge: Yeah, it’s definitely not that neat. Here’s a rough analogy: You’re trying to build a jumbo jet. And you’re trying to figure out if customers not only want or need a jumbo jet, but if they can pay for it. And will the government give you permission to fly your jumbo jet? Oh, and at the same time, you’re trying to figure out, wait, can this jumbo jet actually fly — do we actually really understand the laws of gravity and aerodynamics as well as we thought we did? And so on…
a16z: Yup, things never work out so neatly in the end, even though the narrative sounds neat in hindsight. On that note, Vijay, what’s one thing that surprised you from a couple years ago when we started the bio fund? Sounds like the machine learning expectation delivered in spades. But is there anything that hasn’t quite panned out yet as hoped?
Vijay: Well, back then I homed in on three pillars: “computational biomedicine”, “cloud biology”, and “digital therapeutics”. I think the one that hasn’t really come to fruition is cloud bio, this idea that you can do wet lab experiments in the cloud. Currently, companies use contract research organizations (CROs) for this. So I spent some time looking for a more modern, “programmable CRO”. Not only would this enable experiments to go faster, but it would lower capex (capital expenditures) and even opex (operating expenses) for bio startups to scale the way computer startups did with AWS — actually, not quite like AWS, but more akin to PCs in a garage. But I haven’t seen this really happen yet.
Jorge: It’s basically just where you can start up a company on a credit card and get a killer experiment to proof of concept without having to go set up a lab. The challenge is that so much of what happens at the early stages of biology is the tinkering part (optimizing protocols, etc.) — it’s more like playing with the recipe and making sure it works before you can send it to an industrial kitchen.
a16z: Ok let’s switch gears. Can’t resist saying this, but “Pande & Conde” (sorry not sorry) is like the name of a law firm… or a band. Actually, it’s like an East Coast vs. West Coast rap battle, isn’t it, since you’re both coming from two different, competing ecosystems for bio — one in Boston, one in Silicon Valley?
Vijay: I would say that the Silicon Valley tech ecosystem is really realizing the potential of bio, and that the Boston bio ecosystem is finally realizing the value of tech. So it’s like we’re digging a tunnel from both ends, and meeting in the middle. It might even be a race to see who can get to the middle fastest. ;)
Jorge: And you’re right to call them “ecosystems”. Boston became an epicenter of biotech because it connects not only the universities, academic labs, and industry co-located here, but also a deep hospital system (Children’s Hospital, cancer hospitals, maternity/women’s hospitals, and of course general hospitals) that feeds into all of those. That’s helped a lot of patient-focused development take place. But if you look at the number of publications in medicine, you see that the Silicon Valley approach of tackling tough problems with compute and other technologies is increasingly valuable in bio.
There’s a reason the two evolved as distinct ecosystems, like species on separate islands in the Galápagos; they each have their own strengths. But by bringing tech and bio together, we’re creating a new, integrated, and more evolved ecosystem. That’s what drew me to a16z, Vijay, and the bio fund.
a16z: So how does that play out with the entrepreneurs, the founding team? Is our ideal profile someone who comes to bio from computing, or the other way around? Can’t they simply hire the “other side”?
Vijay: The ideal entrepreneur can go deep into both individually, not just across a team. One person who can do both is like two people who are telepathic; and since no one can actually read minds (yet), that one person is rare and valuable! Jorge’s actually a great example of somebody who is deep in the bio, but also fast with getting the computer science. It doesn’t matter when and where you learn it, whether at age 12 or in grad school or on the job. The key is that you have enough depth to grasp all the nuances, details, abstractions, and complexity on both sides.
Jorge: I agree. I’d also add that because there are a lot of “illogical” aspects to the industry, it’s good to have someone who has had some direct exposure, or been battle-tested somehow, in one of the two systems. Otherwise you run the risk of naïvely attacking the problem from just one side.
a16z: On that note, Jorge, I know you studied biology as an undergrad at Johns Hopkins, but you have an MBA from Harvard — and also worked as an investment banker at Morgan Stanley! How does that all add up?
Jorge: I went into finance to see if I could understand the business side of things: what drives an industry, and how do the operations actually work? It was a nice boot camp for someone who didn’t study business in undergrad. But then I realized I didn’t want to stay on the advisory side of things; I wanted to build and do. And so when I went back to grad school, I also did additional graduate work in the sciences at the medical school at Harvard and at MIT. And that’s actually what got me hooked on genomics.
But beyond starting companies, one of the most formative experiences for me along the way was working at a large biotech company that had a drug on the market and had to worry about how it got paid: dealing with the payers, the doctors, the patients. I’ve been in front of the FDA. This goes back to the second and third levels of the systems view I was describing earlier.
Vijay: Look, the two things that we’re always looking for are the killer tech and the killer go-to-market. And when you see the makeup of the full bio fund team, we combine both of these things — especially with Jorge adding an even deeper and more nuanced understanding of go-to-market. And as much as I love tech and live tech and dream tech… it’s the killer go-to-market that always wins.