Raising Health

AI at the Intersection of Bio with Vijay Pande, Surya Ganguli, and Bowen Liu

Surya Ganguli, Bowen Liu, Vijay Pande, Kris Tatiossian, and Olivia Webb

Posted August 13, 2024

Bowen Liu, PhD, investing partner, and Surya Ganguli, PhD, venture partner, join Vijay Pande, PhD, general partner of a16z Bio + Health.

Together, they detail different methods through which AI could assist drug development, the opportunity for AI to flag new targets and compounds for scientists to investigate, and the science fiction-sounding notion of developing a foundation model that untangles biology.

This is an in-depth conversation between three AI experts and biologists, so we’re publishing the transcript below.

Transcript lightly edited and may contain occasional errors.

Olivia Webb: Hello and welcome to Raising Health, where we explore the real challenges and enormous opportunities facing entrepreneurs who are building the future of health. I’m Olivia,

 Kris Tatiossian: and I’m Kris. In this episode, we take a deep dive into AI for biology with Vijay Pande, general partner, Surya Ganguli, venture partner, and Bowen Liu, investment partner. 

Olivia Webb: Together, they detail different methods through which AI could assist drug development, the opportunity for AI to flag new targets and compounds for scientists to investigate, and the science fiction-sounding notion of developing a foundation model that untangles biology. 

Kris Tatiossian: This is an in-depth conversation between three AI experts and biologists, so we’ll also publish the transcript alongside the episode on our website if you want to follow along. You’re listening to Raising Health, from a16z Bio + Health.

Vijay Pande: Surya and Bowen, thank you so much for joining me on Raising Health. 

Surya Ganguli: Yeah, thanks for having us. 

Vijay Pande: Did either of you have an aha moment that like, oh, this is going to be big, that this is going to be interesting?

Surya Ganguli: Yeah, I’ve been working in AI for a while, a decade or more. And for me, in AI in general, ChatGPT was just eye-opening. I’d seen GPT-2, and it was kind of impressive. But ChatGPT was the first thing that did things we had no idea it could do. We couldn’t predict it. It was remarkable.

And then going to more of the biology domain, I was actually really impressed by ESMFold, this evolutionary scale modeling where you could do the same thing that you do for language, but do it for protein sequences. And then you learn representations that know about the structure of proteins. This kind of going from one modality to another was remarkable. I was kind of taken aback by that.

Bowen Liu: I think for me, it was probably the first year of grad school. I was coming from working in the lab at the bench, and in our collaboration with Karla Kirkegaard, we did a drug repurposing project, trying to adapt existing drugs for dengue infections.

And I remember doing some of the calculations on my laptop, then a month later buying the compounds and having our collaborators test them, and that actually working. That really made me think, hey, this computational chemistry, computational drug design, is really exciting.

Vijay Pande: Where we stand today is that AI in drug design is, I think, no longer a question of if. It’s a question of how: how does it get rolled out?

How is it useful? So the “if” has gone away, but I think it took a lot to get here, like multiple decades.

Maybe we could start by talking about that earlier era, when we were doing machine learning. What could we do then? And what makes it AI now?

Bowen Liu: I think even before ML, I think computational chemistry, computational biology, they’ve been around for like 40 years or more, right?

And I think the early methods we had were maybe from two camps. One, we had the physics-based methods, where you kind of start from the underlying low level physics to make predictions about chemistry. And on the other end, you have these expert systems.

Humans would encode some heuristics or rules to make predictions. There were pros and cons for each. For the physics approach, very generalizable, right? But then a lot of methods are very computationally expensive, especially for the systems we’re interested in for drug discovery and biology.

Whereas the expert systems were very efficient once you had it all coded up, but weren’t super generalizable, right? Then we had machine learning methods that fell in between these two extremes, where you would learn from the data sets you had, so that, ideally, they would generalize a bit better than just the human-encoded rules, but be way more computationally efficient than physics.

But with these ML approaches, you still had to define the input features, right? As a scientist, you had to define how best to represent a molecule. But then deep learning came around, where the whole philosophy was [that] you would take the rawest representation and have the algorithm learn what the best representations are to solve the particular task.

Vijay Pande: And actually one of my favorite examples for representation: if I asked you 25 plus 17, that’s really easy to do. If I gave you that same problem in Roman numerals, you’d probably have to translate it back into Arabic numerals, do the computation, and then put it back into Roman.

Some representations make computation natural…unless you’re Roman. But so then what happens from there? Where’s the story go? 

Surya Ganguli: The deep learning revolution kind of changed everything. And it’s really a confluence of large amounts of data. That’s key. That allows us to train larger models. But also, a key thing was self-supervised learning, so that you can learn from unlabeled data using a very simple task.

So let’s go through the list of things, place them in context, and think about how much data is required. GPT-4 is thought to have been trained on about 5 trillion unique tokens. You can think of tokens as like sub-words, right? That’s a huge amount of data, and all it’s trained to do is predict the next word. But then it learns representations that can solve all sorts of other problems.

To get a sense of how massive that data set is, it would take a human about 20,000 years to read that amount of text, right? So now let’s go from language to genomic sequences, or the amino acid sequences underlying proteins. ESM3, evolutionary scale modeling, did the same language modeling, but now on amino acid sequences, about 2.8 billion of them. At a rough estimate of, say, 300 amino acids per protein, that’s just under a trillion tokens, which is about the same order of magnitude as GPT-4, like one fifth as much, right?

So it’s kind of cool, like evolution left [behind] less protein text on our planet than humans left on the internet. 

Vijay Pande: But some of it is what we’ve sequenced so far too, right? 

Surya Ganguli: Exactly. We haven’t sequenced everything yet. But now you can start to see that, going from language to proteins, we have a lot less data.

For comparison, for 3D structure we have even less data. We have about 200,000 or so solved protein structures in the Protein Data Bank. Small molecule discovery is another area where deep learning had a huge impact. And here the space of stable chemicals is about 10 to the 180, right?

That’s a huge space of molecules. But the space of drug-like compounds, ones that are soluble and bind with biological molecules, proteins and so on, is an infinitesimal fraction of that. It’s like 10 to the 40, right? Just as a comparison, the number of stars in the universe is about 10 to the 24.

So how do we explore this small fraction of space, which is still huge? People are able to use language models to do that as well. And then with single-cell gene expression, you can create foundation models for cell biology with 36 million cells, and in neuroscience, we can use ECoG arrays and try to decode speech from the brain.

So the availability of data, compute, and algorithms is what really changed everything. 
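
To make the back-of-the-envelope comparison above concrete, here is the arithmetic. The constants are just the rough estimates quoted in the conversation, not precise figures:

```python
# Back-of-the-envelope numbers from the discussion above (all rough estimates).
GPT4_TOKENS = 5e12        # ~5 trillion unique training tokens (the estimate quoted above)
ESM_SEQUENCES = 2.8e9     # ~2.8 billion protein sequences
AA_PER_PROTEIN = 300      # assumed average protein length, in amino acids

protein_tokens = ESM_SEQUENCES * AA_PER_PROTEIN          # ~8.4e11, just under a trillion
print(f"protein tokens: {protein_tokens:.1e}")
print(f"fraction of GPT-4 estimate: {protein_tokens / GPT4_TOKENS:.0%}")   # ~17%, "like one fifth"
```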

Vijay Pande: One of the things that I think it’s interesting to emphasize is that, with deep learning, you have something that’s a pretty rich model that can learn the representations that Bowen was talking about because it’s a neural net into another neural net into another neural net. And then it almost becomes like a complex physics problem itself, right? With all these parameters. 

But you also talk about self-supervised learning, which is a key distinction. Because often in machine learning you have unsupervised learning, which is some sort of clustering, like these are similar to each other, or supervised learning, which is: this is a drug, this is active, this is inactive.

But self-supervised is a little different, because you don’t have that many labels. And so that was also another huge thing, that you actually don’t need all these labels. And I think that’s a big problem in drug design, because often the common knock on machine learning for drug design is like, yeah, if you have a hundred actives, we can train a great model.

But if you have a hundred actives, you don’t need machine learning, you’re basically ready to go into the clinic. 

Surya Ganguli: Yeah, exactly. 

Vijay Pande: Part of the self-supervised stuff is low-shot learning, right? How do you think about this low-shot setting, where you don’t have a lot of labels? How does that come to be? How does that work? Especially in the drug design context.

Bowen Liu: Yeah. I think you touched on probably the core problem of ML applied to science: while we have a lot of unlabeled data, there’s just not that much labeled data out there. And a lot of that is because it’s experimentally expensive to generate data, both in terms of time and cost.

And so you’re right. Usually in a drug discovery project, if you have a hundred actives, you should be close to a drug. But a hundred data points is tiny for machine learning. So then the idea is: can your model learn information from other sources of data, learn better representations, such that you can fine-tune it with the small amount of labeled data you have and perform better on the actual application you’re interested in?
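
As an illustration of that pretrain-then-fine-tune pattern, here is a minimal, schematic PyTorch sketch. The encoder, tokenization, and data are hypothetical placeholders; the point is the shape of the workflow Bowen describes: a representation learned from abundant unlabeled data is frozen, and only a small task head is fit on the roughly 100 labeled actives and inactives.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for an encoder pretrained with self-supervision
# on large amounts of unlabeled molecular data (e.g. masked-token prediction).
class PretrainedMoleculeEncoder(nn.Module):
    def __init__(self, vocab_size=64, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.backbone = nn.GRU(dim, dim, batch_first=True)

    def forward(self, token_ids):                 # (batch, seq_len)
        h, _ = self.backbone(self.embed(token_ids))
        return h.mean(dim=1)                      # (batch, dim) molecule embedding

encoder = PretrainedMoleculeEncoder()
# encoder.load_state_dict(torch.load("pretrained.pt"))  # weights from self-supervised pretraining
for p in encoder.parameters():
    p.requires_grad = False                       # freeze: too little labeled data to retrain it

head = nn.Linear(128, 1)                          # tiny task head: active vs. inactive
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# ~100 labeled molecules is tiny for ML, but enough to fit a small head
# on top of representations learned from far more unlabeled data.
tokens = torch.randint(0, 64, (100, 40))          # placeholder tokenized molecules
labels = torch.randint(0, 2, (100, 1)).float()    # placeholder activity labels

for epoch in range(50):
    logits = head(encoder(tokens))
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```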

Vijay Pande: What are the recent breakthroughs that are worth pointing to, especially for people who want to understand AI for drug design?

Bowen Liu: The past few years, we’ve had a lot of impactful work on the problem of protein structure prediction, right? With AlphaFold, you know, RoseTTAFold…given a protein sequence, can you predict its 3D structure, which is what defines a lot of protein function?

And so in some ways, in the past four or five years, this problem of protein structure prediction went from something that was quite far away from being solved to now—you could argue it’s pretty much solved for a lot of common proteins. I think that’s an area where recent deep learning advances have completely transformed the field.

There’s also a related area of these large language models applied to protein sequences and biological sequences. These kinds of models are able to learn interesting, useful biology for scientists. And they also have very promising applications in actual drug discovery as well.

Vijay Pande: Well, that’s something I want to double click on. Because for actual drug discovery—we should talk about what the stages are—but usually you find a hit, you get a lead, you optimize the lead. You have some sense of ADME, you go through animals, you go through clinical trial phases and then eventually you’re in patients. 

Let me push back on…for structure prediction, what does structure prediction get you for drug design? What’s the element there? What’s the utility?

Bowen Liu: Let’s say small molecule drug design, right? You’re trying to find molecules that bind to particular proteins you’re interested in, proteins that we think modulate downstream disease. And so a part of that is, given a protein, can you design very strong binders for it?

Knowing what the 3D structure of a protein looks like is a very useful starting point for designing these small molecule binders.

Surya Ganguli: It’s also a multi-objective optimization problem, right? Because you don’t just want a good binder. You don’t want off target effects. You want it to be soluble. You want it to not be toxic. You want it to be easily synthesizable. So there’s machine learning aspects of all of these that are in play.

There’s all sorts of interesting LLMs that can discover synthesis pathways for drugs and things like that. So I think it’s really putting all of it together, which I think is exciting. There’s a lot of exciting work going on there. 

Just to throw out some numbers on why there’s a huge opportunity for AI to make drug discovery much more efficient and less costly: the cost of drug design (I know in the industry this is well known, but it’s worth emphasizing) is about 2.5 billion dollars per drug over 10 to 15 years, right? It’s highly inefficient. 90 percent of drug candidates don’t get FDA approval. And there’s this law that the number of drugs brought to market per billion dollars spent is halving every nine years.

Part of the problem is that we’re setting the bar higher, because new drugs have to outperform existing FDA-approved drugs. And those existing drugs target only about 800 of the 20,000 to 25,000 known genes that we have. So there’s a huge space of opportunity.

I really think AI for drug discovery can hit at all the inefficiencies in every step of the drug design process. And we’re kind of just getting started there. 
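
As a small illustration of the multi-objective framing Surya describes above (binding, off-target effects, solubility, toxicity, synthesizability), here is a schematic sketch. The property values and weights are hypothetical placeholders; in practice each property would come from its own model or calculation, and many programs use Pareto-style ranking rather than a single weighted score.

```python
from dataclasses import dataclass

# Hypothetical predicted properties for a candidate molecule; in practice each
# of these would come from its own ML model or physics-based calculation.
@dataclass
class CandidateProfile:
    name: str
    binding_affinity: float      # higher = binds the target more tightly
    off_target_score: float      # higher = more predicted off-target binding (bad)
    solubility: float            # higher = more soluble
    toxicity_risk: float         # higher = more predicted toxicity (bad)
    synthesis_difficulty: float  # higher = harder to make (bad)

# Toy weights for scalarizing the objectives; real programs tune these,
# or rank candidates on the Pareto front instead of one weighted score.
WEIGHTS = {
    "binding_affinity": 1.0,
    "off_target_score": -0.8,
    "solubility": 0.5,
    "toxicity_risk": -1.2,
    "synthesis_difficulty": -0.4,
}

def score(c: CandidateProfile) -> float:
    return sum(WEIGHTS[k] * getattr(c, k) for k in WEIGHTS)

candidates = [
    CandidateProfile("cmpd_A", 0.9, 0.3, 0.6, 0.2, 0.5),
    CandidateProfile("cmpd_B", 0.7, 0.1, 0.8, 0.1, 0.2),
]
for c in sorted(candidates, key=score, reverse=True):
    print(c.name, round(score(c), 3))
```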

Vijay Pande: A lot of the excitement these days is about generative AI, where you not only understand some latent space but can go back and generate something from that latent space. So where is that going?

Surya Ganguli: That’s the inverse design problem, right? Can you design a molecule with a pre-specified set of properties? There are ideas for using diffusion models, say, with classifier guidance to drive the design in sequence space toward certain properties, and things like that.

There’s a whole bunch of work going on in that space that I think is quite exciting, as usual. In this deep learning field, there’s tons of people tinkering around. It’s more of an art than a science. And that’s where the success comes, with many, many people tinkering around.
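
Here is a highly simplified sketch of the classifier-guidance idea Surya mentions: at each step of a reverse (denoising) process, the sample is nudged by the gradient of a property predictor, so generation drifts toward candidates predicted to have the desired property. The networks below are untrained placeholders over a toy continuous representation, not a working molecular generator.

```python
import torch
import torch.nn as nn

DIM = 32   # toy continuous representation of a molecule (e.g. a latent vector)

denoiser = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, DIM))    # placeholder score/denoising model
property_net = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))  # placeholder "has desired property" classifier

def guided_sample(steps=100, guidance_scale=2.0, step_size=0.05):
    x = torch.randn(1, DIM)                     # start from noise
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        # Classifier guidance: gradient of log p(property | x) with respect to x.
        log_p = torch.nn.functional.logsigmoid(property_net(x)).sum()
        grad = torch.autograd.grad(log_p, x)[0]
        with torch.no_grad():
            # Denoising update nudged toward higher predicted property,
            # plus a little noise (a crude stand-in for the reverse diffusion kernel).
            x = x + step_size * (denoiser(x) + guidance_scale * grad)
            x = x + 0.01 * torch.randn_like(x)
    return x.detach()

candidate_latent = guided_sample()
```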

Bowen Liu: Right now, the number of molecules we can buy commercially is probably 10 to the 12, right, like tens of billions. What we have available right now is a very tiny percentage of what’s possible. And so chances are that future medicines and future materials are going to be stuff we haven’t seen before. Just looking at what we have right now and screening those is probably not enough. You have to generate new ideas.

I would say the challenge here is that, unlike genAI for computer vision or natural language processing, where it’s very easy to validate how good the generated outputs are (we can look at an image or a piece of text and say, hey, this is good or not), in science it’s actually the inverse. It’s probably easier to generate ideas, but way harder to validate them.

Vijay Pande: Let me push back on that. You could for sure test something like AUC against some benchmark and compare methods and so on. So presumably there’s been progress there.

Bowen Liu: Yeah. So definitely in silico, on a benchmark basis, we’ve seen improvements in these methods recently.

But I think at the end of the day, we’re still in a regime where we’re going to have to make the things that the models generate and then test them in a lab, right? And that’s the biggest bottleneck for a lot of this drug design and generative modeling.

Vijay Pande: Yeah, and I think part of it too is going to be: if you’re basically using ML- or AI-designed libraries and you screen a million compounds for one active, that’s going to take a long time. But if it gets to the point where you design five, you screen five, and five are active, or maybe four are active, then we’re in a very different regime.

Surya Ganguli: Yeah. We’ve seen examples from industry where you can generate maybe 10 or so, and they have a false positive rate of, say, 20 to 30%, right? That’s not so bad. It’s getting there.

Vijay Pande: Yeah. Because that’s a very different regime. Then we’re not spending all this time making it. And then there’s some reasonable hope for success. 

And also, frankly, from a cultural point of view, if you make 20 things as an AI engineer and one works, I think your experimental collaborators are not going to be loving you. And probably not trusting you. You make 20 and like 15 work…

Surya Ganguli: Yeah, exactly. That’s getting actually kind of interesting. There’s several companies that are basically starting to get to that level, which makes the computational approach quite exciting.

Vijay Pande: Modern AI really offers the opportunity for foundation models. We might have one model.

The fantasy is one model that designs all of our drugs, right? And so, where are we today in terms of that aspect of AI? Because that’s a huge shift, probably one of the bigger philosophical shifts.

Surya Ganguli: Yeah, it’s an interesting question. I maintain—this is going to be controversial. 

Vijay Pande: Oh, that’s good. 

Surya Ganguli: That the best model is the one that has your test example of interest in the training set. 

Vijay Pande: Well, the best in what sense? 

Surya Ganguli: Let me get there. The next best model is where you have many, many training examples that are close to your test example so that you can interpolate, right?

So why am I saying that? The biggest failure mode of ML is that it can’t really do out-of-distribution generalization. Take AlphaFold3, right? It was heralded as a big success, although they didn’t release their code, so the academic community can’t really kick the tires on it yet.

But folks at, say, Inductive Bio, one of our portfolio companies, actually created a stronger physics-based docking algorithm to predict protein-ligand binding. And they compared it to AlphaFold. And it didn’t do as well as AlphaFold on the 50 most common ligands.

And it’s not surprising that AlphaFold did well on those, because those 50 most common ligands appear more than 100 times in the Protein Data Bank. But if you take out those 50 most common ligands and look at the rest, their basic physics-based docking did way better than AlphaFold, like 8 percent better accuracy.

So physics beats ML when the training data is not like the test data, right? That’s a key lesson, I think. So going back to the facetious statements I made, I think you’re best off with a specialized model trained on data that’s very relevant to the task you want to solve.

Your second-best bet is to start with a foundation model that understands the broad space and fine-tune it, again on data that’s specialized to what you want to solve.
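
The evaluation Surya describes, checking whether accuracy holds up once the frequently seen cases are removed, can be mimicked on any benchmark by splitting on training-set frequency. A schematic sketch with made-up data and column names (not the actual analysis he refers to):

```python
import pandas as pd

# Hypothetical benchmark: one row per protein-ligand complex, with each method's
# accuracy already computed, plus how often that ligand appears in the training
# corpus (e.g. the Protein Data Bank).
results = pd.DataFrame({
    "ligand":            ["ATP", "NAD", "XYZ-1", "XYZ-2"],
    "train_occurrences": [3251,  1870,  2,       0],
    "ml_model_correct":  [1,     1,     0,       0],
    "physics_correct":   [1,     0,     1,       1],
})

common = results["train_occurrences"] >= 100      # ligands the model saw many times in training
for name, subset in [("common ligands", results[common]),
                     ("rare/unseen ligands", results[~common])]:
    print(name,
          "ML accuracy:", subset["ml_model_correct"].mean(),
          "physics accuracy:", subset["physics_correct"].mean())
```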

Vijay Pande: Well, I’m curious to double click on this, because what counts as extrapolation versus interpolation is interesting to think about. It’s a very interesting question.

We’re both trained as physicists, so a classic example from physics is Newton studying an apple falling from the tree. Apocryphally, perhaps. And then from that you get F equals ma, and you get planets orbiting the sun.

Surya Ganguli: Yes, exactly. 

Vijay Pande: And you might say, oh, there’s a huge extrapolation from apples to planets. But actually it’s the same latent space of F equals ma and all that stuff. Is that an extrapolation or is that an interpolation? 

Surya Ganguli: It’s finding the right latent space and interpolating in the correct latent space.

Vijay Pande: So that’s the thing is that if you have the right latent space, what might seem like an extrapolation from the outside—apples to planets—may actually be no extrapolation.

Surya Ganguli: Yeah. There are theories about this in the ML world, where these language models seem to be able to solve an endless number of tasks.

So the theory for how this is possible is that maybe the space of tasks isn’t that complicated. Maybe there’s an underlying finite set of skills you need, and any new task is a combination of that finite set of skills. So in drug discovery, it’s like: what’s the latent space for predicting properties of a protein?

What are the sub-problems you really need to solve? And how do you combine them in different ways for different proteins? And so I think that is really important to understanding why things succeed or fail.

Bowen Liu: Yeah. Proteins evolved, right? So they’re modular and so forth. Small molecules are the outcome of complicated synthesis pathways. They were partially evolved through metabolism, but there are all sorts of other aspects of chemistry that didn’t evolve through modular proteins.

Vijay Pande: Exactly. And catalysts and so forth.

Bowen Liu: Yeah, exactly. So it’s complicated, which is a challenge, right? With protein sequences and biological sequences, you can actually do self-supervised learning, because there is this complicated generative process with evolutionary pressure that you can learn from.

There’s no equivalent for small molecules right now. Unless you’re kind of looking at metabolites. 

Vijay Pande: So to review: we’re trying to understand the biology of proteins. We’re trying to find the right target. Was it something like 80 percent of drugs fail in Phase 2 or 3 trials? And that’s not because they’re toxic. It’s because we screwed up the biology.

So understanding the biology is a big deal. So how about where we are for AI for understanding biology for targets and so on? 

Bowen Liu: You can look at it at the level of a cell, or at the level of a human. At the level of the cell, I think quite a few folks are looking at using perturbational studies: taking a cell that ideally represents or captures some aspect of disease, then perturbing it genetically and seeing if you can change the state of the cell, to help us learn whether a particular protein or gene actually affects the disease or phenotype.

Vijay Pande: So the key idea there is that the cellular phenotype could recapitulate the disease phenotype sufficiently to predict therapeutic interventions. 

Bowen Liu: Exactly. And then using microscopy to capture high-content imaging, and then training ML models on that.

Vijay Pande: But why would we think the cellular phenotype would be enough? 

Bowen Liu: Obviously a cell is not a human. But I think for a lot of biology, if you design the in vitro cell model in a good enough way, it can recapitulate a lot of key aspects of the disease.

Vijay Pande: I guess that’s what we’re seeing in the latent spaces that come out of these models.

Surya Ganguli: Yeah. For example, there’s this foundation model for cell biology where they take 36 million single-cell RNA gene expression patterns. And then they learn an autoencoder—again, self-supervised learning, right?

And they can get an embedding space for all of cell biology. They can even hold out species, put them into the embedding, and they make sense, right? And then you can ask: how do drugs move you in the latent space? How do different diseases change you in the latent space? Can you try to control the latent space and design drugs that control it?

I think it’s incredibly exciting. It’s wild. 
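
A minimal sketch of the kind of model Surya is describing: an autoencoder that compresses a single-cell expression profile into a low-dimensional embedding, trained self-supervised by reconstruction. The sizes and data here are toy placeholders; the real foundation models he mentions are far larger and trained on tens of millions of cells.

```python
import torch
import torch.nn as nn

N_GENES, LATENT = 2000, 32          # toy sizes; real models use ~20k genes and bigger latents

class CellAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_GENES, 256), nn.ReLU(), nn.Linear(256, LATENT))
        self.decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, N_GENES))

    def forward(self, x):
        z = self.encoder(x)          # the cell's coordinates in the learned latent space
        return self.decoder(z), z

model = CellAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

expression = torch.rand(512, N_GENES)                    # placeholder for normalized scRNA-seq profiles
for step in range(100):
    recon, z = model(expression)
    loss = nn.functional.mse_loss(recon, expression)     # self-supervised: reconstruct the input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Once trained, questions become geometric: where does a drug-treated cell land,
# and in which direction did the drug or disease move it? For example (hypothetical tensors):
# delta = model.encoder(treated_cells).mean(0) - model.encoder(control_cells).mean(0)
```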

Vijay Pande: And to some degree that is understanding biology, right? If you have the right latent space, you basically have understood it. 

Surya Ganguli: Exactly. And the whole thing about latent spaces is they enable transparency and control, right?

Vijay Pande: What do you mean by transparency? What do you mean by control? 

Surya Ganguli: So there’s a beautiful thing about, say, variational autoencoders: we actually have some theory on why they do something called disentangling. Take a more familiar setting, like faces. A face can be happy or sad, have glasses or not. And you can learn an autoencoder that puts faces into a latent space, and you can find interpretable directions. If I move in this way, I can turn a frown upside down, right? And make you smile. Or if I move in another direction, I can put on glasses.

So if we can learn these disentangled latent spaces for biology, we can find interpretable directions that move you in desirable directions or undesirable directions. And then we can design drugs to move you in that space. So I think this disentangling of biology would be fantastic.

Vijay Pande: What we loved about physics is that math was such a natural sort of language and sort of latent space for these complex systems. And it was highly…

Surya Ganguli: Interpretable.

Vijay Pande: Very interpretable. But biology is so complicated that those latent spaces might not be quite so clean or so elegant from a mathematical point of view. But they could still exist and could still be learned, and still have the kind of predictive value we’d normally associate with something like physics.

Surya Ganguli: Exactly. And there’s a deep reason they have to exist, I think, because biological systems have survived for almost four billion years of evolution. They’ve tolerated all sorts of insults, competition, and so forth. So they’re extremely robust. Because they’re robust, their function can’t depend on all of the details.

That means there must be low-dimensional structure that controls their function, right? And so I think studying systems that have a function, which doesn’t exist in physics, gives you another handle on underlying simplicity that can be exploited.

Vijay Pande: And the second aspect is that life on Earth is evolvable. That evolvability is where the modularity and robustness come in. I think that’s going to go hand in hand with those latent spaces.

Surya Ganguli: Absolutely. 

Vijay Pande: Okay, so let’s say we figure out our target. AI has accelerated or made the undruggable druggable. We’re heading into the clinic. How does AI help that? What’s the role there? 

Surya Ganguli: In clinical trials, for example, the numbers are quite dismal, like 80 percent of clinical trials just fail to meet enrollment targets. So the main problems are poor patient recruitment and retention. 

For example, you can start to use AI to select patients. And a key issue there will be to limit patient heterogeneity. A lot of drugs work differently in different patients with different genetic backgrounds, with different biomarkers, and so forth. So you could imagine, for example, AI systems that search EMR records, that search biomarkers, and match them to clinical trial databases to find the optimal patient population for each clinical trial. And that will improve success rates for the clinical trial. 

Of course you want interpretability of these AI systems because you’re going to have to explain to FDA regulators why you chose the patients you’re choosing, and you’ll have to use the same selection process when you decide: am I going to assign a drug to a certain patient or not?

And then in terms of retention, you can imagine wearable devices or other things to make adherence easier and automatic and so forth. So I think there’s a lot of work on trying to make these clinical trials less inefficient in that way. 
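
To make the patient-selection idea slightly more concrete, here is a schematic sketch of matching simplified EMR-style records against trial eligibility criteria. All field names, codes, and thresholds are hypothetical; a real system would involve far richer criteria, NLP over unstructured notes, and regulatory-grade auditability.

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:                 # hypothetical, simplified EMR-derived record
    patient_id: str
    age: int
    diagnosis_codes: set[str]
    biomarker_level: float

# Hypothetical eligibility criteria for one trial.
def eligible(p: PatientRecord) -> bool:
    return (
        18 <= p.age <= 75
        and "E11" in p.diagnosis_codes        # e.g. an ICD-10 code for the target indication
        and p.biomarker_level >= 2.5          # limit heterogeneity via a biomarker cutoff
    )

patients = [
    PatientRecord("p001", 54, {"E11", "I10"}, 3.1),
    PatientRecord("p002", 81, {"E11"}, 4.0),
    PatientRecord("p003", 47, {"I10"}, 2.9),
]
matches = [p.patient_id for p in patients if eligible(p)]
print(matches)   # -> ['p001']
```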

Vijay Pande: Yeah. I mean one of the things for me that’s always been a fantasy too is to be able to predict the outcomes of trials. And so if hopefully we’re unraveling the biology, we’ll be able to predict which trials may have challenges or not. 

And think about—trials are so expensive and 80 percent fail. If we could even just go from 20 percent succeeding to 30 percent succeeding…

Surya Ganguli: Exactly. 

Vijay Pande: I mean, that’s enormous. A 50 percent increase in drugs, that’s a dramatic change.

Surya Ganguli: And that can attack the inverse Moore’s law or Eroom’s law of exponential decay in successful drugs per dollar spent. 

Bowen Liu: Because the worst case is you fail post-Phase 3, right? If you could fail early instead, that’s transformative.

Vijay Pande: And prioritize.

Bowen Liu: Yeah. 

Vijay Pande: Yeah. Well, then also, after that, we’re basically into real world evidence and personalized medicine. And in principle, the same models could be used for that…or how do you see that space playing out? 

Surya Ganguli: Yeah, personalized medicine is a field that has been coming every decade for the last four decades or so, right?

It’s almost here. It’s almost here. I mean, there’s very interesting academic work on iPSC technology, induced pluripotent stem cells, where you can, for example, take a person’s skin cells, turn them back into, say, heart cells, and get heart tissue.

And so if you want to figure out whether a drug will be cardiotoxic, you can apply it to human heart tissue from different patient populations and see how it affects them differentially, and so forth.

I think this is a little bit on the academic side. I don’t know if it’s ready for prime time in industry. But, as usual, personalized medicine is extremely seductive.

It’s seductive to me. I think it’s exciting. We have unprecedented input into human phenotypes now, partially because of iPSCs, gene expression assays, other biomarkers, and so forth. So I’m quite excited about it.

Vijay Pande: For me, when I think about this, if we put all the things we just talked about together—is there a mega foundation model that unravels biology, that lets us understand targets, lets us do trials better, lets us do real world evidence better, on into personalized medicine, such that it’s both the AI biologist and the AI doctor of sorts?

It feels very science fiction-y, but yet you can also see how we are on this trajectory, where those things are coming together. 

Surya Ganguli: Yeah, basically the dream would be: can you come up with a foundation model for human society from a health perspective? Like, can you embed humans in a latent space and really understand the space of possible actions, how drugs move different humans in different directions in the latent space? I think that’s the prize.

Vijay Pande: Yes, I think you could start with human biology, but then you probably have to put in behavior. 

Surya Ganguli: No, that’s part of the foundation model. 

Vijay Pande: Yeah. In time. But you could start with merely a digital human that would predict a clinical trial.

Surya Ganguli: Exactly. 

Vijay Pande: And that would be enough for also how it does in the real world and so on. That doesn’t sound that far off concerning the arc of what we’ve just been talking about. But there’s still a lot to build for sure. 

Bowen Liu: The data exists out there right now to build this model.

We are capturing a lot of modalities, both vertically and horizontally. Vertically, we can collect data all the way from our cells to our tissues to human-level data from wearables. But also horizontally, for each particular level in this biological hierarchy, we’re collecting all these different modalities: proteomics, gene expression data, all that kind of stuff.

The seductive idea is that maybe an LLM or some other foundation model can take all these different views of biology and combine them together to hopefully give us additional insights. 

Vijay Pande: That feels like the holy grail.

It’s going to be AI into human, into another AI, into another human for a while, because of all the different stages we’ve talked about. But gradually it will be less and less human and more and more AI. And it could easily take 10 years before we start putting these things into the clinic.

But I think it will happen, and a lot can happen in 10 years. And in that arc, when we’re on the other side of it, I think the dramatic thing is all the new worlds we can explore that we would never have imagined.

Surya Ganguli: And there’s the proverbial moving of the goalposts in AI, right? Now that we have GPT-4, we complain about how dumb it is, whereas two years ago we never would have predicted it would exist. High-class problems. These are high-class problems.

Vijay Pande: Yeah. Well, Surya, Bowen, thank you so much for joining us.

Surya Ganguli: Thanks for having us.
