AI Revolution

Where We Go From Here

Mira Murati and Martin Casado

This conversation is part of our AI Revolution series, which features some of the most impactful builders in the field of AI discussing and debating where we are, where we’re going, and the big open questions in AI. Find more content from our AI Revolution series on www.a16z.com/AIRevolution.

As CTO of OpenAI, Mira Murati oversaw the development and release of GPT-4 and ChatGPT. Here she tells Martin Casado the story behind the release of ChatGPT—and what it tells us about the future of AI and human-machine interactions.

  • [0:00] Mira’s background
  • [4:32] Math, physics, and AI
  • [7:07] Natural language interfaces
  • [10:28] OpenAI Roadmap
  • [17:24] Scaling laws
  • [20:25] One model to rule them all?
  • [23:11] The next 5-10 years

Mira’s background

Martin: I would love it if you gave us more about your background and what brought you to OpenAI. Bring us up to speed and we’ll go from there.

Mira: I was born in Albania, just after the fall of communism. It was a very interesting time in a very isolated country, one similar to North Korea today. I bring that up because it was central to my education and my focus on math and the sciences: there was a lot of emphasis on math and physics in post-communist Albania, while the humanities, like history and sociology, were a bit questionable. Reliable sources of information and truthfulness were hard to come by; everything was ambiguous. So I got very interested in math and the sciences, and that's what I pursued relentlessly. I'm still working, fundamentally, in mathematics.

Over time, my interests grew more from the theoretical space into actually building things and figuring out how to apply that knowledge to build stuff. I studied mechanical engineering and went on to work in aerospace as an engineer. I joined Tesla shortly after, where I spent a few years. Initially, I joined to work on the Model S dual motor. Then I went on to Model X from the early days of the initial design and eventually led the whole program to launch.

This is when I got very interested in applications of AI, specifically through Autopilot. I started thinking more about different applications of AI: what happens when you use AI and computer vision in a domain other than Autopilot?

After Tesla, I went on to work on augmented reality and virtual reality because I just wanted to get experience with different domains. I thought that it was the right time to work on spatial computing. Obviously, in retrospect, it was too early back then. But I learned a lot about the limitations of pushing this technology to the practicality of using it every day.

At this point, I started thinking more about what happens if you just focus on generality: forget competence in specific domains and just focus on generality. There were 2 places at the time that were laser-focused on this, OpenAI and DeepMind, and I was very drawn to OpenAI because of its mission.

I felt like there was not going to be a more important technology that we all build than AGI. I certainly did not have the same conviction about it then as I do now. But I thought that, fundamentally, if you're building intelligence as such a core, universal unit, it affects everything. What is more inspiring than elevating the collective intelligence of humanity?

Why so many AI leaders come from math and physics

Martin: Whenever I meet somebody who is a real influencer and has made major contributions to the space, they almost invariably have a physics or math background. This is very different from 15 years ago, when the leaders were engineers who came from electrical or mechanical engineering. But it does feel like there's something here, and I don't know if it's some quirk in the network or something more fundamental and systemic. Do you think this is the time for physicists to step up and contribute to computer science, or is it more of a coincidence?

Mira: One thing you draw from the theoretical space of math, and from the nature of math problems, is that you need to sit with a problem for a really long time and think about it. Sometimes you sleep, you wake up, and you have a new idea, and over the course of a few days or weeks you get to the final solution. It's not a quick reward, and sometimes it's not iterative. It's almost a different way of thinking: you're building an intuition, but also the discipline to sit with a problem and have faith that you're going to solve it. Over time, you build an intuition for which problem is the right one to actually work on.

Martin: Do you think it’s now more of a systems problem or more of an engineering problem? Or do you think that we still have a lot of pretty real science to unlock?

Mira: Both. The systems and engineering problems are massive as we deploy these technologies and try to scale them, make them more efficient, and make them easily accessible, so that you don't need to know the intricacies of ML in order to use them.

You can actually see the contrast between making these models available through an API and making the technology available through ChatGPT. It's fundamentally the same technology, maybe with one small difference: reinforcement learning from human feedback for ChatGPT. But the reaction, the ability to grab people's imagination and get them to use the technology every day, is totally different.

Natural language interfaces

Martin: I also think the API for ChatGPT is such an interesting thing. I program against these things myself for fun, and it always feels like, whenever I'm using one of these models in a program, I'm wrapping a supercomputer with an abacus. The code itself seems so flimsy compared to the model it's wrapping. Sometimes I'm like, "I'm just going to give the model a keyboard and a mouse and let it do the programming." Then the API is going to be English: I'll just tell it what to do and it'll do all the programming. I'm curious, as you design things like ChatGPT, do you expect that over time the actual interface will be natural language, or do you think there's still a big role for programs?

Mira: Programming is becoming less abstract, in the sense that we can now talk to computers in high-bandwidth natural language. But maybe another vector is that the technology is helping us understand how to collaborate with it, versus program it. The layer of programming is becoming easier and more accessible because you can program things in natural language. But there is also this other side, which we've seen with ChatGPT: you can collaborate with the model as if it were a companion, a partner, or a co-worker.
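
(As an aside for readers: here is a minimal sketch of what "programming in natural language" against an API looks like in practice, using the OpenAI Python SDK. The model name, prompt, and task are illustrative, not from the conversation.)

```python
# A minimal sketch: the "program" is mostly an English instruction, and the
# traditional code is a thin wrapper around the model. Assumes the OpenAI
# Python SDK (v1+) and an OPENAI_API_KEY in the environment; the model name
# and task below are illustrative only.
from openai import OpenAI

client = OpenAI()

instruction = (
    "Read the support ticket and reply with a JSON object containing "
    "'category' and 'urgency' fields."
)
ticket = "My March invoice was charged twice. Please fix this ASAP."

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": instruction},  # the English "program"
        {"role": "user", "content": ticket},
    ],
)
print(response.choices[0].message.content)
```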

Martin: It will be very interesting to see what happens over time. You've made the decision to have an API, but you don't have an API to a co-worker; you talk to a co-worker. It could be the case that, over time, these things evolve into speaking natural language. Or do you think there will always be a component that is a finite state machine, a traditional computer?

Mira: Right now is an inflection point where we’re redefining how we interact with digital information and it’s through the form of these AI systems that we collaborate with. Maybe we have several of them and maybe they all have different competencies. Maybe we have a general one that follows us around everywhere that knows everything about my context, what I’ve been up to today, what my goals are in life, at work, and guides me through and coaches me and so on. You can imagine that being super, super powerful.

Right now we are at this inflection point of redefining what this looks like. We don’t know exactly what the future looks like and we are trying to make these tools and the technology available to a lot of other people so they can experiment and we can see what happens. It’s a strategy that we’ve been using from the beginning.

With ChatGPT, the week before launch, we were worried that it wasn't good enough. We all saw what happened: we put it out there, and people told us it was good enough to discover new use cases. You see all these emergent use cases that I know you've written about. That's what happens when you make this stuff accessible and easy to use and put it in the hands of everyone.

OpenAI Roadmap

Martin: This leads to my next question. If you invent cold fusion, you can just give people electrical outlets and they'll use the energy. But when it comes to AI, people don't really know how to think about it yet. There has to be some guidance; you have to make some choices. You're at OpenAI and you have to decide what to work on next. Could you walk through that decision process: how do you decide what to work on, what to focus on, what to release, and how to position it?

Mira: If you consider how ChatGPT was born, it was not born as a product that we wanted to put out. In fact, its real roots go back more than 5 years, to when we were thinking about how to make a safe AI system. You don't necessarily want humans to write the goal functions by hand, because proxies for complex goal functions can go wrong, and getting them wrong could be very dangerous.

This is where reinforcement learning from human feedback was developed. What we were really trying to achieve was to align the AI system with human values by having it receive human feedback: based on that feedback, it would be more likely to do the right thing and less likely to do the thing you don't want it to do. Then, after we developed GPT-3 and put it out there in the API, we had the first chance to make safety research practical in the real world. This happened through instruction-following models.

We used this method to take prompts from customers using the API, had contractors generate feedback for the model to learn from, and fine-tuned the model on this data to build instruction-following models. These were much more likely to follow the intent of the user and do the thing you actually want. This was very powerful, because AI safety was no longer just a theoretical concept that you sit around and talk about. It became practical: these systems are going into the real world now, so how do you integrate safety into them?
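
(For the technically curious: the pipeline Mira describes includes a reward-modeling step, in which a model learns to score responses so that the human-preferred one scores higher. A schematic sketch of that pairwise loss, in the style of the InstructGPT line of work, might look like the following; this is an illustration, not OpenAI's code.)

```python
# Schematic sketch of RLHF's reward-modeling step: contractors compare two
# candidate responses to the same prompt, and a reward model is trained so
# the preferred response receives the higher scalar score. Not OpenAI's
# actual implementation.
import torch
import torch.nn.functional as F

def reward_model_loss(score_chosen: torch.Tensor,
                      score_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: minimized when the chosen response's
    reward exceeds the rejected response's reward."""
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Toy example: scalar rewards a (hypothetical) reward network assigned to
# preferred and dispreferred responses for two prompts.
chosen = torch.tensor([1.3, 0.2])
rejected = torch.tensor([0.4, 0.9])
print(reward_model_loss(chosen, rejected))  # drives training of the reward model
```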

Obviously, large language models give us a great representation of the concepts and ideas of the real world. But on the output front, there are a lot of issues, and one of the biggest is obviously hallucinations. We have been studying hallucinations and truthfulness: how do you get these models to express uncertainty?

The precursor to ChatGPT was actually another project that we called WebGPT, which used retrieval to get information and cite sources. That project eventually turned into ChatGPT because we thought dialogue was really special: it allows you to ask questions, correct the other party, and express uncertainty.
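
(A rough sketch of the retrieve-then-cite pattern Mira attributes to WebGPT follows. The search function here is a stand-in stub; WebGPT actually drove a text-based web browser, which is not reproduced.)

```python
# Retrieve-then-cite, sketched: fetch passages with sources, then ask the
# model to answer only from those passages and cite them. The retriever is
# a stub; the model name and data are illustrative.
from openai import OpenAI

client = OpenAI()

def search(query: str) -> list[dict]:
    """Stand-in retriever: a real system would query a search engine or
    vector index and return passages paired with their sources."""
    return [
        {"source": "https://example.com/some-page",
         "text": "An example passage relevant to the query..."},
    ]

def answer_with_citations(question: str) -> str:
    passages = search(question)
    context = "\n".join(f"[{i + 1}] ({p['source']}) {p['text']}"
                        for i, p in enumerate(passages))
    prompt = (
        "Answer using only the numbered passages below, citing them "
        f"like [1]. Say so if they are insufficient.\n\n{context}\n\n"
        f"Question: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```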

Martin: And keep finding the error because you’re interacting, and so…

Mira: Exactly. There is this interaction, and you can get to a deeper truth. We started going down this path, at the time with GPT-3 and then GPT-3.5, and we were very excited about it from a safety perspective. But one thing people forget is that, at this time, we had already trained GPT-4. Internally at OpenAI, we were very excited about GPT-4 and had put ChatGPT in the rearview mirror. Then we decided, "Okay, we're going to take 6 months to focus on the alignment and safety of GPT-4," and we started thinking about things we could do. One of the main ones was to put ChatGPT in the hands of researchers who could give us feedback, since we had this dialogue modality. The original intent was to get feedback from researchers and use it to make GPT-4 more aligned, safer, more robust, and more reliable.

Martin: Just for clarity, when you say alignment and safety, do you include in that the model's being correct and doing what you want? Or do you mean safety as in actually protecting from some sort of harm?

Mira: By alignment, I generally mean that it aligns with the user's intent, so it does exactly the thing you want it to do. But safety includes other things as well, like misuse, where the user is intentionally trying to use the model to create harmful outputs. With ChatGPT, we're trying to make the model more likely to do the thing you want it to do, to make it more aligned. We also wanted to figure out the issue of hallucinations, which is obviously an extremely hard problem.

I do think that with this method of reinforcement learning from human feedback, maybe that is all we need if we push hard on it.

Martin: So, there was no grand plan? It was literally, “What do we need to do to get to AGI?” And it’s just one step after the other.

Mira: That’s right, yes—and all the little decisions that you make along the way. Maybe what made it more likely to happen is the fact that we did make a strategic decision a couple of years ago to pursue a product. We did this because we thought it would not be possible to just sit in a lab and develop these things in a vacuum without feedback from users from the real world. That was the hypothesis. I think that helped us along the way to make some of these decisions and build the underlying infrastructure so we could eventually deploy things like ChatGPT.

Scaling laws

Martin: I would love it if you riffed on scaling laws. I think this is the big question everybody has. The pace of progress has been phenomenal, and you would love to think that the graph always does this. [gestures up and to the right] But the history of AI seems to be that you hit diminishing returns at some point; it's not parametric, it kind of tapers off. From your standpoint, probably the most informed vantage point in the entire industry, do you think the scaling laws are going to hold and we're going to continue to see advancements, or are we heading into diminishing returns?

Mira: There isn’t any evidence that we will not get much better and much more capable models as we continue to scale them across the axes of data and compute. Whether that takes you all the way to AGI, that’s a different question. There are probably some other breakthroughs and advancements needed along the way. There’s still a long way to go in the scaling laws and to really gather a lot of benefits from these larger models.

Martin: How do you define AGI?

Mira: In our OpenAI charter, we define it as a computer system that is able to autonomously perform the majority of intellectual work.

Martin: I was at lunch and Robert Nishihara from Anyscale was there. He asked what I'd call a Robert Nishihara question, which I thought was actually a very good characterization. He said, "You've got a continuum between a computer and Einstein. You go from a computer to a cat, from a cat to an average human, and from an average human to Einstein." Then he asked, "Where are we on the continuum? What problem will be solved?"

The consensus was that we know how to go from a cat to an average human. We don't know how to go from a computer to a cat, because that's the general perception problem; we're very close, but we're not quite there yet. And we don't really know how to do Einstein, which is set-to-set reasoning.

Mira: With fine-tuning you can get a lot, but in general, I think we're at intern level at most tasks. The issue is reliability: you can't fully rely on the system to do the thing you want it to do all the time, and for a lot of tasks, it never gets there. How do you increase that reliability over time and then, obviously, expand the emergent capabilities, the new things that these models can do?

I think that it’s important to pay attention to these emergent capabilities, even if they’re highly unreliable. Especially for people that are building companies today, you really want to think about, “What’s somewhat possible today? What do you see glimpses of today?” Very quickly these models could become reliable.

One model to rule them all?

Martin: I'm going to ask you in just a second to prognosticate on what that looks like. But first, very selfishly, I've got a question about how you think the economics of this are going to pencil out. I'll tell you what it reminds me of: the silicon industry. I remember in the 90s, when you bought a computer, there were all these weird co-processors. "Here's string matching, here's floating point, here's crypto." And all of them got consumed into the CPU.

It turns out generality was very powerful, and that created a certain type of economy, one where you had Intel and AMD and it all went in there. Of course, it costs a lot of money to build these chips.

So you can imagine 2 futures. There's one future where generality is so powerful that, over time, the large models basically consume all functionality. Then there's another future where there's a whole bunch of models, things fragment, and there are different points in the design space. Do you have a sense of which it is: OpenAI and nobody, or everybody?

Mira: It depends on what you’re trying to do. Obviously, the trajectory is these AI systems will be doing more and more of the work that we’re doing. They’ll be able to operate autonomously, but we will need to provide direction and guidance and oversight. But I don’t want to do a lot of the repetitive work that I have to do every day. I want to focus on other things. Maybe we don’t have to work 10, 12 hours a day, and maybe we can work less and achieve even higher output. That’s what I’m hoping for. In terms of how this works out with the platform, you can see even today that we make a lot of models available through our API, from the very small models to our frontier models.

People don't always need the most powerful, most capable models. Sometimes they just need the model that fits their specific use case and is far more economical. I think there's going to be a range. But yes, in terms of how we're imagining the platform play, we definitely want people to build on top of our models, and we want to give them tools to make that easy and give them more and more access and control. You can bring your data and customize these models. You can really focus on the layer beyond the model and on defining the product, which is actually really, really hard. There is a lot of focus right now on building more models, but building good products on top of these models is incredibly difficult.
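
(One concrete reading of "the model that fits the use case" is routing requests by difficulty. A toy sketch follows, with illustrative model names and an intentionally crude heuristic.)

```python
# Toy sketch of fit-for-purpose model selection: send cheap, simple requests
# to a small model and hard ones to a frontier model. The heuristic and
# model names are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

def pick_model(prompt: str) -> str:
    # Crude stand-in heuristic: long or reasoning-heavy prompts get the
    # bigger model; real routers use classifiers or cost/quality targets.
    hard = len(prompt) > 500 or "step by step" in prompt.lower()
    return "gpt-4" if hard else "gpt-3.5-turbo"

def complete(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=pick_model(prompt),
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```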

The next 5-10 years

Martin: We only have a couple more minutes, sadly. I would love for you to prognosticate a little bit on where you think this is all going in 3, 5, or 10 years.

Mira: I think that the foundation models today have this great representation of the world in text. We’re adding other modalities, like images and video and various other things, so these models can get a more comprehensive sense of the world around us, similar to how we understand and observe the world. The world is not just in text, it’s also in images. I think we will certainly expand in that direction and we’ll have these bigger models that will have all these modalities in the pre-training part of the work. We really want to get these pre-trained models to understand the world like we do.

Then there is the output part of the model, where we introduce reinforcement learning from human feedback. We want the model to actually do the thing we ask it to do, and we want that to be reliable. There is a ton of work that needs to happen here, and maybe introducing browsing, so you can get fresh information and cite sources, helps solve hallucinations. I don't think that's impossible. I think that's achievable.

On the product side, I think we want to put this all together into a collection of agents that people collaborate with, and provide a platform that people can build on top of. If you extrapolate really far out, these models are going to be incredibly, incredibly powerful. With that, obviously, comes the fear of very powerful models that are misaligned with our intentions. A huge challenge is superalignment, which is a difficult technical problem, and we've assembled an entire team at OpenAI to focus on just this.

Martin: So, very, very, very last question. Are you a doomer, an accelerationist, or something else?

Mira: Let me say something else. [laughs]

Martin: All right, perfect. Thank you so much, Mira. Fantastic. Thank you, everybody.

Mira: Thank you.