Unlocking AI’s Future: Alexandr Wang on the Power of Frontier Data

David George and Alexandr Wang

In this conversation with a16z general partner David George, Scale AI founder and CEO Alexandr Wang discusses the three pillars of AI—models, compute, and data—and how creating abundant data is core to the evolution of gen AI. With Scale’s work across enterprise, automotive, and the public sector, Alex is also building the critical infrastructure that will allow any organization to use their proprietary data to build bespoke gen AI applications. In addition to talking about frontier data, Alex also shares his learnings from the growth of Scale, his approach to leadership, and what he thinks growth-stage founder/CEOs tend to get wrong about hiring. 

  • [00:00:58] How frontier data will change gen AI
  • [00:08:47] Are big tech companies over-investing in AI?
  • [00:14:39] Where the best AI businesses will thrive
  • [00:17:05] How enterprise businesses are approaching AI adoption
  • [00:19:50] What does the next phase of gen AI products look like?
  • [00:23:23] Alex’s approach to scaling Scale
  • [00:25:36] The founder fallacy
  • [00:30:12] MEI and how Alex views talent acquisition

How frontier data will change gen AI

David: We’re very excited today to have Alex Wang, the founder and CEO of Scale AI, with us. Alex, thanks for being here.

Alex: Thanks for having me.

David: I always love talking to you and I always learn a ton. But maybe to start, why don’t you just tell us a little bit about what you’re building at Scale AI and then we’ll dive in.

Alex: At Scale we’re building the data foundry for AI. Taking a step back, AI boils down to three pillars: all the progress we’ve seen has come from compute, data, and algorithms. Across these three pillars, compute has been powered by folks like NVIDIA, the algorithmic advancements have been led by the large labs like OpenAI and others, and data is fueled by Scale. 

Our goal is to produce the frontier data necessary to fuel frontier-level advancements in partnership with all the large labs, as well as enable every enterprise and government to make use of their own proprietary data to fuel their frontier AI development.

David: So on this topic of frontier data, practically, like, how do you actually get it?

Alex: I think this will be one of the great human projects of our time. The only model that we have in the world for the level of intelligence that we seek to create is humans, is humanity. And so, the production of frontier data looks a lot like a marriage between human experts and technical, algorithmic techniques around the models to produce huge amounts of this kind of data. 

And by the way, all the data that we’ve produced today—the internet—has looked like that too. The internet, in many ways, is this collaboration between machines and humans to produce large amounts of content and data. You know, it’ll look like the internet on steroids. Like, what happens if the internet, instead of just being a human entertainment device with this byproduct of data generation, what if it were just this large-scale data generation experiment?

David: So you have a very unique perspective into the state of the industry. How would you characterize the state of models, language models right now? I’d love to sort of get into things like market structure, but just what’s the state of the industry right now?

Alex: Yeah. I think we’re closing in on the end of maybe phase two of language model development. Phase one was the early years of almost pure research. Its hallmarks are the original transformer paper and the original small-scale experiments on GPTs. Everything leading up, probably until GPT-3, was this phase-one research, very focused on small-scale tinkering and algorithmic advancements.

Phase two, which is sort of GPT-3 until now, is really the initial scaling phase. We had GPT-3 that worked pretty well. Then OpenAI really scaled up these models to GPT-4 and beyond. And then many companies—you know, Google, Anthropic, Meta, xAI—have also joined this race to scale up these models to incredible capabilities. 

I think for the past two-ish years, or let’s say three years, it’s almost been more about execution than anything. It’s a lot of engineering. How do you actually have large-scale training work well? How do you make sure there aren’t weird bugs in your code? How do you set up the larger clusters? There’s been a lot of executional work to get to where we are now, where we have a number of very advanced models. 

I think we’re entering a phase where the research is going to start mattering a lot more. There will be a lot more divergence between the labs in terms of which research directions they choose to explore and which ones ultimately have breakthroughs at various times. And so, it’s an exciting alternation between a phase of raw execution and a more innovation-powered cycle.

David: They’ve kind of gotten to a point where, I wouldn’t say there’s abundant compute, but they’ve had enough compute to get to the models where they’re at. That’s not a constraint, necessarily. And all of the frontier labs have kind of exhausted as much data as they possibly can. The next thing will be advancing the ball on the data side. Is that fair?

Alex: Yeah. If you look at the pillars, compute, we’re obviously continuing to scale up the training clusters. So I think that that direction is pretty clear. On the algorithms, I think there has to be a lot of innovation there. Frankly, I think that’s where a lot of the labs are really working hard, on the pure research of that. And then data, you kind of alluded to it, we’ve kind of run out of all the easily accessible and easily available data out there, you know, and…

David: Common Crawl is all done, everyone’s had the same access to it.

Alex: Yeah, exactly. And so, a lot of people are talking about the data wall: we’re kind of hitting this wall where we’ve leveraged all the publicly-available data. One of the hallmarks of this next phase is actually going to be data production. What is the method that each of these labs is going to use to actually generate the data necessary to get you to the next levels of intelligence? And how do we get towards data abundance? I think this is going to require a number of fields of advanced work and advanced study.

The first is really pushing on the complexity of the data. So moving towards frontier data. For a lot of the capabilities that we want to build into the models, the biggest blocker is actually a lack of data. For example, “agents” has been the buzzword for the past two years, and basically, no agent really works. Well, it turns out there’s just no agent data on the internet. There’s no pool of really valuable agent data that’s just sitting around anywhere. And so we have to figure out how to produce really high quality data.

David: Give an example of what you would have to produce?

Alex: We actually have some work coming out on this soon, which demonstrates that, right now, if you look at all the frontier models, they suck at composing tools. If they have to use one tool, and then another tool… Let’s say they have to look something up, then write a little Python script, and then chart something. If they use multiple tools in a row, they’re just really, really bad at it. And that’s something that’s actually very natural for humans to do.

David: Yeah. But it’s not captured anywhere, that’s the point, right?

Alex: Exactly.

David: You can’t actually capture somebody going from one window to another into a different application, and then feed that to the model so it learns, right?

Alex: Exactly. So these sort of reasoning chains… When humans are solving complex problems, we naturally will use a bunch of tools, we’ll think about things, we’ll reason through what needs to happen next, we’ll hit errors and failures, and then we’ll go back and reconsider. A lot of these reasoning chains, these agentic chains, the data just doesn’t exist today. That’s an example of something that needs to be produced.
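To make the idea concrete, here is a purely hypothetical sketch (not Scale’s actual schema; all names here are invented for illustration) of what a single agentic training record might capture: the tools invoked in sequence, the intermediate reasoning, and a failure that gets revisited, just as humans naturally do.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step in an agentic reasoning chain."""
    thought: str          # the reasoning before acting
    tool: str             # which tool was invoked (search, python, chart, ...)
    tool_input: str
    tool_output: str
    error: bool = False   # whether the step failed and forced a retry

@dataclass
class AgentTrace:
    """A hypothetical training record for multi-tool agent behavior."""
    task: str
    steps: list = field(default_factory=list)

    def tools_used(self):
        return [s.tool for s in self.steps]

# A toy trace: look something up, write a small script, chart the result,
# including an error that is reconsidered and fixed along the way.
trace = AgentTrace(task="Plot revenue growth for ACME Corp")
trace.steps = [
    Step("Need the raw numbers first.", "search", "ACME revenue 2020-2023", "[...]"),
    Step("Compute year-over-year growth.", "python", "growth.py", "NameError", error=True),
    Step("Fix the bug and rerun.", "python", "growth.py", "[0.12, 0.18, 0.25]"),
    Step("Now visualize it.", "chart", "bar(growth)", "chart.png"),
]
print(trace.tools_used())  # ['search', 'python', 'python', 'chart']
```

The point of a record like this is that the full chain, including the dead end and the recovery, is the training signal; a transcript of only the final answer would miss exactly the behavior agents currently lack.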

But taking a big step back, what needs to happen on data? First is increasing data complexity, so moving towards frontier data. The second is just data abundance, increasing the data production, so…

David: Capturing more of what humans actually do in the field of work?

Alex: Yeah. Both capturing more of what humans do, and investing into things like synthetic data, hybrid data—so utilizing synthetic data, but having humans be a part of that loop so that you can generate much more high-quality data. With chips, we talk a lot about chip foundries and how we ensure that we have enough means of production of chips, and the same thing is true for data. We need to have data foundries and the ability to generate huge amounts of data to fuel the training of these models.
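As an illustrative sketch of the hybrid loop Alex describes (every function here is a hypothetical stand-in, not a real pipeline): a model generates candidate examples in bulk, and a human reviewer in the loop keeps only the ones that clear a quality bar.

```python
import random

def generate_candidates(n, seed=0):
    """Stand-in for a model producing synthetic examples of varying quality."""
    rng = random.Random(seed)
    return [{"text": f"example-{i}", "quality": rng.random()} for i in range(n)]

def human_review(example, threshold=0.8):
    """Stand-in for a human expert accepting only high-quality examples."""
    return example["quality"] >= threshold

def hybrid_data_loop(target_size, batch=100):
    """Synthetic generation with a human filter in the loop:
    keep generating batches until enough examples pass review."""
    accepted, seed = [], 0
    while len(accepted) < target_size:
        for ex in generate_candidates(batch, seed=seed):
            if human_review(ex):
                accepted.append(ex)
        seed += 1
    return accepted[:target_size]

dataset = hybrid_data_loop(50)
print(len(dataset))  # 50
```

The design choice the sketch illustrates is the division of labor: the model supplies volume cheaply, while the human pass guarantees that everything entering the dataset meets the quality bar.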

The last leg of the stool is measurement of the model and ensuring that we actually have… You know, I think for a while, the industry has said, oh yeah, we just add a bunch more data and we see how good the model is, and we add a bunch more data and we see how good the model is. But we’re going to have to get pretty scientific about exactly what the model is not capable of today, and therefore, what exact kinds of data need to be added to improve the model’s performance.

Are big tech companies over-investing in AI?

David: How much of an advantage do the big tech companies have with their corpus of data versus the independent labs?

Alex: Well, there’s a lot of regulatory issues that they have with utilizing the data, their existing data corpuses. This is before all this generative-AI work, but at one point Meta did some research that utilized basically all the public Instagram photos along with their hashtags to train really good image-recognition algorithms. They had a lot of regulatory problems with that in Europe. It turned out to be a huge pain in the ass. So I think that’s one thing that’s difficult to reason through, which is to what degree from a regulatory perspective, particularly in Europe, these companies are going to be able to utilize their data advantages. I think that one’s kind of TBD.

The real way in which a lot of the large labs have dramatic advantages is just that they have very profitable businesses that can provide near infinite sources of capital for these AI efforts. That’s something that I’m watching pretty intently, or I’m very curious to see how it plays out.

David: There’s this whole question of whether the industry is over-investing. And if you listen to the earnings calls of the big tech companies, they’re like, look, our risk is under-investing, not over-investing. What do you make of that?

Alex: Put yourself in the shoes of, you know, Sundar Pichai or Mark Zuckerberg or Satya Nadella. To your point, if they really nail this AI thing, they could generate another trillion dollars of market cap probably very easily. If they really are ahead of the competition and they productize in a good way, a trillion dollars of market cap is kind of a no-brainer. And if they don’t invest the extra, whatever it is, 20 or 30 billion dollars of CapEx per year, and they miss out on that, there’s some real existential risk. All their businesses are potentially deeply disruptible by AI technology. 

The risk-reward for them is very obvious. So that’s the big picture thing. And then from a more tactical level, I think all of them are going to be able to pretty easily recoup their CapEx investments just by, worst case, making their core businesses more efficient and effective. 

David: Yeah, GPU utilization for Facebook advertising.

Alex: Yeah. If Facebook or Google make their advertising systems a little bit better, they can recoup billions of dollars just by…

David: Yeah, better performance.

Alex: Yeah, better performance there. Apple can easily recoup the investments if it drives through an upgrade cycle. I mean, these are things that I think are pretty clear.

David: Look, it’s generally great for the industry that they’re investing so much capital, because they also are in the business of renting this compute out, or at least in the case of Google and Microsoft they are.

Alex: And the models are making their way… Llama 3.1 is open source. And so, even the literal fruits of all the investment are becoming broadly accessible. The surplus generated from the open source in these models is kind of insane.

David: It’s insane. Okay. So that’s a great segue into market structure at the model layer. So what do you think actually happens? Are there the few players that we’ve all identified now, the handful, and they all compete? Do you think it’s a profitable business? What impact does open source have on the quality of the businesses? Take us a couple years ahead and give us your forecast.

Alex: We’ve seen, over even just the past year-and-a-half, the pricing for model inference fall dramatically, dramatically, dramatically.

David: Order of magnitude.

Alex: Yeah, two orders of magnitude over two years. And so, it’s this shocking thing: it turns out intelligence might be a commodity. I think this huge lack of pricing power on the pure model layer certainly indicates that renting models out on their own may not be the best long-term business. I think it’s likely to be a relatively mediocre long-term business.

David: Well, I guess it depends on the breakthrough thing, which is the earlier point, right, to the extent that someone actually has a durable breakthrough or multiple people have durable breakthroughs, then potentially the market structure’s different.

Alex: So two things. First, if Meta continues open sourcing, that puts a pretty strong cap as to the value that you can get from the model layer. And then second, if at least a handful of the labs are able to have similar performance over time, then that also dramatically changes the pricing equation. So we think that it’s not 100%, but chances are the pure model renting business is not the highest quality business. Where there are much higher quality businesses is going to be above and below.

David: Yeah, of course.

Alex: So below, I mean, NVIDIA is obviously an incredible business. But the clouds also have really great businesses too, because it turns out it’s pretty hard logistically to actually set up large clusters of GPUs. The cloud providers actually have pretty good margins when they rent out.

David: And the traditional data center business is very much a scale game. So they are massively benefited relative to smaller players.

Where the best AI businesses will thrive

Alex: Yeah, exactly. So I think of picks and shovels. If you’re under the model layer, there are great businesses there. If you’re above the model layer, if you’re building applications—ChatGPT is a great business. And a lot of the apps in the startup realm are actually working pretty well. I mean, none of them are quite as big as ChatGPT, obviously. But a lot of apps, if they nail the early product market fit, end up being pretty good businesses, great businesses as well. Because the value that they generate for customers—if they get the whole user experience correct—far exceeds the inference cost of the models. There’s some cool stuff here, right? Like, I think Anthropic’s launch of Artifacts in Claude…

David: Yeah, that’s cool.

Alex: …it’s like the first pin drop of this major theme: all the labs are going to be pushing much deeper product integrations to be able to drive higher quality businesses. I think we’re going to see a lot of iteration at the product layer. The sort of boring chatbots are not going to be the end product. That’s not the be-all, end-all.

David: It’d be a disappointing outcome.

Alex: Yeah, exactly. The product innovation cycle is very hard to predict. I mean, OpenAI was surprised by how good, or how popular, ChatGPT has been. It’s not super obvious to me, or anyone in the industry, frankly, what exact products are going to be the ones that hit and what’s going to provide the next legs of growth. But you have to believe that an OpenAI or an Anthropic can build great applications businesses for them to be long-term independent and sustainable.

David: And then it’s like, what drives competitive advantage? Obviously, you have the model, a tightly-integrated product on top of it, and then the good old-fashioned moats from there. Workflows, integrations, you know, all that stuff.

Alex: You can clearly see their thinking. I mean, both OpenAI and Anthropic hired chief product officers within, I don’t know, two months of each other.

How enterprise businesses are approaching AI adoption

David: Yeah, they’re figuring it out. You’ve got an application business with some really interesting customers. What are you hearing from enterprises as to how they’re actually putting this into place?

Alex: What we’ve seen is that there was a huge amount of excitement from the enterprise. A lot of enterprises were like, “Shit, we have to start doing something. We have to get ahead of this. We have to start experimenting with AI.” I think that that led them to this fast POC cycle where they’re like, “Okay. What are all the low-hanging-fruit ideas that we have?”

David: Go buy AI stuff.

Alex: And let’s go try all of it. Some of those things are good. Some of them aren’t. But regardless, it’s been this big frenzy. Far fewer of the POCs have made it to production than the industry overall expected. A lot of enterprises are looking around now, and the doomsday they thought might happen hasn’t really happened. AI has not fully terraformed and transformed most of the major industries. 

David: It’s sort of marginal stuff. It’s like efficiency gains and support, you know, and then some of the creative tasks and things like that.

Alex: Yeah, exactly.

David: Otherwise, it’s pretty light.

Alex: The thing that we think a lot about is: which of the AI improvements, transformations, and efforts we’re working on can actually meaningfully drive the stock price of the companies that we’re working with?

David: Oh, interesting.

Alex: That’s what we encourage all of our customers to really be thinking about, because at the end of the day, the potential is there. There’s latent potential for almost every enterprise to implement AI at a level that would meaningfully boost their stock price.

David: Mostly in the form of cost savings, efficiency gains.

Alex: Well, today in the form of cost savings, but then also much better customer experiences. In a lot of industries where there’s a lot more manual interaction with customers, you should be able to drive much better customer interactions if you have more standardization and are able to use more automation. Those would eventually translate into gains in market share relative to competitors. That’s what we’re pushing our customers towards, and I see it. Some of the CEOs that we work with, they’re all on board and they understand that it’s going to be a multi-year investment cycle. They might not see gains next quarter, but if they actually pull through the other side, they’re going to see massive transformations.

I think that a lot of the frenzy around small use cases and the more marginal use cases is good. I think it’s exciting. I think they should be doing it, but to me, that’s not what we’re all here to do.

What does the next phase of gen AI products look like?

David: Yeah. It’s very much like the application layer is phase one right now, which is: it’s coming. There’s some automation, but it’s largely chatbots. My hope as a startup investor is that over time, there’s a window that opens for the startups, where product innovation will help them to win and beat the incumbents. My partner Alex Rampell has this phrase, which is, is the startup going to get to distribution before the incumbent finds innovation? I think there’s an opportunity for it, but the tech is too early right now. I don’t know if you would agree with that, but…

Alex: I think the tech is too early for that. Again, because it’s mostly cost saving. I think if most of the benefit is on the cost-saving side, then that’s not really enough to disrupt large incumbents that have already pushed their way through all the costs of growth and distribution.

David: How valuable do you think the data is inside of enterprises? Like you’ve said, JP Morgan has whatever, 15 petabytes of data or something. I don’t remember what the number is, but is that overrated? How much of it is actually useful? Because today, most of that data has not given them some meaningful competitive advantage. Do you think that actually changes?

Alex: I think AI is the first time you could see that potentially change. Obviously there was the whole Big Data wave, but Big Data boils down to better analytics, which is helpful for business decision-making, but not deeply transformative.

David: It doesn’t massively change the way the products work.

Alex: Yeah, exactly. Whereas now, you actually can imagine some massive transformation in the way the products work. Let’s take any big bank. A lot of the valuable interactions between a user and a large bank like a JP Morgan or Morgan Stanley are human-driven, are people-driven. And, you know, they try their best to ensure that the quality of experience is very high across the board. Obviously with any large manual process, there’s only so much you can do to assure that. 

But all of your prior customer interactions and the way that your business has worked historically make up the only available data to be able to train models to do well at this particular task. If you think about wealth management, there’s very little in-distribution data of that on the internet that you could train a model off of.

David: So behind the walls, there’s actually quite a bit. It’s very rich.

Alex: Yeah, huge amounts of data. I think that a lot of the data is probably not super relevant to actually transforming your business, but some of the data is hyper-valuable. But enterprises have a lot of trouble and challenges around actually utilizing any amount of data that they have.

David: Right.

Alex: I mean, it’s poorly organized. It’s sort of all over the place. They pay consulting firms tens of millions of dollars, hundreds of millions of dollars to do these data migrations. And, you know, even after that…

David: No change of results.

Alex: Yeah. No change of results. So, I think it’s historically a very difficult place for enterprises to really drive transformation. In some ways, this is the race: are they going to be able to figure out how to utilize and leverage their data faster than, you know, some startup figures out how to…

David: Create a massively different product with a little subset of the data.

Alex: Yeah, exactly.

Alex’s approach to scaling Scale

David: Shifting gears to how you run your company: One of the things that you’ve talked about is a mistake that you made during the go-go times of 2020 and 2021 around hiring and this notion that in order to scale, you had to hire a ton. It’s something we saw with all of our portfolio companies. It was this war for talent: “Oh, we got to go hire. We got to go hire. We got to go hire.” So, what were the lessons that you learned through that process? And then, how have you changed how you’ve done things afterward?

Alex: So over the past few years, we’ve basically kept our headcount flat. The takeaway from this entire process is that it feels very logical that more people equals better results and more people equals more stuff being done. But rather paradoxically, I think if you have a very high-performing team and a very high-performing org, it’s nearly impossible to grow it dramatically without losing all of that high performance and all of the winning culture.

David: Yeah. Reducing the communication and coordination overhead actually increases productivity.

Alex: That’s definitely true. And I think it’s actually something even deeper, which is like, a very high-performing team of a certain size is almost like this very intricate sculpture in this interplay between all the people on the team. If you just add a bunch of people into that, even if the people are great, it just screws the whole thing up. No matter what, as you add people, you’re going to have regression to the mean. You know, if you observe companies that do scale headcount a lot, and that’s pretty core to their financial results, they acknowledge that regression. So if you think about scaling large sales teams, for example, you acknowledge that you’re going to have that mean regression. But you just operationalize so that you’re a little bit above the mean. If you’re able to do that, then the whole equation still works financially.

David: Yeah, I’d say sales is different than product then.

The founder fallacy

Alex: Yeah, totally. But our observation is that startups work because you have very high-performing teams, and you want to keep those high-performing teams intact as long as you possibly can. 

I think a common startup failure mode is that you have something that works, but everybody in the company is really junior. So then things are scaling, but all the wheels are kind of falling off. Your investors tell you, “Hey, you should hire some executives.” You go through these searches that are somehow uniquely soul-crushing every time. But you go through this, and if you’re great at it, it works half the time. 

So you go through the exec searches. You bring in an exec and then you give the exec a lot of rope. And your exec says, “Hey, we need to hire a massive team for us to hit our results.” And you’re like, “Yeah. I mean, I’m pretty experienced. You seem really experienced. Let’s do what you say.” And you let these big teams be built. The reality is, I think this almost always results in ruin.

This isn’t to say that you can’t hire executives from the outside, but I think what you need to do when you hire executives from the outside is ensure they really get steeped in how the company works. Before they make any major sweeping suggestions, they get into the rhythm and the operations of the company. Do they understand why the whole thing works in the first place? Why are the things that are working, working? Then they make thoughtful suggestions. Initially they take small steps, and you trust and verify each of these small steps. Eventually, maybe they can make more sweeping suggestions, but it should be at a point where they have a clear track record of making small steps that have been really beneficial.

David: That’s interesting and very tangible: start small when you hire a big executive. It’s a little bit counterintuitive, and it’s not the way that any of those executives want to go.

Alex: Yeah. I think there’s an exec fantasy that I’ve noticed. And by the way, I think executives are great people and they’re incredible. But there is a tendency for an executive fantasy, particularly for Silicon Valley companies with young founders and whatnot, which is, “Oh, I’m going to come in and I’m going to fix this fucker.” I shouldn’t say that. But, “I’m going to come in, I’m going to fix this whole thing. I’m going to make this a professional operation.” 

You’re recruiting teammates at the end of the day. You’re not recruiting some magic wand. You’re recruiting a teammate who you believe, over an extended period of time, is going to have great judgment in making repeated decisions about the business. This is where we’ve made mistakes: you’re not buying some magical bag of goods that is going to bring a magic formula into your business and all of a sudden make the whole thing work.

You know, on the flip side, there’s a founder fantasy. The founder fantasy, or the founder/CEO fantasy, is, “Oh, I’m going to just hire a bunch of incredible execs. They’re all going to be fucking pros. And then, I’m going to go…”

David: They’ll do the stuff I don’t want to do.

Alex: Yeah. “They’ll do all the stuff I don’t want to do. And I’m going to be able to sit back and just watch the cogs happen and watch the machine work.” That’s also extremely unrealistic because the flip side is also true. The reason that you are a good founder/CEO is because you make very good decisions over and over and over again, over an extended period of time. To pull yourself out of those decision-making loops would be kind of crazy.

David: That’s a pattern we’ve seen a lot, which is “I’m going to hire executives. I’m going to step back a bit.” And then there’s the “Oh, shit” realization where some big decisions go wrong and founders remember this is the point of them being there.

Alex: I think it can work if your industry is very stable, potentially.

David: Well, look at any public company when they change CEOs, and the stock price moves like, 2%. Oh, okay. Well, actually, it doesn’t really matter. Like, that is a cog, but that is very different from a high-growth startup that’s run by a founder.

Alex: Exactly. I think that a lot of startups and a lot of companies are valuable because of an innovation premium.

David: One hundred percent.

Alex: Investors believe that founder-led companies are going to out-innovate the market. So your job is to out-innovate the market. So you better be in the strategic decisions.

MEI and how Alex views talent acquisition

David: How about MEI? You recently rolled out this concept. I think half of my X feed was praising you—you know, it’s probably more than half. Some portion of my X feed was yelling at you. Talk about the concept. What are your observations of rolling it out so far?

Alex: So MEI, we rolled out this idea of merit, excellence, and intelligence. The basic idea is, in every role, we’re going to hire the best possible person, regardless of their demographics. We’re not going to do any sort of quota-based optimization of our workforce to meet certain demographic targets. That doesn’t mean we don’t care about diversity. We actually care about having diverse pipelines and diverse top-of-funnel for all of our roles. But at the end of the day, the best, most capable person for every job is going to be the one that we hire. It’s one of these things that was mildly controversial. But if we were to just take a big step back as to who companies should be hiring, I think it’s kind of obvious.

Companies should hire the most talented people. There’s obviously this big question of how much social responsibility companies have in what they do. My take is that I operate in a very competitive industry. Scale’s role is to help fuel artificial intelligence. It’s very important technology. We need incredibly smart people, the best people, to be able to accomplish this. Most people at Scale would say it was implicitly true; it wasn’t a departure from how many of us already thought about what we do at Scale. But it was really valuable for us to codify it because it gives everybody confidence: this is how we operate today, and even as the company changes over time, we’re not going to change this quality.

David: Well, this has been awesome. I want to close with an optimistic question and forecast. What is your sort of own view or definition of AGI, and what is your expected timeline to when we reach that?

Alex: Yeah, here’s a definition I like: AI can accomplish 80-plus percent of the jobs that people can do purely at computers, the digital-focused jobs. It’s not imminent, it’s not immediately on the horizon. So on the order of four-plus years. But you can see the glimmers, and the algorithmic innovation cycles that we talked about before could make that much sooner.

David: Very exciting. Well, Alex, thanks for being here. Great to chat with you, as always. I learned a ton. I really appreciate it.

Alex: Yeah, thanks for having me.