AI Revolution

How AI Is Changing Warfare with Brian Schimpf, CEO of Anduril

David George and Brian Schimpf

In our conversation series AI Revolution, we ask industry leaders how they’re harnessing the power of generative AI and steering their companies through the next platform shift. Find more content from our AI Revolution series on www.a16z.com/AIRevolution.

Anduril reimagined how startups can build software and hardware for the defense sector. Now, they’re using AI to reimagine modern warfare.

Speaking with a16z Growth General Partner David George, Anduril cofounder and CEO Brian Schimpf discusses how AI helps humans make better strategic decisions by sorting through the enormous amount of data collected from modern battlefields. Brian also discusses navigating the US government’s complex procurement processes, using commercial technologies to kickstart their own product development, and the growing opportunities for startups in defense. Throughout, Brian offers a deep dive into the intersection of technology, geopolitics, and the future of defense.

  • [00:00:19] Anduril and the rise of defense tech
  • [00:03:27] The state of AI in warfare
  • [00:06:02] The role of AI in decision-making
  • [00:09:36] The deterrence value of AI
  • [00:11:05] How automated can warfare become?
  • [00:14:51] How to sell to the Department of Defense
  • [00:18:25] Improving the procurement process
  • [00:21:23] The urgency for improving R&D cycles
  • [00:25:16] Driving down costs and speeding innovation
  • [00:30:26] Project Warp Speed for defense
  • [00:31:30] The perception and reality of defense startups

Anduril and the rise of defense tech

David: Let’s jump right in. What is Anduril? Tell us what you do.

Brian: All right. So, we were founded in 2017. We’re about seven years in. The basic idea was, we thought there was a better way to make defense technology. So, number one, the tech for the next 20 or 30 years was going to be, primarily, how do you just have more cheap, autonomous systems on the battlefield? Just more sensors, just more information flowing in? That seemed like it had to be true. So we invested in the core software platform we call Lattice, which enables us to make sense of all these things. We have built dozens of varieties of autonomous products that we’ve fielded over the last seven years. Just an outrageous pace. And we’re really working on all aspects of national security and defense.

David: And how did you guys get on to national defense as the place to spend your time?

Brian: So, I was at Palantir for about 10 years. I had been working on a variety of government programs. And then several of the co-founders—Trae Stephens was also at Palantir; Matt Grimm, our COO, was at Palantir; and we’re all really good friends. We’d been talking about doing this idea of a next-generation defense company. And then Trae and Palmer met through the VC world. Palmer was just getting out of Oculus, and he was like, “This is the same thing I want to do.” So we decided to kick this off together.

But for me, working in defense was—it was just obvious the degree to which there was a problem, right? It’s like, you work in this space. The tech is old. It is not moving fast. It is very lethargic. There are relatively few competitors at this point. It just felt very ripe to do something different. And it’s the sort of thing that once you get into it, the people who are actually serving just had this patriotic motivation to solve the problem. It’s just a very, very motivating problem to work on.

David: How did you land on the first product?

Brian: So, the first product we worked on was what we call Sentry. It’s for border security. And this was a Palmer idea. He believed that tech could actually solve this. So, we have these automated cameras with radars. We can monitor the border from miles away with these cameras. Palmer thought, “This is something we could solve super fast with technology.” And it really kind of hit what has ended up being a very good pattern for us: find an urgent problem, that actually has a real tech solution, that we can apply the cutting-edge technologies to.

Early on, say 2017, computer vision was just starting to work; there weren’t even really embedded GPUs yet. We were literally taking desktop GPUs, and liquid cooling them, to get these things to work in a hot desert under solar power. But we were able to go and get a prototype up in about three months, and then move into a pilot in about six months, and then full scale in about two and a half years. So, really, really quick timeline. But it fit this problem set of, we had a technical insight of how you could do this better, and there was urgency to solve the problem. They actually wanted to make a dent in this.

The state of AI in warfare

David: All right. I’m going to ask you a lot more about that stuff. But one of the things that people say to me all the time, and you hear it in speeches and all this stuff, is just “AI is going to change the nature of warfare.”

Brian: Yeah.

David: And it’s like, okay, on the one hand, the major breakthrough that we just had, the way everyone interacts with it, is a chatbot and an LLM.

Brian: It’s pretty cool.

David: It’s pretty…it’s amazing.

Brian: Yeah.

David: It’s awesome. I use it for everything. But what are the implications of this new wave of AI, generative AI, on modern warfare, physical sensors, and the software side?

Brian: So, when I think about where AI is going to drive the most value for warfare, it is dealing with the scale problem, which is really the amount of information—that is, the number of sensors and the sheer volume of systems that are going to be fielded. It’s all going to go through the roof.

David: So, this is Lattice. Maybe start there. Everything has a sensor. Everything’s communicating. Now what?

Brian: What do people do in the DOD? There’s a lot of things they do, but what’s the primary warfighting function? They are trying to find where the adversaries are.

David: Yes.

Brian: They need to then deploy effects against them. That can be a strike. That can be deterring them by a show of force. That can be jamming and non-kinetic things. And they’ve gotta then assess, did that actually work? Right?

David: Yeah.

Brian: You’ve got to find them, you’ve got to engage, and you’ve got to assess, right? It’s pretty straightforward. That is the primary thing that the military does. What do you need to do that? You need a ton of sensors. You need a ton of information on what is going on, with an adversary who’s constantly trying to hide from you and deceive you.

David: Yeah.

Brian: It’s just huge amounts of information, to make it as intractable as possible for them to be able to hide. Or when they are deceiving you, you can figure it out, right?

David: And this technology exists, right?

Brian: The sensors exist.

David: And the sensors are deployed, right?

Brian: Yep. And they’re going to get better, they’re going to get cheaper, and you’re going to be able to field more of them. But a lot of the limit on why we can’t do more is, what the hell are you going to do with the data?

David: Processing capabilities? Yeah.

Brian: Yeah. Like, processing, but also just operationally. Now, say I had a perfect AI system that could tell me where every ship, aircraft, and soldier was in the world. What are you going to do with that? So now I know everything. That is overwhelming, right?

David: Yeah.

Brian: Then being able to sift through that information to figure out, well, okay, they’re maneuvering here. What does that imply? Is this an aggressive action? Is it outside their norms? Is this different than we’ve seen in the past? Being able to pull out that signal from just this overwhelming amount of information that exists. And then on the other side, you have to act, right? So, now I’ve got to actually be able to carry out these missions.

David: Yeah.

The role of AI in decision-making

Brian: So, this is where the autonomy side really comes in. I want to send fighter pilots out, or—a Predator drone today is still flown by a guy with a joystick. It’s all manually piloted. But that doesn’t really scale, and that presents a lot of limitations on communications, jamming, all these things.

So, I want to be able to task a team of drones to go out and say, “Hey, go in this area, and find any ships, and tell me where they are.” I just want it to be that simple. And they just need to figure out their own route. And if I lose some of them, they rebalance. They just go out and handle it. They’re running target recognition. They can pop back whatever’s relevant.

That is where I think the autonomy side really comes in, which is I can just drive scale into the number of systems I can operate in the environment. The promise of AI, in a lot of ways, in the long run with this, is just the ability to scale the types of operations I’m doing, the amount of information I have. If done very well, it will put humans into a place of sort of better decision-making, right?

David: Yes.

Brian: Instead of being inundated by a volume of data, and then all of our capacity goes to these mechanical tasks, we can have humans with much better context, much better understanding, historical understanding of what this means, what the implications of different choices are. Those are all things that AI can enable, over time.

David: Ideally, better decision-making.

Brian: I think you can get wildly better decision-making.

David: Because we’re working with both limited information and imperfect judgment.

Brian: Yeah. And so the more you can have AI augmentation for these things, and synthesis, and clarity, that is where the promise of this is. The US posture on this is very much, “We want to have humans accountable for what happens in war.”

David: Yes.

Brian: That is how it should be.

David: Yeah.

Brian: The military commander that employs a weapon is accountable for the impact of those weapons.

David: Yeah.

Brian: That is correct. I think that is the system we should have. Nobody is talking about having full-blown AI deciding who lives and dies. That is a crazy version that nobody wants to have.

David: Well, I think it’s also far-fetched, in the sense that it presumes some sort of objective function that isn’t driven by us.

Brian: Exactly.

David: Like, this is my conversation with everybody when they come in, they’re like, “Oh, my god. What about when the AI goes Terminator on us?”

Brian: Why?

David: I’m like, it’s a tool for humans. It doesn’t have an objective function. That’s a leap that is not on the scientific roadmap today.

Brian: That’s right. What was its goal?

David: So why would that be the case in warfare?

Brian: That’s right. And so, I think the reality for these things is it’s going to be human augmentation. It is going to be enabling humans to operate at a much larger scale, with much higher precision on these things. That is the opportunity with it.

To me, it is unethical to not apply these technologies to these problems. Our view has always been, if we’re the best technologists on these problems, or we can get the best technologists to it, giving them the best tools for these absolutely critical decisions, which are extremely material, seems like probably a good thing. And engaging in the question of how you can use this technology responsibly and ethically is incredibly important.

David: Yeah. Is it more humane to have a fighter pilot, you know, in harm’s way, or to have an autonomous system piloting in a conflict? And by the way, I have friends who are fighter pilots. I love fighter pilots. But, you know, the technology has advanced significantly, and you could make the argument that it is more humane not to put them in the line of fire.

Brian: That’s right. Yeah. It’s like, we’re not going to want to put US troops at risk.

David: Yes.

The deterrence value of AI

Brian: And I think also the deterrence factor of the US saying, “I have this capability, and I’ve reduced my political cost of engaging on these things,” it’s actually a pretty good deterrent as well.

David: Yeah, exactly.

Brian: Like, I’m not putting US troops at risk. Or I can give this to allies, and they can defend themselves.

David: Yeah. And keep us out of the fight.

Brian: Keep the troops out of the fight, and it changes the calculus quite a bit. And so I think that actually, in a lot of ways, if done well, [autonomy] has a significant stabilizing impact, a deterrent impact, on… It just is harder to use force to get your political ends.

David: Yeah, exactly. Yeah. I keep coming back to deterrence, and we need to find a way to create a sense of urgency, for the sake of deterrence…

Brian: Yep.

David: …not for the sake of going to war.

Brian: That’s right.

David: And so, it feels like that’s universally known, and hopefully we can make some progress.

Brian: Yeah. I think people largely agree. Look, Vladimir Putin was very convincing on this.

David: Yes.

Brian: Turns out invading Ukraine was probably the single biggest shift I’ve seen in terms of people recognizing that there are still bad actors in the world. They will use force to get to their political will, if they think it will work.

David: Yeah.

Brian: If the cost is worth it, they’re going to do it. And I don’t think there’s any reason to believe that’s going to stop. It’s been true for tens of thousands of years.

How automated can warfare become?

David: Do you think the future of warfare… so, you said it will be AI as an augmentation for humans.

Brian: Mm-hmm. Yep.

David: How fully automated do you think a conflict can become, say, in the next 10 years?

Brian: Think about the mechanics of a scenario where there’s this airfield, and you want to go surveil it, and take it. You’re going to do some strike. You’re going to do some surveillance. You’re going to do all these things. There will be a large degree of automation in that. Right? Like, I could just say, “Hey, send this team of drones out in these waves, to go conduct this operation. Find things that pop up that are a threat. Pop up to the human to ask to engage or not.” It goes.

David: Yeah, yeah.

Brian: It can move at a much faster pace. You know, I think a lot of the things that were starting to happen in Ukraine, a lot of the great work Palantir did, was on things like this, where the targeting process of going from satellite imagery, through to “Hey, this looks like a tank,” through to an approval of whether this is a legitimate military target or not—that was streamlined and compressed and much, much faster. So, I think those things will happen very, very quickly. Like, very, very quickly. Then okay, now it turns into a matter of policy and degree and scope. That is a thing that I think we’re just going to have to figure out as we work through it with the military.

David: Yeah.

Brian: So then, what we think about from the technology side is, okay, I don’t want to design anything that precludes more advanced forms of this over time. We’ve got to architect it correctly. But at first the crawl phase is just to get a lot of the basics automated. Very mechanical things. Make it very predictable. Don’t have any surprises. And then you can add more sophistication.

As you build trust, and the AI advances, these things get more sophisticated over time. And one of the best examples is on the defensive side, which is kind of ripe for AI. So, we do a lot of work in counter-drone systems. This is one of the areas we’re partnering with OpenAI on. And it’s looking at this question of, if you have multiple drones flying at you, and you have minutes to respond before the strike happens on you…

David: How do you make an optimal decision…

Brian: How do you make an optimal decision? When you are panicked, you are nervous, and your life is at risk? It’s very hard.

David: Is that a person manually sitting there, making those decisions today, and…

Brian: Oh, yeah.

David: Yeah, okay.

Brian: Yeah. No, it’s often three people, because the radar is separate from the camera, which is separate from the guy pulling the trigger on the weapons systems. The coordination costs can be significant. So, you can automate a lot of this. And then the other problem with this is then, as we’ve seen in Ukraine, every single unit, every single soldier, is now at risk of drones, right?

David: Yes.

Brian: So, this has to proliferate out from being a specialty that you do in an operation center, now to every vehicle in the field. Everyone has to have this capability. You need the ability to have these systems just process all that sensor data automatically, fuse it together, tell you viable options for countering this, tell you what’s a threat and what’s not a threat. These are the types of things you need to be able to do, respond with intelligent suggestions, and then have the system just automatically carry it out from there. These are the types of problems we’re working on, where you need it, right? Like, there’s no choice, because the timelines are too short, and the urgency is too high. It’s a very straightforward area to understand where technology can really improve the problem.

David: Yeah, it’s the highest-stakes version of, you know, the decision-making that self-driving cars are doing today…

Brian: Yeah.

David: …you know, but with way more sensor information.

Brian: Yes. Yes. With an adversary who’s constantly trying to fool you, deceive you, and…yes. It’s very, very hard.

How to sell to the Department of Defense

David: So, that’s one of the big parts of the partnership with OpenAI?

Brian: Yeah, yeah. So, we’ve been… They’ve been great, and I think, you know, Sam especially has been very clear that he supports our warfighters, and he cares about having the best minds in AI working in national security. And who better to work through these hard problems?

David: Yeah.

Brian: And so, I was just incredibly proud of them for coming out in favor of this, and saying they’re going to work on this. They’re going to do it responsibly. They’re going to do it ethically. This is an important problem that the best people should be working on.

David: The defense industry is notoriously difficult for startups to navigate. So, how did you guys actually get traction in the first place? And do you think that’s going to change in the future? Do you think it will continue to be hard? Do you think the primes will continue to have a stranglehold?

Brian: It is very hard. You know, look, I think we built a lot of the right technology. I think we got the right business model of investing in things that we believe need to exist. I think we’re picking a lot of the right problems to go after, but probably more than anything, I think we understood the nature of what it took to sell, right? You know, the congressional relationships, the Pentagon relationships, the military relationships, all of this that you need to be able to say, “We have the right tech. You can trust us. We can scale. We can actually solve these problems for you.” Proving that it works, and then kind of catalyzing all of these really complex processes around it.

You know, I think the other part that we’ve done quite well is, we just find ways to reach those early adopters, and we kind of understand those playbooks. Who’s going to move quickly? How do you just build that momentum and advocacy in the government, to make this go? It’s more bureaucratic in certain ways. Is it much worse than selling to a bank or an oil and gas company? I don’t know. Maybe 30% worse, but probably not 5X worse. And so, I think the reality is, enterprise sales are actually very hard.

David: Yeah. Especially the ones with long sales cycles and massive commitments.

Brian: That’s right. These are large capital investments the customer’s making. That is a slow sales cycle. That is how it works. And so, I think there’s a lot of complaining and frustration. Well, also, being bad at business means you’re bad at business. If you don’t understand your customer you’re going to lose. That’s how it works. So do I think the government needs to be a better buyer of these things? Do I think they need to take better strategies that’ll get them more of what they want? Absolutely. They’re taking observably bad strategies to get to the outcome they actually want. Do I think it’s necessary to change for us to be successful? Not really. Like, we’re just going to play the game that they present.

David: What are the observably bad strategies? And then what are the good ones? And maybe also wrap it into this idea that how do you actually convince the government that your ideas are the right ideas? So, should you go spend money building a whole new generation of F-35s, you know, with manned pilots, or a whole new generation of aircraft carriers, or should you do something different? And how do you actually get your points across to them?

Improving the procurement process

Brian: So, okay. So, there’s the question of how do they contract and buy, and what’s been going wrong there? And then it’s, even if you could buy perfectly well, what should you be buying?

David: Two different questions. Okay.

Brian: Yeah, yeah. And so, how are they buying poorly? So, the typical government contracts are done in what’s called cost plus fixed fee. And this actually came out of World War II, when we were retooling industry to work on national security problems. It was just, “We’re going to cover all your costs, and we’ll give you a fixed profit percentage on top.” And so the incentives here are sort of obvious. If it’s more expensive, you get more profit. If it is less reliable, you get more profit. Right?

David: The longer it takes, the more profit coming in, yeah.

Brian: The longer it takes, the less reliable it is, the more complicated it is. There’s no incentive in there to actually drive down costs. And you see this play out, right? It’s like, you look at the companies that have gotten so used to this. You look at even something like Starliner, where, you know, I think SpaceX had a third the amount of money that Boeing was given to make Starliner work. And SpaceX did it on time, probably faster than they even predicted. They did it probably incredibly profitably, and it worked. And so I think these incentives that don’t hold you accountable are actually bad for your company. Like, it just makes you a worse company.

David: But do people in the government realize that it’s bad for the country?

Brian: I think they are frustrated. I think they understand that this is not really working. So, you look at F-35, as an example of one of these programs. It took 25 years to get it from initial concept to fielding. There’s this awesome chart, which shows how long it takes to get commercial aircraft or autos from kickoff to fielding. And it’s been flat to slightly better for all of those things, on the order of two to three years. The military aircraft side just went linearly straight up. Like, these things are taking longer and longer and longer. There’s an amazing quote from the ’90s, where this guy said that, if you extrapolate this out, by 2046, the US government will be able to afford one airplane that the Air Force and Navy share, with the Marine Corps getting it every other day of the week.

David: It better be a good airplane. Geez.

Brian: Yeah. These things are just crazy. And so I think they recognize that this is not working, right? It’s like, this is broken. Now, the other part of this is they haven’t had a lot of alternatives. So, you have a relatively small cartel of these companies, who sort of all say “We won’t do fixed-price programs anymore.” So, they won’t do things on a fixed cost basis.

David: Yeah, yeah. Sure.

Brian: So okay, if you’re the government, and you’re a buyer, what are you going to do?

David: Yeah, of course.

Brian: You don’t have a lot of choice here. There’s been a lot of problems with trying to get this model right. Now, in terms of things that can work a lot better, you know, I think SpaceX really proved this, where they literally built a reusable rocket that you catch with chopsticks, commercially. Like, I think we can solve these things, guys.

David: I think we can build an airplane.

Brian: I think we can build it. And so there’s not really a question that this is the only part of some magical thing that can only be done by certain people.

David: Well, and you guys with autonomous fighter jets. I mean, you know…

The urgency for improving R&D cycles

Brian: Exactly. Yeah. Like, it’s proven now that that can be built. And so I think the alternatives are there now. And then, in terms of models that can work a lot better: you know, one of my crazier ideas is weapons. A new missile takes about 12 years to go from concept through to fielding. 12 years.

David: That’s insane.

Brian: It’s insane. And so, okay, if you’re in that world…

David: Wait, and how fast is the technology evolving? Think about what we’ll be able to do technologically 12 years from now.

Brian: Right. Exactly.

David: And then we’ll still be on the previous system.

Brian: No, there are even crazier examples, like the Columbia-class nuclear submarine, which is going into service in 2035, and its expected lifetime is through 2085. So, how good were we in 1960 at guessing where we’d be today? It’s kind of unclear. It’s kind of unclear.

David: But we had technology to go to the Moon. It was pretty good.

Brian: Yeah. Exactly. Like, actually, it was pretty good. And so, these timelines just get longer and longer, and it’s just this death spiral of these things.

David: Contrast that, you know, cycle of development with China. Like, do they take 12 years? And maybe how does their tech stack up to ours?

Brian: The single best stat for this is they are running hundreds of tests a year of hypersonic weapons. The US is running four.

David: Right.

Brian: Anyone who’s worked in technology understands the compounding value of iterating on these things. It is just so undervalued. And so, why is that the case? Look, in the US, all these tests are very expensive and very complicated. There’s so much buildup because every test has to go well, because we do relatively few tests. That increases the risk and the duration that you prep for these tests, and increases your cost.

David: Yeah. So the cycle times are long, yeah.

Brian: Yeah, and you’re just in this vicious negative cycle. Like, anyone who’s worked in software understands this from the old-school way of releasing software: if you did a yearly release, you tried to shove everything you could into it.

David: Yeah. Good luck. Yeah.

Brian: The risk goes through the roof. Quality is a disaster. Going fast has an insane quality of its own, in just how quickly you can learn, and how much you can actually reduce costs on these things. And so, they’re just much more willing to test and iterate in a way that the US is not right now. And so, I think that is, long-term, the biggest thing I worry about for the US: just pace. The pace of iteration on these things. It probably is the single biggest determining factor of how successful you’re going to be over a 20- to 30-year period.

David: How do we create a sense of urgency?

Brian: Look, you look at that retooling. We had a two-year period of lend-lease, and the amount of GDP that was spent on lend-lease at the time was through the roof. And we weren’t at war then. Right?

David: Yeah.

Brian: We were supplying other people. So, we had a two-year head start to recondition US industry around this, before we even entered into a conflict. That’s about how long it took. Empirically, Russia, you know, took about the same duration. About two years to retool their industry around defense production. They are now out-producing all of NATO on munitions.

David: Russia.

Brian: Russia.

David: Yeah.

Brian: And we’ve sanctioned them to hell, and they’re still doing it, right?

David: Well, they still have gas.

Brian: They still have plenty of gas. So, it’s quite tricky to think you’re going to reconstitute this in a single day. I think the Department has a lot of urgency on it. You know, one of the areas where we see it showing up is with weapons. So, when you look at these wargaming sort of scenarios, and, you know, all these war games are sort of questionable in their own ways, but, pretty consistently, the stockpile of key US munitions is exhausted in about eight days. It’s hugely problematic.

David: Yeah.

Brian: And that is because we have gone down this path of thinking that we’ll be able to have this Gulf War strategy of concluding a conflict in two or three days, and that’s how we’re going to fight our wars. And it’s just not true, right?

David: Well, for any adversary that matters…

Brian: That’s right.

David: …it’s not even close.

Brian: We’ve gotta be prepared to sustain these protracted conflicts, and that in and of itself is probably one of the best deterrent factors we could have.

David: Exactly.

Driving down costs and speeding innovation

Brian: It’s like, we will not stop. We will not back down. We will have the capacity to withstand anything, right? That is a message we need to send to our adversaries worldwide. We have critical gaps on a lot of the constituent parts of the supply chain. This is a national security issue. So, I think there is a feeling that this is a problem. I don’t think anyone thinks everything’s going great.

Now the question is, what are the strategies to find a way out? I don’t think there’s any debate that we are on our back foot, in terms of the capacity we need, the mass we need, the types of systems we need. Now how do you get out of it? That’s a much harder question. And do it in a way that is going to work with Congress, is affordable, is actually something we can sustain? The path we’re on is probably more incremental than revolutionary, I would say, with the US government, where companies like us are going to come in and win incremental new programs, and show that different models work.

David: Yeah, so we’ll be more innovative.

Brian: We’ll be more innovative. I think that flywheel’s really starting to go.

David: We still have a volume issue.

Brian: But it is a major volume issue, and I think on the weapons production side, look, the only solve out of this is to actually tap into the commercial and industrial supply chains that exist. Like, we’re pretty good at building cars. We’re pretty good at building electronics.

David: Yeah.

Brian: And you could design your systems in a way that takes advantage of those commercial supply chains. One example we have is, we made a low-cost cruise missile.

David: Yeah.

Brian: It’s very cool. You know, several hundred-mile range. And we made the exterior fuselage in this process that you use for making acrylic bathtubs. This is a hot press process. It just works great.

David: This is a mad scientist thing. It’s great.

Brian: It’s incredible.

David: It’s awesome.

Brian: The fuel tank is made with rotomolding, the same process you use for making plastic toys. It works great.

David: Yeah.

Brian: There’s a huge supply base that’s available to do these things. Contrast it with most of these traditional weapons, where it’s overly bespoke components. You’re like, we gotta get the dude that knows how to solder this one thing out of retirement. And the supply chains are super deep. Like, four-year lead times on these weapons. It’s really, really bad once you get into it. I saw this thing where the defense primes were like, “We need to change the federal acquisition rules so that we can stockpile four-year lead time parts.” You’re like, “A four-year lead time part? What are we even doing?” Like, what are we doing here?

David: The world has changed in four years.

Brian: Yeah. Like, what is happening? And so I think there’s a problem. But then the government doesn’t help: they don’t allow them to change the components, and there’s no incentive to change the components.

David: Well, so this is the problem. There’s no urgency. It goes back to there’s no forcing function.

Brian: Exactly. And so look, I think a lot of the traditional players are patriots and they really care, but they’re in a system that doesn’t encourage them or support them. I kind of boil it down to two key things. One is meaningful redirection of resources. So right now, the amount of money that’s actually spent on capabilities, like the types of things we’re working on, is somewhere like between 0.1% and 0.2% of the defense budget. That seems pretty low. Even if we got to 2%. Two percent…

David: Yeah.

Brian: …we are in a wildly different world, in terms of what you can do with that type of money.

David: You’re making a VC-sounding pitch.

Brian: Yeah. If I could even get…look at how big the market is.

David: You know? The TAM is this. If I could just get to 2%…

Brian: If I could just get 1%.

David: But that’s actually very helpful context, all kidding aside.

Brian: Yes.

David: This is a crazy small number.

Brian: It’s a crazy small number, and even the small numbers are pretty big, but you really need to up this. So number one is, make the hard choices, to drive redirection of resources into the technologies that are actually going to be what you need, right? Where they’re so stuck with these legacy costs. Number two is every company in the world gets this, which is, you need to empower good people to run hard at a problem, and put all the things that they need to do it, and all the approvals and all of that, under their command, to just get to yes.

David: Yes. Yes.

Brian: It’s very simple, right? This is how every company operates. And that is how you are successful. Just empower good leaders to get results.

David: Yeah.

Brian: And hold them accountable. It is the opposite of how it works in the Pentagon, where every time something has gone wrong, a new process and a new office has been added to check the homework and say “no”, and they stall progress out. I think there’s relatively simple things that can be done, with some combination of congressional action and executive action, to flip that on its head, say, “Nope. These program offices are fully empowered to field their capabilities, and they will, and they are just accountable to senior leaders on the risk and tradeoffs.”

David: Yeah.

Brian: And that’s it.

David: And you give them a budget.

Brian: Give them a budget, give them a target, and they have to understand the risk. They have to do all this, but they’re going to make informed choices on risk and cost and schedule and performance tradeoffs.

David: Yeah. Yeah.

Brian: Like, that’s their job. That’s what we’re hiring them to do. And if we create really empowered people, to actually field stuff, you will get amazing results, because there are really good people in the government. It’s just that there are 10 times as many people who say no as there are people who are accountable for delivering.

David: Oh, that’s fascinating. Ten times more people who hang around and say no than say yes.

Brian: That’s right.

Project Warp Speed for defense

David: Could you do just a Project Warp Speed for defense? I know that implies something short-term, like it’s a one-time catch-up or something.

Brian: Yeah, this is…

David: But it probably needs to be just a permanent shift?

Brian: I think you have to do both, right? So, you’ve got to say look, we need a Warp Speed for autonomous systems, or weapons. We need that, right? That’s a no-brainer, that we need to have. And in doing that, you can tease out what are those things that you cut and everything worked out fine? And you just didn’t need to do it again.

And then, in parallel, you do the painful and slow process of just schwacking back all these bureaucratic things that exist. I think you gotta do something right, and use that as a template. And so, these things that prove you can be successful, do more of them. Go at bigger scale. While also cutting back all the nonsense on things that just don’t need to exist anymore. They made sense at the time. Now let’s revert, walk back, and reset where we actually need to be for where we are. Tech has changed. The pace has changed. Reflect that in your process.

The perception and reality of defense startups

David: So, even before the stuff we were just talking about, when you guys started the company in 2017, starting a company in defense was extremely unpopular.

Brian: Yeah.

David: When you talk about what you need to succeed as a startup, there’s so many things, but capital, talent, relationships with customers. All of those things were way, way harder in defense in 2017—and in fact radioactive for some.

Brian: Oh, a hundred percent.

David: You know, so, a lot of the engineers and things, there’s like, a, you know…

Brian: Yeah.

David: …you know, religiously opposed. Now it seems that there’s this whole new burgeoning interest in defense startups, and, you know, we have an American Dynamism Fund, and lots of people are interested. How did that happen? Because it seemed to happen a little bit before Ukraine, too?

Brian: Started to shift just before Ukraine.

David: Yeah. What was the cause of that?

Brian: When we started, the number of VCs who gave us ethics interviews, or just said no, or… Look, my crass take is that Silicon Valley is quite memetic. The VC world as well. And once the mainline funds, like you guys, Founders Fund, General Catalyst, all came out and said we’re doing this, and our valuation was high enough…

David: Then they got it.

Brian: …chase, chase, chase.

David: Yeah, then they got it. Yeah, yeah, yeah.

Brian: So, I think that was step one, it was sort of normalized.

David: Yeah.

Brian: The mainstream VC funds were saying, “No, we’re doing this. This is important.” I know Marc put out a post on it at the time. And so I think that was kind of this, the snowball then, of, okay, this is succeeding. It’s actually okay. Everyone’s been told it’s okay. And then there was this catalyzing event around Ukraine. And then I think, on why there are now so many defense tech startups, this stuff is very important work. It’s also, as an engineer, just some of the hardest and most interesting problems you’re going to work on.

David: Yeah.

Brian: So I think a lot of engineers grew up looking at Skunk Works, and seeing the SR-71 Blackbird. All these wild things that the US was able to pull off. That was your inspiration, growing up as an engineer.

David: Yeah.

Brian: This stuff is iconic. People want to work on these things. And so I think it just really mobilized people who really cared about this. And then you have a ton of vets who are leaving the military, and just want to solve problems that they encountered.

David: Yeah.

Brian: You have a ton of interest in working on it, now a ton of capital, because they’ve seen our success. They know it can be done. And then just the social normalization of the whole thing…

David: Yeah. Yeah.

Brian: …really flipped. Flipped the narrative.

David: Yeah. And I would say, the evolution of the sort of primitives for technology has actually advanced the opportunity big-time, right?

Brian: Oh, a hundred percent.

David: So a lot of the dollars that would go to something like an aircraft carrier, which is untouchable for a startup…

Brian: Yes, yes, yes. Yes.

David: …should go to a smaller form factor, you know, attritable, fully autonomous equipment.

Brian: Yes. Yeah. You’re 100% right. And this has been a big part of our strategy on this: we are leaning into everywhere where there’s commercial investment, and so many of the things that historically have been defense exclusive are no longer the case.

David: Totally.

Brian: One of the examples of this is we built this electronic warfare system. It’s really cool. It’s a jammer. It senses and jams radio signals. If we did that 5 years ago, 10 years ago, you would have had to do custom tape-out chips. That’s hundreds of millions in investment.

David: Yeah, of course.

Brian: It’s a huge thing. So, only government-funded programs did it. It was on a really slow cycle. Well, now, with all the 5G tech, the performance of these things is through the roof.

David: Yeah, exactly.

Brian: You just take commercial parts.

David: Yeah.

Brian: And then just being the fastest to integrate and understand how to utilize these technologies becomes the advantage. Same with AI. We don’t do AI model research.

David: Yep.

Brian: We don’t need to.

David: Yeah. Just plug into the best models.

Brian: We just take the best things that are there, and so riding these tech waves has been a huge part of it. And that is the macro shift that occurred, which the Department hasn’t reconciled yet: the innovation is increasingly coming from the commercial world, so it becomes a matter of being the best adopter. It is no longer these 10-year tech roadmaps the Department controls.

David: Yes, exactly.

Brian: It is a totally different world we’re living in. And so, yeah, I think the macro piece of why a company like us can exist: major technology shifts around where the innovation’s coming from, huge geopolitical shifts…

David: Yes.

Brian: …and then the consolidation of the existing industrial base with the bad incentives has led to an erosion of capacity. And so you kind of combine all these things together, and the conditions were sort of set for us to be successful on this.

David: Yes. Yes.

Brian: I don’t think we could have done it five years later; that would be too late. Five years earlier probably would have been too early.

David: It wouldn’t have worked. Yeah.

Brian: I think we were in this two to three-year window, where we could ride all those waves correctly.

David: Yeah. Brian, it’s so fun to be with you. Thanks a ton for spending the time.

Brian: Yeah. Thank you.

David: Thank you for what you’re building, as your investor, but more importantly, for all of America.

Brian: Thank you.