AI + a16z

How to Think About Foundation Models for Cybersecurity

Zane Lackey, Joel de la Garza, and Derrick Harris

Posted May 10, 2024

In this episode of the AI + a16z podcast, a16z General Partner Zane Lackey and a16z Partner Joel de la Garza sit down with Derrick Harris to discuss how generative AI — LLMs, in particular — and foundation models could effect profound change in cybersecurity. They explain why, after years of AI-washing by security vendors, the hype is legitimate this time: AI offers a real opportunity to help security teams cut through the noise and automate away the types of drudgery that lead to mistakes.

Here are some highlights:

[8:07] Zane Lackey: “Often when you’re running a security team, you’re not only drowning in noise, but you’re drowning in just the volume of things going on. And so I think a lot of security teams are excited about, ‘Can we utilize AI and LLMs to really take at least some of that off of our plate?’

“I think it’s still very much an open question of how far they go in helping us, but even taking some meaningful percentage off of our plate in terms of overall work is going to really help security teams overall.”

[15:06] Joel de la Garza: “As far as security foundation models go, that’s going to be interesting. . . . The first iteration of AI and ML didn’t work particularly well for security because, to a large extent, people don’t want to share security data so that [others] can train these models.

“If you’re a company and you have a lot of incidents, you have a lot of security data, [and] you would be a great place to train these models, [but] you’re very unlikely to share this with anyone. Because if you have 20,000 incidents a year, like a large org does, probably half of those would make a really juicy New York Times story. And so you tend to be very protective of this data that you don’t necessarily want to see out there.”

[24:55] Joel de la Garza: “I think the constraints around the infrastructure to run a lot of this stuff are painful, but I think that’s improving. . . . The other thing that’s happening is that you have the release of these open source models and you’re actually seeing the development of meaningful open source. And I just think that when you start to allow that to happen, you unlock a lot of innovation.

“. . . It’s the classic Julian Simon versus [Paul] Ehrlich debate, about innovation versus resource scarcity. And the bet is always that innovation will find a way around scarcity. So that’s the bet I’m happy to make. I think these open source models are going to really unlock a lot of innovation, and I think you’ll see people starting to innovate around some of the supply constraints.”

[32:00] Zane Lackey: “If you went and talked to CISOs, most would say they don’t misunderstand [generative AI]. It’s just, they’re trying to fully grasp how it is impacting their organization and how it’s impacting the entire industry. . . . And from the flip side, what attacks and threat vectors does it really change? Which ones does it [not] change that much yet? And really feeling like you’ve got a comprehensive understanding of that.

“Now, the tough bit is, if you’re a CISO, you’re still a full-time CISO every day. And this world is changing . . . every few weeks and months. So even if you were able to get up to speed three months ago, the world looks different now. And it’s going to look different three months from now.”

More About This Podcast

Artificial intelligence is changing everything from art to enterprise IT, and a16z is keeping a close eye on all of it. This podcast features discussions with leading AI engineers, founders, and experts, as well as our general partners, about where the technology and industry are heading.
