In this episode of the AI + a16z podcast, Socket founder and CEO Feross Aboukhadijeh and a16z partner Joel de la Garza discuss the open-source software supply chain. Feross and Joel share their thoughts on topics ranging from the recent XZ Utils attack to how large language models can help overcome understaffed security teams and overwhelmed developers.
Despite some increasingly sophisticated attacks making headlines and compromising countless systems, they’re optimistic that LLMs, in particular, could be a turning point for security blue teams. Here are some highlights:
[14:25] Joel de la Garza: “The whole new microservices and decomposed development model has been awesome at making sure that whatever method developers are using is the correct method. Because the problem before that was that people would roll their own methods of implementing things. . . . This new methodology actually means that people, when they implement these things, they’re probably using an approved cryptographic method. It’s probably implemented mostly correctly. . . .
“I think it’s made a lot of things better, but it has just created a new attack surface, and it’s created a new set of issues around actually managing the things that are going in there and making sure that you’re validating the correctness of these things.”
[19:03] Feross Aboukhadijeh: “The way we think about gen AI on the defensive side is that it’s not as good as a human looking at the code, but it’s something. . . . Our challenge is that we want to scan all the open source code that exists out there. That is not something you can pay humans to do. That is not scalable at all. But, with the right techniques, with the right pre-filtering stages, you can actually put a lot of that stuff through LLMs and out the other side will pop a list of risky packages.
“And then that’s a much smaller number that you can have humans take a look at. And so we’re using it as a tool . . . to find the needle in the haystack, what is worth looking at. It’s not perfect, but it can help cut down on the noise and it can even make this problem tractable, which previously wasn’t even tractable.”
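The triage flow Feross describes — cheap pre-filters first, an LLM pass second, humans only on the short list — can be sketched roughly as below. This is a hedged illustration, not Socket's actual pipeline: the function names (`cheap_heuristics`, `llm_risk_score`, `triage`) and the toy scoring are all hypothetical.

```python
# Illustrative sketch of a staged package-triage pipeline.
# All names and thresholds here are hypothetical, not Socket's implementation.

def cheap_heuristics(package: dict) -> bool:
    """Stage 1: only escalate packages showing cheap-to-detect risk signals."""
    code = package["code"]
    signals = ("eval(", "child_process", "process.env", "http.request")
    return any(s in code for s in signals)

def llm_risk_score(package: dict) -> float:
    """Stage 2: stand-in for an LLM call that rates maliciousness 0.0-1.0.
    A real system would send the code to a model with a rubric prompt;
    this toy version just counts suspicious substrings."""
    code = package["code"]
    signals = ("eval(", "process.env", "http.request", "base64")
    return min(1.0, 0.3 * sum(s in code for s in signals))

def triage(packages: list[dict], threshold: float = 0.5) -> list[str]:
    """Return the short list of package names worth human review."""
    flagged = []
    for pkg in packages:
        if not cheap_heuristics(pkg):         # cheap stage runs on everything
            continue
        if llm_risk_score(pkg) >= threshold:  # expensive stage on the remainder
            flagged.append(pkg["name"])
    return flagged
```

The point of the staging is economic: the cheap filter runs over the whole ecosystem, the expensive LLM call runs only on what survives it, and humans see only the final short list.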
[29:23] Feross Aboukhadijeh: “What we see . . . every day is that the bar is so low. It’s not like you’re dealing with a two-year, state-backed kind of an attack. You’re dealing with somebody, they added five lines of code to the bottom of one of the files of the open source project that you’re using, and it just steals your environment variables and sends them off to the attacker. And it’s right there. And if anyone had looked, they would have seen it. It was right there in the file and literally no one looked. And it’s not just that your company didn’t look. It’s that no one in any company looked. That’s the kind of thing that we see hundreds of per week coming through the feeds. . . .
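The backdoor pattern Feross describes — a few appended lines that read environment variables and post them to a remote host — is crude enough that even a simple scan can surface it. A minimal detection sketch (the regexes and function name are illustrative assumptions, not Socket's scanner, which would use far more robust analysis):

```python
import re

# Toy detector for the attack pattern described above: code that both
# reads environment variables and makes an outbound network call.
# Real scanners use ASTs and data-flow analysis; substring matching
# here is purely illustrative.
ENV_READ = re.compile(r"process\.env|os\.environ")
NET_SEND = re.compile(r"https?://|fetch\(|http\.request|requests\.post")

def looks_like_exfiltration(source: str) -> bool:
    """Flag files that combine an env-var read with a network send."""
    return bool(ENV_READ.search(source)) and bool(NET_SEND.search(source))
```

A legitimate file that only reads `process.env.PORT` would not trip this check; it fires only when the env-var read co-occurs with an outbound call, which is the combination that makes the five-line backdoor dangerous.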
“And people have this mistaken assumption that, ‘Oh, it’s open source, so it’s safe.’ ‘Oh, I didn’t write that code; I wrote the app code and I just used this dependency, so it’s not my problem.’ But it is your problem. At the end of the day, it’s going to run in the same process as the rest of your app and it’s going to ship into products and it’s going to affect all your users. So, it is your problem.”
[36:05] Joel de la Garza: “I do think that a lot of the attackers, they do have cost constraints and they do have resource constraints that a lot of the blue teams don’t have. And, generally, the adage has always been that the red team always wins. But I do think that with this generative AI wave, and if we do believe that we can do meaningful, agentic-type products that will at least be [at] the level of an intern, perhaps even a level-one analyst . . . I think that if you can deploy 10,000 of those and give them an infinite amount of time, things will get better. I do actually see a path here for things to get markedly better, even though the adversaries [also] have access to these tools.”
Artificial intelligence is changing everything from art to enterprise IT, and a16z is watching all of it with a close eye. This podcast features discussions with leading AI engineers, founders, and experts, as well as our general partners, about where the technology and industry are heading.